\section{Introduction}\label{intro} In all types of projects, the planning phase includes some kind of effort forecasting. Since the 1940s, researchers have investigated the use of expert opinion to obtain estimates that are as accurate as possible \citep{delphi}. Due to the explosion of software-related projects in recent decades, many aspects of software cost estimation have been studied \citep{halkjelsvik2012from}. Many of these studies have empirically investigated the impact of irrelevant information (i.e.\ information that is not needed for the estimations) on software effort estimates. In \citet{jorgen2}, the results show that pre-planning effort estimates may have an impact on the detailed planning effort estimates, despite subjects being told that the early estimates are not based on historical data. Furthermore, \citet{jorgen3} report that irrelevant information about the customer's expectations affects the cost estimates, despite the subjects being told that customer expectation is not an indicator of the actual effort\footnote{We use the terms ``effort,'' ``cost,'' and ``time'' interchangeably when discussing estimation in this paper because the main driver of cost in software development is typically effort, which takes time from employees that is paid for by the organizations.}. In addition, the results in \citet{jorgensen} indicated that the length of the requirements specification had an impact, however small, on the effort estimates. Finally, in a study by \citet{aranda}, the results show that information that is clearly marked as irrelevant (i.e.\ not to be taken into account) in a requirements specification has a large impact on software cost estimates. The results in \citet{aranda} could not be explained by the subjects' experience with cost estimation. 
\citet{aranda} explicitly tested the cognitive bias of \emph{anchoring} and concluded that an estimate from a clearly-stated non-expert still influenced the judgement of the participants. In general, the above-mentioned studies have shown that introducing irrelevant information may lead to an increased estimation error, but with small sample sizes of around 20 participants in each study, which implies low statistical power. One aspect that has not been studied, except for an initial study \citep{gren2017possible}, is the effect of obsolete requirements, i.e.\ requirements that are somehow marked as not to be included in the estimations but still visible to the assessors when estimating. The reason why this aspect should be studied further is that software development practice often deals with requirements that should not be implemented now by marking them as ``obsolete'' or the like \citep{wnuk2013obsolete}, which is a special type of irrelevant information. In our experience, most companies have too many requirements, and it is not possible to implement all of them in the coming product\slash project\slash release, or in the next sprint for companies using an agile software development process. It is, of course, then important to make accurate estimates of the ones that actually are to be implemented. If current approaches misguide the effort estimation, the practices must of course change, or at least be informed by the impact of showing obsolete requirements to estimators. \subsection{Previous research and motivation}\label{sec:ade} The first study conducted on the topic of obsolete requirements was published in \citet{gren2017possible}. To clarify the experimental setup (more details are available in \citet{gren2017possible}), the authors distributed three different tasks to three groups of students in the same class. The first group (Group A) was to estimate, in weeks, how long it would take to implement four requirements. 
Group B was given the same four requirements plus one extra (a total of five requirements). Group C was given the same five requirements as Group B but was instructed to leave the last requirement out of the estimation. The study was conducted with 150 university students and showed that adding obsolete requirements to the requirements specification strongly biased the students toward higher estimates for the existing requirements (i.e.\ they provided much lower estimates when no requirements marked as obsolete appeared next to them). Before the experiment started, a pre-questionnaire was given to the students to collect the students' experiences and knowledge in relation to the English language, experience from software development in industry, and experience in effort estimation. The study was conducted during one lecture in a mandatory course. In total, the experiment lasted for one hour, including introduction, explanations, pre-questionnaire, and completing the tasks. The actual time spent on the tasks, including reading the instructions and performing the estimation, was 10--20 minutes. The task groups, i.e.\ A, B, and C, were not overlapping. That is, the 150 students were divided into three groups for the three different estimation tasks: 50 students performed Task A (were in Group A), 50 students performed Task B (Group B), and 50 performed Task C (Group C). A more detailed description of the experiment, the experimental material, subjects, and setup is available in \citet{gren2017possible}. Intuitively, if the same specification is used but some additional requirements are marked as obsolete in one group, these estimates should be similar, and preferably be made as if those requirements were not there, since they were explicitly marked as obsolete. However, the results showed that the estimates instead increased heavily. 
The authors tried to explain the effect by suggesting that two different cognitive biases could play a role, namely the representativeness heuristic \citep{representativeness} or the decoy effect \citep{zang}. However, neither of these explanations helped in quantifying the effect of obsolete requirements, which is why we decided to investigate in more detail how the found estimation bias functions by conducting further experiments. \subsection{Research goal and research question} The aim of this current family of experiments is to further investigate the effects of obsolete requirements in software effort estimation through a set of six experiments. The experiments comprise student participants estimating real requirements individually (Experiments 1, 2, and 3), industry practitioners doing the same (Experiment 4), industry practitioners estimating their own requirements (Experiment 5), and the same industry practitioners as in Experiment 5 estimating their own requirements in teams over time in sprints (Experiment 6). Therefore, the overall research question we looked at from different angles is: \emph{RQ: Do obsolete requirements, explicitly stated or marked to be excluded from the effort estimation, have an impact on the size of the total estimates? And if so, how much?} \subsection{Contribution} This paper contributes a family of six experiments showing the effect of obsolete requirements in different contexts and with different requirements specifications; the effect was large across all experiments. Moreover, this paper shows how Bayesian Data Analysis (BDA) can be used to statistically analyze studies without the use of statistical significance. By using BDA, this paper enables replications with very small sample sizes, since new experiments can use what has already been learned about the parameters in this study. The remainder of the paper is organized as follows. 
In the next section (Section~\ref{bayes}), we provide a brief introduction to Bayesian Data Analysis (BDA). Section~\ref{family} presents an overview of the six experiments conducted in this current study and if\slash how we changed the experimental setup after each experiment. In Section~\ref{sec:results} we show the output from each experiment. In Section~\ref{sec:disc} we discuss the findings from all the experiments, in Section~\ref{sec:limits} we discuss threats to validity, and in Section~\ref{sec:concl} we conclude the paper and suggest future work. \section{Bayesian Data Analysis}\label{bayes} We have lately followed the development in statistics with great interest (e.g.\ \citet{munafo2017manifesto}), but a great summary that inspired the data analysis used in this study is the recent publication by \citet{mcshane2019abandon}, where they argue for researchers to abandon statistical significance completely. Their remedy is the use of what can be denoted a ``fully'' Bayesian Data Analysis (BDA) with no threshold values but an open and honest presentation of prior beliefs, data, and all the analyses conducted. In 2019, a first paper was published in software engineering critiquing current statistical practice and suggesting BDA as a potential solution \citep{furia2019bayesian}. Any statistical investigation has data from a random variable from a probability distribution $P$. In most software engineering research, this distribution is often assumed to be normal (i.e.\ Gaussian), and if not, assumed not to exist, and instead researchers use statistical tests based on ranks \citep{de2019evolution}. However, this is a pity, since there are many probability distributions that could create much better models for the collected data (e.g.\ Binomial, Beta, Poisson, Log-normal, etc.). All these distributions can be described by parameters, $\theta$s. 
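To make the choice of distribution concrete, the following sketch (in Python with NumPy; the parameter values are purely illustrative and not taken from our data) simulates right-skewed effort data and shows why a normal model can be misleading for such data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical effort data in person-hours: effort is positive and
# right-skewed, which a log-normal distribution captures but a
# normal (Gaussian) distribution does not.
effort = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)

# Under a normal model, mean and median coincide; here the long
# right tail pulls the mean well above the median.
median = np.median(effort)  # analytically exp(3.0), about 20
mean = effort.mean()        # analytically exp(3.0 + 0.8**2 / 2), about 28
```

A normal likelihood centered on such data would also place probability mass on negative effort values, which is one reason distributions such as the log-normal often model effort data better.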
When researchers have conducted a study, some data $D$ is collected, but assumptions need to be made or, preferably, the best-fitting distribution for the data should be sought. It is important to stress here that any statistical inference eventually makes use of Bayes' theorem \citep{McElreath2016sra}, but a Bayesian Data Analysis approach uses this theorem more generally and in connection to parameters and models. Bayes' theorem yields: \begin{equation} P(\theta \mid D) = \frac{P(D \mid \theta) \times P(\theta)}{P(D)} \end{equation} where $P(\theta \mid D)$ is the probability of the parameter $\theta$ given the data. This is called the \textit{posterior distribution} and is what should be obtained in the end for all the parameters of interest. Once the posterior is obtained, it is possible to analyze it from different perspectives and make inferences. $P(D \mid \theta)$ is the \textit{likelihood} that the data actually came from the assumed parameter. It is important to try different likelihoods, i.e.\ statistical models including different statistical distributions for each parameter, and compare how these different scenarios affect the posterior. $P(\theta)$ is the \textit{prior} information about the parameter, which is not connected to the obtained data. $P(D)$ is simply a standardizing constant, expressed as the average likelihood. It is rarely possible to calculate the posterior distribution exactly, which is why we instead sample from the posterior using Markov chain Monte Carlo (MCMC) simulation. This is one of the reasons BDA was less used before modern computers with enough computing power for such sampling methods \citep{McElreath2016sra}. As mentioned, BDA is not merely about Bayes' theorem, but about quantifying uncertainty much more than the frequentist approach. We can try different likelihoods, use the prior information about parameters, and integrate all of these into a model that includes all the uncertainty for all the parameters. 
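For intuition, when there is only a single parameter the posterior can be computed directly on a grid. The following sketch (Python with NumPy; the data are a hypothetical toy example, not from this study) applies Bayes' theorem to a binomial proportion $\theta$ with a flat prior:

```python
import numpy as np

# Grid approximation of P(theta | D) for a binomial proportion.
# Toy data (hypothetical): k = 6 "successes" in n = 9 trials.
k, n = 6, 9
grid = np.linspace(0, 1, 1001)               # candidate values of theta
prior = np.ones_like(grid)                   # flat (weakly informative) prior P(theta)
likelihood = grid**k * (1 - grid)**(n - k)   # P(D | theta), up to a constant
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()  # normalizing plays the role of P(D)

# Posterior mean of theta; with a flat prior the exact posterior is
# Beta(k + 1, n - k + 1), whose mean is (k + 1) / (n + 2).
posterior_mean = (grid * posterior).sum()
```

With more parameters the grid grows exponentially, which is why MCMC sampling replaces grid approximation in practice.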
A controversy in BDA is the choice of priors, since they can affect the results to a very large extent. Therefore, one uses weakly informative priors if no prior information exists, and then uses the posterior from earlier studies in the future. What should also be done, since practical significance is the ultimate goal in research, is to use experts to provide this prior information \citep{stefan2019practical}. For a short background of BDA and why it is useful for software engineering research, we refer to \citet{furia2019bayesian}. For an example of a good text from another research field, see \citet{van2014gentle}. We would recommend readers interested in learning BDA to first read the book by \citet{McElreath2016sra} and try out the R package Rethinking\footnote{https://github.com/rmcelreath/rethinking}, and then go from defining models in Rethinking to brms \citep{burkner2017brms}, which is faster and makes more advanced analyses simpler, but is less pedagogical. Both packages build on R\footnote{https://www.r-project.org/} and Stan\footnote{https://mc-stan.org/}. Other researchers lead the development of BDA, and we will only apply it in this paper. We followed the steps below, which can be read about in much more detail in \citet{wilson2019ten} (some of which can be followed in the Supplementary Material): \begin{enumerate} \item Always plot the raw data to get an initial idea of what the distributions might be for what we have collected. \item Create an initial statistical model and check how it behaves without looking at the data (i.e.\ a sensitivity analysis). \item Create different models, obtain posterior distributions for all of them (i.e.\ the models in light of the data), and validate them against each other. \item Check how the chains behave in the Markov chain Monte Carlo simulations used to find the posteriors. \item Plot and look at the real distributions of the posteriors to assess the results. 
\item Calculate a Bayesian $R^2$ statistic \citep{gelman2019r} to assess the variance explained by the model, but by using the posterior. \end{enumerate} \section{A Family of Experiments}\label{family} In order to investigate the estimation bias, we conducted six experiments (in addition to the experiment conducted by \citet{gren2017possible}, from which we obtained the raw data) with both students and practitioners ($N=461$) to see if obsolete requirements explicitly stated to be excluded from the effort estimate had an impact on the size of these estimates. Hereinafter, we denote the experiment published in \citet{gren2017possible} as Experiment 0, since it was the first one to be conducted on this topic but not a part of this current paper. Assessing the validity threats of Experiment 0 \citep{gren2017possible}, there is an evident problem with instructing subjects, on the paper next to the requirements, to exclude those requirements, which is why we replicated the experiment in a set of different settings in this paper. In more detail, it may be confusing to read the phrase ``Requirement x should not be implemented,'' which is why the experiment was replicated in as realistic a setting as possible (Experiment 6). Regarding Experiment 0, first, it was not known if the results from Experiment 0 replicate with exactly the same setup (addressed in Experiments 1 and 2). Second, it was not possible to know if the length of the requirements specification is a confounding factor (addressed in Experiments 3 and 4), or if the effect might disappear by conducting the estimation in teams (addressed in Experiment 6), which many companies do. Also, having students estimate requirements (Experiments 1, 2, and 3) they know they will not implement, for a system they are unfamiliar with, has, of course, a great risk of being a toy problem. Experiment 4, therefore, comprised industry participants, but they still estimated requirements they were not to implement themselves afterward. 
Experiments 5 and 6 looked at this aspect by being fully in the context of developers who both estimated and later implemented the requirements. Furthermore, the first set of experiments (Experiments 0--3) did not investigate the accuracy of the estimates, since we did not compare them to an actual implementation effort (we did not obtain ``true'' student implementation times). It could have been the case that the obsolete requirements helped the subjects to decrease the estimation error. Therefore, in Experiment 4, we conducted the same experiment but with professional software developers in industry and compared the result to the true implementation time, as implemented later by their colleagues. Experiment 5 was conducted in a field setting using the industry teams' own requirements. In Experiment 6, we used the same teams' own backlogs and sprints with requirements that they themselves implemented afterwards. We also collected qualitative data through interviews, asking the teams why they thought the estimations were inaccurate. Table \ref{Tab:SumofExp} provides a summary of the six experiments (Experiments 1--6), including subjects, number of requirements and obsolete requirements, type of replication according to the taxonomy by \citet{baldassarre2014replication}, and the reason for conducting the experiment. The setup and design of each experiment is described in detail in Section \ref{sec:DesandExpMtrl}, while the subjects and the selection of subjects are described in Section \ref{sec:subjects}. 
\begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Summary of the setting for each experiment.} \begin{center} \begin{tabular}{|p{0.12\linewidth}|p{0.12\linewidth}|p{0.15\linewidth}|p{0.20\linewidth}|p{0.20\linewidth}|} \hline \textbf{Experiment} & \textbf{Subjects} & \textbf{\# req / obsolete req} & \textbf{Type of replication} & \textbf{Reason}\\ \hline 1 & 150 Bachelor's students & 4-5 / 0-1 & Exact internal replication of Experiment 0 & To see if the results from Experiment 0 still hold\\ \hline 2 & 149 Bachelor's students & 4-5 / 0-1 & Exact internal replication of Experiments 0 and 1 & To see if the results from Experiments 0 and 1 still hold\\ \hline 3 & 60 Master's students & 8-10 / 0-2 & The same design as in Experiment 2, otherwise a differentiated replication & To study if twice as many requirements and obsolete requirements would influence the estimates\\ \hline 4 & 75 industry practitioners from two companies & 8-10 / 0-2 & The same design as in Experiment 2, otherwise a differentiated replication & To investigate if the results from Experiments 1--3 would hold true for practitioners in industry using real requirements from their companies\\ \hline 5 & 27 industry practitioners from three companies & 10-22 / 0-5 & Similar structure and design as in Experiment 4, but a conceptual replication & To see if the effect exists when practitioners estimate requirements from their own context\\ \hline 6 & 27 industry practitioners from three companies & 139-304 / 0-60 & Based on Experiment 5, but a conceptual replication & To see if the effect exists when teams estimate their own requirements\\ \hline \end{tabular} \end{center} \label{Tab:SumofExp} \end{table*} \subsection{Design and Experimental Material}\label{sec:DesandExpMtrl} The aims of \textbf{Experiments 1 and 2} were the same as for Experiment 0, i.e.\ to see if obsolete requirements explicitly stated to be excluded from the effort estimation have an impact on the size of the estimates. 
The reason for performing Experiments 1 and 2 was to investigate if the results from Experiment 0 still hold by exactly replicating Experiment 0 as reported in \citet{gren2017possible}. Thus, the design of Experiments 1 and 2 was exactly the same as for Experiment 0 (i.e.\ an internal replication). The first and second experiments had sample sizes of 150 and 149 students, respectively. In Experiments 1 and 2, three different tasks (A, B, and C) were designed and randomly distributed to three groups of students (Group A performing Task A, Group B performing Task B, and Group C performing Task C) in the same class. The groups were not overlapping, i.e., in Experiment 1, 50 students performed Task A, 50 students performed Task B, and 50 performed Task C. The first group (Group A) was to estimate how long, in weeks, it would take to implement four requirements. The four requirements were: \begin{itemize} \item \textit{R1: The system shall receive uncompressed data and shall compress and save the data to desired JPEG size} \item \textit{R2: The maximum delay from a call answer is pressed to opened audio paths is X ms} \item \textit{R3: The system shall have support for Time Shift (playback with delay)} \item \textit{R4: The system shall have a login function that consists of a username and a password} \end{itemize} Group B was given the same four requirements as Group A plus one extra added requirement; hence, Group B had five requirements for which to estimate the total implementation effort. The fifth requirement was: \begin{itemize} \item \textit{R5: It shall be possible to dedicate a host buffer in RAM that is configurable between X to Y MB for HDD} \end{itemize} Since all of the five requirements were from one of our industrial partners, we had to replace the real values with ``X'' and ``Y'' in this paper for confidentiality reasons. However, the students had the real values in their tasks. 
Group C was given the same five requirements as Group B, but was instructed to leave the last requirement (R5) out of the estimation. Both Experiments 1 and 2 were conducted during one lecture in a mandatory course. The students were given an introduction followed by a problem description. Then, a pre-questionnaire was handed out to the students to collect the students' experiences and knowledge in relation to the English language, experience from software development in industry, and experience in effort estimation. After the pre-questionnaire was filled in by the students, the assignments and their instructions were given to the students. At this point, the students had time to read the instructions and to complete the estimation task. The effort estimation task was designed to be conducted individually by the students. In total, the experiment lasted for about one hour, including introduction, explanations, pre-questionnaire, and completing the tasks. The actual time spent on the tasks, including reading the instructions and performing the estimation, was between 10 and 20 minutes. Since we also conducted Experiment 0, we present an analysis of all these three experiments (0, 1, and 2) jointly (see Section \ref{method1}). The results in \citet{jorgensen} indicated that the length of the requirements specification had an impact on effort estimations; therefore, it was of interest to study the degree to which twice as many requirements and obsolete requirements would influence the estimates. Thus, in \textbf{Experiment 3}, we decided to double the number of requirements for all three tasks (A, B, and C) and to conduct the experiment with a different set of students from a different university. The third experiment had a sample size of 60 students. Since the task in each group was different from the previous experiments, we could not compare the result with the results from Experiments 0--2. 
The design of Experiment 3, including the random distribution of students into groups, was exactly the same as for Experiments 1 and 2 (i.e.\ a differentiated replication), except for the number of requirements and obsolete requirements, which had been doubled. That is, instead of using four requirements in Group A, we had eight requirements, while the number of requirements for Group B increased from five to 10. Finally, for Group C the number of requirements increased from five to 10, where the students were told not to take the last two requirements (instead of only one, as in the previous experiments) into account when performing the estimation. Experiments 1--3 were conducted with student subjects who did not have any knowledge\slash expertise about the requirements, the domain, or the product to which the requirements belonged, nor did they have any extensive industrial experience of software development and effort estimation. Therefore, the aim of \textbf{Experiment 4} was to investigate if the results from Experiments 1--3 would hold true for practitioners in industry using real requirements from their companies that were to be implemented in their coming sprints shortly after Experiment 4 (note that the selected requirements in Experiment 4 were not yet implemented at the time of the experiment). Experiment 4 had exactly the same number of requirements as Experiment 3, but since the context was very different, we did not compare the students' results to the results of the industry participants. Moreover, another aspect that Experiments 1--3 do not address is the accuracy of the estimates, since we did not compare them to an actual implementation effort. Therefore, when the requirements used in Experiment 4 had been implemented, we collected the actual effort it took to implement them. The fourth experiment had a sample size of 75 industry practitioners from two different companies. 
Experiment 4 was a differentiated replication of Experiment 3, and the design of Experiment 4 was exactly the same as for Experiment 3, except for the requirements used and having industry practitioners instead of student subjects. The main criteria used when selecting the 10 requirements were that they should be implemented in a real project after the experiment (so that the actual effort would be known), and that the requirements should be understandable for all participating industry practitioners. For confidentiality reasons, the requirements used cannot be revealed. Moreover, the questions asked in the pre-questionnaire differed from the ones used in Experiments 1--3. In Experiment 4, we asked questions about the subjects' total years of experience in software development, total years of experience at their current company, and total years of experience with requirements engineering and effort estimation. These numbers were known for the sample as a whole, and the effect of experience was averaged out by randomizing the industry practitioners into the different Groups A, B, and C. Although the subjects in Experiment 4 were industry practitioners, they did not estimate requirements that they themselves were to implement. Instead, the requirements in Experiment 4 were implemented by other practitioners in the companies. Hence, the effect might only exist in contexts where an outsider, i.e.\ someone who will not actually implement the requirements, conducts the estimation. Therefore, \textbf{Experiment 5} looked into this aspect by being fully in the context of developers who both estimated and later implemented the requirements. Experiment 5 was conducted in a field setting using the industry teams' own requirements. The effort estimation in Experiment 5 was based on the industry practitioners' real requirements from their real product and sprint backlogs. 
The fifth experiment had a sample size of 27 industry practitioners from five complete teams at three different companies. For Experiment 5, we searched our industrial collaboration network for software-developing companies that would be interested in participating in the experiment. Three companies (hereafter named Company C, Company D, and Company E) and five complete teams (three from Company C and one each from Companies D and E, as shown in Table \ref{Tab:CompCharExp5}) were interested in the effort estimation work and decided to participate in Experiment 5. To set up and plan the experiment and to identify industry practitioners for participation in Experiment 5, we contacted three ``gate-keepers'' (one from each company). Experiment 5 followed a similar structure and design as Experiment 4 (i.e.\ a conceptual replication), but with real requirements from the teams' real projects, where the number of requirements and obsolete requirements varied. Fig. \ref{fig:ExReq} illustrates the level of detail of the requirements (written as user stories, natural language requirements, and use cases) in Experiments 5 and 6. Note that the requirements in Fig. \ref{fig:ExReq} are not the real requirements that were used in Experiments 5 and 6 (for confidentiality reasons, the requirements used cannot be revealed). \begin{landscape} \begin{figure} \includegraphics[scale=0.67]{REJ_Obs_REQ_Fig-ExReq.pdf} \caption{Example of the level of detail of the requirements in Experiments 5 and 6.} \label{fig:ExReq} \end{figure} \end{landscape} For each team, the effort estimation was performed individually over two or three sprints, where the number of requirements and obsolete requirements differed, both between the teams and between the sprints for each team. The reason for this difference was based on input from the ``gate-keepers'' at each company. 
After each sprint, the real requirements were implemented, and we then collected the actual implementation effort in order to compare it with the individual estimates. The main criteria used when selecting the requirements were that they should be real requirements from the team's product and\slash or sprint backlog, and that the requirements should be implemented in the coming sprint. Before the estimation of the requirements in the first sprint, the industry practitioners were given the same introduction, problem description, and pre-questionnaire as the subjects in Experiment 4. After the pre-questionnaire was filled in by the industry practitioners, the assignment (the real requirements from their coming sprint, selected by the ``gate-keepers'') and its instructions were given to the industry practitioners. This was done before the estimations of sprint 1. The industry practitioners completed the estimation work and implemented the requirements. At the beginning of sprint 2 and sprint 3, the selected requirements for each sprint were given to the industry practitioners. Again, the ``gate-keepers'' selected which requirements to include for sprint 2 and sprint 3. Note that there was no introduction and pre-questionnaire for the second and third sprints. In total, the estimation work for each sprint lasted about 30 minutes, while the introduction before sprint 1 lasted about 20 minutes. The ``gate-keepers'' collected the actual implementation effort from each sprint and informed the second author about the actual effort. After conducting Experiment 5, it was not possible to decide if the effect was due to the tasks being interpreted as unrealistic. Moreover, many industry practitioners perform their effort estimations by discussion in teams; thus, it was unknown whether group discussions might mitigate the error. 
In addition, there were no details\slash results of how the subjects reasoned when performing the estimations where obsolete requirements were visible. All of these issues were addressed in \textbf{Experiment 6}. The sixth experiment had exactly the same subjects and companies as Experiment 5. The purpose of Experiment 6 was to create a setup that was exactly the same as the teams' daily work. Moreover, the number of requirements in the previous experiments did not reflect the number of requirements in real projects and real sprints. Therefore, we discussed with the ``gate-keepers'' at each company the possibility of modifying (i.e.\ ``marking'' requirements as obsolete) some real requirements in the teams' real product backlogs, without the teams' knowledge that they were still part of the study. We obtained approval from the companies and the ``gate-keepers'' to do this in order to study the effect of obsolete requirements in real situations without the possible bias from the subjects being aware of taking part in a study. In Experiment 6, no modifications were made to the companies' or the teams' processes, ways of working, how requirements end up in different backlogs, decision-making, implementation of requirements, or how estimations were done. The only modification of the companies' and teams' processes and requirements was that the ``gate-keeper'' at each company modified some of the already existing requirements in the teams' product backlogs by ``marking'' a selection of requirements as obsolete, in the way they are usually marked in their real product backlogs, for example, by stating that a requirement is ``obsolete,'' ``not included,'' or ``out of scope,'' or simply by marking a requirement in red. Fig. \ref{fig:ExReqandObsReq} illustrates three examples of how obsolete requirements were marked, and how they were presented to the teams together with non-obsolete requirements. Note that the requirements in Fig. 
\ref{fig:ExReqandObsReq} are not the real requirements that were used in Experiment 6. Each team worked in their normal product and sprint backlogs in their real projects and performed the estimations and prioritization as they normally do. That is, they looked into their product backlogs (which contained both requirements and obsolete requirements) to estimate and select which requirements should be included in the next sprint, and added the selected requirements to their sprint backlog. Then, the teams implemented the requirements from the sprint backlog. All the teams had access to their product backlog, which means that they saw (and could access) all the requirements, including the added\slash modified obsolete requirements. What the team decided to implement in a sprint was a subset of the product backlog and was discussed in the sprint planning meeting. After each sprint was completed, the ``gate-keepers'' sometimes added and\slash or changed the number of obsolete requirements, which, according to them, always happens in practice. The number of obsolete requirements for each team and in each sprint was decided by each ``gate-keeper'' to make it as realistic as possible. That is, the researchers did not influence the percentage of obsolete requirements in the product backlogs. The requirements used in Experiment 6 had the same level of detail as in Experiment 5 (see Fig.~\ref{fig:ExReq}). For confidentiality reasons, the requirements used cannot be revealed. In addition, after the requirements from the sprint backlog were implemented, the ``gate-keepers'' collected the actual implementation effort for the requirements in order to compare the actual effort with the estimations. Then the process was repeated for each sprint. In total, this process lasted for three sprints for each team. After the three sprints, the second author went back to the companies to interview the team members about their experiences. 
The interviews used a semi-structured approach and lasted between 10 and 30 minutes. Each interview was conducted face-to-face at the company, with one industry practitioner and the second author participating, and notes were taken during the interviews. Experiment 6 lasted for three sprints for each team, thus the total time for Experiment 6 was between six and nine weeks (depending on the sprint length for each team). \begin{landscape} \begin{figure} \includegraphics[scale=0.8]{REJ_Obs_REQ_Fig_StructureObsReq.pdf} \caption{Three examples of how obsolete requirements were marked and mixed with non-obsolete requirements in Experiments 5 and 6.} \label{fig:ExReqandObsReq} \end{figure} \end{landscape} \subsection{Subjects}\label{sec:subjects} \textbf{Experiment 1} comprised Bachelor's students from the course Software Engineering Process --- Economy and Quality at Lund University, Sweden. The course was mandatory for third-year students in the Computer Science and Information program. In total, 150 students participated in Experiment 1, which was conducted after Experiment 0. As in Experiment 0, we distributed a pre-questionnaire. The results from the pre-questionnaire in Experiment 1 showed a small variation in English language knowledge, ranging from ``very good knowledge'' to ``fluent.'' Out of the 150 subjects, six had industrial experience of software development (between four and eight months), and five of these six subjects had about one month of experience of effort estimation. The subjects in \textbf{Experiment 2} were Bachelor's students from the course Software Engineering Process --- Soft Issues at Lund University, Sweden. The course was mandatory for second-year students in the Computer Science and Information program. In total, 149 students participated in Experiment 2, which was conducted in the same year as Experiment 1. 
The pre-questionnaire (the same as in Experiment 1) showed that the students' English language knowledge varied between ``good knowledge'' and ``fluent.'' Only one student had experience from software development in industry (about five months), while none of the students in Experiment 2 had any experience of effort estimation. The subjects in \textbf{Experiment 3} were Master's students from the course Requirements Engineering at Chalmers $|$ University of Gothenburg, Sweden. The course was a mandatory Master's-level course for students in the Master's programs Software Engineering and Interaction Design and Technologies. In total, 60 students participated in Experiment 3, which was conducted after Experiment 2. In Experiment 3, the result of the pre-questionnaire revealed a variation in English language knowledge, ranging from ``good knowledge'' to ``fluent.'' Most of the students (52 out of 60) reported no experience at all from software development in industry, and 53 out of 60 students reported no experience of effort estimation. For the students that reported experience from software development in industry, the experience varied from five months to one year, and the reported experience of effort estimation was about one month. The subjects in \textbf{Experiments 4, 5, and 6} were industry practitioners from five different companies. For the industrial subjects, we contacted one ``gate-keeper'' at each of the five companies. The ``gate-keepers'' identified industry practitioners that they thought were the most suitable and representative of the company to participate in this study, i.e.\ the ``gate-keepers'' knew that the research was about effort estimation of requirements and were to select participants that perform such work within the organization. 
That is, the researchers did not influence the selection of the industry practitioners, nor did the researchers have any personal relationship with the industry practitioners. The ``gate-keepers'' selected software professionals that work with requirements engineering and perform estimation work. None of the industry practitioners were students working part-time at the companies; all were fully employed by their respective company at the time of the experiments. For Experiment 4, the ``gate-keepers'' identified individual industry practitioners, while for Experiments 5 and 6, the ``gate-keepers'' instead identified complete teams that work together at the companies in their real projects. Moreover, in Experiments 5 and 6, the ``gate-keepers'' selected industry practitioners that, in addition to working with requirements engineering and performing estimation work, also were responsible for implementing the requirements. In the industrial settings (Experiments 4--6), the pre-questionnaire asked questions about the subjects' total years of industrial experience in software development, total years at their current company, and total years of experience of requirements engineering and effort estimation. In total, 75 industry practitioners participated in \textbf{Experiment 4}, 21 from Company A and 54 from Company B. The industry practitioners from Company A had between 2 and 15 years of professional experience in software development, between 1 and 15 years at Company A, between 2 and 9 years of experience in requirements engineering, and 2 to 6 years of experience with effort estimations. 
The industry practitioners from Company B had between 1 and 25 years of professional experience in software development, between 1 and 17 years at Company B, between 1 and 21 years of experience in requirements engineering, and 1 to 18 years of experience with effort estimations. The two companies, both from the telecommunication domain, varied in size: around 250 employees at Company A and more than 2,700 employees at Company B. Both companies used agile development methods, where Company A performed effort estimations individually while Company B performed effort estimations in teams. Both companies used hours as their effort estimation unit. More details about the two companies are not revealed for confidentiality reasons. In total, 27 industry practitioners from five teams at three different companies participated in \textbf{Experiments 5 and 6}, as shown in Table \ref{Tab:SubjCharExp5}. From Company C, 18 industry practitioners from three teams participated in Experiment 5. The industry practitioners from Company C had between 3 and 15 years of professional experience at Company C and between 3 and 20 years of professional experience in software development. From Company D, four industry practitioners from one team participated in the experiment. The industry practitioners from Company D had between 4 and 10 years of professional experience in software development and between 3 and 6 years at Company D. From Company E, five industry practitioners from one team participated in Experiment 5. The industry practitioners from Company E had between 1 and 8 years of professional experience at Company E and between 1 and 15 years of professional experience in software development. 
\begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Industry subject characteristics - Experiments 5 and 6.} \begin{center} \begin{tabular}{|p{0.10\linewidth}|p{0.05\linewidth}|p{0.20\linewidth}|p{0.20\linewidth}|p{0.20\linewidth}|} \hline \textbf{Company} & \textbf{Team} & \textbf{Subject/Role} & \textbf{Number of years of experience in current company} & \textbf{Number of years of experience in software development}\\ \hline C & C.1 & Developer 1 & 6 & 10\\ & & Developer 2 & 8 & 12\\ & & Developer 3 & 6 & 10\\ & & Developer 4 & 4 & 4\\ & & Product owner & 5 & 15\\ & & Senior engineer & 8 & 8\\ & C.2 & Developer 1 & 5 & 7\\ & & Developer 2 & 3 & 3\\ & & Developer 3 & 3 & 3\\ & & Product owner & 15 & 19\\ & & Software designer & 11 & 20\\ & C.3 & Developer 1 & 8 & 13\\ & & Developer 2 & 9 & 10\\ & & Developer 3 & 5 & 5\\ & & Product owner & 9 & 9\\ & & Senior engineer & 4 & 10\\ & & Software designer & 6 & 9\\ & & Software architect & 7 & 16\\ \hline D & D.1 & Developer 1 & 4 & 4\\ & & Developer 2 & 3 & 5\\ & & Developer 3 & 4 & 5\\ & & Project manager & 6 & 10\\ \hline E & E.1 & Developer 1 & 2 & 2\\ & & Developer 2 & 1 & 1\\ & & Developer 3 & 2 & 5\\ & & Project manager & 8 & 15\\ & & Product owner & 1 & 5\\ \hline \end{tabular} \end{center} \label{Tab:SubjCharExp5} \end{table*} The three companies (Company C, D, and E) are in different domains and varied in size in terms of number of requirements in their backlogs, product backlogs, and sprint backlogs (as shown in Table \ref{Tab:CompCharExp5}). Company C, from the Telecommunication domain, had about 10,000 requirements in their backlog. For the three teams (C.1, C.2 and C.3 in Table \ref{Tab:CompCharExp5}) from Company C, the product backlogs varied between 150 and 400 requirements, while the sprint backlogs varied between 5 and 30 requirements. For all three teams, the sprint length was two weeks. 
In Team C.1, the requirements are specified using natural language (about 75\% of the requirements) and user stories (about 25\%). In Team C.2, all of the requirements are specified as natural language requirements. Team C.3 used four different specification techniques for their requirements: about 40\% of the requirements were specified using natural language, 40\% as use cases, about 15\% as user stories, and 5\% as sequence diagrams. Company D, which is a consultancy company, had about 10,000 requirements in their backlog. The product backlog for Team D.1 from Company D had between 100 and 400 requirements, while their sprint backlog varied between 15 and 20 requirements. The sprint length for Team D.1 was two weeks. Team D.1 specified all of their requirements as natural language requirements. Company E, from the consumer electronics domain, had about 4,000 requirements in their backlog, while the product backlog for Team E.1 varied between 140 and 180 requirements. Team E.1's sprint backlog varied between 10 and 20 requirements and the length of their sprint was three weeks. Team E.1 specified all of their requirements as user stories. All three companies (Company C, D, and E) used agile development methods, and all five teams performed effort estimations in teams using hours as the estimation unit. More details about the three companies and the five teams are not revealed due to confidentiality reasons. 
\begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Company characteristics - Experiments 5 and 6.} \begin{center} \begin{tabular}{|p{0.10\linewidth}|p{0.10\linewidth}|p{0.10\linewidth}|p{0.05\linewidth}|p{0.15\linewidth}|p{0.15\linewidth}|p{0.10\linewidth}|} \hline \textbf{Company} & \textbf{Domain} & \textbf{\# requirements in backlog} & \textbf{Team} & \textbf{\# requirements in product backlog} & \textbf{\# requirements in sprint backlog} & \textbf{Sprint length (in weeks)}\\ \hline C & Telecom & 10,000 & C.1 & 200--300 & 15--20 & 2\\ & & & C.2 & 150--200 & 10--30 & 2\\ & & & C.3 & 200--400 & 5--15 & 2\\ \hline D & Consultant & 10,000 & D.1 & 100--400 & 15--20 & 2\\ \hline E & Consumer electronics & 4,000 & E.1 & 140--180 & 10--20 & 3\\ \hline \end{tabular} \end{center} \label{Tab:CompCharExp5} \end{table*} \section{Results}\label{sec:results} In this section we first present the results from the separate analyses conducted and then we analyze all of them together. \subsection{Experiments 0, 1, and 2}\label{method1} We start by plotting our raw data of the estimations obtained for each of the Groups A, B, and C. In Fig.~\ref{fig:raw}, we can see that we have quite normally distributed raw data and there seems to be a difference in that $A<B<C$. 
\begin{figure} \includegraphics[scale=0.4]{rawdata1.pdf} \caption{Density plots of the raw data of the estimates for the different groups in Experiments 0 to 2.} \label{fig:raw} \end{figure} The likelihood functions and our weakly informative priors \citep{bernardo1975non} used when the first data was analyzed were the following: {\footnotesize \begin{IEEEeqnarray}{rCl} \mathrm{Estimate}_i & \sim & \mathrm{Normal}(\mu_i, \sigma)\\ \mathrm \mu_i & = &\beta_A \mathrm{A}_i + \beta_B \mathrm{B}_i + \beta_C \mathrm{C}_i\\ \beta_A & \sim & \mathrm{Normal}(0,1) \\ \beta_B & \sim & \mathrm{Normal}(0,1)\\ \beta_C & \sim & \mathrm{Normal}(0,1)\\ \sigma & \sim & \mathrm{HalfCauchy}(0,5) \end{IEEEeqnarray}} Note that we have a model without any intercept (3). We could use an intercept as Group A, but modeling like this gives much more straightforward output from brms (see Supplementary Material). The priors above need some explanation. The response variable is always assumed to be Gaussian (i.e.\ normally distributed) in linear regression \citep{McElreath2016sra}, which is why our estimate variable is assumed to be Gaussian with a $\mu_i$ and $\sigma$ (2). Fig.~\ref{fig:raw} also supports this claim. We obtain a posterior distribution for each of the groups, which makes them very easy to compare (4--6). Using BDA and explicitly defining our statistical model like this makes it possible to directly encode our hypothesis about the experiment, since we can use our subjective knowledge as priors in the statistical model. In our case, not much was known about the prior distribution; however, our assumption was that the estimates given for Group A should be larger than zero and not have more extreme values than 100 (the max value obtained in our data was 14.5), which covers extreme values. It is hard to assess how a model behaves without simulating output, which is done in a sensitivity analysis (see Supplementary Material). 
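As an illustration of such output simulation, the following Python sketch draws from the priors above before seeing any data (a prior predictive check); this is our own minimal sketch, not the brms\slash Stan code used in the study, and it reads $\mathrm{HalfCauchy}(0,5)$ as a half-Cauchy with scale 5:

```python
import numpy as np

# Prior predictive simulation for the model above (illustrative sketch):
# a group mean from Normal(0, 1), a residual sd from Half-Cauchy(0, 5),
# and then a simulated estimate from Normal(mean, sd).
rng = np.random.default_rng(2024)
n_draws = 10_000

beta_a = rng.normal(0.0, 1.0, size=n_draws)          # prior for one group mean
sigma = np.abs(5.0 * rng.standard_cauchy(n_draws))   # Half-Cauchy(0, 5)
sim_estimates = rng.normal(beta_a, sigma)            # prior predictive draws

# How extreme do the simulated estimates get before seeing any data?
within_range = np.mean(np.abs(sim_estimates) < 100)
print(f"fraction of prior predictive draws within +/-100: {within_range:.3f}")
```

A model whose prior predictive draws routinely fall far outside the plausible range of estimates would be revised, which is the kind of check the sensitivity analysis performs.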
In brief, we tested different models and chose the one above, since the alternatives simulated values of the estimate much larger than 100. We used a standard weakly informative prior for sigma (7), the Half-Cauchy prior with a standard deviation of 5 \citep{gelman2008weakly}. Fig.~\ref{fig:posteriors1} shows the sampled posterior distributions, which confirms the result that was previously published, i.e.\ there is a significant difference between all three mutually exclusive estimation groups (A: 4 requirements; B: the same 4 requirements but with a fifth one added; C: the same 5 requirements as in B but with the fifth marked ``Please note that requirement 5 should not be implemented''). \begin{figure} \includegraphics[scale=0.4]{exp0to2-redigerad.pdf} \caption{Sampled posterior distributions in Experiments 0--2 for groups with median and 95\% credible interval (note that the sigma is not included).} \label{fig:posteriors1} \end{figure} Table~\ref{tab:1} shows the parameter statistics for each of Groups A, B, and C. We see that all the groups are different and we obtained much higher estimates in Group C, where one requirement was marked as obsolete. \begin{table} \caption{Means and 95\% credible interval for the group parameters and the sigma used in the likelihood model.} \label{tab:1} \begin{tabular}{llll} \hline\noalign{\smallskip} & Mean & l-95\% CI & u-95\% CI\\ \noalign{\smallskip}\hline\noalign{\smallskip} Group A & 4.43 & 4.15 & 4.70\\ Group B & 5.87 & 5.59 & 6.15\\ Group C & 9.41 & 9.13 & 9.69\\ Sigma & 1.83 & 1.72 & 1.95\\ \noalign{\smallskip}\hline \end{tabular} \end{table} By simply looking at Fig.~\ref{fig:posteriors1} or reading Table~\ref{tab:1}, we see that all the groups were significantly different from each other, since almost no values even overlap. However, a measurement of overall effect size was important to calculate. 
The Bayesian $R^2$ was 53.8\%, which means that around 54\% of the variance in the estimations can be explained by Group, which is very high considering the many other confounding factors present when people make estimations of requirements. By this we mean all the unexplained variance present in a behavioral context that should be averaged out instead of blocked. This is why effect sizes in psychological science are considered high even at quite low percentages of explained variance \citep{cohen}: they are not low in a complex system. \subsection{Experiment 3}\label{exp3} Since it was not possible to know how the longer requirements specifications with 8 and 10 requirements would affect the estimations, we used weak priors again for the third experiment, i.e.\ we started our data analysis with exactly the same model and priors as in the previous data analysis. Based on the results from the previous experiments, we could have assumed Group C to be larger than A and B; however, with the new and longer requirements specification we opted to be very conservative and careful regarding the effect of C. We start again by plotting our raw data of the estimations obtained for each of the Groups A, B, and C. In Fig.~\ref{fig:raw2}, we can see that we have quite normally distributed raw data and there seems to be a difference in that $A<B<C$, just like before. \begin{figure} \includegraphics[scale=0.4]{rawdata2.pdf} \caption{Raw data for the different groups in Experiment 3.} \label{fig:raw2} \end{figure} We updated our model from the previous experiments to a Log-Normal distribution due to our sensitivity analysis (see Supplementary Material). 
{\footnotesize \begin{IEEEeqnarray}{rCl} \mathrm{Estimate}_i & \sim & \mathrm{LogNormal}(\mu_i, \sigma)\\ \mathrm \mu_i & = &\beta_A \mathrm{A}_i + \beta_B \mathrm{B}_i + \beta_C \mathrm{C}_i\\ \beta_A & \sim & \mathrm{Normal}(0,1) \\ \beta_B & \sim & \mathrm{Normal}(0,1)\\ \beta_C & \sim & \mathrm{Normal}(0,1)\\ \sigma & \sim & \mathrm{HalfCauchy}(0,5) \end{IEEEeqnarray}} Fig.~\ref{fig:4} shows the sampled posterior distributions, and Table~\ref{tab:4} shows the parameters for each group with a 95\% credible interval. As we can see, the estimates for all the groups increase, and we see a similar pattern as in the previous experiments. The true implementation time for students was not known; however, it is expected that the estimates for Group A have increased simply because more requirements should take more time to implement. Our main conclusion is still that the pattern of obtaining even larger estimates when told to exclude requirements holds. \begin{figure} \includegraphics[scale=0.4]{exp3.pdf} \caption{Sampled posterior distributions in Experiment 3 for groups with median and 95\% credible interval (note that the sigma is not included).} \label{fig:4} \end{figure} \begin{table} \caption{Means and 95\% credible interval for the group parameters and the sigma used in the likelihood model.} \label{tab:4} \begin{tabular}{llll} \hline\noalign{\smallskip} & Mean & l-95\% CI & u-95\% CI\\ \noalign{\smallskip}\hline\noalign{\smallskip} Group A & 4.90 & 4.53 & 5.26\\ Group B & 7.85 & 7.32 & 8.50\\ Group C & 10.80 & 9.97 & 11.59\\ Sigma & 1.19 & 1.15 & 1.23\\ \noalign{\smallskip}\hline \end{tabular} \end{table} In the case of the third experiment, our Bayesian $R^2 = 0.75$, which means that around 75\% of the variation in the estimations can be explained by which group (A, B, or C) the subjects were part of. We interpret this effect size as extremely high. 
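For readers unfamiliar with the measure, one common formulation computes the Bayesian $R^2$ per posterior draw as the variance of the fitted values divided by that variance plus the residual variance. The Python sketch below is our own illustration with simulated stand-in draws, not the study's actual posterior:

```python
import numpy as np

# Bayesian R^2 per posterior draw: Var(fitted) / (Var(fitted) + sigma^2).
# fitted_draws: (n_draws, n_obs) matrix of posterior fitted values;
# sigma_draws: (n_draws,) posterior draws of the residual standard deviation.
def bayes_r2(fitted_draws, sigma_draws):
    var_fit = fitted_draws.var(axis=1)
    return var_fit / (var_fit + sigma_draws ** 2)

# Stand-in draws purely for illustration (not the experiment's posterior):
# three hypothetical group means and a residual sd of around 1.2.
rng = np.random.default_rng(7)
fitted = rng.choice([5.0, 8.0, 11.0], size=(4000, 60))
sigma = np.abs(rng.normal(1.2, 0.05, size=4000))

r2 = bayes_r2(fitted, sigma)
print(f"posterior mean R^2: {r2.mean():.2f}")
```

Averaging the per-draw values yields the single $R^2$ figure reported per experiment.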
\subsection{Experiment 4}\label{method4} Experiments 0--3 were conducted with student subjects that did not have any knowledge\slash expertise about the requirements, the domain, or the product to which the requirements belong, nor did they have any extensive industrial experience of software development and effort estimation. These issues were addressed in Experiment 4. Since this was the first experiment in industry, and one of the biggest threats to the previous experiments was that they could be seen as toy problems that would not exist in the real world where estimations are conducted, the analysis again used weakly informative priors. We start, as always, by plotting our raw data of the estimations obtained for each of the Groups A, B, and C. In Fig.~\ref{fig:raw3}, we can again see that we have quite normally distributed raw data and there seems to be a difference in that $A<B<C$. \begin{figure} \includegraphics[scale=0.4]{rawdata3.pdf} \caption{Raw data for the different groups in Experiment 4.} \label{fig:raw3} \end{figure} For the same reason as in the previous experiment, we use a Log-Normal distribution due to our sensitivity analysis (see Supplementary Material). {\footnotesize \begin{IEEEeqnarray}{rCl} \mathrm{Estimate}_i & \sim & \mathrm{LogNormal}(\mu_i, \sigma)\\ \mathrm \mu_i & = &\beta_A \mathrm{A}_i + \beta_B \mathrm{B}_i + \beta_C \mathrm{C}_i\\ \beta_A & \sim & \mathrm{Normal}(0,1) \\ \beta_B & \sim & \mathrm{Normal}(0,1)\\ \beta_C & \sim & \mathrm{Normal}(0,1)\\ \sigma & \sim & \mathrm{HalfCauchy}(0,5) \end{IEEEeqnarray}} The results of the experiment conducted in an industrial setting showed the same pattern again. Table~\ref{tab:5} shows the means and credible intervals, just like in the previous experiments. Fig.~\ref{fig:5} shows the posterior distribution including two lines. 
The left line represents the actual implementation time for tasks A and C (3.5 weeks), and the right (dashed) line represents the actual implementation time for task B (5 weeks). We can see that in all cases the practitioners overestimated the implementation times. However, the over-estimation in A is lower (around 1 week) compared to the estimates of more requirements in B (almost 2 weeks). The worst over-estimation occurred in C, where marking two requirements as obsolete increased the over-estimation to almost 5 weeks. \begin{figure} \includegraphics[scale=0.4]{exp4actual.pdf} \caption{Sampled posterior distributions in Experiment 4 for groups with median and 95\% credible interval and the two actual implementation times (the left one for A and C, and the right dashed line for B). Note that the sigma is not included.} \label{fig:5} \end{figure} \begin{table} \caption{Means and 95\% credible interval for the group parameters and the sigma used in the likelihood model.} \label{tab:5} \begin{tabular}{lllll} \hline\noalign{\smallskip} & Mean & l-95\% CI & u-95\% CI\\ \noalign{\smallskip}\hline\noalign{\smallskip} Group A & 4.62 & 4.22 & 5.00\\ Group B & 6.69 & 6.17 & 7.31\\ Group C & 8.33 & 7.61 & 9.12\\ Sigma & 1.25 & 1.21 & 1.30\\ \noalign{\smallskip}\hline \end{tabular} \end{table} In the case of the fourth experiment, $R^2 = 0.527$, which means that around 53\% of the variation in the estimations can be explained by which group (A, B, or C) the subjects were part of. We interpret this effect size as high again. \subsection{Summary of Experiments 1--4} We have now analyzed the first four experiments and can conclude that obsolete requirements clearly have an effect on the estimations (Experiments 1 and 2). From Experiment 3, the results show that the same effect was found using a twice as large requirements specification including twice as many obsolete requirements. 
However, from the student experiments (Experiments 1--3) it is not possible to know if the students over- or under-estimated. From Experiment 4, the results show that the effect existed in industry, where practitioners estimated real requirements later implemented by someone else at the company, and that it resulted in a gross over-estimation. The found effect sizes were 0.54, 0.75, and 0.53. Since we opted not to use any knowledge between these three sets of experiments (only between Experiments 0, 1, and 2, which led us to analyze that experimental data together), we need to be careful when comparing them or even averaging the effect. What the results show is that the effect exists and is large: even larger for the larger requirements specification, and somewhat lower again in an industrial setting. The effect might only exist in contexts where an outsider, i.e.\ someone that will not actually implement the requirements, conducts the estimation. This was addressed in Experiment 5. Based on the results up to this point, it would be good to create a model that can predict the over-estimations given the percentage of obsolete requirements. Unfortunately, only a small subset of our data includes any information on the true implementation time and the percentage of obsolete requirements; more specifically, only Group C in Experiment 4 includes that information. The data from Experiment 4 (closest to a real setting) show that the true implementation time for Group C was 3.5 weeks (see Fig.~\ref{fig:5}). In Experiment 6, we collected more data of that kind. \subsection{Experiment 5}\label{method5} Experiment 5 did not include a large enough sample from different groups who partly estimated the same requirements (only three teams from Company C). Therefore, it is not possible to assess the different levels of the effect much further. However, this was not the main aim. 
The aim was instead to see if obsolete requirements have a similar effect of distorting estimates when practitioners themselves estimate requirements from their own work. In Experiment 5, all participants conducted the estimations individually, and we then calculated a mean value for each team, as shown in Table~\ref{Tab:Suasd}. The three teams from Company C had a common product backlog, so we tested the same requirements (but in a different order and with different ones marked as obsolete) with all of them before one team then implemented them. For each team, the effort estimation was performed individually in two or three sprints where the number of requirements and obsolete requirements differed, as shown in Table \ref{Tab:Step1_Exp5}. \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Number of requirements in Experiment 5.} \begin{center} \begin{tabular}{|p{0.05\linewidth}|p{0.07\linewidth}|p{0.16\linewidth}|p{0.15\linewidth}|p{0.19\linewidth}|p{0.15\linewidth}|} \hline \textbf{Team} & \textbf{Sprint} & \textbf{\# requirements} & \textbf{\# obsolete requirements} & \textbf{total \# requirements} & \textbf{Percent obsolete reqs}\\ \hline C.1 & 1 & 15 & 0 & 15 & 0\%\\ & 2 & 20 & 4 & 24 & 17\%\\ & 3 & 10 & 1 & 11 & 9\%\\ \hline C.2 & 1 & 15 & 3 & 18 & 17\%\\ & 2 & 20 & 2 & 22 & 9\%\\ & 3 & 10 & 0 & 10 & 0\%\\ \hline C.3 & 1 & 15 & 2 & 17 & 12\%\\ & 2 & 20 & 0 & 20 & 0\%\\ & 3 & 10 & 2 & 12 & 17\%\\ \hline D.1 & 1 & 15 & 4 & 19 & 21\%\\ & 2 & 17 & 0 & 17 & 0\%\\ \hline E.1 & 1 & 15 & 5 & 20 & 25\%\\ & 2 & 18 & 0 & 18 & 0\%\\ \hline \end{tabular} \end{center} \label{Tab:Step1_Exp5} \end{table*} The results from Experiment 5 are shown in Table~\ref{Tab:Suasd}. The three teams from Company C partly estimated the same requirements but with different ones marked as obsolete. Overall, the results show an effect of introducing obsolete requirements, cf.\ Table~\ref{Tab:Step1_Exp5} and Table~\ref{Tab:Suasd}. 
Without any obsolete requirements the estimations are quite accurate, but when obsolete requirements are introduced the individuals systematically over-estimate. \begin{table*}[htbp] \renewcommand{\arraystretch}{1.3} \caption{Individual and team estimations of the teams' own subset of real requirements in Experiment 5.} \begin{center} \begin{tabular}{|p{0.05\linewidth}|p{0.40\linewidth}|p{0.10\linewidth}|p{0.10\linewidth}|p{0.10\linewidth}|} \hline \textbf{Team} & \textbf{Role / Team average / actual implementation and \% actual overestimation}& \textbf{Sprint 1} & \textbf{Sprint 2}& \textbf{Sprint 3}\\ \hline C.1 & Developer 1 & 370 & 600 &465 \\ & Developer 2 & 345 & 605 &460 \\ & Developer 3 & 320 & 595 &470 \\ & Developer 4 & 330 & 620 &460 \\ & Product owner & 350 & 645 &450 \\ & Senior engineer & 356 & 612 &455 \\ & \textbf{Team average} & 345 & 613 &460 \\ & \textbf{Actual implementation} & 340 & 473 & 414 \\ & \textbf{\% actual overestimation} & 1.5\% & 30\% &11\% \\ \hline C.2 & Developer 1 & 455 & 520 &400 \\ & Developer 2 & 435 & 545 &390 \\ & Developer 3 & 490 & 500 &420 \\ & Product owner & 415 & 495 &395 \\ & Software designer & 445 & 535 &415 \\ & \textbf{Team average} & 448 & 519 &404 \\ & \textbf{Actual implementation} & 340 & 473 & 414 \\ & \textbf{\% actual overestimation} & 32\% & 10\% &-3\% \\ \hline C.3 & Developer 1 & 410 & 490 &510 \\ & Developer 2 & 405 & 450 &505 \\ & Developer 3 & 395 & 495 &530 \\ & Product owner & 425 & 495 &500 \\ & Senior engineer & 435 & 480 &515 \\ & Software designer & 430 & 505 &535 \\ & Software architect & 395 & 500 &525 \\ & \textbf{Team average} & 414 & 488 &517 \\ & \textbf{Actual implementation} & 340 & 473 & 414 \\ & \textbf{\% actual overestimation} & 21\% & 3\% &25\% \\ \hline D.1 & Developer 1 & 480 & 360 & NA \\ & Developer 2 & 495 & 350 & NA \\ & Developer 3 & 535 & 330 &NA \\ & Project manager & 550 & 390 &NA \\ & \textbf{Team average} & 515 & 358 &NA \\ & \textbf{Actual implementation} & 
369 & 351 & NA \\ & \textbf{\% actual overestimation} & 40\% & 1.9\% &NA \\ \hline E.1 & Developer 1 & 776 & 484 &NA \\ & Developer 2 & 796 & 528 &NA \\ & Developer 3 & 783 & 515 &NA \\ & Project manager & 785 & 498 &NA \\ & Product owner & 703 & 529 &NA \\ & \textbf{Team average} & 769 & 511 &NA \\ & \textbf{Actual implementation} & 575 & 503 & NA \\ & \textbf{\% actual overestimation} & 34\% & 1.5\% &NA \\ \hline \end{tabular} \end{center} NA: Not Applicable, meaning teams D.1 and E.1 only performed estimations in two sprints \label{Tab:Suasd} \end{table*} We now have some more individual data on both the percentage of obsolete requirements and the resulting over-estimations. The requirements specification used by Group C in Experiment 4 included 20\% obsolete requirements and had an actual implementation time of 3.5 weeks. As can be seen in Table~\ref{Tab:Suasd}, we obtained additional data from Experiment 5, comprising individual estimates from several sprints in a real industrial setting. Based on all the collected data with the percentage of obsolete requirements and the over-estimations, we created a model for predicting new values using a posterior distribution and tested that model against the values obtained in Experiment 6. We have very few data points per percentage of obsolete requirements, thus our model is not expected to be precise. The main reason for providing it here is to present a first model that can later be trained with more data by us or other researchers. All the details of this model are in the Supplementary Material, and we have 70 data points to use as data. The percentage of obsolete requirements was obtained by dividing the number of obsolete requirements by the total number of requirements included. The over-estimation was obtained by dividing the provided estimate by the actual implementation time. 
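As a concrete check, the two derived quantities can be recomputed directly from the table values. The following Python sketch (the function names are ours) reproduces two cells for Team C.1, sprint 2:

```python
# Recomputing the derived columns of the Experiment 5 tables from raw values.
def pct_obsolete(n_obsolete, n_total):
    """Percentage of obsolete requirements among all requirements shown."""
    return 100 * n_obsolete / n_total

def pct_overestimation(estimate, actual):
    """Over-estimation of the estimate relative to the actual effort."""
    return 100 * (estimate / actual - 1)

# Team C.1, sprint 2: 4 obsolete requirements among 24 in total.
print(round(pct_obsolete(4, 24)))           # 17 (% obsolete)
# Team C.1, sprint 2: team average estimate 613 h vs. 473 h actual effort.
print(round(pct_overestimation(613, 473)))  # 30 (% actual overestimation)
```

Both values match the corresponding table cells (17\% obsolete, 30\% over-estimation).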
\subsection{Experiment 6}\label{method6} As previously mentioned, in Experiment 6 we tweaked Experiment 5 and injected obsolete requirements into the teams' current sprint planning without their knowledge (with permission from the ``gate-keepers''). We injected different sets of obsolete requirements over a period of three real sprints. The setup of this part of the experiment, with the number of requirements and the number of obsolete ones, is shown in Table \ref{Tab:Step2_Exp5}. For example, Team C.2 started with 189 requirements in their product backlog (Sprint 1), of which 38 were ``marked'' as obsolete. From the product backlog, 12 requirements (none of which were obsolete) were selected for the sprint backlog of Sprint 1, and all 12 requirements were implemented. At the beginning of Sprint 2, the product backlog contained 177 requirements, of which 18 were ``marked'' as obsolete. The ``gate-keeper'' in Company C changed the number of obsolete requirements in the product backlog in the same way as this is always done in Company C. From the product backlog, 18 requirements, including 2 obsolete requirements, were selected by Team C.2 to be included in Sprint 2. During the implementation of the selected requirements, Team C.2 realized that two obsolete requirements had been selected and thus did not implement these requirements. Hence, 16 requirements were implemented in Sprint 2. Two other teams, D.1 in Sprint 3 and E.1 in Sprint 2, also selected obsolete requirements from the product backlog into their sprint backlog. In the same way as Team C.2, both these teams realized this during their sprint and did not implement the obsolete requirements.
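The fact that Teams C.2, D.1 and E.1 only discovered the obsolete items during the sprint suggests that their sprint-planning views did not filter on the obsolete marker. A minimal sketch of such a filter, with a hypothetical data model (the class and field names are illustrative, not taken from the companies' tools):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    status: str  # e.g. "active" or "obsolete" (hypothetical marker field)

def estimation_view(backlog):
    """Return only the requirements that estimators should see."""
    return [r for r in backlog if r.status != "obsolete"]

backlog = [Requirement("R1", "active"),
           Requirement("R2", "obsolete"),
           Requirement("R3", "active")]
print([r.req_id for r in estimation_view(backlog)])  # → ['R1', 'R3']
```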
\begin{table*}[!t] \renewcommand{\arraystretch}{1.3}
\caption{Number of requirements in Experiment 6.}
\begin{center}
\begin{tabular}{|p{0.05\linewidth}|p{0.05\linewidth}|p{0.10\linewidth}|p{0.15\linewidth}|p{0.10\linewidth}|p{0.20\linewidth}|p{0.10\linewidth}|}
\hline
\textbf{Team} & \textbf{Sprint} & \textbf{\# requirements in product backlog} & \textbf{\# obsolete requirements in product backlog} & \textbf{\# requirements in sprint backlog} & \textbf{\# obsolete requirements in sprint backlog} & \textbf{\# implemented requirements in sprint}\\ \hline
C.1 & 1 & 252 & 0 & 16 & 0 & 16\\
& 2 & 239 & 24 & 15 & 0 & 15\\
& 3 & 232 & 46 & 17 & 0 & 17\\ \hline
C.2 & 1 & 189 & 38 & 12 & 0 & 12\\
& 2 & 177 & 18 & 18 & 2 & 16\\
& 3 & 163 & 0 & 25 & 0 & 25\\ \hline
C.3 & 1 & 304 & 31 & 7 & 0 & 7\\
& 2 & 299 & 0 & 8 & 0 & 8\\
& 3 & 294 & 60 & 8 & 0 & 8\\ \hline
D.1 & 1 & 287 & 0 & 12 & 0 & 12\\
& 2 & 275 & 56 & 14 & 0 & 14\\
& 3 & 263 & 56 & 18 & 1 & 17\\ \hline
E.1 & 1 & 147 & 30 & 14 & 0 & 14\\
& 2 & 139 & 30 & 20 & 2 & 18\\
& 3 & 142 & 30 & 25 & 0 & 20\\ \hline
\end{tabular}
\end{center}
\label{Tab:Step2_Exp5}
\end{table*}
The data was analyzed in the same way as in Experiment 5, but based on three real sprints per team with obsolete requirements injected in some of the sprints. We calculated a predicted impact (i.e.\ overestimation) by drawing from our posterior distribution for each percentage of obsolete requirements we wanted to predict. Since the teams were expected to try to adjust their estimates, qualitative data was collected by interviewing the teams after each sprint. We wanted to know how the teams reasoned about their estimation accuracy in each sprint. What was said in the interviews was noted immediately and summarized by the second author after each interview. Table~\ref{Tab:Exp6} shows a simplified table as compared to Experiment 5, since the units for the estimations etc.\ were exactly the same.
Table~\ref{Tab:Exp6} only shows the real sprints for each team (3 per team), the percentage of obsolete requirements they had in their product backlog when estimating the coming sprint, the estimated effort to implement the selected requirements in each sprint (see column \# requirements in sprint backlog in Table \ref{Tab:Step2_Exp5}), the actual implementation effort of the implemented requirements (see column \# implemented requirements in sprint in Table \ref{Tab:Step2_Exp5}), the predicted overestimation based on our prediction model with a 95\% credible interval, and the actual over-estimations made by the teams. \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Results and predictions in Experiment 6.} \begin{center} \begin{tabular}{|p{0.05\linewidth}|p{0.05\linewidth}|p{0.18\linewidth}|p{0.10\linewidth}|p{0.07\linewidth}|p{0.20\linewidth}|p{0.10\linewidth}|} \hline \textbf{Team} & \textbf{Sprint} & \textbf{\% obsolete reqs in product backlog} & \textbf{Estimated effort (hours)} & \textbf{Actual effort (hours)} & \textbf{Median predicted overestimation [CI]} & \textbf{\% actual overestimation} \\ \hline C.1 & 1 & 0\% & 460 & 454 & 0\% & 1.3\% \\ & 2 & 10\% & 455 & 372 & 22\% [0,91] & 22\% \\ & 3 & 20\% & 465 & 337 & 27\% [0,94] & 38\% \\ \hline C.2 & 1 & 20\% & 380 & 275 & 27\% [0,94] & 38\% \\ & 2 & 10\% & 385 & 300 & 22\% [0,91] & 28\% \\ & 3 & 0\% & 520 & 355 & 0\% & 46\%* \\ \hline C.3 & 1 & 10\% & 540 & 395 & 22\% [0,91] & 37\% \\ & 2 & 0\% & 550 & 570 & 0\% & -4\% \\ & 3 & 20\% & 545 & 335 & 27\% [0,94] & 63\% \\ \hline D.1 & 1 & 0\% & 300 & 310 & 0\% & -3\% \\ & 2 & 20\% & 295 & 203 & 27\% [0,94] & 45\% \\ & 3 & 21\% & 320 & 231 & 28\% [0,91] & 39\% \\ \hline E.1 & 1 & 20\% & 589 & 590 & 27\% [0,94] & 47\% \\ & 2 & 22\% & 630 & 464 & 29\% [0,88] & 36\% \\ & 3 & 21\% & 600 & 590 & 28\% [0,91] & 1.7\%** \\ \hline \end{tabular} \end{center} *The team decided to add 30\% to the third sprint based on Sprints 1 and 2. 
**The team changed their estimation technique for Sprint 3, holding longer and more in-depth discussions and being more optimistic in their estimates. However, in Sprint 3, without any obsolete requirements, they grossly underestimated and needed an extra 150 hours (in the coming sprint) to complete what they had planned.
\label{Tab:Exp6}
\end{table*}
Overall, the results show that our model's median values are more accurate for 10\% obsolete requirements than for 20\%, as shown in Table~\ref{Tab:Exp6}. However, the 95\% credible intervals indicate that we would expect values to lie between $0$ (negative values were adjusted to 0) and $94$, which reflects a lot of uncertainty in our predictions. With such a large uncertainty interval, we could instead create a best guess based on the latest data obtained in Experiment 6. The results show that the over-estimations triggered by the obsolete requirements were systematically twice the percentage of the included obsolete requirements, i.e.\ the percentage of obsolete requirements can be multiplied by two to get an initial idea of the potential over-estimation, at least for around 10\% obsolete ones. However, we should be careful with this model since there is much uncertainty connected to it. All that can be concluded is that obsolete requirements seem to trigger a non-linear over-estimation. \subsubsection{Qualitative results}\label{QualRes} In order to understand the reasons for the difference between estimated and actual effort, we went back to the companies to interview the team members. In general, the results from the interviews show that the teams did not know why the estimates turned out the way they did. The teams were very surprised and could not explain why they over-estimated. The results from the interviews for each team are presented below. Team C.1 did not reflect much on their over-estimations after the second sprint. They did not think it mattered, even if they were a bit surprised.
Their reasoning was that if this happened just once, it could be due to the properties of the features. They reasoned the same way even after Sprint 3, even though it had then happened twice in a row. After the first sprint, Team C.2 were very surprised that they were done with everything so fast. They discussed it briefly, but decided that they had been unlucky and that this could happen from time to time. For Sprint 2, Team C.2 did include obsolete requirements in their sprint backlog. Team C.2 explained that they did not check whether the requirements were marked as obsolete or not; they simply included the obsolete requirements without reading whether they were obsolete. After Sprints 1 and 2 they had decided to add 30\% more to implement, but with no obsolete requirements in Sprint 3 they still grossly overestimated the time needed for what they selected to implement. Team C.3 explained that they did not reflect on their over-estimates at all. Team C.3 just continued as usual without any reflections or discussions. Team D.1 was extremely surprised after Sprint 2 and brought it up in the sprint meeting, where they discussed what had happened; however, they could not understand what had happened or why. After Sprint 3, they were equally surprised again and could not understand why they had grossly overestimated the implementation time twice, especially since this had not happened to Team D.1 before. Team E.1 had the same number of obsolete requirements in all three of their sprints. After their first sprint, Team E.1 increased the number of included features for Sprint 2. However, during the second sprint, Team E.1 realized that they had included two features that should not be implemented, i.e.\ two features that were marked as obsolete. Therefore, before including features for Sprint 3, Team E.1 double-checked the features (to make sure that they were not obsolete), had longer and more in-depth discussions, and were more optimistic in their estimates.
This meant that deciding which features to include in Sprint 3 took twice as long as it usually does. As a result, Team E.1 included 25 features in Sprint 3, but only managed to implement 20. The five features that were not implemented took another 150 hours to implement in the coming sprint. \section{Discussion}\label{sec:disc} Overall, the six experiments have shown that having obsolete requirements visible when estimating software development effort has a large effect on the size of the estimations, in that they increase substantially. In the industry samples, where we also had an actual implementation time, the obsolete requirements were shown to result in gross over-estimations. The results from \textbf{Experiments 1 and 2} show that the effect is very large when students do the estimations and that their study year (second or third year of the software engineering Bachelor's program) did not affect the estimates. The results from \textbf{Experiment 3} show that the effect was even larger with students estimating more requirements and being exposed to even more obsolete ones. The main finding of \textbf{Experiment 4} was that the effect was also present in an industrial setting, with the same pattern as before. Thus, experience of software estimation or software development does not remove the effect. In addition, the results from Experiment 4 show that the effect takes the form of an over-estimation, and quite a large one. From \textbf{Experiment 5}, the results show that the effect also exists when practitioners individually estimate their own organization's requirements. In \textbf{Experiment 6}, the results show that the effect also appeared when teams estimated their own work for near-future implementation, that the team discussion did not remove the effect, and that the participants were oblivious to it.
We call this effect the \emph{obsolete requirements effect} (or the Gren-Svensson error), which is a bias due to the presence of obsolete requirements during effort estimation. There were no big differences in results across the experiments, which speaks in favour of there being a real effect. There were different levels of experience among subjects between experiments, but these are hard to use as a moderator variable in our conceptually very different replications. The comparison can instead be done between experiments, and we did find an effect even with industry participants who have much more experience, on average, than students. The effect of experience (and other moderating variables) within each experiment was averaged out since we randomized the participants into groups A, B and C. The moderating variables we did not succeed in averaging out were addressed by changing design or context (like the length of the requirements specification or using industry participants). One parameter to take into account in the first analysis was study year for students (second or third year at university). Adding this variable did not create a better model (see Supplementary Material). Moreover, the effect is smaller, but still exists, the closer we get to a field setting, which is no surprise \citep{jorgensen}. With such a large sample ($N=461$) and the inclusion of both students and practitioners from industry, there are good reasons for generalizing the findings to the larger population of people working with requirements written in natural language or in the form of user stories. The over-estimations investigated in \citet{aranda} could not be explained by the subjects' experience of cost estimations, and the results in this study were likewise apparent both in an industrial setting and when using students at the university.
Based on the results from both students and practitioners in different settings, one recommendation is that obsolete requirements should not be visible in estimation exercises, even if we do not know the details of their effect in each specific case. This has the implication that both researchers and practitioners should take the \emph{obsolete requirements effect} into account when researching\slash working with requirements and avoid or block that effect. Requirements not needed for the estimation will still affect the decision-maker's assessment of, perhaps, the complexity and therefore also the estimates. Therefore, a specific definition and a further quantification of this error could be helpful to both practitioners and researchers, even if it could be seen as a special case of the anchoring effect. Specific adjustment of this systematic error would then be possible by simply adjusting the estimates systematically, which could be done by updating our prediction model with more data. Another great advantage of using BDA is that we share not only our raw data but also our posterior distributions for all parameters (see Supplementary Material). Since our posteriors can then be used as prior information in further studies, effects can be found even in very small random samples. BDA also embodies the principle that extraordinary claims require extraordinary evidence, and we do not reset our parameters after each replication. \subsection{Psychological explanations}\label{sec:psych} \citet{gren2017possible} offered two different psychological explanations for the effect they found in a sample of 150 students. The first one was the \emph{representativeness heuristic}, which is a mental shortcut to lessen cognitive load. When the needed information is not available at the time a decision must be made, people rely on similar or previous experiences instead. This often works well, but not always \citep{1982juu}.
The representativeness heuristic is based on two components that are used to assign a subjective probability: (1) the similarity in characteristics to the parent population, and (2) the reflection of the salient features of the process by which it is generated. In the case of these experiments, the larger requirements specification in Group C would be more probable since it is more representative of a larger system \citep{heuristics}. Although this is an interesting explanation, it does not obviously explain the entire effect found. The second explanation given in \citet{gren2017possible} is the \emph{decoy effect}, which relates more to categorical choices than to extra information. The axiom of independence of irrelevant alternatives states that extra irrelevant options visible to the decision-maker should not affect the choice, but in some contexts they do \citep{huber}. Since the decision in this case is not about choosing between options, this explanation is quite far from the context of the current study, and we do not think it explains any aspect of the effect found. The more obvious candidate, not mentioned in \citet{gren2017possible}, is the cognitive bias called \emph{anchoring-and-adjustment}, first published in \citet{representativeness}. This bias appears when a random number (the anchor) is presented to the decision-maker before the actual task, which then systematically influences the following decision toward that number. \citet{representativeness} state that the anchor is usually numerical, which implies that it does not have to be. In the case of the experiments in this study, obsolete requirements might have anchored a larger implementation effort in the subjects and their following estimations. This would explain the whole effect found if Groups C and B had been the same in Experiments 1--4; however, Group C was also significantly higher than B.
Perhaps the effect is a combination of the representativeness heuristic and the anchoring effect, in that the obsolete requirements became anchors, while a software system with changes in requirements became more representative of a software project prone to change even more in the future. It is important to note that these explanations are still speculative and need deeper investigation into the mental process of the subjects. The qualitative results of this study, i.e.\ asking the participants to explain their thought process (see Section \ref{QualRes}), showed that people exposed to obsolete requirements could not articulate an explanation. The participants were clueless and tried to adjust their estimations based on other, or no, information about what caused the estimation error. Therefore, more detailed and focused psychological experiments are needed to fully understand the results of this study from a psychological perspective. \subsection{How to avoid the pitfall of large estimation errors}\label{sec:howtoavoid} Even if the exact mental process behind the error is not yet known, there are quite a few findings within psychology and management on how to deal with estimation error in general and mitigate its effects (see e.g.\ \citet{hoch1996psychological}), and even some studies within software development effort estimation (see e.g.\ \citet{connolly1997decomposed}). The former suggests that a linear decision model be used together with a computerized database of historical data (a more statistical approach) for more accurate forecasts in general. The latter suggests a more hands-on approach to avoiding over-estimation in the software development case. Such an approach includes a set of (decomposed) estimates instead of one (holistic) estimate.
Instead of connecting the given estimation task to one single representative experience in mind, the assessor had to reassess the situation and present a confidence interval with at least a lower, most probable, and upper bound. This gave significantly better estimates according to \citet{connolly1997decomposed}. They also showed that the quality of the estimates declined with increased task difficulty \citep{connolly1997decomposed}. Another known approach to forecasting is group forecasting, e.g.\ the Delphi technique \citep{rowe1999delphi}. In this study, the results show that even with group forecasting, decomposed tasks, or historical data in the model, we would not avoid the Gren-Svensson error, since it affects all these techniques when the estimations are conducted. Decomposing tasks would work if it implies not showing any obsolete tasks next to the decomposed ones. Practitioners should probably spend extra time cleaning requirements specifications and backlogs of obsolete requirements, but also always give decomposed estimates and, through this process, think more in terms of probabilities instead of only efforts. For some practical guidelines on how to do this in the context of expert-judgment-based software effort estimation, see e.g.\ \citet{jorgensen2005practical}. \section{Threats to Validity}\label{sec:limits} Despite our efforts in addressing the validity threats after each experiment, there are still potential threats to our study. In this study, the artifacts, i.e.\ the requirements to estimate, could have negatively affected the experiment and thus the outcome for the student subjects. Since the requirements used in the student experiments were real requirements from industry, the students may not have been experts in the area of the requirements.
However, this threat was mitigated in two ways: (1) the requirements used for the students were general requirements from a domain that the students were familiar with, and (2) the last set of experiments in this study included industry subjects with requirements from their own companies and domains. Another threat could be the selection of the student participants, which may influence the results since the student subjects were not volunteers, as the experiment was performed as part of their courses. However, the results of the experiment did not affect the grading in the courses. Performing experiments with only students as subjects may be a large threat concerning representativeness when compared to industry professionals. Therefore, we also carried out experiments with industry professionals, as close to their real setting as possible, to mitigate this threat. Another threat concerns the number of requirements used in the experiments. The first set of experiments only included four and five requirements, which is not a realistic set of requirements. Therefore, we increased the number of requirements in several steps, and in the final experiment (Experiment 6) we used the real and complete product and sprint backlogs that are used in the companies. In Experiment 6, one threat to the results could be the modification of the requirements in the product backlogs, i.e.\ marking a set of requirements as obsolete. This threat was mitigated by having the ``gate-keepers'' at each company modify the requirements in exactly the same way as they always mark\slash state that a requirement is obsolete. Thus, we believe that this threat has a limited effect on the results of Experiment 6. In order to mitigate threats to conclusion validity, i.e.\ threats to the ability to draw correct conclusions, we performed statistical analysis of the gathered data. By using BDA we model uncertainty both in parameters and in our models, which increases the confidence in the results.
Another threat to conclusion validity may be the number of participants in the experiments. This threat was mitigated in this study by having 461 participants in total, of which 359 were student participants and 102 were industry participants. However, one threat to the results of this study is that the industrial sample is much smaller than the student sample (102 versus 359); thus, more studies with subjects from industry are needed. Note here that the industry participants who estimated in teams in Experiment 6 were the same individuals as in Experiment 5. Thus, we only counted them once, even though they provided data first as individuals and then in their teams. \section{Conclusions and Future Work}\label{sec:concl} This paper set out to investigate whether obsolete requirements have an effect on effort estimations in software development. Through a family of six experiments with both students and practitioners, the results show that keeping obsolete requirements visible to an assessor results in over-estimation. Thus, this obsolete requirements effect (or the Gren-Svensson error) should be taken into account when researching or conducting effort estimation. These findings are important contributions to both research and, perhaps primarily, practice, since over-estimations due to obsolete requirements could possibly be avoided. An interesting next step for this research would be to see what kinds of extra requirements increase (or decrease) the effort estimates. There are undeniably cognitive aspects that influence software effort estimation that are not taken into consideration enough. Future research includes testing the decomposed estimation method suggested by \citet{connolly1997decomposed} in a replication of this experiment. It would then be possible to see whether the estimates become more accurate or, at least, whether the decomposed estimates show a larger variance that could be used to trigger alarm bells for decision-makers in context.
Further replications that make use of our posterior distributions and help quantify the found effect are needed. Finally, it would be interesting to analyze whether the same effect appears in the context of a tender or down-bidding, since the contexts of the estimations in this study did not have that kind of cost-cutting demand. \bibliographystyle{spbasic}
\section{Introduction} Bubble-laden flow appears in a variety of natural \citep{clift1978,gonnermann2007fluid} and industrial \citep{deckwer} processes. The presence of bubbles dramatically alters the transport properties of a flow \citep{mudde_rev_2005,cecc10,RBLuca,pan17,riss18,elgh19,mat19}. A single bubble of diameter $d$ rises under gravity because of buoyancy. Its trajectory and the wake flow depend on the density and viscosity contrast with the ambient fluid, and on the surface tension \citep{clift1978, bha81,tri15}. A suspension of such bubbles at moderate volume fractions generates complex spatiotemporal flow patterns that are often referred to as pseudo-turbulence or bubble-induced agitation \citep{mudde_rev_2005,riss18}. Experiments have made significant progress in characterizing the velocity fluctuations of the fluid phase in pseudo-turbulence. A key observation is the robust power-law scaling in the energy spectrum with an exponent of $-3$, either in frequency $f$ or in wave-number $k$ space \citep{martinez_2010,risso_legendre_2010,mendez}. The scaling range, however, remains controversial. \citet{risso_legendre_2010} investigated turbulence in the wake of a bubble swarm and found the $k^{-3}$ scaling for length scales larger than the bubble diameter $d$ (i.e., $k<2\pi/d$), whereas \citet{martinez_2010,vivek_2016} observed this scaling for scales smaller than $d$ in a steady-state bubble suspension. Experiments on buoyancy-driven bubbly flows in the presence of grid turbulence \citep{lance_1991,vivek_2016,almeras2017} observe Kolmogorov scaling for scales larger than the bubble diameter and smaller than the forcing scale, and a much steeper $k^{-3}$ scaling for scales smaller than the bubble diameter and larger than the dissipation scale. \citet{lance_1991} argued that, assuming production because of wakes to be local in spectral space, a balance of production with viscous dissipation leads to the observed $k^{-3}$ scaling.
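In practice, a spectral exponent such as the $-3$ above is extracted as the slope of the spectrum on log-log axes over the scaling range. A minimal Python sketch with synthetic data (not the experimental spectra) illustrates the fitting procedure:

```python
import numpy as np

# Synthetic spectrum with a known power law, standing in for measured data.
k = np.logspace(0, 2, 50)   # wave numbers spanning the assumed scaling range
E = 2.0 * k ** -3           # E(k) ~ k^-3, as in pseudo-turbulence

# The exponent is the slope of log E versus log k (ordinary least squares).
slope, intercept = np.polyfit(np.log(k), np.log(E), 1)
print(slope)                # → -3.0 (up to floating-point error)
```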
Fully resolved numerical simulations of three-dimensional (3D) bubbly flows for a range of Reynolds numbers $O(10) < \Rey < O(10^3)$ \citep{roghair,bunner_tryg_2002,bunner_rvel_2002} found the $k^{-3}$ scaling for length scales smaller than $d$ ($k>2\pi/d$) and attributed it to the balance between viscous dissipation and the energy production by the wakes \citep{lance_1991}. Two mechanisms proposed to explain the observed scaling behavior in experiments are: $(i)$ the superposition of velocity fluctuations generated in the vicinity of the bubbles \citep{rissou_2011}, and $(ii)$ at high $\Rey$, the instabilities in the flow through the bubble swarm \citep{lance_1991,mudde_rev_2005,riss18}. In an experiment or a simulation, it is difficult to disentangle these two mechanisms. In classical turbulence, a constant flux of energy is maintained between the injection and dissipation scales \citep{frisch,per09,boffetta}. In pseudo-turbulence, on the other hand, it is not clear how the energy injected because of buoyancy is transferred between different scales. In particular, the following key questions remain unanswered: $(i)$ How do the liquid velocity fluctuations and the pseudo-turbulence spectrum depend on the Reynolds number ($\Rey$)? $(ii)$ What is the energy budget and what are the dominant balances? $(iii)$ Is there an energy cascade (a non-zero energy flux)? In this paper, we address all of the above questions for experimentally relevant Reynolds ($\Rey$) and Atwood ($\At$) numbers. We first investigate the dynamics of an isolated bubble and show that the wake flow behind the bubble is in agreement with earlier experiments and simulations. Next, for a bubbly suspension, we show that the liquid velocity fluctuations are in quantitative agreement with the experiments of \citet{risso_legendre_2010} and the bubble velocity fluctuations are in quantitative agreement with the simulations of \citet{roghair}.
We then proceed to derive the scale-by-scale energy budget equation and investigate the dominant balances for different $\Rey$ and $\At$. We find that for scales smaller than the bubble diameter, viscous dissipation balances the net nonlinear transfer of energy due to advection and surface tension to give the $k^{-3}$ pseudo-turbulence spectrum. Intriguingly, the dominant balances are robust and do not depend on the density contrast ($\At$). \section{Model and Numerical Details} We study the dynamics of bubbly flow by using the Navier-Stokes (NS) equations with a surface tension force arising from the bubbles \begin{subequations} \begin{eqnarray} \rho D_t\bm{u} &=& \nabla \bm{\cdot} [2 \mu {\cal S}] - \nabla p + {\bm F}^\sigma + {\bm F}^g, \label{eqn:mom}\\ \nabla\bm{\cdot}\bm{u} &=& 0. \end{eqnarray} \label{eqn:ns} \end{subequations} Here, $D_t = \partial_t + (\bm{u}\cdot\nabla)$ is the material derivative, ${\bm u} = (u_x,u_y,u_z)$ is the hydrodynamic velocity, $p$ is the pressure, ${\cal S}\equiv (\nabla {\bm u} + \nabla {\bm u}^T)/2$ is the rate of deformation tensor, $\rho \equiv \rho_f c + \rho_b (1-c)$ is the density, $\mu\equiv \mu_f c + \mu_b (1-c)$ is the viscosity, $\rho_f$ ($\rho_b$) is the fluid (bubble) density, and $\mu_b$ ($\mu_f$) is the bubble (fluid) viscosity. The value of the indicator function $c$ is equal to zero in the bubble phase and unity in the fluid phase. The surface tension force is $\bm{F}^\sigma \equiv \sigma \kappa \hat{\bm{n}}$, where $\sigma$ is the coefficient of surface tension, $\kappa$ is the curvature, and $\hat{\bm{n}}$ is the normal to the bubble interface. ${\bm F}^{g} \equiv [\rho_a-\rho] g\hat{\bm{z}}$ is the buoyancy force, where $g$ is the acceleration due to gravity, and $\rho_a \equiv [\int \rho(c) d{\bm x}]/L^3$ is the average density. For small Atwood numbers, we employ the Boussinesq approximation, whereby $\rho$ on the left-hand side of \Eq{eqn:mom} is replaced by the average density $\rho_a$.
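The dimensionless groups used throughout (defined in the caption of table~\ref{tab:runs}) follow directly from the physical parameters. A minimal Python sketch, evaluated with the parameters of run {\tt R1}, roughly reproduces the tabulated Reynolds number; since $\sigma$ is not listed in the table, the Bond number is defined but not checked:

```python
import math

# Dimensionless groups as defined in the caption of table 1.
def atwood(rho_f, rho_b):
    """At = (rho_f - rho_b) / (rho_f + rho_b)."""
    return (rho_f - rho_b) / (rho_f + rho_b)

def reynolds(rho_f, delta_rho, g, d, mu_f):
    """Re = sqrt(rho_f * delta_rho * g * d^3) / mu_f."""
    return math.sqrt(rho_f * delta_rho * g * d**3) / mu_f

def bond(delta_rho, g, d, sigma):
    """Bo = delta_rho * g * d^2 / sigma (sigma is not tabulated)."""
    return delta_rho * g * d**2 / sigma

# Run R1 parameters: rho_f = 1, At = 0.04, d = 24, g = 1.0, mu_f = 0.32.
rho_f, At, d, g, mu_f = 1.0, 0.04, 24.0, 1.0, 0.32
rho_b = rho_f * (1.0 - At) / (1.0 + At)  # invert the Atwood-number definition
delta_rho = rho_f - rho_b
print(reynolds(rho_f, delta_rho, g, d, mu_f))  # ~102; the table reports 104
```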
We solve the Boussinesq-approximated NS using a pseudo-spectral method \citep{canuto} coupled to a front-tracking algorithm \citep{Tryg2001, paris} for the bubble dynamics. Time marching is done using a second-order Adams-Bashforth scheme. For the non-Boussinesq NS, we use the open-source finite-volume front-tracking solver PARIS \citep{paris}. We use a cubic periodic box of volume $L^3$ and discretize it with $N^3$ collocation points. We initialize the velocity field ${\bm u}=0$ and place the centers of $N_b$ bubbles at random locations such that no two bubbles overlap. The Reynolds number $\Rey$, the Bond number $\Bo$, and the bubble volume fraction $\phi \equiv [\int (1-c) d{\bm x}]/L^3$ that we use (see table~\ref{tab:runs}) are comparable to the experiments \citep{mendez, risso_legendre_2010}.
\begin{table}
\begin{center}
\begin{tabular}{lcccccccccccccc}
{\tt runs} & $L$ & $N$ & $N_b$ & $d$ & $g$ & $\mu_f$ & $\phi\%$ & $\Rey$ & $\At$ & $\Bo$ & $\epsilon_\mu$ & $\epsilon_w$ & $\epsilon_{\mu,f}$ & $\epsilon_{inj}$ \\
$\tt{R1}$ & $256$ & $512$ & $60$ & $24$ & $1.0$ & $0.32$ & $2.6$ & $104$ & $0.04$ & $1.8$ & $3.6\cdot10^{-3}$ & $2.8\cdot 10^{-3}$ & $2.0\cdot 10^{-3}$ & $3.5\cdot10^{-3}$ \\
$\tt{R2}$ & $256$ & $512$ & $60$ & $24$ & $1.0$ & $0.20$ & $2.6$ & $166$ & $0.04$ & $1.0$ & $4.3\cdot10^{-3}$ & $2.8\cdot 10^{-3}$ & $2.6\cdot 10^{-3}$ & $4.3\cdot10^{-3}$ \\
$\tt{R3}$ & $128$ & $432$ & $10$ & $22$ & $8.75$ & $0.42$ & $2.6$ & $206$ & $0.04$ & $2.1$ & $9.5\cdot10^{-2}$ & $7.1\cdot 10^{-2}$ & $6.7\cdot 10^{-2}$ & $9.4\cdot10^{-2}$ \\
$\tt{R4}$ & $128$ & $432$ & $10$ & $22$ & $10.5$ & $0.32$ & $2.6$ & $296$ & $0.04$ & $1.9$ & $1.3\cdot10^{-1}$ & $9.4\cdot 10^{-2}$ & $9.6\cdot 10^{-2}$ & $1.3\cdot10^{-1}$ \\
$\tt{R5}$ & $256$ & $256$ & $40$ & $24$ & $0.1$ & $0.32$ & $1.7$ & $113$ & $0.90$ & $2.0$ & $3.2\cdot10^{-3}$ & $2.4 \cdot 10^{-3}$ & $1.8 \cdot 10^{-3}$ & $3.0\cdot10^{-3}$ \\
$\tt{R6}$ & $256$ & $256$ & $40$ & $24$ & $1.0$ & $0.32$ & $1.7$ & $345$ & $0.80$ & $2.4$ & $8.1\cdot10^{-2}$ &
$6.9 \cdot 10^{-2}$ & $5.4 \cdot 10^{-2}$ & $8.4\cdot10^{-2}$ \\ $\tt{R7}$ & $256$ & $256$ & $40$ & $24$ & $1.0$ & $0.32$ & $1.7$ & $358$ & $0.90$ & $1.9$ & $1.0\cdot10^{-1}$ & $7.7 \cdot 10^{-2}$ & $6.2 \cdot 10^{-2}$ & $1.0\cdot10^{-1}$ \\ \end{tabular} \caption{\label{tab:runs} Table of parameters used in our DNS. Here, $\delta \rho\equiv \rho_f-\rho_b$ is the density difference, $\Rey \equiv \sqrt{\rho_f \delta \rho g d^{3}}/\mu_f$ is the Reynolds number, $\Bo \equiv \delta\rho gd^2/\sigma$ is the Bond number, $\At = \delta \rho/(\rho_f + \rho_b)$ is the Atwood number, $\epsilon_w \equiv \phi(\delta \rho g d /\rho_f)^{3/2}/d$ is the estimate of the energy dissipation rate because of the bubble wakes \citep{lance_1991}, and $\epsilon_{\mu,f}$ is the energy dissipation rate in the fluid phase. We fix the value of the fluid density $\rho_f=1$ and assume the same viscosity for the fluid and the bubble phases, $\mu_f/\mu_b=1$.} \end{center} \end{table} \section{Results} In the subsequent sections, we investigate the statistical properties of the stationary pseudo-turbulence generated in buoyancy-driven bubbly flows. Table~\ref{tab:runs} lists the parameters used in our simulations. Our parameters are chosen such that the Reynolds number, the Bond number, and the volume fraction are comparable to those used in earlier experiments \citep{risso_legendre_2010,mendez,riss18}. We conduct simulations at both low and high $\At$ to investigate the role of the density difference on the statistics of pseudo-turbulence. The rest of the paper is organized as follows. In \S\S~\ref{sbub} we study the trajectory of an isolated bubble and, consistent with the experiments, show that the bubble shape is ellipsoidal. In \S\S~\ref{ekin}-\ref{pvel} we investigate the total kinetic energy budget and the fluid and bubble centre-of-mass velocity fluctuations and make a quantitative comparison with the experiments.
Finally, in \S\S~\ref{esp} we study the kinetic energy spectrum and perform a scale-by-scale energy budget analysis. We present our conclusions in \S~\ref{concl}. \subsection{Single bubble dynamics} \label{sbub} In this section we study the dynamics of an initially spherical bubble as it rises because of buoyancy. The seminal work of \citet{bha81} characterized the shape and trajectory of an isolated bubble in terms of the Reynolds and Bond numbers. Experiments on turbulent bubbly flows \citep{lance_1991, vivek_2016} observe ellipsoidal bubbles. In the following, we characterize the dynamics of an isolated bubble for the parameters used in our simulations. To avoid the interaction of the bubble with its own wake, we use a vertically elongated cuboidal domain of dimension $5d \times 5d \times 21d$. After the bubble rise velocity attains a steady state, \subfig{sing:bub}{a-c} show the bubble shape and the vertical component of the vorticity $\omega_z = (\nabla \times {\bm u}) \cdot \hat{z}$. For $\Rey=104$ and $\At=0.04$ (run {\tt R1}), the bubble shape is an oblate ellipsoid and it rises in a rectilinear trajectory. On increasing the Reynolds number to $\Rey=296$ (run {\tt R4}), the bubble pulsates while rising and sheds varicose vortices similar to those observed by \citet{pivello_2014}. Finally, for high $\At=0.80$ and $\Rey=345$ (run {\tt R6}), similar to region III of \citet{tri15}, we find that the bubble shape is an oblate ellipsoid and it follows a zigzag trajectory. \begin{figure} \includegraphics[width=0.32\linewidth]{fig_1a} \includegraphics[width=0.32\linewidth]{fig_1b} \includegraphics[width=0.32\linewidth]{fig_1c} \caption{\label{sing:bub} Bubble positions at different times (in units of $\tau_s$) and the $z$-component of the vorticity ($\omega_z = \partial_x u_y - \partial_y u_x$) for the case of a single bubble rising under gravity. The non-dimensional parameters in the representative cases are taken to be the same as in run {\tt R1} in panel (a), run {\tt R4} in panel (b), and run {\tt R6} in panel (c).
Green regions correspond to $\omega_z < 0$, whereas red regions correspond to $\omega_z > 0$. We plot iso-contours corresponding to $\omega_z = \pm 10^{-3}$ in (a), $\omega_z = \pm 10^{-2}$ in (b), and $\omega_z = \pm 10^{-1}$ in (c).} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\linewidth] {fig_2a-b}\\ \includegraphics[width=0.45\linewidth]{fig_2c} \includegraphics[width=0.45\linewidth]{fig_2d} \caption{\label{ener:bub} Representative steady-state snapshots of the bubbles overlaid on the iso-contour plots of the $z$-component of the vorticity field $\omega_z\equiv [\nabla \times {\bm u}]\cdot \hat{\bm z}$ for $\Rey=104$, $\At=0.04$ (a) and for $\Rey=345$, $\At=0.80$ (b). Regions with $\omega_z= 2\sigma_\omega$ ($-2 \sigma_\omega$) are shown in red (green), where $\sigma_\omega$ is the standard deviation of $\omega_z$. As expected, bubble-wake interactions become more intense on increasing $\Rey$. (c) $\Rey$ versus the average bubble deformation $\langle\langle S(t)/S(0)\rangle\rangle$ for low ($\At=0.04$) and high $\At$, and (d) kinetic energy evolution for the runs given in table~\ref{tab:runs}.} \end{figure} \subsection{Bubble suspension and kinetic energy budget} \label{ekin} The plots in \subfig{ener:bub}{a,b} show representative steady-state iso-vorticity contours of the $z$-component of the vorticity along with the bubble interface positions for our bubbly flow configurations. As expected from our isolated-bubble study in the previous section, we observe rising ellipsoidal bubbles whose wakes interact to generate pseudo-turbulence. The individual bubbles in the suspension show shape undulations similar to those of their isolated counterparts [see movies available in the supplementary material]. Furthermore, for comparable $\Bo \approx 2$, the average bubble deformation $\langle\langle S(t)/S(0) \rangle\rangle$ increases with increasing $\Rey$ [\subfig{ener:bub}{c}].
Here, $\langle \langle \cdot \rangle \rangle$ denotes temporal averaging over bubble trajectories in the statistically steady state, $S(t)$ is the surface area of the bubble, and $S(0)=\pi d^2$. The time evolution of the kinetic energy $E=\langle \rho {\bm u}^2 /2 \rangle$ for runs {\tt R1 - R7} is shown in \fig{ener:bub}(d). A statistically steady state is attained around $t\approx 2.5 \tau_s$, where $\tau_s=L/\sqrt{\delta \rho g d/\rho_f}$ is the approximate time taken by an isolated bubble to traverse the entire domain. Using \Eq{eqn:mom}, we obtain the balance equation for the total kinetic energy $E$ as \begin{equation} \partial_t \underbrace{\langle \frac{\rho {\bm u}^2}{2} \rangle}_{E} = -\underbrace{ 2 \langle \mu(c) \mathcal{S}:\mathcal{S} \rangle}_{\epsilon_\mu} + \underbrace{\langle [\rho_a -\rho(c)] u_z g \rangle}_{\epsilon_{inj}} + \underbrace{\langle \bm{F}^\sigma\cdot \bm{u} \rangle}_{\epsilon_{\sigma}}, \end{equation} where $\langle \cdot \rangle$ represents spatial averaging. In the steady state, the energy injected by buoyancy, $\epsilon_{inj}$, is balanced by the viscous dissipation $\epsilon_{\mu}$. The energy injected by buoyancy is $\epsilon_{inj} \approx (\rho_f-\rho_b) \phi g \langle U\rangle$, where $\langle U \rangle$ is the average bubble rise velocity. Note that $\epsilon_\sigma = -\partial_t\int\sigma ds$ \citep{joseph_1976}, where $ds$ is the bubble surface element, and its contribution is zero in the steady state. The excellent agreement between the steady-state values of $\epsilon_{\mu}$ and $\epsilon_{inj}$ is evident from table~\ref{tab:runs}. \citet{lance_1991} argued that the energy injected by buoyancy is dissipated in the wakes of the bubbles. The energy dissipation in the wakes can be estimated as $\epsilon_w = C_d \phi ((\delta \rho/\rho_f)g d)^{3/2}/d$, where $C_d$ is the drag coefficient. Assuming $C_d=O(1)$, we find that $\epsilon_w$ is indeed comparable to the viscous dissipation in the fluid phase $\epsilon_{\mu,f}$ (see table~\ref{tab:runs}).
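As a quick consistency check, these estimates can be reproduced from the tabulated parameters alone. The short Python sketch below (our own illustration, not part of the solver; the choice $C_d=1$ is an assumption, since the text only takes $C_d=O(1)$) recomputes $\Rey$ and $\epsilon_w$ for run {\tt R1} from $\At$, $\phi$, $g$, $d$, and $\mu_f$, recovering the tabulated $\Rey\approx 104$ and $\epsilon_w\approx 2.8\times 10^{-3}$ to within a few per cent.

```python
import math

# Parameters of run R1 (table 1): rho_f = 1, g = 1, d = 24, phi = 2.6%,
# mu_f = 0.32, At = 0.04.
rho_f, g, d, phi, mu_f, At = 1.0, 1.0, 24.0, 0.026, 0.32, 0.04

# delta_rho follows from the Atwood number At = (rho_f - rho_b)/(rho_f + rho_b).
rho_b = rho_f * (1.0 - At) / (1.0 + At)
delta_rho = rho_f - rho_b

# Reynolds number Re = sqrt(rho_f * delta_rho * g * d^3) / mu_f.
Re = math.sqrt(rho_f * delta_rho * g * d**3) / mu_f

# Wake-dissipation estimate eps_w = C_d * phi * ((delta_rho/rho_f) g d)^{3/2} / d,
# with the assumed drag coefficient C_d = 1.
C_d = 1.0
eps_w = C_d * phi * ((delta_rho / rho_f) * g * d) ** 1.5 / d

print(f"Re ~ {Re:.0f}, eps_w ~ {eps_w:.1e}")
```

Both numbers land within a few per cent of the table entries, which is as close as one can expect given the rounding of the tabulated parameters.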
\subsection{Probability distribution function of the fluid and bubble velocity fluctuations} \label{pvel} In \subfig{velpdf3d}{a,b} we plot the probability distribution function (p.d.f.) of the fluid velocity fluctuations ${\bm u}^f\equiv{\bm u}[c=1]$. Both the horizontal and the vertical velocity p.d.f.s are in quantitative agreement with the experimental data of \citet{risso_legendre_2010} and \citet{riss18}. The p.d.f.s of the horizontal velocity components are symmetric about the origin and have stretched-exponential tails, whereas the vertical velocity fluctuations are positively skewed \citep{risso_legendre_2010,almeras2017,vivek_2016}. Our results are consistent with the recently proposed stochastic model of \citet{risso_pdf_2016}, which suggests that the potential flow disturbance around the bubbles, the bubble wakes, and the turbulent agitation because of flow instabilities together lead to the observed velocity distributions. We believe that the deviation in the tails of the distributions arises because of the differences in the wake flow for different $\Rey$ and $\At$ (see \fig{sing:bub}). Note that a positive skewness of the vertical velocity has also been observed in thermal convection with bubbles \citep{RBLuca}. By tracking the individual bubble trajectories we obtain their center-of-mass velocity ${\bm u}^b$. In agreement with the earlier simulations of \citet{roghair}, the p.d.f.s of the bubble velocity fluctuations are Gaussian (see \fig{bubpdf3d}). The departure in the tails of the distribution is most probably because of the presence of large-scale structures observed in experiments that are absent in simulations with periodic boundaries \citep{roghair}.
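The p.d.f.s above are straightforward to compute from a DNS snapshot: restrict a velocity component to the liquid phase, normalize the fluctuations by their standard deviation, and histogram. The sketch below is our own minimal illustration of this procedure; it uses a synthetic Gaussian field and a random indicator function in place of actual DNS data, and the function and variable names are ours.

```python
import numpy as np

def velocity_pdf(u, mask, bins=60):
    """P.d.f. of the fluctuations of one velocity component, restricted to
    the liquid phase (mask == True) and normalized by the standard deviation."""
    v = u[mask]
    v = (v - v.mean()) / v.std()
    hist, edges = np.histogram(v, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist

# Synthetic stand-in for a DNS snapshot: Gaussian velocities and an indicator
# field c (c == 1 marks the liquid phase), for illustration only.
rng = np.random.default_rng(0)
u = rng.normal(size=(64, 64, 64))
c = (rng.random((64, 64, 64)) > 0.026).astype(int)   # ~2.6% bubble fraction

centers, pdf = velocity_pdf(u, c == 1)
print("integral of the p.d.f.:", np.trapz(pdf, centers))
```

With `density=True` the histogram integrates to unity, so the curves for different runs can be compared directly, as in \fig{velpdf3d}.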
\begin{figure} \begin{center} \includegraphics[width=0.48\linewidth]{fig_3a} \includegraphics[width=0.48\linewidth]{fig_3b} \end{center} \caption{\label{velpdf3d} The probability distribution function of the (a) horizontal component and (b) vertical component of the liquid velocity fluctuations for the runs given in table \ref{tab:runs}. The p.d.f.s obtained from our DNS are in excellent agreement with the experimental data of \citet{risso_legendre_2010} [data extracted using Engauge Digitizer, https://markummitchell.github.io/engauge-digitizer/].} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\linewidth]{fig_4a} \includegraphics[width=0.48\linewidth]{fig_4b} \caption{\label{bubpdf3d} The probability distribution function of (a) the horizontal and (b) the vertical component of the bubble velocity fluctuations for runs {\tt R1} and {\tt R6} (see table~\ref{tab:runs}). The experimental data of \citet{martinez_2010} and the numerical results of \citet{roghair} are also shown for comparison. The black continuous line represents a Gaussian distribution. } \end{figure} \subsection{Energy spectra and scale-by-scale budget} \label{esp} In the following, we study the energy spectrum \begin{eqnarray} \displaystyle E^{uu}_k &\equiv & \sum_{k-1/2<|{\bm m}|<k+1/2} |\hat{\bm{u}}_{\bm m}|^2, \nonumber \end{eqnarray} the co-spectrum \begin{eqnarray} \displaystyle E^{\rho uu}_k &\equiv & \sum_{k-1/2<|{\bm m}|<k+1/2} \Re[\hat{(\rho {\bm u})}_{-{\bm m}} \cdot \hat{\bm{u}}_{\bm m}] \equiv d \mathscr{E}/dk, \nonumber \end{eqnarray} and the scale-by-scale energy budget. Our derivation of the energy budget is similar to those in \citet{frisch} and \citet{pope} and does not require the flow to be homogeneous and isotropic. For a general field $f({\bm x})$, we define the corresponding coarse-grained field \citep{frisch} $f^<_{k} ({\bm x})\equiv \sum_{|{\bm m}| \leq k} f_{\bm m} \exp(i {\bm m} \cdot {\bm x})$ with the filtering length scale $\ell = 2\pi/k$.
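Both the shell-summed spectrum and the coarse-grained field $f^<_k$ are simple to evaluate with FFTs. The sketch below is our own minimal implementation of the two operations on a scalar test field (it is not the production solver; grid size and normalization are illustrative choices). With the normalization chosen here, Parseval's theorem guarantees $\sum_k E^{uu}_k=\langle u^2\rangle$, which serves as a built-in check.

```python
import numpy as np

def shell_spectrum(u):
    """Shell-summed spectrum E_k = sum_{k-1/2<|m|<k+1/2} |u_hat_m|^2 of a real
    periodic field u on an N^3 grid, normalized so that sum_k E_k = <u^2>."""
    N = u.shape[0]
    uhat = np.fft.fftn(u) / u.size
    m = np.fft.fftfreq(N) * N                     # integer wavenumbers
    kx, ky, kz = np.meshgrid(m, m, m, indexing="ij")
    kbin = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)  # nearest shells
    return np.bincount(kbin.ravel(), weights=np.abs(uhat.ravel())**2)

def lowpass(u, k):
    """Coarse-grained field u^<_k: keep only Fourier modes with |m| <= k."""
    N = u.shape[0]
    uhat = np.fft.fftn(u)
    m = np.fft.fftfreq(N) * N
    kx, ky, kz = np.meshgrid(m, m, m, indexing="ij")
    uhat[np.sqrt(kx**2 + ky**2 + kz**2) > k] = 0.0
    return np.real(np.fft.ifftn(uhat))

rng = np.random.default_rng(1)
u = rng.normal(size=(32, 32, 32))
E = shell_spectrum(u)
print("Parseval check:", E.sum(), "vs", np.mean(u**2))
```

Filtering at a wave-number beyond the largest resolved shell returns the field unchanged, and filtering at $k=0$ returns its spatial mean, which provides two further sanity checks on the implementation.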
Using the above definitions in Eq.~\eqref{eqn:mom}, we get the energy budget equation \begin{equation} \partial_t\mathscr{E}_k + \Pi_k + \mathscr{F}^\sigma_k = \mathscr{P}_k - \mathscr{D}_k + \mathscr{F}^g_k. \label{ebud} \end{equation} Here, $2\mathscr{E}_k = \langle \bm{u}^<_k\cdot(\rho\bm{u})^<_k\rangle$ is the cumulative energy up to wave-number $k$, $2 \Pi_k = \langle(\rho\bm{u})^<_k \bm{\cdot} (\bm{u}\bm{\cdot}\nabla\bm{u})^<_k\rangle + \langle\bm{u}^<_k\bm{\cdot} (\bm{u}\bm{\cdot}\nabla\rho\bm{u})^<_k\rangle$ is the energy flux through wave-number $k$, $2\mathscr{D}_k= -[\langle(\rho\bm{u})^<_k\bm{\cdot}\left(\nabla\bm{\cdot} [ 2\mu {\cal S}]/\rho\right)^<_k\rangle + \langle\bm{u}^<_k\bm{\cdot}(\nabla \bm{\cdot} [2 \mu {\cal S}])^<_k\rangle]$ is the cumulative energy dissipated up to wave-number $k$, $2\mathscr{F}^\sigma_k = -[ \langle(\rho\bm{u})^<_k\bm{\cdot}\left(\bm{F}^\sigma/ \rho \right)^<_k \rangle + \langle\bm{u}^<_k\bm{\cdot}(\bm{F}^\sigma)^<_k\rangle]$ is the cumulative energy transferred from the bubble surface tension to the fluid up to wave-number $k$, and $2\mathscr{F}^g_k = \langle(\rho\bm{u})^<_k\bm{\cdot}\left(\bm{F}^g / \rho \right)^<_k \rangle + \langle\bm{u}^<_k\bm{\cdot}(\bm{F}^g)^<_k\rangle$ is the cumulative energy injected by buoyancy up to wave-number $k$. In a crucial departure from uniform-density flows, we find a non-zero cumulative pressure contribution $2\mathscr{P}_k = \langle (\rho\bm{u})^<_k \bm{\cdot}\left({\nabla p}/{\rho}\right)^<_k\rangle$. In the Boussinesq regime (small $\At$), the individual terms in the scale-by-scale budget simplify to their uniform-density analogues: $\mathscr{E}_k=\rho_a \langle \bm{u}^<_k\cdot \bm{u}^<_k\rangle/2$, $\Pi_k = \rho_a \langle\bm{u}^<_k \bm{\cdot}(\bm{u\cdot}\nabla \bm{u})^<_k\rangle$, $\mathscr{D}_k = \mu \langle |\nabla {\bm u}^<_k|^2 \rangle$, $\mathscr{F}^\sigma_k = -\langle\bm{u}^<_k\bm{\cdot} (\bm{F}^\sigma)^<_k\rangle$, $\mathscr{F}^g_k =\langle\bm{u}^<_k\bm{\cdot} (\bm{F}^g)^<_k\rangle$, and $\mathscr{P}_k=0$.
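A useful sanity check on these definitions is that, in the Boussinesq regime, the flux through the largest resolved wave-number must vanish for an incompressible periodic flow: $\Pi_k$ only redistributes energy among scales. The sketch below is our own two-dimensional illustration (with $\rho_a=1$, a $2\pi$-periodic box, and a band-limited random stream function; it is not the production code): it evaluates $\rho_a\langle\bm{u}\cdot(\bm{u}\cdot\nabla\bm{u})\rangle$, i.e.\ $\Pi_k$ with no filtering, using spectral derivatives, and confirms that it is zero to machine precision.

```python
import numpy as np

N, K = 32, 7                       # grid size; band limit K < N/4 avoids aliasing
rng = np.random.default_rng(2)
m = np.fft.fftfreq(N) * N          # integer wavenumbers (domain length 2*pi)
mx, my = np.meshgrid(m, m, indexing="ij")

# Random band-limited stream function psi -> incompressible u = (d_y psi, -d_x psi)
psi_hat = np.fft.fft2(rng.normal(size=(N, N)))
psi_hat[(np.abs(mx) > K) | (np.abs(my) > K)] = 0.0

def ddx(fhat, mi):                 # spectral derivative along one direction
    return np.real(np.fft.ifft2(1j * mi * fhat))

ux, uy = ddx(psi_hat, my), -ddx(psi_hat, mx)

# Advection term a = (u . grad) u, with derivatives evaluated spectrally
ax = ux * ddx(np.fft.fft2(ux), mx) + uy * ddx(np.fft.fft2(ux), my)
ay = ux * ddx(np.fft.fft2(uy), mx) + uy * ddx(np.fft.fft2(uy), my)

# With no filtering, Pi_k reduces to rho_a <u . (u.grad)u>, which must vanish
# for an incompressible periodic field: the nonlinearity conserves energy.
rho_a = 1.0
Pi_total = rho_a * np.mean(ux * ax + uy * ay)
print("total nonlinear transfer:", Pi_total)
```

Band-limiting the stream function keeps all products resolved on the grid, so the cancellation holds to round-off rather than merely to truncation error.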
\subsubsection{Low $\At$ {\rm(runs} $\tt{R1}-\tt{R4}${\rm )}} We first discuss the results for the Boussinesq regime (low $\At$). For scales smaller than the bubble diameter ($k>k_d$), the energy spectrum (\subfig{spec:lat}{a}) shows a power-law behavior $E(k)\sim k^{-\beta}$ for different $\Rey$. The exponent is $\beta=4$ for $\Rey=104$; it decreases with increasing $\Rey$ and approaches $\beta=3$ for the largest $\Rey=296$. We now investigate the dominant balances using a scale-by-scale energy budget analysis. In the statistically steady state $\partial_t {\mathscr E}_k=0$, and $\Pi_k + \mathscr{F}^\sigma_k = - \mathscr{D}_k + \mathscr{F}^g_k$ (note that $\mathscr{P}_k=0$ for low $\At$). In \subfig{spec:lat}{b} and \subfig{spec:lat}{c} we plot the different contributions to the cumulative energy budget for $\Rey \approx 104$ and $\Rey \approx 300$ and make the following observations: \begin{enumerate} \item The cumulative energy injected by buoyancy ${\mathscr F}_k^g$ saturates around $k\approx k_d$. Thus buoyancy injects energy at scales comparable to and larger than the bubble diameter. \item The energy flux $\Pi_k >0$ around $k\approx k_d$ and it vanishes for $k\gg k_d$. \item For scales smaller than the bubble diameter, in particular, the cumulative energy transfer from the bubble surface tension to the velocity field is the dominant energy transfer mechanism. \item Consistent with earlier predictions \citep{risso_legendre_2010}, our highest-$\Rey$ simulation ($\Rey=296$) provides direct evidence that the balance of the total production $d(\Pi_k+{\mathscr F}_k^\sigma)/dk \sim k^{-1}$ with the viscous dissipation [$d{\mathscr D}_k/dk = \nu k^2 E(k)$] gives the pseudo-turbulence spectrum $E(k)\sim k^{-3}$ \citep{risso_legendre_2010,roghair,martinez_2010}. \end{enumerate} Our scale-by-scale analysis, therefore, suggests the following mechanism of pseudo-turbulence. Buoyancy injects energy at scales comparable to and larger than the bubble size.
A part of the energy injected by buoyancy is absorbed in the stretching and deformation of the bubbles, and another fraction is transferred via the wakes to scales comparable to the bubble diameter. Similar to polymers in turbulent flows \citep{per06,per10,valente14}, the relaxation of the bubbles leads to an injection of energy at scales smaller than the bubble diameter. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig_5a} \includegraphics[width=0.45\linewidth]{fig_5b}\\ \includegraphics[width=0.5\linewidth]{fig_5c} \caption{\label{spec:lat} (a) Log-log plot of the energy spectra $E^{uu}_k$ versus $k/k_d$ for our high-$\Rey$, low-$\At$ runs {\tt R1 - R4}. The dash-dotted line indicates the $k^{-3}$ scaling. Cumulative contributions of the viscous dissipation $\mathscr{D}_k$, the energy injected by buoyancy $\mathscr{F}^g_k$, and the surface tension contribution $\mathscr{F}^\sigma_k$ versus $k/k_d$ for (b) run $\tt{R1}$ and (c) run {\tt R4}. Note that, for $k>k_d$, the balance between $d\mathscr{F}_k^\sigma/dk$ and $d\mathscr{D}_k/dk$ is more prominent in panel (c) compared to (b).} \end{figure} \subsubsection{High $\At$ {\rm(runs} $\tt{R5}-\tt{R7}${\rm )}} As in the previous section, the energy spectrum and the co-spectrum show a $k^{-3}$ scaling (\subfig{spec:hrehat}{a}). However, because of the density variations the scale-by-scale energy budget becomes more complex. Now, in the statistically steady state, $\Pi_k + \mathscr{F}^\sigma_k = \mathscr{P}_k - \mathscr{D}_k + \mathscr{F}^g_k$. In \subfig{spec:hrehat}{b} we plot the scale-by-scale energy budget for our high-$\At$ run ${\tt R5}$. We find that the cumulative energy injected by buoyancy and the pressure contribution, ${\mathscr F}_k^g + \mathscr{P}_k$, reach a peak around $k \approx k_d$ and then decrease mildly to $\epsilon_{inj}$. Similar to the low-$\At$ case, we find a non-zero energy flux for $k\approx k_d$ and a dominant surface-tension contribution to the energy budget for $k\gg k_d$.
Finally, as in the previous section, for $k>k_d$ the net production $d (\Pi_k + \mathscr{F}^\sigma_k)/dk \sim k^{-1}$ balances the viscous dissipation $\nu k^2 E(k)$ to give $E(k)\sim k^{-3}$. \begin{figure} \centering \includegraphics[width=0.49\linewidth]{fig_6a} \includegraphics[width=0.49\linewidth]{fig_6b} \caption{\label{spec:hrehat} (a) Log-log plot of the energy spectra ($\bigcirc$) $E^{uu}_k$ and the co-spectrum ($\square$) $E^{\rho uu}_k$ versus $k/k_d$ for our high-$\Rey$, high-$\At$ runs {\tt R5 - R7}. The dash-dotted line indicates the $k^{-3}$ scaling. (b) Cumulative contributions of the viscous dissipation $\mathscr{D}_k$, the contribution due to buoyancy and pressure $\mathscr{F}_k^g+\mathscr{P}_k$, the energy flux $\Pi_k$, and the surface tension contribution $\mathscr{F}_k^\sigma$ versus $k/k_d$ for run ${\tt R6}$.} \end{figure} \section{Conclusion} \label{concl} To conclude, we have investigated the statistical properties of velocity fluctuations in the pseudo-turbulence generated by buoyancy-driven bubbly flows. The $\Rey$ that we have explored are comparable to the $\Rey \sim 300$--$1000$ used in the experiments \citep{risso_legendre_2010,vivek_2016,mendez}. Our numerical simulations show that the shape of the p.d.f. of the velocity fluctuations is consistent with experiments over a wide range of $\Rey$ and $\At$. For large $\Rey$, at low as well as high $\At$, the energy spectrum shows a $k^{-3}$ scaling, but it becomes steeper on reducing $\Rey$. We observe a non-zero positive energy flux at scales comparable to the bubble diameter. Our scale-by-scale energy budget validates the theoretical prediction that the net production balances the viscous dissipation to give $E(k)\sim k^{-3}$. \section{Acknowledgments} We thank D. Mitra and S. Banerjee for discussions. This work was supported by research grant No. ECR/2018/001135 from SERB, DST (India). \input{paper_spec_bub_3d.bbl} \end{document}
\section{Introduction} Stochastic partial differential equations (SPDEs) are key ingredients in numerous models in engineering, finance, and the natural sciences. For example, SPDEs commonly appear in models to price interest-rate-based financial derivatives (cf., for example, (1.3) in Filipovi\'c et al.~\cite{FilipovicTappeTeichmann2010} and Theorem~3.5 in Harms et al.~\cite{HarmsStefanovitsTeichmannWutrich2015}), to describe random surfaces appearing in surface growth models (cf., for example, (3) in Hairer~\cite{HairerKPZ} and (1) in Bl\"omker \& Romito~\cite{BlomkerRomito2015}), to describe the temporal dynamics in Euclidean quantum field theories (cf., for example, (1.1) in Mourrat \& Weber~\cite{MourratWeber2015}), to describe velocity fields in fully developed turbulent flows (cf., for example, (1.5) in Birnir~\cite{Birnir2013b} and (7) in Birnir~\cite{Birnir2013a}), and to describe the temporal development of the concentration of an unwanted (biological or chemical) contaminant in water (e.g., in the groundwater system, in a river, or in a water basin; cf., for example, (2.2) in Kallianpur \& Xiong~\cite{KallianpurXiong1994} and (1.1) in Kouritzin \& Long~\cite{KouritzinLong2002}). Another prominent situation where SPDEs such as the Kushner equation (cf., for example, Kushner~\cite{kushner1964differential}) and the Zakai equation (cf., for example, Zakai~\cite{zakai1969optimal}) appear is in the case of nonlinear filtering problems, where SPDEs describe the conditional density of the state of the considered system.
In particular, we refer, e.g., to \cite{BrigoHanzon98,CeciColaneri17,CoculescuGemanJeanblanc08,DuffieLando01,FreyRunggaldier10,FreySchmidt12} for filtering problems in financial engineering, we refer, e.g., to \cite{BudmanHolcombMorari91,ChenSun91,Rutzler87,SeinfeldGavalasHwang71,SolimanRay79b,WindesCinarRay89} for filtering problems in chemical engineering, and we refer, e.g., to \cite{ duc2015ensemble,buehner2017ensemble,cassola2012wind,che2016wind,falissard2013genuinely,pelosi2017adaptive} for filtering problems in weather forecasting. SPDEs arising in nonlinear filtering problems are usually high-dimensional as the number of dimensions corresponds to the state space of the considered filtering problem. Most of the SPDEs in the above named applications cannot be solved explicitly and for about 30 years it has been an active field of research to design and study numerical algorithms which can approximatively compute solutions of SPDEs. In order to be able to implement an approximation scheme for evolutionary type SPDEs on a computer both the time interval $[0,T] $ as well as the infinite dimensional state space have to be discretized. Several types of temporal discretizations and spatial discretizations have been proposed and studied in the scientific literature. 
In particular, we refer, e.g., to \cite{debussche2011weak,gyongy1999lattice,hausenblas2008finite,shardlow1999numerical,walsh2005finite,yan2005galerkin} for temporal discretizations based on the linear implicit Euler method, we refer, e.g., to \cite{hutzenthaler2016strong,jentzen2009overcoming,lord2010stochastic,mukam2016note,wang2015note} for temporal discretizations based on exponential Euler-type methods, we refer, e.g., to \cite{hausenblas2002numerical,hausenblas2003approximation,shardlow1999numerical,walsh2005finite} for temporal discretizations based on linear implicit Crank--Nicolson-type methods, we refer, e.g., to \cite{bayer2016splitting,bensoussan1990approximation,bessaih2014splitting,florchinger1991time,gyongy2003splitting,legland1992splitting} for temporal discretizations based on splitting-up approximation methods, we refer, e.g., to \cite{jentzen2011efficient,jentzen2015milstein,kruse2014consistency,reisinger2018stability, wang2014higher} for temporal discretizations based on higher-order approximation methods, we refer, e.g., to \cite{brzezniak2013finite,geissert2009rate,katsoulakis2011noise,kovacs2010finite,lord2010modified,walsh2005finite,yan2005galerkin} for spatial discretizations based on finite element methods, we refer, e.g., to \cite{gyongy2006numerical,millet2005implicit,pettersson2005numerical,roth2006combination,shardlow1999numerical,walsh2006numerical} for spatial discretizations based on finite difference methods, and we refer, e.g., to \cite{grecksch1996time,hausenblas2003approximation,kloeden2001linear,lord2007postprocessing,muller2007implicit,mueller2008optimalOU} for spatial discretizations based on spectral Galerkin methods. Moreover, the recent article \cite{zhang2020learning} employs deep neural networks to approximately solve some one-dimensional SPDEs.
We also refer to the overview articles \cite{Gyongy02,jentzen2009numerical} and to the monographs \cite{jentzen2011taylor,kruse2014strong} for further references on numerical approximation methods for SPDEs. In this article we introduce and study a deep learning based approximation algorithm for approximating solutions of possibly high-dimensional SPDEs. In the proposed approximation algorithm we employ a deep neural network for every realization of the driving noise process of the SPDE to approximate the solution process of the SPDE under consideration. The derivation of the proposed approximation scheme is inspired by the ideas in the articles \cite{DeepSplitting,beck2018solving,han2018solving} in which deep learning based algorithms for high-dimensional PDEs have been proposed and studied. We also refer, e.g., to \cite{BeckEJentzen19, berg2018unified,chan2019machine,EHanJentzen2017,EHanJentzen2020algorithms, EYu2018deep, HenryLabordere17,hure2019some,jacquier2019deep,raissi2018deep,sirignano2018dgm} and the references therein for further articles on deep learning based approximation methods for PDEs. We test the performance of the approximation algorithm proposed in this article in the case of stochastic heat equations with additive noise (see Subsection~\ref{subsec:stoch_heat} below), stochastic heat equations with multiplicative noise (see Subsection~\ref{subsec:const-coeff} below), stochastic Black--Scholes equations with multiplicative noise (see Subsection~\ref{subsec:geom_BM} below), and Zakai equations from nonlinear filtering (see Subsection~\ref{subsec:Zakai} below). In each of these SPDEs the proposed approximation algorithm produces accurate results with short run times in up to 50 space dimensions. The remainder of this paper is organized as follows. 
In Section~\ref{sec:derivation} we derive (see Subsections \ref{subsec:temp-discret}--\ref{subsec:temp_disc} below) and formulate (see Subsections~\ref{subsec:algo1}--\ref{subsec:algo-Full-gen} below) the proposed approximation algorithm for SPDEs. In Section~\ref{sec:examples} we test the performance of the proposed approximation algorithm (see Subsection~\ref{subsec:algo-Full-gen} below) in the case of stochastic heat equations with additive noise (see Subsection~\ref{subsec:stoch_heat} below), stochastic heat equations with multiplicative noise (see Subsection~\ref{subsec:const-coeff} below), stochastic Black--Scholes equations with multiplicative noise (see Subsection~\ref{subsec:geom_BM} below), and Zakai equations (see Subsection~\ref{subsec:Zakai} below). In Section~\ref{sec:source} we present the Python source codes which we have used for the numerical simulations in Section~\ref{sec:examples}. \section{Derivation and description of the proposed approximation algorithm \label{sec:derivation} } Let $ T \in (0,\infty) $, $ d \in \N $, let $ \varphi\colon \R^d \to \R $ be a continuous function, let $(\Omega, \mathcal{F}, \P)$ be a probability space with a normal filtration $( \mathcal{F}_t )_{ t \in [0,T] }$ (cf., e.g., Liu \& R\"ockner~\cite[Definition 2.1.11]{liu2015stochastic}), let $ Z \colon [0,T] \times \R^d \times \Omega \to \R $ be a sufficiently regular random field which satisfies for every $ x\in\R^d $ that $ (Z_t(x))_{t\in [0,T]} \colon [0,T] \times \Omega \to \R $ is an $ (\mathcal F_t)_{t\in [0,T]} $-It\^o process, let $ \mu \colon \R^d \to \R^d, $ $ f \colon \R^d \times \R \times \R^d \to \R, $ and $ b \colon \R^d \times \R \times \R^d \to \R $ be sufficiently regular functions, let $ \sigma \colon \R^d \to \R^{ d \times d } $ be a sufficiently regular and sufficiently non-degenerate function, and let $ X\colon [0,T]\times\R^d\times\Omega\to\R $ be a random field which satisfies for every $ t\in [0,T] $, $ x\in\R^d $ that $ 
X_t(x)\colon\Omega\to\R $ is $\mathcal F_t$/$\B(\R)$-measurable, which satisfies for every $\omega\in\Omega$ that $ (X_t(x,\omega))_{(t,x)\in [0,T]\times\R^d}\in C^{0,2}([0,T]\times\R^d,\R) $ has at most polynomially growing partial derivatives, and which satisfies\footnote{Note that for every $d,m \in \N$ and every $(d\times m)$-matrix $A \in \R^{d\times m}$ it holds that $A^*\in \R^{m\times d}$ is the transpose of $A$.} that for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.\ that \begin{equation}\label{eq:mildFormulationSPDE} \begin{split} X_{ t }( x ) & = \varphi( x ) + \int_{ 0 }^{ t } f\big( x, X_s(x), ( \nabla X_s )( x ) \big) \, ds + \int_{ 0 }^{ t } b\big( x, X_s( x ), ( \nabla X_s )( x ) \big) \, dZ_s(x) \\ & \quad + \int_{ 0 }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma(x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds. \end{split} \end{equation} Our goal is to approximately calculate, under suitable hypotheses, the solution process $X\colon [0,T]\times\R^d \times \Omega \to \R$ of the SPDE \eqref{eq:mildFormulationSPDE}. \subsection{Temporal approximations} \label{subsec:temp-discret} In this subsection we discretize the SPDE \eqref{eq:mildFormulationSPDE} in time by employing the splitting-up method (cf., for example, \cite{Bensoussan_SplittingUpMethodForSPDEs1992, BensoussanGlowinskiRascanu_ApproximationBySplittingUp1992, GrekschLisei_ApproximationOfStochasticNonlinearEquationsBySplittingMethod2013, gyongy2003rate, gyongy2003splitting, legland1992splitting}) to obtain a semi-discrete approximation problem. To this end let $N\in\N$, $t_0, t_1, \ldots, t_N\in [0,T]$ be real numbers with \begin{equation} \label{eq:time-step-discrete} 0 = t_0 < t_1 < \ldots < t_N = T.
\end{equation} Observe that \eqref{eq:mildFormulationSPDE} yields that for every $n\in\{0,1,\ldots,N-1\}$, $t\in [t_n,t_{n+1}]$, $x\in\R^d$ it holds $\P$-a.s.~that \begin{equation} \begin{split} X_t(x) & = X_{t_n}(x) + \int_{ t_n }^{ t } f\big( x, X_s(x), ( \nabla X_s )( x ) \big) \, ds + \int_{ t_n }^{ t } b\big( x, X_s( x ), ( \nabla X_s )( x ) \big) \, dZ_s(x)\\ & \quad + \int_{t_n}^t \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds. \end{split} \end{equation} This illustrates for every $n\in\{0,1,\ldots,N-1\}$, $t\in [t_n,t_{n+1}]$, $x\in\R^d$ that \begin{equation}\label{eq:Xn_discretization} \begin{split} X_t (x) & \approx X_{t_n}(x) \\ & \quad + \int_{ t_n }^{ t_{ n + 1 } } f\big( x, X_{t_n}(x), ( \nabla X_{t_{n}} )( x ) \big) \, ds + \int_{ t_n }^{ t_{ n + 1 } } b\big( x, X_{t_n}( x ), ( \nabla X_{t_{n}} )( x ) \big) \, dZ_s(x) \\ & \quad + \int_{t_{n}}^t \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d }\Big]\,ds. \end{split} \end{equation} This, in turn, suggests for every $n\in\{0,1,\ldots,N-1\}$, $t\in [t_n,t_{n+1}]$, $x\in\R^d$ that \begin{equation}\label{eq:Xapproximation} \begin{split} X_{t}(x) & \approx X_{t_n}(x) + f \bigl( x,X_{t_n}(x),(\nabla X_{t_n})(x) \bigr)\,(t_{n+1}-t_n) \\[1ex] & \quad + b\big(x, X_{t_n}(x), (\nabla X_{t_n})(x)\big)\,\big(Z_{t_{n+1}}(x)-Z_{t_n}(x)\big) \\ & \quad + \int_{t_{n}}^t \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d }\Big]\,ds . 
\end{split} \end{equation} To derive the splitting-up approximation let $ U\colon (0,T]\times\R^d\times\Omega\to\R $ be a random field which satisfies for every $ \omega \in \Omega $, $ n \in \{0,1,\ldots,N-1\} $ that $ (U_t(x,\omega))_{(t,x)\in (t_n, t_{n+1}]\times\R^d} \in C^{1,2}( (t_n, t_{n+1}]\times\R^d,\R) $ has at most polynomially growing partial derivatives, which satisfies for every $\omega\in\Omega$, $x\in\R^d$ that $ \int_0^T \|(\operatorname{Hess} U_s)(x,\omega)\|_{\R^{d\times d}} + \|(\nabla U_s)(x,\omega)\|_{\R^d}\,ds < \infty $, and which satisfies that for every $n \in \{0,1,\ldots,N-1\}$, $t \in (t_n,t_{n+1}]$, $x \in \R^d$ it holds $\P$-a.s.~that \begin{equation}\label{eq:mildFormulationUPDE} \begin{split} U_t(x) & = X_{t_n}(x) + f(x,X_{t_n}(x),(\nabla X_{t_n})(x))\,(t_{n+1}-t_n) \\[1ex] & \quad + b(x,X_{t_n}(x),(\nabla X_{t_n})(x))\,(Z_{t_{n+1}}(x)-Z_{t_n}(x)) \\[1ex] & \quad + \int_{t_n}^t \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} U_s )( x ) \big) + \big\langle \mu( x ), ( \nabla U_s )( x ) \big\rangle_{ \R^d } \Big]\,ds. \end{split} \end{equation} Note that \eqref{eq:Xapproximation} and \eqref{eq:mildFormulationUPDE} suggest for every $n\in\{1,2,\ldots,N\}$, $x\in\R^d$ that \begin{equation}\label{eq:tildeXapproxX} U_{t_{n}}(x) \approx X_{t_{n}}(x). 
\end{equation} Next let $V\colon [0,T]\times\R^d\times\Omega\to\R$ be a random field which satisfies for every $ \omega\in\Omega $, $ n\in\{0,1,\ldots,N-1\} $ that $ (V_t(x,\omega))_{(t,x)\in (t_n,t_{n+1}]\times\R^d}\in C^{1,2}((t_n,t_{n+1}]\times\R^d,\R) $ has at most polynomially growing partial derivatives, which satisfies for every $ \omega \in \Omega $, $ x \in \R^d $ that $ \int_0^T \|(\operatorname{Hess} V_s)(x,\omega)$ $\|_{\R^{d\times d}}$ $ + \|(\nabla V_s)(x,\omega)\|_{\R^d} \,ds < \infty $, and which satisfies for every $ n \in \{0,1,\ldots,N-1\} $, $ t \in (t_n, t_{n+1}] $, $ x \in \R^d $ that $ V_0(x) = \varphi(x) $ and \begin{equation}\label{eq:mildFormulationVPDE} \begin{split} V_t(x) & = V_{t_n}(x) + f(x,V_{t_n}(x),(\nabla V_{t_n})(x))\,(t_{n+1}-t_n) \\[1ex] & \quad + b(x,V_{t_n}(x),(\nabla V_{t_n})(x))\,(Z_{t_{n+1}}(x)-Z_{t_n}(x)) \\[1ex] & \quad + \int_{ t_n }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} V_s )( x ) \big) + \big\langle \mu( x ), ( \nabla V_s )( x ) \big\rangle_{ \R^d } \Big] \, ds \end{split} \end{equation} (cf., for example, Deck \& Kruse \cite{DeckKruse_ParametrixMethod2002}, Hairer et al.\ \cite[Section 4.4]{HairerHutzenthalerJentzen_LossOfRegularity2015}, Krylov~\cite[Chapter 8]{Krylov_LecturesHoelder1996}, and Krylov~\cite[Theorem 4.32]{Krylov_KolmogorovsEquations1998} for existence, uniqueness, and regularity results for \eqref{eq:mildFormulationUPDE} and \eqref{eq:mildFormulationVPDE}). Note that \eqref{eq:mildFormulationUPDE} and \eqref{eq:mildFormulationVPDE} suggest for every $n\in\{1,2,\ldots,N\}$, $x\in\R^d$ that \begin{equation} V_{t_{n}}(x) \approx U_{t_{n}}(x). \end{equation} Combining this with \eqref{eq:tildeXapproxX}, in turn, suggests for every $n\in\{1,2,\ldots,N\}$, $x\in\R^d$ that \begin{equation}\label{eq:VapproxX} V_{t_{n}}(x) \approx X_{t_{n}}(x). 
\end{equation} Observe that the random field $V$ is a specific splitting-up type approximation for the random field $X$ (cf., for example,~\cite{Bensoussan_SplittingUpMethodForSPDEs1992, BensoussanGlowinskiRascanu_ApproximationBySplittingUp1992, GrekschLisei_ApproximationOfStochasticNonlinearEquationsBySplittingMethod2013, gyongy2003rate, gyongy2003splitting, legland1992splitting}). In the next subsection we derive a Feynman-Kac representation for $V$ given $Z$ (cf., for example, Milstein \& Tretyakov \cite[Section 2]{MilsteinTretyakov_SolvingPSPDEsViaAveraging2009}). \subsection{An approximate Feynman-Kac type representation} \label{subsec:approx_Feynman-Kac} In this subsection we derive a Feynman-Kac representation for $V$ given $Z$. More specifically, for every sufficiently regular function $z\colon [0,T]\times\R^d \to \R$ let ${\ensuremath{\mathcal{V}}}^{(z)}\colon [0,T]\times\R^d\to\R$ be a function which satisfies for every $n\in\{0,1,\ldots,N-1\}$ that $({\ensuremath{\mathcal{V}}}^{(z)}_t(x))_{(t,x)\in (t_{n},t_{n+1}]\times\R^d}\in C^{1,2}((t_{n},t_{n+1}]\times\R^d,\R)$ has at most polynomially growing partial derivatives, which satisfies for every $ x \in \R^d $ that $ \int_0^T \|(\operatorname{Hess} {\ensuremath{\mathcal{V}}}^{(z)}_s)(x)\|_{\R^{d\times d}} + \|(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_s)(x)\|_{\R^d} \,ds < \infty $, and which satisfies for every $n\in\{0,1,\ldots,N-1\}$, $t\in (t_{n},t_{n+1}]$, $x\in\R^d$ that ${\ensuremath{\mathcal{V}}}^{(z)}_0(x) = \varphi(x)$ and \begin{equation}\label{eq:generalVZequation} \begin{split} {\ensuremath{\mathcal{V}}}^{(z)}_{t}(x) &= {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x) + f(x,{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(x))\,(t_{n+1}-t_n) \\[1ex] & \quad + b(x,{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x), (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(x))\,(z_{t_{n+1}}(x) - z_{t_n}(x)) \\ & \quad + \int_{ t_n }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ 
\sigma( x ) ]^* ( \operatorname{Hess} {\ensuremath{\mathcal{V}}}^{(z)}_s )( x ) \big) + \big\langle \mu( x ), ( \nabla {\ensuremath{\mathcal{V}}}^{(z)}_s )( x ) \big\rangle_{ \R^d } \Big] \, ds . \end{split} \end{equation} Note that \eqref{eq:mildFormulationVPDE} and \eqref{eq:generalVZequation} ensure that for every $\omega\in\Omega$, $t\in [0,T]$, $x\in\R^d$ it holds that \begin{equation}\label{eq:connectionBetweenVzAndV} {\ensuremath{\mathcal{V}}}^{(Z_s(y,\omega))_{(s,y)\in [0,T]\times\R^d}}_t(x) = V_t(x,\omega). \end{equation} Combining this with \eqref{eq:VapproxX} suggests for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $\omega\in\Omega$, $n\in\{0,1,\ldots,N\}$, $x\in\R^d$ that \begin{equation}\label{eq:VapproxXWithZPluggedIn} {\ensuremath{\mathcal{V}}}^{(Z_s(y,\omega))_{(s,y)\in [0,T]\times\R^d}}_{t_n}(x) \approx X_{t_n}(x,\omega). \end{equation} Moreover, note that \eqref{eq:VapproxX}--\eqref{eq:connectionBetweenVzAndV} suggest for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n \in \{0,1,\dots,N\}$, $x\in \R^d$ that \begin{equation}\label{eq:VapproxXWithZPluggedIn-b} {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x) \approx \E\big[ X_{t_n}(x)\!~|\!~Z=z\big]. \end{equation} In the following we introduce additional artificial stochastic processes in order to incorporate a Feynman-Kac type representation into \eqref{eq:generalVZequation}. 
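In low space dimensions the recursion \eqref{eq:generalVZequation} can already be implemented by deterministic discretization. The following toy sketch (with $d=1$, a periodic spatial grid, $\mu=0$, $\sigma=\sqrt{2}$, and illustrative choices of $f$, $b$, and the noise increment, none of which are fixed by the text above) performs one splitting step: an explicit stage in $f$ and $b$ followed by an exact Fourier-space solve of the remaining linear (heat) equation.

```python
import numpy as np

# Toy version of one splitting-up step for V^(z): an explicit step in
# the nonlinearities f and b followed by an exact solve of the linear
# part.  Here d = 1 on a periodic grid with mu = 0 and sigma = sqrt(2),
# so the linear part is the heat equation, solved exactly in Fourier
# space.  f, b, and the increment dz are illustrative choices.

def splitting_step(v, x, dt, dz, f, b):
    grad = np.gradient(v, x)                         # finite-difference gradient
    u = v + f(x, v, grad) * dt + b(x, v, grad) * dz  # explicit f/b stage
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    heat = np.exp(-(k ** 2) * dt)                    # exact heat semigroup
    return np.real(np.fft.ifft(heat * np.fft.fft(u)))

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
v0 = np.sin(x)                                       # initial condition varphi
f = lambda x, v, g: -v                               # sample nonlinearity
b = lambda x, v, g: np.zeros_like(v)                 # noise term switched off
v1 = splitting_step(v0, x, dt=0.01, dz=np.zeros_like(x), f=f, b=b)
```

Grid-based solvers of this kind scale exponentially in $d$, which is one reason to pass to the stochastic representation developed below.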
Let $ B\colon [0,T]\times\Omega\to\R^d $ be a standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motion, let $ \xi\colon\Omega\to\R^d$ be an $\mathcal F_0$/$\B(\R^d)$-measurable function which satisfies for every $p\in (0,\infty)$, $x \in \R^d$ that $\P(\|\xi -x\|_{\R^d}\leq p)>0$ and $\E[\|\xi\|_{\R^d}^p] < \infty$, assume that $Z$ and $(\xi,B)$ are independent random variables, and let $Y\colon [0, T]\times\Omega\to\R^d$ be an $(\mathcal F_t)_{t\in [0,T]}$-adapted stochastic process with continuous sample paths which satisfies that for every $t\in [0,T]$ it holds $\P$-a.s.~that \begin{equation}\label{eq:SDEForY} Y_{t} = \xi + \int_0^t \mu(Y_s)\,ds + \int_0^t \sigma(Y_s)\,dB_s. \end{equation} Note that the assumption that for every $p\in (0,\infty)$ it holds that $\E[\|\xi\|_{\R^d}^p] < \infty$ and the assumption that $ \mu \colon \R^d \to \R^d $ and $ \sigma \colon \R^d \to \R^{ d \times d } $ are sufficiently regular functions ensure that for every $p\in (0,\infty)$ it holds that \begin{equation} \label{eq:Y-integ-cond} \sup_{t \in [0,T]} \E\big[\|Y_t\|_{\R^d}^p\big] < \infty. \end{equation} Moreover, observe that \eqref{eq:generalVZequation} implies that for every sufficiently regular function $z \colon [0,T]\times\R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$, $t\in (t_{n},t_{n+1})$, $x\in\R^d$ it holds that \begin{equation} \tfrac{\partial}{\partial t}\bigl[ {\ensuremath{\mathcal{V}}}^{(z)}_{t}(x) \bigr] = \big\langle \mu( x ), ( \nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t} )( x ) \big\rangle_{ \R^d } + \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} {\ensuremath{\mathcal{V}}}^{(z)}_{t} )( x ) \big). 
\end{equation} This, in turn, assures that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n\in\{0,1,\ldots,N-1\}$, $t\in (T-t_{n+1},T-t_{n})$, $x\in\R^d$ it holds that \begin{multline}\label{eq:backwardsEquationPoppingUpInIto} \tfrac{\partial}{\partial t}\bigl[ {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(x) \bigr] + \big\langle \mu( x ), ( \nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-t} )( x ) \big\rangle_{ \R^d } + \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma( x ) ]^* ( \operatorname{Hess} {\ensuremath{\mathcal{V}}}^{(z)}_{T-t} )( x ) \big) = 0. \end{multline} Next note that It\^o's formula, the hypothesis that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that $({\ensuremath{\mathcal{V}}}^{(z)}_t(x))_{(t,x)\in (t_{n},t_{n+1}]\times\R^d}\in C^{1,2}((t_n,t_{n+1}]\times\R^d,\R)$ (cf.\ \eqref{eq:generalVZequation}), and \eqref{eq:SDEForY} guarantee that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n\in\{0,1,\ldots,N-1\}$, $r,t\in [T-t_{n+1},T-t_{n})$ with $r< t$ it holds $\P$-a.s.~that \begin{equation} \begin{split} {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t) & = {\ensuremath{\mathcal{V}}}^{(z)}_{T-r}(Y_r) + \int_{r}^{t} \big\langle (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-s})(Y_s), \sigma(Y_s)\,dB_s \big\rangle_{\R^d} + \int_{r}^{t} \bigl( \tfrac{\partial}{\partial s} \bigl[ {\ensuremath{\mathcal{V}}}^{(z)}_{T-s} \bigr] \bigr) (Y_s)\,ds \\ & \quad + \int_{r}^{t} \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma(Y_s ) [ \sigma(Y_s ) ]^* ( \operatorname{Hess} {\ensuremath{\mathcal{V}}}^{(z)}_{T-s})( Y_s ) \big)\,ds \\ & \quad + \int_{r}^{t} \big\langle \mu( Y_s ), ( \nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-s} )( Y_s ) \big\rangle_{ \R^d } \, ds. 
\end{split} \end{equation} Combining this with \eqref{eq:backwardsEquationPoppingUpInIto} implies that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n\in\{0,1,\ldots,N-1\}$, $r,t\in [T-t_{n+1},T-t_n)$ with $r< t$ it holds $\P$-a.s.~that \begin{equation}\label{eq:ItoOnOpenIntervalAfterCancellation} {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t) = {\ensuremath{\mathcal{V}}}^{(z)}_{T-r}(Y_r) + \int_{r}^t \big\langle (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-s})(Y_s), \sigma(Y_s)\,dB_s \big\rangle_{\R^d}. \end{equation} Hence, we obtain that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n \in \{0,1,\dots,N-1\}$, $t\in (T-t_{n+1},T-t_n)$ it holds $\P$-a.s.\ that \begin{equation}\label{eq:ItoOnHalfClosedInterval} {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t) = {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}}) + \int_{T-t_{n+1}}^t \big\langle (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-s})(Y_s), \sigma(Y_s)\,dB_s \big\rangle_{\R^d}. \end{equation} Furthermore, note that \eqref{eq:Y-integ-cond}, the hypothesis that $\sigma \colon \R^d \to \R^{d\times d}$ is a sufficiently regular function, and the hypothesis that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n \in \{0,1,\dots,N-1\}$ it holds that $(t_n,t_{n+1}] \times \R^d \ni (t,x) \mapsto (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_t)(x) \in \R^d$ is an at most polynomially growing function assure that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n \in \{0,1,\dots,N-1\}$, $t\in (T-t_{n+1},T-t_n)$ it holds that \begin{equation} \int_{T-t_{n+1}}^t \E \Big[\big\| [\sigma(Y_s)]^\ast (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-s})(Y_s)\big\|_{\R^d}^2\Big]\, ds <\infty. 
\end{equation} Therefore, we obtain that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n \in \{0,1,\dots,N-1\}$, $t \in (T-t_{n+1},T-t_n)$ it holds $\P$-a.s.~that \begin{equation}\label{eq:conditionalExpectation1stTerm} \E\bigg[\int_{T-t_{n+1}}^{t} \big\langle (\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{T-s})(Y_s), \sigma(Y_s)\,dB_{s} \big\rangle_{\R^d}\!~\Big|\!~\mathcal F_{T-t_{n+1}} \bigg] = 0 . \end{equation} This and \eqref{eq:ItoOnHalfClosedInterval} demonstrate that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$, $t \in [T-t_{n+1}, T-t_{n})$ it holds $\P$-a.s.~that \begin{equation} \E\big[ {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t)\!~\big|\!~\mathcal F_{T-t_{n+1}} \big] = \E\big[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}})\!~\big|\!~\mathcal F_{T-t_{n+1}} \big]. \end{equation} The fact that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that the function $\Omega \ni \omega \mapsto {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}}(\omega)) \in \R$ is $\mathcal F_{T-t_{n+1}}$/$\B(\R)$-measurable hence implies that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,$ $\dots,N-1\}$, $t \in [T-t_{n+1}, T-t_n)$ it holds $\P$-a.s.~that \begin{equation}\label{eq:conditionalExpectationWitht} \E\big[ {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t)\!~\big|\!~\mathcal F_{T-t_{n+1}}\big] = {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}}). 
\end{equation} Next observe that the hypothesis that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that $({\ensuremath{\mathcal{V}}}^{(z)}_t(x))_{(t,x)\in (t_n,t_{n+1}]\times\R^d}\in C^{1,2}((t_n,t_{n+1}]\times\R^d,\R)$ has at most polynomially growing partial derivatives and the fact that for every $\omega\in\Omega$ it holds that $[0,T] \ni t \mapsto Y_t(\omega)\in \R^d$ is a continuous function ensure that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $\omega\in\Omega$, $n\in\{0,1,\ldots,N-1\}$ it holds that \begin{equation}\label{25a} \limsup_{t\nearrow T-t_n} \Big|{\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t(\omega))- {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_{T-t_n}(\omega))\Big|=0. \end{equation} Furthermore, observe that \eqref{eq:generalVZequation} and the hypothesis that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $x\in \R^d$ it holds that $ \int_0^T \|(\operatorname{Hess} {\ensuremath{\mathcal{V}}}^{(z)}_s)(x)\|_{\R^{d\times d}} + \|(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_s)(x)\|_{\R^d} \,ds < \infty $ show that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $\omega\in\Omega$, $n\in\{0,1,\ldots,N-1\}$ it holds that \begin{align}\label{25-tilde} & \limsup_{t\nearrow T-t_n} \Big|{\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_{T-t_n}(\omega))- \big[{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)) \nonumber\\ & + f\big(Y_{T-t_n}(\omega),{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n}(\omega))\big)\,(t_{n+1}-t_n) \\ & + b\big(Y_{T-t_n}(\omega),{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n}(\omega))\big)\,\big(z_{t_{n+1}}(Y_{T-t_n}(\omega))-z_{t_n}(Y_{T-t_n}(\omega))\big) \big] \Big|=0. 
\nonumber \end{align} Combining this with \eqref{25a} demonstrates that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $\omega\in\Omega$, $n\in\{0,1,\ldots,N-1\}$ it holds that \begin{align}\label{eq:ptwConv} & \limsup_{t\nearrow T-t_n} \Big|{\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t(\omega))- \big[{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)) \nonumber \\ & + f\big(Y_{T-t_n}(\omega),{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n}(\omega))\big)\,(t_{n+1}-t_n) \\ & + b\big(Y_{T-t_n}(\omega),{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n}(\omega))\big)\,\big(z_{t_{n+1}}(Y_{T-t_n}(\omega))-z_{t_n}(Y_{T-t_n}(\omega))\big) \big] \Big|=0. \nonumber \end{align} In addition, note that the hypothesis that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n \in \{0,1,\dots,N\}$ it holds that $({\ensuremath{\mathcal{V}}}^{(z)}_t(x))_{(t,x)\in (t_n,t_{n+1}]\times\R^d}\in C^{1,2}((t_n,t_{n+1}]\times\R^d,\R)$ has at most polynomially growing partial derivatives and \eqref{eq:Y-integ-cond} ensure that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $p \in (0,\infty)$ it holds that \begin{equation}\label{NRX} \bigg(\!\sup_{t \in [0,T]} \E\big[\|Y_t\|^p_{\R^d}\big] \!\bigg) + \bigg(\!\sup_{t \in [0,T)} \E\big[|{\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t)|^p\big] \!\bigg) + \bigg(\!\sup_{t \in (0,T]} \E\big[\|(\nabla{\ensuremath{\mathcal{V}}}^{(z)}_{t})(Y_{T-t})\|^p_{\R^d}\big] \!\bigg) <\infty. 
\end{equation} Next note that the fact that for every $x\in \R^d$, $\omega \in \Omega$ it holds that $X_0(x,\omega)=\varphi(x)$, the hypothesis that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $x \in \R^d$ it holds that ${\ensuremath{\mathcal{V}}}^{(z)}_0(x)=\varphi(x)$, and the hypothesis that for every $\omega \in \Omega$ it holds that $(X_t(x,\omega))_{(t,x)\in[0,T]\times \R^d} \in C^{0,2}([0,T]\times \R^d,\R)$ has at most polynomially growing partial derivatives demonstrate that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ it holds that $({\ensuremath{\mathcal{V}}}^{(z)}_0(x))_{x \in \R^d} \in C^2(\R^d,\R)$ has at most polynomially growing derivatives. This and \eqref{NRX} imply that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $p \in (0,\infty)$ it holds that \begin{equation} \bigg(\!\sup_{t \in [0,T]} \E\big[\|Y_t\|^p_{\R^d}\big] \!\bigg) + \bigg(\!\sup_{t \in [0,T]} \E\big[|{\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t)|^p\big] \!\bigg) + \bigg(\!\sup_{t \in [0,T]} \E\big[\|(\nabla{\ensuremath{\mathcal{V}}}^{(z)}_{t})(Y_{T-t})\|^p_{\R^d}\big] \!\bigg) <\infty. 
\end{equation} The hypothesis that $f\colon \R^d \times \R \times \R^d \to \R$ and $b\colon \R^d \times \R \times \R^d \to \R$ are sufficiently regular functions hence proves that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $p \in (0,\infty)$ it holds that \begin{equation}\label{NRZ} \begin{split} &\sup_{t \in [0,T]} \E\Big[ \big|{\ensuremath{\mathcal{V}}}^{(z)}_{t}(Y_{T-t})\big|^p\Big] \\ & + \max_{n \in \{0,1,\dots,N-1\}}\E\Big[\big|{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}) + f\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big) \,(t_{n+1}-t_n)\\ & + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big)\big|^p \Big] <\infty. \end{split} \end{equation} Combining \eqref{eq:ptwConv} and, e.g., Hutzenthaler et al.\ \cite[Proposition~4.5]{hutzenthaler2016strong} therefore demonstrates that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that \begin{multline} \limsup_{t\nearrow T-t_n} \E\Big[ \big| {\ensuremath{\mathcal{V}}}^{(z)}_{T-t}(Y_t) - \big[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}) + f\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big) \,(t_{n+1}-t_n)\\ + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big) \big] \big| \Big] = 0. 
\end{multline} This and \eqref{eq:conditionalExpectationWitht} yield that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds $\P$-a.s.~that \begin{equation}\label{eq:conditionalExpRZ} \begin{split} & {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}})\\ & = \E\Bigl[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}) + f\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big) \,(t_{n+1}-t_n) \\ & \quad + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big) \!~\big|\!~\mathcal F_{T-t_{n+1}} \Bigr]. \end{split} \end{equation} The tower property for conditional expectations therefore assures that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds $\P$-a.s.~that \begin{equation}\label{eq:towerProperty} \begin{split} & \E\Big[{\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}})\!~|\!~\mathfrak{S}(Y_{T-t_{n+1}})\Big] \\ & = \E\Big[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}) + f\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big) \,(t_{n+1}-t_n)\\ & \quad + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big) \!~\big|\!~\mathfrak{S}(Y_{T-t_{n+1}}) \Big]. 
\end{split} \end{equation} In addition, observe that the fact that for every $n\in\{0,1,\ldots,N-1\}$ it holds that the function $\Omega \ni \omega \mapsto Y_{T-t_{n+1}}(\omega) \in \R^d$ is $\mathfrak{S}(Y_{T-t_{n+1}})$/$\B(\R^d)$-measurable assures that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds $\P$-a.s.~that \begin{equation} {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}}) = \E\Big[{\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}})\!~|\!~\mathfrak{S}(Y_{T-t_{n+1}})\Big] . \end{equation} This and \eqref{eq:towerProperty} imply that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds $\P$-a.s.~that \begin{equation}\label{eq:conditionalExpRZ2} \begin{split} & {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(Y_{T-t_{n+1}})\\ &= \E\Bigl[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}) + f\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big) \,(t_{n+1}-t_n) \\ & \quad + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big) \!~\big|\!~\mathfrak{S}(Y_{T-t_{n+1}}) \Bigr]. \end{split} \end{equation} Equation \eqref{eq:conditionalExpRZ2} constitutes the Feynman-Kac type representation we were aiming at. In the following subsection we employ the factorization lemma (cf., for example, Becker et al.\ \cite[Lemma~2.1]{Deepstopping} or~Klenke~\cite[Corollary 1.97]{Klenke_2014}) and the $L^2$-minimality property of conditional expectations (cf., for example, Klenke~\cite[Corollary 8.17]{Klenke_2014}) to reformulate \eqref{eq:conditionalExpRZ2} as recursive minimization problems. 
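The $L^2$-minimality property invoked here states that the conditional expectation $\E[G\,|\,Y]$ is the best mean-square predictor of $G$ given $Y$. A self-contained numerical illustration of this fact (with toy distributions that are not the quantities appearing above):

```python
import numpy as np

# L^2-minimality of conditional expectations: E[G | Y] minimizes
# u |-> E[|u(Y) - G|^2].  Toy check with G = Y^2 + independent noise,
# so the conditional expectation is y |-> y^2; a least-squares fit
# over the basis {1, y, y^2} should recover exactly that function.

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)
g = y ** 2 + rng.standard_normal(100_000)      # E[g | Y = y] = y^2

basis = np.stack([np.ones_like(y), y, y ** 2], axis=1)
coeffs, *_ = np.linalg.lstsq(basis, g, rcond=None)
# coeffs is close to (0, 0, 1), i.e. the fit recovers y |-> y^2
```

This is precisely the mechanism exploited below: replacing the continuous minimization over all of $C(\R^d,\R)$ by a minimization over a parametric family turns the conditional expectation into a regression problem.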
\subsection{Formulation as recursive minimization problems} In this subsection we reformulate \eqref{eq:conditionalExpRZ2} as recursive minimization problems. For this observe that \eqref{NRZ} shows that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that \begin{multline}\label{eq:VtnIsInL2} \E\Big[\big|{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}) + f\big(Y_{T-t_n}, {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,(t_{n+1}-t_n)\\ + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big) \big|^2\Big] < \infty. \end{multline} The factorization lemma, e.g., in \cite[Lemma~2.1]{Deepstopping} (applied with $(S,\S)\with(\R^d,\B(\R^d))$, $\Omega\with\Omega$, $X\with Y_{T-t_{n+1}}$ for $n\in\{0,1,\ldots,N-1\}$ in the notation of \cite[Lemma~2.1]{Deepstopping}), the $L^2$-minimality property for conditional expectations, e.g., in Klenke~\cite[Corollary 8.17]{Klenke_2014} (applied with $X\with \Omega \ni \omega \mapsto {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)) + f(Y_{T-t_n}(\omega), {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n}(\omega)))\,$ $(t_{n+1}-t_n) + b(Y_{T-t_n}(\omega),{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}(\omega)),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n}(\omega)))(z_{t_{n+1}}(Y_{T-t_n}(\omega))-z_{t_n}(Y_{T-t_n}(\omega))) \in \R$, ${\ensuremath{\mathcal{F}}}\with\mathfrak{S}(Y_{T-t_{n+1}})$, ${\ensuremath{\mathcal{A}}}\with{\ensuremath{\mathcal{F}}}$ in the notation of \cite[Corollary~8.17]{Klenke_2014}), the fact that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that $\R^d \ni x \mapsto {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(x) \in \R$ is a continuous 
function, the fact that for every Borel measurable set $A\in\B(\R^d)$ with positive Lebesgue measure it holds that $\min_{n \in\{0,1,\dots,N-1\}} \P(Y_{T-t_{n+1}} \in A)>0$, and \eqref{eq:conditionalExpRZ2} hence imply that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{0,1,\ldots,N-1\}$ it holds that \begin{equation}\label{eq:formulationAsMinimizationProblem-OLD} \begin{split} &({\ensuremath{\mathcal{V}}}^{(z)}_{t_{n+1}}(x))_{x\in\R^d} = \operatornamewithlimits{argmin}_{u\in C(\R^d,\R)} \E \Big[ \big|u(Y_{T-t_{n+1}}) - \big[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n})\\ & \quad + f\big(Y_{T-t_n}, {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\,(t_{n+1}-t_n)\\ & \quad + b\big(Y_{T-t_n},{\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(Y_{T-t_n}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_n})(Y_{T-t_n})\big)\big(z_{t_{n+1}}(Y_{T-t_n})-z_{t_n}(Y_{T-t_n})\big) \big] \big|^2 \Big]. \end{split} \end{equation} Therefore, we obtain that for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n \in \{1,2,\dots,N\}$ it holds that \begin{equation}\label{eq:formulationAsMinimizationProblem} \begin{split} &({\ensuremath{\mathcal{V}}}^{(z)}_{t_{n}}(x))_{x\in\R^d} = \operatornamewithlimits{argmin}_{u\in C(\R^d,\R)} \E \Big[ \big|u(Y_{T-t_{n}}) - \big[ {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n-1}}(Y_{T-t_{n-1}}) \\ &+ f\big(Y_{T-t_{n-1}}, {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n-1}}(Y_{T-t_{n-1}}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n-1}})(Y_{T-t_{n-1}})\big)\,(t_{n}-t_{n-1})\\ &+ b\big(Y_{T-t_{n-1}}, {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n-1}}(Y_{T-t_{n-1}}),(\nabla {\ensuremath{\mathcal{V}}}^{(z)}_{t_{n-1}})(Y_{T-t_{n-1}})\big)\, \big(z_{t_{n}}(Y_{T-t_{n-1}})-z_{t_{n-1}}(Y_{T-t_{n-1}})\big) \big] \big|^2 \Big]. 
\end{split} \end{equation} In the following subsection we approximate for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{1,2,\ldots,N\}$ the function $\R^d\ni x\mapsto {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x)\in\R$ by suitable deep neural networks. \subsection{Deep neural network approximations} In this subsection we employ for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n\in\{1,2,\ldots,N\}$ suitable approximations for the function \begin{equation} \R^d\ni x\mapsto {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x)\in\R. \end{equation} More specifically, let $\nu\in\N$ and for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ let $\bV^{(z)}_{n}=(\bV^{(z)}_{n}(\theta,x))_{(\theta,x)\in \R^\nu \times \R^d}\colon \R^{\nu}\times\R^d\to\R$, $n\in\{0,1,\dots,N\}$, be continuously differentiable functions which satisfy for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $\theta \in \R^{\nu}$, $x\in\R^d$ that $\bV^{(z)}_0(\theta,x) = \varphi(x)$. For every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$, every $n\in\{1,2,\ldots,N\}$, $x\in\R^d$, and every \emph{suitable} $ \theta\in\R^{\nu} $ we think of $\bV^{(z)}_n(\theta,x)\in\R$ as an appropriate approximation \begin{equation}\label{eq:VThetaApproxVz} \bV^{(z)}_{n}(\theta,x)\approx {\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x) \end{equation} of ${\ensuremath{\mathcal{V}}}^{(z)}_{t_n}(x)$. Combining this with \eqref{eq:VapproxXWithZPluggedIn-b} indicates for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$, every $n\in\{1,2,\ldots,N\}$, $x\in\R^d$, and every \emph{suitable} $ \theta\in\R^{\nu} $ that \begin{equation} \label{eq:VThetaApproxVz2} \bV^{(z)}_{n}(\theta,x) \approx \E\big[X_{t_n}(x)\!~|\!~Z=z\big]. 
\end{equation} For every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ we propose to choose the functions $\bV^{(z)}_n\colon\R^{\nu}\times\R^d\to\R$, $n\in\{1,2,\ldots,N\}$, as deep neural networks (cf., for instance, \cite{Bengio09,LeCunBengioHinton15}). For example, for every $k\in\N$ let $ \mathcal{T}_{k} \colon \R^k \to \R^k $ satisfy for every $ x = ( x_1, x_2, \dots, x_k ) \in \R^k $ that \begin{equation} \label{eq:logistic} \mathcal{T}_k( x ) = \big( \! \tanh(x_1), \tanh(x_2), \dots , \tanh(x_k) \big) \end{equation} (multidimensional version of the tangens hyperbolicus), for every $ \theta = ( \theta_1, \theta_2, \dots, \theta_{ \nu } ) \in \R^{ \nu } $, $ v \in \N_0 = \{0\} \cup \N $, $ k, l \in \N $ with $ v + lk + l \leq \nu $ let $ A^{ \theta, v }_{ k, l } \colon \R^k \to \R^l $ satisfy for every $ x = ( x_1, x_2, \dots, x_k ) \in \R^k$ that \begin{equation} A^{ \theta, v }_{ k, l }( x ) = \left( \begin{array}{cccc} \theta_{ v + 1 } & \theta_{ v + 2 } & \dots & \theta_{ v + k } \\ \theta_{ v + k + 1 } & \theta_{ v + k + 2 } & \dots & \theta_{ v + 2 k } \\ \theta_{ v + 2 k + 1 } & \theta_{ v + 2 k + 2 } & \dots & \theta_{ v + 3 k } \\ \vdots & \vdots & \vdots & \vdots \\ \theta_{ v + ( l - 1 ) k + 1 } & \theta_{ v + ( l - 1 ) k + 2 } & \dots & \theta_{ v + lk } \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_k \end{array} \right) + \left( \begin{array}{c} \theta_{ v + lk + 1 } \\ \theta_{ v + lk + 2 } \\ \theta_{ v + lk + 3 } \\ \vdots \\ \theta_{ v + lk + l } \end{array} \right) \end{equation} (affine function), let $s\in \{3,4,5,6,\dots\}$, assume that $ s(N+1)d(d+1)\leq \nu$, and for every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ let $\bV^{(z)}_n\colon\R^{\nu}\times\R^d\to\R$, $n\in\{0,1,\ldots,N\}$, satisfy for every $n\in\{1,2,\ldots,N\}$, $\theta \in \R^\nu$, $x\in \R^d$ that $\bV^{(z)}_0(\theta,x) = \varphi(x)$ and \begin{align} \label{eq:neural_network_Intro} & \bV^{(z)}_{ n 
}(\theta,x) =\\ & \big(A^{ \theta, (sn+s-1)d(d+1) }_{ d, 1 } \circ \mathcal{T}_d \circ A^{ \theta, (sn+s-2)d(d+1) }_{ d, d } \circ \ldots \circ \mathcal{T}_d \circ A^{ \theta, (sn+1)d(d+1) }_{ d, d } \circ \mathcal{T}_d \circ A^{ \theta, snd(d+1) }_{ d, d }\big)(x) . \nonumber \end{align} For every sufficiently regular function $z\colon [0,T]\times \R^d \to \R$ and every $n \in \{1,2,\dots,N\}$ the function $\bV^{(z)}_n\colon\R^{\nu}\times\R^d\to\R$ in \eqref{eq:neural_network_Intro} describes a fully-connected feedforward deep neural network with $s+1$ layers (1 input layer with $d$ neurons, $s-1$ hidden layers with $d$ neurons each, and 1 output layer with 1 neuron) and multidimensional versions of the tangens hyperbolicus as activation functions (see \eqref{eq:logistic}). \subsection{Stochastic gradient descent based minimization} We intend to find for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ \emph{suitable} $\theta^{z,1},\theta^{z,2},\ldots,\theta^{z,N}\in\R^{\nu}$ in \eqref{eq:VThetaApproxVz} by recursive minimization. 
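For concreteness, the parameter indexing of \eqref{eq:neural_network_Intro} can be transcribed directly: in the following sketch the affine maps $A^{\theta,v}_{k,l}$ read their weights and biases from the flat parameter vector $\theta$, with the componentwise hyperbolic tangent applied between them. The sizes $d$, $s$, $N$ below are arbitrary toy values, not choices made in the text.

```python
import numpy as np

# Transcription of the network bV^{(z)}_n: the affine map A^{theta,v}_{k,l}
# uses the l*k weights starting at offset v and the l biases after them;
# each V_n occupies the parameter range starting at offset s*n*d*(d+1).

def affine(theta, v, k, l, x):
    W = theta[v : v + l * k].reshape(l, k)          # weight matrix
    c = theta[v + l * k : v + l * k + l]            # bias vector
    return W @ x + c

def V_n(theta, x, n, d, s):
    h = x
    for j in range(s - 1):                          # s - 1 hidden affine maps
        h = np.tanh(affine(theta, (s * n + j) * d * (d + 1), d, d, h))
    return affine(theta, (s * n + s - 1) * d * (d + 1), d, 1, h)[0]

d, s, N = 3, 4, 2
nu = s * (N + 1) * d * (d + 1)                      # parameter count bound
theta = np.random.default_rng(1).standard_normal(nu)
out = V_n(theta, np.ones(d), n=1, d=d, s=s)         # a single scalar output
```

Since the parameter ranges of $\bV^{(z)}_1,\ldots,\bV^{(z)}_N$ are disjoint, the networks for different time points can be trained one after another, which is exploited in the recursive minimization below.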
More precisely, for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ we intend to find for $n\in\{1,2,\ldots,N\}$, $\theta^{z,0},\theta^{z,1},\dots,\theta^{z,n-1} \in \R^\nu$ a suitable $\theta^{z,n}\in\R^{\nu}$ as an approximate minimizer of the function \begin{equation}\label{eq:toMinimize} \begin{split} & \R^{\nu} \ni \theta \mapsto \E\Big[\big| \bV^{(z)}_{n}(\theta,Y_{T-t_{n}}) - \big[ \bV^{(z)}_{n-1}(\theta^{z,n-1},Y_{T-t_{n-1}}) \\ & \quad + f\big(Y_{T-t_{n-1}},\bV^{(z)}_{n-1}(\theta^{z,n-1},Y_{T-t_{n-1}}), (\nabla_x\bV^{(z)}_{n-1})(\theta^{z,n-1},Y_{T-t_{n-1}})\big)\,(t_{n}-t_{n-1})\\ & \quad + b\big(Y_{T-t_{n-1}},\bV^{(z)}_{n-1}(\theta^{z,n-1},Y_{T-t_{n-1}}), (\nabla_x\bV^{(z)}_{n-1})(\theta^{z,n-1},Y_{T-t_{n-1}})\big)\\ & \quad \cdot \big(z_{t_{n}}(Y_{T-t_{n-1}})-z_{t_{n-1}}(Y_{T-t_{n-1}})\big) \big] \big|^2\Big] \in \R \end{split} \end{equation} (cf.\ \eqref{eq:formulationAsMinimizationProblem} and \eqref{eq:VThetaApproxVz} above). To this end for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ let $B^{z,m}\colon [0,T]\times\Omega\to\R^d$, $m\in\N_0$, be i.i.d.~standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motions, let $\xi^{z,m}\colon\Omega\to\R^d$, $m\in\N_0$, be i.i.d.~$\mathcal F_0/\B(\R^d)$-measurable functions, for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m\in\N_0$ let $Y^{z,m}\colon [0,T]\times\Omega\to\R^d$ be an $(\mathcal F_t)_{t\in [0,T]}$-adapted stochastic process with continuous sample paths which satisfies that for every $t\in [0,T]$ it holds $\P$-a.s.~that \begin{equation} \label{eq:SDE-Y-m} Y^{z,m}_t = \xi^{z,m} + \int_0^t \mu(Y^{z,m}_s)\,ds + \int_0^t \sigma(Y^{z,m}_s)\,dB^{z,m}_s, \end{equation} let $\gamma\in (0,\infty)$, $M\in\N$, and for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ let $\vartheta^{z,n}=(\vartheta^{z,n}_m)_{m\in\N_0}\colon \N_0\times\Omega\to\R^\nu$, $n\in\{0,1,\ldots,N\}$, be stochastic processes which satisfy for 
every $n\in\{1,2,\ldots,N\}$, $m\in\N_0$ that \begin{equation}\label{eq:sgdWithOriginalY} \begin{split} &\vartheta^{z,n}_{m+1} = \vartheta^{z,n}_m - 2\gamma\cdot (\nabla_{\theta}\bV^{(z)}_n)(\vartheta^{z,n}_m,Y^{z,m}_{T-t_n}) \cdot \Big[ \bV^{(z)}_n(\vartheta^{z,n}_m,Y^{z,m}_{T-t_n}) - \bV^{(z)}_{n-1}(\vartheta^{z,n-1}_M,Y^{z,m}_{T-t_{n-1}}) \\ & \quad -f\big(Y^{z,m}_{T-t_{n-1}},\bV^{(z)}_{n-1}(\vartheta^{z,n-1}_M,Y^{z,m}_{T-t_{n-1}}), (\nabla_x\bV^{(z)}_{n-1})(\vartheta^{z,n-1}_M,Y^{z,m}_{T-t_{n-1}})\big)\,(t_{n}-t_{n-1})\\ & \quad -b\big(Y^{z,m}_{T-t_{n-1}},\bV^{(z)}_{n-1}(\vartheta^{z,n-1}_M,Y^{z,m}_{T-t_{n-1}}), (\nabla_x\bV^{(z)}_{n-1})(\vartheta^{z,n-1}_M,Y^{z,m}_{T-t_{n-1}})\big)\\ & \quad \cdot \big(z_{t_{n}}(Y^{z,m}_{T-t_{n-1}})-z_{t_{n-1}}(Y^{z,m}_{T-t_{n-1}})\big) \Big] \end{split} \end{equation} (cf.\ \eqref{phi-z-n-m-Special}--\eqref{eq:sgdWithApproximatedY-SPECIAL} below). \subsection{Temporal discretization of the auxiliary stochastic process} \label{subsec:temp_disc} Equation \eqref{eq:sgdWithOriginalY} provides us with an implementable numerical algorithm in the special case where for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ one can simulate exactly from the solution processes $Y^{z,m}\colon[0,T]\times \Omega \to \R^d$, $m\in \N_0$, of the SDEs in \eqref{eq:SDE-Y-m} (cf.\ also \eqref{eq:SDEForY} above). In the case where it is not possible to simulate for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ exactly from the solution processes $Y^{z,m}\colon[0,T]\times \Omega \to \R^d$, $m\in \N_0$, of the SDEs in \eqref{eq:SDE-Y-m}, one can employ a numerical approximation method for SDEs, say, the Euler-Maruyama scheme, to approximatively simulate for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ from the solution processes $Y^{z,m}\colon[0,T]\times \Omega \to \R^d$, $m\in \N_0$, of the SDEs in \eqref{eq:SDE-Y-m}. This is the subject of this subsection.
More formally, note that \eqref{eq:SDE-Y-m} implies that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m\in\N_0$, $r,t\in [0,T]$ with $r\leq t$ it holds $\P$-a.s.\ that \begin{equation} Y^{z,m}_{t} = Y^{z,m}_r + \int_r^t \mu(Y^{z,m}_s)\,ds + \int_r^t \sigma(Y^{z,m}_s)\,dB^{z,m}_s. \end{equation} Hence, we obtain that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m\in\N_0$, $n\in\{0,1,\ldots,N-1\}$ it holds $\P$-a.s.~that \begin{equation} Y^{z,m}_{T-t_{n}} = Y^{z,m}_{T-t_{n+1}} + \int_{T-t_{n+1}}^{T-t_n} \mu(Y^{z,m}_s)\,ds + \int_{T-t_{n+1}}^{T-t_n} \sigma(Y^{z,m}_s)\,dB^{z,m}_s. \end{equation} This shows that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m \in \N_0$, $n\in \{0,1,\dots,N-1\}$ it holds $\P$-a.s.\ that \begin{equation} \label{NR} Y^{z,m}_{T-t_{N-(n+1)}} =Y^{z,m}_{T-t_{N-n}} + \int_{T-t_{N-n}}^{T-t_{N-(n+1)}} \mu(Y^{z,m}_s) \,ds + \int_{T-t_{N-n}}^{T-t_{N-(n+1)}} \sigma(Y^{z,m}_s) \,dB^{z,m}_s. \end{equation} Next we introduce suitable real numbers which allow us to reformulate \eqref{NR} in a more compact way. More formally, let $\tau_n \in [0,T]$, $n \in \{0,1,\dots,N\}$, satisfy for every $n\in\{0,1,\dots,N\}$ that \begin{equation} \label{NR2} \tau_n=T-t_{N-n}. \end{equation} Observe that \eqref{eq:time-step-discrete} and \eqref{NR2} ensure that \begin{equation} 0=\tau_0<\tau_1<\dots <\tau_N=T. \end{equation} Moreover, note that \eqref{NR} and \eqref{NR2} demonstrate that for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m \in \N_0$, $n \in \{0,1,\dots,N-1\}$ it holds $\P$-a.s.\ that \begin{equation} Y^{z,m}_{\tau_{n+1}} = Y^{z,m}_{\tau_{n}} + \int_{\tau_n}^{\tau_{n+1}} \mu(Y^{z,m}_s)\,ds + \int_{\tau_n}^{\tau_{n+1}} \sigma(Y^{z,m}_s)\,dB^{z,m}_s. 
\end{equation} This suggests for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m \in \N_0$, $n \in \{0,1,\dots,N-1\}$ that \begin{equation} \label{NR3} Y^{z,m}_{\tau_{n+1}} \approx Y^{z,m}_{\tau_{n}} + \mu( Y^{z,m}_{\tau_{n}}) \,(\tau_{n+1}-\tau_n) + \sigma( Y^{z,m}_{\tau_{n}})\, (B^{z,m}_{\tau_{n+1}}-B^{z,m}_{\tau_n}). \end{equation} Based on \eqref{NR3} we now introduce for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ suitable Euler-Maruyama approximations for the solution processes $Y^{z,m}\colon [0,T]\times \Omega \to \R^d$, $m \in \N_0$, of the SDEs in \eqref{eq:SDE-Y-m}. More formally, for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m \in \N_0$ let $\mathcal{Y}^{z,m}=(\mathcal{Y}^{z,m}_n)_{n \in\{0,1,\dots,N\}} \colon \{0,1,\dots,N\} \times \Omega \to \R^d$ be the stochastic process which satisfies for every $n \in \{0,1,\dots,N-1\}$ that ${\ensuremath{\mathcal{Y}}}^{z,m}_{0}=\xi^{z,m}$ and \begin{equation} \label{NR4} {\ensuremath{\mathcal{Y}}}^{z,m}_{n+1} = {\ensuremath{\mathcal{Y}}}^{z,m}_{n} + \mu({\ensuremath{\mathcal{Y}}}^{z,m}_{n})\,(\tau_{n+1}-\tau_n) + \sigma({\ensuremath{\mathcal{Y}}}^{z,m}_{n})\,(B^{z,m}_{\tau_{n+1}} - B^{z,m}_{\tau_n}). \end{equation} Observe that \eqref{NR2}, \eqref{NR3}, and \eqref{NR4} suggest for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m \in \N_0$, $n\in \{0,1,\dots,N\}$ that \begin{equation} {\ensuremath{\mathcal{Y}}}^{z,m}_n\approx Y^{z,m}_{\tau_n}= Y^{z,m}_{T-{t_{N-n}}}. \end{equation} This, in turn, suggests for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $m \in \N_0$, $n\in \{0,1,\dots,N\}$ that \begin{equation} \label{NR5} Y^{z,m}_{T-{t_{n}}} \approx {\ensuremath{\mathcal{Y}}}^{z,m}_{N-n}. 
\end{equation} In the next step we employ \eqref{NR5} to derive for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ approximations of the stochastic processes $\vartheta^{z,n}\colon \N_0 \times \Omega \to \R^\nu$, $n \in \{0,1,\dots,N\}$, in \eqref{eq:sgdWithOriginalY} which are also implementable in the case where one cannot simulate exactly from the solution processes of the SDEs in \eqref{eq:SDE-Y-m}. More precisely, for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ let $\Theta^{z,n}=(\Theta^{z,n}_m)_{m\in\N_0}\colon \N_0\times\Omega\to\R^{\nu}$, $n\in\{0,1,\ldots,N\}$, be stochastic processes which satisfy for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$ and every $n\in\{1,2,\ldots,N\}$, $m\in\N_0$ that \begin{align}\label{eq:sgdWithApproximatedY} &\Theta^{z,n}_{m+1} = \Theta^{z,n}_m - 2\gamma\cdot (\nabla_{\theta}\bV^{(z)}_n)(\Theta^{z,n}_m,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n}) \cdot \Big[ \bV^{(z)}_n(\Theta^{z,n}_m,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n}) - \bV^{(z)}_{n-1}(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}) \nonumber\\ & \quad -f\big({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1},\bV^{(z)}_{n-1}(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}), (\nabla_x\bV^{(z)}_{n-1})(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})\big)\,(t_{n}-t_{n-1})\nonumber\\ & \quad -b\big({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1},\bV^{(z)}_{n-1}(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}), (\nabla_x\bV^{(z)}_{n-1})(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})\big)\\ & \quad \cdot \big(z_{t_{n}}({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})-z_{t_{n-1}}({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})\big) \Big] \nonumber \end{align} (cf.\ \eqref{phi-z-n-m-Special}--\eqref{eq:sgdWithApproximatedY-SPECIAL} below).
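For illustration, the Euler-Maruyama recursion in \eqref{NR4} can be sketched in a few lines of plain Python. The following one-dimensional sketch is an illustration only and not the implementation employed in Section~\ref{sec:examples} below; the function names and the toy coefficients are chosen for this sketch.

```python
import math
import random

def euler_maruyama_path(xi, mu, sigma, taus, rng):
    # One sample path of the Euler-Maruyama recursion
    #   Y_{n+1} = Y_n + mu(Y_n) * (tau_{n+1} - tau_n) + sigma(Y_n) * dB_n
    # in one spatial dimension, where dB_n ~ N(0, tau_{n+1} - tau_n) is the
    # Brownian increment over [tau_n, tau_{n+1}].
    path = [xi]
    for n in range(len(taus) - 1):
        dt = taus[n + 1] - taus[n]
        dB = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] + mu(path[-1]) * dt + sigma(path[-1]) * dB)
    return path
```

In the degenerate case $\sigma\equiv 0$ the recursion reduces to the explicit Euler method for the ODE $y'(t)=\mu(y(t))$; for instance, with $\mu(y)=y$, $\sigma\equiv 0$, and $1000$ equidistant steps on $[0,1]$ the final value approximates $e\approx 2.718$.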
Note that \eqref{eq:sgdWithOriginalY}, \eqref{NR5}, and \eqref{eq:sgdWithApproximatedY} indicate for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$, every $n \in \{1,2,\dots,N\}$, and every sufficiently large $m \in \N_0$ that \begin{equation} \Theta^{z,n}_m \approx \vartheta^{z,n}_m. \end{equation} In the following two subsections (Subsection~\ref{subsec:algo1} and Subsection~\ref{subsec:algo-Full-gen}) we merge the above derivations to precisely formulate the proposed approximation algorithm, first, in a special case (Subsection~\ref{subsec:algo1}) and, thereafter, in the general case (Subsection~\ref{subsec:algo-Full-gen}). \subsection{Description of the proposed approximation algorithm in a special case} \label{subsec:algo1} In this subsection we describe the proposed approximation algorithm in the special case where the standard Euler-Maruyama scheme (cf., e.g., Kloeden \& Platen \cite{KloedenPlaten1992} and Maruyama \cite{Maruyama1955}) is the employed approximation scheme for discretizing \eqref{eq:SDE-Y-m} (cf.\ \eqref{NR4}) and where the plain vanilla stochastic gradient descent method with constant learning rate $\gamma \in (0,\infty)$ and batch size 1 is the employed minimization algorithm. A more general description of the proposed approximation algorithm, which allows us to incorporate more sophisticated machine learning approximation techniques such as batch normalization (cf., for instance, Ioffe \& Szegedy \cite{IoffeSzegedy2015}) and the Adam optimizer (cf., for example, Kingma \& Ba \cite{KingmaBa2015}), can be found in Subsection~\ref{subsec:algo-Full-gen} below.
\begin{algo}[Special case] \label{algo:1} Let $T,\gamma\in (0,\infty)$, $d,N,M\in\N$, $\varphi \in C^2(\R^d,\R)$, $s\in\{3,4,5,\ldots\}$, $\nu = s(N+1)d(d+1)$, $t_0,t_1,\ldots,t_N\in [0,T]$ satisfy \begin{equation} 0 = t_0 < t_1 < \ldots < t_N = T, \end{equation} let $\tau_0,\tau_1,\dots,\tau_N \in [0,T]$ satisfy for every $n \in \{0,1,\dots,N\}$ that $\tau_n= T-t_{N-n}$, let $ f\colon \R^d\times \R\times \R^{d} \to \R $, $ b\colon \R^d\times \R\times \R^{d} \to \R $, $ \mu\colon\R^d\to\R^d $, and $ \sigma\colon\R^d\to\R^{d\times d} $ be continuous functions, let $(\Omega,{\ensuremath{\mathcal{F}}},\P,(\mathcal F_t)_{t\in [0,T]})$ be a filtered probability space, for every function $z\colon [0,T]\times\R^d\to\R$ let $\xi^{z,m}\colon\Omega\to\R^d$, $m\in\N_0$, be i.i.d.\ $\mathcal F_0$/$\B(\R^d)$-measurable random variables, let $B^{z,m}\colon [0,T]\times\Omega\to\R^d$, $m\in\N_0$, be i.i.d.~standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motions, let ${\ensuremath{\mathcal{Y}}}^{z,m}\colon \{0,1,\ldots,N\}\times\Omega\to\R^d$, $m\in\N_0$, satisfy for every $m\in\N_0$, $n\in\{0,1,\ldots,N-1\}$ that ${\ensuremath{\mathcal{Y}}}^{z,m}_0 = \xi^{z,m}$ and \begin{equation} {\ensuremath{\mathcal{Y}}}^{z,m}_{n+1} = {\ensuremath{\mathcal{Y}}}^{z,m}_n + \mu({\ensuremath{\mathcal{Y}}}^{z,m}_n)\,(\tau_{n+1}-\tau_{n}) + \sigma({\ensuremath{\mathcal{Y}}}^{z,m}_n)\,(B^{z,m}_{\tau_{n+1}}-B^{z,m}_{\tau_{n}}), \end{equation} let $ \mathcal{T}_d \colon \R^d \to \R^d $ satisfy for every $ x = ( x_1, x_2, \dots, x_d ) \in \R^d $ that \begin{equation} \label{eq:activation} \mathcal{T}_d( x ) = \big( \!
\tanh(x_1), \tanh(x_2), \dots , \tanh(x_d) \big) , \end{equation} for every $ \theta = ( \theta_1, \theta_2, \dots, \theta_{ \nu } ) \in \R^{ \nu }$, $ k, l \in \N $, $ v \in \N_0 = \{0\} \cup \N $ with $ v + lk + l \leq \nu $ let $ A^{ \theta, v }_{ k, l } \colon \R^k \to \R^l $ satisfy for every $ x = ( x_1, x_2, \dots, x_k )\in\R^k $ that \begin{equation} A^{ \theta, v }_{ k, l }( x ) = \bigg(\theta_{v+kl+1}+ \left[\textstyle\sum\limits_{i=1}^ k x_i\, \theta_{v+i}\right], \dots, \theta_{v+kl+l} + \left[\textstyle\sum\limits_{i=1}^ k x_i\, \theta_{v+(l-1)k+i}\right] \bigg), \end{equation} for every function $z\colon [0,T]\times\R^d\to\R$ let $\bV^{(z)}_n\colon\R^{\nu}\times\R^d\to\R$, $n\in\{0,1,\ldots,N\}$, satisfy for every $n\in\{1,2,\ldots,N\}$, $\theta \in \R^\nu$, $x \in \R^d$ that $\bV^{(z)}_0(\theta,x) = \varphi(x)$ and \begin{align} \label{eq:neural_network_for_a_generalCase} & \bV^{(z)}_{ n }(\theta,x) = \\ & \big(A^{ \theta, (sn+s-1)d(d+1) }_{ d, 1 } \circ \mathcal{T}_d \circ A^{ \theta, (sn+s-2)d(d+1) }_{ d, d } \circ \ldots \circ \mathcal{T}_d \circ A^{ \theta, (sn+1)d(d+1) }_{ d, d } \circ \mathcal{T}_d \circ A^{ \theta, snd(d+1) }_{ d, d }\big)(x) , \nonumber \end{align} for every function $z\colon [0,T]\times\R^d\to\R$ let $\Theta^{z,n}\colon \N_0\times\Omega\to\R^{\nu}$, $n \in \{0,1,\ldots,N\}$, be stochastic processes, for every function $z\colon [0,T]\times\R^d\to\R$ and every $n \in \{1,2,\ldots,N\}$, $m\in \N_0$ let $\phi^{z,n,m}\colon\R^{\nu}\times\Omega\to\R$ satisfy for every $\theta\in\R^{\nu}$, $\omega\in\Omega$ that \begin{equation}\label{phi-z-n-m-Special} \begin{split} &\phi^{z,n,m}(\theta,\omega) = \Big[ \bV^{(z)}_n\big(\theta,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n}(\omega)\big) - \bV^{(z)}_{n-1}(\Theta^{z,n-1}_M(\omega),{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega)) \, - (t_{n}-t_{n-1}) \\ & \cdot 
f\big({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega),\bV^{(z)}_{n-1}(\Theta^{z,n-1}_M(\omega),{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega)), (\nabla_x\bV^{(z)}_{n-1})(\Theta^{z,n-1}_M(\omega),{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega))\big)\\ & - b\big({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega),\bV^{(z)}_{n-1}(\Theta^{z,n-1}_M(\omega),{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega)), (\nabla_x\bV^{(z)}_{n-1})(\Theta^{z,n-1}_M(\omega),{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega))\big)\\ & \cdot \big(z_{t_{n}}({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega))-z_{t_{n-1}}({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}(\omega))\big) \Big]^2, \end{split} \end{equation} for every function $z\colon [0,T]\times\R^d\to\R$ and every $n\in\{1,2,\ldots,N\}$, $m\in\N_0$ let $\Phi^{z,n,m}\colon\R^{\nu}\times\Omega\to\R^{\nu}$ satisfy for every $\theta\in\R^{\nu}$, $\omega\in\Omega$ that $\Phi^{z,n,m}(\theta,\omega) = (\nabla_{\theta}\phi^{z,n,m})(\theta,\omega)$, and assume for every function $z\colon [0,T]\times\R^d\to\R$ and every $m\in\N_0$, $n\in\{1,2,\ldots,N\}$ that \begin{equation} \label{eq:plain-vanilla-SGD} \Theta^{z,n}_{m+1} = \Theta^{z,n}_m - \gamma\cdot\Phi^{z,n,m}(\Theta^{z,n}_m). 
\end{equation} \end{algo} In the setting of Framework~\ref{algo:1} we note that for every function $z\colon [0,T]\times\R^d\to\R$ and every $n \in \{1,2,\dots,N\}$, $m \in \N_0$ it holds that \begin{align}\label{eq:sgdWithApproximatedY-SPECIAL} &\Theta^{z,n}_{m+1} = \Theta^{z,n}_m - 2\gamma\cdot (\nabla_{\theta}\bV^{(z)}_n)(\Theta^{z,n}_m,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n}) \cdot \Big[ \bV^{(z)}_n(\Theta^{z,n}_m,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n}) - \bV^{(z)}_{n-1}(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}) \nonumber\\ & \quad -f\big({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1},\bV^{(z)}_{n-1}(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}), (\nabla_x\bV^{(z)}_{n-1})(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})\big)\,(t_{n}-t_{n-1})\nonumber\\ & \quad -b\big({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1},\bV^{(z)}_{n-1}(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1}), (\nabla_x\bV^{(z)}_{n-1})(\Theta^{z,n-1}_M,{\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})\big)\\ & \quad \cdot \big(z_{t_{n}}({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})-z_{t_{n-1}}({\ensuremath{\mathcal{Y}}}^{z,m}_{N-n+1})\big) \Big] \nonumber \end{align} (cf.\ \eqref{eq:sgdWithOriginalY}, \eqref{eq:sgdWithApproximatedY}, \eqref{phi-z-n-m-Special}, and \eqref{eq:plain-vanilla-SGD} above). 
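Detached from the neural network setting, the plain vanilla stochastic gradient descent step \eqref{eq:plain-vanilla-SGD} with constant learning rate and batch size 1 can be illustrated on the toy minimization problem $\R\ni\theta\mapsto\E[|\theta-X|^2]$ with $X\sim\mathcal N(2,1)$, whose minimizer is $\E[X]=2$; in the following Python sketch (an illustration only, not part of Framework~\ref{algo:1}) the sampled gradient $2(\theta-x)$ plays the role of $\Phi^{z,n,m}$.

```python
import random

def sgd(theta, sampled_grad, gamma, steps, rng):
    # Plain vanilla stochastic gradient descent with constant learning rate
    # gamma and batch size 1: theta_{m+1} = theta_m - gamma * Phi_m(theta_m).
    for _ in range(steps):
        theta -= gamma * sampled_grad(theta, rng)
    return theta

# Toy loss theta -> E[(theta - X)^2] with X ~ N(2, 1); one unbiased gradient
# sample is 2 * (theta - x) with x drawn from N(2, 1).
sampled_grad = lambda theta, rng: 2.0 * (theta - rng.gauss(2.0, 1.0))
theta_hat = sgd(0.0, sampled_grad, 0.01, 5000, random.Random(0))
```

For this toy problem the iterates fluctuate around the minimizer $\E[X]=2$ with a standard deviation of order $\sqrt{\gamma}$, which is one reason why learning rates are typically decreased in later stages of training.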
Moreover, in the setting of Framework~\ref{algo:1} we think under suitable hypotheses for sufficiently large $N,M \in \N$, for sufficiently small $\gamma \in (0,\infty)$, for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R$, and for every $n \in \{0,1,\dots,N\}$, $x \in \R^d$ of $\mathbb V^{(z)}_n(\Theta^{z,n}_M,x)\colon \Omega \to \R$ as a suitable approximation \begin{equation}\label{V-Algo-result-Special} \mathbb V^{(z)}_n(\Theta^{z,n}_M,x) \approx \E\big[X_{t_n}(x)\!~|\!~Z=z\big] \end{equation} of $\E[X_{t_n}(x)\,|\,Z=z]$ where $ X\colon [0,T]\times\R^d\times\Omega\to\R $ is a random field which satisfies for every $ t\in [0,T] $, $ x\in\R^d $ that $ X_t(x)\colon\Omega\to\R $ is $\mathcal F_t$/$\B(\R)$-measurable, which satisfies for every $\omega\in\Omega$ that $ (X_t(x,\omega))_{(t,x)\in [0,T]\times\R^d}\in C^{0,2}([0,T]\times\R^d,\R) $ has at most polynomially growing partial derivatives, and which satisfies that for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.~that \begin{equation}\label{eq:mildFormulationSPDE-SPECIAL} \begin{split} X_{ t }( x ) & = \varphi( x ) + \int_{ 0 }^{ t } f\big( x, X_s(x), ( \nabla X_s )( x ) \big) \, ds + \int_{ 0 }^{ t } b\big( x, X_s( x ), ( \nabla X_s )( x ) \big) \, dZ_s(x) \\ & \quad + \int_{ 0 }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma(x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds \end{split} \end{equation} where $Z\colon [0,T]\times \R^d \times \Omega \to \R$ is a sufficiently regular random field (cf.\ \eqref{eq:mildFormulationSPDE}, \eqref{eq:VapproxXWithZPluggedIn}, \eqref{eq:VapproxXWithZPluggedIn-b}, \eqref{eq:VThetaApproxVz}, and \eqref{eq:VThetaApproxVz2}).
\subsection{Description of the proposed approximation algorithm in the general case} \label{subsec:algo-Full-gen} In this subsection we present in Framework~\ref{def:general_algorithm} a general approximation algorithm which includes the proposed approximation algorithm derived in Subsections~\ref{subsec:temp-discret}--\ref{subsec:algo1} above as a special case. Compared to Framework~\ref{algo:1}, Framework~\ref{def:general_algorithm} allows one to incorporate other minimization algorithms than just the plain vanilla stochastic gradient descent method (see, e.g., \eqref{eq:plain-vanilla-SGD} in Framework~\ref{algo:1} in Subsection~\ref{subsec:algo1} above) such as, for example, the Adam optimizer (cf.\ Kingma \& Ba \cite{KingmaBa2015}). Furthermore, Framework~\ref{def:general_algorithm} also allows one to incorporate more advanced machine learning techniques like batch normalization (cf.\ Ioffe \& Szegedy \cite{IoffeSzegedy2015} and \eqref{eq:general_batch_normalization} below). \begin{algo}[General case] \label{def:general_algorithm} Let $T \in (0,\infty)$, $M,N, d, \delta, \varrho, \nu, \varsigma \in \N$, $(J_m)_{m \in \N_0} \subseteq \N$, $t_0,t_1,\ldots,t_N\in [0,T]$ satisfy $0 = t_0 < t_1 < \ldots < t_N = T$, let $\tau_0,\tau_1,\dots,\tau_N \in [0,T]$ satisfy for every $n \in \{0,1,\dots,N\}$ that $\tau_n= T-t_{N-n}$, let $f\colon\R^d\times\R\times\R^d\to\R$, $b\colon\R^d\times\R\times\R^d\to\R^\delta$, $H\colon [0,T]^2\times\R^d\times\R^d\to\R^d$, and $\mathcal{H}_n\colon \R^d \times \R \times \R^d \times \R^\delta \to \R$, $n\in \{1,2,\dots,N\}$, be functions, let $ ( \Omega, {\ensuremath{\mathcal{F}}}, \P ) $ be a probability space with a normal filtration $( \mathcal F_t )_{ t \in [0,T] }$, for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n \in \{1,2,\ldots,N\}$ let $ B^{z,n,m,j} \colon [0,T] \times \Omega \to \R^d $, $ m \in \N_0 $, $ j \in \N $, be i.i.d.\ standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motions, for every function
$z\colon [0,T]\times\R^d\to\R^\delta$ and every $n \in \{1,2,\ldots,N\}$ let $ \xi^{z,n,m,j}\colon\Omega\to\R^d $, $ m \in \N_0 $, $ j \in \N $, be i.i.d.\ $ \mathcal F_0/ \B(\R^d) $-measurable random variables, for every function $z\colon [0,T]\times\R^d\to\R^\delta$ let $ \bV^{z,j,{\bf s}}_n\colon\R^\nu\times\R^d \to\R$, $j\in \N$, ${\bf s}\in \R^\varsigma$, $n \in \{0,1,\ldots,N\}$, be functions, for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n \in \{1,2,\ldots,N\}$, $m \in \N_0$, $j \in \N$ let ${\ensuremath{\mathcal{Y}}}^{z,n,m,j}\colon \{0,1,\ldots,N\}\times\Omega\to\R^d$ be a stochastic process which satisfies for every $k\in\{0,1,\ldots,N-1\}$ that ${\ensuremath{\mathcal{Y}}}^{z,n,m,j}_0 = \xi^{z,n,m,j}$ and \begin{equation}\label{eq:FormalXapprox} {\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{k+1} = H(\tau_{k+1},\tau_{k},{\ensuremath{\mathcal{Y}}}^{z,n,m,j}_k,B^{z,n,m,j}_{\tau_{k+1}}-B^{z,n,m,j}_{\tau_{k}}), \end{equation} for every function $z\colon [0,T]\times\R^d\to\R^\delta$ let $\Theta^{z,n}\colon\N_0\times\Omega\to\R^{\nu}$, $n \in \{0,1,\dots,N\}$, be stochastic processes, for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n\in\{1,2,\ldots,N\}$, $m\in\N_0$, ${\bf s}\in\R^{\varsigma}$ let $\phi^{z,n,m,{\bf s}}\colon\R^{\nu}\times\Omega\to\R$ satisfy for every $\theta \in \R^{\nu}$, $\omega \in \Omega$ that \begin{align} &\phi^{z,n,m,{\bf s}}(\theta,\omega) = \frac{1}{J_m}\sum_{j=1}^{J_m} \bigg[ \bV^{z,j,{\bf s}}_n\big(\theta,{\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{N-n}(\omega)\big) \nonumber\\ & - \mathcal{H}_n\Big( {\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{N-n+1}(\omega), \bV^{z,j,{\bf s}}_{n-1}\big(\Theta^{z,n-1}_{M}(\omega),{\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{N-n+1}(\omega)\big),\\ & \quad \quad \quad (\nabla_x \bV^{z,j,{\bf s}}_{n-1})\big(\Theta^{z,n-1}_{M}(\omega), {\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{N-n+1}(\omega)\big),
z_{t_n}\big({\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{N-n+1}(\omega)\big)-z_{t_{n-1}}\big({\ensuremath{\mathcal{Y}}}^{z,n,m,j}_{N-n+1}(\omega)\big)\Big) \bigg]^2, \nonumber \end{align} for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n\in\{1,2,\ldots,N\}$, $m\in\N_0$, ${\bf s}\in\R^{\varsigma}$ let $\Phi^{z,n,m,{\bf s}}\colon\R^{\nu}\times\Omega\to\R^{\nu}$ satisfy for every $\omega\in\Omega$, $\theta\in\{\eta\in\R^{\nu}\colon \phi^{z,n,m,{\bf s}}(\cdot,\omega)\colon\R^{\nu}\to\R~\text{is differentiable at}~\eta\}$ that \begin{align} \Phi^{z,n,m,{\bf s}}(\theta,\omega) = (\nabla_{\theta}\phi^{z,n,m,{\bf s}})(\theta,\omega), \end{align} for every function $z\colon [0,T]\times\R^d\to\R^\delta$ let $\S^{z,n}\colon\R^{\varsigma}\times\R^{\nu}\times(\R^d)^{\{0,1,\ldots,N\}\times\N}\to\R^{\varsigma}$, $n\in\{1,2,\ldots,N\}$, be functions, for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n\in\{1,2,\ldots,N\}$, $m\in\N_0$ let $\psi^{z,n}_m\colon\R^{\varrho}\to\R^{\nu}$ and $\Psi^{z,n}_m\colon\R^{\varrho}\times\R^{\nu}\to\R^{\varrho}$ be functions, for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n\in\{1,2,\ldots,N\}$ let $\bS^{z,n}\colon\N_0\times\Omega\to\R^{\varsigma}$ and $\Xi^{z,n}\colon\N_0\times\Omega\to\R^{\varrho}$ be stochastic processes which satisfy for every $m\in\N_0$ that \begin{equation}\label{eq:general_batch_normalization} \bS^{z,n}_{m+1} = \S^{z,n}\bigl(\bS^{z,n}_m, \Theta^{z,n}_{m}, ({\ensuremath{\mathcal{Y}}}_k^{z,n,m,i})_{(k,i)\in\{0,1,\ldots,N\}\times\N}\bigr), \end{equation} \begin{equation} \Xi^{z,n}_{m+1} = \Psi^{z,n}_{m}(\Xi^{z,n}_{m},\Phi^{z,n,m,\bS^{z,n}_{m+1}}(\Theta^{z,n}_m)), \quad \text{and} \quad \Theta^{z,n}_{m+1} = \Theta^{z,n}_{m} - \psi^{z,n}_{m}(\Xi^{z,n}_{m+1}) \label{eq:general_gradient_step}.
\end{equation} \end{algo} In the setting of Framework~\ref{def:general_algorithm} we think under suitable hypotheses for sufficiently large $N \in \N$, for every sufficiently regular function $z\colon [0,T]\times\R^d\to\R^\delta$, for every sufficiently large $m \in \N$, and for every $n \in \{0,1,\dots,N\}$, $x \in \R^d$ of $\mathbb{V}^{z,1,\mathbb{S}_m^{z,n}}_n(\Theta^{z,n}_m,x)\colon \Omega \to \R$ as a suitable approximation \begin{equation}\label{eq:V-approx-gen-frame} \mathbb{V}^{z,1,\mathbb{S}^{z,n}_m}_n(\Theta^{z,n}_m,x) \approx \E\big[X_{t_n}(x)\!~|\!~Z=z\big] \end{equation} of $\E[X_{t_n}(x)\,|\,Z=z]$ where $ X\colon [0,T]\times\R^d\times\Omega\to\R $ is a random field which satisfies for every $ t\in [0,T] $, $ x\in\R^d $ that $ X_t(x)\colon\Omega\to\R $ is $\mathcal F_t$/$\B(\R)$-measurable, which satisfies for every $\omega\in\Omega$ that $ (X_t(x,\omega))_{(t,x)\in [0,T]\times\R^d}\in C^{0,2}([0,T]\times\R^d,\R) $ has at most polynomially growing partial derivatives, and which satisfies that for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.~that \begin{align} X_{ t }( x ) & = \varphi( x ) + \int_{ 0 }^{ t } f\big( x, X_s(x), ( \nabla X_s )( x ) \big) \, ds + \int_{ 0 }^{ t } \big\langle b\big( x, X_s( x ), ( \nabla X_s )( x ) \big), dZ_s(x) \big\rangle_{\R^\delta} \nonumber\\ & \quad + \int_{ 0 }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma(x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds \label{eq:mildFormulationSPDE-GENERAL} \end{align} where $\varphi\colon \R^d \to \R$ is a continuous function, where $\mu\colon \R^d \to \R^d$ is a sufficiently regular function, where $\sigma\colon \R^d \to \R^{d\times d}$ is a sufficiently regular and sufficiently non-degenerate function, and where $Z\colon [0,T]\times \R^d \times \Omega \to \R^\delta$ is a sufficiently regular random field (cf.\ \eqref{eq:mildFormulationSPDE},
\eqref{eq:VapproxXWithZPluggedIn}, \eqref{eq:VapproxXWithZPluggedIn-b}, \eqref{eq:VThetaApproxVz}, and \eqref{eq:VThetaApproxVz2}). \section{Examples} \label{sec:examples} In this section we depict the performance of the algorithm proposed in Subsection~\ref{subsec:algo-Full-gen} by providing numerical simulations for four example SPDEs. More precisely, we apply the proposed approximation algorithm to stochastic heat equations with additive noise (cf.\ Subsection~\ref{subsec:stoch_heat} below), to stochastic heat equations with multiplicative noise (cf.\ Subsection~\ref{subsec:const-coeff} below), to stochastic Black--Scholes equations with multiplicative noise (cf.\ Subsection~\ref{subsec:geom_BM} below), and to Zakai equations (cf.\ Subsection~\ref{subsec:Zakai} below). In each of these numerical simulations we employ the general approximation method in Subsection~\ref{subsec:algo-Full-gen} in conjunction with the Milstein approximation scheme (cf.\ \cite[Section~10.3]{KloedenPlaten1992}) and the Adam optimizer (cf.\ \eqref{eq:examples_setting_moment_estimation} and \eqref{eq:examples_setting_adam_grad_update} in Framework~\ref{frame:adam} below and Kingma \& Ba~\cite{KingmaBa2015}) with mini-batches of $64$ samples in each iteration step (see Framework~\ref{frame:adam} for a detailed description). For every sufficiently regular function $z\colon[0,T]\times \R^d \to \R$ we employ in our implementation $N$ fully-connected feedforward neural networks to represent $ \bV^{z,j,{\bf s}}_n(\theta,x) $ for $ n \in \{ 1, 2, \dots, N\} $, $ j \in \{ 1, 2, \dots, 64 \} $, $ {\bf s} \in \R^{\varsigma}$, $ \theta \in \R^{ \nu } $, $x \in \R^d$. Each of these neural networks consists of $ 4 $ layers ($ 1 $ input layer [$ d $-dimensional], $ 2 $ hidden layers [both $(d+50)$-dimensional], and $ 1 $ output layer [$1$-dimensional]). We employ the $\tanh$ function as the activation function for the hidden layers.
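A minimal pure-Python sketch of the forward pass of one such subnetwork reads as follows; it mirrors the $d \to (d+50) \to (d+50) \to 1$ architecture with $\tanh$ activations and a single flat parameter vector (in the spirit of \eqref{eq:neural_network_for_a_generalCase}), but it omits batch normalization and uses a small hidden width $h$ in place of $d+50$ (the actual experiments are implemented in {\sc TensorFlow}).

```python
import math

def affine(theta, off, x, n_in, n_out):
    # Affine layer x -> W x + b with the entries of W and b read from the flat
    # parameter vector theta starting at offset off (weights first, then
    # biases, in the spirit of the maps A^{theta,v}_{k,l} above).
    w = theta[off : off + n_in * n_out]
    b = theta[off + n_in * n_out : off + n_in * n_out + n_out]
    return [b[j] + sum(w[j * n_in + i] * x[i] for i in range(n_in))
            for j in range(n_out)]

def subnetwork(theta, x, d, h):
    # Forward pass d -> h -> h -> 1 with componentwise tanh activations; in
    # the experiments h = d + 50 and batch normalization is applied as well.
    y = [math.tanh(v) for v in affine(theta, 0, x, d, h)]
    y = [math.tanh(v) for v in affine(theta, d * h + h, y, h, h)]
    return affine(theta, d * h + h + h * h + h, y, h, 1)[0]
```

One such subnetwork has $(d+1)h + (h+1)h + (h+1)$ parameters; e.g., for $d=2$ and $h=3$ the flat parameter vector has $25$ entries.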
We also employ batch normalization (BN) (see Ioffe \& Szegedy~\cite{IoffeSzegedy2015}) just before the first affine linear transformation (batch normalization for the input) as well as just before every application of the multidimensional version of the $\tanh$ activation function (batch normalization for the hidden layers just before activation). All the weights in the network are initialized using a normal or a uniform distribution. Each of the numerical experiments presented below is performed in {\sc Python} using {\sc TensorFlow} on an NVIDIA GeForce RTX 2080 Ti GPU. The underlying system is an AMD Ryzen~9 3950X CPU with 64 GB DDR4 memory running TensorFlow~2.1 on Ubuntu~19.10. We also refer to Section~\ref{sec:source} below for the {\sc Python} source codes associated with the numerical simulations in Subsections~\ref{subsec:stoch_heat}--\ref{subsec:Zakai} below. \begin{algo} \label{frame:adam} Assume Framework~\ref{def:general_algorithm}, let $\nu=(N+1)[(d+50)(d+1)+ (d+50)(d+51)+(d+51)]$ (cf.\ E et al.~\cite[Remark 4.1]{EHanJentzen2017} and the second paragraph of this section), $\varepsilon\in (0,\infty)$, $\beta_1 = \tfrac{9}{10}$, $\beta_2 = \tfrac{999}{1000}$, $(\gamma_m)_{m\in\N_0}\subseteq (0,\infty)$, let $\operatorname{Pow}_r \colon \R^{\nu}\to\R^{\nu}$, $r\in (0,\infty)$, satisfy for every $r\in (0,\infty)$, $x=(x_1,x_2,\ldots,x_{\nu})\in\R^{\nu}$ that $\operatorname{Pow}_r(x) = (|x_1|^r,|x_2|^r,\ldots,|x_{\nu}|^r)$, let $ \varphi\colon \R^d \to \R $, $ \mu=(\mu_1,\mu_2,\dots,\mu_d) \colon \R^d \to \R^d $, and $ \sigma \colon \R^d \to \R^{ d \times d } $ be functions, let $ X\colon [0,T]\times\R^d\times\Omega\to\R $ and $ Z \colon [0,T] \times \R^d \times \Omega \to \R^\delta $ be random fields, assume for every $ t\in [0,T] $, $ x\in\R^d $ that $ X_t(x)\colon\Omega\to\R $ is $\mathcal F_t$/$\B(\R)$-measurable, assume for every $ x\in\R^d $ that $ (Z_t(x))_{t\in [0,T]} \colon [0,T] \times \Omega \to \R^\delta $ is an $ (\mathcal F_t)_{t\in [0,T]}
$-It\^o process, assume for every $\omega\in\Omega$ that $ (X_t(x,\omega))_{(t,x)\in [0,T]\times\R^d}\in C^{0,2}([0,T]\times\R^d,\R) $ has at most polynomially growing partial derivatives, assume that for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.\ that \begin{align} X_{ t }( x ) & = \varphi( x ) + \int_{ 0 }^{ t } f\big( x, X_s(x), ( \nabla X_s )( x ) \big) \, ds + \int_{ 0 }^{ t } \big\langle b\big( x, X_s( x ), ( \nabla X_s )( x ) \big), dZ_s(x) \big\rangle_{\R^\delta} \nonumber \\ & \quad + \int_{ 0 }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma(x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds, \label{eq:mildFormulationSPDE-EXAMPLE-SETTING} \end{align} assume for every $n \in \{1,2,\dots,N\}$, $m\in\N_0$, $i\in\{0,1,\ldots,N\}$ that $ J_m = 64 $, $ t_i = \tfrac{iT}{N} $, and $ \varrho = 2 \nu $, and assume for every function $z\colon [0,T]\times\R^d\to\R^\delta$ and every $n \in \{1,2,\ldots,N\}$, $m\in\N_0$, $x=(x_1,x_2,\ldots,x_{\nu}),~y=(y_1,y_2,\ldots,y_{\nu})\in\R^{\nu}$, $\eta = ( \eta_1 , \eta_2, \ldots , \eta_{\nu} )\in \R^{\nu}$ that \begin{align}\label{eq:examples_setting_moment_estimation} \Psi^{z,n}_m ( x , y , \eta ) = (\beta_1 x + (1-\beta_1) \eta, \beta_2 y + (1-\beta_2) \operatorname{Pow}_2(\eta)) \end{align} and \begin{align}\label{eq:examples_setting_adam_grad_update} \psi^{z,n}_m ( x,y ) = \biggl( \Bigl[ \sqrt{\tfrac{|y_1|}{1-(\beta_2)^m}} + \varepsilon \Bigr]^{-1} \frac{\gamma_m x_{1}}{1-(\beta_1)^m}, \ldots, \Bigl[ \sqrt{\tfrac{|y_{\nu}|}{1-(\beta_2)^m}} + \varepsilon \Bigr]^{-1} \frac{\gamma_m x_{\nu}}{1-(\beta_1)^m} \biggr).
\end{align} \end{algo} Note that \eqref{eq:examples_setting_moment_estimation} and \eqref{eq:examples_setting_adam_grad_update} in Framework~\ref{frame:adam} describe the Adam optimizer (cf.\ Kingma \& Ba \cite{KingmaBa2015}, e.g., E et al.\ \cite[(32)--(33) in Section 4.2 and (90)--(91) in Section 5.2]{EHanJentzen2017}, and lines 108--110 in {\sc Python} code~\ref{code:common} in Section~\ref{sec:source} below). \subsection{Stochastic heat equations with additive noise} \label{subsec:stoch_heat} In this subsection we apply the approximation algorithm in Framework~\ref{def:general_algorithm} to the stochastic heat equations with additive noise in \eqref{eq:ex-heat-add} below. Assume Framework~\ref{frame:adam}, assume that $T=1$, $N=5$, $M=8000$, $d\in\{1,5,10,20,50\}$, $\delta=1$, and $\varepsilon=10^{-8}$, let $W\colon [0,T]\times\Omega\to\R$ be a standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motion, and assume for every $s,t \in [0,T]$, $x,w\in \R^d$, $ u \in \R$, $m\in \N_0$ that $H(t,s,x,w)= x +\sqrt{2} w$, $ f(x,u,w) = 0 $, $ b(x,u,w) = 1 $, $\mu(x)=0$, $\sigma(x)w=\sqrt{2}w$, $\varphi(x)=\|x\|_{\R^d}^2$, $ Z_t(x) =W_t $, $\gamma_m = 10^{-1} \mathbbm{1}_{[0,2000]}(m) + 10^{-2} \mathbbm{1}_{(2000,4000]}(m) + 10^{-3} \mathbbm{1}_{(4000,6000]}(m) + 10^{-4} \mathbbm{1}_{(6000,8000]}(m)$, and $\mathcal{H}_n(x,u,w,z)=u+z$ (cf., for instance, Kloeden \& Platen \cite[Section~10.3]{KloedenPlaten1992}). Note that \eqref{eq:mildFormulationSPDE-EXAMPLE-SETTING} ensures that for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.\ that \begin{equation}\label{eq:ex-heat-add} X_t(x) = \|x\|_{\R^d}^2 + \int_0^t (\Delta_x X_s)(x) \,ds + W_t. \end{equation} Next we depict our numerical simulation results for the stochastic heat equations with additive noise in \eqref{eq:ex-heat-add}.
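The Adam recursions in \eqref{eq:examples_setting_moment_estimation}--\eqref{eq:examples_setting_adam_grad_update} can be sketched in plain Python as follows; here `x` and `y` carry the first and second moment estimates, `m >= 1` is the step counter, and the quadratic toy objective at the end is for illustration only (in the experiments Adam is applied to the losses $\phi^{z,n,m,{\bf s}}$).

```python
def adam_step(theta, x, y, grad, m, gamma, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponentially weighted first and second moment estimates (componentwise).
    x = [beta1 * xi + (1.0 - beta1) * g for xi, g in zip(x, grad)]
    y = [beta2 * yi + (1.0 - beta2) * g * g for yi, g in zip(y, grad)]
    # Bias-corrected gradient step with learning rate gamma.
    theta = [t - gamma * (xi / (1.0 - beta1 ** m))
             / ((yi / (1.0 - beta2 ** m)) ** 0.5 + eps)
             for t, xi, yi in zip(theta, x, y)]
    return theta, x, y

# Toy objective theta -> (theta - 3)^2 with exact gradient 2 * (theta - 3).
theta, x, y = [0.0], [0.0], [0.0]
for m in range(1, 3001):
    theta, x, y = adam_step(theta, x, y, [2.0 * (theta[0] - 3.0)], m, 0.01)
```

On this toy objective the iterates initially move at a rate of roughly $\gamma$ per step (the bias-corrected update is approximately sign-normalized) and then settle in a small neighborhood of the minimizer $3$.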
In Table~\ref{table:heat-add} we present numerical approximations for the relative $L^2$-errors $\big(\E\big[|X_T(0)|^{-2}{|\mathbb{V}_N^{0,1,\mathbb{S}^{0,N}_m}\!\!(\Theta^{0,N}_m,0)-X_T(0) |^2}\big]\big)^{1/2}$ for $d \in \{1,5,10,20,50\}$ (cf.\ \eqref{eq:general_gradient_step} and \eqref{eq:V-approx-gen-frame}). In our approximative computations for the relative $L^2$-errors, the exact solutions of the SPDEs in \eqref{eq:ex-heat-add} have been approximately computed by means of the well-known result in Lemma~\ref{le:heat-additive} below. \begin{lemma}\label{le:heat-additive} Let $T\in (0,\infty)$, $d\in\N$, let $(\Omega, \mathcal{F}, \P)$ be a probability space, let $W\colon [0,T]\times\Omega\to\R$ be a stochastic process with continuous sample paths, and let $ X \colon [0,T] \times \R^d \times \Omega \to \R $ satisfy for every $ t \in [0,T] $, $ x \in \R^d $ that \begin{equation}\label{le:eq:SPDE-heat-add} X_t(x) = \|x\|_{\R^d}^2 + 2td +W_t. \end{equation} Then \begin{enumerate}[(i)] \item\label{le:eq-heat-add-1} it holds for every $\omega\in \Omega$ that $([0,T]\times\R^d \ni (t,x) \mapsto X_t(x,\omega)\in \R) \in C^{0,2}([0,T]\times\R^d,\R)$ and \item\label{le:eq-heat-add-2} it holds for every $t\in [0,T]$, $x\in\R^d$ that \begin{equation} X_t(x) = \|x\|_{\R^d}^2 + \int_0^t (\Delta_x X_s)(x)\,ds + W_t. \end{equation} \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:heat-additive}] Throughout this proof let $\mathfrak{C} \in \R^{d\times d}$ satisfy for every $x\in \R^d$ that $\mathfrak{C}x= 2x$ and let $v\colon[0,T]\times \R^d \to \R$ satisfy for every $t\in [0,T]$, $x\in \R^d$ that \begin{equation}\label{eq:le:add:PDE} v(t,x)= \|x\|_{\R^d}^2 + t \operatorname{Trace}(\mathfrak{C}). 
\end{equation} Note that Lemma~\ref{le:Heat-x-2} (applied with $T\with T$, $d\with d$, $\mathfrak{C}\with \mathfrak{C}$, $u\with v$ in the notation of Lemma~\ref{le:Heat-x-2}) and \eqref{eq:le:add:PDE} ensure that \begin{enumerate}[(a)] \item \label{le:heat-add-i} it holds that $v \in C^\infty([0,T]\times \R^d,\R)$ and \item \label{le:heat-add-ii} it holds for every $t\in [0,T]$, $x\in \R^d$ that $v(0,x)=\|x\|_{\R^d}^2$ and \begin{equation} \label{eq:PDE-Heat-identity} (\tfrac{\partial}{\partial t} v)(t,x)= \tfrac{1}{2} \operatorname{Trace}\big( \mathfrak{C}(\operatorname{Hess}_x v)(t,x) \big)= (\Delta_x v)(t,x). \end{equation} \end{enumerate} Moreover, note that \eqref{eq:le:add:PDE} and the fact that $\operatorname{Trace}(\mathfrak{C})=2d$ prove that for every $t\in [0,T]$, $x\in \R^d$ it holds that \begin{equation}\label{eq:heat-add-rep} v(t,x)= \|x\|_{\R^d}^2 + 2 td. \end{equation} Combining this, item~\eqref{le:heat-add-i}, and \eqref{le:eq:SPDE-heat-add} hence ensures that for every $\omega\in \Omega$ it holds that $([0,T]\times\R^d \ni (t,x) \mapsto X_t(x,\omega)\in \R) \in C^{0,\infty}([0,T]\times\R^d,\R)$. This establishes item~\eqref{le:eq-heat-add-1}. % % Moreover, note that \eqref{eq:heat-add-rep}, the fundamental theorem of calculus, \eqref{le:eq:SPDE-heat-add}, and items \eqref{le:heat-add-i} and \eqref{le:heat-add-ii} ensure that for every $t \in [0,T]$, $x \in \R^d$ it holds that \begin{align} X_t(x) &= \|x\|_{\R^d}^2 + 2td + W_t =v(t,x)+ W_t \nonumber\\ &= \|x\|_{\R^d}^2 + \int_0^t (\tfrac{\partial}{\partial s}v)(s,x) \,ds + W_t \\ &= \|x\|_{\R^d}^2 + \int_0^t (\Delta_x v)(s,x)\, ds + W_t = \|x\|_{\R^d}^2 + \int_0^t (\Delta_x X_s)(x)\, ds + W_t. \nonumber \end{align} This completes the proof of Lemma~\ref{le:heat-additive}. 
\end{proof}
\begin{table}[ht]
\begin{center}
\small
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\makecell{d} & \makecell{Result\\of the\\ approx.\\algorithm} & \makecell{Runtime\\ in\\ seconds} & \makecell{Reference\\ solution} & \makecell{Relative\\ pathwise\\ error} & \makecell{Relative\\ $L^2$-error}\\
\hline
1 & 2.018 & 80.04 & 2.035 & 0.0084 & \\
1 & 4.590 & 79.26 & 4.561 & 0.0064 & \\
1 & 3.039 & 79.07 & 3.020 & 0.0063 & 0.0060 \\
1 & 2.323 & 78.81 & 2.322 & 0.0006 & \\
1 & 1.482 & 79.18 & 1.489 & 0.0053 & \\
\hline
5 & 9.529 & 79.94 & 9.550 & 0.0022 & \\
5 & 9.903 & 80.55 & 9.922 & 0.0019 & \\
5 & 10.764 & 80.44 & 10.701 & 0.0059 & 0.0040 \\
5 & 11.682 & 80.43 & 11.624 & 0.0050 & \\
5 & 9.259 & 79.54 & 9.230 & 0.0032 & \\
\hline
10 & 18.841 & 80.10 & 18.970 & 0.0068 & \\
10 & 21.157 & 80.08 & 21.078 & 0.0038 & \\
10 & 20.899 & 80.27 & 20.766 & 0.0064 & 0.0050 \\
10 & 21.763 & 80.40 & 21.734 & 0.0013 & \\
10 & 20.105 & 80.91 & 20.009 & 0.0048 & \\
\hline
20 & 40.119 & 79.94 & 40.183 & 0.0016 & \\
20 & 40.158 & 80.14 & 40.024 & 0.0034 & \\
20 & 40.316 & 80.19 & 40.166 & 0.0037 & 0.0031 \\
20 & 40.032 & 80.26 & 39.891 & 0.0035 & \\
20 & 39.159 & 79.87 & 39.059 & 0.0026 & \\
\hline
50 & 98.139 & 79.36 & 98.762 & 0.0063 & \\
50 & 100.318 & 79.79 & 101.261 & 0.0093 & \\
50 & 100.458 & 80.71 & 100.997 & 0.0053 & 0.0063 \\
50 & 99.777 & 80.84 & 100.196 & 0.0042 & \\
50 & 99.812 & 80.01 & 100.330 & 0.0052 & \\
\hline
\end{tabular}
\caption{Numerical simulations for the stochastic heat equations with additive noise in \eqref{eq:ex-heat-add}.}
\label{table:heat-add}
\end{center}
\end{table}
\subsection{Stochastic heat equations with multiplicative noise}
\label{subsec:const-coeff}
In this subsection we apply the approximation algorithm in Framework~\ref{def:general_algorithm} to the stochastic heat equations with multiplicative noise in \eqref{eq:ex-heat-multi} below.
Assume Framework~\ref{frame:adam}, assume that $T=0.5$, $N=25$, $M=12000$, $d\in\{1, 5, 10, 20, 50\}$, $\delta=1$, and $\varepsilon=10^{-8}$, let $W\colon [0,T]\times\Omega\to\R$ be a standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motion, and assume for every $s,t \in [0,T]$, $x,w\in \R^d$, $u,z \in \R$, $n \in \{1,2,\dots,N\}$, $m\in \N_0$ that $H(t,s,x,w)= x +\sqrt{2}w$, $ f(x,u,w) = 0 $, $ b(x,u,w) = u $, $\mu(x)=0$, $\sigma(x)w=\sqrt{2}w$, $\varphi(x)=\|x\|_{\R^d}^2$, $ Z_t(x) = W_t $, $\gamma_m = 10^{-1} \mathbbm{1}_{[0,5000]}(m)$ $ + 10^{-2} \mathbbm{1}_{(5000,7000]}(m) + 10^{-3} \mathbbm{1}_{(7000,10000]}(m) + 10^{-4} \mathbbm{1}_{(10000,12000]}(m)$, and $\mathcal{H}_n(x,u,w,z)=u\big(1+z+\frac{1}{2}z^2-\frac{1}{2}(t_n-t_{n-1})\big)$ (cf., for instance, Kloeden \& Platen \cite[Section~10.3]{KloedenPlaten1992}). Note that \eqref{eq:mildFormulationSPDE-EXAMPLE-SETTING} ensures that for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.\ that \begin{equation}\label{eq:ex-heat-multi} X_t(x) = \|x\|_{\R^d}^2 + \int_0^t (\Delta_x X_s)(x) \,ds + \int_0^t X_s(x) \,dW_s. \end{equation} Next we depict our numerical simulation results for the stochastic heat equations with multiplicative noise in \eqref{eq:ex-heat-multi}. In Table~\ref{table:heat-mult} we present numerical approximations for the relative $L^2$-errors $\big(\E\big[|X_T(0)|^{-2}{|\mathbb{V}_N^{0,1,\mathbb{S}^{0,N}_m}\!\!(\Theta^{0,N}_m,0)-X_T(0) |^2}\big]\big)^{1/2}$ for $d \in \{1,5,10,20,50\}$ (cf.\ \eqref{eq:general_gradient_step} and \eqref{eq:V-approx-gen-frame}). In our approximative computations for the relative $L^2$-errors, the exact solutions of the SPDEs in \eqref{eq:ex-heat-multi} have been approximately computed by means of the well-known result in Lemma~\ref{le:heat-x-quadrat} below. Our proof of Lemma~\ref{le:heat-x-quadrat} employs the well-known result in Lemma~\ref{le:Heat-x-2} below. 
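The map $\mathcal{H}_n$ above is a Milstein-type step for the stochastic integral (cf.\ the reference to Kloeden \& Platen above): for the scalar SDE $dU_t=U_t\,dW_t$ it reads $u\mapsto u\big(1+\Delta w+\tfrac12(\Delta w)^2-\tfrac12\Delta t\big)$, and its iterates approximate the exact factor $\exp(W_T-\tfrac{T}{2})$ appearing in Lemma~\ref{le:heat-x-quadrat} below. A minimal Python sketch (step count and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 0.5                 # terminal time as above
num_steps = 4096        # arbitrary fine time discretization of [0, T]
dt = T / num_steps
dW = rng.standard_normal(num_steps) * np.sqrt(dt)

# Milstein recursion u <- u * (1 + dw + dw^2/2 - dt/2) for dU = U dW, U_0 = 1,
# i.e. repeated application of the factor appearing in the map H_n above.
u = 1.0
for dw in dW:
    u *= 1.0 + dw + 0.5 * dw**2 - 0.5 * dt

# Exact solution of dU = U dW with U_0 = 1 evaluated at time T
exact = float(np.exp(np.sum(dW) - 0.5 * T))

print(u, exact)
```

For a fine discretization the Milstein iterate agrees with the exact factor up to a small relative error, reflecting the strong order of the scheme.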
\begin{lemma}\label{le:Heat-x-2} Let $T\in (0,\infty)$, $d\in\N$, let $\mathfrak{C}\in\R^{d\times d}$ be a strictly positive definite symmetric matrix, and let $u\colon[0,T]\times \R^d \to \R$ satisfy for every $t \in [0,T]$, $x \in \R^d$ that \begin{equation} u(t,x)= \|x\|_{\R^d}^2 + t\operatorname{Trace}(\mathfrak{C}). \end{equation} Then \begin{enumerate}[(i)] \item\label{le:Heat-x-2-1} it holds that $u \in C^\infty([0,T]\times \R^d, \R)$ is at most polynomially growing and \item\label{le:Heat-x-2-2} it holds for every $t\in [0,T]$, $x\in \R^d$ that $u(0,x)=\|x\|_{\R^d}^2$ and \begin{equation} \label{eq:PDE-heat-} (\tfrac{\partial}{\partial t} u)(t,x)= \tfrac{1}{2} \operatorname{Trace}\big( \mathfrak{C}(\operatorname{Hess}_x u)(t,x) \big). \end{equation} \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:Heat-x-2}] Observe that, e.g., \cite[Lemma~3.2]{beck2018solving} (applied with $C\with \mathfrak{C}$ in the notation of \cite[Lemma~3.2]{beck2018solving}) establishes items~\eqref{le:Heat-x-2-1} and \eqref{le:Heat-x-2-2}. This completes the proof of Lemma~\ref{le:Heat-x-2}. \end{proof} \begin{lemma}\label{le:heat-x-quadrat} Let $T\in (0,\infty)$, $d\in\N$, let $(\Omega, \mathcal{F}, \P)$ be a probability space, let $W\colon [0,T]\times\Omega\to\R$ be a standard Brownian motion, let $\mathfrak{C}\in\R^{d\times d}$ be a strictly positive definite symmetric matrix, and let $ X \colon [0,T] \times \R^d \times \Omega \to \R $ satisfy for every $ t \in [0,T] $, $ x \in \R^d $ that \begin{equation} X_t(x) = \exp\!\big(W_t - \tfrac{t}{2}\big) \big(t\operatorname{Trace} ( \mathfrak{C}) + \|x\|_{\R^d}^2\big). \end{equation} Then for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.\ that \begin{equation} X_t(x) = \|x\|_{\R^d}^2 + \int_0^t \tfrac12\operatorname{Trace} \big( \mathfrak{C}(\operatorname{Hess} X_s)(x) \big)\,ds + \int_0^t X_s(x)\,dW_s. 
\end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:heat-x-quadrat}] Throughout this proof let $v\colon[0,T]\times \R^d \to \R$ satisfy for every $t\in [0,T]$, $x\in \R^d$ that \begin{equation}\label{eq:le:PDE-heat} v(t,x)= \|x\|_{\R^d}^2 + t \operatorname{Trace}(\mathfrak{C}). \end{equation} Observe that It\^o's formula and \eqref{eq:le:PDE-heat} ensure that for every $t \in [0,T]$, $x \in \R^d$ it holds $\P$-a.s.\ that \begin{equation} \begin{split} X_t(x) &= \exp\!\big(W_t - \tfrac{t}{2}\big) \big(t\operatorname{Trace} ( \mathfrak{C}) + \|x\|_{\R^d}^2\big) = \exp\!\big(W_t - \tfrac{t}{2}\big) \,v(t,x)\\ &= \|x\|_{\R^d}^2 + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big) \,(\tfrac{\partial}{\partial s}v)(s,x) \,ds + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big)\, v(s,x)\,dW_s \\ & \quad + \int_0^t \left[-\tfrac{1}{2}\right]\exp\!\big(W_s - \tfrac{s}{2}\big) \,v(s,x)\,ds + \tfrac{1}{2}\int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big)\, v(s,x)\,ds\\ &= \|x\|_{\R^d}^2 + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big) \,(\tfrac{\partial}{\partial s}v)(s,x) \,ds + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big)\, v(s,x)\,dW_s. \end{split} \end{equation} Lemma~\ref{le:Heat-x-2} hence ensures that for every $t \in [0,T]$, $x \in \R^d$ it holds $\P$-a.s.\ that \begin{equation} \begin{split} X_t(x) &= \|x\|_{\R^d}^2 + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big) \,\tfrac{1}{2} \operatorname{Trace}\big( \mathfrak{C}(\operatorname{Hess}_x v)(s,x) \big)\,ds + \int_0^t X_s(x) \, dW_s\\ &= \|x\|_{\R^d}^2 + \int_0^t \tfrac{1}{2} \operatorname{Trace}\big( \mathfrak{C}(\operatorname{Hess} X_s)(x) \big)\,ds + \int_0^t X_s(x) \, dW_s. \end{split} \end{equation} This completes the proof of Lemma~\ref{le:heat-x-quadrat}. 
\end{proof}
\begin{table}[ht]
\begin{center}
\small
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\makecell{d} & \makecell{Result\\of the\\ approx.\\algorithm} & \makecell{Runtime\\ in\\ seconds} & \makecell{Reference\\ solution} & \makecell{Relative\\ pathwise\\ error} & \makecell{Relative\\ $L^2$-error}\\
\hline
1 & 2.801 & 668.63 & 2.796 & 0.0019 & \\
1 & 0.742 & 667.01 & 0.720 & 0.0303 & \\
1 & 5.334 & 667.87 & 5.272 & 0.0117 & 0.0196 \\
1 & 0.647 & 667.41 & 0.644 & 0.0052 & \\
1 & 0.299 & 666.31 & 0.290 & 0.0288 & \\
\hline
5 & 1.034 & 675.16 & 1.023 & 0.0108 & \\
5 & 1.593 & 673.21 & 1.587 & 0.0038 & \\
5 & 3.381 & 675.06 & 3.366 & 0.0044 & 0.0101 \\
5 & 8.005 & 674.57 & 7.859 & 0.0186 & \\
5 & 5.388 & 672.14 & 5.405 & 0.0032 & \\
\hline
10 & 36.542 & 679.55 & 36.150 & 0.0109 & \\
10 & 8.705 & 679.59 & 8.553 & 0.0178 & \\
10 & 27.374 & 679.59 & 26.860 & 0.0191 & 0.0136 \\
10 & 3.384 & 678.01 & 3.362 & 0.0066 & \\
10 & 3.437 & 678.62 & 3.407 & 0.0088 & \\
\hline
20 & 22.041 & 666.78 & 22.047 & 0.0003 & \\
20 & 24.669 & 667.58 & 24.187 & 0.0199 & \\
20 & 15.597 & 666.35 & 15.328 & 0.0176 & 0.0154 \\
20 & 3.551 & 664.36 & 3.493 & 0.0167 & \\
20 & 45.559 & 665.52 & 44.910 & 0.0145 & \\
\hline
50 & 68.935 & 665.67 & 68.553 & 0.0056 & \\
50 & 28.652 & 666.81 & 28.250 & 0.0142 & \\
50 & 37.778 & 665.94 & 37.770 & 0.0002 & 0.0140 \\
50 & 23.248 & 664.37 & 22.803 & 0.0195 & \\
50 & 14.534 & 666.92 & 14.263 & 0.0190 & \\
\hline
\end{tabular}
\caption{Numerical simulations for the stochastic heat equations with multiplicative noise in \eqref{eq:ex-heat-multi}.}
\label{table:heat-mult}
\end{center}
\end{table}
\subsection{Stochastic Black--Scholes equations with multiplicative noise}
\label{subsec:geom_BM}
In this subsection we apply the approximation algorithm in Framework~\ref{def:general_algorithm} to the stochastic Black--Scholes equations with multiplicative noise in \eqref{eq:ex:Black-Scholes} below.
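Before stating the precise setting, we note that the reference solutions of this subsection rest on a Feynman--Kac type representation (cf.\ Lemma~\ref{le:BS-PDE} below) in which the PDE solution is a Monte Carlo average over geometric Brownian motion. A minimal Python sketch in $d=1$ (drift, volatility, sample size, and the linear payoff $\varphi(x)=x$, chosen here only because it admits the closed form $x_0e^{\mu t}$, are illustrative choices and not the parameters used below):

```python
import numpy as np

rng = np.random.default_rng(2)

t, x0 = 0.5, 100.0          # illustrative time and start value
mu, sigma = 0.05, 0.2       # illustrative drift and volatility (not the mu_i, sigma_i below)
num_samples = 400_000       # arbitrary Monte Carlo sample size

# Feynman--Kac type average over geometric Brownian motion:
# v(t, x0) = E[ phi(x0 * exp(sigma * W_t + (mu - sigma^2/2) * t)) ].
W_t = rng.standard_normal(num_samples) * np.sqrt(t)
terminal = x0 * np.exp(sigma * W_t + (mu - 0.5 * sigma**2) * t)

# With the illustrative linear payoff phi(x) = x the exact value is x0 * exp(mu * t).
mc_estimate = float(np.mean(terminal))
closed_form = x0 * np.exp(mu * t)

print(mc_estimate, closed_form)
```

The same averaging principle, applied with the discounted call-type payoff and the coefficients specified below, underlies the reference solutions in Table~\ref{table:black-mult}.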
Assume Framework~\ref{frame:adam}, assume that $T=0.5$, $N=20$, $M=10000$, $d\in \{1,5,10,20\}$, $\delta=1$, and $\varepsilon=10^{-8}$, let $r=\tfrac{1}{50}$, $\mu_1=\tfrac{\sin(d)+1}{d}$, $\mu_2=\tfrac{\sin(2d)+1}{d}$, $\dots$, $\mu_{d}=\tfrac{\sin(d^2)+1}{d}$, $\sigma_1=\tfrac{1}{4d}$, $\sigma_2=\tfrac{2}{4d}$, $\dots$, $\sigma_{d}=\tfrac{d}{4d}$, let $ W \colon [0,T] \times \Omega \to \R $ be a standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motion, and assume for every $s,t \in [0,T]$, $x=(x_1,x_2,\dots,x_d)$, $w=(w_1,w_2,\dots,w_d)\in \R^d$, $u \in \R$, $m\in \N_0$ that $ f( x , u, w ) = 0 $, $ b( x , u , w ) = u $, $\langle\mu(x), w \rangle_{\R^d}=\sum_{i=1}^d \mu_i x_i w_i$, $\sigma(x)w =(\sigma_1 x_1 w_1, \sigma_2 x_2 w_2,\dots,\sigma_d x_d w_d)$, $\varphi(x)=\exp(-rT)\max\!\big\{[\max_{i\in\{1,2,\dots,d\}}x_i]-100,0\big\}$, $ Z_t(x) = W_t $, $\gamma_m = 10^{-1} \mathbbm{1}_{[0,4000]}(m)$ $ + 10^{-2} \mathbbm{1}_{(4000,6000]}(m) + 10^{-3} \mathbbm{1}_{(6000,8000]}(m) + 10^{-4} \mathbbm{1}_{(8000,10000]}(m)$, $\mathcal{H}_n(x,u,w,z)=u\big(1+z+\frac{1}{2}z^2-\frac{1}{2}(t_n-t_{n-1})\big)$ (cf., for instance, Kloeden \& Platen \cite[Section~10.3]{KloedenPlaten1992}), and \begin{equation} \begin{split} &H(t,s,x,w)\\ &=\Big(x_1 \exp\!\big((\mu_1-\tfrac{|\sigma_1|^2}{2})(t-s)+\sigma_1 w_1\big),\dots,x_d \exp\!\big((\mu_d-\tfrac{|\sigma_d|^2}{2})(t-s)+\sigma_d w_d\big) \Big). \end{split} \end{equation} Note that \eqref{eq:mildFormulationSPDE-EXAMPLE-SETTING} ensures that for every $t\in [0,T]$, $x \in \R^d$ it holds $\P$-a.s.\ that \begin{equation} \label{eq:ex:Black-Scholes} \begin{split} X_t(x) = \varphi(x) + \int_0^{t} \bigg[ \tfrac12 {\textstyle\sum\limits_{i=1}^d} |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x^2_i}X_s)(x) + {\textstyle\sum\limits_{i=1}^d} \mu_i x_i (\tfrac{\partial}{\partial x_i}X_s)(x) \bigg]\,ds + \int_0^t X_s(x)\,dW_s. 
\end{split}
\end{equation}
Next we depict our numerical simulation results for the stochastic Black--Scholes equations with multiplicative noise in \eqref{eq:ex:Black-Scholes}. In Table~\ref{table:black-mult} we present numerical approximations for the relative $L^2$-errors $\big(\E\big[|X_T(0)|^{-2}{|\mathbb{V}_N^{0,1,\mathbb{S}^{0,N}_m}\!\!(\Theta^{0,N}_m,0)-X_T(0) |^2}\big]\big)^{1/2}$ for $d \in \{1,5,10,20\}$ (cf.\ \eqref{eq:general_gradient_step} and \eqref{eq:V-approx-gen-frame}). In our approximative computations for the relative $L^2$-errors, the exact solutions of the SPDEs in \eqref{eq:ex:Black-Scholes} have been approximately computed by means of the well-known result in Lemma~\ref{le:BS-SPDE} below. Our proof of Lemma~\ref{le:BS-SPDE} employs the well-known result in Lemma~\ref{le:BS-PDE} below.
\begin{lemma}\label{le:BS-PDE}
Let $d\in\N$, $T, \sigma_1, \sigma_2,\dots,\sigma_d\in (0,\infty)$, $\mu_1,\mu_2, \dots, \mu_d \in \R$, let $\varphi\in C(\R^d, \R)$ be at most polynomially growing, and let $v\colon[0,T]\times \R^d \to \R$ satisfy for every $t \in (0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ that $v(0,x)=\varphi(x)$ and
\begin{multline}
v(t,x) = \tfrac{1}{(2\pi t)^{\nicefrac{d}{2}}} \int_{\R} \int_\R \dots \int_\R \Bigg[\exp\!\bigg(-\frac{\sum_{i=1}^d |y_i|^2}{2t}\bigg)\\
\varphi\bigg(x_1\exp\!\Big(\sigma_1y_1 + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_dy_d + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\Bigg]\,dy_1 \,dy_2 \,\dots dy_d.
\end{multline}
Then
\begin{enumerate}[(i)]
\item there exists a unique at most polynomially growing viscosity solution $u \in C([0,T]\times \R^d, \R)$ of
\begin{equation}
\label{eq:BS-PDE}
(\tfrac{\partial}{\partial t} u)(t,x)= \tfrac12 \sum_{i=1}^d |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x^2_i}u)(t,x) + \sum_{i=1}^d \mu_i x_i (\tfrac{\partial}{\partial x_i}u)(t,x)
\end{equation}
with $u(0,x)=\varphi(x)$ for $(t,x)=(t,x_1,x_2,\dots,x_d) \in (0,T)\times \R^d$ and
\item it holds for every $t \in [0,T]$, $x \in \R^d$ that $u(t,x)=v(t,x)$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{le:BS-PDE}]
Throughout this proof let $(\Omega,\mathcal{F},\P)$ be a probability space and let $W=(W^{(1)},W^{(2)},\dots,W^{(d)})\colon[0,\infty) \times \Omega \to \R^d$ be a standard Brownian motion. Note that the assumption that $\varphi \in C(\R^d,\R)$ is at most polynomially growing assures that there exists a unique at most polynomially growing viscosity solution $u \in C([0,T]\times \R^d, \R)$ of
\begin{equation}
(\tfrac{\partial}{\partial t} u)(t,x)= \tfrac12 \sum_{i=1}^d |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x^2_i}u)(t,x) + \sum_{i=1}^d \mu_i x_i (\tfrac{\partial}{\partial x_i}u)(t,x)
\end{equation}
with $u(0,x)=\varphi(x)$ for $(t,x)=(t,x_1,x_2,\dots,x_d) \in (0,T)\times \R^d$ (cf., e.g., Beck et al.\ \cite[Corollary~3.9]{BeckHuJentz20} and Hairer et al.\ \cite[Corollary~4.17]{HairerHutzenthalerJentzen_LossOfRegularity2015}). Moreover, observe that the Feynman--Kac formula ensures that for every $t \in [0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds that
\begin{equation}
u(t,x) = \E\bigg[\varphi\bigg(x_1\exp\!\Big(\sigma_1W^{(1)}_t + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_d W^{(d)}_t + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\bigg]
\end{equation}
(cf., e.g., Beck et al.\ \cite[Corollary~3.9]{BeckHuJentz20} and Hairer et al.\ \cite[Corollary~4.17]{HairerHutzenthalerJentzen_LossOfRegularity2015}).
Hence, we obtain that for every $t \in [0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds that \begin{align} &u(t,x) \nonumber\\ &=\E\bigg[\varphi\bigg(x_1\exp\!\Big(\sigma_1 W^{(1)}_t + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_d W^{(d)}_t + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\bigg]\nonumber\\ &= \tfrac{1}{(2\pi t)^{\nicefrac{d}{2}}} \int_{\R} \int_\R \dots \int_\R \Bigg[\exp\!\bigg(-\frac{\sum_{i=1}^d |y_i|^2}{2t}\bigg)\\ & \quad \varphi\bigg(x_1\exp\!\Big(\sigma_1y_1 + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_dy_d + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\Bigg]\,dy_1 \,dy_2 \,\dots dy_d \nonumber\\ &=v(t,x). \nonumber \end{align} This completes the proof of Lemma~\ref{le:BS-PDE}. \end{proof} \begin{lemma}\label{le:BS-PDE-smooth} Let $d\in\N$, $T, \sigma_1, \sigma_2,\dots,\sigma_d\in (0,\infty)$, $\mu_1,\mu_2, \dots, \mu_d \in \R$, let $\varphi\in C^2(\R^d, \R)$ have at most polynomially growing derivatives, and let $v\colon[0,T]\times \R^d \to \R$ satisfy for every $t \in (0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ that $v(0,x)=\varphi(x)$ and \begin{multline} \label{eq:smooth-def-v} v(t,x) = \tfrac{1}{(2\pi t)^{\nicefrac{d}{2}}} \int_{\R} \int_\R \dots \int_\R \Bigg[\exp\!\bigg(-\frac{\sum_{i=1}^d |y_i|^2}{2t}\bigg)\\ \varphi\bigg(x_1\exp\!\Big(\sigma_1y_1 + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_dy_d + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\Bigg]\,dy_1 \,dy_2 \,\dots dy_d. \end{multline} Then \begin{enumerate}[(i)] \item\label{le:BS-PDE-smooth-1} it holds that $v \in C^{1,2}([0,T]\times \R^d, \R)$ and \item\label{le:BS-PDE-smooth-2} it holds for every $t\in [0,T]$, $x=(x_1,x_2,\dots,x_d)\in \R^d$ that \begin{equation} \label{eq:BS-PDE-smooth} (\tfrac{\partial}{\partial t} v)(t,x)= \tfrac12 \sum_{i=1}^d |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x^2_i}v)(t,x) + \sum_{i=1}^d \mu_i x_i (\tfrac{\partial}{\partial x_i}v)(t,x). 
\end{equation} \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:BS-PDE-smooth}] Throughout this proof let $(\Omega,\mathcal{F},\P)$ be a probability space, let $W=(W^{(1)},W^{(2)},\dots,W^{(d)})\colon[0,\infty) \times \Omega \to \R^d$ be a standard Brownian motion, let $S=(S^{(1)},S^{(2)},$ $\dots,S^{(d)})\colon [0,\infty) \times \Omega \to \R^d$ satisfy for every $i\in\{1,2,\dots,d\}$, $t\in [0,\infty)$ that \begin{equation}\label{eq:def-Stoch-exp} S_t^{(i)}= \exp\!\big(\sigma_iW^{(i)}_t + (\mu_i-\tfrac{|\sigma_i|^2}{2})t\big), \end{equation} and let $V\colon [0,T]\times \R^d \times\Omega \to \R$ satisfy for every $t\in [0,T]$, $x=(x_1,x_2,\dots,x_d)\in \R^d$ that \begin{equation} \label{eq:def:V} V(t,x)= \varphi\big(x_1S^{(1)}_t, x_2S^{(2)}_t,\ldots,x_d S^{(d)}_t\big). \end{equation} Note that \eqref{eq:smooth-def-v}, \eqref{eq:def-Stoch-exp}, and \eqref{eq:def:V} ensure that for every $t \in (0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds that \begin{equation} \label{le:BS-PDE-stochRep} \begin{split} &v(t,x)\\ &= \tfrac{1}{(2\pi t)^{\nicefrac{d}{2}}} \int_{\R} \int_\R \dots \int_\R \Bigg[\exp\!\bigg(-\frac{\sum_{i=1}^d |y_i|^2}{2t}\bigg)\\ & \quad \varphi\bigg(x_1\exp\!\Big(\sigma_1y_1 + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_dy_d + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\Bigg]\,dy_1 \,dy_2 \,\dots dy_d\\ &= \E\bigg[\varphi\bigg(x_1\exp\!\Big(\sigma_1W^{(1)}_t + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_d W^{(d)}_t + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\bigg]\\ &= \E\Big[\varphi\big(x_1S^{(1)}_t, x_2S^{(2)}_t,\ldots,x_d S^{(d)}_t\big)\Big]\\ &=\E\big[V(t,x)\big]. 
\end{split} \end{equation} Moreover, note that \eqref{eq:def-Stoch-exp} and It\^o's formula assure that for every $i\in\{1,2,\dots,d\}$, $t \in [0,\infty)$, $x \in \R$ it holds $\P$-a.s.\ that \begin{equation} \begin{split} xS^{(i)}_t&=x\exp\!\Big(\sigma_iW^{(i)}_t + (\mu_i-\tfrac{|\sigma_i|^2}{2})t\Big)\\ &= x + \int_0^t x\exp\!\Big(\sigma_iW^{(i)}_s + (\mu_i-\tfrac{|\sigma_i|^2}{2})s\Big) \sigma_i\,dW^{(i)}_s\\ & \quad + \int_0^t x\exp\!\Big(\sigma_iW^{(i)}_s + (\mu_i-\tfrac{|\sigma_i|^2}{2})s\Big) (\mu_i-\tfrac{|\sigma_i|^2}{2})\,ds\\ & \quad +\tfrac{1}{2} \int_0^t x\exp\!\Big(\sigma_iW^{(i)}_s + (\mu_i-\tfrac{|\sigma_i|^2}{2})s\Big) |\sigma_i|^2\,ds\\ &= x + \int_0^t \sigma_i xS^{(i)}_s \,dW^{(i)}_s + \int_0^t \mu_i xS^{(i)}_s \,ds. \end{split} \end{equation} Combining this, \eqref{eq:def:V}, and It\^o's formula ensures that for every $t \in [0,\infty)$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds $\P$-a.s.\ that \begin{equation} \label{eq:Ito-varphi} \begin{split} V(t,x)&=\varphi\big(x_1S^{(1)}_t, x_2S^{(2)}_t,\ldots,x_d S^{(d)}_t\big)\\ &= \varphi(x)+ \sum_{i=1}^d \int_0^t \big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) \sigma_ix_iS^{(i)}_s \,dW^{(i)}_s\\ & \quad + \sum_{i=1}^d \int_0^t \big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) \mu_i x_iS^{(i)}_s \,ds\\ & \quad + \tfrac{1}{2}\sum_{i=1}^d \int_0^t \big(\tfrac{\partial^2}{\partial x_i^2} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) |\sigma_i|^2 |x_i|^2 \big|S^{(i)}_s\big|^2 \,ds. 
\end{split} \end{equation} Moreover, note that \eqref{eq:def-Stoch-exp} and the fact that $\varphi\in C^2(\R^d,\R)$ has at most polynomially growing derivatives assure that for every $p\in(0,\infty)$, $i\in \{1,2,\dots,d\}$ it holds that \begin{equation}\label{eq:-t-int-1abl} \begin{split} & \sup_{t\in [0,p]}\sup_{x_1,x_2,\dots,x_d\in [-p,p]}\E\Big[ \big|\big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_t, x_2S^{(2)}_t,\ldots,x_d S^{(d)}_t\big) \mu_i x_iS^{(i)}_t \big|^p \Big]<\infty. \end{split} \end{equation} In addition, observe that \eqref{eq:def-Stoch-exp} and the fact that $\varphi\in C^2(\R^d,\R)$ has at most polynomially growing derivatives ensure that for every $p\in(0,\infty)$, $i\in \{1,2,\dots,d\}$ it holds that \begin{equation} \label{eq:-t-int-2abl} \begin{split} \sup_{t\in [0,p]}\sup_{x_1,x_2,\dots,x_d\in [-p,p]} \E\Big[ \big|\big(\tfrac{\partial^2}{\partial x_i^2} \varphi\big)\!\big(x_1S^{(1)}_t, x_2S^{(2)}_t,\ldots,x_d S^{(d)}_t\big) |\sigma_i|^2 |x_i|^2 \big|S^{(i)}_t\big|^2 \big|^p \Big]<\infty. \end{split} \end{equation} Furthermore, note that \eqref{eq:def-Stoch-exp} and the fact that $\varphi\in C^2(\R^d,\R)$ has at most polynomially growing derivatives assure that for every $i\in \{1,2,\dots,d\}$, $t \in [0,\infty)$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds that \begin{equation} \label{eq:-t-int-dW} \begin{split} \E\bigg[ \int_0^t \big|\big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) \sigma_i x_iS^{(i)}_s\big|^2\,ds \bigg]<\infty. 
\end{split} \end{equation} This, \eqref{eq:-t-int-1abl}, \eqref{eq:-t-int-2abl}, \eqref{eq:Ito-varphi}, \eqref{le:BS-PDE-stochRep}, and Fubini's theorem imply that for every $t \in [0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds that \begin{equation} \label{le:BS-PDE-stochRep-Ito} \begin{split} v(t,x) &=\E\big[V(t,x)\big]\\ &=\varphi(x) + \sum_{i=1}^d\E\bigg[ \int_0^t \big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) \mu_i x_iS^{(i)}_s \,ds \bigg] \\ & \quad + \tfrac{1}{2}\sum_{i=1}^d\E\bigg[ \int_0^t \big(\tfrac{\partial^2}{\partial x_i^2} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) |\sigma_i|^2 |x_i|^2 \big|S^{(i)}_s\big|^2 \,ds \bigg]\\ &= \varphi(x) + \sum_{i=1}^d \int_0^t \E\Big[ \big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) \mu_i x_iS^{(i)}_s \Big] \,ds \\ & \quad + \tfrac{1}{2}\sum_{i=1}^d \int_0^t \E\Big[ \big(\tfrac{\partial^2}{\partial x_i^2} \varphi\big)\!\big(x_1S^{(1)}_s, x_2S^{(2)}_s,\ldots,x_d S^{(d)}_s\big) |\sigma_i|^2 |x_i|^2 \big|S^{(i)}_s\big|^2 \Big] \,ds. \end{split} \end{equation} Moreover, observe that Lemma~\ref{le:BS-PDE} assures that $v\in C([0,T]\times \R^d,\R)$. Combining \eqref{eq:-t-int-1abl}, \eqref{eq:-t-int-2abl}, \eqref{le:BS-PDE-stochRep-Ito}, the de la Vall\'ee Poussin theorem (cf., e.g., Klenke~\cite[Corollary~6.21]{Klenke_2014}), and the Vitali convergence theorem (cf., e.g., Klenke~\cite[Theorem~6.25]{Klenke_2014}) with the fundamental theorem of calculus hence ensures that \begin{enumerate}[(a)] \item\label{eq:v-C-1-0-a} it holds for every $x \in \R^d$ that $([0,T]\ni t \mapsto v(t,x)\in \R) \in C^{1}([0,T],\R)$ and % \item\label{eq:v-C-1-0-b} it holds that $([0,T]\times\R^d \ni (t,x)\mapsto (\frac{\partial}{\partial t}v)(t,x)\in \R) \in C([0,T]\times \R^d,\R)$. 
\end{enumerate} Next note that \eqref{eq:def:V} shows that for every $i\in \{1,2,\dots,d\}$, $t \in [0,\infty)$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds that \begin{equation} \label{eq:V-xi} \begin{split} &(\tfrac{\partial}{\partial x_i} V)(t,x_1,x_2,\dots,x_d) = S^{(i)}_{t}\big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_{t}, x_2S^{(2)}_{t},\ldots,x_d S^{(d)}_{t}\big). \end{split} \end{equation} Moreover, observe that \eqref{eq:def-Stoch-exp} and the fact that $\varphi\in C^2(\R^d,\R)$ has at most polynomially growing derivatives assure that for every $p\in (0,\infty)$, $i\in \{1,2,\dots,d\}$ it holds that \begin{equation} \label{eq:-x-DelaPou} \begin{split} \sup_{t\in [0,p]}\sup_{x_1,x_2,\dots,x_d\in [-p,p]} \E\Big[ \big|S^{(i)}_{t}\big(\tfrac{\partial}{\partial x_i} \varphi\big)\!\big(x_1S^{(1)}_{t}, x_2S^{(2)}_{t},\ldots,x_d S^{(d)}_{t}\big)\big|^p \Big] <\infty. \end{split} \end{equation} Combining this with \eqref{eq:V-xi} demonstrates that for every $p\in (0,\infty)$, $i\in \{1,2,\dots,d\}$ it holds that \begin{equation} \sup_{t\in [0,p]}\sup_{x_1,x_2,\dots,x_d\in [-p,p]} \E\Big[ \big|(\tfrac{\partial}{\partial x_i} V)(t,x_1,x_2,\dots,x_d)\big|^p \Big] <\infty. 
\end{equation} This, \eqref{le:BS-PDE-stochRep}, the de la Vall\'ee Poussin theorem (cf., e.g., Klenke~\cite[Corollary~6.21]{Klenke_2014}), the Vitali convergence theorem (cf., e.g., Klenke~\cite[Theorem~6.25]{Klenke_2014}), and the fundamental theorem of calculus imply that \begin{enumerate}[(I)] \item\label{eq:v-C-0-1} it holds for every $t \in [0,T]$ that $(\R^d \ni x \mapsto v(t,x)\in \R)\in C^1(\R^d,\R)$, \item \label{eq:v-C-0-1-b} it holds for every $i \in \{1,2,\dots,d\}$ that \begin{equation} \big([0,T]\times \R^d \ni(t,x)\mapsto (\tfrac{\partial}{\partial x_i}v)(t,x)\in \R\big) \in C([0,T]\times \R^d,\R), \end{equation} and % \item \label{eq:v-C-0-1-Part2} it holds for every $t \in [0,T]$, $x\in \R^d$ that \begin{equation} \begin{split} (\tfrac{\partial}{\partial x} v)(t,x) &= \E\big[(\tfrac{\partial}{\partial x} V)(t,x)\big]. \end{split} \end{equation} \end{enumerate} In addition, observe that \eqref{eq:def:V} ensures that for every $i,j\in \{1,2,\dots,d\}$, $t \in [0,\infty)$, $x=(x_1,x_2,\dots,x_d)\in \R^d$ it holds that \begin{equation} \label{eq:V-xi-xj} \begin{split} &(\tfrac{\partial^2}{\partial x_i\partial x_j} V)(t,x_1,x_2,\dots,x_d) = S^{(i)}_{t}S^{(j)}_{t}\big(\tfrac{\partial^2}{\partial x_i \partial x_j} \varphi\big)\!\big(x_1S^{(1)}_{t}, x_2S^{(2)}_{t},\ldots,x_d S^{(d)}_{t}\big). \end{split} \end{equation} Moreover, note that \eqref{eq:def-Stoch-exp} and the fact that $\varphi\in C^2(\R^d,\R)$ has at most polynomially growing derivatives assure that for every $p\in (0,\infty)$, $i,j\in \{1,2,\dots,d\}$ it holds that \begin{equation} \label{eq:-x-x-DelaPou} \begin{split} \sup_{t\in [0,p]}\sup_{x_1,x_2,\dots,x_d\in [-p,p]} \E\Big[ \big|S^{(i)}_{t}S^{(j)}_{t}\big(\tfrac{\partial^2}{\partial x_i \partial x_j} \varphi\big)\!\big(x_1S^{(1)}_{t}, x_2S^{(2)}_{t},\ldots,x_d S^{(d)}_{t}\big)\big|^p \Big] <\infty. 
\end{split}
\end{equation}
Combining this with \eqref{eq:V-xi-xj} demonstrates that for every $p\in (0,\infty)$, $i,j\in \{1,2,\dots,d\}$ it holds that
\begin{equation}
\sup_{t\in [0,p]}\sup_{x_1,x_2,\dots,x_d\in [-p,p]} \E\Big[ \big|(\tfrac{\partial^2}{\partial x_i \partial x_j} V)(t,x_1,x_2,\dots,x_d)\big|^p \Big] <\infty.
\end{equation}
This, item~\eqref{eq:v-C-0-1}, item~\eqref{eq:v-C-0-1-b}, item~\eqref{eq:v-C-0-1-Part2}, the de la Vall\'ee Poussin theorem (cf., e.g., Klenke~\cite[Corollary~6.21]{Klenke_2014}), the Vitali convergence theorem (cf., e.g., Klenke~\cite[Theorem~6.25]{Klenke_2014}), and the fundamental theorem of calculus imply that
\begin{enumerate}[(A)]
\item\label{eq:v-C-0-2} it holds for every $t \in [0,T]$ that $(\R^d \ni x \mapsto v(t,x)\in \R)\in C^2(\R^d,\R)$ and
\item \label{eq:v-C-0-2-b} it holds for every $i,j \in \{1,2,\dots,d\}$ that
\begin{equation}
\big([0,T]\times \R^d \ni(t,x)\mapsto (\tfrac{\partial^2}{\partial x_i \partial x_j}v)(t,x)\in \R\big) \in C([0,T]\times \R^d,\R).
\end{equation}
\end{enumerate}
Moreover, observe that item~\eqref{eq:v-C-1-0-a}, item~\eqref{eq:v-C-1-0-b}, and the fact that $v \in C([0,T]\times \R^d, \R)$ imply that $v \in C^{1,0}([0,T]\times \R^d,\R)$. This, item~\eqref{eq:v-C-0-2}, item~\eqref{eq:v-C-0-2-b}, and item~\eqref{eq:v-C-0-1-b} demonstrate that $v \in C^{1,2}([0,T]\times \R^d,\R)$. This establishes item~\eqref{le:BS-PDE-smooth-1}. Furthermore, observe that item~\eqref{le:BS-PDE-smooth-1} and Lemma~\ref{le:BS-PDE} establish item~\eqref{le:BS-PDE-smooth-2}. This completes the proof of Lemma~\ref{le:BS-PDE-smooth}.
\end{proof} \begin{lemma}\label{le:BS-SPDE} Let $d\in\N$, $T, \sigma_1, \sigma_2, \ldots, \sigma_d \in (0,\infty)$, $ \mu_1, \mu_2, \ldots, \mu_d \in \R $, let $(\Omega, \mathcal{F}, \P)$ be a probability space, let $W\colon [0,T]\times\Omega\to\R$ be a standard Brownian motion, let $\varphi \in C^2(\R^d,\R)$ have at most polynomially growing derivatives, let $ v\colon [0,T]\times\R^d\to\R $ satisfy for every $ t \in (0,T] $, $ x=(x_1,x_2,\dots,x_d) \in \R^d $ that $ v(0,x) = \varphi(x) $ and \begin{multline} v(t,x) = \tfrac{1}{(2\pi t)^{\nicefrac{d}{2}}} \int_{\R} \int_\R \dots \int_\R \Bigg[\exp\!\bigg(-\frac{\sum_{i=1}^d |y_i|^2}{2t}\bigg)\\ \varphi\bigg(x_1\exp\!\Big(\sigma_1y_1 + (\mu_1-\tfrac{|\sigma_1|^2}{2})t\Big),\ldots,x_d\exp\!\Big(\sigma_dy_d + ( \mu_d - \tfrac{|\sigma_d|^2}{2})t \Big)\bigg)\Bigg]\,dy_1 \,dy_2 \,\dots dy_d, \end{multline} and let $ X \colon [0,T] \times \R^d \times \Omega \to \R $ satisfy for every $ t \in [0,T] $, $ x \in \R^d $ that \begin{equation}\label{eq:le:BS:PDE} X_t(x) = \exp\!\big(W_t - \tfrac{t}{2}\big)\, v(t,x). \end{equation} Then \begin{enumerate}[(i)] \item \label{le:eq:BS-1} for every $\omega \in \Omega$ it holds that $([0,T]\times\R^d \ni (t,x)\mapsto X_t(x,\omega) \in \R) \in C^{0,2}([0,T]\times\R^d,\R)$ and \item \label{le:eq:BS-2} for every $t\in [0,T]$, $x\in\R^d$ it holds $\P$-a.s.\ that \begin{equation} X_t(x) = \varphi(x) + \int_0^{t} \bigg[ \tfrac12 {\textstyle{\sum\limits_{i=1}^d}} |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x_i^2}X_s)(x) + {\textstyle{\sum\limits_{i=1}^d}}\, \mu_i x_i (\tfrac{\partial}{\partial x_i}X_s)(x) \bigg]\,ds + \int_0^t X_s(x)\,dW_s. \end{equation} \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:BS-SPDE}] Observe that the hypothesis that $\varphi \in C^2(\R^d,\R)$ has at most polynomially growing derivatives and Lemma~\ref{le:BS-PDE-smooth} assure that $v\in C^{1,2}([0,T]\times \R^d, \R)$. 
Combining this and \eqref{eq:le:BS:PDE} proves that for every $\omega \in \Omega$ it holds that $([0,T]\times\R^d \ni (t,x)\mapsto X_t(x,\omega) \in \R) \in C^{0,2}([0,T]\times\R^d,\R)$. This establishes item~\eqref{le:eq:BS-1}. Moreover, observe that the fact that $v\in C^{1,2}([0,T]\times \R^d, \R)$, It\^o's formula, the assumption that for every $x\in \R^d$ it holds that $v(0,x)=\varphi(x)$, and \eqref{eq:le:BS:PDE} ensure that for every $t \in [0,T]$, $x \in \R^d$ it holds $\P$-a.s.\ that \begin{equation} \begin{split} X_t(x) &= \exp\!\big(W_t - \tfrac{t}{2}\big) \, v(t,x) \\ &= v(0,x) + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big) \,(\tfrac{\partial}{\partial s}v)(s,x) \,ds + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big)\, v(s,x)\,dW_s \\ & \quad + \int_0^t \left[-\tfrac{1}{2}\right]\exp\!\big(W_s - \tfrac{s}{2}\big) \,v(s,x)\,ds + \tfrac{1}{2}\int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big)\, v(s,x)\,ds \\ &= \varphi(x) + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big) \,(\tfrac{\partial}{\partial s}v)(s,x) \,ds + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big)\, v(s,x)\, dW_s. \end{split} \end{equation} Lemma~\ref{le:BS-PDE-smooth}, \eqref{eq:le:BS:PDE}, and item~\eqref{le:eq:BS-1} hence assure that for every $t \in [0,T]$, $x=(x_1,x_2,\dots,x_d) \in \R^d$ it holds $\P$-a.s.\ that \begin{align} X_t(x) &= \varphi(x) + \int_0^t \exp\!\big(W_s - \tfrac{s}{2}\big) \,\bigg[ \tfrac12 {\textstyle\sum\limits_{i=1}^d} |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x^2_i}v)(s,x) + {\textstyle\sum\limits_{i=1}^d}\, \mu_i x_i (\tfrac{\partial}{\partial x_i}v)(s,x) \bigg]\,ds \nonumber \\ & \quad + \int_0^t X_s(x) \, dW_s \\ &= \varphi(x) + \int_0^t \bigg[ \tfrac12 {\textstyle\sum\limits_{i=1}^d} |\sigma_i|^2 |x_i|^2 (\tfrac{\partial^2}{\partial x_i^2}X_s)(x) + {\textstyle\sum\limits_{i=1}^d}\, \mu_i x_i (\tfrac{\partial}{\partial x_i}X_s)(x) \bigg]\,ds + \int_0^t X_s(x) \, dW_s. \nonumber \end{align} This completes the proof of Lemma~\ref{le:BS-SPDE}. 
\end{proof} \begin{table}[ht] \begin{center} \small \begin{tabular}{|c|c|c|c|c|c|} \hline \makecell{d} & \makecell{Result\\ of the \\ approx.\\ algorithm} & \makecell{Runtime\\ in\\ seconds} & \makecell{Reference\\ solution} & \makecell{Relative\\ pathwise\\ error} & \makecell{Relative\\ $L^2$-error}\\ \hline 1 & 139.171 & 448.275 & 139.809 & 0.0046 & \\ 1 & 173.195 & 450.996 & 172.210 & 0.0057 & \\ 1 & 63.231 & 448.398 & 63.588 & 0.0056 & 0.0044\\ 1 & 41.285 & 442.110 & 41.161 & 0.0030 & \\ 1 & 116.699 & 439.910 & 116.847 & 0.0013 & \\ \hline 5 & 80.088 & 455.842 & 78.773 & 0.0167 & \\ 5 & 32.612 & 455.521 & 31.954 & 0.0206 & \\ 5 & 77.905 & 456.283 & 77.766 & 0.0018 & 0.0137\\ 5 & 24.200 & 442.570 & 23.843 & 0.0150 & \\ 5 & 34.496 & 443.422 & 34.610 & 0.0033 & \\ \hline 10 & 22.367 & 445.009 & 22.187 & 0.0082 & \\ 10 & 69.423 & 446.532 & 68.919 & 0.0073 & \\ 10 & 14.542 & 452.487 & 14.596 & 0.0037 & 0.0087\\ 10 & 11.286 & 455.380 & 11.285 & 0.0001 & \\ 10 & 28.276 & 455.372 & 27.839 & 0.0157 & \\ \hline 20 & 4.963 & 443.438 & 4.923 & 0.0081 & \\ 20 & 17.222 & 442.955 & 16.951 & 0.0160 & \\ 20 & 50.882 & 454.271 & 50.099 & 0.0156 & 0.0138\\ 20 & 11.090 & 455.924 & 10.915 & 0.0161 & \\ 20 & 18.192 & 454.760 & 17.986 & 0.0115 & \\ \hline \end{tabular} \caption{Numerical simulations for the stochastic Black--Scholes equations with multiplicative noise in \eqref{eq:ex:Black-Scholes}. } \label{table:black-mult} \end{center} \end{table} \subsection{Zakai equations} \label{subsec:Zakai} In this subsection we apply the approximation algorithm in Framework~\ref{def:general_algorithm} to the Zakai equations in \eqref{eq:ex:Zakai} below. 
Assume Framework~\ref{frame:adam}, let $\alpha= 2 \pi$, $\beta=0.25$, and $\gamma=0.1$, assume that $T=0.5$, $N=25$, $M=12000$, $d\in\{1,5,10,20,50\}$, $\delta=d$, and $\varepsilon=10^{-8}$, let $h=(h_1,h_2,\dots,h_d)\in C(\R^d,\R^d)$, let $W\colon [0,T]\times\Omega\to\R^d$ be a standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian motion, assume for every $s,t \in [0,T]$, $x=(x_1,x_2,\dots,x_d)$, $w=(w_1,w_2,\dots,w_d)\in \R^d$, $u \in \R$, $z=(z_1,z_2,\dots,z_d)\in \R^d$, $m\in \N_0$ that $\varphi(x)=\big(\tfrac{\alpha}{2\pi}\big)^{\nicefrac{d}{2}} \exp(-\frac{\alpha}{2}\Vert x\Vert_{\R^d}^2)$, $h(x)=\beta x$, $\mu(x)= \gamma x[1+\Vert x\Vert_{\R^d}^2]^{-1}$, $\sigma(x)w= d^{-\nicefrac{1}{2}}(\sum_{i=1}^d w_i,\sum_{i=1}^d w_i,\dots,\sum_{i=1}^d w_i)$, $H(t,s,x,w)= x +\mu(x)(t-s) + \sigma(x) w$, $ f(x,u,w) = -\sum_{i=1}^d u \big(\frac{\partial}{\partial{x_i}}\mu_i\big)(x) $, $ b(x,u,w) = u h(x) $, $\gamma_m = 10^{-2} \mathbbm{1}_{[0,5000]}(m) + 10^{-3} \mathbbm{1}_{(5000,10000]}(m) + 10^{-4} \mathbbm{1}_{(10000,12000]}(m)$, and \begin{equation} \begin{split} \mathcal{H}_n(x,u,w,z) &=u- \bigg[{\textstyle{\sum\limits_{i=1}^d}} u \big(\tfrac{\partial}{\partial{x_i}}\mu_i\big)(x)\bigg] (t_n-t_{n-1}) +u\langle h(x),z\rangle_{\R^d} \\ & \quad + \tfrac{u}{2}\bigg[{\textstyle{\sum\limits_{i,j=1}^d}} h_i(x)h_j(x)z_i z_j\bigg] -\tfrac{u(t_n-t_{n-1})}{2}\bigg[{\textstyle{\sum\limits_{i=1}^d}} |h_i(x)|^2\bigg], \end{split} \end{equation} let $Y\colon[0,T]\times \Omega\to \R^d$ be an $( \mathcal{F}_t )_{ t \in [0,T]}$-adapted stochastic process with continuous sample paths which satisfies that for every $t \in [0,T]$ it holds $\P$-a.s.\ that \begin{equation} Y_t = Y_0 + \int_0^t \mu(Y_s)\,ds + \int_0^t \sigma(Y_s)\, dW_s \end{equation} (signal process/state process/system process), assume for every $A\in \mathcal{B}(\R^d)$ that $\P(Y_0\in A)=\int_A \varphi(x)\,dx$, let $V\colon [0,T]\times\Omega\to\R^d$ be a standard $( \mathcal{F}_t )_{ t \in [0,T] }$-Brownian
motion, assume that $V$ and $W$ are independent, and let $Z\colon [0,T]\times\Omega\to\R^d$ satisfy that for every $t \in [0,T]$ it holds $\P$-a.s.\ that \begin{equation} Z_t = \int_0^t h(Y_s)\,ds + V_t \end{equation} (observation process). Note that \eqref{eq:mildFormulationSPDE-EXAMPLE-SETTING} and the hypothesis that for every $w=(w_1,w_2,\dots,w_d)$ $\in \R^d$ it holds that $\sigma(x)w= d^{-\nicefrac{1}{2}}(\sum_{i=1}^d w_i,\sum_{i=1}^d w_i,\dots,\sum_{i=1}^d w_i)$ ensure that for every $t\in [0,T]$, $x=(x_1,x_2,\dots,x_d)\in\R^d$ it holds $\P$-a.s.\ that \begin{align} X_t(x) & = \varphi( x ) + \int_{ 0 }^{ t } f\big( x, X_s(x), ( \nabla X_s )( x ) \big) \, ds + \int_{ 0 }^{ t } \big\langle b\big( x, X_s( x ), ( \nabla X_s )( x ) \big), dZ_s \big\rangle_{\R^\delta} \nonumber \\ & \quad + \int_{ 0 }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma(x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) + \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds \nonumber \\ &=\varphi(x) + \int_0^t -\textstyle{\sum\limits_{i=1}^d} \big[X_s(x)\big(\tfrac{ \partial}{\partial{x_i}}\mu_i\big)(x)\big]\,ds + \int_0^t X_s(x) \,\langle h(x), dZ_s \rangle_{\R^d} \nonumber \\ & \quad + \int_{ 0 }^{ t } \Big[ \tfrac{ 1 }{ 2 } \operatorname{Trace}\!\big( \sigma( x ) [ \sigma(x ) ]^* ( \operatorname{Hess} X_s )( x ) \big) - \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \Big] \, ds \\ &=\varphi(x) + \int_0^t -\textstyle{\sum\limits_{i=1}^d} \big[X_s(x)\big(\tfrac{ \partial}{\partial{x_i}}\mu_i\big)(x)\big]\,ds + \int_0^t X_s(x) \,\langle h(x), dZ_s \rangle_{\R^d} \nonumber \\ & \quad + \int_{ 0 }^{ t } \bigg[ \tfrac{1}{2}\Big[{\textstyle\sum\limits_{i,j=1}^d} \big(\tfrac{ \partial^2}{\partial{x_i}\partial{x_j}}X_s\big)(x)\Big] - \big\langle \mu( x ), ( \nabla X_s )( x ) \big\rangle_{ \R^d } \bigg] \, ds.
\nonumber \end{align} The fact that for every $s \in [0,T]$, $x=(x_1,x_2,\dots,x_d)\in \R^d$ it holds that \begin{equation} \textstyle{\sum\limits_{i=1}^d} \big[X_s(x)\big(\tfrac{ \partial}{\partial{x_i}}\mu_i\big)(x)\big] + \big\langle \mu( x ), ( \nabla X_s )(x) \big\rangle_{ \R^d } = \textstyle{\sum\limits_{i=1}^d} \tfrac{ \partial}{\partial{x_i}}\big(\mu_i(x) X_s(x)\big) \end{equation} hence proves that for every $t\in [0,T]$, $x=(x_1,x_2,\dots,x_d)\in\R^d$ it holds $\P$-a.s.\ that \begin{align} &X_t(x) \label{eq:ex:Zakai}\\ &= \varphi(x) + \int_0^t \bigg[ \tfrac{1}{2}\Big[{\textstyle\sum\limits_{i,j=1}^d} \big(\tfrac{ \partial^2}{\partial{x_i}\partial{x_j}}X_s\big)(x)\Big] -\Big[\textstyle{\sum\limits_{i=1}^d} \tfrac{ \partial}{\partial{x_i}}\big(\mu_i(x) X_s(x)\big)\Big] \bigg] \, ds + \int_0^t X_s(x) \, \langle h(x), dZ_s \rangle_{\R^d}. \nonumber \end{align} In the next step we depict our numerical simulation results for the Zakai equations described in \eqref{eq:ex:Zakai} above. In Table~\ref{table:Zakai} we present numerical approximations for the relative $L^2$-errors $\big(\E\big[|X_T(0)|^{-2}{|\mathbb{V}_N^{0,1,\mathbb{S}^{0,N}_m}\!\!(\Theta^{0,N}_m,0)-X_T(0) |^2}\big]\big)^{1/2}$ for $d \in \{1,5,10,20,50\}$ (cf.\ \eqref{eq:general_gradient_step} and \eqref{eq:V-approx-gen-frame}).
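Consistent with this definition, each relative $L^2$-error reported in the tables of this section is, up to rounding, the root mean square of the five relative pathwise errors of the corresponding block; a minimal sketch (ours, not from the paper) with the rounded $d=1$ values of Table~\ref{table:black-mult}:

```python
import math

# Rounded relative pathwise errors for d = 1 from the Black--Scholes table;
# the aggregated relative L^2-error printed there is 0.0044.
pathwise = [0.0046, 0.0057, 0.0056, 0.0030, 0.0013]

# Root mean square over the 5 independent runs: a Monte Carlo estimate of
# (E[|X_T(0)|^{-2} |V - X_T(0)|^2])^{1/2}.
rms = math.sqrt(sum(e ** 2 for e in pathwise) / len(pathwise))
print(f"{rms:.4f}")  # 0.0044, matching the table entry up to input rounding
```

The small discrepancy from the tabulated value is only due to the pathwise errors themselves being rounded to four digits.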
\begin{table}[ht] \begin{center} \small \begin{tabular}{|c|c|c|c|c|c|} \hline \makecell{d} & \makecell{Result\\ of the \\ approx.\\ algorithm} & \makecell{Runtime\\ in\\ seconds} & \makecell{Reference\\ solution} & \makecell{Relative\\ pathwise\\ error} & \makecell{Relative\\ $L^2$-error}\\ \hline 1 & 0.4699 & 830.03 & 0.4812 & 0.0236 & \\ 1 & 0.4574 & 827.84 & 0.4781 & 0.0433 & \\ 1 & 0.4719 & 827.02 & 0.4800 & 0.0167 & 0.0274 \\ 1 & 0.4681 & 828.02 & 0.4798 & 0.0243 & \\ 1 & 0.4681 & 828.28 & 0.4783 & 0.0214 & \\ \hline 5 & 0.1984 & 942.68 & 0.2063 & 0.0382 & \\ 5 & 0.2044 & 942.60 & 0.2076 & 0.0155 & \\ 5 & 0.1983 & 944.02 & 0.2058 & 0.0363 & 0.0266 \\ 5 & 0.2027 & 942.89 & 0.2072 & 0.0216 & \\ 5 & 0.2042 & 942.89 & 0.2055 & 0.0066 & \\ \hline 10 & 0.1233 & 944.60 & 0.1271 & 0.0301 & \\ 10 & 0.1246 & 943.11 & 0.1264 & 0.0142 & \\ 10 & 0.1250 & 942.84 & 0.1266 & 0.0130 & 0.0165 \\ 10 & 0.1268 & 941.63 & 0.1279 & 0.0088 & \\ 10 & 0.1269 & 942.62 & 0.1271 & 0.0016 & \\ \hline 20 & 0.0691 & 959.35 & 0.0695 & 0.0053 & \\ 20 & 0.0699 & 961.43 & 0.0714 & 0.0215 & \\ 20 & 0.0726 & 962.18 & 0.0732 & 0.0073 & 0.0117 \\ 20 & 0.0699 & 959.87 & 0.0707 & 0.0119 & \\ 20 & 0.0754 & 962.79 & 0.0753 & 0.0016 & \\ \hline 50 & 0.0283 & 957.60 & 0.0283 & 0.0011 & \\ 50 & 0.0255 & 958.08 & 0.0263 & 0.0279 & \\ 50 & 0.0307 & 955.14 & 0.0297 & 0.0341 & 0.0209 \\ 50 & 0.0257 & 958.00 & 0.0256 & 0.0032 & \\ 50 & 0.0310 & 957.00 & 0.0305 & 0.0149 & \\ \hline \end{tabular} \caption{Numerical simulations for the Zakai equations in \eqref{eq:ex:Zakai}. } \label{table:Zakai} \end{center} \end{table} \section{{\sc Python} source codes for the proposed approximation algorithm} \label{sec:source} In Subsections~\ref{subsec:stoch_heat_code}--\ref{subsec:Zakai_code} below we present the {\sc Python} source codes associated to the numerical simulations in Subsections~\ref{subsec:stoch_heat}--\ref{subsec:Zakai} above.
The following {\sc Python} source code, {\sc Python} code~\ref{code:common} below, is employed in the case of each of the {\sc Python} source codes in Subsections~\ref{subsec:stoch_heat_code}--\ref{subsec:Zakai_code} below. \lstinputlisting[ label = {code:common}, caption = {\it common.py} ]{common.py} \subsection[A {\sc Python} source code associated to Subsection~\ref{subsec:stoch_heat}]{A {\sc Python} source code associated to the numerical simulations in Subsection~\ref{subsec:stoch_heat}} \label{subsec:stoch_heat_code} \lstinputlisting[ label = {code:heat_add}, caption = {\it heat\_equation\_add.py} ]{heat_equation_add.py} \subsection[A {\sc Python} source code associated to Subsection~\ref{subsec:const-coeff}]{A {\sc Python} source code associated to the numerical simulations in Subsection~\ref{subsec:const-coeff}} \label{subsec:const-coeff_code} \lstinputlisting[ label = {code:heat_mul}, caption = {\it heat\_equation\_mul.py} ]{heat_equation_mul.py} \subsection[A {\sc Python} source code associated to Subsection~\ref{subsec:geom_BM}]{A {\sc Python} source code associated to the numerical simulations in Subsection~\ref{subsec:geom_BM}} \label{subsec:geom_BM_code} \lstinputlisting[ label = {code:BS}, caption = {\it black\_scholes.py} ]{black_scholes.py} \subsection[A {\sc Python} source code associated to Subsection~\ref{subsec:Zakai}]{A {\sc Python} source code associated to the numerical simulations in Subsection~\ref{subsec:Zakai}} \label{subsec:Zakai_code} \lstinputlisting[ label = {code:zakai}, caption = {\it zakai\_equation.py} ]{zakai_equation.py} \subsection*{Acknowledgments} This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics M\"unster: Dynamics-Geometry-Structure, by the Schweizerischer Nationalfonds (SNF, Swiss National Science Foundation) through the research project $ 200020\_175699 $ ``Higher order numerical approximation methods for
stochastic partial differential equations'', and by the Nanyang Assistant Professorship Grant (NAP Grant) ``Machine Learning based Algorithms in Finance and Insurance''. \bibliographystyle{acm}
\section{Temperature dependence of XES spectra} In Fig.~\ref{fig:fig1}, we plot XES spectra at 300 K for Ba$_{1-x}$K$_x$Fe$_2$As$_2$ (with x=0.25, 0.4, and 0.6) and BaFe$_{2-x}$Co$_x$As$_2$ (with x=0.085, 0.12, and 0.2). On the bottom row we show the IAD obtained from these spectra. \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \begin{figure} \includegraphics[scale=0.4]{figure_S1.png} \caption{\label{fig:fig1} K$_\beta$ XES at 300 K for Ba$_{1-x}$K$_x$Fe$_2$As$_2$ (a) with x=0.25, 0.4, and 0.6 and BaFe$_{2-x}$Co$_x$As$_2$ (b) with x=0.085, 0.12, and 0.2. The bottom row shows the relative IAD for Ba$_{1-x}$K$_x$Fe$_2$As$_2$ and BaFe$_{2-x}$Co$_x$As$_2$.} \end{figure} \end{document}
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \newcommand{\subsect}[1]{\subsection{#1}} \newcommand{\subsubsect}[1]{\subsubsection{#1}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \def\k{\omega} \def\>#1{\mathbf{#1}} \parskip=1ex \oddsidemargin= 0.5cm \evensidemargin= 0.5cm \parindent=1.5em \textheight=23.0cm \textwidth=15cm \topmargin=-1.0cm \begin{document} \thispagestyle{empty} \hfill \ \vspace{2cm} \begin{center} {\LARGE{\bf{The family of quaternionic quasi-unitary }}} {\LARGE{\bf{Lie algebras and their central extensions}}} \end{center} \bigskip\bigskip \begin{center} Francisco J. Herranz$^\dagger$ and Mariano Santander$^\ddagger$ \end{center} \begin{center} {\it $^\dagger$ Departamento de F\'{\i}sica, E.U. Polit\'ecnica \\ Universidad de Burgos, E--09006 Burgos, Spain} \end{center} \begin{center} {\it $^{\ddagger}$ Departamento de F\'{\i}sica Te\'orica, Universidad de Valladolid \\ E--47011, Valladolid, Spain} \end{center} \bigskip\bigskip\bigskip \begin{abstract} The family of quaternionic quasi-unitary (or quaternionic unitary Cayley--Klein) algebras is described in a unified setting. This family includes the simple algebras $sp(N+1)$ and $sp(p,q)$ in the Cartan series $C_{N+1}$, as well as many non-semisimple real Lie algebras which can be obtained from these simple algebras by particular contractions.
The algebras in this family are realized here in relation to the groups of isometries of quaternionic hermitian spaces of constant holomorphic curvature. This common framework allows us to study many properties of all these Lie algebras simultaneously. In this paper the central extensions of all quasi-simple Lie algebras of the quaternionic unitary Cayley--Klein family are completely determined in arbitrary dimension. It is shown that the second cohomology group is trivial for any Lie algebra of this family, regardless of its dimension. \end{abstract} \newpage \sect{Introduction} This paper has a double purpose. First, it introduces and describes the structure of a family of Lie algebras, the quaternionic quasi-unitary algebras, or quaternionic unitary Cayley--Klein algebras, which include as simple members the algebras in the Cartan series $C_{N+1}$, written in the standard notation as $sp(p,q), \ p+q=N+1$, as well as many non-simple members which can be obtained from the former by a sequence of contractions. The description is also done in relation to the symmetric homogeneous spaces (the quaternionic hermitian spaces of rank one) where these groups act in a natural way. The second and main purpose is to investigate the Lie algebra cohomology of the algebras in this Cayley--Klein (hereafter CK) family, in any dimension. These central extensions have both mathematical interest and physical relevance. Therefore, this part of the paper can be considered a further step in a systematic study of the properties of these families of Lie algebras \cite{graded}--\cite{unitario}, using a formalism which gives a clear view of the behaviour of these properties under contraction; in physical terms, contractions are related to some kind of approximation.
In particular, the central extensions of the algebras in the two other main CK families of Lie algebras (the quasi-orthogonal algebras and the two families of quasi-unitary algebras) have been studied in two previous papers, in the general situation and for any dimension \cite{ortogonal}, \cite{unitario}. We refer to these works for references and for physical motivations. The knowledge of the second cohomology group of a Lie algebra relies on the general solution of a set of linear equations, but in special cases the calculations may be bypassed by using some general results: for instance, the second cohomology group is trivial for semisimple Lie algebras. But once a contraction is made, the semisimple character disappears, and the contracted algebra \emph{might} have non-trivial central extensions. Instead of finding the general solution of the extension equations on a case-by-case basis, our approach (as developed previously for the quasi-orthogonal algebras \cite{ortogonal} and for the quasi-unitary algebras \cite{unitario}) is to do these calculations for a whole family including a large number of algebras simultaneously. In this paper we discuss the `next' family: the quaternionic quasi-unitary one. The advantages of this approach can be summed up as follows: a) it allows us to record, in an easily retrievable form, a large number of results which can be needed in applications, both in mathematics and in physics; and b) it avoids once and for all the case-by-case computation of the central extensions of the algebras included in each family and affords a global view of the interrelations between cohomology and contractions. Section 2 is devoted to the description of the family of quaternionic unitary CK algebras. We show how to obtain these as graded contractions of the compact algebra $u(N+1,{\mathbb H})\equiv sp(N+1)$, and we provide some details on their structure.
These algebras are associated to the quaternionic hermitian spaces (of rank one) with metrics of different signatures and to their contractions, so we devote a part of this section to dwelling on these questions. In section 3 the general solution to the central extension problem for these algebras is given. The result obtained is quite simple to state: all the extensions of any algebra in the quaternionic unitary CK family are trivial. This triviality is already known (Whitehead's lemma) for the simple algebras $u(p, q,{\mathbb H})\equiv sp(p, q)$ in this family, but comes as a surprise for the rather large number of non-semisimple Lie algebras in this CK family, which can be obtained by contracting $u(p, q,{\mathbb H})$. This is also in marked contrast with the results for the central extensions of both the orthogonal and the unitary CK families, where some algebras (particularly the most contracted one) always allow some non-trivial extensions. Finally, some remarks close the paper. \sect{The family of quaternionic unitary CK algebras} To begin with, we consider the compact real form of the Lie algebra in the Cartan series $C_{N+1}$. This compact real form can be realized as the Lie algebra of the complex unitary-symplectic group sometimes denoted as $USp(2(N+1))$ \cite{Gilmore} but more usually referred to shortly as the `symplectic' group, $Sp(N+1)$. The usual convention is to denote this group without any reference to a field to avoid confusion with the true \emph{symplectic} groups over either the reals $Sp(2(N+1),{\mathbb R})$ or over the complex numbers $Sp(2(N+1),{\mathbb C})$; in these last cases the term \emph{symplectic} is properly associated to the symmetry group of an antisymmetric metric.
This double use of the name `symplectic' and of the symbols $Sp$ and $sp$ is rather unfortunate, and following Sudbery \cite{Sud}, we shall change the symbol for one of the families, and use $Sq, sq$ for the unitary-symplectic groups and algebras usually denoted, without any field reference, by $Sp, sp$. The group $Sq(N+1)\equiv USp(2(N+1))$ is the intersection of the complex \emph{unitary} group $U(2(N+1),{\mathbb C})$ and the complex \emph{symplectic} group $Sp(2(N+1),{\mathbb C})$: $$ Sq(N+1)\equiv USp(2(N+1))=U(2(N+1),{\mathbb C})\cap Sp(2(N+1),{\mathbb C}), $$ which is a consequence of the nature of $Sq(N+1)$ as the group of quaternionic matrices leaving invariant a quaternionic hermitian definite positive metric. We recall that all other non-compact real forms in the Cartan series $C_{N+1}$ are the real \emph{symplectic} algebra $sp(2(N+1),{\mathbb R})$, and the quaternionic pseudo-unitary algebras $sq(p,q)$, $p+q=N+1$, which allow a realization as $$ Sq(p,q)\equiv USp(2p, 2q)=U(2p, 2q,{\mathbb C})\cap Sp(2(N+1),{\mathbb C}), $$ and they are the groups of quaternionic matrices leaving invariant a quaternionic hermitian metric of signature $(p,q)$. The Lie algebra $sq(N+1)$ has dimension $2(N+1)^2 +(N+1)$ and is usually realized by $2(N+1)\times 2(N+1)$ complex matrices \cite{Gilmore,Helgason}. The alternative realization which we shall consider in this paper, in accordance to the interpretation of these groups and algebras as quaternionic unitary ones $Sq(N+1) \equiv U(N+1, {\mathbb H})$ \cite{Fulton}, is done by means of \emph{antihermitian} matrices over the quaternionic skew field ${\mathbb H}$: \begin{equation} J_{ab}=-e_{ab}+e_{ba} \qquad M^\alpha_{ab}=i_\alpha(e_{ab}+e_{ba}) \qquad E^\alpha_a=i_\alpha e_{aa} \label{ba} \end{equation} where $a<b$, $a,b=0,1,\dots,N$, $\alpha=1,2,3$; $i_1=i$, $i_2=j$, $i_3=k$ are the usual quaternionic units, and $e_{ab}$ is the $(N+1) \times (N+1)$ matrix with a single 1 entry in row $a$, column $b$. 
Notice that the matrices $J_{ab}$ and $M^\alpha_{ab}$ are traceless, but the trace of $E^\alpha_a$ is a non-zero pure imaginary quaternion, so the realization is by antihermitian quaternionic matrices whose trace has a zero real part. When quaternions are realized as $2\times 2$ complex matrices (see e.g. \cite{Chev}) then (\ref{ba}) reduces to the usual realization of $sq(N+1)$ by $2(N+1) \times 2(N+1)$ complex matrices which are at the same time complex unitary and complex symplectic; we remark that all these matrices are traceless. The multiplication of quaternionic units is encoded in $i_\alpha i_\beta =-\delta_{\alpha\beta}+\sum_{\gamma=1}^3\varepsilon_{\alpha\beta\gamma}i_\gamma $ where $\varepsilon_{\alpha\beta\gamma}$ is the completely antisymmetric unit tensor with $\varepsilon_{123}=1$. This relation allows one to derive the expression for the Lie bracket of two pure quaternionic matrices $X^\alpha=i_\alpha X$, $Y^\beta=i_\beta Y$, where $X$, $Y$ are real matrices, as \begin{equation} [X^\alpha,Y^\beta]=-\delta_{\alpha\beta}[X,Y]+ \sum_{\gamma=1}^3\varepsilon_{\alpha\beta\gamma}i_\gamma\{X,Y\} \end{equation} where both the commutator and the anticommutator $\{X,Y\}=XY+YX$ of the real matrices $X, Y$ appear.
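As a quick numerical sanity check (ours, not from the paper), one can realize the quaternionic units by $2\times 2$ complex matrices and embed a pure quaternionic matrix $i_\alpha X$ as a Kronecker product; for $\alpha=1$, $\beta=2$ the bracket formula above predicts $[X^1,Y^2]=\{X,Y\}\,i_3$:

```python
import numpy as np

# Quaternion units i_1, i_2, i_3 as 2x2 complex matrices (one standard choice;
# any representation with i_1 i_2 = i_3, etc., would do).
Q = {1: np.array([[1j, 0], [0, -1j]]),
     2: np.array([[0, 1], [-1, 0]], dtype=complex),
     3: np.array([[0, 1j], [1j, 0]])}

# Multiplication rule of the quaternionic units: i_1 i_2 = i_3.
assert np.allclose(Q[1] @ Q[2], Q[3])

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))  # real matrices X, Y
Y = rng.standard_normal((3, 3))

# Pure quaternionic matrices X^1 = i_1 X and Y^2 = i_2 Y via Kronecker products.
X1 = np.kron(X, Q[1])
Y2 = np.kron(Y, Q[2])

# [X^1, Y^2] = -delta_{12} [X, Y] + eps_{123} {X, Y} i_3 = {X, Y} i_3.
bracket = X1 @ Y2 - Y2 @ X1
assert np.allclose(bracket, np.kron(X @ Y + Y @ X, Q[3]))
```

The check relies on the mixed-product property $\mathrm{kron}(X,Q_\alpha)\,\mathrm{kron}(Y,Q_\beta)=\mathrm{kron}(XY,Q_\alpha Q_\beta)$, which holds because $X$ and $Y$ are real.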
Using this formula, the commutation relations of $sq(N+1)$ in the basis (\ref{ba}) read \begin{equation} \begin{array}{lll} [J_{ab},J_{ac}] = J_{bc} &\qquad [J_{ab},J_{bc}] =-J_{ac} &\qquad [J_{ac},J_{bc}] = J_{ab}\cr [M_{ab}^\alpha,M_{ac}^\alpha] = J_{bc} &\qquad [M_{ab}^\alpha,M_{bc}^\alpha] = J_{ac} &\qquad [M_{ac}^\alpha,M_{bc}^\alpha] = J_{ab} \cr [J_{ab},M_{ac}^\alpha] = M_{bc}^\alpha &\qquad [J_{ab},M_{bc}^\alpha] =-M_{ac}^\alpha &\qquad [J_{ac},M_{bc}^\alpha] =-M_{ab}^\alpha \cr [M_{ab}^\alpha,J_{ac}] =-M_{bc}^\alpha &\qquad [M_{ab}^\alpha,J_{bc}] =-M_{ac}^\alpha &\qquad [M_{ac}^\alpha,J_{bc}] = M_{ab}^\alpha \cr [J_{ab},J_{de}]= 0 &\qquad [M_{ab}^\alpha,M_{de}^\alpha] =0 &\qquad [J_{ab},M_{de}^\alpha] =0 \cr \multicolumn{3}{l}{ [J_{ab},E_d^\alpha] = ( \delta_{ad} -\delta_{bd})M_{ab}^\alpha \qquad [M_{ab}^\alpha,E_d^\alpha] = -( \delta_{ad} -\delta_{bd}) J_{ab}}\cr \multicolumn{3}{l}{ [J_{ab},M_{ab}^\alpha] = 2(E_{b}^\alpha-E_{a}^\alpha) \qquad [E_{a}^\alpha,E_b^\alpha] = 0 }\cr \end{array} \label{bb} \end{equation} \begin{equation} \begin{array}{l} [M_{ab}^\alpha,M_{ac}^\beta] = \varepsilon_{\alpha\beta\gamma}M_{bc}^\gamma \qquad [M_{ab}^\alpha,M_{bc}^\beta] = \varepsilon_{\alpha\beta\gamma}M_{ac}^\gamma \qquad [M_{ac}^\alpha,M_{bc}^\beta] = \varepsilon_{\alpha\beta\gamma}M_{ab}^\gamma \cr [M_{ab}^\alpha,M_{de}^\beta] = 0\qquad [M_{ab}^\alpha,M_{ab}^\beta] = 2\varepsilon_{\alpha\beta\gamma}(E_a^\gamma + E_b^\gamma) \cr [M_{ab}^\alpha,E_{d}^\beta] =(\delta_{ad}+\delta_{bd}) \varepsilon_{\alpha\beta\gamma}M_{ab}^\gamma \qquad [E_{a}^\alpha,E_{b}^\beta] =2\delta_{ab}\varepsilon_{\alpha\beta\gamma}E_{a}^\gamma \end{array} \label{bc} \end{equation} where hereafter the following notational conventions are assumed: \begin{itemize} \item Whenever three indices $a$, $b$, $c$ appear, they are always assumed to verify $a<b<c$. 
\item Whenever three indices $a$, $b$, $d$ appear, $a<b$ is assumed but the index $d$ is arbitrary, and it might coincide with either $a$ or $b$. \item Whenever four indices $a$, $b$, $d$, $e$ appear, $a<b$, $d<e$ and all of them are assumed to be different. \item Whenever three quaternionic indices $\alpha$, $\beta$, $\gamma$ appear, they are also assumed to be different (so they are always some permutation of $123$). \item There is no implied sum over repeated indices; in particular there is no sum in $\gamma$ in expressions like $\varepsilon_{\alpha\beta\gamma}X^\gamma$. \end{itemize} This matrix realization of the Lie algebra $sq(N+1)$ clearly displays the existence of several subalgebras. On the one hand, the $\frac 12 N(N+1)$ generators $J_{ab}$ $(a,b=0,1,\dots,N)$ close an orthogonal algebra $so(N+1)$ whose non-zero commutation rules are written in the first row of (\ref{bb}). On the other hand, for each \emph{fixed} $\alpha=1,2,3$, the $(N+1)^2$ generators $\{J_{ab},M^\alpha_{ab},E^\alpha_{a}\}$ ($a,b=0,1,\dots,N;\ a<b$) give rise to an algebra isomorphic to the unitary algebra $u(N+1)$ with commutators given by (\ref{bb}); we denote these subalgebras by $u^\alpha (N+1)$. Hence $sq(N+1)$ contains \emph{three} subalgebras isomorphic to $u(N+1)$, whose intersection is a subalgebra $so(N+1)$. The family of algebras we study in this paper can be obtained as graded contractions \cite{Montigny,Moody} from $sq(N+1)$.
The algebra $sq(N+1)$ can be endowed with a grading by a group ${\mathbb Z}_2^{\otimes N}$ constituted by $2^N$ involutive automorphisms $S_{\cal S}$ defined by \begin{eqnarray} &&S_{\cal S} J_{ab} = (-1)^{\chi_{\cal S}(a) + \chi_{\cal S}(b)} J_{ab} \cr &&S_{\cal S} M^\alpha_{ab} = (-1)^{\chi_{\cal S}(a) + \chi_{\cal S}(b)}M^\alpha_{ab} \qquad S_{\cal S} E^\alpha_{a}= E^\alpha_{a}\qquad \alpha=1,2,3; \end{eqnarray} where ${\cal S}$ denotes any subset of the set of indices $\{0, 1, \dots, N\}$, and $\chi_{\cal S}(a)$ denotes the characteristic function over ${\cal S}$. A particular solution of the ${\mathbb Z}_2^{\otimes N}$ graded contractions of $sq(N+1)$ leads to a family of Lie algebras which are called quaternionic unitary CK algebras or quaternionic quasi-unitary Lie algebras \cite{tesis,goslar}. This family comprises the simple quaternionic unitary and pseudo-unitary algebras $sq(p,q)$ $(p+q=N+1)$ in the Cartan series $C_{N+1}$ as well as many non-simple real Lie algebras which can be obtained from the former by contractions. Collectively, all these algebras preserve some properties related to simplicity, so they belong to the class of so-called `quasi-simple' Lie algebras \cite{Rozenfeld1, Rozenfeld2}, which explains the use of the prefix quasi in their name. Overall this is very similar to the situation of the families of quasi-orthogonal algebras (with $so(N+1)$ as the initial Lie algebra \cite{graded,gradedb}) or to the families of quasi-unitary or quasi-special unitary algebras over the complex numbers (starting from either $u(N+1)$ or $su(N+1)$ \cite{unitario}). The quaternionic unitary CK algebras can be described by means of $N$ real coefficients $\omega_a$ ($a=1,\dots,N$) and are denoted collectively as $sq_{\omega_1,\dots,\omega_N}(N+1)$, or in an abbreviated form, as $sq_\omega(N+1)$ where $\omega$ stands for $\omega=(\omega_1,\dots,\omega_N)$. 
Introducing the two-index coefficients $\omega_{ab}$ as \begin{equation} \omega_{ab}:=\omega_{a+1}\omega_{a+2}\cdots\omega_b \qquad a,b=0,1,\dots,N \quad a<b \qquad \omega_{aa}:=1 \label{bf} \end{equation} then the commutation relations of the generic CK algebra in the family $sq_\omega(N+1)$ are given by \cite{tesis} \begin{equation} \begin{array}{lll} [J_{ab},J_{ac}] = \omega_{ab}J_{bc} &\qquad [J_{ab},J_{bc}] =-J_{ac} &\qquad [J_{ac},J_{bc}] = \omega_{bc}J_{ab}\cr [M_{ab}^\alpha,M_{ac}^\alpha] =\omega_{ab} J_{bc} &\qquad [M_{ab}^\alpha,M_{bc}^\alpha] = J_{ac} &\qquad [M_{ac}^\alpha,M_{bc}^\alpha] =\omega_{bc} J_{ab} \cr [J_{ab},M_{ac}^\alpha] = \omega_{ab}M_{bc}^\alpha &\qquad [J_{ab},M_{bc}^\alpha] =-M_{ac}^\alpha &\qquad [J_{ac},M_{bc}^\alpha] =-\omega_{bc}M_{ab}^\alpha \cr [M_{ab}^\alpha,J_{ac}] =-\omega_{ab}M_{bc}^\alpha &\qquad [M_{ab}^\alpha,J_{bc}] =-M_{ac}^\alpha &\qquad [M_{ac}^\alpha,J_{bc}] = \omega_{bc}M_{ab}^\alpha \cr [J_{ab},J_{de}]= 0 &\qquad [M_{ab}^\alpha,M_{de}^\alpha] =0 &\qquad [J_{ab},M_{de}^\alpha] =0 \cr \multicolumn{3}{l}{ [J_{ab},E_d^\alpha] = ( \delta_{ad} -\delta_{bd})M_{ab}^\alpha \qquad [M_{ab}^\alpha,E_d^\alpha] = -( \delta_{ad} -\delta_{bd}) J_{ab}}\cr \multicolumn{3}{l}{ [J_{ab},M_{ab}^\alpha] = 2\omega_{ab}(E_{b}^\alpha-E_{a}^\alpha) \qquad [E_{a}^\alpha,E_b^\alpha] = 0 }\cr \end{array} \label{bd} \end{equation} \begin{equation} \begin{array}{l} [M_{ab}^\alpha,M_{ac}^\beta] = \omega_{ab}\varepsilon_{\alpha\beta\gamma}M_{bc}^\gamma \qquad [M_{ab}^\alpha,M_{bc}^\beta] = \varepsilon_{\alpha\beta\gamma}M_{ac}^\gamma \qquad [M_{ac}^\alpha,M_{bc}^\beta] = \omega_{bc}\varepsilon_{\alpha\beta\gamma}M_{ab}^\gamma \cr [M_{ab}^\alpha,M_{de}^\beta] = 0\qquad [M_{ab}^\alpha,M_{ab}^\beta] = 2\omega_{ab}\varepsilon_{\alpha\beta\gamma}(E_a^\gamma + E_b^\gamma) \cr [M_{ab}^\alpha,E_{d}^\beta] =(\delta_{ad}+\delta_{bd}) \varepsilon_{\alpha\beta\gamma}M_{ab}^\gamma \qquad [E_{a}^\alpha,E_{b}^\beta] 
=2\delta_{ab}\varepsilon_{\alpha\beta\gamma}E_{a}^\gamma \end{array} \label{be} \end{equation} where we adhere to the notational conventions given after (\ref{bc}). The pattern of subalgebras previously discussed for the compact form $sq(N+1)$ clearly holds for any member of the complete family. The quaternionic unitary CK algebra $sq_\omega(N+1)$ contains also as Lie subalgebras an orthogonal CK algebra $so_\omega(N+1)$ \cite{tesis,ortogonal} and \emph{three} unitary CK algebras $u_\omega^\alpha (N+1)$ \cite{tesis,unitario} where $\alpha=1, 2, 3$; the commutation relations of the former correspond to the first row of (\ref{bd}) and those of the latter are given by (\ref{bd}) (for an index $\alpha$ fixed). Hence we find the sequence \begin{equation} so_\omega(N+1)\subset u_\omega^\alpha (N+1)\subset sq_\omega(N+1). \end{equation} \subsect{The quaternionic unitary CK groups} The matrix realization (\ref{ba}) allows a natural interpretation of the quaternionic unitary CK algebras as the Lie algebras of the motion groups of the homogeneous symmetric spaces with a quaternionic hermitian metric (the two-point homogeneous spaces of quaternionic type and rank one). Let us consider the space ${\mathbb H}^{N+1}$ endowed with a hermitian (sesqui)linear form $\langle \ .\ |\ . \ \rangle_\omega : {\mathbb H}^{N+1}\times {\mathbb H}^{N+1}\to {\mathbb H}$ defined by \begin{equation} \langle \>a|\>b \rangle_\omega:= \bar a^0 b^0 + \bar a^1 \omega_1 b^1 + \bar a^2 \omega_1 \omega_2 b^2 + \dots + \bar a^N \omega_1\cdots\omega_N b^N = \sum_{i=0}^N \bar a^i\omega_{0i} b^i \label{cb} \end{equation} where $\>a,\>b\in {\mathbb H}^{N+1}$ and $\bar a^i$ means the quaternionic conjugation of the component $a^i$. For the moment, we assume that we are in the generic case with all $\omega_a\ne 0$. 
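From the definition (\ref{bf}) the two-index coefficients obey the composition rule $\omega_{ac}=\omega_{ab}\,\omega_{bc}$ for $a\le b\le c$, which is used implicitly throughout the commutation relations above; a minimal sketch (the helper name `omega2` is our own):

```python
from functools import reduce
from operator import mul

def omega2(omega, a, b):
    """omega_{ab} = omega_{a+1} * ... * omega_b for a <= b (omega_{aa} = 1)."""
    return reduce(mul, omega[a:b], 1)  # omega[a:b] = [omega_{a+1}, ..., omega_b]

# Example with N = 4 and one vanishing coefficient (a contracted algebra).
omega = [1, -1, 0, 2]  # omega_1, omega_2, omega_3, omega_4

# Convention omega_{aa} = 1 and composition rule omega_{ac} = omega_{ab} omega_{bc}.
assert omega2(omega, 2, 2) == 1
for a in range(5):
    for b in range(a, 5):
        for c in range(b, 5):
            assert omega2(omega, a, c) == omega2(omega, a, b) * omega2(omega, b, c)
```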
The underlying metric is provided by the matrix \begin{equation} {{\cal I}_\k} = {\mbox{diag}}\ (1,\, \omega_{01},\,\omega_{02},\dots,\,\omega_{0N}) = {\mbox{diag}}\ (1,\, \omega_1,\,\omega_1\omega_2,\dots,\,\omega_1\cdots\omega_N) \label{ca} \end{equation} and the CK group $Sq_{\omega_1,\dots,\omega_N}(N+1)\equiv Sq_{\omega}(N+1)$ is defined as the group of linear isometries of this hermitian metric over a quaternionic space. Thus the isometry condition for an element $U$ of the Lie group \begin{equation} \langle U \>a| U \>b \rangle_\omega= \langle \>a| \>b \rangle_\omega \qquad \forall\, \>a,\>b\in {\mathbb H}^{N+1}, \label{cc} \end{equation} leads to the following relation \begin{equation} U^\dagger{{\cal I}_\k} U={{\cal I}_\k} \qquad \forall U\in Sq_{\omega}(N+1) \label{cd} \end{equation} which for the Lie algebra implies \begin{equation} X^\dagger{{\cal I}_\k} +{{\cal I}_\k} X=0 \qquad \forall X\in sq_{\omega}(N+1). \label{ce} \end{equation} From this equation, it is clear that the quaternionic unitary CK algebra is generated by the following $(N+1)\times (N+1)$ ${{\cal I}_\k}$-antihermitian matrices over ${\mathbb H}$ (cf. (\ref{ba})) \begin{equation} J_{ab}=-\omega_{ab}e_{ab}+e_{ba} \qquad M^\alpha_{ab}=i_\alpha(\omega_{ab}e_{ab}+e_{ba}) \qquad E_a^\alpha=i_\alpha e_{aa}. \label{cf} \end{equation} These matrices can be checked to satisfy the commutation relations (\ref{bd}) and (\ref{be}). When any of the constants $\omega_a$ is equal to zero, then the set of linear isometries of the hermitian metric over the quaternions (\ref{cc}) is larger than the group generated by (\ref{cf}), though in these cases there exist additional geometric structures in ${\mathbb H}^{N+1}$, which are related to the existence of invariant foliations, and the proper definition of the automorphism group for these structures leads again to the matrix Lie algebra generated by (\ref{cf}) with commutation relations (\ref{bd}) and (\ref{be}).
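As an illustration of such a check (our own sketch, not part of the paper), the first relation in (\ref{bd}), $[J_{ab},J_{ac}]=\omega_{ab}J_{bc}$, can be verified directly from the real generators of the realization (\ref{cf}) for $N+1=3$, including a contracted case with a vanishing coefficient:

```python
import numpy as np

def e(a, b, n=3):
    """Matrix unit e_{ab}: a single 1 entry in row a, column b."""
    m = np.zeros((n, n))
    m[a, b] = 1.0
    return m

def omega2(omega, a, b):
    """Two-index coefficient omega_{ab} = omega_{a+1} * ... * omega_b."""
    out = 1.0
    for s in range(a + 1, b + 1):
        out *= omega[s - 1]  # omega[0] holds omega_1
    return out

def J(omega, a, b, n=3):
    """Generator of the realization (cf): J_{ab} = -omega_{ab} e_{ab} + e_{ba}."""
    return -omega2(omega, a, b) * e(a, b, n) + e(b, a, n)

a, b, c = 0, 1, 2
# Compact, contracted (omega_2 = 0), and pseudo-unitary-type coefficient choices.
for omega in ([1.0, 1.0], [1.0, 0.0], [-1.0, 2.0]):
    lhs = J(omega, a, b) @ J(omega, a, c) - J(omega, a, c) @ J(omega, a, b)
    rhs = omega2(omega, a, b) * J(omega, b, c)
    assert np.allclose(lhs, rhs)
```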
The action of the group $Sq_{\omega}(N+1)$ in ${\mathbb H}^{N+1}$ is not transitive, and the `sphere' with equation \begin{equation} \langle \>x|\>x \rangle_\omega:= \sum_{i=0}^N \bar x^i\omega_{0i} x^i =1 \label{cg} \end{equation} is stable. However, if we take $O=(1, 0, \dots, 0)$ as a reference point in this sphere, the realization (\ref{cf}) shows that the isotropy subgroup of $O$ is $Sq_{\omega_2,\omega_3, \dots, \omega_N}(N)$, and the isotropy subgroup of the \emph{ray} of $O$ is $Sq(1)\otimes Sq_{\omega_2,\omega_3, \dots, \omega_N}(N)$ (note that, the quaternions being non-commutative, a choice of left or right multiplication by scalars is required). Here the algebra $sq(1)$ of the subgroup $Sq(1)$ can be identified with the Lie algebra of automorphisms of the quaternions, generated by the three matrices \begin{equation} I^\alpha=i_\alpha \sum_{a=0}^N e_{aa}\qquad\alpha=1,2,3 \end{equation} which can be identified with the three quaternionic units. We note in passing that these are the elements of the Lie algebra which are unavoidably realized by matrices with non-zero pure imaginary trace, as all the generators $E_a^\alpha$ can be expressed in terms of zero trace combinations (say $B_{l}^\alpha \equiv E_{l-1}^\alpha - E_l^\alpha, \ l=1, \dots, N$) and the three $I^\alpha$. In this way we find the quaternionic hermitian homogeneous spaces as associated to the quaternionic unitary family of CK groups: \begin{equation} Sq_{\omega_1, \omega_2, \omega_3, \dots, \omega_N}(N+1)/\big( Sq(1) \otimes Sq_{\omega_2,\omega_3, \dots, \omega_N}(N) \big) . \label{ch} \end{equation} For fixed $\omega_1, \omega_2, \omega_3, \dots, \omega_N$ this space, which has real dimension $4N$, has a natural real quadratic metric (either riemannian, pseudoriemannian or degenerate `riemannian'), coming from the real part of the quaternionic hermitian product in the ambient space.
At the origin and in an adequate basis, this metric is given by the diagonal matrix with entries $(1,\,\omega_2,\,\omega_2\omega_3,\dots,\,\omega_2\cdots\omega_N)$, each entry repeated four times. The three well-known hermitian elliptic, euclidean and hyperbolic quaternionic spaces, of constant holomorphic curvature $4K$ ($K>0$, $K=0$ or $K<0$, respectively) appear in this family as associated to the special values $\omega_1=K$ and $\omega_2=\omega_3=\dots=\omega_N=1$, where the metric is riemannian (positive definite). All CK hermitian spaces of quaternionic type with $\omega_1=K$ have constant holomorphic curvature $4K$, and the signature (and/or the eventual degeneracy) of the metric is determined by the remaining constants $\omega_2,\omega_3,\dots,\omega_N$. When all these constants are different from zero, but some are negative, the metric is pseudoriemannian (indefinite and not degenerate), and when some of the constants $\omega_2, \omega_3, \dots, \omega_N$ vanish the metric is degenerate. \subsect{Structure of the quaternionic unitary CK algebras} As each real coefficient $\omega_a$ can be positive, negative or zero, the quaternionic unitary CK family $sq_{\omega}(N+1)$ includes $3^N$ Lie algebras. Semisimple algebras appear when all the coefficients $\omega_a$ are different from zero: these are the algebras $sq(p,q)$ in the Cartan series $C_{N+1}$, where $p$ and $q$ $(p+q=N+1)$ are the numbers of positive and negative terms in the matrix ${{\cal I}_\k}$ (\ref{ca}). If we set all $\omega_a=1$ we recover the initial compact algebra $sq(N+1)$. When one or more coefficients $\omega_a$ vanish the CK algebra turns out to be a non-semisimple Lie algebra; the vanishing of one (or several) coefficients $\omega_a$ is equivalent to performing an In\"on\"u--Wigner contraction (or a sequence of them) \cite{IW,Weimar}.
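Counting the basis generators $\{J_{ab}, M^\alpha_{ab}, E^\alpha_a\}$ gives the common dimension of every member of the family, which coincides with the dimension $(N+1)(2N+3)$ of the simple algebras in the Cartan series $C_{N+1}$, since contractions do not change the dimension. A minimal sketch of the count:

```python
def dim_sq(N):
    """Dimension of sq_omega(N+1), counted from the basis {J_ab, M^alpha_ab, E^alpha_a}."""
    dim_J = N * (N + 1) // 2          # J_ab with a < b
    dim_M = 3 * N * (N + 1) // 2      # M^alpha_ab, alpha = 1, 2, 3
    dim_E = 3 * (N + 1)               # E^alpha_a
    return dim_J + dim_M + dim_E

def dim_cartan_C(n):
    """Dimension of the simple Lie algebras in the Cartan series C_n."""
    return n * (2 * n + 1)

# Contractions preserve the dimension, so every one of the 3^N members of the
# family sq_omega(N+1) has the same dimension as sq(p, q) in C_{N+1}.
assert all(dim_sq(N) == dim_cartan_C(N + 1) for N in range(1, 20))
```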
Some of the quaternionic unitary CK algebras are isomorphic; for instance, the isomorphism \begin{equation} sq_{\omega_1,\omega_2,\dots,\omega_{N-1},\omega_N}(N+1)\simeq sq_{\omega_N,\omega_{N-1},\dots,\omega_2,\omega_1}(N+1) \label{na} \end{equation} (that interchanges $\omega_{ab}\leftrightarrow \omega_{N-b,N-a}$) is provided by the map \begin{equation} \begin{array}{ll} J_{ab}\to J'_{ab}=-J_{N-b,N-a}&\cr M^1_{ab}\to M'^1_{ab}=-M^2_{N-b,N-a} &\qquad E^1_{a}\to E'^1_{a}=- E^2_{N-a} \cr M^2_{ab}\to M'^2_{ab}=-M^1_{N-b,N-a} &\qquad E^2_{a}\to E'^2_{a}=- E^1_{N-a} \cr M^3_{ab}\to M'^3_{ab}=-M^3_{N-b,N-a} &\qquad E^3_{a}\to E'^3_{a}=- E^3_{N-a} . \end{array} \label{nb} \end{equation} Each algebra in the family of quaternionic unitary CK algebras has many subalgebras isomorphic to orthogonal, unitary, or special unitary CK algebras, as well as many subalgebras isomorphic to quaternionic unitary algebras in the family $sq_\omega(M+1)$ with $M<N$. A clear way to describe this is to denote by $X_{ab}$ the four generators $\{J_{ab},M^\alpha_{ab}\}$ $(\alpha=1,2,3)$, by $E_a$ the set of three generators $E_a^\alpha$, and arrange the basis generators of $sq_{\omega}(N+1)$ as follows: \begin{center} \begin{tabular}{ccccc|ccccc} $E_0$& $X_{01} $&$ X_{02} $&$\ldots$&$X_{0\, a-1} $& $X_{0a}$&$X_{0\, a+1}$& $\ldots$&$X_{0N}$\\ &$E_1$& $ X_{12} $&$\ldots$&$X_{1\, a-1} $& $X_{1a}$&$X_{1\, a+1}$ &$\ldots$&$X_{1N}$\\ & &$\ddots$&$ $&$\vdots$& $\vdots$&$\vdots$& &$\vdots$\\ & &$ $&$E_{a-2}$&$X_{a-2\,a-1}$& $X_{a-2\,a}$&$X_{a-2\,a+1}$ &$\ldots$&$X_{a-2\,N}$\\ & &$ $& &$E_{a-1}$& $X_{a-1\,a}$&$X_{a-1\,a+1}$& $\ldots$&$X_{a-1\,N}$\\ \cline{6-9} & & &\multicolumn{2}{c}{\,} &$E_{a}$&$X_{a\,a+1}$& $\ldots$&$X_{a N}$\\ & & &\multicolumn{2}{c}{\,} & & $\ddots$& $$& $\vdots$\\ & & &\multicolumn{2}{c}{\,} && & $E_{N-1}$&$X_{N-1\,N}$\\ & & &\multicolumn{2}{c}{\,} & & && $E_{N}$\\ \end{tabular} \end{center} A Cartan subalgebra is made up of the $N+1$ generators $E_{0}^3, E_{1}^3, \dots, E_{N}^3$ 
(in the outermost diagonal). In this arrangement the generators to the left and below the rectangle span subalgebras $sq_{\omega_1,\dots,\omega_{a-1}}(a)$ and $sq_{\omega_{a+1},\dots,\omega_N}(N+1-a) $ respectively, while the generators inside the rectangle do not span a subalgebra unless $\omega_a=0$ (and in this case this is an abelian subalgebra). The unitary subalgebras $u_\omega^\alpha(N+1) $ appear in a similar way by keeping only $J_{ab}$, a single $M_{ab}^\alpha$ out of each $X_{ab}$ and a single $E_a^\alpha$ out of each set $E_a$ (for a fixed $\alpha$). By keeping only $J_{ab}$ we get the $so_\omega(N+1)$ subalgebra. If a coefficient $\omega_a=0$, then the contracted algebra has a semidirect structure \begin{equation} sq_{\omega_1,\dots,\omega_{a-1},\omega_a=0,\omega_{a+1},\dots,\omega_N}(N+1) \equiv t \odot ( sq_{\omega_1,\dots,\omega_{a-1}}(a) \oplus sq_{\omega_{a+1},\dots,\omega_N}(N+1-a)) \label{ma} \end{equation} where $t$ is spanned by the generators inside the rectangle (it is an abelian subalgebra of dimension $4a(N+1-a)$), while $sq_{\omega_1,\dots,\omega_{a-1}}(a)$ and $sq_{\omega_{a+1},\dots,\omega_N}(N+1-a)$ are two quaternionic unitary CK subalgebras spanned by the generators in the triangles to the left and below the rectangle. When there are several coefficients $\omega_a=0$ the contracted algebra has simultaneously several semidirect structures (\ref{ma}). 
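For $N=1$ and $a=1$ the rectangle reduces to the single set $X_{01}=\{J_{01}, M^\alpha_{01}\}$, whose four generators commute with one another precisely when $\omega_1=0$, spanning the abelian subalgebra $t$ of dimension $4a(N+1-a)=4$. A small numerical sketch (illustrative only; the $2\times 2$ complex realization of the quaternion units is an assumption of the sketch):

```python
import itertools
import numpy as np

# 2x2 complex realization of the quaternion units (assumption of the sketch).
ONE = np.eye(2, dtype=complex)
IQ = [np.array([[1j, 0], [0, -1j]]),
      np.array([[0, 1], [-1, 0]], dtype=complex),
      np.array([[0, 1j], [1j, 0]])]

def e(a, b):
    m = np.zeros((2, 2))
    m[a, b] = 1.0
    return m

def rectangle(omega):
    """The set X_01 = {J_01, M^alpha_01} of sq_omega(2), realized as in (cf)."""
    J01 = np.kron(-omega * e(0, 1) + e(1, 0), ONE)
    return [J01] + [np.kron(omega * e(0, 1) + e(1, 0), iq) for iq in IQ]

def is_abelian(gens):
    return all(np.allclose(X @ Y, Y @ X)
               for X, Y in itertools.combinations(gens, 2))

# For omega_1 = 0 the rectangle spans the abelian subalgebra t of dimension 4;
# for omega_1 != 0 its elements no longer commute (e.g. [J_01, M^1_01] != 0).
assert is_abelian(rectangle(0.0))
assert not is_abelian(rectangle(1.0))
```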
Notice that when $\omega_1=0$ the contracted algebra has the structure \begin{equation} sq_{0,\omega_2,\dots,\omega_N}(N+1) \equiv t_{4N} \odot ( sq(1) \oplus sq_{\omega_{2},\dots,\omega_N}(N)) \label{mb} \end{equation} and here the subscript $4N$ in $t$ is the real dimension of the flat homogeneous space (\ref{ch}), which can be identified with ${\mathbb H}^N$ endowed with a flat metric given, over ${\mathbb H}$, by the diagonal matrix $(1, \omega_2, \omega_2\omega_3, \dots, \omega_2\omega_3\cdots\omega_N)$; when all these are positive this Lie algebra can be called the inhomogeneous quaternionic unitary algebra $isq(N)$. \sect{Central extensions} After having described the structure of the quaternionic unitary CK algebras, we now turn to the second goal of this paper: to give a complete description of all possible central extensions of the algebras in the quaternionic unitary CK family. The outcome of this study is simple to state: in any dimension, and for all quaternionic unitary CK algebras, no matter how many $\omega_a$ are equal to zero, there are no non-trivial central extensions. For any $r$-dimensional Lie algebra with generators $\{X_1,\dots,X_r\}$ and structure constants $C_{ij}^k$, a generic central extension by the one-dimensional algebra generated by $\Xi$ will have $(r+1)$ generators $( X_i,\Xi) $ with commutation relations given by \begin{equation} [X_i,X_j]=\sum_{k=1}^r C_{ij}^k X_k + \xi_{ij} \Xi \qquad [\Xi,X_i]=0 . \label{de} \end{equation} The extension coefficients or central charges $\xi_{ij}$ must be antisymmetric in the indices $i,j$, $\xi_{ji}=-\xi_{ij}$, and must fulfil the following conditions coming from the Jacobi identities for the generators $X_i, X_j, X_l$ in the extended Lie algebra: \begin{equation} \sum_{k=1}^r \left( C_{ij}^{k}\xi_{kl}+C_{jl}^{k}\xi_{ki}+C_{li}^{k}\xi_{kj} \right) =0 .
\label{df} \end{equation} If for a set of arbitrary real numbers $\mu_i$ we perform a change of generators: \begin{equation} X_i\to X'_i=X_i+\mu_i\Xi, \label{ChangeGens} \end{equation} the commutation rules for the generators $\{X'_i\}$ are given by the expressions (\ref{de}) with a new set of $\xi_{ij}' = \xi_{ij} -\sum_{k=1}^r C_{ij}^k \mu_k $, where $\delta\mu(X_i, X_j) = \sum_{k=1}^r C_{ij}^k \mu_k$ is the two-coboundary generated by $\mu$. Extension coefficients differing by a two-coboundary correspond to equivalent extensions; and those extension coefficients which are a two-coboundary $\xi_{ij}= -\sum_{k=1}^r C_{ij}^k \mu_k $ correspond to trivial extensions; the classes of equivalence of non-trivial two-cocycles determine the second cohomology group of the Lie algebra. \subsect{Central extensions of the unitary CK subalgebras} In order to simplify further computations, we first state the result about the structure of the central extensions of the unitary CK algebra $u_\omega(N+1)$\cite{unitario}, which will naturally appear when studying the extensions of the quaternionic unitary CK algebras, because each $sq_\omega(N+1)$ contains three such unitary CK subalgebras. 
\medskip \noindent {\bf Theorem 3.1.} \noindent The commutation relations of any central extension $\overline{u}_\omega^\alpha (N+1)$ of the unitary CK algebra ${u}_\omega^\alpha (N+1)$ with generators $\{J_{ab},M^\alpha_{ab},E^\alpha_{a}\}$ ($a,b=0,1,\dots,N$ and quaternionic index $\alpha$ fixed) by the one-dimensional algebra generated by $\Xi$ are \begin{equation} \begin{array}{ll} [J_{ab},J_{ac}] =\omega_{ab}(J_{bc}+h_{bc}^\alpha\Xi) &\qquad [M_{ab}^\alpha,M_{ac}^\alpha] =\omega_{ab}(J_{bc}+h_{bc}^\alpha\Xi)\cr [J_{ab},J_{bc}] =-(J_{ac} +h_{ac}^\alpha\Xi) &\qquad [M_{ab}^\alpha,M_{bc}^\alpha] =J_{ac}+h_{ac}^\alpha\Xi\cr [J_{ac},J_{bc}] =\omega_{bc}(J_{ab} + h_{ab}^\alpha\Xi) &\qquad [M_{ac}^\alpha,M_{bc}^\alpha] =\omega_{bc}(J_{ab} + h_{ab}^\alpha\Xi) \cr [J_{ab},J_{de}]=0 &\qquad [M_{ab}^\alpha,M_{de}^\alpha] =0\cr [J_{ab},M_{ac}^\alpha] =\omega_{ab}(M_{bc}^\alpha+g_{bc}^\alpha \Xi) &\qquad [M_{ab}^\alpha,J_{ac}] =-\omega_{ab}(M_{bc}^\alpha + g_{bc}^\alpha\Xi) \cr [J_{ab},M_{bc}^\alpha] =-(M_{ac}^\alpha+ g_{ac}^\alpha \Xi) &\qquad [M_{ab}^\alpha,J_{bc}] =-(M_{ac}^\alpha + g_{ac}^\alpha \Xi) \cr [J_{ac},M_{bc}^\alpha] =-\omega_{bc}(M_{ab}^\alpha +g_{ab}^\alpha\Xi) &\qquad [M_{ac}^\alpha,J_{bc}] =\omega_{bc}(M_{ab}^\alpha + g_{ab}^\alpha \Xi) \cr \multicolumn{2}{l}{ [J_{ab},E_d^\alpha] = ( \delta_{ad} -\delta_{bd})(M_{ab}^\alpha + g_{ab}^\alpha \Xi) \qquad [J_{ab},M_{de}^\alpha] =0}\cr \multicolumn{2}{l}{ [M_{ab}^\alpha,E_d^\alpha] = -( \delta_{ad} -\delta_{bd}) (J_{ab} + h_{ab}^\alpha\Xi)} \end{array} \label{ddaa} \end{equation} \begin{equation} [J_{ab},M^\alpha_{ab}] = 2\omega_{ab}(E_b^\alpha - E_a^\alpha)+ f_{ab}^\alpha \Xi \qquad [E_a^\alpha,E_b^\alpha]=e_{a,b}^{\alpha}\Xi\qquad a< b \label{da} \end{equation} where \begin{equation} f_{ab}^\alpha =\sum_{s=a+1}^b \omega_{a\,s-1}\omega_{sb}f_{s-1\,s}^\alpha. 
\label{db} \end{equation} The extension is characterized by the following types of extension coefficients: \noindent {\bf Type I:} $N(N+1)/2$ coefficients $g_{ab}^\alpha$ and $N(N+1)/2$ coefficients $h_{ab}^\alpha$ ($a<b$ and $a,b=0,1,\dots,N$). \noindent {\bf Type II:} $N$ coefficients ${f^\alpha_{a-1\,a}}$ ($a=1,\dots,N$). \noindent {\bf Type III:} $N(N+1)/2$ coefficients $e^{\alpha}_{a,b}$ ($a<b$ and $a,b=0,1,\dots,N$), satisfying \begin{equation} \omega_{ab} e^{\alpha}_{a,b}=0\qquad \omega_{ab} (e^{\alpha}_{a,c}-e^{\alpha}_{b,c}) =0\qquad \omega_{bc} (e^{\alpha}_{a,b}-e^{\alpha}_{a,c}) =0\qquad a<b<c. \label{dc} \end{equation} \medskip This theorem expresses the results previously obtained in \cite{unitario} but in a different basis (we are using here a different set of Cartan generators), so that we use another notation for the extension coefficients. The extension coefficients are classified into types according to their behaviour under contraction. All type I coefficients correspond to central extensions which are trivial for all the unitary CK algebras, no matter how many coefficients $\omega_a$ are equal to zero, since they can be removed at once by means of the redefinitions \begin{equation} J_{ab}\to J_{ab} + h_{ab}^\alpha\Xi \qquad M_{ab}^\alpha\to M_{ab}^\alpha + g_{ab}^\alpha \Xi . \label{ddcc} \end{equation} Each type II coefficient ${f^\alpha_{a-1\,a}}$ gives rise to a non-trivial extension if $\omega_a= 0$ and to a trivial one otherwise. That is, these extensions become non-trivial through the contraction and they behave as pseudoextensions \cite{Aldaya,Azcarraga}. On the other hand, when a type III coefficient $e^{\alpha}_{a,b}$ is non-zero, the extension that it determines is always non-trivial, so that it cannot appear through a pseudoextension process. Therefore, the only extensions which can be non-trivial for a given algebra in the CK family $\overline{u}_\omega(N+1)$ are those appearing in the Lie brackets (\ref{da}).
We also recall that the dimension of the second cohomology group of a unitary CK algebra ${u}_\omega(N+1)$ with $n$ coefficients $\omega_a$ equal to zero is \begin{equation} {\mbox{dim}}\, (H^2({u}_\omega(N+1),{\mathbb R}) )=n + \frac {n(n+1)}{2} = \frac {n(n+3)}2 \label{dd} \end{equation} where the first term $n$ corresponds to the extension coefficients ${f^\alpha_{a-1\,a}}$ and the second term $\frac {n(n+1)}{2}$ to the extensions determined by $e^{\alpha}_{a,b}$. \subsect{Central extensions of the quaternionic unitary CK algebras} In the sequel we determine the non-trivial extension coefficients $\xi_{ij}$ for a generic centrally extended quaternionic unitary CK algebra $\overline{sq}_\omega(N+1)$ (\ref{de}) by solving the Jacobi identities (\ref{df}). First, we consider a generic extended unitary CK subalgebra, say $\overline{u}^1_\omega(N+1)$, spanned by the generators $\{J_{ab},M^1_{ab},E^1_{a},\Xi\}$ ($a,b=0,1,\dots,N;\, a<b$) with pure quaternionic index equal to 1. It is clear that the set of Jacobi identities involving only these generators leads to the results given in Theorem 3.1.
Hence, we find the commutation relations (\ref{ddaa}) and (\ref{da}) with extension coefficients denoted $g^1_{ab}$, $h^1_{ab}$, ${f^1_{ab}}$ and $e^1_{a,b}$; we apply the redefinitions (cf.\ (\ref{ddcc})) \begin{equation} J_{ab}\to J_{ab} + h^1_{ab}\Xi \qquad M_{ab}^1\to M_{ab}^1 + g_{ab}^1 \Xi \label{ka} \end{equation} and the Lie brackets of $\overline{u}^1_\omega(N+1)\subset \overline{sq}_\omega(N+1)$ turn out to be \begin{equation} \begin{array}{lll} [J_{ab},J_{ac}] = \omega_{ab}J_{bc} &\qquad [J_{ab},J_{bc}] =-J_{ac} &\qquad [J_{ac},J_{bc}] = \omega_{bc}J_{ab}\cr [M_{ab}^1,M_{ac}^1] =\omega_{ab} J_{bc} &\qquad [M_{ab}^1,M_{bc}^1] = J_{ac} &\qquad [M_{ac}^1,M_{bc}^1] =\omega_{bc} J_{ab} \cr [J_{ab},M_{ac}^1] = \omega_{ab}M_{bc}^1 &\qquad [J_{ab},M_{bc}^1] =-M_{ac}^1 &\qquad [J_{ac},M_{bc}^1] =-\omega_{bc}M_{ab}^1 \cr [M_{ab}^1,J_{ac}] =-\omega_{ab}M_{bc}^1 &\qquad [M_{ab}^1,J_{bc}] =-M_{ac}^1 &\qquad [M_{ac}^1,J_{bc}] = \omega_{bc}M_{ab}^1 \cr [J_{ab},J_{de}]= 0 &\qquad [M_{ab}^1,M_{de}^1] =0 &\qquad [J_{ab},M_{de}^1] =0 \cr \multicolumn{3}{l}{ [J_{ab},E_d^1] = ( \delta_{ad} -\delta_{bd})M_{ab}^1 \qquad [M_{ab}^1,E_d^1] = -( \delta_{ad} -\delta_{bd}) J_{ab}} \end{array} \label{kb} \end{equation} \begin{equation} [J_{ab},M_{ab}^1] = 2\omega_{ab}(E_{b}^1-E_{a}^1)+ f_{ab}^1 \Xi \qquad [E_{a}^1,E_b^1] = e_{a,b}^{1}\Xi\qquad a< b . \label{kc} \end{equation} The two remaining extended unitary CK subalgebras $\overline{u}^\lambda_\omega(N+1)\subset \overline{sq}_\omega(N+~1)$ with $\lambda=2,3$ are generated by $\{J_{ab}, M^\lambda_{ab},E^\lambda_{a},\Xi\}$ (hereafter we shall reserve $\lambda$ to stand exclusively for the quaternionic indices $\lambda=2,3$, whereas $\alpha,\beta,\gamma$ are allowed to take on any value $1,2,3$). 
The subalgebras $\overline{u}^\lambda_\omega(N+1)$ have generic extended Lie brackets (as (\ref{de})) except for the common orthogonal CK subalgebra ${so}_\omega(N+1)$ spanned by the generators $\{ J_{ab}\}$ which is non-extended and whose Lie brackets are already written in (\ref{kb}). For the two remaining unitary subalgebras, we have already used up the redefinition concerning the common generators in ${so}_\omega(N+1)$, so we cannot directly apply the results of Theorem 3.1 and we have to compute their corresponding Jacobi identities by taking into account that initially both contain a non-extended ${so}_\omega(N+1)$. As the equations so obtained are similar to those written in detail in \cite{unitario} we omit them and give the final result. The extension coefficients that appear are denoted $g^\lambda_{ab}$, $h^\lambda_{a\,a+1}$, ${f^\lambda_{ab}}$ and $e^\lambda_{a,b}$, for $\lambda=2,3$; the Lie brackets of $\overline{u}^\lambda_\omega(N+1)$ read \begin{equation} \begin{array}{ll} \multicolumn{2}{l}{[M_{ab}^\lambda ,M_{ac}^\lambda ] =\omega_{ab} J_{bc} \qquad [M_{ab}^\lambda ,M_{bc}^\lambda ] =J_{ac} \qquad [M_{ac}^\lambda ,M_{bc}^\lambda ] =\omega_{bc} J_{ab} } \cr [J_{ab},M_{ac}^\lambda ] =\omega_{ab}(M_{bc}^\lambda +g_{bc}^\lambda \Xi) &\qquad [M_{ab}^\lambda ,J_{ac}] =-\omega_{ab}(M_{bc}^\lambda + g_{bc}^\lambda \Xi) \cr [J_{ab},M_{bc}^\lambda ] =-(M_{ac}^\lambda + g_{ac}^\lambda \Xi) &\qquad [M_{ab}^\lambda ,J_{bc}] =-(M_{ac}^\lambda + g_{ac}^\lambda \Xi) \cr [J_{ac},M_{bc}^\lambda ] =-\omega_{bc}(M_{ab}^\lambda +g_{ab}^\lambda \Xi) &\qquad [M_{ac}^\lambda ,J_{bc}] =\omega_{bc}(M_{ab}^\lambda + g_{ab}^\lambda \Xi)\cr \multicolumn{2}{l}{[J_{ab},M_{de}^\lambda ] =0 \qquad \qquad [M_{ab}^\lambda ,M_{de}^\lambda ] =0}\cr \multicolumn{2}{l}{ [J_{ab},E_d^\lambda ] = ( \delta_{ad} -\delta_{bd})(M_{ab}^\lambda + g_{ab}^\lambda \Xi) }\cr \multicolumn{2}{l}{ [M_{ab}^\lambda ,E_d^\lambda ] = -( \delta_{ad} -\delta_{bd}) J_{ab} \qquad b>a+1}\cr
\multicolumn{2}{l}{ [M_{a\, a+1}^\lambda,E_d^\lambda ] = -( \delta_{ad} -\delta_{a+1\, d})( J_{a\, a+1} + h_{a\, a+1}^\lambda \Xi ) }\cr \end{array} \label{kd} \end{equation} \begin{equation} [J_{ab},M _{ab}^\lambda] = 2\omega_{ab}(E_b^\lambda - E_a^\lambda )+ f_{ab}^\lambda \Xi \qquad [E_a^\lambda ,E_b^\lambda ]=e_{a,b}^\lambda\Xi\qquad a< b . \label{ke} \end{equation} The coefficients ${f^\lambda_{ab}}$ and $e^\lambda_{a,b}$ ($\lambda=2,3$) are characterized by the theorem 3.1 (see (\ref{db}) and (\ref{dc})), while the extensions $h^\lambda_{a\,a+1}$ are subjected to the relations \begin{equation} \omega_ah^\lambda_{a\,a+1}=0\qquad \omega_{a+2}h^\lambda_{a\,a+1}=0 . \label{kf} \end{equation} Notice that now the coefficients $h^\lambda_{ab}$ with $b>a+1$ are zero (this is a direct consequence of the presence of the non-extended ${so}_\omega(N+1)$). We now apply the redefinitions given by \begin{equation} M_{ab}^\lambda\to M_{ab}^\lambda + g_{ab}^\lambda \Xi\qquad \lambda= 2,3 \label{kg} \end{equation} and a glance to (\ref{kd}) shows that the corresponding extensions are always trivial, so the extension coefficients $g_{ab}^\lambda$ are eliminated. 
At this point the complete set of Lie brackets of $\overline{sq}_\omega(N+1)$ turn out to be \begin{equation} \begin{array}{lll} [J_{ab},J_{ac}] = \omega_{ab}J_{bc} &\qquad [J_{ab},J_{bc}] =-J_{ac} &\qquad [J_{ac},J_{bc}] = \omega_{bc}J_{ab}\cr [M_{ab}^\alpha,M_{ac}^\alpha] =\omega_{ab} J_{bc} &\qquad [M_{ab}^\alpha,M_{bc}^\alpha] = J_{ac} &\qquad [M_{ac}^\alpha,M_{bc}^\alpha] =\omega_{bc} J_{ab} \cr [J_{ab},M_{ac}^\alpha] = \omega_{ab}M_{bc}^\alpha &\qquad [J_{ab},M_{bc}^\alpha] =-M_{ac}^\alpha &\qquad [J_{ac},M_{bc}^\alpha] =-\omega_{bc}M_{ab}^\alpha \cr [M_{ab}^\alpha,J_{ac}] =-\omega_{ab}M_{bc}^\alpha &\qquad [M_{ab}^\alpha,J_{bc}] =-M_{ac}^\alpha &\qquad [M_{ac}^\alpha,J_{bc}] = \omega_{bc}M_{ab}^\alpha \cr [J_{ab},J_{de}]= 0 &\qquad [M_{ab}^\alpha,M_{de}^\alpha] =0 &\qquad [J_{ab},M_{de}^\alpha] =0 \cr \multicolumn{3}{l}{ [J_{ab},E_d^\alpha] = ( \delta_{ad} -\delta_{bd})M_{ab}^\alpha \qquad [M_{ab}^1,E_d^1] = -( \delta_{ad} -\delta_{bd}) J_{ab}}\cr \multicolumn{3}{l}{ [M_{ab}^\lambda,E_d^\lambda] = -( \delta_{ad} -\delta_{bd}) J_{ab}\qquad b>a+1} \end{array} \label{dg} \end{equation} \begin{equation} [M_{a\, a+1}^\lambda,E_d^\lambda ] = -( \delta_{ad} -\delta_{a+1\, d})( J_{a\, a+1} + h_{a\, a+1}^\lambda \Xi ) \label{ddgg} \end{equation} \begin{equation} [J_{ab},M_{ab}^\alpha] = 2\omega_{ab}(E_{b}^\alpha-E_{a}^\alpha)+ f_{ab}^\alpha \Xi \qquad [E_{a}^\alpha,E_b^\alpha] = e_{a,b}^{\alpha}\Xi\qquad a< b \label{dh} \end{equation} \begin{equation} \begin{array}{l} [M_{ab}^\alpha,M_{ac}^\beta] = \omega_{ab}\varepsilon_{\alpha\beta\gamma}M_{bc}^\gamma + \varepsilon_{\alpha\beta\gamma}m_{ab,ac}^{\alpha,\beta}\Xi \cr [M_{ab}^\alpha,M_{bc}^\beta] = \varepsilon_{\alpha\beta\gamma}M_{ac}^\gamma + \varepsilon_{\alpha\beta\gamma}m_{ab,bc}^{\alpha,\beta}\Xi \cr [M_{ac}^\alpha,M_{bc}^\beta] = \omega_{bc}\varepsilon_{\alpha\beta\gamma}M_{ab}^\gamma+ \varepsilon_{\alpha\beta\gamma}m_{ac,bc}^{\alpha,\beta}\Xi \cr [M_{ab}^\alpha,M_{de}^\beta] = 
\varepsilon_{\alpha\beta\gamma}m_{ab,de}^{\alpha,\beta}\Xi \cr [M_{ab}^\alpha,M_{ab}^\beta] = 2\omega_{ab}\varepsilon_{\alpha\beta\gamma}(E_a^\gamma + E_b^\gamma)+ \varepsilon_{\alpha\beta\gamma}m_{ab}^{\alpha,\beta}\Xi \cr [M_{ab}^\alpha,E_{d}^\beta] =(\delta_{ad} +\delta_{bd})\varepsilon_{\alpha\beta\gamma}M_{ab}^\gamma+ \varepsilon_{\alpha\beta\gamma}me_{ab,d}^{\alpha,\beta}\Xi \cr [E_{a}^\alpha,E_{b}^\beta] =2\delta_{ab}\varepsilon_{\alpha\beta\gamma}E_{a}^\gamma + \varepsilon_{\alpha\beta\gamma}e_{a,b}^{\alpha,\beta}\Xi . \end{array} \label{di} \end{equation} Therefore the Lie brackets (\ref{dg}) are non-extended, the extension coefficients $h_{a\, a+1}^\lambda$ appearing in (\ref{ddgg}) satisfy (\ref{kf}), the coefficients of the commutators (\ref{dh}) are characterized by the theorem 3.1, and the extension coefficients in the commutators (\ref{di}) are still generic, the redefinitions (\ref{ka}) and (\ref{kg}) having been already incorporated in the brackets (\ref{di}). The list of all remaining extension coefficients is \begin{equation} h_{a\, a+1}^\lambda \qquad f_{ab}^\alpha\qquad e_{a,b}^{\alpha}\qquad m_{ab,de}^{\alpha,\beta}\qquad m_{ab}^{\alpha,\beta}\qquad me_{ab,d}^{\alpha,\beta}\qquad e_{a,b}^{\alpha,\beta} \label{dj} \end{equation} where the two quaternionic indices $\alpha$, $\beta$ are always different. We sort the coefficients $m_{ab,de}^{\alpha,\beta}$, $me_{ab,d}^{\alpha,\beta}$ and $e_{a,b}^{\alpha,\beta}$ into several subsets as follows: \noindent $\bullet$ Coefficients $m_{ab,de}^{\alpha,\beta}$ involving \emph{four} different indices $a,b,d,e$. If we rename and sort these four different indices as $a<b<c<d$ these coefficients are \begin{equation} m_{ab,cd}^{\alpha,\beta}\qquad m_{ac,bd}^{\alpha,\beta}\qquad m_{ad,bc}^{\alpha,\beta} . \label{dk} \end{equation} \noindent $\bullet$ Coefficients $m_{ab,de}^{\alpha,\beta}$ involving \emph{three} different indices. 
If we write the indices as $a<b<c$ the coefficients are \begin{equation} m_{ab,ac}^{\alpha,\beta}\qquad m_{ab,bc}^{\alpha,\beta}\qquad m_{ac,bc}^{\alpha,\beta} . \label{dl} \end{equation} \noindent $\bullet$ Coefficients $me_{ab,d}^{\alpha,\beta}$ with \emph{two} different indices $a<b$ and a third one $d\in \{a,b\}$: \begin{equation} me_{ab,a}^{\alpha,\beta}\qquad me_{ab,b}^{\alpha,\beta} . \label{dm} \end{equation} \noindent $\bullet$ Coefficients $me_{ab,d}^{\alpha,\beta}$ with \emph{two} different indices $a<b$ and a third index $d\notin \{a,b\}$: \begin{equation} me_{ab,d}^{\alpha,\beta} . \label{dmm} \end{equation} \noindent $\bullet$ Coefficients $e_{a,b}^{\alpha,\beta}$ with \emph{two} different indices $a<b$: \begin{equation} e_{a,b}^{\alpha,\beta} . \label{dn} \end{equation} \noindent $\bullet$ Coefficients $e_{a,b}^{\alpha,\beta}$ with a \emph{single} index $a$: \begin{equation} e_{a,a}^{\alpha,\beta} . \label{do} \end{equation} In the sequel we proceed to compute the Jacobi identities involving the above coefficients; the results obtained in any equation will be automatically introduced in any further equation, so the order we consider for enforcing the Jacobi identities is an integral part of the derivation, and should be respected. We denote the Jacobi identity (\ref{df}) of the generators $X_i$, $X_j$ and $X_l$ by $\{X_i, X_j ,X_l\}$. 
The following equations imply the vanishing of some coefficients: \begin{equation} \begin{array}{ll} \{M_{a\, a+1}^3,E_{a}^1,E_{a+1}^2\}: &\quad h_{a\, a+1}^2=0 \cr \{M_{a\, a+1}^2,E_{a}^1,E_{a+1}^3\}: &\quad h_{a\, a+1}^3=0 \end{array} \label{eebb} \end{equation} \begin{equation} \{E_{a}^\gamma,E_{a}^\beta,E_{b}^\alpha\}:\quad e_{a,b}^{\alpha}=0 \label{ea} \end{equation} \begin{equation} \begin{array}{ll} \{M_{ab}^\alpha,M_{ac}^\alpha,E_{c}^\gamma\}: &\quad m_{ab,ac}^{\alpha,\beta}=0\cr \{M_{ab}^\alpha,M_{bc}^\alpha,E_{c}^\gamma\}: &\quad m_{ab,bc}^{\alpha,\beta}=0\cr \{M_{ac}^\alpha,M_{bc}^\alpha,E_{b}^\gamma\}: &\quad m_{ac,bc}^{\alpha,\beta}=0 \end{array} \label{eb} \end{equation} \begin{equation} \begin{array}{ll} \{J_{ab} ,M_{cd}^\beta,E_{b}^\alpha\}: &\quad m_{ab,cd}^{\alpha,\beta}=0\cr \{J_{bc} ,M_{ad}^\alpha,E_{c}^\beta\}: &\quad m_{ad,bc}^{\alpha,\beta}=0\cr \{J_{ab} ,M_{bc}^\alpha,M_{bd}^\beta\}: &\quad m_{ac,bd}^{\alpha,\beta}-m_{ad,bc}^{\beta,\alpha}=0 \end{array} \label{ec} \end{equation} \begin{equation} \begin{array}{ll} \{M_{ab}^\beta,E_{a}^\beta,E_{b}^\gamma\}: &\quad me_{ab,a}^{\alpha,\beta}=0\cr \{M_{ab}^\beta,E_{b}^\beta,E_{a}^\gamma\}: &\quad me_{ab,b}^{\alpha,\beta}=0 \end{array} \label{ed} \end{equation} \begin{equation} \{M_{ab}^\gamma,E_{a}^\beta,E_{d}^\beta\}: \qquad me_{ab,d}^{\alpha,\beta}=0 \label{ee} \end{equation} \begin{equation} \{E_{a}^\alpha,E_{b}^\alpha,E_{b}^\gamma\}: \qquad e_{a,b}^{\alpha,\beta}=0 \label{ef} \end{equation} so that the only remaining coefficients are $f_{ab}^\alpha$, $m_{ab}^{\alpha,\beta}$ and $e_{a,a}^{\alpha,\beta}$. 
The Jacobi identities \begin{equation} \begin{array}{ll} \{J_{ab},M_{ab}^\alpha,E_{a}^\beta\}: &\quad 2\omega_{ab}e_{a,a}^{\alpha,\beta} -m_{ab}^{\alpha,\beta}+f_{ab}^\gamma =0\cr \{J_{ab},M_{ab}^\alpha,E_{b}^\beta\}: &\quad 2\omega_{ab}e_{b,b}^{\alpha,\beta} -m_{ab}^{\alpha,\beta}-f_{ab}^\gamma =0 \end{array} \label{eg} \end{equation} allows us to express the coefficients $f_{ab}^\alpha$, $m_{ab}^{\alpha,\beta}$ in terms of the $e_{a,a}^{\alpha,\beta}$ as follows \begin{equation} \begin{array}{l} f_{ab}^\gamma=\omega_{ab}(e_{b,b}^{\alpha,\beta}-e_{a,a}^{\alpha,\beta})\cr m_{ab}^{\alpha,\beta}=\omega_{ab}(e_{b,b}^{\alpha,\beta}+e_{a,a}^{\alpha,\beta}) . \end{array} \label{eh} \end{equation} Notice that the first equation is consistent with the relation (\ref{db}). Hence the only Lie brackets of $\overline{sq}_\omega(N+1)$ (\ref{dg})--(\ref{di}) which still involve extension coefficients are \begin{equation} \begin{array}{l} [J_{ab},M_{ab}^\gamma] = 2\omega_{ab}\left\{ (E_{b}^\gamma + \frac 12 e_{b,b}^{\alpha,\beta} \Xi) - (E_{a}^\gamma + \frac 12 e_{a,a}^{\alpha,\beta} \Xi)\right\} \cr [M_{ab}^\alpha,M_{ab}^\beta] = 2\omega_{ab}\varepsilon_{\alpha\beta\gamma}\left\{ (E_a^\gamma +\frac 12 e_{a,a}^{\alpha,\beta} \Xi) +( E_b^\gamma+ \frac 12 e_{b,b}^{\alpha,\beta} \Xi)\right\} \cr [E_{a}^\alpha,E_{a}^\beta] =2\varepsilon_{\alpha\beta\gamma}(E_{a}^\gamma + \frac 12 e_{a,a}^{\alpha,\beta} \Xi ). \end{array} \label{ei} \end{equation} These equations clearly suggest to introduce the redefinition \begin{equation} E_{a}^\gamma \to E_{a}^\gamma + \frac 12 e_{a,a}^{\alpha,\beta} \Xi \label{ej} \end{equation} which explicitly shows the triviality of all the extensions determined by the coefficients $e_{a,a}^{\alpha,\beta}$ (and consequently, by all the $f_{ab}^\alpha$ and $m_{ab}^{\alpha,\beta}$). 
Therefore it is not necessary to compute more Jacobi identities and we can conclude that the most general central extension $\overline{sq}_\omega(N+1)$ of any algebra in this family is always trivial. This result can be summed up in the following statement: \medskip \noindent {\bf Theorem 3.2.} \noindent The second cohomology group $H^2({sq}_\omega(N+1),{\mathbb R})$ of any Lie algebra belonging to the quaternionic unitary CK family is always trivial, for any $N$ and for any values of the set of constants $\omega_1, \omega_2, \dots, \omega_N$: \begin{equation} {\mbox{dim}}\, (H^2({sq}_\omega(N+1),{\mathbb R}) )=0 . \label{ek} \end{equation} \sect{Concluding remarks} This paper completes the study of cohomology of the quasi-simple or CK Lie algebras in the three main series (orthogonal, unitary and quaternionic unitary), as associated to antihermitian matrices over ${\mathbb R}, {\mathbb C}$ or ${\mathbb H}$. In contrast to the quasi-orthogonal or quasi-unitary cases, where the dimension of the second cohomology group of a generic algebra in the CK family ranges between $0$ for the simple algebras and a maximum positive value for the most contracted algebra (with all $\omega_a=0$), all the central extensions of any of the algebras in the quaternionic quasi-unitary family are always trivial, even for the most contracted algebra. Therefore, of the three types of extensions found in the quasi-orthogonal or quasi-unitary cases, only the first type (extensions which are trivial for all the algebras in the family) is present here. However, we should remark on the suitability of a CK approach to the study of the central extensions of a complete family, because a case-by-case study (for any given algebra in the family) would be no easier than the general analysis we have performed.
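For the lowest-dimensional case $N=1$, the triviality of $H^2$ can be cross-checked by brute force: extract the structure constants of $sq_\omega(2)$ from the faithful realization (\ref{cf}), solve the two-cocycle conditions (\ref{df}) as a linear system, and subtract the dimension of the space of two-coboundaries. The Python sketch below is only such an illustration, not part of the paper's derivation (the $2\times 2$ complex realization of the quaternion units is an assumption of the sketch); it also exhibits the contrast with the contracted unitary subalgebra $u^1_{\omega_1=0}(2)$, for which (\ref{dd}) with $n=1$ gives $\mbox{dim}\, H^2 = 2$.

```python
import itertools
import numpy as np

# 2x2 complex realization of the quaternion units (assumption of the sketch).
ONE = np.eye(2, dtype=complex)
IQ = [np.array([[1j, 0], [0, -1j]]),
      np.array([[0, 1], [-1, 0]], dtype=complex),
      np.array([[0, 1j], [1j, 0]])]

def e(a, b):
    m = np.zeros((2, 2))
    m[a, b] = 1.0
    return m

def sq2_basis(omega):
    """The ten generators (cf) of sq_omega(2): J_01, M^alpha_01, E^alpha_a."""
    X = [np.kron(-omega * e(0, 1) + e(1, 0), ONE)]
    X += [np.kron(omega * e(0, 1) + e(1, 0), iq) for iq in IQ]
    X += [np.kron(e(a, a), iq) for a in (0, 1) for iq in IQ]
    return X

def u2_basis(omega):
    """The four generators J_01, M^1_01, E^1_0, E^1_1 of the subalgebra u^1_omega(2)."""
    X = sq2_basis(omega)
    return [X[0], X[1], X[4], X[7]]

def structure_constants(X):
    """C_ij^k with [X_i, X_j] = sum_k C_ij^k X_k, read off the faithful realization."""
    r = len(X)
    B = np.column_stack([x.ravel() for x in X])
    C = np.zeros((r, r, r))
    for i, j in itertools.product(range(r), repeat=2):
        c, *_ = np.linalg.lstsq(B, (X[i] @ X[j] - X[j] @ X[i]).ravel(), rcond=None)
        C[i, j] = c.real
    return C

def dim_H2(C):
    """dim H^2 = dim(two-cocycles) - dim(two-coboundaries), from condition (df)."""
    r = C.shape[0]
    pairs = [(i, j) for i in range(r) for j in range(i + 1, r)]
    idx = {p: n for n, p in enumerate(pairs)}
    def add(row, i, j, v):            # coefficient of xi_ij, using antisymmetry
        if i < j:
            row[idx[i, j]] += v
        elif i > j:
            row[idx[j, i]] -= v
    rows = []
    for i, j, l in itertools.combinations(range(r), 3):
        row = np.zeros(len(pairs))
        for k in range(r):
            add(row, k, l, C[i, j, k])
            add(row, k, i, C[j, l, k])
            add(row, k, j, C[l, i, k])
        rows.append(row)
    dim_Z2 = len(pairs) - np.linalg.matrix_rank(np.array(rows))
    D = np.array([C[i, j] for (i, j) in pairs])   # coboundary map mu -> delta mu
    return dim_Z2 - np.linalg.matrix_rank(D)

# Theorem 3.2: H^2 is trivial for every sq_omega(2), contracted or not ...
for omega in (1.0, -1.0, 0.0):
    assert dim_H2(structure_constants(sq2_basis(omega))) == 0
# ... while for the unitary subalgebra u^1_omega(2), formula (dd) gives
# dim H^2 = n(n+3)/2 with n the number of vanishing omegas: here 0 or 2.
assert dim_H2(structure_constants(u2_basis(1.0))) == 0
assert dim_H2(structure_constants(u2_basis(0.0))) == 2
```

The contrast in the last two lines reproduces, in the smallest example, the difference between the quaternionic family (always trivial cohomology) and the unitary family (non-trivial cohomology after contraction).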
In addition to these three \emph{main} families of CK algebras, whose simple members $so(p, q), su(p, q), sq(p,q)$ can be realised as antihermitian matrices over either ${\mathbb R}$, ${\mathbb C}$ or ${\mathbb H}$, there are other CK families. In the $C_{N+1}$ Cartan series, the remaining real Lie algebra is the real symplectic $sp(2(N+1), {\mathbb R})$, which can be interpreted in terms of CK families either as the single simple member of its own CK family $sp_{\omega_1, \dots, \omega_{N}}(2(N+1), {\mathbb R})$, or alternatively, and closer to the interpretation in this paper, as the unitary family $u_{\omega_1, \dots, \omega_{N}}((N+1), {\mathbb H}')$ over the algebra of the split quaternions ${\mathbb H}'$ (a pseudo-orthogonal variant of quaternions, where $i_1, i_2, i_3$ still anticommute, but their squares are $i_1^2=-1, i_2^2=1, i_3^2=1$; this is not a division algebra). The cohomology properties of algebras in this CK family could be studied using an approach similar to the one followed in this paper for the quaternionic unitary CK algebras. This study, as well as the study of the central extensions of the CK series of the real Lie algebras $su^*(2r)\approx sl(r, {\mathbb H}), so^*(2N), sl(N+1, {\mathbb R})\approx su(N+1, {\mathbb C}')$ not included in the three main `signature' series, is worthy of separate consideration. \bigskip\bigskip \noindent {\Large{{\bf Acknowledgments}}} \bigskip This work was partially supported by DGICYT (project PB94--1115) from the Ministerio de Educaci\'on y Cultura de Espa\~na and by Junta de Castilla y Le\'on (Projects CO1/396 and CO2/297). \bigskip\bigskip
\section{Introduction} In recent years the PER community has placed an increased emphasis on improving student learning in upper-division physics courses (e.g., \cite{Manogue01,Singh08}). Assessments are a useful tool for driving and validating course transformations undertaken to achieve this goal. They can be used to identify persistent student difficulties to inform instruction. Furthermore, they provide a measure of student performance, which helps to evaluate the effectiveness of different pedagogies \cite{Pollock12a}. In introductory physics, standardized assessments like the FCI \cite{Hestenes92}, consisting largely of conceptual multiple-choice questions, have allowed for reliable measurement of student performance across universities and over time (e.g., \cite{Hake98}). They have also been used for identifying specific difficulties students have employing concepts (e.g., \cite{Redish99}). Upper-division courses, however, utilize more sophisticated mathematics and require students to use a broader set of skills and practices to solve problems. While open-ended questions are better suited for assessing students' ability to approach and to solve problems in upper-division, using them to measure student performance reliably so that meaningful comparisons can be made requires significant training for graders \cite{Chasteen12}. One solution that would allow open-ended assessments to fulfill both roles is the use of separate rubrics: a grading rubric that untrained graders can use to score students' answers and measure performance; and a difficulties rubric for trained researchers to unpack common student difficulties. We have developed such rubrics for the Colorado Classical Mechanics/Math Methods Instrument (CCMI) \cite{Caballero13}. This paper presents the design and a description of the grading and difficulties rubrics. Outcomes from these rubrics will be the focus of a longer publication. 
To describe the development of the rubrics, we chose to focus on just one question from the CCMI that asks students to construct a differential equation. We will discuss whether the grading rubric for this question gives reliable scores when used by untrained graders, and also what information can be obtained from the difficulties rubric. \section{Data} \begin{figure*} [t!] \small \fbox{ \begin{minipage}{6.5in} {\bf{Learning Goal Evaluated: Students should be able to use Newton's laws to translate a given physical situation into a differential equation}}\\ A particle (mass, {\it{m}}) is confined to move on the {\it{x}}-axis between two objects that attract it. The particle does not leave the region between the two attractive objects. \begin{itemize} \item One object is located at {\it{x}} = 0, and the attractive force between the object and the particle is proportional to the square of the distance between them with proportionality constant {\it{c}}. \item The second object is located at {\it{x}} = 10, and the attractive force between the object and the particle is inversely proportional to the distance between them with proportionality constant {\it{k}}. \end{itemize} Write down a differential equation that describes the position of the particle as a function of time, {\it{x(t)}}. \end{minipage}} \caption{CCMI question designed to assess how well students can construct the equations of motion from a description of a physical situation. Responses are open-ended with students providing as much or as little information as they see fit.} \label{fig:DifferentialQ} \end{figure*} Course transformation of a Classical Mechanics/Math Methods course at The University of Colorado (CU) \cite{Pollock12} was initiated with the development of broad course-scale learning goals outlining what instructors wanted students to be able to do at the end of the course (e.g. 
{\it{translate a physical description to a mathematical equation}}), and topic-scale learning goals that blended concepts and skills (e.g. {\it{students should be able to use Newton's laws to translate a given physical situation into a differential equation}}) \cite{LearningGoals}. The CCMI is an open-ended instrument designed to assess a subset of these topic-scale learning goals \cite{Caballero13}. As such, the instrument highlights specific areas where students struggle. Moreover, as most course-scale goals are incorporated in at least one question, it also provides an indication of how well the course is meeting its overall goals. Figure \ref{fig:DifferentialQ} shows the question designed to assess the learning goals mentioned above. Conceptually the question examines students' understanding of the relationship between force and position while also testing how well students are able to translate the physical description into a mathematical expression. The data presented in this paper was collected {\it{in situ}} and was part of the validation process of the CCMI. The CCMI was administered at three medium to large enrollment universities (courses with between 25 and 75 students) with physics majors. Students were given the assessment as a paper-based 50 minute in-class test. Students were not told about it in advance and their performance did not count towards their final grade, though instructors explained that they valued the assessment. Think-aloud interviews were conducted with six students at CU. Students worked through all the questions on the CCMI using Livescribe pens so that their written work could be synced with their speech. \section{Rubric Design} The development of both the grading and difficulties rubrics is grounded in student work. Analysis of the first data sets focused on examining what students did. 
Students' solutions and the combination of errors they made were used to infer where their approach broke down and where they experienced specific difficulties. While we cannot know how students are reasoning from looking at their written work, coordinated interviews and observations have given us a stronger idea. Patterns in students' answers for each question were identified with the intent of creating categories for a grading rubric similar to that of the Colorado Upper-division Electrostatics (CUE) Diagnostic \cite{Chasteen12}. The CUE grading rubric explicitly defines what points should be assigned to each question for a variety of student responses, while also categorizing student difficulties. For graders to use this rubric consistently, significant training is required. Initial feedback from instructors at CU and elsewhere was that they wanted to be able to use the CCMI to quickly score students' answers to compare from semester to semester and against similar implementations at different institutions. As a result, we decided that the grading rubric should use a mastery approach (described below), and that a separate difficulties rubric be designed to provide an organization of students' approaches and errors to enable a faster yet meaningful interpretation of students' answers for trained users. \subsection{Grading Rubric} The grading rubric is structured based on a mastery approach, where only the final answer is considered and points are taken away for errors in that answer. This means that graders need only attend to one part of the students' answers and can score based on obvious features. In order to determine the points that should be allocated to each question on the CCMI a group of faculty at CU were asked to rank the questions based on their perceived importance of the learning goal(s) that the questions assessed. 
Similarly, once the categories of errors within each question were determined, faculty ranked the severity of those errors so that point deductions could be assigned. It is not important for all graders to agree with our point scheme but rather that they achieve consistent scores using the rubric. The grading rubric for the differential equation question (Figure \ref{fig:DifferentialQ}) is shown in Table \ref{tab:GradingRubric}. A correct answer is worth four points and the rubric describes how points are deducted for different errors, providing examples where necessary (it does not list all the possibilities). The illustrative errors are those commonly seen in students' answers. Error types are weighted based on the significance of the error as determined by faculty at CU. It is possible that an answer may have the correct structure but due to the occurrence of multiple mistakes receive a score of zero. However, this has not been the case for the data that we have scored thus far (N=123). Students who approach the problem correctly, i.e. construct force expressions from the description given, but do not write a differential equation receive no credit for their answer. To check inter-rater reliability for the grading rubric, a random sample of 25 student answers was scored by three independent untrained graders and their scores were compared. The graders' scores agreed for 24 of the 25 student answers. Cohen's kappa was calculated to be 0.95, which indicates `almost perfect' agreement \cite{Landis77}. This suggests that the rubric successfully produces consistent scores when used by untrained graders. \begin{table} \caption{Grading rubric for the CCMI question shown in Figure \ref{fig:DifferentialQ}.
It outlines what points should be taken away for the described errors and provides illustrative examples.} \footnotesize \begin{tabular}{L{1.3cm}L{1.5cm}L{3.8cm}} \toprule {\bf{Points}} & {\bf{Error Category}} & {\bf{Description/Examples}} \\ \hline Full credit (4) & Correct & $m\ddot{x}=-cx^2 + \frac{k}{10-x}$ or an equivalent form\\ \hline Minus 0.5 point & Neglected mass & Mass does not appear in the differential equation\\ \hline Minus 1 point each (1 max) & Neglected coefficients & Constants (c,k) do not appear in the differential equation\\ \hline Minus 1 point each (2 max) & Distance dependence error & Any errors in the distance dependence (e.g. $x$ instead of $x^2$ in the first term, 1/$x$ instead of 1/($10-x$) in the second term)\\ \hline Minus 1 point each (2 max) & Sign error & Sign error in front of either term (e.g. negative sign instead of positive sign for the second term as written above)\\ \hline No credit (0) & Incorrect & No credit for any other responses \newline $\bullet$ First order equation in $x$ \newline $\bullet$ Differential equation equal to a constant \newline $\bullet$ Not a differential equation \\ \hline \end{tabular} \label{tab:GradingRubric} \end{table} \subsection{Difficulties Rubric} The difficulties rubric seeks to capture information about students' difficulties that is lost in the grading rubric due to its mastery approach. The grading rubric for the differential equation question (Table \ref{tab:GradingRubric}) only accounts for two categories observed in students' answers: those with the correct differential equation; and those with a final expression of the correct structure but with errors. Some examples of other expression types seen in students' answers are listed together in the ``no credit'' section of the grading rubric. 
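The inter-rater agreement statistic used above can be computed with a short script. A minimal sketch of Cohen's kappa for two raters follows; the score lists are hypothetical examples (two graders agreeing on 24 of 25 discrete rubric scores), not the actual study data:

```python
from collections import Counter

def cohen_kappa(scores_a, scores_b):
    """Cohen's kappa for two raters assigning categorical scores."""
    assert len(scores_a) == len(scores_b)
    n = len(scores_a)
    observed = sum(x == y for x, y in zip(scores_a, scores_b)) / n
    freq_a, freq_b = Counter(scores_a), Counter(scores_b)
    # Chance agreement: both raters independently pick the same category.
    cats = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in cats) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical scores for 25 answers; the two raters disagree on one answer.
a = [4] * 10 + [3] * 8 + [2] * 4 + [0] * 3
b = [4] * 10 + [3] * 8 + [2] * 3 + [3] + [0] * 3
print(round(cohen_kappa(a, b), 2))
```

With these made-up marginals the kappa comes out slightly below the paper's 0.95; the exact value depends on the distribution of scores, not only on the 24/25 raw agreement.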
The difficulties rubric is inclusive of these other answer types where: the final expression is constructed from force terms using descriptions given but is not a second order differential equation; there is no final expression despite constructing force terms from descriptions given; the final expression is a second order differential equation that does not use the description given. Although students who do not write a second order differential equation receive a zero score for their answers, some had the ability to represent the force descriptions in a mathematical expression(s), but could not translate that to a differential equation correctly. Looking more deeply at all categories of students' responses in this way helps us to pinpoint where students' difficulties lie (in this case moving from force to a differential equation), and also provides us with a broader sense of how students solve problems. To help us classify students' approaches and group the large variety of errors seen in students' answers, a task analysis \cite{Catrambone1996} was carried out for the question. The necessary steps for constructing the differential equation for this question are outlined here: \begin{itemize} \item Visualize the scenario presented in the problem \item Determine the distance the particle is from each object in terms of $x$ \item Write expressions for the magnitude of the forces in terms of $x$ \item Recognize that the forces due to each object are in opposite directions and determine the sign for each force \item Write an expression for the net force on the particle \item Recognize that the force can be written as $m\ddot x$, making the expression for the net force a differential equation \end{itemize} \begin{table}[t!] 
\caption{Difficulties rubric designed to categorize the errors in students' responses to the question shown in Figure \ref{fig:DifferentialQ} and help infer difficulties based on their written work} \footnotesize \begin{tabular}{L{1.7cm}L{3.8cm}L{6cm}L{2.8cm}} \toprule {\bf{Category}} & {\bf{Claim}} & {\bf{Evidence}} & {\bf{Example}} \\ \hline \multirow{3}{1.7cm}{\bf{Distance}} & Does not distinguish between distances & Uses same distance value for each force & $m \ddot x =-cx^2+k/x$ \\ & Distinguishes between distances incorrectly & Uses different variables for each distance \underline{or} determines the second distance incorrectly using the position of the second object & $m \ddot x =-cx^2+k/(x-10)$ \underline{or} $m \ddot x =-cx_1^2+k/x_2$ \\ \hline {\bf{Force terms}} & Misrepresents the force description in the mathematical expression/s & Error in the way distance is used in either or both force terms \underline{and/or} proportionality constants not present in force terms & $m \ddot x =-1/x+1/(10-x)$ \\ \hline {\bf{Sign}} & Determines or represents the direction of the forces incorrectly & No minus sign in the final expression \underline{or} a minus sign in front of the wrong term in the final expression & $m \ddot x = cx^2-k/(10-x)$ \\ \hline \multirow{2}{1.7cm}{\bf{Structure}} & \multirow{3}{3.8cm}{Difficulty representing the force expression as a differential equation} & Mass does not appear in the final expression & $ \ddot x =-cx^2+k/(10-x)$ \\ & & One side of the expression is written in terms of $x$ and the other in terms of another variable & $m \ddot x =-cr^2+k/(10-r)$ \\ & & Final expression is a first order differential equation \underline{or} a function & $dx/dt= cx^2+kx$ \underline{or} $x(t)= cx^2+k/x$ \\ & & No final expression & $F_{1}=-cx^2$, $F_2=k/(10-x)$\\ \hline \multirow{3}{1.7cm}{\bf{Other approaches}} & Recall a differential equation & Write a generic expression not using the description given in the question & $A\ddot{x}+B\dot{x}=0$ \\ &
Use terms given in description to write an expression & Terms $c$, $k$, and $x$ appear in the expression in some form & $x(t)=\dot{x}(k-c)$ \\ \bottomrule \label{tab:DifficultiesRubric} \end{tabular} \end{table} Although all these steps need to be taken to construct the differential equation they need not necessarily be completed in this order. For example, students may start writing the net force without writing an expression for each individual force first, but in doing so they must consider each individual force. The think-aloud interviews confirm the variance in the order that students consider these steps. Due to the nature of this problem, students' written answers often only contain a simple diagram showing the position of the particle and the objects, two separate force expressions and a final expression (differential equation or otherwise). In some cases, students wrote only the final expression. Therefore, for this particular question we make most of our inferences about student difficulties based on the final expression (for other questions on the CCMI we were able to obtain more information from other parts of students' answers). The difficulties rubric for the differential equation question is shown in Table \ref{tab:DifficultiesRubric}. The rubric categorizes student difficulties within certain steps of the task analysis, makes claims within those categories, and provides the evidence from students' written work that we are using to make those claims, along with examples for each. For the most part, the final expression (or lack thereof) was indicative of specific student difficulties. One example of this is students who use the same value for distance, in most cases $x$, for both forces. These students failed to distinguish between the distance the particle is from each object. This error was even seen frequently in answers where students had drawn a diagram to indicate the position of the objects and the particle. 
However, for categories like the sign of the forces, the claim is intentionally broad because an error in the sign of the forces is not enough to infer whether students incorrectly determined the direction of the forces or whether they made a mistake representing that direction. \section{Conclusions and Future Work} To separate the roles of open-ended assessment, we have designed a grading rubric that can be used to reliably score student answers, and a difficulties rubric that can be used to break down students' answers and categorize errors such that specific claims can be made about areas where students are struggling. Because the difficulties rubric was designed based on student written work, there are cases where we find it difficult to pinpoint specific student difficulties from an error in their answers. For example, it is possible that, for the sign category in Table \ref{tab:DifficultiesRubric}, answers with no minus sign in the final expression reflect students simply not having considered the direction of the forces. However, to verify claims like these, more written data and targeted interviews are required. Future work will focus on refining and verifying the claims made in the difficulties rubric, and will investigate the outcomes of applying both rubrics to student work. \\ The authors would like to thank the members of PERL@MSU and PER@C for their useful comments and suggestions at various stages of this work. We would especially like to thank Steven Pollock for his valuable feedback on this manuscript. \bibliographystyle{aipproc}
\section{Introduction} The neutral hydrogen 21cm line is one of the most promising tools to study the observable universe. Tomographic observation of the redshifted 21cm line could be used to reveal the evolution of the intergalactic medium (IGM) throughout the Epoch of Reionization (EoR) \citep{1997ApJ...475..429M,2003ApJ...596....1C}, to map out the large scale structure and constrain the cosmological parameters including the dark energy equation of state \citep{2008PhRvL.100i1303C,2008PhRvD..78b3529M}. In principle, it could even probe the cosmic dark age \citep{Loeb:2003ya}. Compared with the cosmic microwave background (CMB), which images the Universe at the last scattering surface during the epoch of recombination, one advantage of the redshifted 21cm tomography signal as a cosmological probe is that it provides a three dimensional (3D) map of the Universe at different redshifts, giving more information and also showing how the Universe evolved. In recent years, a number of experiments have set the 21cm observation as one of their main scientific goals, for example experiments with existing telescopes such as the GBT (the Green Bank Telescope; \citealt{2010Natur.466..463C, 2013ApJ...763L..20M, 2013MNRAS.434L..46S}), GMRT (the Giant Metrewave Radio Telescope; \citealt{2011MNRAS.413.1174P}), and telescopes newly built or being built, such as the 21CMA \citep{2016ApJ...832..190Z}, Tianlai (\citealt{2012IJMPS..12..256C}), BINGO (BAO from Integrated Neutral Gas Observations; \citealt{2013MNRAS.434.1239B}), LOFAR (the LOw-Frequency Array; \citealt{2013A&A...556A...2V}), MWA (the Murchison Widefield Array; \citealt{2013PASA...30....7T}), PAPER (the Precision Array for Probing the Epoch of Re-ionization; \citealt{2010AJ....139.1468P}), CHIME (the Canadian Hydrogen Intensity Mapping Experiment; \citealt{2014SPIE.9145E..22B}), HERA (the Hydrogen Epoch of Reionization Array; \citealt{2017PASP..129d5001D}), FAST \citep{2011IJMPD..20..989N} and the SKA
\citep{2013arXiv1311.4288H}, etc. Although the redshifted 21cm line can provide a large amount of information for cosmology, its detection is difficult. The cosmological 21cm signal, whose brightness temperature is $\sim$0.14 mK at redshift $z$$\sim$0.8 \citep{2010Natur.466..463C, 2013ApJ...763L..20M}, is highly contaminated by the foreground emissions, whose brightness temperatures are 4--5 orders of magnitude higher \citep{2008MNRAS.388..247D,2012MNRAS.419.3491L}. The foregrounds at low frequency include primarily the Galactic synchrotron emission, which originates from the cosmic ray electrons moving in the Galactic magnetic field, the Galactic free-free emission, which is produced by free electrons scattering off ions without being captured, and extragalactic radio sources such as radio-loud galaxies and quasars \citep{1999A&A...345..380S,2008MNRAS.389.1319J}. Additionally, the observations are also affected by the radio frequency interference (RFI) and propagation effects in the ionosphere. Removing the foregrounds has been a big challenge in the redshifted 21cm experiments, and a number of methods have been proposed and developed. The general idea is to exploit the different properties of foregrounds and cosmological 21cm signal. Along one line of sight (LoS), the cosmological 21cm signal varies with redshift or frequency randomly, while the astrophysical foregrounds vary smoothly. The foreground can in principle be removed by line-of-sight fitting \citep{2006ApJ...650..529W,2008MNRAS.391..383G} or by cross-correlating data in different frequency bins \citep{2005ApJ...625..575S}. More recently, blind or semi-blind methods such as the Singular Value Decomposition (SVD) method \citep{2011MNRAS.413.1174P,2013ApJ...763L..20M}, Robust Principal Component Analysis \citep{Zuo:2018gzm}, and Independent Component Analysis \citep{2012MNRAS.423.2518C,2014MNRAS.441.3271W}, are applied.
In this paper, we introduce a simple and fast method, based on the Wiener filter, to extract the 21cm signal. The Wiener filter has been widely used in signal processing, especially for removing noise in time series or images. We use a simulation to demonstrate our method. The rest of this paper is structured as follows. In Sec. 2, we describe the Wiener filter method in general, and also describe the setup of our simulation. In Sec. 3, we apply the method to the simulated data, and present the results we obtained. Finally, we discuss a number of relevant issues in the data processing and conclude in Sec. 4. \section{Method} \label{sec:method} In a HI intensity mapping survey experiment, the observed sky emission is a mixture of the 21cm signal, noise and foregrounds; to extract the cosmologically interesting 21cm signal, their different statistical properties are used to separate them. On the relevant scale, the cosmological 21cm signal varies stochastically, as the signal strength at each frequency corresponds to the emission of a specific redshift, and on large scales the densities at different positions are independent of each other (though there is correlation to some degree). By comparison, the foreground varies smoothly in frequency space, so it could in principle be distinguished from the 21cm line. In addition to the sky signals, the electronic circuit of the receiver also generates noise. After bandpass calibration, the noise could be approximated as zero-mean white noise. Based on their different properties, it is possible, at least in principle, to extract the 21cm signal from the much stronger foregrounds and noise. In an actual radio telescope, the data would be first pre-processed to remove bad data (e.g.
those with hardware malfunction or radio frequency interference), calibrated, re-binned in frequency and time resolution, re-arranged in predefined order, then used to form image cubes with two angular dimensions and one frequency dimension. These steps will depend on the particular telescope in question, e.g. the data of a single dish telescope would be processed very differently from the data of an interferometer array. Here, we will deal with the data processing after these steps. We shall assume that an image cube has been obtained through these steps. Below we perform the 21cm signal extraction in two steps. In the first step, we filter the data along each line of sight to remove the foreground component by using its different frequency properties. As a result, the foreground could be significantly suppressed, while the 21cm signal and the thermal noise are kept. In the second step, we filter the data in the 2D angular space, to remove the randomly fluctuating noise, while keeping the more stable and consistent 21cm signal. \subsection{Wiener Filter} We assume that in an experiment the observational data $\mat{y}$ is linearly related to $\mat{x}$, the physical quantity we try to measure, \begin{equation} \mat{y} = \mat{A}\, \mat{x} + \mat{n} \label{eq:y} \end{equation} where $\mat{A}$ is the response matrix of the system, and $\mat{n}$ is random noise. We use boldface letters to denote vectors and matrices, and $^T$ denotes matrix or vector transpose (we assume the data are real numbers here). The covariance matrices for the signal and noise are $\mat{S}=\langle \mat{x}\, \mat{x}^T \rangle$, $\mat{N}=\langle \mat{n}\, \mat{n}^T \rangle$ respectively.
If $\mat{S}$ and $\mat{N}$ are known, an unbiased estimate of the signal can be obtained by applying a Wiener filter $\mat{W}$ to the data \citep{Tegmark:1996qs}, \begin{equation} \hat{\mat{x}} = \mat{W} \mat{y} \equiv \mat{S} \mat{A}^{T} \left[\mat{A} \mat{S} \mat{A}^T + \mat{N} \right]^{-1} \mat{y} \label{eq:wiener} \end{equation} The Wiener filter is optimal in the sense that it minimizes the variance of the estimator $$V=\langle (\mat{x}-\hat{\mat{x}})(\mat{x}-\hat{\mat{x}})^{T} \rangle .$$ \subsection{Simulation Setup} As the signal extraction utilizes our prior knowledge or expectation of the 21cm signal, foreground and noise, the optimal filter depends on their statistics, so the properties of the filter also depend on the particular problem. Here we describe the basic setup we use in this exercise. We consider the mid-redshift experiment aimed at detecting the baryon acoustic oscillation (BAO) signal of the large scale structure, such as the ongoing survey projects on GBT \citep{2013ApJ...763L..20M,2013MNRAS.434L..46S}, and dedicated experiments such as Tianlai \citep{2012IJMPS..12..256C,Xu:2014bya,Zhang:2016whm,Zhang:2016miz}, CHIME \citep{2014SPIE.9145E..22B}, BINGO \citep{2014arXiv1405.7936D} and HIRAX \citep{2016arXiv160702059N}. For such projects, the (synthetic) aperture is $50 \sim100$ m. Below, we shall consider a fiducial frequency of 800 MHz ($z=0.7755$) and a 100 m aperture, with a corresponding full width at half maximum (FWHM) resolution of $0.25^\circ$. We generate the 21cm signal as follows. We adopt the Planck 2015 \citep{2016A&A...594A..13P} best fit cosmological parameters for our fiducial model. For $z= 0.7755$ the corresponding comoving angular diameter distance is $D_A = 1068.95 \ensuremath{\,{\rm Mpc}}~h^{-1}$. We consider a frequency resolution of 0.1 MHz, corresponding to a comoving radial distance interval of $\Delta D_c=0.43 \ensuremath{\,{\rm Mpc}} ~h^{-1}$ at this redshift.
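In the simplest case, $\mat{A}=\mat{I}$ with covariances that are diagonal in a common basis, the filter of Eq.~(\ref{eq:wiener}) acts independently on each mode, $\hat{x}_i = S_{ii}/(S_{ii}+N_{ii})\, y_i$. A minimal sketch, with purely illustrative variance values (not taken from the simulation):

```python
def wiener_diagonal(y, sig_var, noise_var):
    """Per-mode Wiener filter x_hat = S/(S+N) * y, i.e. Eq. (2) for
    A = I and signal/noise covariances diagonal in the same basis."""
    return [s / (s + n) * yi for yi, s, n in zip(y, sig_var, noise_var)]

# Illustrative example: three modes with decreasing signal-to-noise.
y = [1.0, 1.0, 1.0]
sig_var = [9.0, 1.0, 0.01]   # assumed signal variance per mode
noise_var = [1.0, 1.0, 1.0]  # assumed white-noise variance per mode
print(wiener_diagonal(y, sig_var, noise_var))
```

High signal-to-noise modes pass through almost unchanged, while noise-dominated modes are strongly suppressed, which is the behaviour exploited in both filtering steps below.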
For our 21cm simulation, we generate an image cube with $200^3$ voxels, which is convenient for computation. The total frequency range is then 20 MHz, i.e. $790 \sim 810 \ensuremath{\, {\rm MHz}}$. The corresponding angular size per pixel is $\Delta \theta = \Delta D_c / D_A = 0.023^{\circ}$, smaller than the beam FWHM, which is good as our computation precision would not be affected by too large pixel sizes. The whole box has an angular area of $4.6^{\circ} \times 4.6^{\circ}$. The volume of the corresponding 3D box in real space is $V = L^3 = (86\,{\rm Mpc}\,h^{-1})^3$. We assume a thermal noise with $\sigma_n=200$ mK per voxel, which is $\sim 10^3$ times the 21cm fluctuation of $\sim$0.2 mK in our frequency (redshift) range. Note that the beam area is about $10^2$ times the pixel area, so if the pixel noises are independent of each other, this corresponds to a noise level of $\sigma_T =20 \ensuremath{\, {\rm mK}}$ per beam. Note that, according to the measurement equation \begin{equation} \sigma_T \sim \frac{T_{\rm sys}}{\eta \sqrt{\Delta \nu t}}, \label{eq:meaEq} \end{equation} where $\eta < 1$ is the efficiency of the system, taking $\Delta \nu =0.1\ensuremath{\, {\rm MHz}}$, if the system is stable with a typical system temperature of $T_{\rm sys} =20\ensuremath{\, {\rm K}} \sim 100 \ensuremath{\, {\rm K}}$, and the noise is thermal, this level of noise can be achieved with a few minutes of integration time. However, in real telescopes there might be a noise floor of non-thermal origin preventing the noise from reaching the low thermal value given by Eq.~\ref{eq:meaEq} no matter how long the integration time, so the actual noise might be higher.
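Inverting Eq.~(\ref{eq:meaEq}) gives the integration time needed to reach a target noise level, $t = T_{\rm sys}^2/(\eta^2 \sigma_T^2 \Delta\nu)$. A quick numerical check (assuming, for simplicity, $\eta = 1$) confirms the "few minutes" estimate quoted above:

```python
def integration_time(T_sys_K, sigma_T_mK, dnu_Hz, eta=1.0):
    """Integration time in seconds from the measurement equation
    sigma_T = T_sys / (eta * sqrt(dnu * t)), solved for t."""
    sigma_T_K = sigma_T_mK * 1e-3
    return (T_sys_K / (eta * sigma_T_K)) ** 2 / dnu_Hz

# Target: 20 mK per beam with a 0.1 MHz channel, for the quoted T_sys range.
for T_sys in (20.0, 100.0):
    t = integration_time(T_sys, 20.0, 0.1e6)
    print(f"T_sys = {T_sys:5.1f} K -> t = {t:7.1f} s")
```

For $T_{\rm sys}=20$ K this gives 10 s, and for $T_{\rm sys}=100$ K about 250 s, i.e. a few minutes at most; a realistic $\eta<1$ lengthens these times by $1/\eta^2$.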
\begin{figure}[htbp] \centering \includegraphics[width=0.46\textwidth]{ms2018-0108fig1.pdf} \caption{ Power spectrum of dark matter at $z=0.7755$ (with non-linear correction).} \label{fig:Pk} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.46\textwidth]{ms2018-0108fig2_1.pdf}\\ \includegraphics[width=0.46\textwidth]{ms2018-0108fig2_2.pdf} \caption{Top: the simulated 21cm signal at 800 MHz; bottom: the 21cm signal convolved with a FWHM $0.25^{\circ}$ Gaussian beam.} \label{fig:inmap} \end{figure} Assuming the neutral hydrogen evolution is linear on the relevant scales, the dark matter power spectrum is shown in Fig.~\ref{fig:Pk}. We then generate a random density distribution in the simulation box according to the power spectrum. Adopting an HI bias of $b_{\rm HI}=0.70$ and HI density ratio $\Omega_{\rm HI}=6.6\times10^{-4}$ \cite{2010Natur.466..463C,2013ApJ...763L..20M}, the average brightness temperature of the 21cm signal around $z$$\sim$0.8 is given by \begin{eqnarray} \bar{T}_{21} &\approx & 0.284 \left(\frac{\Omega_{\rm HI}}{10^{-3}}\right) \left(\frac{h}{0.73}\right) \left(\frac{1+z}{1.8}\right)^{1/2} \nonumber \\ && \times \left(\frac{\Omega_m+\Omega_{\Lambda}(1+z)^{-3}}{0.37}\right)^{-1/2} \ensuremath{\, {\rm mK}} \label{eq:T21mean} \end{eqnarray} The simulated 21cm fluctuation temperature map with $\delta T_{21}(\vec{x}) = b_{\rm HI}\bar{T}_{21} \delta(\vec{x}) $ is shown in the top panel of Fig.~\ref{fig:inmap}. We also produce a map convolved with a Gaussian beam whose FWHM resolution is $0.25^{\circ}$, corresponding to the resolution of a telescope with an aperture of $\sim 100 \ensuremath{\,{\rm m}}$ at a frequency of 800 MHz; this is shown in the bottom panel of Fig.~\ref{fig:inmap}. We also include a few simple foreground models in the simulation.
At the low frequencies, we consider three kinds of spectral models for the diffuse foreground: a single index for the Galactic synchrotron emission, which dominates the low-frequency sky; a slightly more sophisticated model with a frequency-varying index; and multiple indices, which are likely to be produced by additional components such as synchrotron emission, free-free emission, etc. In the simplest case, the brightness temperature of the foreground component is modeled with a single spectral index, \begin{eqnarray} f(\nu) = A_* \left( \frac{\nu}{\nu_*} \right)^{\beta_*} \label{eq:1index} \end{eqnarray} We adopt $A_*=5.3$ K, the average temperature of the foreground at 800 MHz \citep{2017MNRAS.464.3486Z}, and $\beta_*=-2.76$ \citep{1988A&AS...74....7R, 2001A&A...368.1123T}. A slight improvement is to consider a running index foreground given by \cite{2006ApJ...650..529W} \begin{eqnarray} f(\nu) = A_* \left( \frac{\nu}{\nu_*} \right)^{\beta_* -0.1\ln(\nu/\nu_*)} \label{eq:runningindex} \end{eqnarray} Finally, for the multiple-index foreground we take \begin{eqnarray} f(\nu) = 5.3 \left( \frac{\nu}{\nu_*} \right)^{-2.76} + 0.2 \left( \frac{\nu}{\nu_*} \right)^{-2.1} + 0.1 \left( \frac{\nu}{\nu_*} \right)^{-3.2} \label{eq:multiindex} \end{eqnarray} \begin{figure}[htbp] \centering \includegraphics[width=0.44\textwidth]{ms2018-0108fig3_1.pdf} \includegraphics[width=0.44\textwidth]{ms2018-0108fig3_2.pdf} \caption{ The input spectrum along one line of sight. Top: the input 21cm signal; Bottom: 21cm signal + single index foreground + noise. } \label{fig:freq_1index_input} \end{figure} In Fig.~\ref{fig:freq_1index_input}, we show the input 21cm signal along one line of sight in the top panel, and the total temperature including the 21cm signal, the foreground (a single power-law index component) and randomly generated noise in the bottom panel. \section{Results} Here we apply this method to the extraction of the 21cm signal from the observational data with foreground and noise.
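The three foreground models of Eqs.~(\ref{eq:1index})--(\ref{eq:multiindex}) that enter the simulated data are straightforward to evaluate; a minimal sketch using the parameter values quoted above:

```python
import math

A_STAR = 5.3       # K, mean foreground temperature at the pivot frequency
NU_STAR = 800.0    # MHz, pivot frequency
BETA_STAR = -2.76  # synchrotron spectral index

def fg_single(nu):
    """Single power-law index, Eq. (6)."""
    return A_STAR * (nu / NU_STAR) ** BETA_STAR

def fg_running(nu):
    """Running spectral index, Eq. (7)."""
    return A_STAR * (nu / NU_STAR) ** (BETA_STAR - 0.1 * math.log(nu / NU_STAR))

def fg_multi(nu):
    """Three-component model, Eq. (8)."""
    x = nu / NU_STAR
    return 5.3 * x ** -2.76 + 0.2 * x ** -2.1 + 0.1 * x ** -3.2

# At the pivot frequency the single- and running-index models coincide (5.3 K),
# while the multi-index model sums to 5.6 K.
print(fg_single(800.0), fg_running(800.0), fg_multi(800.0))
```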
In the present paper, we shall adopt a two-step procedure: in the first step, we process the data cube along each line of sight by removing the smoothly distributed foreground component. In the second step, we apply the Wiener filter in the two-dimensional angular space, which reduces the noise significantly and recovers the 21cm signal. \subsection{Frequency Filtering} \label{sec:freqspace} In frequency space, for the relatively low resolution (0.1 MHz) required for intensity mapping, we may neglect the small side lobes in the frequency channels, and take $\mat{A}=\mat{I}$, where $\mat{I}$ is the identity matrix. We rewrite Eq.~(\ref{eq:y}) as \begin{eqnarray} y(\nu) = f(\nu) + s(\nu) + n(\nu) \label{eq:ynu} \end{eqnarray} where $f(\nu)$ is the foreground, $s(\nu)$ is the 21cm signal, and $n(\nu)$ is the noise, which we assume to be white with zero mean. We assume the signal, foreground and noise are uncorrelated with each other, so that $\langle \mathbf{(f+s+n)\,(f+s+n)}^T \rangle =\mat{F} + \mat{S} + \mat{N}$, where $\mat{S}=\langle \mat{s s^T} \rangle $, $ \mat{F}=\langle \mat{f f^T} \rangle $, and $\mat{N}=\langle \mat{n n^T} \rangle $. Along one line of sight, both the 21cm signal and the noise are stochastically varying on the relevant scales, while the foreground is much smoother, so we may use this to extract the foreground from the data first. If we ignore the slight imperfections in the frequency channels, which are fairly small, the response can be treated as a $\delta$ function, and the foreground extraction filter is given by $\mat{W}_\nu^f=\mat{F}[\mat{F}+\mat{S}+\mat{N}]^{-1}$, while the signal+noise filter is given by $\mat{W}_\nu = \mat{I}-\mat{W}_\nu^f$. We note that in the real world the foreground is unknown, so strictly speaking the Wiener filter method cannot be applied. 
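A toy numerical version of this line-of-sight filter $\mat{W}_\nu^f=\mat{F}[\mat{F}+\mat{S}+\mat{N}]^{-1}$ can be sketched as follows. The covariance priors (a long-correlation-length Gaussian kernel for the smooth foreground, white priors for signal and noise) and all amplitudes are illustrative assumptions, not the covariances used in the actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
nchan = 256
nu = np.linspace(700.0, 900.0, nchan)             # MHz

# Toy line of sight: smooth power-law foreground + weak stochastic
# "21cm" component + white noise (amplitudes illustrative only).
f = 5.3 * (nu / 800.0) ** -2.76                   # K
s = 1e-4 * rng.standard_normal(nchan)             # K (~0.1 mK)
n = 1e-4 * rng.standard_normal(nchan)             # K
y = f + s + n

# Assumed priors: F is a smooth (long-correlation-length) kernel,
# while S and N are white.
dnu = np.subtract.outer(nu, nu)
F = 25.0 * np.exp(-dnu ** 2 / (2.0 * 50.0 ** 2))  # K^2, 50 MHz scale
S = (1e-4) ** 2 * np.eye(nchan)
N = (1e-4) ** 2 * np.eye(nchan)

W_f = F @ np.linalg.inv(F + S + N)   # foreground (smooth-component) filter
f_hat = W_f @ y                      # extracted smooth component
sn_hat = y - f_hat                   # extracted 21cm + noise residual
```

The K-level smooth foreground is absorbed into `f_hat`, while the mK-level stochastic residual `sn_hat` is what would be passed on to the angular-space step.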
Nevertheless, the foreground is believed to be smooth in frequency space, so that even though the Wiener filter constructed this way is not very precise, it can still serve as a low-pass filter to extract the smooth component of the data. In fact, we also tried applying a simple low-pass filter and found the result is practically the same. \begin{figure}[htbp] \centering \includegraphics[width=0.23\textwidth]{ms2018-0108fig4_1.pdf} \includegraphics[width=0.23\textwidth]{ms2018-0108fig4_2.pdf} \\ \includegraphics[width=0.23\textwidth]{ms2018-0108fig4_3.pdf} \includegraphics[width=0.23\textwidth]{ms2018-0108fig4_4.pdf} \\ \includegraphics[width=0.23\textwidth]{ms2018-0108fig4_5.pdf} \includegraphics[width=0.23\textwidth]{ms2018-0108fig4_6.pdf} \caption{ Left: extracted foregrounds ($\mat{W}^f\,\mat{y}$). Right: extracted 21cm+noise ($[\mat{I}-\mat{W}^f]\,\mat{y}$). Top to bottom: different foreground models with single index, running index and multiple indices, respectively. } \label{fig:freq_3index} \end{figure} The filtered data $\mat{W}^f \mat{y}$ and $[\mat{I}-\mat{W}^f]\,\mat{y}$ are shown for a random line of sight in Fig.~\ref{fig:freq_3index}. From top to bottom, the three panels show the result for the single index, running index and multi-component cases, respectively. The smoothly varying and stochastically varying components are separated, but the 21cm signal is still mixed with the noise. \subsection{Angular Space Filtering} \label{sec:angular} We then apply angular-space filtering to separate the 21cm signal from the noise. To simplify the notation, we will omit the frequency variable $\nu$ in the expressions given below, though it should be understood that all the observables and beams are functions of $\nu$. 
We label the pixels of the sky by the angular position $\vec{\theta}$; the Gaussian beam response $\mat{A}$ is then given by \begin{equation} A_{\vec{\theta},\vec{\theta}^\prime}= \frac{1}{\sqrt{2\pi}\sigma_\theta} e^{-|\vec{\theta}-\vec{\theta}^\prime|^2/(2\sigma_\theta^2)} \end{equation} The covariance matrix $\mat{S}$ is given by the angular correlation function of the signal, \begin{eqnarray} \mat{S}_{\vec{\theta},\vec{\theta^\prime}}&=&\langle T_{\rm 21}(\vec{\theta}) T_{\rm 21}(\vec{\theta}^\prime) \rangle = C(|\vec{\theta}-\vec{\theta}^\prime|) \label{eq:STT} \end{eqnarray} Here we have assumed that the 21cm signal is statistically isotropic and homogeneous, i.e.\ the statistics do not depend on the position in the sky or on the direction $\vec{\theta}-\vec{\theta}^\prime$. Expanding in spherical harmonics, \begin{equation} T(\vec{\theta})=\sum_{l,m} a_{l,m} Y_{l,m}(\vec{\theta}) \label{eq:Tsph} \end{equation} Substituting Eq.~(\ref{eq:Tsph}) into Eq.~(\ref{eq:STT}) and using the relation $\langle a_{l,m} a_{l^\prime,m^\prime}^* \rangle =C_l \delta_{l,l^\prime} \delta_{m,m^\prime}$, we obtain \begin{equation} C(|\vec{\theta}-\vec{\theta}^\prime|) = \sum_{l,m} C_l Y_{l,m}(\vec{\theta}) Y_{l,m}^*(\vec{\theta}^\prime) \end{equation} Using the addition theorem for spherical harmonics, $$\sum_{m=-l}^l Y_{l,m}(\vec{\theta}) Y_{l,m}^*(\vec{\theta}^\prime) = \frac{2l+1}{4\pi} P_l(\vec{\theta} \cdot \vec{\theta}^\prime)$$ where $P_l$ is the Legendre polynomial of order $l$, and $\vec{\theta} \cdot \vec{\theta}^\prime = \cos\theta$, with $\theta$ denoting the angle between the unit vectors $\vec{\theta}$ and $\vec{\theta}^\prime$, we finally obtain \begin{equation} \mat{S}_{\vec{\theta},\vec{\theta^\prime}}=\sum_{l} \frac{2l+1}{4\pi} C_l ~P_l(\cos\theta) \end{equation} The $C_l$ can be computed from the power spectrum by considering its projection on a thin shell with bandwidth $\Delta\nu$ \citep{Santos:2004ju}, \begin{equation} C_l(\nu)=\frac{2 \Delta 
\nu^2}{\pi}\int k^2 d k P_{\rm 21}(k,\nu) j_l^2[k r(\nu)] \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{ms2018-0108fig5_1.pdf} \includegraphics[width=0.45\textwidth]{ms2018-0108fig5_2.pdf} \caption{ $C_l$ (top) and $C(\theta)$ (bottom) of the 21cm signal at $z\sim 0.8$. } \label{fig:Cl} \end{figure} In Fig.~\ref{fig:Cl} we show the angular power spectrum $C_l$ (top panel) and the corresponding angular correlation function $C(\theta)$ (bottom panel). The correlation of the 21cm signal drops rapidly at $l \sim 10^2$, i.e.\ on degree scales. The noise covariance matrix for pixels $\vec{\theta},\vec{\theta}^\prime$ is given by \begin{equation} \mat{N}_{\vec{\theta},\vec{\theta}^\prime}= N_0 \delta_{\vec{\theta},\vec{\theta}^\prime}, \end{equation} where for simplicity we have assumed a constant noise $N_0$. Note that in Eq.~(\ref{eq:y}), if the vector $\mat{x}$ denotes sky pixels while $\mat{y}$ denotes time-ordered data, then the different elements of $\mat{n}$ are data taken at different times and may be considered independent random samples, so the noise matrix $\mat{N}$ is diagonal. If we use pixels finer than the beam size, then when we re-bin the time-ordered data into sky pixels, the noise in adjacent pixels with angular distance smaller than the beam size would be correlated; this is automatically taken into account in the Wiener filter Eq.~(\ref{eq:wiener}) by the response matrices $\mat{A}$ and $\mat{A}^T$. In the real world, the noise may be more complicated: for example, the noise level may be direction-dependent, either due to a brighter sky temperature or to the operating condition of the telescope receiver. Furthermore, even the data taken at different times may have some correlation due to the presence of $1/f$ noise. These effects can also be handled by the Wiener filtering algorithm, if the noise covariance matrix $\mat{N}$ is known. In the present work we shall assume the simple case where the noise is uncorrelated and constant. 
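The Legendre sum relating $C_l$ to $C(\theta)$ can be evaluated directly. In the sketch below, the toy $C_l$ is an assumed spectrum peaked at low multipoles, used only to illustrate the projection; the actual $C_l$ follows from the 21cm power spectrum as above.

```python
import numpy as np
from scipy.special import eval_legendre

lmax = 300
ell = np.arange(lmax + 1)
C_l = 1.0 / (1.0 + (ell / 50.0) ** 4)   # mK^2, toy angular spectrum

def corr(theta_rad):
    """C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta)."""
    x = np.cos(theta_rad)
    return np.sum((2.0 * ell + 1.0) / (4.0 * np.pi) * C_l
                  * eval_legendre(ell, x))

theta = np.radians([0.0, 0.5, 1.0, 2.0, 5.0])
C_theta = np.array([corr(t) for t in theta])
# The correlation peaks at zero lag and falls off on roughly the
# angular scale set by the turnover multipole of C_l.
```

With a spectrum cutting off near $l\sim 50$--$100$, the correlation decays on roughly the degree scale, consistent with the behaviour seen in Fig.~\ref{fig:Cl}.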
\begin{figure}[htbp] \centering \includegraphics[width=0.46\textwidth]{ms2018-0108fig6_1.pdf} \includegraphics[width=0.46\textwidth]{ms2018-0108fig6_2.pdf} \includegraphics[width=0.46\textwidth]{ms2018-0108fig6_3.pdf} \caption{ Top Panel: 21cm map + noise; Middle Panel: extracted 21cm map; Bottom: difference between the extracted and input 21cm maps. } \label{fig:pix_ext21} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.46\textwidth]{ms2018-0108fig7_1.pdf} \includegraphics[width=0.46\textwidth]{ms2018-0108fig7_2.pdf} \includegraphics[width=0.46\textwidth]{ms2018-0108fig7_3.pdf} \caption{ Same as Fig.~\ref{fig:pix_ext21}, but with higher noise level (2000 mK). } \label{fig:map_highnoise} \end{figure} Fig.~\ref{fig:pix_ext21} shows the 21cm map extracted by applying the Wiener filter. From top to bottom, we plot the input map with the 21cm signal plus 200 mK noise, the extracted 21cm map, and the difference between the extracted map and the input map. Despite the high noise level, the 21cm signal is successfully recovered; the difference between the recovered map and the original one is very small. Note that the Wiener filter is obtained by using the angular correlation function computed from the cosmological model, which corresponds to the ensemble-average value; the actual realization may differ slightly due to sample variance, so the recovery is not perfect. In fact, the method also works reasonably well even when the noise level is higher. This is shown in Fig.~\ref{fig:map_highnoise}, where the noise is assumed to be 2000 mK. Here we see that the difference between the recovered 21cm map and the original one is larger than in Fig.~\ref{fig:pix_ext21}, but the overall structure of the 21cm intensity is still clearly seen, and the difference between the two maps is still much smaller than the 21cm brightness temperature. In Fig.~\ref{fig:powerspec} we show the recovered 21cm power spectrum and the relative error. 
The error bars are estimated from the variance in $k$-space. The error obtained here is for the simulation box, which is $(86 \ensuremath{\,{\rm Mpc}} ~h^{-1})^3$. If we assume that the relative error scales simply as $\Delta P/P \sim V^{-1/2}$, we estimate that in order to achieve 1\% statistical precision on the power spectrum at $k=0.1~h\,{\rm Mpc}^{-1}$, the required survey volume is $(370 \ensuremath{\,{\rm Mpc}} ~ h^{-1})^3$. The actual error may be larger when sampling variance and imperfections in the reconstruction are taken into account. \begin{figure} \centering \includegraphics[width=0.46\textwidth]{ms2018-0108fig8.pdf} \caption{The recovered 21cm power spectrum (top panel) and the relative error (bottom panel) for this simulation box. We show the power for the original (blue line) signal, the reconstruction with 200 mK noise (red symbols), and 2000 mK noise (green symbols) in the figure.} \label{fig:powerspec} \end{figure} \begin{figure*} \centering \includegraphics[width=0.46\textwidth]{ms2018-0108fig9_1.pdf} \includegraphics[width=0.46\textwidth]{ms2018-0108fig9_2.pdf}\\ \includegraphics[width=0.46\textwidth]{ms2018-0108fig9_3.pdf} \includegraphics[width=0.46\textwidth]{ms2018-0108fig9_4.pdf} \caption{Effect of an inaccurate beam on map reconstruction. We assume here the real beam size is $0.25^\circ$. Top Left: input 21cm map; Top Right: reconstructed 21cm map from noise of 200 mK, with the correct beam size of $0.25^\circ$; Bottom Left: reconstruction with the incorrect beam width of $0.23^\circ$; Bottom Right: reconstruction with the incorrect beam width of $0.20^\circ$.} \label{fig:pix_ext21} \end{figure*} When making the map, the Wiener filter automatically deconvolves the data. In the above analysis the beam is assumed to be perfectly known, so the recovery is very accurate even when noise is present. 
Indeed, some finer details on scales smaller than the beam width are recovered in the reconstructed map, which shows the power of the Wiener filter method. However, in the real world the beam is only known either by electromagnetic field simulation or by calibration measurements, and both approaches have errors. We make a simple demonstration of the effect of an inaccurately known beam by the following exercise: we assume the real beam is a Gaussian with a beam width (FWHM) of $0.25^\circ$, but then in the reconstruction we use Gaussian beams with slightly different beam widths. The result is shown in Fig.~\ref{fig:pix_ext21}. The top panels show the original 21cm signal (top left, same as the top panel of Fig.~\ref{fig:inmap}) and the reconstructed map with the correct beam size (top right, same as the middle panel of Fig.~\ref{fig:pix_ext21}); we show these here again for easy comparison. The bottom panels show the reconstructed map with Gaussian beams of incorrect beam widths of $0.23^\circ$ (Bottom Left) and $0.20^\circ$ (Bottom Right). We see that when the incorrect beam widths are used, the whole reconstructed map becomes ``fuzzier'': the finer details of the original map are lost, though the overall large-scale structure is still very similar. In the real world, of course, the deviation of the beam from a Gaussian might be more irregular and complicated, but the overall effect would be to lose the details below the beam resolution while retaining the larger overall structures. \section{Discussions and Conclusion} \label{sec:conclusion} A 3D cosmological neutral hydrogen survey over a large fraction of the sky is an efficient way to study our universe. A number of instruments have been developed or are being designed for such surveys. However, a great challenge is that both the foreground radiation and the noise are several orders of magnitude larger than the 21cm signal. 
To extract the cosmological 21cm signal from the data collected from such instruments, an efficient extraction method is required. This paper is an exploratory study of this issue using the Wiener filter, which is widely used in the signal processing field. We have taken as an example the analysis of data processing for a mid-redshift experiment, which is aimed at measuring the dark energy equation of state by using the BAO features in the large scale structure. However, the method is also applicable to EoR experiments. Assuming that the data has been pre-processed and an image cube has been produced, we use a two-step procedure to extract the 21cm signal. We first subtract the foreground by removing the smooth component in the frequency spectrum along each line of sight. Previously, \citet{2012MNRAS.419.3491L} applied the Wiener filter method to extract the foreground from 21cm experiments, but they were concerned mostly with the frequency spectrum, which is applicable to a 21cm global spectrum experiment, or to one line of sight of a 21cm intensity mapping experiment, corresponding to this first step. However, we then go one step further, extracting the 21cm signal by applying the Wiener filter in the two-dimensional angular space. Our simulations show that the 21cm signal can be recovered with good precision. In actual data analysis, the power spectrum of the 21cm signal is not precisely known, but an approximate value can be inferred from other cosmological observations. Starting from an approximate value, one can apply the filter iteratively to improve the estimate. In the present study we have made a number of simplifying assumptions. In an actual experiment, the beam shape is more complicated, frequency-dependent and only known to limited precision; the calibration procedure may introduce additional errors, and the noise may be non-thermal and have more complicated statistical properties. 
All of these factors may affect the extraction of the 21cm signal. To overcome these problems, one needs to consider the specific experiment. Nevertheless, Wiener filtering may provide a very useful tool for 21cm data analysis. \acknowledgements This research is supported by the Ministry of Science and Technology grant 2016YFE0100300, NSFC grants 11473044, U1501501, U1631118, 11633004, and CAS grant QYZDJ-SSW-SLH017. F. Q. Wu also acknowledges the support of the CSC Cai Yuanpei grant. \bibliographystyle{aasjournal}
\section{Abstract:} In 1966, Gallai proposed a question: whether every connected graph has a vertex that is common to all its longest paths. The answer to that is negative. Another related question was raised in 1995 at the British Combinatorial Conference: do any three longest paths in a connected graph have a vertex in common? In this paper it is shown that the answer to that question is \textbf{yes}.\\ \\ Keywords: longest path, intersection of longest paths. \section{Introduction:} In 1966, during a colloquium on graph theory, Gallai[1] asked whether all longest paths in a connected graph have a vertex in common. A few years later, Walther[2] answered this question negatively by exhibiting a counterexample on 25 vertices. Another question was asked in 1995: whether every three longest paths in a connected graph have a vertex in common[3]. The only known progress on this problem was obtained by Axenovich[4], who proved the statement for outerplanar graphs, and by de Rezende, Fernandes, Martin and Wakabayashi[5], who proved that in a connected graph in which all non-trivial blocks are Hamiltonian, any three longest paths have a common vertex.\\ \\ In this paper, I show that for a connected graph in a 2-dimensional plane, any three longest paths share a common vertex.\\ \section{The proof:} \textbf{Theorem:} Any three longest paths in a connected graph have a common vertex. \\ \\ \textbf{proof:}\\ Let \textbf{\textit{G}} be a \textit{connected graph} in a 2-dimensional plane. Let the length of the longest path in \textbf{\textit{G}} be \(n\). Suppose \(P_1,P_2,P_3\) are paths in \textbf{\textit{G}} of length \(n\) and \(P_i=\langle p_{i1},p_{i2},...,p_{in}\rangle\). \\ \\ \textbf{Lemma 1:} Any two longest paths in \textbf{\textit{G}} have at least one common vertex.\\ \\ proof:\\ Consider two longest paths \(P_i\) and \(P_j\). Let us assume they do not have a common vertex. 
Since \textbf{\textit{G}} is connected, there is a path \(P^\prime\) connecting \(p_{ik}\) and \(p_{jk^\prime}\) for some \(k,k^\prime \in \{1,2,...,n\}\) such that \(P^\prime\) shares no vertex with \(P_i\cup P_j\) other than \(p_{ik}\) and \(p_{jk^\prime}\). Say \(P^\prime = \langle p_{ik},v_1,v_2,...,v_b,p_{jk^\prime}\rangle\).\\ \\ Without loss of generality we can assume \(k,k^\prime\geq\lceil\frac{n}{2}\rceil\) (we can always reverse the numbering). Then we can construct a new path \(P^*=\langle p_{i1},p_{i2},...,p_{ik},v_1,v_2,...,v_b,p_{jk^\prime},...,p_{j1}\rangle\)\\ \\ Clearly \(P^*\) has a length of at least \(n+1\), which contradicts the assumption that \textbf{\textit{G}} has no path of length greater than \(n\).\\ \\ Hence, any two longest paths in a connected graph must have at least one vertex in common. \\ \\ \textbf{Lemma 2:} If the length of the longest path in a connected graph is even, then either the longest path is unique or any two longest paths have more than one vertex in common.\\ \\ proof:\\ Assume the contrary: that there are two longest paths \(P_i\) and \(P_j\) of length \(2m\) that have only one vertex in common. Say \(p_{ik}=p_{jk^\prime}\) for some \(k,k^\prime \in \{1,2,...,2m\}\). Without loss of generality, we can assume that \(k,k^\prime\geq m\).\\ \\ Case 1: If \(k\) or \(k^\prime\) is greater than \(m\).\\ \\ Suppose \(k>m\). The path from \(p_{i1}\) to \(p_{ik}\) (or \(p_{jk^\prime}\)) along \(P_i\) has at least \(m+1\) vertices and the path from \(p_{ik}\) (or \(p_{jk^\prime}\)) to \(p_{j1}\) along \(P_j\) has at least \(m\) vertices. So, this path \(\langle p_{i1},p_{i2},...,p_{ik}(or\;p_{jk^\prime}),p_{jk^\prime - 1},...,p_{j1}\rangle\) has a length of at least \(2m+1\), which is a contradiction.\\ \\ Case 2: If \(k=k^\prime=m\).\\ \\ Consider the path from \(p_{in}\) to \(p_{ik}\) (or \(p_{jk^\prime}\)) along \(P_i\) and then to \(p_{jn}\) along \(P_j\). 
Clearly, this path \(\langle p_{in},p_{in-1},...,p_{ik}(or\;p_{jk^\prime}),p_{jk^\prime+1},...,p_{jn}\rangle\) has a length of \(2m+1\), which is a contradiction. \\ \\ Hence, in a connected graph with even longest path length, either the longest path is unique or any two longest paths have more than one common vertex. \\ \\ \subsection{Case 1:} Suppose \(P_i\) and \(P_j\) are two longest paths in a connected graph \textbf{\textit{G}}. If \(p_{ik}=p_{jk^\prime}\) (i.e., if \(P_i\) and \(P_j\) share a vertex) for some \(k,k^\prime\in\{1,2,...,n\}\), then \(k=k^\prime\).\\ \\ Assuming the above statement to be true, we proceed as follows:\\ \\ \textbf{Lemma 3:} Suppose \(P_1,P_2,P_3\) are three longest paths in a graph \textbf{\textit{G}}. Let \(\{p_{2t_1},p_{2t_2},...,p_{2t_a}\}\) be the common vertices of \(P_1\) and \(P_2\) and let \(\{p_{2r_1},p_{2r_2},...,p_{2r_b}\}\) be the common vertices of \(P_2\) and \(P_3\) where \(t_i,r_j\in\{1,2,...,n\}\) and \(t_1<t_2<...<t_a\) and \(r_1<r_2<...<r_b\). Then there are no \( p,q\in \{1,2,...,a\}\) and \(s\in\{1,2,...,b\}\) such that \(t_p<r_s<t_q\).\\ \\ proof:\\ Note: We know that \(p_{2t_1}=p_{1t_1},p_{2t_2}=p_{1t_2},...,p_{2t_a}=p_{1t_a}\) and \(p_{2r_1}=p_{3r_1},p_{2r_2}=p_{3r_2},...,p_{2r_b}=p_{3r_b}\). Assume there are such \(p,q\in\{1,2,...,a\}\) and \(s\in\{1,2,...,b\}\) such that \(t_p<r_s<t_q\). Without loss of generality, we can assume there are no intersections of \(P_2\) with \(P_1\) and \(P_3\) between the points \(p_{2t_p}\) and \(p_{2t_q}\) other than \(p_{2r_s}\), and assume \(p_{2t_p}\) is not common to all the three paths. Now consider the path from \(p_{31}\) to \(p_{2r_s}\) along \(P_3\). It is clear from the assumption of Case 1 that \(\;\mid\mid\langle p_{31},p_{32},...,p_{2r_s}\rangle\mid\mid=r_s\). Now walk up from \(p_{2r_s}\) to \(p_{2t_p}\) along \(P_2\); \(\mid\mid\langle p_{2r_s-1},...,p_{2t_p}\rangle\mid\mid=r_s-t_p\). 
Then from \(p_{2t_p}\) to \(p_{2t_q}\) along \(P_1\), \(\mid\mid\langle p_{1t_p+1},...,p_{1t_q}\rangle\mid\mid=t_q-t_p\). Then from \(p_{2t_q}\) to \(p_{1n}\) along \(P_1\), \(\mid\mid\langle p_{1t_q+1},...,p_{1n}\rangle\mid\mid=n-t_q\) (we can walk this path because \(t_q\) is the largest index we have walked so far, hence the vertices \(p_{1t_q+1},...,p_{1n}\) have not been walked on before). So, the total path length is \(r_s+(r_s-t_p)+(t_q-t_p)+(n-t_q)=n+2(r_s-t_p)>n\), which contradicts the assumption that the largest path length is \(n\). Hence there are no \( p,q\in \{1,2,...,a\}\) and \(s\in\{1,2,...,b\}\) such that \(t_p<r_s<t_q\).\\ \\ This implies that either the three paths intersect at the same vertices, or \(t_1<t_2<...<t_a<r_1<r_2<...<r_b\).\\ \\ We work with the latter scenario.\\ \\ Suppose \(P_1,P_2,P_3\) are three longest paths in a graph \textbf{\textit{G}}. We assume that they do not have a common vertex. Let \(\{p_{2t_1},p_{2t_2},...,p_{2t_a}\}\) be the common vertices of \(P_1\) and \(P_2\) and let \(\{p_{2r_1},p_{2r_2},...,p_{2r_b}\}\) be the common vertices of \(P_2\) and \(P_3\) where \(t_i,r_j\in\{1,2,...,n\}\) and \(t_a<r_1\). \\ Call \(\mid\mid\langle p_{11},p_{12}...,p_{2t_a}\rangle\mid\mid=\mid\mid\langle p_{21},p_{22},...,p_{2t_a}\rangle\mid\mid=x\) and \(\mid\mid\langle p_{2r_1+1},p_{2r_1+2}...,p_{2n}\rangle\mid\mid=\mid\mid\langle p_{3r_1+1},p_{3r_1+2},...,p_{3n}\rangle\mid\mid=y\). Then \(\mid\mid\langle p_{2t_a+1},p_{2t_a+2}...,p_{2r_1}\rangle\mid\mid=n-x-y\). \\ Now consider the path \(\langle p_{1n},p_{1n-1}...,p_{2t_a}\rangle\); from here, take the path \(\langle p_{2t_a+1},p_{2t_a+2}...,p_{2r_1}\rangle\) and then \(\langle p_{3r_1+1},p_{3r_1+2}...,p_{3n}\rangle\). The path length is \((n-x)+(n-x-y)+(n-y)=3n-2(x+y)\).\\ But \(3n-2(x+y)\leq n\Rightarrow n\leq x+y\), which is a contradiction because \(x+y<\mid\mid P_2\mid\mid\).\\ Hence, they must share a common vertex. 
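As an independent sanity check (not part of the proof), Lemma 1 and the three-path statement can be verified by brute force on small examples. The sketch below enumerates all maximum-length simple paths of a small ``spider'' graph by depth-first search; the graph is a toy example of my own, not one from the cited papers.

```python
from itertools import combinations

def all_longest_paths(adj):
    """Enumerate all maximum-length simple paths (as vertex tuples) by DFS."""
    best, paths = 0, []
    def dfs(path):
        nonlocal best, paths
        n_edges = len(path) - 1
        if n_edges > best:          # found a strictly longer path:
            best, paths = n_edges, []   # reset the collection
        if n_edges == best:
            paths.append(tuple(path))
        for w in adj[path[-1]]:
            if w not in path:       # keep the path simple
                path.append(w)
                dfs(path)
                path.pop()
    for v in adj:                   # try every starting vertex
        dfs([v])
    return paths

# A small connected "spider": center 0 with three legs 0-1-2, 0-3-4, 0-5-6.
adj = {0: [1, 3, 5], 1: [0, 2], 2: [1], 3: [0, 4], 4: [3], 5: [0, 6], 6: [5]}
lps = all_longest_paths(adj)
# Lemma 1: any two longest paths share a vertex; here any three do as well,
# since every longest path passes through the center vertex 0.
assert all(set(p) & set(q) for p, q in combinations(lps, 2))
assert all(set(p) & set(q) & set(r) for p, q, r in combinations(lps, 3))
```

On this graph the longest paths are the six leaf-to-leaf walks of length 4, all passing through the center, so both intersection properties hold.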
\subsection{Case 2:} Assuming the statement of Case 1 is false.\\ \\ We will use Lemma 1 from Axenovich's paper, which is \begin{figure}[H] \centering \includegraphics[width=\linewidth]{4.PNG} \caption{Lemma 1 of Axenovich's paper} \end{figure} The paths \(P_1\) and \(P_2\) would have a configuration similar to this: \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{graph1.png} \caption{} \end{figure} If we stretch \(P_1\) straight and represent the intersections by lines connecting the common vertices on the paths, then it would look like this: \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{graph2.png} \caption{} \end{figure} Now we introduce a third path of the same length, \(P_3\), and examine how it interacts with \(P_1\) and \(P_2\). \\ \\ From Lemma 1, we know that \(P_3\) will intersect both \(P_1\) and \(P_2\) at some point. So, \(P_3\) will jump from \(P_1\) to \(P_2\). To avoid the \(Q_2\) configuration, we see that \(P_3\) can jump from \(P_1\) to \(P_2\) only in the following 12 ways: \begin{figure}[H] \centering \includegraphics[width=.5\textwidth]{graph3.png} \caption{} \includegraphics[width=.5\textwidth]{graph4.png} \caption{} \includegraphics[width=.5\textwidth]{graph5.png} \caption{} \end{figure} Every other possible pattern of intersection is similar to one of these. Figure 3 is equivalent to figure 7, figure 4 is equivalent to figure 8 and figure 6 is equivalent to figure 9. \begin{figure}[H] \centering \includegraphics[width=.5\textwidth]{graph6.png} \caption{} \includegraphics[width=.5\textwidth]{graph7.png} \caption{} \end{figure} \begin{figure}[H] \centering \includegraphics[width=.5\textwidth]{graph8.png} \caption{} \end{figure} Note: Whatever the choice for \(y\) may be, the segment \(\langle x,y\rangle\) of \(P_3\) does not intersect any other path. (We can always find such a segment of \(P_3\), as \(P_3\) jumps from \(P_1\) to \(P_2\) at some point.) 
Consider this path: start at \(x\), i.e., the bottom of the yellow line connecting \(P_3\) and \(P_1\), jump from \(P_3\) to \(P_1\), then move right to the closest black line connecting \(P_1\) and \(P_2\), jump to \(P_2\) from \(P_1\), then walk towards \(y\) on \(P_2\), jump from \(P_2\) to \(P_3\) and move towards \(x\). \\ Example: \begin{figure}[H] \centering \includegraphics[width=\linewidth]{graph12.png} \caption{} \label{} \end{figure} We can always walk such a path, as the segment of \(P_3\) does not intersect with anything, and the segments of \(P_1\) and \(P_2\) do not intersect by construction. We get a cycle which satisfies the properties of \(Q_1\). Rename \(P_2\) as \(P_0\) and \(P_3\) as \(P_2\), and the segments of \(P_0,P_1,P_2\) as \(S_0,S_1,S_2\), respectively. Clearly the interior of \(S_0\) and the interior of \(S_2\) do not contain any vertices of \(P_1\), and the interior of \(S_1\) and the interior of \(S_2\) do not contain any vertices of \(P_0\). \begin{figure}[H] \centering \includegraphics[width=\linewidth]{graph13.png} \caption{} \label{} \end{figure} Hence, we cannot avoid both the \(Q_1\) and \(Q_2\) configurations simultaneously if \(P_1,P_2,P_3\) are not to have a common vertex. Therefore, \(P_1,P_2,P_3\) must have a common vertex. \section{Acknowledgements:} I thank Jui Koley (Sarojini Naidu College for Women, Kolkata, India) for motivating me to do this research, and I would like to thank Simion De (Indian Statistical Institute, Kolkata, India) for cross-checking my proof. \section{References:} {[1]} Gallai, T., \textit{Problem 4}, in: \textit{Theory of Graphs} (1968), p. 362.\\ \\ {[2]} Walther, H., Journal of Combinatorial Theory 6 (1969), pp. 1–6.\\ \\ {[3]} Research problems, Discrete Mathematics 167/168 (1997), pp. 605–615, 15th British Combinatorial Conference.\\ \\ {[4]} Axenovich, M., \textit{When do three longest paths have a common vertex?}, Discrete Mathematics, Algorithms and Applications 1 (2009), pp. 115–120.\\ \\ {[5]} Susanna F. 
de Rezende, Cristina G. Fernandes, Daniel M. Martin and Yoshiko Wakabayashi, \textit{Intersection of Longest Paths in a Graph}. \end{document}
\section{Introduction} In recent years the discoveries of gravitational waves (GW) by the LIGO and Virgo Collaborations have opened a new window to the Universe~\cite{LIGO, Virgo, GEO600, GW150914, GW151226, GW170104, GW170814, GW170817, GW170608, GWTC-1}. KAGRA will join the global GW detector network in 2019~\cite{KAGRA} and LIGO-India in 2025~\cite{LIGO-India}, improving source localization and parameter estimation~\cite{lrr-prospects}, while LISA Pathfinder's exceptional performance~\cite{pathfinder-2016} -- showing that the LISA mission is feasible -- and maturing pulsar timing arrays~\cite{IPTA-2013} mark the beginning of multiwavelength, multiband GW astronomy. Compact binary systems are the most prominent sources for the present and future GW observatories. So far these events have been analyzed using quasi-circular GW templates, as radiation-reaction effects tend to circularize the orbits~\cite{peters-1963, peters-1964} for prototypical sources. For such systems one can thus assume that by the time the binary enters the sensitivity band of current ground-based detectors the eccentricity will be close to zero. However, there are a number of astrophysical scenarios in which binary systems could have moderate eccentricities when entering the sensitivity band of ground-based detectors~\cite{huerta-2016, tiwari-2016, gondan-2018-1, gondan-2018-2, rodriguez-2018, dorazio-2018, zevin-2018}. Recently, there have been studies showing that triple interactions among black holes can produce coalescing binaries with moderate eccentricities ($\sim 0.1$) when entering the LIGO band~\cite{samsing-2014, samsing-2017, samsing-2018-1} or large eccentricities ($\sim 0.9$) when entering the LISA band~\cite{bonetti-2018}. This has major implications on how to distinguish between binary black hole (BBH) formation channels~\cite{samsing-2018-2} and motivates the development of waveforms valid for nonzero eccentricities. 
There has been great effort to model GWs of eccentric binary systems. One usually employs the quasi-Keplerian parametrization~\cite{damour-1985, memmesheimer-2004} to describe the conservative binary orbits. The phasing description, developed in Refs.~\cite{damour-2004, koenigsdoerffer-2006} and discussed in great detail for low-eccentricity binaries in Ref.~\cite{moore-2016}, efficiently incorporates the effects of radiation reaction, describing the binary dynamics on three different timescales: the orbital timescale, the periastron precession timescale, and the radiation-reaction timescale. In addition, the secular evolution of the orbital elements has been completed at the third post-Newtonian (3PN) order in Refs.~\cite{arun-2008-1, arun-2008-2, arun-2009}, including hereditary effects. Using this, several waveform models have been developed in the past years~\cite{yunes-2009, cornish-2010, key-2011, huerta-2014, huerta-2017, huerta-2018, gopakumar-2011, tanay-2016, hinder-2018, cao-2017, klein-2018}, for both nonspinning and spinning binaries. In this paper, we extend the work in Ref.~\cite{mishra-2015} by computing the tail contributions to the GW amplitudes for compact binaries in eccentric orbits at the third post-Newtonian level. Combining our tail results with the instantaneous ones, we then incorporate post-adiabatic corrections~\cite{damour-2004, koenigsdoerffer-2006, moore-2016} to get a complete waveform including radiation-reaction effects valid during the early inspiral of the binary system. We present all our results in modified harmonic (MH) gauge in terms of the post-Newtonian parameter $\bar{x} = ( G m \bar{\omega} / c^3 )^{2/3}$, where $G$ denotes the gravitational constant, $c$ the speed of light, $m$ the total mass of the binary, and $\bar{\omega}$ the adiabatic orbital frequency (see Sec.~\ref{sec: full waveform}), as well as a certain time eccentricity $\bar{e} = \bar{e}_t$ associated with the PN-accurate quasi-Keplerian parametrization. 
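As a numerical illustration, the sketch below evaluates the post-Newtonian parameter $\bar{x} = (G m \bar{\omega}/c^3)^{2/3}$ for an example binary; the chosen masses and orbital frequency are assumptions for illustration, not values taken from this paper.

```python
# Evaluate the PN expansion parameter x = (G m omega / c^3)^(2/3).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

m = 60.0 * M_sun       # total mass of an example 30+30 M_sun binary
F_orb = 10.0           # Hz, example orbital frequency early in band
omega = 2.0 * 3.141592653589793 * F_orb   # adiabatic orbital frequency, rad/s

x = (G * m * omega / c ** 3) ** (2.0 / 3.0)
# x ~ 0.07 for these values, i.e. well inside the PN regime x << 1.
```

Each successive PN order in the waveform is suppressed by an additional power of $x$ relative to the leading quadrupole contribution, which is why $x \ll 1$ during the early inspiral makes the expansion useful there.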
To calculate the complicated tail integrals, we work within a low-eccentricity expansion and express everything in terms of the mean anomaly $l$ and the phase angle $\lambda$, which accounts for the periastron advance. Compared to the results in Ref.~\cite{mishra-2015}, ours will thus not be valid for arbitrary eccentricities. Moreover, they will need to be completed by the memory contributions, which we will tackle in a follow-up paper~\cite{ebersold-2019}. This paper is structured as follows: In Sec.~\ref{sec: prerequisites} we quickly review the basics of spherical harmonic decomposition and recall how to connect the radiative multipole moments to the actual source moments. We also review the conservative 3PN-accurate quasi-Keplerian parametrization~\cite{memmesheimer-2004}. In Sec.~\ref{sec: phasing}, we discuss how to incorporate post-adiabatic corrections~\cite{damour-2004, koenigsdoerffer-2006} into this description. In Sec.~\ref{sec: hereditary}, we are then in a position to calculate the various tail integrals appearing in the source multipole moments. In Sec.~\ref{sec: full waveform}, we combine these results with the instantaneous ones and introduce post-adiabatic corrections. We also compare our results to the circular waveforms in Ref.~\cite{blanchet-2008}. Finally, in Sec.~\ref{sec: summary}, we give a brief summary of our work. Throughout this paper we mostly present results up to $\mathcal{O}(e)$, though expressions up to $\mathcal{O}(e^6)$ for all tail and post-adiabatic modes will be listed in a supplemental \emph{Mathematica} file~\cite{supplement}. 
\section{Construction of the waveform for compact binaries in eccentric orbits}\label{sec: prerequisites} \subsection{Polarizations and spherical-mode decomposition} The gravitational waves emitted by an isolated system near future radiative infinity are encoded in the transverse-traceless ($\textnormal{TT}$) projection $h_{ij}^\textnormal{TT}$ of the deviation of the space-time metric $g_{\mu\nu}$ from a flat metric $\eta_{\mu\nu}=\text{diag}(-1,1,1,1)$, in a radiative-type Cartesian-like coordinate grid $X^\mu = (cT, \bm{X})$, at order $1/R$, where $R = |\bm{X}|$ denotes the Euclidean distance of the vector $\bm{X}$ to the origin. It is convenient to choose this origin at the center of mass of the full system and to introduce the standard spherical coordinates $(\Theta, \Phi)$ associated with the so-defined Cartesian frame, for which the relation $X^i = R \, (\cos \Phi \sin \Theta, \sin \Phi \sin\Theta, \cos\Theta)$ holds. The radiative property of this frame ensures that a null geodesic going through the origin at time $T_R$ will reach an observer with position $\bm{X}$ at time $T=T_R + R/c$. If $\bm{N}(\Theta, \Phi)=\bm{X}/R$ denotes the unit direction of that observer, the plane spanned by the vectors $\bm{P}(\Theta, \Phi)$ and $\bm{Q}(\Theta, \Phi)$ belonging to some arbitrary right-handed orthonormal triad $(\bm{N},\bm{P},\bm{Q})$ must be transverse to the direction of propagation of wave rays. The transverse-traceless projection $h_{ij}^\textnormal{TT}$ can be uniquely decomposed into symmetric trace-free (STF) radiative mass-type ($U_L$) and current-type ($V_L$) multipole moments as: \begin{align} \label{eq: hTT} h_{ij}^\textnormal{TT} =&\; \frac{4 G}{c^2 R} \mathcal{P}_{ijab}(\bm{N}) \sum_{\ell=2}^{\infty} \frac{1}{c^\ell \ell!} \Big\{ N_{L-2} U_{ab L-2} \nonumber\\ &- \frac{2 \ell}{c (\ell + 1)} N_{c L-2}\epsilon_{cd(a} V_{b)d L-2} \Big\} \Big|_{T_R} + \mathcal{O} \left( \frac{1}{R^2} \right) \,. 
\end{align} Here $\mathcal{P}_{ijab} = \mathcal{P}_{ia} \mathcal{P}_{jb} - \frac{1}{2} \mathcal{P}_{ij} \mathcal{P}_{ab}$, with $\mathcal{P}_{ij} = \delta_{ij} - N_i N_j$, is the $\textnormal{TT}$ projection operator. The waveform is usually projected onto the transverse symmetric basis $e^+_{ij} = \frac{1}{2} (P_i P_j - Q_i Q_j)$, $e^\times_{ij} = P_{(i} Q_{j)}$, \begin{align} \begin{pmatrix} h_+ \\ h_\times \end{pmatrix} &= \begin{pmatrix} e^+_{ij} \\ e^\times_{ij} \end{pmatrix} \, h_{ij}^\textnormal{TT} \,, \end{align} the resulting components being referred to as the plus and cross polarizations, respectively. Equivalently, the complex basis formed by the vector $\bm{m} = (\bm{P} + \mathrm{i} \bm{Q}) / \sqrt{2}$ of spin weight 2 and its complex conjugate $\overline{\bm{m}}$ of spin weight $-2$ can be used. From the transverse trace-free character of the waveform, it follows that \begin{align} h &= h_+ - \mathrm{i} h_\times = h_{ij}^\textnormal{TT} \, \overline{m}^i \overline{m}^j \,. \end{align} From now on we shall assume that the vector $\bm{m}$ is proportional to $\bm{m}_S = (\partial \bm{N} / \partial \Theta + \mathrm{i} \sin^{-1} \! \Theta \, \partial \bm{N} / \partial \Phi) / \sqrt{2}$ so that the functions adapted to the spherical decomposition of the spin $-2$ quantity $h$ are the usual spin-weighted spherical harmonics of weight $-2$, which will be denoted by $Y_{-2}^{\ell m}(\Theta, \Phi)$. In our conventions, they are given by \begin{subequations} \begin{align} Y_{-2}^{\ell m}(\Theta, \Phi) =&\; \sqrt{\frac{2\ell+1}{4\pi}} d_2^{\ell m}(\Theta) e^{\mathrm{i} m \Phi} \,,\\ d_2^{\ell m} =&\; \sum_{k=k_\text{min}}^{k_\text{max}} \frac{(-1)^k}{k!} \nonumber\\ &\times \frac{\sqrt{(\ell+m)!(\ell-m)!(\ell+2)!(\ell-2)!}} {(k-m+2)! (\ell+m-k)! 
(\ell-k-2)!} \nonumber\\ &\times \left(\cos\frac{\Theta}{2}\right)^{2\ell+m-2k-2} \left( \sin \frac{\Theta}{2} \right)^{2k-m+2} \,, \end{align} \end{subequations} with $k_\text{min} = \max(0,m-2)$ and $k_\text{max} = \min(\ell+m,\ell-2)$. Thus, the gravitational waveform may be decomposed into spherical modes $h^{\ell m}$ as \begin{align}\label{eq: mode decomposition} h_+ - \mathrm{i} h_\times &= \sum_{\ell=2}^{+\infty} \sum_{m=-\ell}^{\ell} h^{\ell m} Y_{-2}^{\ell m} (\Theta, \Phi) \,. \end{align} The spherical harmonic modes $h^{\ell m}$ can be written in terms of the radiative mass-type ($U^{\ell m}$) and current-type ($V^{\ell m}$) multipole moments, \begin{align}\label{eq: hlm rad mom} h^{\ell m} &= -\frac{G}{\sqrt{2}R c^{\ell+2}} \left(U^{\ell m} - \frac{\mathrm{i}}{c} V^{\ell m} \right) \,, \end{align} with the inverse relations \begin{subequations} \begin{align} U^{\ell m} &= -\frac{R c^{\ell +2}}{\sqrt{2}G} \left( h^{\ell m} + (-1)^m \overline{h}{}^{\ell -m} \right) \,,\\ V^{\ell m} &= -\frac{R c^{\ell + 3} }{\sqrt{2} \mathrm{i} G} \left( -h^{\ell m} + (-1)^m \overline{h}{}^{\ell -m} \right) \,. 
\end{align} \end{subequations} The radiative moments ($U^{\ell m}$, $V^{\ell m}$) are actually related to the STF radiative moments ($U_L$, $V_L$) by \begin{subequations}\label{eq: radiative STF} \begin{align} U^{\ell m} &= \frac{4}{\ell!} \sqrt{ \frac{(\ell+1) (\ell+2)}{2 \ell (\ell-1)}} \alpha_L^{\ell m} U_L \,,\\ V^{\ell m} &= -\frac{8}{\ell !} \sqrt{ \frac{\ell (\ell+2)}{2 (\ell+1) (\ell-1)}} \alpha_L^{\ell m} V_L \,, \end{align} \end{subequations} where the $\alpha_L^{\ell m}$ denote a set of constant STF tensors that connect the basis of spherical harmonics $Y^{\ell m}(\Theta, \Phi)$ to the set of STF tensors $N_{\langle L \rangle}$ as \begin{subequations} \begin{align} N_{\langle L \rangle}(\Theta, \Phi) &= \sum_{m=-\ell}^{\ell} \alpha_L^{\ell m} Y^{\ell m} (\Theta, \Phi) \,,\\ Y^{\ell m}(\Theta, \Phi) &= \frac{(2\ell+1)!!}{4\pi\ell!} \overline{\alpha}_L^{\ell m} N^{\langle L \rangle}(\Theta, \Phi) \,. \end{align} \end{subequations} They can be calculated through \begin{align} \alpha_L^{\ell m} &= \int \mathrm{d}\Omega\; N_{\langle L \rangle} \bar{Y}^{\ell m} \,, \end{align} and are given explicitly in Eq.~(2.12) of Ref.~\cite{thorne-1980}. Remarkably, for planar binaries, there exists a mode separation~\cite{kidder-2008, faye-2012} such that $h^{\ell m}$ is completely determined by mass-type radiative multipole moments $U^{\ell m}$ for $\ell+m$ even and by current-type radiative multipole moments $V^{\ell m}$ for $\ell+m$ odd, hence \begin{subequations} \begin{align} h^{\ell m} &= -\frac{G}{\sqrt{2} R c^{\ell+2}} U^{\ell m} &&\textnormal{if } \ell+m \textnormal{ is even} \,,\\ h^{\ell m} &= \frac{\mathrm{i} G}{\sqrt{2} R c^{\ell+3}} V^{\ell m} &&\textnormal{if } \ell+m \textnormal{ is odd} \,. 
\end{align} \end{subequations} Let us finally specify the choice of the Cartesian frame and polarization vectors in the case of interest where the source is a binary system of pointlike objects with bound orbits, since this choice will fully set the amplitude modes computed in the present paper. We adopt the same conventions as in Ref.~\cite{blanchet-2008}. In the absence of spin, the orbits stay in a plane. The vector $\bm{e}_3$ is taken to be the unit normal vector orienting the sense of the motion positively. For the polarization vector $\bm{P}$, we pick the unit vector pointing towards the ascending node $\bm{N} \times \bm{e}_3$, with $\bm{N}$ representing the direction of the Earth observer. Therefore, we can also make it coincide with $\bm{e}_1$. To complete the triads $\bm{e}_a$ and $(\bm{N},\bm{P},\bm{Q})$ we set $\bm{e}_2=\bm{e}_3 \times \bm{e}_1$ and $\bm{Q}=\bm{N}\times\bm{P}$. Notice that, by construction, $\bm{N}$ belongs to the plane spanned by $\{\bm{e}_2,\bm{e}_3\}$. Its spherical coordinates, in terms of the inclination of the binary $\iota$, are thus $(\Theta = \iota, \Phi = \pi/2)$. \subsection{Multipole moments}\label{sec: multipole moments} From Eqs.~(\ref{eq: hlm rad mom}--\ref{eq: radiative STF}), we see that we need to relate the $U_L$ and $V_L$ to the actual source. In the multipolar post-Minkowskian (MPM) post-Newtonian (PN) formalism, the radiative moments ($U_L$, $V_L$) are functionals of six sets of source moments ($I_L$, $J_L$, $W_L$, $X_L$, $Y_L$, $Z_L$). The relations between the radiative moments and the source moments have been obtained at the 3PN order and are listed in Ref.~\cite{blanchet-2008}, Eqs.~(5.4--5.11). We can split the expressions for the radiative moments into two parts, namely the instantaneous and the hereditary parts: \begin{align} U_L &= U_L^\textnormal{inst} + U_L^\textnormal{hered} \,. 
\end{align} The instantaneous contributions only depend on the state of the source at a given retarded time, while the hereditary parts depend on, and thus require knowledge of, the entire past history of the source. At leading order, the instantaneous parts of the radiative moments are directly related to the source moments as \begin{subequations} \begin{align} U_L^\textnormal{inst}(t_r) &= I_L^{(\ell)}(t_r) + \mathcal{O}(c^{-3}) \,,\\ V_L^\textnormal{inst}(t_r) &= J_L^{(\ell)}(t_r) + \mathcal{O}(c^{-3}) \,, \end{align} \end{subequations} with $t_r$ denoting here a ``dummy'' variable. Corrections from the gauge moments ($W_L$, $X_L$, $Y_L$, $Z_L$) enter at higher orders. In this work, we will focus on the hereditary tail contributions. For a complete treatment of the instantaneous contributions, we refer to Ref.~\cite{mishra-2015}. To the desired accuracy, the hereditary contributions to the radiative moments are given by \begin{widetext} \begin{subequations}\label{eq: U_L} \begin{align} \label{eq: U_ij} U_{ij}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{11}{12} \right] I_{ij}^{(4)}(t_r - \tau) - \frac{2G}{7c^5} \int_{-\infty}^{t_r} \mathrm{d}\tau\; I_{a\langle i}^{(3)}(\tau) I_{j\rangle a}^{(3)}(\tau) \nonumber\\ &+ 2 \left( \frac{GM}{c^3} \right)^2 \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln^2\left( \frac{\tau}{2\tau_0} \right) + \frac{57}{70} \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{124627}{44100} \right] I_{ij}^{(5)}(t_r - \tau) + \mathcal{O}(c^{-7}) \,,\\ % \label{eq: U_ijk} U_{ijk}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{97}{60} \right] I_{ijk}^{(5)}(t_r - \tau) \nonumber\\ &+ \frac{G}{c^5} \int_{-\infty}^{t_r} \mathrm{d}\tau\; \left[ -\frac{1}{3} I_{a\langle i}^{(3)}(\tau) I_{jk\rangle a}^{(4)}(\tau) - \frac{4}{5} \epsilon_{ab\langle i} I_{ja}^{(3)}(\tau) J_{k\rangle 
b}^{(3)}(\tau) \right] + \mathcal{O}(c^{-6}) \,,\\ % \label{eq: U_ijkl} U_{ijkl}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{59}{30} \right] I_{ijkl}^{(6)}(t_r - \tau) + \frac{2G}{5c^3} \int_{-\infty}^{t_r} \mathrm{d}\tau\; I_{\langle ij}^{(3)}(\tau) I_{kl\rangle}^{(3)}(\tau) + \mathcal{O}(c^{-5}) \,,\\ % \label{eq: U_ijklm} U_{ijklm}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{232}{105} \right] I_{ijklm}^{(7)}(t_r - \tau) + \frac{20G}{21c^3} \int_{-\infty}^{t_r} \mathrm{d}\tau\; I_{\langle ij}^{(3)}(\tau) I_{klm\rangle}^{(4)}(\tau) + \mathcal{O}(c^{-4}) \,, \end{align} \end{subequations} \begin{subequations}\label{eq: V_L} \begin{align} \label{eq: V_ij} V_{ij}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{7}{6} \right] J_{ij}^{(4)}(t_r - \tau) + \mathcal{O}(c^{-6}) \,,\\ % \label{eq: V_ijk} V_{ijk}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{5}{3} \right] J_{ijk}^{(5)}(t_r - \tau) + \mathcal{O}(c^{-5}) \,,\\ % \label{eq: V_ijkl} V_{ijkl}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{119}{60} \right] J_{ijkl}^{(6)}(t_r - \tau) + \mathcal{O}(c^{-4}) \,, \end{align} \end{subequations} \end{widetext} where $M = m (1 - \nu x / 2)+\mathcal{O}(c^{-4})$ is the Arnowitt-Deser-Misner (ADM) mass of the source, $m = m_1 + m_2$ the total mass, $\nu = m_1 m_2 / m^2$ the symmetric mass ratio, and $\tau_0$ an arbitrary length scale originally introduced in the MPM formalism. 
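The logarithmic tail kernels in Eqs.~(\ref{eq: U_L}--\ref{eq: V_L}) act on oscillatory time derivatives of the source moments. As an illustrative aside (not part of the actual waveform computation), for a damped monochromatic moment the regularized master integral $\int_0^\infty \mathrm{d}\tau\, e^{(\mathrm{i}\omega - \epsilon)\tau} \ln(\tau/2\tau_0)$ admits the closed form $-[\gamma_E + \ln(2\tau_0(\epsilon - \mathrm{i}\omega))]/(\epsilon - \mathrm{i}\omega)$, which the following sketch verifies numerically; the values of $\omega$, $\epsilon$, and $\tau_0$ are arbitrary illustrations, not tied to any binary model:

```python
import cmath
import numpy as np
from scipy.integrate import quad

def tail_master_integral(omega, eps, tau0):
    """Numerical I = int_0^inf exp((i*omega - eps)*tau) * ln(tau/(2*tau0)) dtau."""
    f = lambda tau: np.exp(-eps * tau) * np.log(tau / (2.0 * tau0))
    re, _ = quad(lambda t: f(t) * np.cos(omega * t), 0.0, 200.0, limit=400)
    im, _ = quad(lambda t: f(t) * np.sin(omega * t), 0.0, 200.0, limit=400)
    return re + 1j * im

def tail_master_exact(omega, eps, tau0):
    """Closed form: with s = eps - i*omega, I = -(gamma_E + ln(2*tau0*s)) / s."""
    gamma_E = 0.5772156649015329
    s = eps - 1j * omega
    return -(gamma_E + cmath.log(2.0 * tau0 * s)) / s

omega, eps, tau0 = 2.0, 0.5, 1.0  # illustrative values only
num = tail_master_integral(omega, eps, tau0)
exact = tail_master_exact(omega, eps, tau0)
assert abs(num - exact) < 1e-6
```

In the $\epsilon \to 0^+$ limit this reproduces the standard regularization used when evaluating tail integrals mode by mode.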
None of the other moments contributes to the hereditary part of the waveform~(\ref{eq: hTT}) at 3PN order, since \begin{subequations} \begin{align} U_{L>5}^\textnormal{hered} &= \mathcal{O}(c^{-3}) \,, \\ V_{L>4}^\textnormal{hered} &= \mathcal{O}(c^{-3}) \,. \end{align} \end{subequations} In the above hereditary contributions, there are two different types of integrals: those with logarithms and those without. The logarithmic integral in the first line of Eq.~(\ref{eq: U_ij}) is called the tail integral, while the one on the second line is the tails-of-tails integral. On the other hand, the integral without a logarithmic kernel is the memory integral. Note that there are no memory contributions to the radiative current moments $V_L$. Physically, wave tails come from the scattering of the linear waves, generated by the matter source, off the space-time curvature due to the total ADM mass of the isolated system. It is a (power of) monopole-wave interaction effect with a weak past dependence. By contrast, the memory pieces of the waves are produced by the effective stress-energy tensor of the source radiation itself. It is a wave-wave interaction effect with a strong past dependence~\cite{blanchet-1992}. The expressions for the source moments ($I_L$, $J_L$) in terms of the binary separation $r$, its time derivative $\dot{r}$, the polar angle $\phi$ of the relative position, and its derivative $\dot{\phi}$ are now required. Observing Eqs.~(\ref{eq: U_L}--\ref{eq: V_L}), we note that $I_{ij}$, $J_{ij}$, and $I_{ijk}$ are needed to an accuracy of 1PN, while all other multipole moments are only needed to leading Newtonian order. The relevant expressions are listed in Ref.~\cite{arun-2008-2} using standard harmonic (SH) coordinates. The logarithms appearing at 3PN order in the SH gauge can, however, be transformed away in appropriate modified harmonic coordinates, as demonstrated in Sec.~IV~B of Ref.~\cite{arun-2008-2}. 
For the hereditary parts, this will not make any difference, as we shall only need relative 1PN-accurate expressions for certain ($I_L$, $J_L$), but, when adding up instantaneous terms from Ref.~\cite{mishra-2015} to our hereditary parts, we shall always work within the MH gauge. The binary separation vector will be represented by $x^i\equiv r\, n^i$, whereas $v^i=\mathrm{d} x^i/\mathrm{d} t$ will stand for the relative velocity. The expressions relevant for the calculation of the hereditary parts are \begin{widetext} \begin{subequations}\label{eq: I_L} \begin{align} \label{eq: I_ij} I_{ij} =&\; \nu m \left( A_1\, x_{\langle ij\rangle} + A_2\, \frac{r \dot{r}}{c^2} x_{\langle i} v_{j\rangle} + A_3\, \frac{r^2}{c^2} v_{\langle ij\rangle}\right) + \mathcal{O}(c^{-7}) \,,\\ % \label{eq: I_ijk} I_{ijk} =&\; -\nu m \Delta \left( B_1\, x_{\langle ijk\rangle} + B_2\, \frac{r \dot{r}}{c^2} x_{\langle ij} v_{k\rangle} + B_3\, \frac{r^2}{c^2} x_{\langle i} v_{jk\rangle}\right) + \mathcal{O}(c^{-6}) \,,\\ % \label{eq: I_ijkl} I_{ijkl} =&\; \nu m (1-3\nu) x_{\langle ijkl\rangle} + \mathcal{O}(c^{-5}) \,,\\ % \label{eq: I_ijklm} I_{ijklm} =&\; -\nu m \Delta (1-2\nu) x_{\langle ijklm\rangle} + \mathcal{O}(c^{-4}) \,, \end{align} \end{subequations} \begin{subequations}\label{eq: J_L} \begin{align} \label{eq: J_ij} J_{ij} =&\; -\nu m \Delta \left( C_1\, \epsilon_{ab\langle i} x_{j\rangle a} v_b + C_2\, \frac{r \dot{r}}{c^2} \epsilon_{ab\langle i} v_{j\rangle b} x_{a} \right) + \mathcal{O}(c^{-6}) \,,\\ % \label{eq: J_ijk} J_{ijk} =&\; \nu m (1-3\nu) \epsilon_{ab\langle i} x_{jk\rangle a} v_b + \mathcal{O}(c^{-5}) \,,\\ % \label{eq: J_ijkl} J_{ijkl} =&\; -\nu m \Delta (1-2\nu) \epsilon_{ab\langle i} x_{jkl\rangle a} v_b + \mathcal{O}(c^{-4}) \,, \end{align} \end{subequations} where $\Delta = (m_1 - m_2) / m$ is the mass difference ratio and the constants $A_i$, $B_i$, and $C_i$ read \begin{subequations} \begin{align} A_1 =&\; 1 + \frac{1}{c^2} \left[ v^2 \left( \frac{29}{42} - 
\frac{29 \nu}{14} \right) + \frac{Gm}{r} \left( -\frac{5}{7} + \frac{8\nu}{7} \right) \right] \,,\\ A_2 =&\; -\frac{4}{7} + \frac{12\nu}{7} \,,\\ A_3 =&\; \frac{11}{21} - \frac{11\nu}{7} \,,\\ B_1 =&\; 1 + \frac{1}{c^2} \left[ v^2 \left( \frac{5}{6} - \frac{19\nu}{6} \right) + \frac{Gm}{r} \left( -\frac{5}{6} + \frac{13\nu}{6} \right) \right] \,,\\ B_2 =&\; -(1-2\nu) \,,\\ B_3 =&\; 1-2\nu \,,\\ C_1 =&\; 1 + \frac{1}{c^2} \left[ v^2 \left( \frac{13}{28} - \frac{17 \nu}{7} \right) + \frac{Gm}{r} \left( \frac{27}{14} + \frac{15\nu}{7} \right) \right] \,,\\ C_2 =&\; \frac{5}{28} (1-2\nu) \,. \end{align} \end{subequations} \end{widetext} \subsection{Quasi-Keplerian parametrization}\label{sec: keplerian parametrization} The expressions in Eqs.~(\ref{eq: I_L}--\ref{eq: J_L}) in terms of the variables ($r$, $\dot{r}$, $\phi$, $\dot{\phi}$) are the most general ones. Now, when calculating the tail integrals, we should replace the latter quantities by their actual analytic time evolution for eccentric orbits. At the third post-Newtonian order, the conservative orbital dynamics of compact binaries in eccentric orbits is specified by providing the following generalized quasi-Keplerian parametrization~\cite{memmesheimer-2004} for the dynamical variables $r$ and $\phi$: \begin{subequations}\label{eq: quasi-keplerian} \begin{align} r =\;& a_r \left(1 - e_r \cos u \right) \,,\\ \phi - \phi_{0} =\;& (1 + k ) v + \left(f_{4\phi} + f_{6\phi} \right) \sin (2v) \nonumber\\ &+ \left(g_{4\phi} + g_{6\phi} \right) \sin (3v) + i_{6\phi}\sin (4v) \nonumber\\ &+ h_{6\phi} \sin (5v) \label{eq: quasi-keplerian phi} \,,\\ \text{where} \quad v =&\; 2 \arctan \left[\left( \frac{ 1 + e_{\phi} }{1 - e_{\phi}} \right)^{1/2} \tan \frac{u}{2} \right] \,. \end{align} \end{subequations} An interesting feature in the above equations is the presence of different eccentricity parameters $e_r$ and $e_\phi$, introduced in such a way that the parametrization looks ``Keplerian''. 
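At Newtonian order, all three eccentricities coincide ($e_r = e_\phi = e_t$) and the parametrization reduces to the classical Keplerian one, so Eq.~(\ref{eq: quasi-keplerian}) must satisfy the conic identity $r\,(1 + e \cos v) = a_r (1 - e^2)$. A minimal numerical sketch of this limit (the values of $a_r$ and $e$ are illustrative):

```python
import numpy as np

def true_anomaly(u, e):
    """Newtonian true anomaly v from eccentric anomaly u, written with
    arctan2 so the branch of v = 2*arctan(sqrt((1+e)/(1-e)) tan(u/2)) is continuous."""
    return 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(u / 2.0),
                            np.sqrt(1.0 - e) * np.cos(u / 2.0))

a, e = 1.0, 0.3  # illustrative semimajor axis and eccentricity
for u in np.linspace(0.0, 2.0 * np.pi, 50):
    r = a * (1.0 - e * np.cos(u))  # radial equation of the parametrization
    v = true_anomaly(u, e)
    # Keplerian conic: r = a(1 - e^2) / (1 + e cos v)
    assert abs(r * (1.0 + e * np.cos(v)) - a * (1.0 - e * e)) < 1e-12
```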
The parameter $k$ is nothing but the periastron advance per orbital revolution. The parameters $a_r$, $e_r$, and $e_\phi$ are the PN-accurate semi-major axis and the radial and angular eccentricities, while $f_{4\phi}$, $f_{6\phi}$, $g_{4\phi}$, $g_{6\phi}$, $i_{6\phi}$, and $h_{6\phi}$ are some orbital functions of the energy and angular momentum that enter at the 2PN and 3PN orders. The explicit expressions are available in Ref.~\cite{memmesheimer-2004}. The eccentric anomaly $u$ is linked to the mean anomaly $l$ through the 3PN-accurate Kepler equation \begin{align}\label{eq: 3PN_KE} l =&\; u - e_t \sin u + \left(g_{4t} + g_{6t} \right)(v-u) \nonumber\\ &+ \left(f_{4t} + f_{6t} \right)\sin v + i_{6t} \sin (2v) + h_{6t} \sin (3v)\,. \end{align} Here, $e_t$ is another eccentricity parameter, usually called the time eccentricity, and the functions $g_{4t}$, $g_{6t}$, $f_{4t}$, $f_{6t}$, $i_{6t}$, and $h_{6t}$ are additional 2PN and 3PN orbital functions of the energy and angular momentum. Together, Eqs.~(\ref{eq: quasi-keplerian}) and (\ref{eq: 3PN_KE}) fully parametrize the conservative orbital dynamics of compact binaries on eccentric orbits. Note that we choose to express all our equations in terms of the post-Newtonian parameter $x = (Gm\omega / c^3)^{2/3}$ and the time eccentricity $e = e_t$, with $\omega = (1+k)n$ being the orbital frequency and $n = 2 \pi / P$ the mean motion associated with the period $P$. In the next section, we shall introduce post-adiabatic corrections to this quasi-Keplerian description. We will then have to replace the parameters ($x$, $e$) with their slowly evolving counterparts ($\bar{x}$, $\bar{e}$). The appearance of the periastron precession at first post-Newtonian order introduces a double periodic motion on two timescales: the orbital timescale and the precession timescale. 
It is thus customary to split the phase $\phi$ into an angle $\lambda$ that is linear in $l$ and an oscillatory part $W(l)$ that is $2\pi$-periodic in $l$~\cite{gopakumar-2002, damour-2004, tessmer-2007}. This leads us to write \begin{subequations} \begin{align} \phi =\;& \lambda + W(l) \,,\\ \lambda =\;& \phi_0 + (1+k)l \,,\\ W(l) =\;& (1+k)(v-l) + \left(f_{4\phi} + f_{6\phi} \right) \sin (2v) \nonumber\\ &+ \left(g_{4\phi} + g_{6\phi} \right) \sin (3v) + i_{6\phi}\sin (4v) \nonumber\\ &+ h_{6\phi} \sin (5v) \label{eq: quasi-keplerian W}\,, \end{align} \end{subequations} with $\phi_0$ denoting the initial polar angle at $u=0$. To evaluate the various time integrals appearing in the tail contributions to the waveform, we will need explicit expressions for $u$ and $\phi$ in terms of the angles $l$ and $\lambda$. This can be achieved by solving the Kepler equation (\ref{eq: 3PN_KE}). We employ the method described in Ref.~\cite{boetzel-2017}, which yields \begin{subequations}\label{eq: KE solution} \begin{align} u &= l + \sum_{s=1}^{\infty} A_s \sin(sl) \,,\\ A_s &= \frac{2}{s} J_s(s e_t) + \sum_{j=1}^{\infty} \alpha_j \left\{ J_{s+j}(s e_t) - J_{s-j}(s e_t) \right\} \,, \end{align} \end{subequations} where the constants $\alpha_j$ are some PN-accurate functions of the energy and angular momentum entering at the second post-Newtonian order. It remains to display an explicit expression for the $2\pi$-periodic function $W(l)$ in terms of $l$, \begin{subequations}\label{eq: W solution} \begin{align} W(l) =\;& \sum_{s=1}^{\infty} \mathcal{W}_s \sin(sl) \,,\\ \mathcal{W}_s =\;& (1+k) B_s + \left(f_{4\phi} + f_{6\phi} \right) \sigma_s^{2v} \nonumber\\ &+ \left(g_{4\phi} + g_{6\phi} \right) \sigma_s^{3v} + i_{6\phi} \sigma_s^{4v} + h_{6\phi} \sigma_s^{5v} \,, \end{align} \end{subequations} with the constants $B_s$ and $\sigma_s^{jv}$ given in Eqs.~(C8) and (32b) of Ref.~\cite{boetzel-2017}. 
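Since the $\alpha_j$ enter only at the second post-Newtonian order, at Newtonian order the coefficients reduce to $A_s = (2/s) J_s(s e_t)$ and Eq.~(\ref{eq: KE solution}) becomes the classical Fourier--Bessel inversion of $l = u - e_t \sin u$. The following sketch compares the truncated series against a direct root-finding solution; the eccentricity and truncation order are illustrative:

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def u_series(l, e, smax=60):
    """Newtonian-order Fourier-Bessel solution u(l) = l + sum_s (2/s) J_s(s e) sin(s l)."""
    s = np.arange(1, smax + 1)
    return l + np.sum(2.0 / s * jv(s, s * e) * np.sin(s * l))

def u_root(l, e):
    """Direct solution of the Newtonian Kepler equation l = u - e sin(u).
    Since |u - l| = e|sin u| < 1 for e < 1, the root is bracketed by [l-1, l+1]."""
    return brentq(lambda u: u - e * np.sin(u) - l, l - 1.0, l + 1.0)

e = 0.2  # illustrative eccentricity
for l in np.linspace(0.1, 2.0 * np.pi - 0.1, 25):
    assert abs(u_series(l, e) - u_root(l, e)) < 1e-10
```

The Kapteyn series converges geometrically for the small-to-moderate eccentricities targeted by the low-eccentricity expansion of this paper.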
We finally find, expanding to $\mathcal{O}(x^3)$ and $\mathcal{O}(e)$, \begin{subequations}\label{eq: KE solution expanded} \begin{align} \label{eq: u solution} u =&\; l + e \sin(l) + x^2 \left(-\frac{15}{2} + \frac{9\nu}{8} + \frac{\nu^2}{8} \right) e\sin(l) \nonumber\\ &+ x^3 \left( -55 + \frac{104593\nu}{1680} + \frac{3\nu^2}{4} + \frac{\nu^3}{24} \right) e \sin(l) \,,\\ \label{eq: phi solution} \phi =&\; \lambda + 2 e \sin(l) + x (10-\nu) e\sin(l) \nonumber\\ &+ x^2 \left( 52 - \frac{235\nu}{12} + \frac{\nu^2}{12} \right) e \sin(l) \nonumber\\ &+ x^3 \bigg( 292 + \left(-\frac{420131}{840} + \frac{287\pi^2}{32} \right) \nu \nonumber\\ &+ \frac{521\nu^2}{24} + \frac{\nu^3}{24} \bigg) e\sin(l) \,. \end{align} \end{subequations} We shall use these expressions to write the source multipole moments ($I_L$, $J_L$) in terms of $l$ and $\lambda$. \section{Phasing of the orbital elements}\label{sec: phasing} So far, we used the conservative quasi-Keplerian description of the dynamics of nonspinning compact binaries. This analytic parametrization is possible due to the fact that the conservative problem admits four integrals of motion, or even two, when the problem is restricted to the orbital plane. In our case, those two integrals are encoded in the two intrinsic constants $x$ and $e=e_t$. There also exist two extrinsic constants $c_l$ and $c_\lambda$, \begin{subequations} \begin{align} l(t) &= n(t-t_0) + c_l \,, \\ \lambda(t) &= (1+k) n (t-t_0) + c_\lambda \,, \end{align} \end{subequations} corresponding to the initial values of the two phase angles $l$ and $\lambda$, respectively. We now move to include phasing effects due to energy and angular momentum loss into this quasi-Keplerian parametrization. An efficient description of the dynamics of nonspinning compact binaries with phasing is presented in Refs.~\cite{damour-2004,koenigsdoerffer-2006}. 
Following Ref.~\cite{damour-1983}, they employ a method of \emph{variation of constants} where the constants of motion of the conservative problem ($x$, $e$, $c_l$, $c_\lambda$) are treated as time-varying quantities. Specifically, the post-Newtonian parameter $x = x(t)$ and the time eccentricity $e = e(t)$ are now genuine functions of time, while the angles $l$ and $\lambda$ are given by \begin{subequations} \begin{align} l(t) &= \int_{t_0}^{t} n(t') \mathrm{d} t' + c_l(t) \,, \\ \lambda(t) &= \int_{t_0}^{t} [1+k(t')] n(t') \mathrm{d} t' + c_\lambda(t) \,. \end{align} \end{subequations} To obtain the evolution of the functions $c_\alpha(t) = (x(t), e(t), c_l(t), c_\lambda(t))$, one starts from the PN-accurate equations of motion \begin{subequations} \begin{align} \dot{\bm{x}} &= \bm{v} \,, \\ \dot{\bm{v}} &= \mathcal{A}_0(\bm{x}, \bm{v}) + \mathcal{A}'(\bm{x}, \bm{v}) \,, \end{align} \end{subequations} with $\mathcal{A}_0$ being the conservative and $\mathcal{A}'$ the dissipative piece of the equations of motion. These equations are first solved neglecting the dissipative term $\mathcal{A}'$, leading to the conservative quasi-Keplerian description of Sec.~\ref{sec: keplerian parametrization}. The full solution including radiation reaction is then found by varying the ``constants'' $c_\alpha(t)$, leading to differential equations of the form \begin{align} \frac{\mathrm{d} c_\alpha}{\mathrm{d} l} &= G_\alpha(l, c_\alpha) \,. 
\end{align} One can then introduce a two-scale decomposition of all phase variables $c_\alpha(l)$ into a slow (radiation-reaction timescale) secular drift and a fast (orbital timescale) periodic oscillation as \begin{align} c_\alpha(t) = \bar{c}_\alpha(t) + \tilde{c}_\alpha(t) \,, \end{align} with \begin{subequations} \begin{align} \frac{\mathrm{d}\bar{c}_\alpha}{\mathrm{d} l} &= \bar{G}_\alpha(l, c_\alpha) \label{eq: secular evolution} \,,\\ \frac{\mathrm{d}\tilde{c}_\alpha}{\mathrm{d} l} &= \tilde{G}_\alpha(l, c_\alpha) = G_\alpha(l, c_\alpha) - \bar{G}_\alpha(l, c_\alpha) \label{eq: osc evolution} \,, \end{align} \end{subequations} $\bar{G}_\alpha$ and $\tilde{G}_\alpha$ here being the orbital averaged and oscillatory pieces of $G_\alpha$. The secular evolution of the orbital elements (\ref{eq: secular evolution}) can also be derived from the heuristic balance equations $\langle \mathrm{d} E/\mathrm{d} t \rangle = - \langle \mathcal{F} \rangle$ and $\langle \mathrm{d} J/\mathrm{d} t \rangle = - \langle \mathcal{G} \rangle$, where $\mathcal{F}$ is the energy flux and $\mathcal{G}$ the angular momentum flux. This approach is discussed at the 3PN order in a series of papers~\cite{arun-2008-1, arun-2008-2, arun-2009}, which notably take care of the hereditary contributions to the energy and angular momentum fluxes. After the above procedure is applied, we have \begin{subequations}\label{eq: two-scale decomp} \begin{align} x(t) &= \bar{x}(t) + \tilde{x}(t) \,, \\ e(t) &= \bar{e}(t) + \tilde{e}(t) \,, \\ c_l(t) &= \bar{c}_l + \tilde{c}_l(t) \,, \\ c_\lambda(t) &= \bar{c}_\lambda + \tilde{c}_\lambda(t) \,, \end{align} \end{subequations} where $\bar{c}_l$ and $\bar{c}_\lambda$ are found to be true integration constants. The secular evolution of the orbital elements $\bar{n}(t)$, $\bar{k}(t)$, $\bar{x}(t)$, and $\bar{e}(t)$ is given in Sec.~VI of Ref.~\cite{arun-2009}. 
At leading order, these equations reduce to the famous formulas by Peters and Mathews~\cite{peters-1963,peters-1964}: \begin{subequations}\label{eq: peters-mathews} \begin{align} \frac{\mathrm{d}\bar{x}}{\mathrm{d} t} &= \frac{c^3 \nu}{Gm} \frac{\bar{x}^5}{(1-\bar{e}^2)^{7/2}} \left( \frac{64}{5} + \frac{584}{15} \bar{e}^2 + \frac{74}{15} \bar{e}^4 \right) \,,\\ \frac{\mathrm{d}\bar{e}}{\mathrm{d} t} &= -\frac{c^3 \nu}{Gm} \frac{\bar{e} \, \bar{x}^4}{(1-\bar{e}^2)^{5/2}} \left( \frac{304}{15} + \frac{121}{15} \bar{e}^2 \right) \,. \end{align} \end{subequations} The periodic variations in Eqs.~(\ref{eq: two-scale decomp}) can be computed from Eqs.~(34) and (35) of Ref.~\cite{koenigsdoerffer-2006} and are explicitly given in Eqs.~(36). Note, though, that there is an error in the expressions for $\tilde{c}_l$ and $\tilde{c}_\lambda$ provided by Eqs.~(36c) and (36d) of that paper. Indeed, the periodic variations $\tilde{c}_l$ and $\tilde{c}_\lambda$ refer to the zero-average oscillatory contributions to $c_l$ and $c_\lambda$. They are found by integrating Eqs.~(35) and then subtracting the orbital average, i.e., finding the unique zero-average primitive, so that we are left with a purely oscillatory solution. Now, we find that, unfortunately, the explicit orbital averages of Eqs.~(36c) and (36d) in Ref.~\cite{koenigsdoerffer-2006} do not give zero. This is because the averaging of these terms is performed over the eccentric anomaly $\mathrm{d} u$, whereas the orbital averaging requires integrating temporal variations over an orbital period and, therefore, should be done using $\mathrm{d} l = (1 - e \cos u) \mathrm{d} u$. 
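Equations~(\ref{eq: peters-mathews}) form a closed autonomous system that is straightforward to integrate numerically. Below is a minimal sketch in units with $G = c = m = 1$ and illustrative initial conditions, checking that radiation reaction simultaneously tightens and circularizes the orbit:

```python
import numpy as np
from scipy.integrate import solve_ivp

def peters_mathews_rhs(t, y, nu):
    """Leading-order secular evolution dx/dt, de/dt in units with G = c = m = 1."""
    x, e = y
    one_me2 = 1.0 - e * e
    dxdt = nu * x**5 / one_me2**3.5 * (64.0 / 5.0 + 584.0 / 15.0 * e**2
                                       + 74.0 / 15.0 * e**4)
    dedt = -nu * e * x**4 / one_me2**2.5 * (304.0 / 15.0 + 121.0 / 15.0 * e**2)
    return [dxdt, dedt]

nu = 0.25          # equal-mass binary
y0 = [0.02, 0.4]   # illustrative initial (x, e), well before merger
sol = solve_ivp(peters_mathews_rhs, (0.0, 5.0e4), y0, args=(nu,),
                rtol=1e-10, atol=1e-12)
x_f, e_f = sol.y[:, -1]
# Radiation reaction increases x (shrinking orbit) and decreases e (circularization):
assert x_f > y0[0] and 0.0 < e_f < y0[1]
```

The integration window is chosen to stop well before the formal divergence of $\bar{x}$ at merger, where the adiabatic approximation breaks down anyway.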
We show below the corrected expressions for $\tilde{c}_l$ and $\tilde{c}_\lambda$ in terms of $e_t = \bar{e}$, $\xi = \bar{x}^{3/2}$ and $u = \bar{u}$, as they appear in Ref.~\cite{koenigsdoerffer-2006}: \begin{widetext} \begin{subequations}\label{eq: corrected cl} \begin{align} \tilde{c}_l =\;& -\frac{2 \xi^{5/3} \nu}{45 e_t^2} \bigg\{ \frac{144 e_t^2}{\chi} + \frac{18 - 258 e_t^2}{\chi^2} + \frac{-56 + 92 e_t^2 - 36 e_t^4}{\chi^3} + \frac{105 (1 - e_t^2)^2}{\chi^4} \nonumber\\ &- \frac{1}{2 (1 - e_t^2)^{1/2}} \left[ 134 - 339 e_t^2 + 288 e_t^2 \sqrt{1 - e_t^2} \right] \bigg\} + \mathcal{O}(\xi^{7/3}) \,,\\ \tilde{c}_\lambda =\;& \frac{2 \xi^{5/3} \nu}{45 e_t^2} \bigg\{ \left[ \frac{18}{\chi^2} - \frac{56 - 36 e_t^2}{\chi^3} + \frac{105 (1 - e_t^2)}{\chi^4} \right] \sqrt{1 - e_t^2} - \frac{144 e_t^2}{\chi} - \frac{18 - 258 e_t^2}{\chi^2} + \frac{56 - 92 e_t^2 + 36 e_t^4}{\chi^3} - \frac{105 (1 - e_t^2)^2}{\chi^4} \nonumber\\ &- \frac{1}{2 (1 - e_t^2)} \left[ 134 - 147 e_t^2 + 288 e_t^4 - \left( 134 - 339 e_t^2 \right) \sqrt{1 - e_t^2} \right] \bigg\} + \mathcal{O}(\xi^{7/3}) \,. \end{align} \end{subequations} \end{widetext} Similarly, we split the angles $l$ and $\lambda$ into orbital averaged and oscillatory contributions \begin{subequations}\label{eq: two-scale decomp 2} \begin{align} l(t) &= \bar{l}(t) + \tilde{l}(t) \,, \\ \lambda(t) &= \bar{\lambda}(t) + \tilde{\lambda}(t) \,, \end{align} \end{subequations} with $\bar{l}(t)$ and $\bar{\lambda}(t)$ defined by \begin{subequations}\label{eq: secular l and la} \begin{align} \bar{l}(t) &= \int_{t_0}^{t} \bar{n}(t')\mathrm{d} t' + \bar{c}_l \,,\\ \bar{\lambda}(t) &= \int_{t_0}^{t} [1+\bar{k}(t')] \bar{n}(t') \mathrm{d} t' + \bar{c}_\lambda \,. 
\end{align} \end{subequations} The oscillatory contributions $\tilde{l}$ and $\tilde{\lambda}$ are calculated as in Eqs.~(39) of Ref.~\cite{koenigsdoerffer-2006}, \begin{subequations} \begin{align} \tilde{l}(\bar{l}) &= \int \frac{\tilde{n}}{\bar{n}} \mathrm{d} l + \tilde{c}_l(\bar{l}) \,, \\ \tilde{\lambda}(\bar{l}) &= \int \left[ (1 + \bar{k}) \frac{\tilde{n}}{\bar{n}} + \tilde{k} \right] \mathrm{d} l + \tilde{c}_\lambda(\bar{l}) \,, \end{align} \end{subequations} where $\tilde{k} = (\partial k / \partial n) \tilde{n} + (\partial k / \partial e_t) \tilde{e}_t$ denotes the periodic part of $k$ and the integrals again mean the unique zero-average primitives. Equations~(40) for $\tilde{l}$ and $\tilde{\lambda}$ in Ref.~\cite{koenigsdoerffer-2006} are erroneous, since they do not average to zero either. We list below the corrected expressions: \begin{widetext} \begin{subequations}\label{eq: corrected lp} \begin{align} \tilde{l}(l) =\;& \frac{\xi^{5/3} \nu}{15 (1 - e_t^2)^3} \bigg\{ (602 + 673 e_t^2) \chi + (314 - 203 e_t^2 - 111 e_t^4) \ln \chi - (602 + 673 e_t^2) + \frac{-98 + 124 e_t^2 + 46 e_t^4 - 72 e_t^6}{\chi} \nonumber\\ &- \frac{105 (1 - e_t^2)^3}{\chi^2} - \frac{1}{2} \bigg[ 432 + 444 e_t^2 + 543 e_t^4 - 144 e_t^6 - (838 - 826 e_t^2 - 12 e_t^4) \sqrt{1 - e_t^2} + (628 - 406 e_t^2 - 222 e_t^4) \nonumber\\ &\times \ln \bigg( \frac{1 + \sqrt{1 - e_t^2}}{2} \bigg) \bigg] \bigg\} + \frac{\xi^{5/3} \nu}{5 (1 - e_t^2)^{7/2}} \left( 96 + 292 e_t^2 + 37 e_t^4 \right) \int \left[ 2 \tan^{-1} \left( \frac{\beta_t \sin u}{1 - \beta_t \cos u} \right) + e_t \sin u \right] \chi \mathrm{d} u \nonumber\\ &+ \tilde{c}_l(l) + \mathcal{O}(\xi^{7/3}) \,,\\ \tilde{\lambda}(l) =\;& \tilde{l}(l) - \tilde{c}_l(l) + \tilde{c}_\lambda(l) + \mathcal{O}(\xi^{7/3}) \,. \end{align} \end{subequations} \end{widetext} The errors in Eqs.~(36c), (36d), and (40) of Ref.~\cite{koenigsdoerffer-2006}, though, do not affect the other equations of that work. 
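The defining property of the corrected expressions~(\ref{eq: corrected cl}) is that their orbital averages vanish when computed with $\mathrm{d} l = (1 - e_t \cos u)\, \mathrm{d} u$. The sketch below checks this numerically for $\tilde{c}_l$; it assumes $\chi = 1 - e_t \cos u$, following the notation of Ref.~\cite{koenigsdoerffer-2006}, and drops the overall prefactor, which cannot affect a vanishing average:

```python
import numpy as np
from scipy.integrate import quad

def ctilde_l_shape(u, e):
    """Bracketed part of the corrected c_l-tilde, Eq. (corrected cl), with
    chi = 1 - e*cos(u). The prefactor -2*xi^(5/3)*nu/(45*e^2) is omitted:
    a zero average is unaffected by an overall constant."""
    chi = 1.0 - e * np.cos(u)
    const = (134.0 - 339.0 * e**2
             + 288.0 * e**2 * np.sqrt(1.0 - e**2)) / (2.0 * np.sqrt(1.0 - e**2))
    return (144.0 * e**2 / chi
            + (18.0 - 258.0 * e**2) / chi**2
            + (-56.0 + 92.0 * e**2 - 36.0 * e**4) / chi**3
            + 105.0 * (1.0 - e**2)**2 / chi**4
            - const)

for e in (0.1, 0.3, 0.6):
    # orbital average over one radial period: (1/2pi) int f(u) (1 - e cos u) du
    avg, _ = quad(lambda u: ctilde_l_shape(u, e) * (1.0 - e * np.cos(u)),
                  0.0, 2.0 * np.pi)
    assert abs(avg / (2.0 * np.pi)) < 1e-8
```

Averaging the same integrand with $\mathrm{d} u$ instead of $\mathrm{d} l$ does not give zero, which is precisely the inconsistency identified above.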
We refer to Appendix~\ref{sec: integrals} for some integral relations necessary to compute the zero-average primitives. We finally give expressions for the oscillatory contributions $\tilde{x}$, $\tilde{e}$, $\tilde{l}$, and $\tilde{\lambda}$ in terms of the slowly evolving variables $\bar{x}$, $\bar{e}$, and $\bar{l}$. We list here the expressions to $\mathcal{O}(\bar{e}^2)$: \begin{subequations}\label{eq: periodic variations} \begin{align} \tilde{x}(t) =\;& \nu \bar{x}^{7/2} \bar{e} \bigg[ 80 \sin(\bar{l}) + \frac{1436}{15} \bar{e} \sin(2\bar{l}) \nonumber\\ &+ \bar{e}^2 \left( \frac{4538}{15} \sin(\bar{l}) + \frac{6022}{45} \sin(3\bar{l}) \right) \bigg] \nonumber\\ &+ \mathcal{O}(\bar{x}^{9/2}) \,, \\ \tilde{e}(t) =\;& -\nu \bar{x}^{5/2} \bigg[ \frac{64}{5} \sin(\bar{l}) + \frac{352}{15} \bar{e} \sin(2\bar{l}) \nonumber\\ &+ \bar{e}^2 \left( \frac{1138}{15} \sin(\bar{l}) + \frac{358}{9} \sin(3\bar{l}) \right) \bigg] \nonumber\\ &+ \mathcal{O}(\bar{x}^{7/2}) \,,\\ \tilde{l}(t) =\;& -\nu \bar{x}^{5/2} \bigg[ \frac{64}{5\bar{e}} \cos(\bar{l}) + \frac{352}{15} \cos(2\bar{l}) \nonumber\\ &+ \bar{e} \left( \frac{1654}{15} \cos(\bar{l}) + \frac{358}{9} \cos(3\bar{l}) \right) \nonumber\\ &+ \bar{e}^2 \left( \frac{694}{15} \cos(2\bar{l}) + \frac{1289}{20} \cos(4\bar{l}) \right) \bigg] \nonumber\\ &+ \mathcal{O}(\bar{x}^{7/2}) \,,\\ \tilde{\lambda}(t) =\;& -\nu \bar{x}^{5/2} \bigg[ \frac{296}{3} \bar{e} \cos(\bar{l}) + \frac{199}{5} \bar{e}^2 \cos(2\bar{l}) \bigg] \nonumber\\ &+ \mathcal{O}(\bar{x}^{7/2}) \,. \end{align} \end{subequations} These results agree with Eqs.~(4.9) of Ref.~\cite{moore-2016}, except two constant terms in $\tilde{l}(t)$ and $\tilde{\lambda}(t)$, due to the already mentioned incorrect average. Indeed, all our results are purely oscillatory, zero-average functions and thus correctly describe the periodic post-adiabatic corrections. 
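Since the periodic variations above are built entirely from sine (or cosine) harmonics of $\bar{l}$, they average to zero over one period of the mean anomaly, as claimed. A small Python check for $\tilde{x}$, with arbitrary test values for $\nu$, $\bar{x}$, and $\bar{e}$:

```python
import math

nu, xbar, ebar = 0.25, 0.1, 0.2   # arbitrary test values

def x_tilde(lbar):
    # Periodic 2.5PN variation of x, truncated at the order quoted in the text.
    return nu * xbar**3.5 * ebar * (
        80.0 * math.sin(lbar)
        + (1436.0 / 15.0) * ebar * math.sin(2 * lbar)
        + ebar**2 * ((4538.0 / 15.0) * math.sin(lbar)
                     + (6022.0 / 45.0) * math.sin(3 * lbar)))

# Average over one period of the mean anomaly vanishes (up to roundoff).
N = 10000
avg = sum(x_tilde(2 * math.pi * j / N) for j in range(N)) / N
assert abs(avg) < 1e-12
```

The analogous checks for $\tilde{e}$, $\tilde{l}$, and $\tilde{\lambda}$ work the same way.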
Given the waveform in terms of the conservative quasi-Keplerian parametrization, one can then include post-adiabatic effects by making the simple substitutions \begin{subequations}\label{eq: phasing subst} \begin{align} x &\rightarrow \bar{x} + \tilde{x} \,, \\ e &\rightarrow \bar{e} + \tilde{e} \,, \\ l &\rightarrow \bar{l} + \tilde{l} \,, \\ \lambda &\rightarrow \bar{\lambda} + \tilde{\lambda} \,. \end{align} \end{subequations} As all of the periodic (tilde) contributions are of relative 2.5PN order compared to the slowly evolving (bar) parts, we only have to make these substitutions at leading Newtonian and 0.5PN order in the $h^{\ell m}$ to be accurate to 3PN order. In all higher-order terms, we can simply replace the variables ($x$, $e$, $l$, $\lambda$) by their secular evolving parts ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$). Note that Eq.~(\ref{eq: quasi-keplerian phi}) gives the relation between the geometrical phase $\phi$ and the angles $l$ and $\lambda$. We can rewrite this relation in terms of the slowly evolving angles $\bar{l}$ and $\bar{\lambda}$ and find \begin{align}\label{eq: quasi-keplerian phi bar} \phi =\;& \lambda + W(l) = \bar{\lambda} + \bar{W}(\bar{l}) + \tilde{\lambda} + (\tilde{v} - \tilde{l}) \,, \end{align} where $\bar{W}(\bar{l})$ is given by Eq.~(\ref{eq: quasi-keplerian W}), but with all quantities on the RHS replaced with their secular evolving parts, and the periodic variation $\tilde{v}$ of the true anomaly is given by \begin{align} \tilde{v} &= \frac{\partial \bar{v}}{\partial \bar{u}} \, \tilde{u} + \frac{\partial \bar{v}}{\partial \bar{e}} \, \tilde{e} \nonumber\\ &= \frac{\sqrt{1 - \bar{e}^2}}{1 - \bar{e} \cos \bar{u}} \tilde{u} + \frac{\sin \bar{u}}{\sqrt{1 - \bar{e}^2} (1 - \bar{e} \cos \bar{u})} \tilde{e} \,. 
\end{align} Expanded to $\mathcal{O}(\bar{x}^3)$ and $\mathcal{O}(\bar{e})$ this finally gives us \begin{align} \phi =&\; \bar{\lambda} + 2 \bar{e} \sin(\bar{l}) + \bar{x} (10-\nu) \bar{e} \sin(\bar{l}) \nonumber\\ &+ \bar{x}^2 \left( 52 - \frac{235\nu}{12} + \frac{\nu^2}{12} \right) \bar{e} \sin(\bar{l}) \nonumber\\ &- \bar{x}^{5/2} \nu \left( \frac{128}{5} + \frac{888}{5} \bar{e} \cos(\bar{l}) \right) \nonumber\\ &+ \bar{x}^3 \bigg( 292 + \left(-\frac{420131}{840} + \frac{287\pi^2}{32} \right) \nu \nonumber\\ &+ \frac{521\nu^2}{24} + \frac{\nu^3}{24} \bigg) \bar{e} \sin(\bar{l}) \,. \end{align} This is very similar to Eq.~(\ref{eq: phi solution}), but with the quantities on the RHS replaced by their slowly evolving parts and with additional terms at 2.5PN order. \section{Hereditary Contributions}\label{sec: hereditary} \subsection{Tail integrals} Note that tail effects start appearing at 1.5PN order, and thus post-adiabatic corrections to those will only enter the waveform at 4PN order and beyond. We can thus neglect any radiation-reaction effects in this section and only consider the conservative problem. At the end, we can then replace all variables ($x$, $e$, $l$, $\lambda$) with their slowly evolving counterparts ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$) to get the secular evolving amplitudes. We now employ the quasi-Keplerian parametrization introduced in Sec.~\ref{sec: keplerian parametrization}. As we use the two angles $l$ and $\lambda$ to parameterize the orbital motion, time derivatives of the source multipole moments ($I_L$, $J_L$) can be calculated as \begin{align} \frac{\mathrm{d}}{\mathrm{d} t} =&\; n \left( \frac{\mathrm{d}}{\mathrm{d} l} + (1+k) \frac{\mathrm{d}}{\mathrm{d}\lambda} \right) \,. \end{align} We use a low-eccentricity expansion to simplify expressions, so we expand everything in powers of both $x$ and $e$. 
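The operator identity $\mathrm{d}/\mathrm{d} t = n\,(\mathrm{d}/\mathrm{d} l + (1+k)\,\mathrm{d}/\mathrm{d}\lambda)$ is just the chain rule for a function of the two uniformly advancing angles $l = nt$ and $\lambda = (1+k)nt$. A Python sketch with a toy function of $(l, \lambda)$ and finite-difference derivatives (all numerical values arbitrary):

```python
import math

n, k = 0.7, 0.05          # arbitrary mean motion and periastron-advance constant
F = lambda l, lam: math.sin(2 * l) * math.cos(3 * lam)   # toy function of the angles

def F_of_t(t):
    return F(n * t, (1 + k) * n * t)

t0, h = 1.234, 1e-5
numeric = (F_of_t(t0 + h) - F_of_t(t0 - h)) / (2 * h)

# Chain rule: d/dt = n (d/dl + (1+k) d/dlam), partials by central differences.
l0, lam0 = n * t0, (1 + k) * n * t0
dFdl = (F(l0 + h, lam0) - F(l0 - h, lam0)) / (2 * h)
dFdlam = (F(l0, lam0 + h) - F(l0, lam0 - h)) / (2 * h)
chain = n * (dFdl + (1 + k) * dFdlam)
assert abs(numeric - chain) < 1e-6
```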
Inserting Eqs.~(\ref{eq: KE solution expanded}) into the source multipole moments (\ref{eq: I_L}--\ref{eq: J_L}), and substituting those into the radiative moments (\ref{eq: U_L}--\ref{eq: V_L}) we can then easily calculate the spherical harmonic modes in terms of $l$ and $\lambda$. We find, e.g., for the dominant $h^{22}_\textnormal{tail}$ mode \begin{align} h^{22}_\textnormal{tail} =&\; \frac{8 G m \nu}{c^2 R} x^{5/2} \sqrt{\frac{\pi}{5}} \frac{x^{3/2} c^3}{Gm} \nonumber\\ &\times \int_{0}^{\infty} \mathrm{d}\tau\, \mathrm{e}^{-2\mathrm{i} (\lambda - \lambda(\tau))} \left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{11}{12} \right] \nonumber\\ &\times \bigg[ -8 + e \left( \frac{3}{2} \mathrm{e}^{\mathrm{i} (l - l(\tau))} - \frac{81}{2} \mathrm{e}^{-\mathrm{i} (l - l(\tau))} \right) \bigg] \,, \end{align} where $l(\tau) = n\tau$ and $\lambda(\tau) = (1+k)n\tau$ and where we restrict ourselves to the leading post-Newtonian order and $\mathcal{O}(e)$. All other modes can be calculated similarly and be given as integrals over past history. These integrals can then be solved using the standard formulas \begin{subequations} \begin{align} \int_{0}^{\infty} \mathrm{d}\tau\,\mathrm{e}^{-\mathrm{i}\omega \tau} =&\; -\frac{\mathrm{i}}{\omega} \,,\\ \int_{0}^{\infty} \mathrm{d}\tau\,\mathrm{e}^{-\mathrm{i}\omega \tau} \ln\left( \frac{\tau}{2\tau_0} \right)=&\; \nonumber\\ -\frac{1}{\omega} \bigg( \frac{\pi}{2} \sign{\omega} \;-&\; \mathrm{i} \left[\ln(2|\omega| \tau_0) + \gamma_\textnormal{E} \right] \bigg) \,,\\ \int_{0}^{\infty} \mathrm{d}\tau\,\mathrm{e}^{-\mathrm{i}\omega \tau} \ln^2\left( \frac{\tau}{2\tau_0} \right)=&\; \nonumber\\ -\frac{\mathrm{i}}{\omega} \bigg( \frac{\pi^2}{6} +\bigg( \frac{\pi}{2} \sign{\omega} \;-&\; \mathrm{i} \left[ \ln(2|\omega| \tau_0) + \gamma_\textnormal{E} \right] \bigg)^2 \bigg) \,. 
\end{align} \end{subequations} Note that for terms of the form $\int \mathrm{d}\tau\, \mathrm{e}^{-\mathrm{i} (\alpha \, l(\tau) + \beta \, \lambda(\tau))} [\dots]$ we have $\omega = n(\alpha + (1+k) \beta)$. We are now able to give the tail contributions to the spherical harmonic modes in terms of the parameters $x$, $e = e_t$ and the angles $\phi$ and $l$. The modes have the following structure: \begin{align} h^{\ell m}_\textnormal{tail} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{tail} \,. \end{align} The various contributions to, e.g., the $H^{22}_\textnormal{tail}$ mode are given to $\mathcal{O}(e)$ by \begin{widetext} \begin{subequations}\label{eq: h22-tail} \begin{align} (H^{22}_\textnormal{tail})_\textnormal{1.5PN} =&\; x^{3/2} \Bigg( 2 \pi + 6 \mathrm{i} \ln \left( \frac{x}{x_0'} \right) + e \bigg\{ \mathrm{e}^{-\mathrm{i} l} \left[ \frac{11 \pi}{4} + \frac{27 \mathrm{i}}{2} \ln \left( \frac{3}{2} \right) + \frac{33}{4} \mathrm{i} \ln \left( \frac{x}{x_0'} \right) \right] \nonumber\\ &+\mathrm{e}^{\mathrm{i} l} \left[\frac{13 \pi }{4} + \frac{3 \mathrm{i}}{2} \ln (2) + \frac{39}{4} \mathrm{i} \ln \left( \frac{x}{x_0'} \right) \right] \bigg\} \Bigg) \,,\\ % (H^{22}_\textnormal{tail})_\textnormal{2.5PN} =&\; x^{5/2} \Bigg( \pi \left( -\frac{107}{21} + \frac{34 \nu}{21} \right) + \left( -\frac{107 \mathrm{i}}{7} + \frac{34 \mathrm{i} \nu}{7} \right) \ln \left( \frac{x}{x_0'} \right) \nonumber\\ &+ e \bigg\{ \mathrm{e}^{\mathrm{i} l} \bigg[ -\frac{9 \mathrm{i}}{2} + \pi \left( \frac{229}{168} + \frac{61 \nu}{42} \right) + \left( \frac{473 \mathrm{i}}{28} - \frac{3 \mathrm{i} \nu}{7} \right) \ln (2) + \left( \frac{229 \mathrm{i}}{56} + \frac{61 \mathrm{i} \nu }{14} \right) \ln \left( \frac{x}{x_0'} \right) \bigg] \nonumber\\ &+ \mathrm{e}^{-\mathrm{i} l} \bigg[ -\frac{27 \mathrm{i}}{2} + \pi \left( -\frac{1081}{168} + \frac{137 \nu}{42} \right) + \left( \frac{27 \mathrm{i}}{4} + 9 \mathrm{i} \nu 
\right) \ln \left(\frac{3}{2} \right) \nonumber\\ &+ \left( -\frac{1081 \mathrm{i}}{56} + \frac{137 \mathrm{i} \nu }{14} \right) \ln \left( \frac{x}{x_0'} \right) \bigg] \bigg\} \Bigg) \,,\\ % (H^{22}_\textnormal{tail})_\textnormal{3PN} =&\; x^3 \Bigg( -\frac{515063}{22050} + \frac{428 \mathrm{i} \pi }{105} + \frac{2 \pi^2}{3} + \left( -\frac{428}{35} + 12 \mathrm{i} \pi \right) \ln \left( \frac{x}{x_0'} \right) - 18 \ln^2 \left( \frac{x}{x_0'} \right) \nonumber\\ &+ e \bigg\{ \mathrm{e}^{-\mathrm{i} l} \bigg[ -\frac{515063}{7200} + \frac{749 \mathrm{i} \pi}{60} + \frac{49 \pi^2}{24} + \left( -\frac{2889}{70} + \frac{81 \mathrm{i} \pi}{2} \right) \ln \left( \frac{3}{2} \right) - \frac{81}{2} \ln^2 \left( \frac{3}{2} \right) \nonumber\\ &+ \left( -\frac{749}{20} + \frac{147 \mathrm{i} \pi }{4} - \frac{243}{2} \ln \left( \frac{3}{2} \right) \right) \ln \left( \frac{x}{x_0'} \right) - \frac{441}{8}\ln^2\left( \frac{x}{x_0'} \right) \bigg] \nonumber\\ &+ \mathrm{e}^{\mathrm{i} l} \bigg[ -\frac{14936827}{352800} + \frac{3103 \mathrm{i} \pi }{420} + \frac{29 \pi^2}{24} + \left( -\frac{107}{70} + \frac{3 \mathrm{i} \pi}{2} \right) \ln (2) + \frac{3}{2} \ln^2(2) \nonumber\\ &+ \left( -\frac{3103}{140} + \frac{87 \mathrm{i} \pi }{4} - \frac{9}{2} \ln(2) \right) \ln \left( \frac{x}{x_0'} \right) -\frac{261}{8} \ln^2 \left( \frac{x}{x_0'} \right) \bigg] \bigg\} \Bigg) \,. \end{align} \end{subequations} \end{widetext} Here, $x_0'$ is related to the arbitrary constant $\tau_0$ by \begin{align} x_0' &= \left( \frac{Gm}{c^3} \frac{\mathrm{e}^{11/12 - \gamma_\textnormal{E}}}{4 \tau_0} \right)^{2/3} \,. \end{align} We list expressions for all $h^{\ell m}_\textnormal{tail}$ modes in a supplemental \emph{Mathematica} file. 
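The first of the standard integrals above is to be understood in the usual regularized sense, i.e., as the $\epsilon \rightarrow 0$ limit of the convergent integral with a damping factor $\mathrm{e}^{-\epsilon\tau}$, which evaluates to $1/(\epsilon + \mathrm{i}\omega)$. A minimal Python check that this regularized value approaches $-\mathrm{i}/\omega$ (the frequency value is arbitrary):

```python
# Regularized tail integral: int_0^inf e^{-(eps + i*omega) tau} dtau = 1/(eps + i*omega);
# the formula quoted in the text is the eps -> 0 limit of this expression.
def regularized(omega, eps):
    return 1.0 / (eps + 1j * omega)

omega = 2.0
exact = -1j / omega
errs = [abs(regularized(omega, 10.0**-k) - exact) for k in (1, 2, 3, 4)]
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))   # monotonic approach
assert errs[-1] < 1e-4
```

The logarithmic integrals follow from the same prescription by differentiating with respect to a parameter; we do not reproduce that here.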
\subsection{Memory integrals} The nonlinear memory effect arises from the nonlogarithmic integrals in Eqs.~(\ref{eq: U_L}); e.g., for the $\ell = 2$ modes we have \begin{align} U_{ij}^\textnormal{mem} (t_r) =&\; -\frac{2G}{7c^5} \int_{-\infty}^{t_r} \mathrm{d}\tau\; I_{a\langle i}^{(3)}(\tau) I_{j\rangle a}^{(3)}(\tau) \,. \end{align} There are two types of memory arising from these integrals: DC (or ``direct current'') memory and oscillatory memory. The DC memory is a slowly increasing, nonoscillatory contribution to the gravitational-wave amplitude, entering at Newtonian order. This leads to a difference in the amplitude between early and late times: \begin{align} \Delta h_\textnormal{mem} &= \lim_{t \rightarrow +\infty} h(t) - \lim_{t \rightarrow -\infty} h(t) \,. \end{align} The oscillatory memory, on the other hand, is a normal periodic contribution entering the gravitational-wave amplitude at higher PN order. In Refs.~\cite{arun-2004} and~\cite{blanchet-2008}, the authors give expressions for both leading-order DC and oscillatory memory in the circular limit. The calculation of DC memory has been extended to 3PN order for circular binaries in Ref.~\cite{favata-2009} and to Newtonian order for eccentric binaries in Ref.~\cite{favata-2011}. In this paper, we will only briefly discuss the leading-order contributions to the DC and oscillatory memory for eccentric binaries, such that we can compare our results to the circular limit in Ref.~\cite{blanchet-2008}. The complete post-Newtonian corrections to the nonlinear memory are dealt with in a subsequent paper~\cite{ebersold-2019}, completing the hereditary contributions to the gravitational-wave amplitudes for nonspinning eccentric binaries. 
Following the same steps as in the previous section, we can calculate the derivatives of the source moments, and we find, e.g., for the $20$-mode: \begin{align} h^{20}_\textnormal{DC} =&\; \frac{256}{7} \frac{G m \nu}{c^2 R} \sqrt{ \frac{\pi}{30}} \int_{-\infty}^{t_r} \mathrm{d} t\, \left( 1 + \frac{313}{48} e^2 \right) x^5 \,. \end{align} We find that all DC memory modes will consist of such integrals of the form \begin{align} h^{\ell 0}_\textnormal{DC} \propto&\; \int_{-\infty}^{t_r} \mathrm{d} t\, x^p(t) \, e^q(t) \,. \end{align} One can rewrite this as an integral over the eccentricity \begin{align}\label{eq: hl0 mem integral} h^{\ell 0}_\textnormal{DC} \propto&\; \int_{e_i}^{e(t_r)} \mathrm{d} e\, \left( \frac{\mathrm{d} e}{\mathrm{d} t} \right)^{-1} x^p(e) \, e^q \,, \end{align} where $e_i$ is some initial eccentricity at early times. Solving the evolution equations~(\ref{eq: peters-mathews}) to leading order, we find \begin{align} x(e) =&\; x_0 \left( \frac{e_0}{e} \right)^{12/19} \,, \end{align} where $x(e_0) = x_0$. We can insert this into Eq.~(\ref{eq: hl0 mem integral}) together with the evolution equation $\mathrm{d} e/\mathrm{d} t$ and integrate over $e$. We then find DC memory at leading Newtonian order in the $20$-mode and $40$-mode: \begin{subequations} \begin{align} h^{20}_\textnormal{DC} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \; \frac{-5}{14 \sqrt{6}} \left\{ 1 - \left( \frac{e}{e_i} \right)^{12/19} \right\} \,,\\ h^{40}_\textnormal{DC} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \; \frac{-1}{504 \sqrt{2}} \left\{ 1 - \left( \frac{e}{e_i} \right)^{12/19} \right\} \,. \end{align} \end{subequations} The time derivatives of the oscillatory modes are computed in the same way. 
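The leading-order relation $x(e) = x_0 (e_0/e)^{12/19}$ is equivalent to the statement that $x\, e^{12/19}$ is conserved along the orbital-averaged flow. This can be checked by integrating the evolution equations directly; the sketch below uses the standard leading-order (in $e$) coefficients in units where $c^3\nu/(Gm) = 1$ — these coefficients should be regarded as assumptions here, since Eqs.~(\ref{eq: peters-mathews}) are not reproduced in this section:

```python
# Leading-order (in e) orbital-averaged evolution, units c^3*nu/(G*m) = 1.
# Coefficients are the standard Peters-type ones (an assumption of this sketch).
def rhs(x, e):
    return (64.0 / 5.0) * x**5, -(304.0 / 15.0) * e * x**4

def rk4_step(x, e, h):
    k1 = rhs(x, e)
    k2 = rhs(x + 0.5 * h * k1[0], e + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], e + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], e + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            e + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x, e = 0.05, 0.3
invariant0 = x * e**(12.0 / 19.0)      # constant iff x = x0 (e0/e)^(12/19)
for _ in range(2000):
    x, e = rk4_step(x, e, 1.0)
assert abs(x * e**(12.0 / 19.0) / invariant0 - 1.0) < 1e-6
assert x > 0.05 and e < 0.3            # x grows while e decays
```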
We find that they consist of integrals of the form \begin{align} h^{\ell m}_\textnormal{osc} \propto&\; \int_{-\infty}^{t_r} \mathrm{d} t\, x^p(t) \, e^q(t) \, \mathrm{e}^{\mathrm{i} (s \lambda + r l)} \,, \end{align} which can be integrated to give \begin{align} h^{\ell m}_\textnormal{osc} \propto&\; -\frac{\mathrm{i}}{n ( r + (1+k) s)} x^p \, e^q \, \mathrm{e}^{\mathrm{i} (s \lambda + r l)} \,. \end{align} Note that there are oscillatory memory contributions entering the waveform at 1.5, 2, 2.5, and 3PN order. We list here only the 2.5 and 3PN terms that have a circular limit, so as to compare our results to Ref.~\cite{blanchet-2008}. We refer to our follow-up work~\cite{ebersold-2019} for a complete treatment of nonlinear memory. The modes have the following structure: \begin{align} h^{\ell m}_\textnormal{osc} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{osc} \,. \end{align} The various contributions to $\mathcal{O}(e)$ are: \begin{subequations} \begin{align} H^{31}_\textnormal{osc} =&\; \frac{-121\, x^3 \nu \Delta}{45 \sqrt{14}} \left( 1 + e \left\{ \frac{301}{242} \mathrm{e}^{-\mathrm{i} l} + \mathrm{e}^{\mathrm{i} l} \right\} \right) \,,\\ H^{33}_\textnormal{osc} =&\; \frac{11\, x^3 \nu \Delta}{27 \sqrt{210}} \left( 1 + e \left\{ \frac{9}{2} \mathrm{e}^{-\mathrm{i} l} + \frac{3}{22} \mathrm{e}^{\mathrm{i} l} \right\} \right) \,,\\ H^{44}_\textnormal{osc} =&\; \frac{\mathrm{i}\, x^{5/2} \nu}{9 \sqrt{35}} \left( 1 + e \left\{ \frac{7}{5} \mathrm{e}^{-\mathrm{i} l} + 3 \mathrm{e}^{\mathrm{i} l} \right\} \right) \,,\\ H^{51}_\textnormal{osc} =&\; \frac{-13\, x^3 \nu \Delta}{63 \sqrt{385}} \left( 1 + e \left\{ \frac{251}{208} \mathrm{e}^{-\mathrm{i} l} + \mathrm{e}^{\mathrm{i} l} \right\} \right) \,,\\ H^{53}_\textnormal{osc} =&\; \frac{-x^3 \nu \Delta}{189 \sqrt{330}} \left( 1 + e \left\{ \frac{201}{16} \mathrm{e}^{\mathrm{i} l} - \frac{369}{32} \mathrm{e}^{-\mathrm{i} l} \right\} \right) \,,\\
H^{55}_\textnormal{osc} =&\; \frac{9\, x^3 \nu \Delta}{35 \sqrt{66}} \left( 1 + e \left\{ \frac{2285}{1296} \mathrm{e}^{-\mathrm{i} l} + \frac{985}{288} \mathrm{e}^{\mathrm{i} l} \right\} \right)\,. \end{align} \end{subequations} \section{Constructing the full 3PN-accurate waveform}\label{sec: full waveform} We now want to construct the full 3PN-accurate waveform valid during the inspiral of a binary system. We begin by adding up the two contributions to the spherical harmonic modes: \begin{align} h^{\ell m} &= (h^{\ell m})_\textnormal{inst} + (h^{\ell m})_\textnormal{hered} \,. \end{align} Note that we are still missing some memory contributions. These will be computed in full in our follow-up work~\cite{ebersold-2019}, and we will give expressions for the full waveform including memory there. \subsection{Instantaneous parts} The instantaneous parts $(h^{\ell m})_\textnormal{inst}$ of the spherical harmonic modes for compact binaries in elliptical orbits have already been calculated to the third post-Newtonian order in Ref.~\cite{mishra-2015}, although the results do not include post-adiabatic corrections to the quasi-Keplerian parametrization. They are given in terms of the constants of motion $x$ and $e = e_t$ and parametrized by the eccentric anomaly $u$. We will rewrite these in terms of the mean anomaly $l$ by using the solution to the Kepler equation~(\ref{eq: u solution}). This gives us expressions for the instantaneous contributions to the different modes in terms of the post-Newtonian parameter $x$ and the time eccentricity $e$, parametrized by the angles $\phi$ and $l$. The modes again have the following structure: \begin{align}\label{eq: hlm inst} h^{\ell m}_\textnormal{inst} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{inst} \,. 
\end{align} The various contributions to, e.g., the $H^{22}_\textnormal{inst}$ mode are given to $\mathcal{O}(e)$ by \begin{widetext} \begin{subequations}\label{eq: h22 inst} \begin{align} (H^{22}_\textnormal{inst})_\textnormal{Newt} =&\; 1 + e \bigg\{ \frac{1}{4} \mathrm{e}^{-\mathrm{i} l} + \frac{5}{4} \mathrm{e}^{\mathrm{i} l} \bigg\} \,,\\ % (H^{22}_\textnormal{inst})_\textnormal{1PN} =&\; x \Bigg( -\frac{107}{42} + \frac{55 \nu}{42} + e \bigg\{ \mathrm{e}^{-\mathrm{i} l} \left[ -\frac{257}{168} + \frac{169 \nu}{168} \right] + \mathrm{e}^{\mathrm{i} l} \left[ -\frac{31}{24} + \frac{35 \nu}{24} \right] \bigg\} \Bigg) \,,\\ % (H^{22}_\textnormal{inst})_\textnormal{2PN} =&\; x^2 \Bigg( -\frac{2173}{1512} - \frac{1069 \nu}{216} + \frac{2047 \nu^2}{1512} + e \bigg\{ \mathrm{e}^{\mathrm{i} l} \left[ -\frac{2155}{252} - \frac{1655 \nu}{672} + \frac{371 \nu^2}{288} \right] \nonumber\\ &+ \mathrm{e}^{-\mathrm{i} l} \left[ -\frac{4271}{756} - \frac{35131 \nu}{6048} + \frac{421 \nu^2}{864} \right] \bigg\} \Bigg) \,,\\ % (H^{22}_\textnormal{inst})_\textnormal{2.5PN} =&\; -x^{5/2} \mathrm{i} \nu \Bigg( \frac{56}{5} + e \bigg\{ \frac{7817}{420} \mathrm{e}^{\mathrm{i} l} + \frac{2579}{84} \mathrm{e}^{-\mathrm{i} l} \bigg\} \Bigg) \,,\\ % (H^{22}_\textnormal{inst})_\textnormal{3PN} =&\; x^3 \Bigg( \frac{761273}{13200} + \left( -\frac{278185}{33264} + \frac{41 \pi^2}{96} \right) \nu - \frac{20261 \nu^2}{2772} + \frac{114635 \nu^3}{99792} + \frac{856}{105} \ln \left( \frac{x}{x_0} \right) \nonumber\\ &+ e \bigg\{ \mathrm{e}^{\mathrm{i} l} \left[ \frac{6148781}{75600} + \left( -\frac{199855}{3024} + \frac{41 \pi^2}{48} \right) \nu - \frac{9967 \nu^2}{1008} + \frac{35579 \nu^3}{36288} + \frac{3103}{210} \ln \left( \frac{x}{x_0} \right) \right] \nonumber\\ &+ \mathrm{e}^{-\mathrm{i} l} \left[ \frac{150345571}{831600} + \left( -\frac{121717}{20790} - \frac{41 \pi^2}{192} \right) \nu - \frac{86531 \nu^2}{8316} - \frac{33331 \nu^3}{399168} + \frac{749}{30} \ln \left( \frac{x}{x_0} 
\right) \right] \bigg\} \Bigg) \,, \end{align} \end{subequations} \end{widetext} where $x_0 = Gm/(c^3 \tau_0)$ is related to $x_0'$ by \begin{align}\label{eq: logx0 relation} \ln x_0' &= \frac{11}{18} -\frac{2}{3}\gamma_\textnormal{E} - \frac{4}{3} \ln 2 + \frac{2}{3} \ln x_0 \,. \end{align} \subsection{Post-adiabatic corrections} We now move to include post-adiabatic corrections into the waveform. As already mentioned in Sec.~\ref{sec: hereditary}, post-adiabatic corrections to the hereditary contributions will only enter at 4PN. We are thus left with computing the corrections to the instantaneous contributions as described in Sec.~\ref{sec: phasing}. Schematically, the substitutions in Eq.~(\ref{eq: phasing subst}) may be described as \begin{align} h^{\ell m}(&x, e, l, \lambda) \nonumber\\ &\Downarrow \nonumber\\ h^{\ell m}(\bar{x} + \tilde{x}, \bar{e} &+ \tilde{e}, \bar{l} + \tilde{l}, \bar{\lambda} + \tilde{\lambda}) \nonumber\\ &\Downarrow \nonumber\\ h^{\ell m}(\bar{x}, \bar{e}, \bar{l}, \bar{\lambda}) + \bigg\{ \frac{\partial h^{\ell m}}{\partial x} \tilde{x} \,+\,& \frac{\partial h^{\ell m}}{\partial e} \tilde{e} + \frac{\partial h^{\ell m}}{\partial l} \tilde{l} + \frac{\partial h^{\ell m}}{\partial \lambda} \tilde{\lambda} \bigg\} \nonumber\\ &\Downarrow \nonumber\\ h^{\ell m}(\bar{x}, \bar{e} , \bar{l}, \bar{\lambda}) \,+&\, \frac{1}{c^5} \, h^{\ell m}_\textnormal{post-ad} (\bar{x}, \bar{e}, \bar{l}, \bar{\lambda}) \,. \end{align} In particular, we only need to make these substitutions at leading Newtonian and 0.5PN order. At higher orders, we simply replace the variables ($x$, $e$, $l$, $\lambda$) by their secular evolving parts ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$) to get the secular evolving waveform. 
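The relation~(\ref{eq: logx0 relation}) between $x_0$ and $x_0'$ follows directly from the definition of $x_0'$ in terms of $\tau_0$ given below Eqs.~(\ref{eq: h22-tail}), together with $x_0 = Gm/(c^3 \tau_0)$. A quick numerical consistency check in Python (the values of $Gm/c^3$ and $\tau_0$ are arbitrary):

```python
import math

gamma_E = 0.5772156649015329      # Euler-Mascheroni constant
Gm_over_c3 = 3.7                  # arbitrary test value (units of time)
tau0 = 1.3                        # arbitrary regularization constant

x0 = Gm_over_c3 / tau0
x0p = (Gm_over_c3 * math.exp(11.0 / 12.0 - gamma_E) / (4.0 * tau0))**(2.0 / 3.0)

lhs = math.log(x0p)
rhs = (11.0 / 18.0 - 2.0 / 3.0 * gamma_E
       - 4.0 / 3.0 * math.log(2.0) + 2.0 / 3.0 * math.log(x0))
assert abs(lhs - rhs) < 1e-12
```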
The post-adiabatic contributions to the different modes in terms of the secular evolving parameters $\bar{x}$ and $\bar{e}$, parametrized by the angles $\phi$ and $\bar{l}$, have the following form: \begin{align} h^{\ell m}_\textnormal{post-ad} =&\; \frac{8 G m \nu}{c^2 R} \bar{x} \sqrt{\frac{\pi}{5}} \mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{post-ad} \,. \end{align} For example, the $H^{22}_\textnormal{post-ad}$ mode, which arises from including the post-adiabatic corrections in $(H^{22}_\textnormal{inst})_\textnormal{Newt}$, is given by \begin{align}\label{eq: h22-post-ad} H^{22}_\textnormal{post-ad} =&\; \nonumber\\ \frac{192}{5} & \bar{x}^{5/2} \mathrm{i} \nu \Bigg( 1 + \bar{e} \bigg\{ \frac{401}{72} \mathrm{e}^{-\mathrm{i}\bar{l}} + \frac{293}{72} \mathrm{e}^{\mathrm{i}\bar{l}} \bigg\} \Bigg) \,. \end{align} We can combine these post-adiabatic contributions with the instantaneous ones to get the full secular evolving instantaneous waveform in terms of the variables ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$). The result again has the following form: \begin{align} h^{\ell m}_\textnormal{inst} =&\; \frac{8 G m \nu}{c^2 R} \bar{x} \sqrt{\frac{\pi}{5}} \mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{inst} \,. \end{align} In, e.g., the $H^{22}_\textnormal{inst}$ mode, we find that the only term that is modified is the one at 2.5PN order: \begin{align} &(H^{22}_\textnormal{inst})_\textnormal{2.5PN} =\nonumber\\ &\quad\quad -\bar{x}^{5/2} \mathrm{i} \nu \Bigg( 24 + \bar{e} \bigg\{ \frac{43657}{420} \mathrm{e}^{\mathrm{i}\bar{l}} + \frac{1013}{140} \mathrm{e}^{-\mathrm{i}\bar{l}} \bigg\} \Bigg) \,. \end{align} All other orders are exactly as in Eqs.~(\ref{eq: h22 inst}), but with ($x$, $e$, $l$, $\lambda$) replaced by ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$). \subsection{Log cancellation}\label{sec: log cancel} We observe that both instantaneous and tail terms still have some dependence on the arbitrary constant $x_0'$ (or $x_0$).
We find that this dependence on $x_0'$ can be reabsorbed into a shift of the coordinate time $t$ \cite{blanchet-1996, arun-2004} through a redefinition of the mean anomaly as \begin{align} \xi &= \bar{l} - \frac{3GM}{c^3} \bar{n} \ln \Big( \frac{\bar{x}}{x_0'} \Big) \,, \end{align} where $M = m (1 - \nu \bar{x} / 2)$ is the ADM mass. Note that there are no post-adiabatic corrections to $n$ and $x$ here, as phasing effects would only enter at $1.5+2.5$PN order. This also means that both $\xi$ and $\bar{l}$ follow the same evolution, i.e., $\mathrm{d}\xi/\mathrm{d} t = \mathrm{d}\bar{l}/\mathrm{d} t = \bar{n}$, and they only differ by a constant offset. To simplify the final expressions, we also introduce a redefined phase $\psi$ such that Eq.~(\ref{eq: quasi-keplerian phi bar}) gives the relation between $\xi$ and $\psi$: \begin{align} \psi =\;& \bar{\lambda}_\xi + \bar{W}_\xi + \tilde{\lambda}_\xi + (\tilde{v}_\xi - \tilde{l}_\xi) \,. \end{align} Here \begin{align} \bar{\lambda}_\xi =&\; \bar{\lambda} - \frac{3GM}{c^3} (1 + \bar{k}) \bar{n} \ln \Big( \frac{\bar{x}}{x_0'} \Big) \,, \end{align} is the phase $\bar{\lambda}$ evaluated at the shifted time defined by $\xi$, and $\bar{W}_\xi$, $\tilde{\lambda}_\xi$, $\tilde{v}_\xi$, and $\tilde{l}_\xi$ are defined as in Eq.~(\ref{eq: quasi-keplerian phi bar}), but with $\bar{l}$ replaced by $\xi$. From this, we can easily deduce that \begin{align} \psi =\;& \phi + \sum_{s=1}^{\infty} \frac{1}{s!} \Bigg[ \left( \xi - \bar{l} \right)^s \left( \frac{\mathrm{d}}{\mathrm{d}\bar{l}} \right)^s \nonumber\\ &+ \left( \bar{\lambda}_\xi - \bar{\lambda} \right)^s \left( \frac{\mathrm{d}}{\mathrm{d}\bar{\lambda}} \right)^s \Bigg] \phi \,. \end{align} Note that the phase $\psi$ does not have the same geometric interpretation as $\phi$.
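The sum over $s$ relating $\psi$ and $\phi$ is simply the Taylor expansion of $\phi$ under the shifts $\xi - \bar{l}$ and $\bar{\lambda}_\xi - \bar{\lambda}$ of its two arguments. The mechanism can be checked on a toy function of a single angle (all numerical values arbitrary):

```python
import math

phi = lambda l: math.sin(l) + 0.1 * math.sin(2 * l)   # toy phase-like function
delta = 0.05            # small shift, playing the role of xi - lbar

l0 = 0.8
direct = phi(l0 + delta)

# Taylor series phi(l + delta) = sum_s delta^s/s! d^s phi/dl^s, truncated at s = 4.
derivs = [phi,
          lambda l: math.cos(l) + 0.2 * math.cos(2 * l),
          lambda l: -math.sin(l) - 0.4 * math.sin(2 * l),
          lambda l: -math.cos(l) - 0.8 * math.cos(2 * l),
          lambda l: math.sin(l) + 1.6 * math.sin(2 * l)]
series = sum(delta**s / math.factorial(s) * derivs[s](l0) for s in range(5))
assert abs(direct - series) < 1e-7
```

In the waveform application the shifts are of 1.5PN order, so in practice only the first few terms of the sum are needed.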
Expanding these equations to $\mathcal{O}(\bar{x}^3)$ and $\mathcal{O}(\bar{e})$, we find \begin{subequations} \begin{align} \bar{l} =&\; \xi + 3 \left( \bar{x}^{3/2} - \bar{x}^{5/2} \left( 3 + \frac{\nu}{2} \right) \right) \ln \Big( \frac{\bar{x}}{x_0'} \Big) \,,\\ \phi =&\; \psi + \bigg( \bar{x}^{3/2} \left( 3 + 6 \bar{e} \cos(\xi) \right) \nonumber\\ &+ \bar{x}^{5/2} \left( -\frac{3\nu}{2} + 6 \bar{e} (2 - \nu) \cos(\xi) \right) \bigg) \ln \Big( \frac{\bar{x}}{x_0'} \Big) \nonumber\\ &- 9 \bar{x}^3 \bar{e} \sin(\xi) \ln^2 \Big( \frac{\bar{x}}{x_0'} \Big) \,. \end{align} \end{subequations} This redefinition of the time coordinate results in the cancellation of all log terms involving the arbitrary constant $x_0'$. \subsection{Full waveform} The full waveform in terms of the redefined angles $\xi$ and $\psi$ -- minus some memory contributions -- has the following form: \begin{align} h^{\ell m} =&\; \frac{8 G m \nu}{c^2 R} \bar{x} \sqrt{\frac{\pi}{5}} \mathrm{e}^{-\mathrm{i} m\psi} H^{\ell m} \,. 
\end{align} The various contributions to, e.g., the $H^{22}$ mode are given to $\mathcal{O}(\bar{e})$ by \begin{widetext} \begin{subequations}\label{eq: Hlm inst+hered} \begin{align} H^{22}_\textnormal{Newt} =&\; 1 + \bar{e} \bigg\{ \frac{1}{4} \mathrm{e}^{-\mathrm{i}\xi} + \frac{5}{4} \mathrm{e}^{\mathrm{i}\xi} \bigg\} \,,\\ % H^{22}_\textnormal{1PN} =&\; \bar{x} \Bigg( -\frac{107}{42} + \frac{55 \nu}{42} + \bar{e} \bigg\{ \mathrm{e}^{-\mathrm{i}\xi} \left[ -\frac{257}{168} + \frac{169 \nu}{168} \right] + \mathrm{e}^{\mathrm{i}\xi} \left[ -\frac{31}{24} + \frac{35 \nu}{24} \right] \bigg\} \Bigg) \,,\\ % H^{22}_\textnormal{1.5PN} =&\; \bar{x}^{3/2} \Bigg( 2 \pi + \bar{e} \bigg\{ \mathrm{e}^{-\mathrm{i} \xi} \left[ \frac{11 \pi }{4} + \frac{27 \mathrm{i}}{2} \ln \left( \frac{3}{2} \right) \right] + \mathrm{e}^{\mathrm{i} \xi} \left[\frac{13 \pi }{4} + \frac{3 \mathrm{i}}{2} \ln(2) \right] \bigg\} \Bigg) \,,\\ % H^{22}_\textnormal{2PN} =&\; \bar{x}^2 \Bigg( -\frac{2173}{1512} - \frac{1069 \nu}{216} + \frac{2047 \nu^2}{1512} + \bar{e} \bigg\{ \mathrm{e}^{\mathrm{i}\xi} \left[ -\frac{2155}{252} - \frac{1655 \nu}{672} + \frac{371 \nu^2}{288} \right] \nonumber\\ &+ \mathrm{e}^{-\mathrm{i}\xi} \left[ -\frac{4271}{756} - \frac{35131 \nu}{6048} + \frac{421 \nu^2}{864} \right] \bigg\} \Bigg) \,,\\ % H^{22}_\textnormal{2.5PN} =&\; \bar{x}^{5/2} \Bigg( -\frac{107 \pi}{21} + \left( -24 \mathrm{i} + \frac{34 \pi}{21} \right) \nu \nonumber\\ &+ \bar{e} \bigg\{ \mathrm{e}^{\mathrm{i} \xi} \bigg[ -\frac{9 \mathrm{i}}{2} + \frac{229 \pi}{168} + \left( -\frac{43657 \mathrm{i}}{420} + \frac{61 \pi}{42} \right) \nu + \left( \frac{473 \mathrm{i}}{28} - \frac{3 \mathrm{i} \nu }{7} \right) \ln (2) \bigg] \nonumber\\ &+ \mathrm{e}^{-\mathrm{i} \xi} \bigg[ -\frac{27 \mathrm{i}}{2} -\frac{1081 \pi}{168} + \left( -\frac{1013 \mathrm{i}}{140} + \frac{137 \pi}{42} \right) \nu + \left( \frac{27 \mathrm{i}}{4} + 9 \mathrm{i} \nu \right) \ln \left( \frac{3}{2} \right) \bigg] \bigg\} \Bigg) \,,\\ % 
H^{22}_\textnormal{3PN} =&\; \bar{x}^3 \Bigg( \frac{27027409}{646800} + \frac{428 \mathrm{i} \pi}{105} + \frac{2 \pi^2}{3} - \frac{856 \gamma_\textnormal{E}}{105} + \left( -\frac{278185}{33264} + \frac{41 \pi^2}{96} \right) \nu - \frac{20261 \nu^2}{2772} + \frac{114635 \nu^3}{99792} \nonumber\\ &- \frac{1712 \ln(2)}{105} - \frac{428 \ln(\bar{x})}{105} \nonumber\\ &+ \bar{e} \bigg\{ \mathrm{e}^{-\mathrm{i} \xi} \bigg[ \frac{219775769}{1663200} + \frac{749 \mathrm{i} \pi}{60} + \frac{49 \pi^2}{24} - \frac{749 \gamma_\textnormal{E}}{30} + \left( -\frac{121717}{20790} - \frac{41 \pi^2}{192}\right) \nu - \frac{86531 \nu^2}{8316} - \frac{33331 \nu^3}{399168} \nonumber\\ &+ \left( -\frac{2889}{70} + \frac{81 \mathrm{i} \pi}{2}\right) \ln \left( \frac{3}{2} \right) - \frac{81}{2} \ln^2 \left( \frac{3}{2} \right) - \frac{749 \ln(2)}{15} - \frac{749 \ln(\bar{x})}{60} \bigg] \nonumber\\ &+ \mathrm{e}^{\mathrm{i} \xi} \bigg[ \frac{55608313}{1058400} + \frac{3103 \mathrm{i} \pi}{420} + \frac{29 \pi^2}{24} - \frac{3103 \gamma_\textnormal{E}}{210} + \left( -\frac{199855}{3024} + \frac{41 \pi^2}{48} \right) \nu -\frac{9967 \nu^2}{1008} + \frac{35579 \nu^3}{36288} \nonumber\\ &+ \left( -\frac{6527}{210} + \frac{3 \mathrm{i} \pi}{2}\right) \ln(2) + \frac{3 \ln^2(2)}{2} - \frac{3103 \ln(\bar{x})}{420} \bigg] \bigg\} \Bigg) \,. \end{align} \end{subequations} \end{widetext} For completeness all equations relating the different angles $\bar{l}$, $\bar{\lambda}$, $\xi$ and $\psi$ are listed in Appendix~\ref{sec: quasi-kepl relations}. \subsection{Quasi-Circular limit}\label{sec: circular} We now check our results against those in Ref.~\cite{blanchet-2008} in the quasi-circular limit. Note that the eccentricity is not a gauge-independent quantity and one thus has to be careful when talking about the circular limit. For a thorough discussion on different eccentricity parameters and discrepancies between them we refer to Refs.~\cite{loutrel-2018, loutrel-2019}. 
Normally, one uses the orbital averaged description for the evolution of $x$ and $e$, where one finds that the evolution equations~(\ref{eq: peters-mathews}) drive the eccentricity to zero during the inspiral. When introducing post-adiabatic corrections, this no longer holds, as the eccentricity is split into an orbital averaged part $\bar{e}$ and a periodic oscillatory part $\tilde{e}$. The orbital averaged part $\bar{e}$ will still follow the same evolution equations~(\ref{eq: peters-mathews}) and thus be driven to zero, but the periodic variations $\tilde{e}$ will generally grow larger as the binary inspirals. As discussed in Ref.~\cite{loutrel-2019}, the orbital averaged description also breaks down in the late inspiral, failing to capture a secular growth in the eccentricity observed when directly integrating the two-body equations of motion. In our case, it is reasonable to consider the circular limit as the limit where $\bar{x} \rightarrow x$ and $\bar{e} \rightarrow 0$, with $x$ being the standard circular frequency parameter. Then, the evolution equations~(\ref{eq: peters-mathews}) reduce to the usual circular evolution equation \begin{align} \dot{x} &= \frac{64c^3 \nu}{5Gm} x^5 + \mathcal{O}(x^6)\,. \end{align} In this limit, our redefined phase $\psi$ reduces to \begin{align} \psi|_{\bar{e}=0} &= \phi - 3 \left(1 - \frac{\nu x}{2}\right) x^{3/2} \ln \Big( \frac{x}{x_0'} \Big) \,, \end{align} which matches exactly the phase $\psi$ used in Ref.~\cite{blanchet-2008}. We can thus directly compare our results to the circular limit by setting $\bar{e} = 0$ and $\bar{x}|_{\bar{e}=0} = x$.
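The circular evolution equation above integrates in closed form to $x(t) = x_0\,[1 - (256/5)(c^3\nu/Gm)\,x_0^4\, t]^{-1/4}$, which follows by rewriting it as $\mathrm{d}(x^{-4})/\mathrm{d} t = -256 c^3\nu/(5Gm)$. A short Python sketch verifying this against direct numerical integration, in units where $c^3\nu/(Gm) = 1$:

```python
# Fourth-order Runge-Kutta step for an autonomous scalar ODE y' = f(y).
def rk4(f, y, h):
    k1 = f(y); k2 = f(y + 0.5 * h * k1); k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

xdot = lambda x: (64.0 / 5.0) * x**5    # circular limit, units c^3*nu/(G*m) = 1

x0, h, steps = 0.05, 0.5, 4000
x = x0
for _ in range(steps):
    x = rk4(xdot, x, h)

t = steps * h
closed_form = x0 * (1.0 - (256.0 / 5.0) * x0**4 * t) ** (-0.25)
assert abs(x / closed_form - 1.0) < 1e-7
```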
We find, e.g., for the $h^{22}$ mode \begin{align} h^{22} = \frac{8Gm\nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \mathrm{e}^{-2\mathrm{i}\psi} H^{22} \,, \end{align} \begin{widetext} \begin{align} H^{22} =\;& 1 + x \left( -\frac{107}{42} + \frac{55\nu}{42} \right) + 2\pi x^{3/2} + x^2 \left( -\frac{2173}{1512} - \frac{1069\nu}{216} + \frac{2047\nu^2}{1512} \right) + x^{5/2} \left( -\frac{107\pi}{21} + \left( -24\mathrm{i} + \frac{34\pi}{21} \right) \nu \right) \nonumber\\ &+ x^3 \bigg( \frac{27027409}{646800} + \frac{428\mathrm{i}\pi}{105} + \frac{2 \pi^2}{3} - \frac{856 \gamma_\textnormal{E}}{105} + \left( -\frac{278185}{33264} + \frac{41\pi^2}{96} \right) \nu - \frac{20261 \nu^2}{2772} + \frac{114635\nu^3}{99792} \nonumber\\ &- \frac{1712}{105} \ln(2) - \frac{428}{105} \ln(x) \bigg) \,. \end{align} \end{widetext} This matches Eq.~(9.4a) of Ref.~\cite{blanchet-2008}. Similarly, we can compare the other modes and find perfect agreement in all of them. \section{Conclusion}\label{sec: summary} In this work, we computed the tail contributions to the 3PN-accurate gravitational waveform from nonspinning compact binaries on eccentric orbits. This extends the work on instantaneous contributions in Ref.~\cite{mishra-2015} and will be completed with the memory contributions in a follow-up paper~\cite{ebersold-2019}. We also include post-adiabatic corrections to the quasi-Keplerian parametrization when combining our tail results with the instantaneous ones, giving us the full waveform (neglecting memory) that can be compared to the circular one in the limit $e \rightarrow 0$. The tail contributions to the $h^{22}$ mode are given at 3PN order and to $\mathcal{O}(e)$ in Eq.~(\ref{eq: h22-tail}), the post-adiabatic corrections in Eq.~(\ref{eq: h22-post-ad}). All other $h^{\ell m}$ modes up to $\ell = 5$ are listed in the supplemental \emph{Mathematica} notebook~\cite{supplement}. To reiterate, all results are in MH coordinates, which differ from the SH coordinates at 3PN order. 
Note that the instantaneous results in Ref.~\cite{mishra-2015} can be applied to binary systems of arbitrary eccentricities, while the tail results presented here are calculated in a small eccentricity expansion. This is due to the complicated tail integrals over past history, which can only be analytically calculated when decomposing the integrand into harmonics of the orbital timescale using an eccentricity expansion. This means that our results are not applicable for large eccentricities $e \sim 1$, though they might give accurate results for moderate eccentricities $e \sim 0.4$ when combined with orbital evolution equations that are not expanded in eccentricity; see, e.g., Ref.~\cite{klein-2018}. \acknowledgments We thank Riccardo Sturani for a first review. Y.~B. is supported by the Swiss National Science Foundation and a Forschungskredit of the University of Zurich, Grant No.~FK-18-084. Y.~B. would like to acknowledge the hospitality of the Institut d’Astrophysique de Paris during the final stages of this collaboration.
\section{Introduction} Topological field and string theories have been the focus of extensive investigation in the last two decades. These models are more tractable than their physical counterparts but still capture some interesting physical quantities, in particular those related to the vacuum structure of the full quantum theory. Due to the topological nature of the model these quantities can be often computed exactly. The underlying reason is their deep relation to geometric and topological invariants of the physical space where the original model is defined. In this note we will focus on the Donaldson--Thomas (DT) invariants \cite{Donaldson:1996kp}. From the viewpoint of the topological string they can be defined as follows. One starts with a Calabi--Yau manifold $X$ on which the topological A--model is defined. Then the DT invariant corresponds to the number of bound states formed by a single D6 brane wrapping the full Calabi--Yau manifold with a D2 brane wrapping a 2--cycle $C \subset X$ in homology class $\beta$ and $m$ D0 branes. This configuration is encoded in a mathematical object called an \textit{ideal sheaf} and the set of all possible configurations is described by the moduli space of ideal sheaves $I_{m} (X,\beta)$. This space is also known as the Hilbert scheme of points and curves of the threefold $\mathrm{Hilb}^m (X , \beta)$. Then the DT invariant $D^m_{\beta} (X)$ is defined as the ``volume'' of this moduli space. If the Calabi--Yau is toric all the geometric information can be essentially encoded in a combinatorial problem and the topological string has a reformulation in terms of the classical statistical mechanics of a melting crystal \cite{Okounkov:2003sp}. In this more physical setting the DT invariants parametrize the atomic configurations of the melting crystal. 
This leads to a very non--trivial conjecture that the geometrical information captured by the DT invariants is equivalent to Gromov--Witten theory, since they are two different expansions of the same topological string amplitude. So far this conjecture has been proven in a number of cases \cite{MNOP}. A detailed understanding of DT theory on Calabi--Yaus could sharpen our knowledge about the geometrical meaning of the topological string and thus about the vacuum structure of the full string theory. In this note we will report about some progress towards this ambitious goal \cite{Cirafici:2008sn}. Namely we will only consider local toric threefolds on which the DT problem can be conjecturally rephrased as a topological gauge theory \cite{Iqbal:2003ds}. We will put this conjecture on firmer grounds and by employing the techniques of equivariant localization show how to set the ground for explicit computations. We will apply our formalism to higher rank DT invariants on the Coulomb branch of the gauge theory. \section{The Topological Gauge Theory and Equivariant Localization} Let us consider a local toric threefold $X$. In this case DT theory can be (conjecturally) described by a six dimensional abelian topological gauge theory living on the worldvolume of the D6 brane wrapping $X$ \cite{Iqbal:2003ds}. This gauge theory is the topologically twisted version of maximally supersymmetric Yang--Mills in six dimensions \cite{Blau:1997pp}--\cite{Acharya:1997gp}. Its bosonic matter content consists of a gauge field $A_{\mu}$, a complex Higgs field $\Phi$ and a $(3,0)$ form $\rho^{3,0}$ along with their complex conjugates. Essentially its action has the form of the topological density \begin{equation} \label{instaction} \frac{1}{2}\, \int_X \, \mathrm{Tr} \Big( F_A \wedge F_A \wedge k_0 + \mbox{$\frac\vartheta3$}\, F_A \wedge F_A \wedge F_A \Big) \ , \end{equation} supplemented by a gauge fixing term. 
Here $k_0$ is the background K{\"a}hler two-form of $X$ and $\vartheta$ is the six-dimensional theta-angle which will be identified with the topological string coupling~$g_s$. This gauge theory has a BRST symmetry and hence localizes onto the moduli space of solutions of the fixed point equations \begin{eqnarray} \label{inste} F_A^{2,0} &=& \overline{\partial}\,_A^{\dagger} \rho \ , \nonumber\\[4pt] F_A^{1,1} \wedge k_0 \wedge k_0 + [\rho, \overline{\rho}\,] &=& l~k_0 \wedge k_0 \wedge k_0 \ , \nonumber\\[4pt] \mathrm{d}_A \Phi &=& 0 \ . \end{eqnarray} The solutions of these equations minimize the gauge theory action and we will therefore call them generalized instantons or just instantons. On a Calabi--Yau manifold we can set the field $\rho$ to zero. Then the first two equations reduce to the Donaldson--Uhlenbeck--Yau (DUY) equations which are conditions of stability for holomorphic bundles over~$X$ with finite characteristic classes. The introduction of this auxiliary gauge theory essentially reformulates DT theory as a (generalized) instanton counting problem. The gauge theory localizes onto the moduli space $\mathcal{M} (X)$ of holomorphic bundles (or coherent sheaves) on $X$ and the instanton multiplicities in the instanton expansion of the path integral represent the DT invariants. Note that in the gauge theory language it is immediate to generalize DT theory to a non--abelian $U(N)$ setting, with an arbitrary number of D6 branes (corresponding to generic rank $N$ bundles). So far we have reduced the difficult algebro--geometrical problem of counting sheaves to a more tractable path integral. Unfortunately the theory as it stands is not very manageable since moduli spaces of instantons suffer from non-compactness problems arising both from singularities where instantons shrink to zero size and from the non-compactness of the ambient space $X$ on which the gauge theory is defined. 
The way out comes from an analogous issue in instanton counting in four dimensional twisted $\mathcal{N}=2$ theory. In \cite{Nekrasov:2002qd} Nekrasov proposed that equivariant localization techniques could be used in combination with a noncommutative deformation of the theory to evaluate directly the instanton factors. This idea has turned out to be very powerful allowing for explicit computations in the four dimensional setting and can be applied to our six dimensional case \cite{Iqbal:2003ds}. Here we will only consider the case of flat space $X = \mathbf{C}^3$ unless explicitly mentioned. The noncommutative deformation resolves the small instanton singularities of the moduli space $\mathcal{M} (X)$ and provides a natural compactification of $\mathcal{M} (X)$. Working equivariantly can also be easily implemented: on $\mathbf{C}^3$ there is naturally the action of the torus $\mathbf{T}^3$ coming from the maximal torus of the $U(3)$ group generating rotational isometries that preserve the K\"ahler form of $\mathbf{C}^3$. On the coordinates of $\mathbf{C}^3$ this torus acts as $z_i \longrightarrow z_i {\,\rm e}\,^{{\,{\rm i}\,} \epsilon_i}$, $i=1,2,3$. The equivariant model can be obtained by modifying the BRST operator so that it becomes an equivariant differential with respect to this toric action. In other words we restrict attention to field configurations that are annihilated by the old BRST operator only up to a toric action. After these modifications the gauge theory localizes onto the fixed points of the equivariant BRST operator. One can show that these fixed points are isolated and their contribution to the path integral can be computed by direct equivariant integration, by using the Duistermaat--Heckman formula or its generalizations. The problem of computing the path integral is now reduced to two simpler ones, namely the classification of the critical points of the equivariant BRST differential and the actual evaluation of the instanton factor. 
These goals can be accomplished in two distinct but ultimately equivalent ways as we are about to see. \section{The Noncommutative Theory} The path integral of the noncommutative field theory can be evaluated directly by using equivariant localization. After the noncommutative deformation we can think of the theory as an infinite--dimensional matrix model where the fields are replaced by operators acting on a separable Hilbert space. This approach has the advantage that some explicit instanton solutions can be constructed and it provides a natural compactification of the instanton moduli space. In terms of the noncommutative fields the instanton equations become \begin{eqnarray} \big[Z^{i}\,,\, Z^{j}\big] + \epsilon^{i j k} \,\big[Z^{\dagger}_{k}\,,\, \rho\big] &=& 0 \ , \nonumber\\[4pt] \big[Z^{i}\,,\, Z^{\dagger}_{i}\big] + \big[\rho\,,\, \rho^{\dagger} \big] &=& 3~1_{N\times N} \ , \nonumber\\[4pt] \big[Z_{i} \,,\, \Phi\big] &=& \epsilon_{i}\, Z_{i} \ , \label{adhmform} \end{eqnarray} where in the last equation there is no sum over the index $i$ and the right--hand side reflects explicitly the equivariant deformation. These sets of equations can be solved by three-dimensional harmonic oscillator algebra. The unique irreducible representation of this algebra is provided by the Fock module \begin{equation} \mathcal{H}=\mathbf{C} \big[\alpha_1^\dagger\,,\,\alpha_2^\dagger\,,\, \alpha_3^\dagger\big]|0,0,0\rangle=\bigoplus_{n_1,n_2,n_3\in \mathbf{N}_0}\, \mathbf{C} |n_1,n_2,n_3\rangle \ , \label{Fockspdef} \end{equation} where $|0,0,0\rangle$ is the Fock vacuum with $\alpha_i|0,0,0\rangle=0$ for $i=1,2,3$, and the orthonormal basis states $|n_1,n_2,n_3\rangle$ are connected by the usual action of the creation and annihilation operators $\alpha_i^\dagger$ and $\alpha_i$. 
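To make the role of the Fock module concrete, the sketch below (an illustrative finite-level truncation, not from the original text, with normalizations ignored) represents the three oscillator modes as matrices and checks that $\sum_{i=1}^{3} [\alpha_i, \alpha_i^\dagger] = 3 \cdot 1$ on all basis states below the truncation cutoff, which is the structure appearing on the right-hand side of the second equation of (\ref{adhmform}) in the abelian case:

```python
import numpy as np

def annihilation(d):
    """Truncated single-mode annihilation operator on a d-level Fock space."""
    a = np.zeros((d, d))
    for n in range(1, d):
        a[n - 1, n] = np.sqrt(n)   # a |n> = sqrt(n) |n-1>
    return a

d = 5                               # truncation level per mode (assumption)
a = annihilation(d)
I = np.eye(d)

# Three commuting modes alpha_1, alpha_2, alpha_3 on H = C^d x C^d x C^d.
alphas = [np.kron(np.kron(a, I), I),
          np.kron(np.kron(I, a), I),
          np.kron(np.kron(I, I), a)]

# S = sum_i [alpha_i, alpha_i^dagger]; exactly 3*identity on the full Fock
# module, and on the truncation only for states away from the cutoff.
S = sum(A @ A.T - A.T @ A for A in alphas)
diag = np.diag(S)

def is_below_cutoff(idx):
    """True if basis state |n1,n2,n3> has all occupation numbers < d-1."""
    n1, rem = divmod(idx, d * d)
    n2, n3 = divmod(rem, d)
    return n1 < d - 1 and n2 < d - 1 and n3 < d - 1

good = [i for i in range(d**3) if is_below_cutoff(i)]
```

On the states indexed by `good`, the diagonal of `S` equals 3, mirroring the right-hand side $3~1_{N\times N}$ (for $N=1$) of the deformed instanton equation; boundary artifacts of the truncation only appear at the cutoff level.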
The operators $Z^i$ may then be taken to act on the Hilbert space $ \mathcal{H}_W =W\otimes \mathcal{H} $ where $W\cong\mathbf{C}^N$ is a Chan-Paton multiplicity space of dimension $N$, the number of D6-branes (and the rank of the gauge theory). The space $W$ carries the nonabelian degrees of freedom and we understand $Z^i$ and $\Phi$ as $N \times N$ matrices of operators acting on $\mathcal{H}$. We can diagonalize the field $\Phi$ using the $U(N)$ gauge symmetry. One can now classify the fixed points of the nonabelian gauge theory by generalizing the arguments of~\cite{Iqbal:2003ds,swpart}. We are prescribed to compute the path integral over configurations of the Higgs field whose asymptotic limit is $\mathbf{a} = \mathrm{diag}(a_1 , \dots , a_N)\in u(1)^N$. With this choice of boundary condition the noncommutative field $\Phi$ has the form $ \Phi = \mathbf{a} \otimes {1}_{\mathcal{H}} + {1}_{N\times N} \otimes \Phi_{\mathcal{H}} $. The degeneracies of the asymptotic Higgs vevs break the gauge group $U(N)\to\prod_i\,U(k_i)$ with $\sum_i\,k_i=N \ . $ Correspondingly, the Chan-Paton multiplicity space $W$ decomposes into irreducible representations $W=\bigoplus_i\,W_i$ with $\dim_\mathbf{C} W_i=k_i$. Due to the equivariant deformation the theory now localizes on $U(1)^N$ noncommutative instantons. These correspond to ideals $\mathcal{I}$ of codimension $k$ in $\mathbf{C} [z_1 , z_2 , z_3]$ that are associated, via partial isometries, to subspaces of the full Hilbert space of the form $\bigoplus_{f\in\mathcal{I}}\, f\big(\alpha_1^\dag\,,\,\alpha_2^\dag\,,\,\alpha_3^\dag \big)|0,0,0\rangle \ $. These ideals are generated by monomials $z_1^i\,z_2^j\,z_3^k$ and are in one-to-one correspondence with three-dimensional partitions, with the triplet $(i,j,k)$ corresponding to boxes of the partition. 
More precisely, the set of solutions can be completely classified in terms of coloured partitions $ \vec\pi=(\pi_1, \ldots, \pi_N)$, which are rows of $N$ ordinary three-dimensional partitions $\pi_l$ labelled by $a_l$. We can now write the full path integral as a sum over critical points and compute the fluctuation factor around each critical point. This factor assumes the form of a ratio of functional determinants \begin{equation} \frac{\det \left(\mathrm{ad}\, \Phi \right)\, \det \left(\mathrm{ad}\, \Phi + \epsilon_1 + \epsilon_2 \right)\, \det \left(\mathrm{ad}\, \Phi + \epsilon_1 + \epsilon_3 \right)\, \det \left(\mathrm{ad}\, \Phi + \epsilon_2 + \epsilon_3 \right)}{\det \left(\mathrm{ad}\, \Phi+ \epsilon_1 + \epsilon_2 + \epsilon_3 \right)\, \det \left(\mathrm{ad}\, \Phi + \epsilon_1 \right)\, \det \left(\mathrm{ad}\, \Phi + \epsilon_2 \right)\, \det \left(\mathrm{ad}\, \Phi + \epsilon_3 \right)} \ , \end{equation} where the $\epsilon_i$ parametrize the toric action. This ratio can be computed explicitly to give a factor of $(-1)^{N |\vec\pi|}$. Combined with a similar computation for the instanton action, this gives the instanton expansion \begin{equation} \label{Znc} \mathcal{Z}_{\rm DT}^{U(1)^N}\big(\mathbf{C}^3\big) =\sum_{\vec\pi}\, (-1)^{(N+1)\,|\vec\pi|}~q^{|\vec\pi|} \ , \end{equation} where $q = - {\,\rm e}\,^{{\,{\rm i}\,} \vartheta}={\,\rm e}\,^{-g_s}$. \section{Matrix Quantum Mechanics and Coherent Sheaves} The second approach consists in the introduction of an appropriate topological matrix quantum mechanics \cite{Cirafici:2008sn}. From the string theoretical point of view this corresponds to the effective action on a gas of D0 branes that are bound to the original D6 brane on $\mathbf{C}^3$. From the perspective of the gauge theory it arises as quantization of the collective coordinates around each instanton solution and provides a higher dimensional generalization of the ADHM construction of instantons. 
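For the abelian case $N=1$, the instanton expansion (\ref{Znc}) reduces to $\sum_\pi q^{|\pi|}$, the generating function of three-dimensional (plane) partitions, which is the MacMahon function $M(q) = \prod_{n\geq 1}(1-q^n)^{-n}$. The sketch below (an illustrative cross-check, not part of the original computation) verifies the first few coefficients by brute-force enumeration of plane partitions:

```python
def partitions_bounded(n, maxpart):
    """Integer partitions of n with parts <= maxpart, as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions_bounded(n - first, first):
            yield (first,) + rest

def count_plane_partitions(n):
    """Count plane partitions of n: rows of partitions, decreasing columnwise."""
    if n == 0:
        return 1

    def extend(remaining, prev_row):
        if remaining == 0:
            return 1
        total = 0
        for k in range(1, remaining + 1):
            for row in partitions_bounded(k, prev_row[0]):
                if len(row) <= len(prev_row) and \
                   all(row[i] <= prev_row[i] for i in range(len(row))):
                    total += extend(remaining - k, row)
        return total

    return sum(extend(n - k, row)
               for k in range(1, n + 1)
               for row in partitions_bounded(k, k))

def macmahon_coefficients(N):
    """Taylor coefficients of M(q) = prod_n (1 - q^n)^(-n) up to order N."""
    c = [1] + [0] * N
    for n in range(1, N + 1):
        for _ in range(n):              # multiply by 1/(1 - q^n), n times
            for i in range(n, N + 1):
                c[i] += c[i - n]
    return c
```

Both routines reproduce the sequence $1, 1, 3, 6, 13, 24, 48, \ldots$ counting the atomic configurations of the melting crystal.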
Indeed one can see this explicitly by parametrizing each holomorphic bundle (or coherent sheaf) on the projective space $\mathbf{P}^3$ that corresponds to a compactification of the physical space $\mathbf{C}^3$ in terms of a set of algebraic matrix equations that we will call generalized ADHM equations. This can be done by using Beilinson's theorem which states that for any coherent sheaf $\mathcal{E}$ on $\mathbf P^3$ there is a spectral sequence $E_s^{p,q}$ with $E_1$-term $E_1^{p,q} = H^q \big( {\mathbf P}^3 \,,\, \mathcal{E}(-r) \otimes \Omega^{-p}_{{\mathbf P}^3}(-p) \big) \otimes \mathcal{O}_{{\mathbf P}^3}(p)$ for $p \le 0$ that converges to the original sheaf (here $\Omega_{\mathbf{P}^3}$ and $\mathcal{O}_{\mathbf{P}^3}$ are respectively the sheaf of differential forms and the structure sheaf). By the appropriate set of boundary conditions this spectral sequence degenerates at the $E_2$ term. The outcome of this procedure is that the original sheaf can be described as the only non--vanishing cohomology of a four term complex. The associated conditions yield a particular set of matrix equations plus stability conditions. One can show that this system boils down to the following set of generalized ADHM equations \begin{equation} [B_1,B_2]+I\,J = 0 \ , \qquad [B_1,B_3]+I\,K = 0 \ , \qquad [B_2,B_3] = 0 \ , \label{ADHMeqs}\end{equation} where $B_i\in \mathrm{End}(V)$, $i=1,2,3$, $I\in\mathrm{Hom}(W,V)$ and $J,K\in\mathrm{Hom}(V,W)$ and a suitable stability condition has to be imposed. The vector spaces $V$ and $W$ arise in the geometrical construction outlined above as particular cohomology groups of the sheaf $\mathcal{E}$. These equations are naturally in correspondence with the noncommutative instantons described in the previous section. 
In particular in the abelian case $N=1$ the stability conditions allow us to set $J=K=0$ and the cohomology sheaf $\mathcal{E}$ is isomorphic to the ideal $\mathcal{I}$ that enters in the description of the Hilbert scheme in terms of noncommutative instantons. As we localize the theory onto its $U(1)^N$ phase this is the relevant case. One can easily construct a cohomological matrix model starting from these equations. In this framework $V$ with $\mathrm{dim} V = k$ represents the gas of $k$ D0 branes (or the charge $k$ topological sector in the gauge theory) while $W$ stands for the D6 branes and its dimension is the rank of the gauge theory $N$. The matrices $B_i$ arise from 0--0 strings and represent the position of the coincident D0-branes inside the D6-branes. On the other hand, the field $I$ describes open strings stretching from the D6-branes to the D0-branes. It characterizes the size and orientation of the D0-branes inside the D6-branes. Other fields are necessary to close the equivariant BRST algebra and localize the theory on the generalized ADHM equations but we refer the reader to \cite{Cirafici:2008sn} for a complete treatment. In the abelian case the generalized ADHM equations ensure that the critical points can be expressed by a certain sequence of maps between the spaces $V$ and $W$. This configuration can be explicitly mapped into a three dimensional partition thus recovering the classification of the fixed points that we found in the noncommutative setting. The generalization to the $U(1)^N$ theory is simple and corresponds to $N$--tuples of three dimensional partitions. We will denote a generic fixed point $f$ as $\vec{\pi} = (\pi_1 , \dots , \pi_N)$. 
Accordingly at the fixed points the vector spaces $V$ and $W$ have the following weight decompositions \begin{equation} \label{decompos} V_f = \sum_{l=1}^N {\,\rm e}\,^{ {\,{\rm i}\,} a_l} \sum_{(i,j,k)\in \pi_l} t_1^{i-1} t_2^{j-1} t_3^{k-1} \ , \qquad W_f = \sum_{l=1}^N {\,\rm e}\,^{ {\,{\rm i}\,} a_l} \ , \end{equation} where $t_i = {\,\rm e}\,^{ {\,{\rm i}\,} \epsilon_i}$ generate the $\mathbf{T}^3$ action. The computation of the instanton factors proceeds as in the four dimensional case \cite{Nekrasov:2002qd,Bruzzo:2002xf}. For every fixed point we can describe the local structure of the moduli space via an equivariant complex that encodes the linearization of the generalized ADHM equations up to linearized (complexified) gauge invariance. The character of this complex \begin{equation} \label{character} \chi_{f} (\mathbf{C}^3)^{[k]} = W_f^* \otimes V_f - \frac{ V_f^* \otimes W_f}{t_1 t_2 t_3} + V_f^* \otimes V_f \frac{(1-t_1) (1-t_2) (1-t_3)}{t_1 t_2 t_3} \ , \end{equation} contains all the local information needed in the localization formula and can be used to compute explicitly the instanton factors following \cite{Nekrasov:2002qd,Bruzzo:2002xf}. In (\ref{character}) the subscript $f$ is stressing that the computation only holds at a particular fixed point and the conjugation acts on the elements of the weight decomposition as $t_i^* = t_i^{-1}$. A straightforward computation gives the fluctuation factor $(-1)^{N |\vec \pi|}$. To get the partition function the last missing ingredient is the instanton action (\ref{instaction}). This can be obtained by writing the universal sheaf $\mathcal{E}$ on the moduli space as $\mathcal{E} = W \oplus V \otimes (S^- \ominus S^+ )$ where $S^{\pm}$ are the positive/negative chirality spinor bundles over $\mathbf{P}^3$. 
By using the correspondence between spinors and differential forms we can decompose its Chern character at a given fixed point as \begin{equation} \mathrm{ch} (\mathcal{E}_{\vec \pi}) = W_{\vec \pi} - (1-t_1) (1-t_2) (1-t_3) V_{\vec \pi} \, . \end{equation} Collecting all pieces of information one can write down the full partition function \begin{equation} \mathcal{Z}_{\rm DT}^{U(1)^N}({\mathbf C}^3) =\sum_{f = \{ \pi_1 \cdots \pi_N \} } (-1)^{N \sum_{l=1}^N |\pi_l|} {\, {\rm e} \,} ^{{\, {\rm i} \,} \vartheta \sum_{l=1}^N |\pi_l|} \ , \end{equation} that agrees precisely with (\ref{Znc}). \section{The Coulomb Branch on a Toric Manifold} The construction just outlined carries on to the case of a general toric manifold $X$. The geometric information of a toric manifold is encoded in a trivalent graph $\Delta$. The vertices $f$ of $\Delta$ are locally isomorphic to $\mathbf{C}^3$ while the edges $e$ correspond to rational curves of K\"ahler area $t_e$. By using the localization procedure on the physical space the gauge theory localizes onto a sum of contributions associated with each vertex corresponding to three dimensional partitions and a set of propagators associated with the edges. Each propagator depends on the area $t_e$ of the rational curve and on a two dimensional partition that arises when gluing together two different three dimensional partitions as a section of the common leg. In the rank 1 case one recovers the Calabi--Yau crystal picture directly from the gauge theory. 
In the more general rank $N$ setting on the Coulomb branch one finds \begin{equation} \mathcal{Z}_{\rm DT}^{U(1)^N} (X) = \sum_{\vec\pi_f} q^{\mathbf{I}} (-1)^{(N+1) \mathbf{I}} \prod_{e \in \mathrm{edge}} (-1)^{\sum_{l,l'=1}^N |\lambda_{l,e}| |\lambda_{l',e}| m_{1,e}} {\, {\rm e} \,}^{- \sum_{l=1}^N |\lambda_{l,e}| t_e} \ , \end{equation} where $q= -{\,\rm e}\,^{ {\,{\rm i}\,} \vartheta}$ and \begin{equation} \mathbf{I} = \sum_f \sum_{l=1}^N |\pi_{l,f}| + \sum_{e \in \mathrm{edge}} \sum_{l=1}^N \sum_{(i,j) \in \lambda_{e,l}} \left( m_{1,e} (i-1) + m_{2,e} (j-1) + 1 \right) \ . \end{equation} The integers $m_{1,e}$ and $m_{2,e}$ determine the normal bundle to the rational curve corresponding to the edges. \section{Conclusions} In \cite{Cirafici:2008sn} we have studied the relationship between a six dimensional topological Yang--Mills theory and Donaldson--Thomas invariants. As a first step one can use equivariant localization to write the partition function of the noncommutative deformation of the theory as a sum over point--like instantons. These noncommutative instantons can be interpreted in purely geometrical terms as certain coherent sheaves on $\mathbf{P}^3$ through a higher dimensional generalization of the ADHM formalism. In turn this can be used to construct a topological matrix quantum mechanics that dynamically describes the stable coherent sheaves. This formalism can be used, for example, to compute the rank $N$ partition function on a toric manifold; the result is the $N$-th power of the abelian result with an $N$ dependent sign shift. This shift can be absorbed in a redefinition of the string coupling constant $g_s \longrightarrow g_s - N {\rm i} \pi$. This modification is natural from the point of view of the OSV conjecture \cite{Ooguri:2004zv} that relates the entropy of a BPS black hole with the topological string amplitude. 
The parameters that enter in the topological string amplitude are functions of the D--brane charges at the attractor point of the BPS moduli space. In the presence of D6 branes this relation is consistent with the above shift. \bigskip \section*{Acknowledgments} M.C. wishes to thank the organizers of the workshop for providing such a stimulating atmosphere in such a beautiful location. This work was supported in part by the Marie Curie Research Training Network Grants {\sl Constituents, Fundamental Forces and Symmetries of the Universe} (Project~MRTN-CT-2004-005104) and {\sl Superstring Theory} (Project~MRTN-CT-2004-512194) from the European Community's Sixth Framework Programme, and by the INTAS Grant {\sl Strings, Branes and Higher Spin Fields} (Contract~03-51-6346). \bigskip
\section{Introduction} Recently, the ATLAS Collaboration \cite{atlas} reported an experimental anomaly in $WH$ or $ZH$ production in the $q\bar q b\bar b$ final state at $\sqrt{s} = 13$ TeV with an apparent excess around the 3 TeV resonance mass region. Note that CMS also searched for the same channels \cite{cms}. Though they did not claim to observe anything peculiar, we can see that there is a visible peak of more than $2\sigma$ at around 2.7 TeV. Currently, the CMS observation does not support the 3 TeV excess of ATLAS based on the narrow-width resonance analysis. The broad-width analysis has not been fully studied, so it is hard to draw conclusions for the broad-width resonance case. We shall focus on interpreting the ATLAS result while we emphasize that the CMS result does not falsify the ATLAS result. The excessive cross section is roughly \cite{atlas} (which is estimated from the 95\% CL upper limits on the cross section curves) \begin{equation} \label{first} \sigma (pp \to W' \to WH) \times B(H \to b\bar b) \approx 5 \pm 1.5 \;{\rm fb} \;. \end{equation} A similar excess was seen in $ZH$ production. The local excesses are at about $3.2 -3.4 \sigma$ for both $WH$ and $ZH$ channels at around 3 TeV, while the global significance is about $2.2\sigma$. Nevertheless, the boosted hadronic decays of $W$ and $Z$ have substantial overlap at about 60\% level, which means that it is difficult to differentiate between the $W$ and $Z$ bosons. In the following, we focus on the excess interpreted as a 3 TeV $WH$ resonance. We attempt to interpret that there is a 3 TeV spin-1 resonance $W'$ that decays into $WH$. The $W'$ can arise from a number of extended symmetric models, e.g., $SU(2)_1 \times SU(2)_2 \times U(1)_X$ \cite{Dobrescu:2015yba,221}. With an additional $SU(2)$ symmetry, which is broken at the multi-TeV scale, there will be extra $W'$ and $Z'$ bosons, whose masses may be similar or differ depending on the symmetry-breaking pattern. 
Then the decay $W' \to WH$ can explain the excess with a resonance structure. Similarly, the $Z' \to ZH$ can explain the excess in $ZH$ production. Here we focus on the $WH$ channel. The $W'$ boson couples to the right-handed fermions with a strength $g_R$, independent of the left-handed weak coupling. The $W'$ boson can then be produced via $q\bar q'$ annihilation. The $W'$ boson can mix with the standard model (SM) $W$ boson via a mixing angle, say $\sin\phi_w$, so that the $W'$ boson can decay into $WZ$ and $WH$ with a mixing-angle suppression, and right-handed fermions. Previously, there was the 2 TeV $WZ$ and $WW$ anomaly which motivated a lot of phenomenological activity. One of the constraints was the $WH$ constraint because the Equivalence theorem (ET) states that $\Gamma(W' \to WZ) \approx \Gamma( W' \to WH)$ in the heavy $W'$ limit \cite{previous}. In the model that we are considering, it is indeed true in the alignment limit $\beta \to \pi/2+\alpha$. Here we attempt to explore how much we can deviate from the alignment limit so that the $WH$ channel can be enhanced while suppressing the $WZ$, thus satisfying the constraint from $WZ$ \cite{ATLAS:2016yqq, ATLAS:2016cwq, ATLAS:2016npe, CMS:2016mwi, CMS:2016pfl}, dijet \cite{ATLAS:dijet, CMS:dijet}, and precision Higgs data \cite{atlas-2hdm}. \footnote {The leptonic constraints on $W'$ and $Z'$ are so strong that we opt for a leptophobic nature for the $W'$ and $Z'$ bosons. } The organization of this note is as follows. In the next section, we describe the $SU(2)_1 \times SU(2)_2 \times U(1)_X$ model that we consider in this work. In Sec. III, we demonstrate the deviation from the alignment limit. In Sec. IV, we discuss all the relevant constraints. We present the results in Sec. V, and conclude and comment in Sec. VI. \section{The $SU(2)_1 \times SU(2)_2 \times U(1)_X$ model} Following \cite{Dobrescu:2015yba,221}, we consider a renormalizable model based on the $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ symmetry. 
In addition to the SM fermions and gauge bosons, this model also contains new gauge bosons $ W' $, $ Z' $, the right-handed neutrinos $ N_R $, and also some extra scalars from the extended Higgs sector: a complex $ SU(2)_R $ triplet $ T $ and a complex $ SU(2)_L \times SU(2)_R $ bidoublet $ \Sigma $. We summarize the particle contents and gauge charges in Table~\ref{tab:model} of this $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ Model. \begin{table}[h!] \caption{\small \label{tab:model} The particle contents and gauge charges of the $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ Model \cite{Dobrescu:2015yba}. } \vspace{1.0mm} \begin{ruledtabular} \begin{tabular}{ l c c c } Fields & $ SU(2)_{L} $ & $ SU(2)_R $ & $ U(1)_{B-L} $ \\ \hline $ (u_L ,d_L) $ & 2 & 1 & +1/3 \\ $ (u_R ,d_R) $ & 1 & 2 & +1/3 \\ $ (\nu_L ,l_L) $ & 2 & 1 & -1 \\ $ (N_R ,l_R) $ & 1 & 2 & -1 \\ $ \Sigma $ & 2 & 2 & 0 \\ $ T $ & 1 & 3 & +2 \\ \end{tabular} \end{ruledtabular} \end{table} We focus on the extended Higgs sector to study the mass and mixing of new gauge bosons $ W' $, $ Z' $. There are two steps of symmetry breaking from two sets of complex scalar fields, separately. First, the $ SU(2)_R $ triplet scalar $ T =(T^{++}, T^+, T^0) $ breaks $ SU(2)_R \times U(1)_{B-L} $ to $ U(1)_Y $ by acquiring a large vacuum-expectation value (VEV) at the multi-TeV scale. \[ \langle T \rangle = (0,0, u_T)^T \ . \] The heavy masses of $W'$ and $Z'$ are set by $ u_T $. Second, the $ SU(2)_L \times SU(2)_R $ bidoublet scalar, \begin{equation} \Sigma = \left( \begin{array}{cc} \Phi^{0*}_1 & \Phi^{+}_2 \\ -\Phi^{-}_1 & \Phi^0_2 \end{array}\right ) \; , \end{equation} develops a VEV at the electroweak scale $v=(v_1^2+v_2^2)^{1\over2} \approx 246$~GeV. 
\begin{equation} \langle \Sigma \rangle = \frac{1}{\sqrt{2}} \left( \begin{array}{cc} v_1 & 0 \\ 0 & e^{i\alpha_\Sigma} v_2 \end{array} \right ) \; = \frac{v}{\sqrt{2}} \left( \begin{array}{cc} \cos\beta & 0 \\ 0 & e^{i\alpha_\Sigma} \sin\beta \end{array} \right ) \; , \end{equation} which further breaks $ SU(2)_L \times U(1)_Y $ to $ U(1)_Q $, where $Q=T_3^L+T_3^R+\fr12(B-L)$. The phase $ \alpha_\Sigma$ is CP-violating, and we do not include its effects in this work. The ratio $ \tan\beta=v_2/v_1 $ of two VEV's follows the same notation as two-Higgs-doublet models (2HDM). This symmetry breaking induces a small mixing between the charged gauge bosons. Explicitly, the field content of $\Sigma$ is given by \begin{eqnarray} \Phi^0_1 &=& \fr1{\sqrt{2}}[ v_1+(-H\sin\alpha +H'\cos\alpha -iA^0\sin\beta +iG^0\cos\beta ) ] \ , \nonumber \\ \Phi^0_2 &=& \fr1{\sqrt{2}}[ v_2+( H\cos\alpha +H'\sin\alpha +iA^0\cos\beta +iG^0\sin\beta ) ] \ , \nonumber \\ \Phi^+_1 &=& \cos\beta G^+ - \sin\beta H^+ \ , \nonumber \\ \Phi^+_2 &=& \sin\beta G^+ + \cos\beta H^+ \ , \label{eq:phis} \end{eqnarray} with the $H$ being the observed 125 GeV Higgs boson, $H'$ the heavy Higgs boson, $H^\pm$ the charged Higgs boson, $A$ the pseudoscalar Higgs boson, and $G^{\pm}$, $G^0$ the Nambu-Goldstone bosons. We are interested in the energy scale $u_T$ much larger than the electroweak scale $v$. Therefore, the scalar fields from the triplet $T$ are decoupled from the electroweak scale. At the energy scale lower than $u_T$, the scalar sector only consists of the bidoublet $\Sigma$, which is the same as the 2HDM with the doublet fields $H^T_1=(\Phi^+_1,\Phi^0_1)^T$ and $H^T_2=(\Phi^+_2,\Phi^0_2)^T$~\cite{Dobrescu:2015yba}. 
The electrically-charged states, $ W^{\pm}_L $ and $ W^{\pm}_R $, of the $ SU(2)_L $ and $ SU(2)_R $ symmetries mix to form the physical gauge bosons, $ W^{\pm} $ and $ W'^{\pm} $, \begin{equation} \left( \begin{array}{c} W^{\pm} \\ W'^{\pm} \end{array} \right ) = \left( \begin{array}{cc} \cos\phi_w & \sin\phi_w \\ -\sin\phi_w & \cos\phi_w \end{array} \right ) \; \left( \begin{array}{c} W^{\pm}_L \\ W^{\pm}_R \end{array} \right ) \;. \end{equation} The $ W^{\pm}_L - W^{\pm}_R $ mixing angle $ \phi_w $ satisfies \begin{equation} \sin\phi_w = \frac{g_R}{g_L}\left(\frac{m_W}{m_{W'}}\right)^2 \sin2\beta\;, \end{equation} and the $ W $ and $ W' $ masses are given by \begin{equation} m_W = \fr12 g_L v \ ,\;\;\; m_{W'} = g_R u_T\;, \end{equation} where $ g_L $ and $ g_R $ are the $SU(2)_L \times SU(2)_R$ gauge couplings. We assume that the right-handed neutrino is heavier than the $W'$, such that the decay $W' \to l_R N_R$ is kinematically forbidden. There are other possible decay modes of the $W'$ into other Higgs bosons \cite{Dobrescu:2015yba} if they are kinematically allowed: e.g., \[ W'^+ \to H^+A,\;Z H^+,\; W^+ H',\; W^+ A,\; H^+ H,\; H^+ H' \;. \] Such decay widths depend on the mass parameters and are highly model dependent, so we treat the sum of these decay widths as a single free parameter, denoted by $\Gamma^{\rm other}_{W'}$, subject to the constraints discussed below. \section{Deviations from the Alignment limit} In this section, we derive the $W'WZ$ and $W'WH$ couplings in this $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ model, using the 2HDM convention, by rewriting the bidoublet $\Sigma$ in terms of the two doublets $(\Phi^+_1,\Phi^0_1)^T$ and $(\Phi^+_2,\Phi^0_2)^T$~\cite{Dobrescu:2015yba}. The deviation from the equivalence theorem (ET), $\Gamma(W' \to WZ)\neq \Gamma(W' \to WH)$, can be realized if the mixing angles $\alpha$ and $\beta$ of the 2HDM stay away from the alignment limit. Conversely, the ET prediction is restored when $\beta \to \alpha+\pi/2$.
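That the two decay rates coincide in the alignment limit is a simple trigonometric statement (a worked step of ours): setting $\alpha = \beta-\fr\pi2$,
$$ \cos(\alpha+\beta) = \cos\!\left(2\beta-\fr\pi2\right) = \sin2\beta \ , $$
so the $W'WH$ coupling, proportional to $\cos(\alpha+\beta)$ as derived below, matches the $\sin2\beta$ dependence of the $W'WZ$ one, and the two widths in Eq.~(\ref{eq:widths}) become equal.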
The mass mixing term between $W$ and $W'$ comes from the bidoublet. With the VEVs of the decomposed doublets denoted by $v_1=v\cos\beta$ and $v_2=v\sin\beta$, it is given by \begin{equation} m_{WW'}^2 = 2 \times \frac{g_R}{\sqrt{2}} \frac{v_1}{\sqrt{2}} \frac{g_L}{\sqrt{2}} \frac{v_2}{\sqrt{2}} =\frac{g_R g_L}{2} v_1 v_2 \ . \end{equation} Note that the factor of 2 in front comes from the two ways of matching. The induced mixing is thus described by \begin{equation} \left(\begin{array}{c} W \\ W' \end{array}\right) =\left(\begin{array}{cc} \cos\phi_w & \sin\phi_w \\ -\sin\phi_w & \cos\phi_w \end{array}\right) \left(\begin{array}{c} W_L \\ W_R \end{array}\right) \ ,\quad \sin\phi_w \approx m_{WW'}^2/m_{W'}^2 \ . \end{equation} Similarly, there is mixing between $Z$ and $Z'$. The mixing angle $\phi_w$ induces the coupling $W^\dagger W'Z$ from the gauge vertices $W_L^\dagger W_L Z$ and $W_R^\dagger W_R Z$ of different strengths, according to the SM pattern $T^L_3-Q\sin^2\theta_W$. The two contributions sum up to $$ \frac{g_L}{\cos\theta_W}\sin\phi_w \left[ -(0-\sin^2\theta_W) +(1-\sin^2\theta_W)\right] = \frac{g_L}{\cos\theta_W}\sin\phi_w \ .$$ \begin{equation} (\hbox{coupling of } W'^\dagger W Z)\equiv g_{W'WZ} =\frac12\frac{g_L^2 g_R v^2}{ m_{W'}^2\cos\theta_W} \sin\beta\cos\beta = g_R \frac{m_W m_Z}{m_{W'}^2} \sin2\beta \ . \label{eq:WPWZcoupling} \end{equation} However, the leading vertex $W'^\dagger W H$ does {\it not} arise directly from the mixing, but is derived by the following steps, \begin{equation} (\hbox{coupling of } W'^\dagger W H)\equiv g_{W'WH} =\frac{\partial m_{WW'}^2 }{\partial (v_1/\sqrt2)} \frac{\partial \Phi_1^0 }{\partial H}+ \frac{\partial m_{WW'}^2 }{\partial (v_2/\sqrt2)} \frac{\partial \Phi_2^0 }{\partial H} \ . \end{equation} Therefore, \begin{equation} g_{W'WH} =\frac{g_L g_R v}{2} \left[-\frac{v_2}{v}\sin\alpha +\frac{v_1}{v}\cos\alpha \right] = g_R m_W \cos(\alpha+\beta) \ .
\end{equation} Similarly, the Goldstone boson $G^0$, associated with $Z$, accompanies $H$ in the neutral components $\Phi^0_{1,2}$: \begin{equation} (\hbox{coupling of } W'^\dagger W G^0) \equiv g_{W'W G^0} = \frac{\partial m_{WW'}^2 }{\partial (v_1/\sqrt2)} \frac{\partial \Phi_1^0 }{\partial G^0}+ \frac{\partial m_{WW'}^2 }{\partial (v_2/\sqrt2)} \frac{\partial \Phi_2^0 }{\partial G^0} \ \ . \end{equation} \begin{equation} g_{W'W G^0} =\frac{g_L g_R v}{2} i \left[\frac{v_2}{v}\cos\beta +\frac{v_1}{v}\sin\beta \right] =i\frac{g_L g_R v}{2} \left[ 2\sin\beta\cos\beta \right] =i g_R m_W \sin2\beta \ . \end{equation} In summary, {$ g_{W'W G^0} =i g_R m_W \sin2\beta$} and {$ g_{W'W H} = g_R m_W \cos(\alpha+\beta)$}. Thus, we obtain the decay widths for $W' \to WZ$ and $W' \to WH$ in the limit $m_{W'} \gg m_{W,Z,H}$: \begin{eqnarray} \Gamma(W' \to WZ) &\simeq & \frac{g^2_R}{192 \pi} m_{W'}\, \sin^2 2\beta \,, \nonumber \\ \Gamma(W' \to WH) &\simeq & \frac{g^2_R}{192 \pi} m_{W'}\, \cos^2 (\alpha+\beta) \,. \label{eq:widths} \end{eqnarray} In the alignment limit, $\alpha \to \beta-\fr\pi2$, the two widths above become equal. As the ET identifies $G^0$ with the longitudinal $Z$, we expect the relations \begin{equation} \Gamma(W'\to WZ) \approx \Gamma(W'\to WG^0) \approx \Gamma(W'\to WH) \hbox{ as } \alpha \to \beta-\fr\pi2 \ .\end{equation} We now illustrate the operation of the ET explicitly. The longitudinal $W^+$ is identified with $G^+=\cos\beta \Phi^+_1 + \sin\beta \Phi^+_2 $ in Eq.~(\ref{eq:phis}). The action of $W'$ moves entries within the same row in the $2 \times 2$ matrix form of the bidoublet. Therefore the amplitude is $$ {\cal M}(W'\to G^+(p^+) G^0(p^0) ) = \frac{g_R}{\sqrt2} \frac1{\sqrt2} (\cos\beta\sin\beta + \cos\beta\sin\beta) \ (p^+ -p^0)\cdot \epsilon' \ ,$$ \begin{equation} {\cal M}(W'\to G^+(p^+) G^0(p^0) ) =\frac{g_R}{2}\sin2\beta \ (p^+ -p^0)\cdot \epsilon' \ .
\end{equation} The factor $(p^+ -p^0)$ is the Feynman amplitude for the convective current, contracted with the polarization vector $\epsilon'$ of the $W'$. The above amplitude should give the same width as $\Gamma(W'\to W G^0)$. Indeed it does, because $\fr12(p^+ -p^0)\cdot \epsilon' = p^+\cdot \epsilon' \approx m_W \epsilon^+_L\cdot \epsilon'$. On the other hand, we can start from the tri-gauge coupling of the anti-symmetric Lorentz form, $$ (p^+-p^0)\cdot \epsilon' (\epsilon^+\cdot\epsilon^0) +(p^0+P)\cdot \epsilon^+ (\epsilon^0\cdot\epsilon') +(-P-p^+)\cdot \epsilon^0 (\epsilon'\cdot\epsilon^+) $$ $$ \qquad = (2 p^+ \cdot \epsilon') (\epsilon^+ \cdot\epsilon^0) + (2 p^0 \cdot \epsilon^+) (\epsilon^0\cdot\epsilon') -(2 p^+\cdot \epsilon^0) (\epsilon'\cdot\epsilon^+) \ .$$ Now using the ET, we concentrate on the longitudinally polarized $W$ with $\epsilon^+\approx p^+/m_W$ and $Z$ with $\epsilon^0\approx p^0/m_Z$. Up to an overall factor $\fr1{m_W m_Z}$, we obtain $$ (2 p^+ \cdot \epsilon') (p^+ \cdot p^0) + (2 p^0 \cdot p ^+) (p^0\cdot\epsilon') - (2 p^+\cdot p^0) (\epsilon'\cdot p^+) $$ $$ \qquad = (2 p^0 \cdot p ^+) (p^0\cdot\epsilon') = m_{W'}^2 (p^0\cdot\epsilon') \ .$$ Therefore, the longitudinal amplitude from Eq.~(\ref{eq:WPWZcoupling}) agrees with the other calculation based on $G^+G^0$: $$ {\cal M}(W'\to WZ) = g_R\sin2\beta\, p^0\cdot \epsilon' = \frac{1}{2} g_R \sin2\beta m_{W'} \hat{\BM{p}}\BM{\cdot\epsilon'} = \frac{1}{2} g_R m_{W'} \cos\theta \sin2\beta \ .$$ Integrating over the angle $\theta$, the decay width is \begin{equation} \Gamma(W'\to WZ)=\frac{1}{2m_{W'}} \int |{\cal M}(W'\to WZ)|^2 \frac{d\cos\theta}{2} \frac{1}{8\pi} =\frac{g_R^2\sin^22\beta}{192\pi}m_{W'} \ , \end{equation} which is in agreement with Eq.~(\ref{eq:widths}). Following a similar method, we can verify the $WWH$ coupling in this model by using $$ m^2_{WW}=\frac{g^2_Lv^2}{4}=\frac{g^2_L}{4}(v^2_1+v^2_2)\,.
$$ Then the coupling of $WWH$ is \begin{equation} g_{WWH} =\frac{\partial m_{WW}^2 }{\partial (v_1/\sqrt2)} \frac{\partial \Phi_1^0 }{\partial H}+ \frac{\partial m_{WW}^2 }{\partial (v_2/\sqrt2)} \frac{\partial \Phi_2^0 }{\partial H}=g_L m_W \sin(\beta-\alpha)\,. \end{equation} In the alignment limit, $\beta \to \alpha + \pi/2$, the $WWH$ coupling reduces to the SM Higgs-gauge boson coupling. The gauge-boson and fermionic couplings of the 125 GeV Higgs boson are now well measured by ATLAS and CMS, especially the couplings to the massive gauge bosons. The deviations from the SM values are required to be less than about 10\%, i.e., $|\sin(\beta-\alpha)|\gtrsim0.9$, which implies the allowed range $|\cos(\beta-\alpha)|\lesssim 0.44$. Weaker limits on the couplings to up- and down-type quarks from Higgs precision data also constrain the parameter region of $\alpha$ and $\beta$. Therefore, in this model framework, the Higgs precision data set the boundary on the deviation from the alignment limit, and thus restrict the ratio of $\Gamma(W' \to WZ)$ and $\Gamma(W' \to WH)$. The detailed allowed region of $\alpha$ and $\beta$ from Higgs precision data depends on the type of 2HDM. For the allowed parameter region, we refer to Ref.~\cite{atlas-2hdm}, where the Type-I, Type-II, Lepton-specific, and Flipped 2HDMs have been studied. A universal feature of their results is that, in the small $\tan \beta\simeq 0.1$ region, the allowed $\cos(\beta-\alpha)$ is close to the alignment limit, i.e., $|\cos(\beta-\alpha)|\lesssim 0.05$. This is because the universal up-type quark Yukawa coupling among the 2HDMs is enhanced by a factor of $1/\sin\beta$. For the region $\tan\beta\gtrsim 2$, only the Type-I case allows a more dramatic deviation from the alignment limit. For instance, taking $\tan\beta = 2.5$, the allowed range from Higgs precision data is $ -0.37 < \cos(\beta-\alpha) < 0.42$.
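The numerical bound just quoted follows from simple trigonometry (a worked step of ours, assuming the $10\%$ accuracy on the gauge-Higgs couplings):
$$ |\sin(\beta-\alpha)| \gtrsim 0.9 \;\Longrightarrow\; |\cos(\beta-\alpha)| \lesssim \sqrt{1-0.9^2} = \sqrt{0.19} \approx 0.44 \ . $$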
This is because only in the Type-I case do all the up-type, down-type, and leptonic Yukawa couplings deviate from the SM values by the same factor $(\cos\alpha/\sin\beta)$, so that a larger $\tan\beta$ does not enhance any of these couplings; they are therefore less constrained by Higgs precision data. We shall use the results for the Type-I 2HDM obtained in Ref.~\cite{atlas-2hdm} to restrict the parameters of our model. \section{Constraints from existing data} Recently, both the ATLAS and CMS collaborations have published their $W'$ searches in different decay channels at 13 TeV, including the fermionic final states $ l^{\pm}\nu $~\cite{ATLAS:lnu}, dijet~\cite{ATLAS:dijet, CMS:dijet}, and $ t b $~\cite{CMS:2016wqa}, as well as the bosonic final state $ W^{\pm}Z $~\cite{ATLAS:2016yqq, ATLAS:2016cwq, ATLAS:2016npe, CMS:2016mwi, CMS:2016pfl}. We list all the constraints from these searches in Table~\ref{tab:constraint}, where $j$ denotes all light flavors, $ l $ includes $(e, \mu)$, $\nu$ includes $(\nu_{e}, \nu_{\mu}, \nu_\tau)$, and $ J $ denotes large-$R$ jets ($W$ jets or $Z$ jets). \begin{table}[h!] \caption{\small \label{tab:constraint} Upper limits from searches in different decay modes of a $W'$ with $ m_{W'}\sim 3 $ TeV at 13 TeV, for both ATLAS and CMS. Here $j$ includes all light flavors, $ l $ and $ \nu $ include $(e, \mu)$ and $(\nu_{e}, \nu_{\mu})$, respectively. Finally, $ J $ denotes large-$R$ jets ($W$ jet or $Z$ jet). } \vspace{1.0mm} \begin{ruledtabular} \begin{tabular}{ l c c c } & Process & Upper Bound & Ref.
\\ \hline ATLAS & $ p p\rightarrow W^{\prime\pm}\rightarrow l^{\pm}\nu $ & $ \leq 0.243 $ (fb) & \cite{ATLAS:lnu} \\ ATLAS & $ p p\rightarrow W^{\prime\pm}\rightarrow j j' $ & $ \leq 69.5 $ (fb) & \cite{ATLAS:dijet} \\ CMS & $ p p\rightarrow W^{\prime\pm}\rightarrow j j' $ & $ \leq 41.7 $ (fb) & \cite{CMS:dijet} \\ CMS ($ t b\rightarrow l^{\pm}\nu b b $) & $ p p\rightarrow W^{\prime\pm}\rightarrow t b $ & $ \leq 84.4 $ (fb) & \cite{CMS:2016wqa} \\ ATLAS ($ W^{\pm}Z\rightarrow J J $) & $ p p\rightarrow W^{\prime\pm}\rightarrow W^{\pm}Z $ & $ \leq 3.0 $ (fb) & \cite{ATLAS:2016yqq} \\ ATLAS ($ W^{\pm}Z\rightarrow l^{\pm}\nu q q' $) & $ p p\rightarrow W^{\prime\pm}\rightarrow W^{\pm}Z $ & $ \leq 5.5 $ (fb) & \cite{ATLAS:2016cwq} \\ ATLAS ($ W^{\pm}Z\rightarrow q q' l^+ l^- $) & $ p p\rightarrow W^{\prime\pm}\rightarrow W^{\pm}Z $ & $ \leq 10.4 $ (fb) & \cite{ATLAS:2016npe} \\ ATLAS ($ W^{\pm}Z\rightarrow q q'\nu\nu $) & $ p p\rightarrow W^{\prime\pm}\rightarrow W^{\pm}Z $ & $ \leq 3.0 $ (fb) & \cite{ATLAS:2016npe} \\ CMS ($ W^{\pm}Z\rightarrow J J $) & $ p p\rightarrow W^{\prime\pm}\rightarrow W^{\pm}Z $ & $ \leq 3.2 $ (fb) & \cite{CMS:2016mwi} \\ CMS ($ W^{\pm}Z\rightarrow l^{\pm}\nu q q $) & $ p p\rightarrow W^{\prime\pm}\rightarrow W^{\pm}Z $ & $ \leq 7.0 $ (fb) & \cite{CMS:2016pfl} \\ \end{tabular} \end{ruledtabular} \end{table} As we can see from Table~\ref{tab:constraint}, the strongest constraint comes from the $ W^{\prime\pm}\rightarrow l^{\pm}\nu $ searches, but we adopt the leptophobic version of the model, so that this constraint does not seriously affect our results. On the other hand, the dijet constraints from both the ATLAS and CMS analyses depend on the acceptance ($A$) and the width-to-mass ratio ($\Gamma /M$). Note that the dijet limits quoted in Table~\ref{tab:constraint} are only for the narrow-width resonance scenario.
We follow their analyses by using $A=0.4\ (0.6)$ for the ATLAS~\cite{ATLAS:dijet} (CMS~\cite{CMS:dijet}) analysis, and rescale the limits for our case with the width-to-mass ratio effects taken from Table~2 of Ref.~\cite{ATLAS:dijet} for the ATLAS analysis and from Table~4 of Ref.~\cite{Khachatryan:2015sja} for the CMS analysis. \footnote{ Since the width-to-mass ratio effects for $W^{\prime\pm}\rightarrow t b $ and $ W^{\prime\pm}\rightarrow W^{\pm}Z $ are not available for either the ATLAS or the CMS analysis, we conservatively use the original constraints of their publications, obtained with the narrow-width approximation. } Another set of constraints comes from the precision Higgs boson data, including the gauge-Higgs couplings, the Yukawa couplings, and the loop-induced $H\gamma\gamma$ and $Hgg$ couplings. In 2HDMs, such constraints can be recast in terms of $\tan\beta$ and $\cos(\beta - \alpha)$. The excluded region in the parameter space of the Type-I 2HDM is shown explicitly in the upper-left panel of Fig.~\ref{f1} \cite{atlas-2hdm}. \section{Results} \begin{figure}[t!] \centering \includegraphics[height=3.2in,angle=-90]{cba_tb.pdf} \includegraphics[height=3.2in,angle=-90]{cba_gR.pdf} \includegraphics[height=3.2in,angle=-90]{width_gR.pdf} \includegraphics[height=3.2in,angle=-90]{inv_gR.pdf} \caption{\small \label{f1} The red-dotted points for $m_{W'}=3$ TeV satisfy the signal production cross section $\sigma(pp \to W' \to WH)\geq 4.5$ fb, the upper limits listed in Table~\ref{tab:constraint}, and the dijet upper limit adapted for the broad-width resonance. The cyan (green) hatched region was excluded by the Higgs precision data of the Type-I 2HDM~\cite{atlas-2hdm,atlas_cms_data}. } \end{figure} \begin{figure}[t!] \centering \includegraphics[height=4.2in,angle=-90]{wh.pdf} \caption{\small \label{f2} The $m_{JJ}$ distribution of the ATLAS 2-tag $WH$ data points and the SM background (blue-solid histogram) are from Ref.~\cite{atlas}. The $W' \to WH$ contribution added to the background is indicated with the red-dashed histogram.
The parameters are $\cos(\beta-\alpha)=-0.3$, $\tan\beta=2.41$, $g_R=1.358$, $\Gamma^{\rm other}_{W'}=0.185\times m_{W'}$, and $m_{W'}=3$ TeV, which give $\sigma(pp \to W' \to WH)=9.7$ fb and $\Gamma_{W'}=0.3\times m_{W'}$. } \end{figure} In Fig.~\ref{f1}, we show the aforementioned experimental constraints on the parameter space of the $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ model, including the non-standard $W'$ decay width $\Gamma^{\rm other}_{W'}$. The red-dotted points satisfy the requirement on the signal cross section \begin{equation} \label{x-sec45} \sigma(pp \to W') \times B(W' \to WH) \geq 4.5 \; {\rm fb}, \end{equation} evaluated in the narrow-width approximation, and the upper limits listed in Table~\ref{tab:constraint}, except for the dijet upper limit. The dijet limits are adapted to the broad-width-resonance case, following the procedures in Refs.~\cite{ATLAS:dijet,Khachatryan:2015sja}. The excess bump in the $m_{JJ}$ distribution of the 2-tag $WH$ channel from ATLAS~\cite{atlas} is not necessarily a narrow resonance; likewise, we do not restrict the width of the $W'$ to be narrow. The cyan (green) hatched region was excluded by the combined 7 and 8 TeV ATLAS and CMS signal strength data (the ATLAS data only)~\cite{atlas-2hdm,atlas_cms_data}. The shifting of the hatched region is mainly due to the change in the diphoton signal strength from $\mu_{\gamma\gamma}(ggF) = 1.32 \pm 0.38$ to $1.10 ^{+0.23}_{-0.22}$. Most of the red-dotted points are ruled out by this constraint. Nevertheless, as shown in Fig.~\ref{f1}, there remains a small region of parameter space that evades all the aforementioned constraints, including those listed in Table~\ref{tab:constraint} (with the modified dijet constraints) and the Higgs precision data, while satisfying the cross section requirement in Eq.~(\ref{x-sec45}).
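For orientation, the branching ratio entering Eq.~(\ref{x-sec45}) is built schematically from the partial widths discussed above (our sketch, with $\Gamma_{jj}$ and $\Gamma_{tb}$ denoting the fermionic widths fixed by $g_R$ under the leptophobic assumption):
$$ B(W' \to WH) = \frac{\Gamma(W'\to WH)}{\Gamma_{jj}+\Gamma_{tb}+\Gamma(W'\to WZ)+\Gamma(W'\to WH)+\Gamma^{\rm other}_{W'}} \ , $$
which makes explicit how the free parameter $\Gamma^{\rm other}_{W'}$ dilutes the dijet branching ratio, at the price of also reducing $B(W'\to WH)$.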
This small region corresponds to the parameters $\cos(\beta-\alpha) \simeq -0.3$, $\tan\beta \simeq 2.41$, $g_R \simeq 1.358$, and $\Gamma^{\rm other}_{W'} \simeq 0.185\times m_{W'}$. It gives a cross section of $\sigma(pp \to W') \times B(W' \to WH ) \approx 4.6$ fb in the narrow-width approximation. However, if we abandon the narrow-width approximation and adopt the full calculation, it gives a cross section of $\sigma(pp \to W' \to WH)=9.7$ fb with $\Gamma_{W'}=0.3\times m_{W'}$. Thus, $\sigma(pp \to W' \to WH) \times B(H\to b \bar b) \approx 5.2 $ fb,\footnote{ We employed the branching ratio $B(H \to b\bar b) =0.54$ in the Type-I 2HDM for $\cos(\beta-\alpha)=-0.3$ and $\tan\beta = 2.41$.} which is within the range shown by the ATLAS data in Eq.~(\ref{first}). Note that the $K$ factor for the process is roughly $1.3$ at LHC energies, but for consistency with the background estimate we do not apply it. For this benchmark point, the $W'$ contribution to the $m_{JJ}$ distribution is shown in Fig.~\ref{f2} with the red-dashed histogram, where $m_{JJ}$ is the invariant mass of the $W$ and $H$ hadronic jets. We can see that this broad-width $W'$ provides an interpretation for the three events observed around $m_{JJ}=3$ TeV by ATLAS. Therefore, the allowed region, though small, can explain the excess bump observed in the $WH$ channel. Additional comments are in order here. From the upper-left panel in Fig.~\ref{f1}, the distribution of the red-dotted points is symmetric under the exchange $\tan\beta \leftrightarrow 1/\tan\beta$ and $\cos(\beta-\alpha)\leftrightarrow -\cos(\beta-\alpha)$. This is because the ratio between $\Gamma(W' \to WH)$ and $\Gamma(W' \to WZ)$ can be rewritten, using Eq.~(\ref{eq:widths}), as \begin{equation} \frac{\Gamma(W' \to WH)}{\Gamma(W' \to WZ)}=\frac{\cos^2(\beta+\alpha)}{\sin^2(2\beta)} =\left\{ \frac{1}{2}\left[ \frac{1}{\tan\beta}-\tan\beta \right] \cos(\beta-\alpha)+\sin(\beta-\alpha)\right\}^2.
\end{equation} Also, from the lower-right panel we can see that, without the non-standard decays of the $W'$, the $W'$ of the $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ model has no viable parameter space left to explain the $WH$ excess observed at ATLAS, mainly due to the dijet constraint. \section{Conclusions} We have studied a unified model based on $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$, which is broken at the multi-TeV scale down to the SM gauge symmetry. We have attempted to use a $W'$ gauge boson of mass 3 TeV to interpret the excess bump seen in the ATLAS $W H \to (q \bar q') (b\bar b)$ data. We have shown that such an interpretation faces very strong constraints from the dijet and $WZ$ data, as well as from the precision Higgs data. Yet, we are able to find a viable parameter space region, though small, that can accommodate all the existing data and provide an explanation for the excess bump at 3 TeV. The largest cross section that we obtain is $\sigma(pp \to W' \to WH) \times B(H \to b \bar b) \simeq 5.2 $ fb, which is roughly equal to the experimental result shown in Eq.~(\ref{first}). A few comments are offered as follows. \begin{enumerate} \item Below the symmetry breaking scale of $SU(2)_L \times SU(2)_R \times U(1)_{B-L} \to SU(2)_L \times U(1)_Y$, the Higgs field can be recast into two doublet Higgs fields, in a manner similar to the conventional 2HDM. Therefore, the model is also subject to the constraints from the precision Higgs data. The ATLAS publication \cite{atlas-2hdm} has presented the excluded regions in various 2HDMs. We adopted the least restricted one -- Type I -- in this work, and showed the excluded region in the upper-left panel of Fig.~\ref{f1}. All the other types of 2HDMs are more severely constrained, and have no allowed region when superimposed on our model. \item The mass spectrum of $A$, $H^+$, and $H'$ will have interesting effects on flavor physics and low energy constraints.
First of all, $B$ physics is sensitive to the charged Higgs mass, e.g., via $b\to s \gamma$, $B$-$\overline{B}$ mixing, and $B_s \to \mu^+ \mu^-$. However, in the Type-I 2HDM all Yukawa couplings are proportional to $\cos\alpha/\sin\beta$. The Higgs precision data require $\sin(\beta-\alpha) \approx 1$, so that $\alpha \simeq \beta - \pi/2$, which implies $\cos\alpha/\sin\beta \simeq 1$. Hence, there is no $\tan\beta$ enhancement, in contrast to the Type-II model. Therefore, as long as $\tan\beta \gtrsim 1$, the constraint on the charged Higgs mass is rather weak. Another important constraint is the $\rho$ parameter (or $\Delta T$) being very close to 1 -- the custodial limit. It can be fulfilled by taking the mass splitting among $A, H', H^+$ to be small. We therefore set $m_A \approx m_{H'} \approx m_{H^+}$. \item We have adopted the leptophobic condition for the $W'$ boson by assuming that the right-handed neutrino is heavier than the $W'$, so that the leptonic decay is kinematically forbidden. \item Note that the boson-tagged jets from $W$ and $Z$ bosons overlap at about the 60\% level. We do not work out the $Z' \to ZH$ channel in this work, but it can be done similarly. However, a leptophobic version is a must for the $Z'$ in order to avoid the very strong dilepton limits. \item The dijet limit on $pp \to W' \to jj$ presents the most stringent constraint on the model. We have to adopt other decay modes in order to dilute the branching ratio into dijets. Possible decay modes are $ W'^+ \to H^+A,\;Z H^+,\; W^+ H',\; W^+ A,\; H^+ H,\; H^+ H'$. Searches for these modes serve as further checks on the model. \item The ATLAS data (and also the CMS data) did not indicate a narrow resonance at 3 TeV. Therefore, we introduce one more (somewhat restricted) parameter, $\Gamma^{\rm other}_{W'}$, to alleviate the constraint from the dijet channel. As shown in Fig.~\ref{f2}, the resonance width is rather wide; for our benchmark point, the total width is $\Gamma_{W'} = 0.3 \times m_{W'}$.
\item Although there are some direct searches for $A$ and $H'$ at the LHC~\cite{2HDM:Collider}, the constraints for the Type-I 2HDM are not yet very stringent. Conservatively, we can focus on heavy Higgs bosons around $500 - 1000$ GeV, and the interesting signatures for this mass range can be categorized according to their final states: \begin{eqnarray} \hbox{I.} & \quad& p p\rightarrow W^{\prime +} \rightarrow H^+Z/H \rightarrow t\overline{b} b\overline{b}\, (\tau^+\tau^-) \rightarrow W^+ +4b\quad {\rm or} \quad W^+ +2b2\tau \;. \nonumber\\ \hbox{II.} &\quad& p p\rightarrow W^{\prime +} \rightarrow H^+A/H' \rightarrow t\overline{b}t\overline{t} \rightarrow W^+W^+W^- +4b \;. \nonumber\\ \hbox{III.} &\quad& p p\rightarrow W^{\prime +} \rightarrow W^+A/H' \rightarrow W^+t\overline{t} \rightarrow W^+W^+W^- +2b \;. \nonumber \end{eqnarray} In the second one, the $W^+ W^+ W^-$ can decay into a same-sign dilepton pair and a pair of jets plus missing energy. Indeed, it has been searched for at the LHC~\cite{ss_dilepton_8tev}. Many other possible final states consisting of multi-leptons and jets can also be searched for. All these channels should be explored if the excess of the 3 TeV $WH$ resonance becomes established in future data. \end{enumerate} \section*{\bf Acknowledgments} This research was supported in part by the MoST of Taiwan under Grant No. MOST-105-2112-M-007-028-MY3, by the U.S. DOE under Grant No. DE-FG-02-12ER41811 at UIC, and by the World Premier International Research Center Initiative (WPI), MEXT, Japan.
\section{Introduction} Similarly to Satie's understanding of music as ``measuring sound,'' Quantum Optics can be understood as the science of measuring photon correlations. This led to one of the most recent revolutions concerning our understanding of light, establishing among other things that coherence is not a feature of monochromaticity but a quality of photon correlators to factorize. Most of the field has concerned itself with time-resolved photon correlations but, to get a comprehensive picture, one should also describe correlations in other variables (polarization, position, etc.). One is particularly important in any dynamical system: energy. Being a quantum mechanical problem, and since energy is $\hbar$ times the frequency, this brings head-on the problem of conjugate variables and the uncertainty principle~\cite{arXiv_delvalle18a}. Of particular interest is Resonance Fluorescence (driving resonantly a two-level system), whose spectral shape---the Mollow triplet---posed the question of peak-to-peak correlations in the early days of the field~\cite{cohentannoudji77a}. Thanks to a new technique to compute exactly frequency- and time-resolved photon correlations~\cite{delvalle12a}, we have been able to provide the full landscape of $N$-photon correlations from nontrivial systems~\cite{gonzaleztudela13a}. This has been experimentally measured~\cite{peiris15a} and found to be in excellent agreement with the theoretical predictions. Such an approach reveals a new class of transitions, the \emph{leapfrog processes}, that involve $N$-photon transitions over $N-1$ manifolds of excitation. This makes for strongly quantum correlated emission~\cite{sanchezmunoz14a}. Intercepting (e.g., by filtering) such degenerate leapfrog transitions allows one to realize a regime of pure $N$-photon emission~\cite{sanchezmunoz14a,sanchezmunoz18a}.
The landscape of correlations, however, extends far beyond the case of degenerate photons, and a rich hyperspace of photon correlations exists at the ``\emph{photon-bundle}'' level~\cite{lopezcarreno17a}, where one can correlate not simply photons but groups, or ``bundles,'' of them. This should allow such a system to serve as a rich class of \emph{heralded $N$-photon sources}. Such sources, if realized, would clearly have strong implications for quantum spectroscopy (by exciting optical targets with this new type of quantum light)~\cite{lopezcarreno15a,lopezcarreno16}. In this text, written on the occasion of METANANO 2018 in Sochi, we present some illustrative and original results, highlighting the temporal aspect on the one hand (for some fixed frequency) and the frequency aspect on the other hand (for some fixed time, usually the case of coincidences, as this is the case which usually draws the greatest attention). While the formalism~\cite{delvalle12a} covers these two aspects simultaneously, it is helpful and/or enlightening to look at them separately. Since readers skimming hastily may otherwise get a wrong impression of the generality of the results, we also present a case where correlations in \emph{both} time and frequency are displayed. To emphasize that the method is exact, and to see how it compares with previous theories, which were not, we show explicit calculations for both cases. \section{Correlations in time} \begin{figure*}[thbp] \centering \includegraphics[width=\linewidth]{./figures/comparison.pdf} \caption{\small Comparison between the filtered correlations from the side peaks of the Mollow triplet, as obtained exactly through the sensing method~(solid blue) and approximately as in Ref.~\cite{schrama92a}~(dashed green). The driving intensity~$\Omega$ is varied from panels~(a) to~(d), and the detuning between laser and emitter from panels~(i) to~(iii).
Oscillations are lost in the approximation and the bunching peak associated with photon heralding is overestimated. Parameters: $\Gamma/\gamma_\sigma=10$ ($\gamma_\sigma$ the emitter decay rate) and the rest as shown.} \label{fig:Tue21Mar154846GMT2017} \end{figure*} Photon correlations are typically measured as a function of the time delay between successive photons. With two photons, this yields the quantity $g^{(2)}(\tau)$ (the so-called second-order correlation function). The frequency-resolved version interposes filters in front of the detectors and the quantity becomes $g^{(2)}_{\Gamma}(\omega_1,\omega_2,\tau)$ (here with the same frequency window~$\Gamma$ for both detectors, which need not be the case, see, e.g., Fig.~5 of Ref.~\cite{gonzaleztudela15a}). The formal expression to compute was obtained exactly in the 1980s, a landmark result of photodetection theory, but the actual computation proved too complex and approximations had to be introduced. Typically, following the fruitful dressed-state picture, auxiliary states are used and correlations between them computed as an approximation. We will consider the case of correlations between the side peaks of the Mollow triplet, and compare the results from Ref.~\cite{schrama92a}~(dashed green in Fig.~\ref{fig:Tue21Mar154846GMT2017}), which rely on this approximation, with our results (solid blue), which are numerically exact. Overall, Physics is safe. The approximation is not exact and can even fail qualitatively (most notably, it does not depend on driving and misses oscillations at low driving) but the basic behaviour is well captured and fairly accurate at high driving and resonance (the perfect agreement with experimental data is maybe less satisfactory~\cite{schrama92a,ulhaq12a} but fitting the less accurate model with free parameters can achieve that). The main limitation, however, lies elsewhere.
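For reference, the exact sensing method of Ref.~\cite{delvalle12a} behind the solid-blue curves can be sketched as follows (our paraphrase): two two-level ``sensors'' of linewidth $\Gamma$ and transition frequencies $\omega_1$ and $\omega_2$ are coupled to the emitter with a vanishing coupling $\varepsilon$, and the filtered correlation is obtained from their populations $n_1$ and $n_2$ as
$$ g^{(2)}_{\Gamma}(\omega_1,\omega_2,\tau) = \lim_{\varepsilon\to0} \frac{\mean{n_1(0)\, n_2(\tau)}}{\mean{n_1}\mean{n_2}} \ , $$
the limit ensuring that the sensors act as passive probes, so that no approximation on the system dynamics is involved.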
Correlations are pinned to the peak emission in this approximation, which is what motivates its form in the first place. However, much more interesting correlations are to be found \emph{away} from the peaks. \begin{figure}[t] \centering \includegraphics[width=.85\linewidth]{./figures/g2tau.pdf} \caption{\small Two $N$-photon time cross-correlations at frequencies~$\pm\Omega_{+}/N$ (on both sides of the central peak of the Mollow triplet, with $\Omega_+$ its splitting) for $N=1$~(blue lines), $2$~(red lines) and $3$~(green lines). (a)~Resonant excitation and (b)~detuned excitation (insets show longer times). The detuning in~(b) is $\omega_\sigma=200\gamma_\sigma$. For all cases~$\Omega_+=300\gamma_\sigma$ and~$\Gamma=40\gamma_\sigma$, which maximizes the super-Poissonian peak.} \label{fig:WedMar15134215GMT2017} \end{figure} Figure~\ref{fig:WedMar15134215GMT2017} shows $N$-photon correlations at the frequencies $\pm\Omega_+/N$ for~$N=1$ (in blue, cross-correlations of two single photons from the side peaks, as in Fig.~\ref{fig:Tue21Mar154846GMT2017}), $N=2$ (in red, cross-correlations of two two-photon states \emph{halfway between the central and side peaks}) and $N=3$ (in green, cross-correlations of two three-photon states at one-third of the frequency between the central and side peaks).
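The choice of frequencies $\pm\Omega_+/N$ can be understood from energy conservation in the dressed-state ladder (our sketch of the leapfrog picture, with frequencies measured from the laser): an $N$-photon leapfrog between dressed states $N$ manifolds apart releases the dressed-state splitting $\Omega_+$ in excess (or deficit) of $N$ laser photons, shared democratically among $N$ degenerate photons,
$$ |+,n\rangle \to |-,n-N\rangle:\; N\omega = \Omega_+ \Rightarrow \omega = +\frac{\Omega_+}{N} \ , \qquad |-,n\rangle \to |+,n-N\rangle:\; \omega = -\frac{\Omega_+}{N} \ , $$
so a bundle detected at $+\Omega_+/N$ leaves the system in the lower dressed state and heralds a subsequent bundle at $-\Omega_+/N$, in analogy with the single-photon cascade between the side peaks.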
Not only do we now access frequencies previously out of reach, but we also consider correlations between \emph{bundles} ($N$-photon states), introducing the quantity $g_{{n_1},{n_2},\Gamma}^{(2)}(\omega_1,\omega_2) \equiv {\mean{ {:}\Pi_{\mu=1}^{2}\ud{a}^{n_\mu} (\omega_\mu) a^{n_\mu}(\omega_\mu){:}} }/(\Pi_{\mu=1}^{2} \mean{\ud{a}^{n_\mu}(\omega_\mu) a^{n_\mu}(\omega_\mu) })$ (with $a(\omega)$ the annihilation operator of a photon of frequency~$\omega$ and~${:}$ denoting normal ordering) that correlates an $n_1$-photon bundle of frequency~$\omega_1$ with an~$n_2$-photon bundle of frequency~$\omega_2$, both in frequency windows of width~$\Gamma$ (this can be generalized to different frequency windows~\cite{lopezcarreno17a}). While the single-photon correlations here are not particularly noteworthy, we see that the same phenomenology is transported to photon bundles. One can indeed recognize in the insets of Fig.~\ref{fig:WedMar15134215GMT2017}(b) the familiar profiles of photon heralding, but at the~$N$-photon level. These results are thus not only exact, but also unique. \section{Correlations in frequency} The correlation spectrum in frequency is probably the most conceptually striking, as it brings a new dimension to the problem. It turns a number, $g^{(2)}(0)$, into a picture, as shown in Fig.~\ref{fig:WedMar15143809GMT2017}. Panel~(a) correlates two photons of arbitrary frequencies, and is the case already measured experimentally~\cite{peiris15a}. The antidiagonal red lines are ``leapfrog processes'' and show regions where the system tends to emit photons in pairs. Since we have amply discussed two-photon spectra in previous works~\cite{gonzaleztudela13a}, we turn directly to the even larger family of correlation spectra between two $N$-photon bundles. Panels~(b) and~(c) show correlations between two-photon and three-photon bundles, respectively, here at resonance.
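Indeed, as a consistency check of the notation (our remark), for $n_1=n_2=1$ the bundle correlator reduces to the standard frequency-resolved two-photon correlation of panel~(a),
$$ g^{(2)}_{1,1,\Gamma}(\omega_1,\omega_2) = \frac{\mean{{:}\ud{a}(\omega_1)a(\omega_1)\,\ud{a}(\omega_2)a(\omega_2){:}}}{\mean{\ud{a}(\omega_1)a(\omega_1)}\mean{\ud{a}(\omega_2)a(\omega_2)}} = g^{(2)}_{\Gamma}(\omega_1,\omega_2) \ , $$
while, e.g., $g^{(2)}_{2,2,\Gamma}$ involves fourth-order moments in each filtered mode and thus correlates photon pairs rather than single photons.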
In each case, the structure of the correlation spectrum can be easily understood from leapfrog transitions, shown in insets $i$--$iii$ along with, in panels~(d)--(f), the traces they imprint in the correlation spectra, with corresponding color codes. Note how case~$iii$, for instance, involves six photons, in a plethora of photon emissions of various orders, leading to a crowded landscape with myriads of leapfrogs. Nevertheless, these are all clearly resolved in~$g^{(2)}_{3,3}$ and understood in simple terms. Such correlated emission can power devices. \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{./figures/TPS.pdf} \caption{\small Landscape of two $N$-photon bundle correlations for (a) $N=1$, (b) $N=2$ and (c)~$N=3$, with red corresponding to bunching, blue to antibunching and white to no correlations. The structure, sketched in panels~(d)--(f), comes from leapfrog transitions, shown in insets $i$--$iii$ with corresponding colors. The structure in (d) is also present in~(b) and~(c) and that in~(e) is also present in~(c) (and so on to higher orders). Parameters: $\Omega_+=300\gamma_\sigma$, laser-emitter detuning~$\omega_\sigma = 200\gamma_\sigma$ and filtering window $\Gamma=5\gamma_\sigma$.} \label{fig:WedMar15143809GMT2017} \end{figure*} \section{Correlations in both time and frequency} At this stage, it is clear that we can consider correlations jointly in both time and frequency. We will let an image speak a thousand words, with Fig.~\ref{fig:Fri23Feb180807GMT2018} showing photon correlations simultaneously in time and frequency, in a 3D space (that we have projected on two planes for clarity). Heisenberg uncertainties are satisfied through~$\Gamma$. There is no numerical difficulty in getting these results, which have all been obtained with a high-level programming language on a middle-end personal laptop in times of the order of a few minutes.
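As an indication of the kind of computation involved, here is a minimal numpy sketch of the sensor technique of Ref.~\cite{delvalle12a} for the resonantly driven two-level system: two weakly coupled two-level sensors of linewidth $\Gamma$, placed at the two Mollow sidebands, yield the frequency-filtered cross-correlation at $\tau=0$ from the steady state of the full Liouvillian. All parameter values are illustrative assumptions, and this is not the production code used for the figures.

```python
import numpy as np

def kron3(A, B, C):
    return np.kron(A, np.kron(B, C))

# two-level lowering operator; the sensors are also truncated to two
# levels, which is exact in the vanishing-coupling limit of the method
s  = np.array([[0., 1.], [0., 0.]])
I2 = np.eye(2)

gamma, Gamma, Omega, eps = 1.0, 5.0, 20.0, 0.05  # units of gamma_sigma
sig = kron3(s, I2, I2)         # emitter
z1  = kron3(I2, s, I2)         # sensor at +2*Omega (upper Mollow sideband)
z2  = kron3(I2, I2, s)         # sensor at -2*Omega (lower sideband)

# rotating-frame Hamiltonian: resonant drive + detuned sensors + weak coupling
H = (Omega * (sig + sig.conj().T)
     + 2*Omega * z1.conj().T @ z1 - 2*Omega * z2.conj().T @ z2
     + eps * (sig @ z1.conj().T + sig.conj().T @ z1
              + sig @ z2.conj().T + sig.conj().T @ z2))

d = H.shape[0]
I = np.eye(d)
L = -1j * (np.kron(H, I) - np.kron(I, H.T))   # row-major vec convention
for rate, c in [(gamma, sig), (Gamma, z1), (Gamma, z2)]:
    cd = c.conj().T
    L += rate * (np.kron(c, c.conj())
                 - 0.5*np.kron(cd @ c, I) - 0.5*np.kron(I, (cd @ c).T))

# steady state: L vec(rho) = 0 together with Tr rho = 1
tr = np.eye(d).reshape(1, -1)
A = np.vstack([L, tr])
b = np.zeros(d*d + 1, dtype=complex); b[-1] = 1.0
rho = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, d)

n1 = np.trace(rho @ z1.conj().T @ z1).real
n2 = np.trace(rho @ z2.conj().T @ z2).real
g2_cross = np.trace(rho @ z1.conj().T @ z1 @ z2.conj().T @ z2).real / (n1 * n2)
pop = np.trace(rho @ sig.conj().T @ sig).real
```

The whole computation is a single linear solve on an $8^2$-dimensional Liouvillian, which is why such correlations are obtained in minutes on a laptop; sweeping the sensor detunings generates the correlation landscapes discussed below.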
The only difficulty that arises is not in obtaining the data in the first place, but in processing and displaying it, as this starts to provide a fairly comprehensive picture of a nontrivial quantum mechanical system. \begin{figure}[!htb] \begin{minipage}{.45\linewidth} \centering \includegraphics[width=\linewidth]{./figures/3D.pdf} \end{minipage}% \qquad \begin{minipage}{.45\textwidth} \vskip1cm\caption{\small Two-photon correlations~$g_\Gamma^{(2)}(\omega_1,\omega_2,\tau)$ from the Mollow triplet with a cut of time correlations at $\omega_2=\Omega_+/2$, showing how leapfrog processes imprint strong correlations in time along with the peak emission. Of course the cut could be taken anywhere in the two-photon spectrum. Parameters: $\Gamma=5\gamma_\sigma$, $\Omega = 87 \gamma_\sigma$, $\Omega_+ = 174 \gamma_\sigma$ and driving at resonance.}\label{fig:Fri23Feb180807GMT2018} \end{minipage} \end{figure} \section{Conclusions} We have summarized the main results that follow from our theory of frequency- and time-resolved photon correlations~\cite{delvalle12a}, which provides exact computations in nontrivial systems. We illustrated the theory with original results, as interesting cases can be picked from the endless configurations that abound in any quantum optical system. In particular, we have compared our exact results with previous approximations, upgraded correlations to the case of bundles as otherwise no clear physical picture emerges, highlighted the interest of measuring away from the peaks in rich landscapes of correlations, and emphasized the joint time and frequency aspect of our theory. Such results can power a new class of optical devices, out of which we foresee heralded $N$-photon emitters as the most decisive breakthrough for today's photonics. \section*{References} \bibliographystyle{Science}
\section{Introduction}\label{sec1} The recent interest in alternative theories of gravity stems from Cosmology, Quantum Field Theory and Mach's Principle. The initial singularity and the flatness and horizon problems \cite{guth} point out that the Standard Cosmological Model \cite{weinberg}, based on General Relativity (GR) and the Standard Model of particle physics, fails in describing the Universe at extreme regimes. Besides, GR does not work as a fundamental theory capable of giving a quantum description of spacetime. Due to these reasons, and to the lack of a definitive Quantum Gravity theory, alternative theories of gravitation have been pursued in order to attempt, at least, a semi-classical approach to quantization. In particular, {\it Extended Theories of Gravity} (ETGs) face the problem of gravitational interaction by correcting and enlarging the Einstein theory. The general paradigm consists in adding, into the effective action, physically motivated higher-order curvature invariants and non-minimally coupled scalar fields \cite{odintsov,farhoudi}. The interest of such an approach in early-epoch cosmology is due to the fact that ETGs can ``naturally'' reproduce inflationary behaviors able to overcome the shortcomings of the Standard Cosmological Model, and they also seem capable of matching several observations. From another viewpoint, Mach's Principle gives further motivation to modify GR, stating that the local inertial frame is determined by the average motion of distant astronomical objects \cite{bondi}. As a consequence, the gravitational coupling can be scale-dependent. This means that the concept of inertia and the Equivalence Principle have to be revised, since there is no {\it a priori} reason to restrict the gravitational Lagrangian to a linear function of the Ricci scalar $R$, minimally coupled with matter \cite{brans,cimento,sciama,faraoni,Carroll:2004de,Carroll:2004hc}.
Very recently, ETGs have been playing an interesting role in describing today's observed Universe. In fact, the impressive amount of good-quality data of the last decade seems to shed new light on the effective picture of the Universe. Type Ia Supernovae (SNeIa) \cite{SNeIa}, anisotropies in the CMBR \cite{CMBR}, and the matter power spectrum derived from wide and deep galaxy surveys \cite{LSS} represent the strongest evidence for a radical revision of the Cosmological Standard Model also at recent epochs. Specifically, the {\it Concordance $\Lambda$CDM Model} shows that baryons contribute only $\sim 4\%$ of the total matter\,-\,energy budget, while {\it cold dark matter} (CDM) represents the bulk of the clustered large-scale structures ($\sim 25\%$) and the cosmological constant $\Lambda$ plays the role of the so-called ``dark energy'' ($\sim 70\%$) \cite{triangle}. Although it is the best fit to a wide range of data \cite{LambdaTest}, the $\Lambda$CDM model is affected by strong theoretical shortcomings \cite{LambdaRev} that have motivated the search for alternative models \cite{PR03,copeland}. Dark energy models mainly rely on the implicit assumption that Einstein's GR is indeed the correct theory of gravity. Nevertheless, its validity on large astrophysical and cosmological scales has never been tested but only {\it assumed} \cite{will}, and it is therefore conceivable that both the cosmic speed-up and the missing matter are nothing but signals of a breakdown of GR. In this sense, GR could fail in giving self-consistent pictures both at ultraviolet scales (early universe) and at infrared scales (late universe). Following this line of thinking, the ``minimal'' choice could be to take into account generic functions $f(R)$ of the Ricci scalar $R$. Such an approach is encompassed in the ETGs, being the minimal extension of GR.
The task for these extended theories should be to match the data under the ``economic'' requirement that no exotic dark ingredients have to be added, unless these are going to be found by fundamental experiments \cite{mimicking}. This is the underlying philosophy of what is referred to as {\it $f(R)$-gravity} (see \cite{copeland,odirev,GRGrev} and references therein). Although higher-order gravity theories have received much attention in cosmology, since they are naturally able to give rise to the accelerating expansion (both in the late and in the early universe \cite{noi}), it is possible to demonstrate that $f(R)$ theories can also play a major role at astrophysical scales. In fact, modifying the gravity Lagrangian affects the gravitational potential in the low-energy limit. Provided that the modified potential reduces to the Newtonian one on the Solar System scale, this implication could represent an intriguing opportunity rather than a shortcoming for $f(R)$ theories. In fact, a corrected gravitational potential could offer the possibility to fit galaxy rotation curves without the need of huge amounts of dark matter \cite{noipla,mond,jcap,mnras,sobouti,salucci,mendoza}. In addition, it is possible to work out a formal analogy between the corrections to the Newtonian potential and the usually adopted galaxy halo models, which allows one to reproduce dynamics and observations {\it without} dark matter \cite{jcap}. However, extending the gravitational Lagrangian could give rise to several problems. These theories could have instabilities \cite{DeFelice:2007zq,instabilities-f(R)} and ghost\,-\,like behaviors \cite{ghost-f(R),DeFelice:2006pg,Calcagni:2006ye}, and they have to be matched with the low-energy limit experiments which accurately test GR. Besides, these theories should also be compatible with early-universe tests such as the formation of CMBR anisotropies, Big Bang Nucleosynthesis \cite{DeFelice:2005bx}, and Baryogenesis \cite{Davoudiasl:2004gf,DeFelice:2004uv}.
Actually, the debate concerning the weak-field limit of $f(R)$-gravity is far from being settled. In the last few years, several authors have dealt with this matter, with contrasting conclusions, in particular with respect to the Parameterized Post-Newtonian (PPN) limit \cite{ppn-tot,lgcpapers}. In summary, it seems that the $f(R)$-gravity paradigm leads to interesting results at cosmological, galactic and Solar System scales but, up to now, no definite physical criterion has been found to {\it select} the final $f(R)$ theory (or class of theories) capable of matching the data at all scales. Interesting results have been achieved along this line of thinking \cite{mimicking,DeFelice:2007ez,Hu,Star,Odintsov1}, but the approaches are all phenomenological and are not based on some fundamental principle, such as the conservation or the invariance of some quantity, or some intrinsic symmetry of the theory. Furthermore, as shown in \cite{DeFelice:2007zq}, in alternative theories of gravity it is important to understand the background before exploring other bounds, such as the anisotropies in the CMBR. For this goal it is essential to try to find exact analytical solutions of the $f(R)$ theories and, only after this, to study in more detail the possible evolutions compatible with our data (e.g.\ Solar System and CMBR bounds). In some sense, the situation is similar to that of dark matter: we know very well its effects at large astrophysical scales, but no final evidence of its existence has been found, up to now, at the fundamental level. In the case of $f(R)$-gravity, we know that the paradigm works: in principle, the missing matter and the accelerated cosmic behavior can be addressed taking into account gravity (in some extended version), baryons and radiation, but we do not know a specific criterion to select the final, comprehensive theory.
In this paper, we want to address the following issues: $i)$ Is there some general principle capable of {\it selecting} physically motivated $f(R)$ models? $ii)$ Can conserved quantities or symmetries be found in relation to specific $f(R)$ theories? $iii)$ Can such quantities, if they exist, give rise to viable cosmological models? Following the so-called {\it Noether Symmetry Approach} (see \cite{cimento,leandros,lambiase}), we want to seek viable $f(R)$ cosmological models. As we will see, the method is twofold: on one side, the existence of symmetries allows one to solve the dynamics exactly; on the other side, the Noether {\it charge} can always be related to some observable quantity. The layout of the paper is the following. In Sec.\ref{sec2}, we sketch the dynamics of $f(R)$ gravity in the metric approach and derive the Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) cosmological equations. Sec.\ref{sec3} is devoted to the general discussion of the Noether Symmetry Approach, by which it is possible to find out conserved quantities and then symmetries which allow one to solve a dynamical system exactly. In Sec.\ref{sec4}, we apply the method to $f(R)$ cosmology. In Sec.\ref{sec5}, we give a detailed summary of the exact solutions, discussing them in the presence or in the absence of the Noether charge. Sec.\ref{sec6} is devoted to the discussion and the conclusions. \section{ $f(R)$ gravity and cosmology} \label{sec2} The action \begin{equation} S=\int d^4 x\,\sqrt{-g}\,f(R)+S_m\, , \end{equation} describes a theory of gravity where $f(R)$ is a generic function of the Ricci scalar $R$. GR is recovered in the particular case $f(R)=-R/16\pi G$, and $S_m$ is the action for a perfect fluid minimally coupled with gravity \footnote{We are using the following conventions: $\eta_{\mu\nu}= {\rm diag}(1,-1,-1,-1)$, $R_{\mu\nu}=R^\alpha{}_{\mu\alpha\nu}$, and $c=\hbar=1$.}.
This action, in general, leads to fourth-order differential equations for the metric, since the field equations are \begin{equation}\label{field} f_R\,R_{\mu\nu}-\tfrac12\,f\,g_{\mu\nu}-f_{R;\mu\nu}+g_{\mu\nu}\,\Box f_R=-\tfrac12\,T^{m}_{\mu\nu}\, , \end{equation} where a subscript $R$ denotes differentiation with respect to $R$ and $T^{m}_{\mu\nu}$ is the matter-fluid stress-energy tensor. Defining a {\it curvature stress\,-\,energy tensor} as \begin{equation} \label{curva} T^{curv}_{\mu\nu}\,=\,\frac{1}{f_{R}(R)}\left\{\frac{1}{2}g_{\mu\nu}\left[f(R)-Rf_R(R)\right] +f_R(R)^{;\alpha\beta}(g_{\alpha\mu}g_{\beta\nu}-g_{\mu\nu}g_{\alpha\beta}) \right\}\,, \end{equation} Eqs.(\ref{field}) can be recast in the Einstein\,-\,like form\,: \begin{equation}\label{5} G_{\mu \nu} = R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = T^{curv}_{\mu\nu}+T^{m}_{\mu\nu}/f_R(R) \end{equation} where matter couples non\,-\,minimally to geometry through the term $1/f_R(R)$. It is known that these theories can be mapped into scalar-tensor theories. However, there are two points which should be noticed. First, the two theories might have different quantum descriptions, as they coincide only on the classical solutions. Furthermore, the two theories are classically equivalent if the Brans-Dicke parameter ($\omega_{\rm BD}$) exactly vanishes and if the scalar field possesses a suitable potential. This fact is related to the second point: in the literature, the Brans-Dicke field is commonly taken as a light scalar field, for which the local gravity constraints fix the Brans-Dicke parameter to be greater than 40000. This bound is usually considered when studying Brans-Dicke theories. However, for $f(R)$ theories, since $\omega_{\rm BD}=0$, this is not the case, and the presence of a non-negligible potential is essential in order to give an explicit mass to the gravitational scalar degree of freedom.
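As a concrete illustration of this classical mapping (using the identifications $\Phi=-f_R$ and $U=R\,f_R-f$ recalled below), the following sympy sketch works out the scalar-tensor potential for a quadratic model $f(R)=-R/16\pi G+cR^2$, chosen purely for illustration.

```python
import sympy as sp

R, c, G = sp.symbols('R c G', positive=True)
f = -R/(16*sp.pi*G) + c*R**2        # illustrative quadratic model

fR = sp.diff(f, R)
Phi = -fR                           # scalar-tensor field: Phi = -f_R
U = sp.simplify(R*fR - f)           # its potential: U = R f_R - f
# Phi is linear in R here, so the scalar mass is m^2 = U_RR / Phi_R^2
msq = sp.simplify(sp.diff(U, R, 2) / sp.diff(Phi, R)**2)
```

One finds $U=cR^2$ and $m^2=1/(2c)$: the quadratic correction endows the gravitational scalar degree of freedom with an explicit mass, in line with the remark above.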
Once one has the solution $H(t)$ (and consequently $R(t)$) for a given $f(R)$, the scalar field is defined as $\Phi(t)=-f_R(t)$, and its potential is $U\bigl(\Phi(t)\bigr)=R(t)\,f_R(t)-f\bigl(R(t)\bigr)$. An example showing this link between scalar-tensor theories and $f(R)$ gravity is given in the appendix for one solution which will be found explicitly later on. In order to derive the cosmological equations in a FLRW metric, one can define a canonical Lagrangian ${\cal L}={\cal L}(a,\dot{a}, R, \dot{R})$, where ${\cal Q}=\{a,R\}$ is the configuration space and ${\cal TQ}=\{a,\dot{a}, R, \dot{R}\}$ is the related tangent bundle on which ${\cal L}$ is defined. The variables $a(t)$ and $R(t)$ are the scale factor and the Ricci scalar in the FLRW metric, respectively. One can use the method of Lagrange multipliers to set $R$ as a constraint of the dynamics. Selecting the suitable Lagrange multiplier $\lambda$ and integrating by parts, the Lagrangian ${\cal L}$ becomes canonical. In our case, we have \begin{equation} S=2\pi^2\int dt\,a^3 \left\{f(R)-\lambda\left[R+6\left(\frac{\ddot a}a+\frac{\dot a^2}{a^2}+\frac \kappa{a^2}\right)\right] -\frac{\rho_{m0}}{a^3}-\frac{\rho_{r0}}{a^4}\right\} , \end{equation} where $a$ is the scale factor scaled with respect to today's value (so that $a=\tilde a/\tilde a_0$ and $a(t_0)=1$); $\rho_{m0}$ and $\rho_{r0}$ represent the standard amounts of dust and radiation fluids as, for example, measured today; finally $\kappa=k/\tilde a_0^2$, where $k=0,\pm1$. This choice for $a$ makes it dimensionless, and it also implies that $[\kappa]=[R]=M^2$, whereas $[f]=[\rho_{r0}]=M^4$. It is straightforward to show that, for $f(R)=-R/16\pi G-\rho_{\Lambda0}$, one obtains the usual Friedmann equations. The variation of the action with respect to $R$ gives $\lambda=f_R$.
Therefore the previous action can be rewritten as \begin{equation} S=2\pi^2\int dt\,a^3\left\{f-f_R\left[R+6\left(\frac{\ddot a}a+\frac{\dot a^2}{a^2}+\frac \kappa{a^2}\right)\right] -\frac{\rho_{m0}}{a^3}-\frac{\rho_{r0}}{a^4}\right\}\, , \end{equation} and then, integrating by parts, the point-like FLRW Lagrangian is \begin{equation} {\cal L}= a^3\,(f-f_R\,R)+6\,a^2\,f_{RR}\,\dot R\,\dot a +6\,f_R\,a\,\dot a^2-6\kappa\,f_R\,a-\rho_{m0}-\rho_{r0}/a\, ,\label{eqz0} \end{equation} which is a canonical function of two coupled fields, $R$ and $a$, both depending on time $t$. The total energy $E_{\cal L}$, corresponding to the $0,0$-Einstein equation, is \begin{equation} \label{energy} E_{{\cal L}}=6\,f_{RR}\,a^2\,\dot a\,\dot R+ 6\,f_R\,a\,\dot a^2- a^3\,(f-f_R\,R) +6\kappa\,f_R\,a+\rho_{m0}+\frac{\rho_{r0}}a=0\, . \end{equation} As we shall see later, it is convenient to look for parametric solutions in the form $\bigl[H(a),f\bigl(R(a)\bigr)\bigr]$, so that $f_R=f'/R'$, where a prime denotes differentiation with respect to the parameter $a$. We also have that, if $R\neq {\rm constant}$, $f_{RR}\,\dot R=df_R/dt=a\,H\,f_R'=a\,H\,[f''/R'-f'\,R''/{R'}^2]$, so that the Friedmann equation can be rewritten as \begin{equation} f-6 a \left(\frac{f''}{R'}-\frac{f'\,R''}{R'^2}\right) H^2-\frac{6 f' \, H^2}{R'}-\left(\frac{6 \kappa }{a^2}+R\right)\frac{f'}{R'}=\frac{\rho_{m0}}{a^3}+\frac{\rho_{r0}}{a^4}\, . \label{FRDA} \end{equation} The equations of motion for $a$ and $R$ are respectively \begin{eqnarray} f_{RR}\left[R+6\,H^2+6\,\frac{\ddot a}a+6\,\frac{\kappa}{a^2}\right]&=&0\label{frd1}\\ 6\,f_{RRR}\,\dot R^2+ 6\,f_{RR}\,\ddot R+ 6\,f_R\,H^2+12\,f_R\,\frac{\ddot a}a&=& 3\,(f-f_R\,R)-12\,f_{RR}\,H\,\dot R -6\,f_R\,\frac{\kappa}{a^2}+\frac {\rho_{r0}}{a^4}\, , \label{frd2} \end{eqnarray} where $H\equiv \dot a/a$ is the Hubble parameter.
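These equations can be checked symbolically. The sympy sketch below, for the illustrative choice $f(R)=R^2$ (so that $f_R=2R$ and $f_{RR}=2$), verifies that the $R$ variation of the point-like Lagrangian (\ref{eqz0}) reproduces the constraint (\ref{frd1}), and that the energy function reproduces Eq.~(\ref{energy}).

```python
import sympy as sp

t, kappa, rho_m0, rho_r0 = sp.symbols('t kappa rho_m0 rho_r0')
a = sp.Function('a')(t)
R = sp.Function('R')(t)

# illustrative concrete choice f(R) = R^2
f, fR, fRR = R**2, 2*R, sp.Integer(2)

# point-like FLRW Lagrangian, Eq. (eqz0)
L = (a**3*(f - fR*R) + 6*a**2*fRR*R.diff(t)*a.diff(t)
     + 6*fR*a*a.diff(t)**2 - 6*kappa*fR*a - rho_m0 - rho_r0/a)

# Euler-Lagrange equation for R: d/dt(dL/dRdot) - dL/dR
EL_R = (L.diff(R.diff(t)).diff(t) - L.diff(R)).doit()
frd1 = fRR*a**3*(R + 6*a.diff(t)**2/a**2 + 6*a.diff(t, 2)/a + 6*kappa/a**2)

# energy function E_L = qdot dL/dqdot - L, to be compared with (energy)
E = a.diff(t)*L.diff(a.diff(t)) + R.diff(t)*L.diff(R.diff(t)) - L
energy = (6*fRR*a**2*a.diff(t)*R.diff(t) + 6*fR*a*a.diff(t)**2
          - a**3*(f - fR*R) + 6*kappa*fR*a + rho_m0 + rho_r0/a)
```

Both residuals vanish identically, confirming that the $R$ equation is the Lagrange-multiplier constraint defining the Ricci scalar, with no derivatives of $R$ surviving.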
Considering $R$ and $a$ as the variables, we have, for consistency (excluding the case $f_{RR}=0$), that $R$ coincides with the definition of the Ricci scalar in the FLRW metric. Geometrically, this is the Euler constraint of the dynamics. Using (\ref{frd1}), only one of the equations (\ref{energy}) and (\ref{frd2}) is independent, because of the Bianchi identities: these equations correspond to the first and second modified Einstein equations, and matter is conserved. Equivalently, after multiplying equation (\ref{frd2}) by $a^2\,\dot a$ and using (\ref{frd1}), one can integrate (\ref{frd2}) to find (\ref{energy}). Furthermore, as we will show below, constraints on the form of the function $f(R)$ and, consequently, solutions of the system (\ref{energy}), (\ref{frd1}) can be achieved by asking for the existence of Noether symmetries. Such solutions will also solve equation (\ref{frd2}) automatically. On the other hand, the existence of the Noether symmetries guarantees the reduction of the dynamics and the eventual solvability of the system. \section{The Noether Symmetry Approach}\label{sec3} Solutions for the dynamics given by (\ref{eqz0}) can be achieved by selecting cyclic variables related to some Noether symmetry. In principle, this approach allows one to select $f(R)$-gravity models compatible with the symmetry, so it can be seen as a physical criterion, since the conserved quantities are a sort of {\it Noether charges}. Therefore such a criterion might be to look for those $f(R)$ which have a {\it cosmological} Noether charge. Although this criterion somehow ``breaks'' Lorentz invariance, because we need the FLRW background to formulate it, Lorentz invariance is evidently broken in our universe by the presence of the CMBR radiation which, by itself, fixes a preferred reference frame. In general, the Noether Theorem states that conserved quantities are related to the existence of cyclic variables in the dynamics \cite{arnold,marmo,morandi}.
Let ${\cal L}(q^{i}, \dot{q}^i)$ be a canonical, non-degenerate point-like Lagrangian where \begin{equation} \label{01} \frac{\partial {\cal L }}{\partial \lambda}=0\,;\;\;\;\;\;\;\; \mbox{det}H_{ij}\buildrel {\rm def} \over = \mbox{det} \left|\left| \frac{\partial^2 {\cal L}}{\partial \dot{q}^i\partial\dot{q}^j}\right|\right|\neq 0\,, \end{equation} with $H_{ij}$ the Hessian matrix related to ${\cal L}$; a dot denotes differentiation with respect to the affine parameter $\lambda$ which, in our case, corresponds to the cosmic time $t$. In analytical mechanics, ${\cal L}$ is of the form \begin{equation} \label{02} {\cal L}=T({{\bf q}},\dot{{\bf q}})-V({{\bf q}})\;, \end{equation} where $T$ and $V$ are the ``kinetic'' and ``potential'' energy respectively, and $T$ is a positive-definite quadratic form in $\dot{{\bf q}}$. The energy function associated with ${\cal L}$ is \begin{equation} \label{03} E_{\cal L }\equiv\frac{\partial {\cal L}}{\partial \dot{q}^{i}}\dot{q}^i-{\cal L}\,, \end{equation} which is the total energy $T+V$. In any case, $E_{\cal L }$ is a constant of motion. Since our cosmological problem has a finite number of degrees of freedom, we are going to consider only point transformations. Any invertible transformation of the ``generalized positions'' $Q^{i}=Q^{i}({{\bf q}})$ induces a transformation of the ``generalized velocities'' such that \begin{equation} \label{04} \dot{Q}^i({{\bf q}})=\frac{\partial Q^i}{\partial q^j}\dot{q}^j\;; \end{equation} the matrix ${\cal J}=|| \partial Q^i/\partial q^j ||$ is the Jacobian of the transformation on the positions, and it is assumed to be nonzero. The Jacobian $\widetilde{{\cal J}}$ of the induced transformation is easily derived, and ${\cal J}\neq 0\rightarrow \widetilde{{\cal J}}\neq 0$. In general, this condition is not satisfied in the whole space but only in the neighborhood of a point: it is a local transformation.
A point transformation $Q^{i}=Q^{i}({\bf q})$ can depend on one (or more than one) parameter. As a starting point, we can assume that a point transformation depends on a parameter $\epsilon$, i.e. $Q^{i}=Q^{i}({\bf q},\epsilon)$, and that it gives rise to a one-parameter Lie group. For infinitesimal values of $\epsilon$, the transformation is then generated by a vector field: for instance, $\partial/\partial x$ is a translation along the $x$ axis, $x(\partial/\partial y)-y(\partial/\partial x)$ is a rotation around the $z$ axis, and so on. The induced transformation (\ref{04}) is then represented by \begin{equation} \label{05} {\bf X}=\alpha^{i}({{\bf q}})\frac{\partial}{\partial q^{i}}+ \left(\frac{d}{d\lambda}\alpha^{i}({{\bf q}})\right)\frac{\partial}{\partial \dot{q}^i}\;. \end{equation} ${\bf X}$ is called the ``complete lift'' of the vector field $\alpha^{i}({\bf q})\,\partial/\partial q^{i}$ \cite{morandi}. A function $F({\bf q}, {\bf \dot{q}})$ is invariant under the transformation ${\bf X}$ if \begin{equation} \label{06} L_{{\bf X}}F\buildrel {\rm def} \over =\alpha^{i}({{\bf q}})\frac{\partial F}{\partial q^{i}}+ \left(\frac{d}{d\lambda}\alpha^{i}({{\bf q}})\right)\frac{\partial F}{\partial \dot{q}^i}\,=\,0\;, \end{equation} where $L_{{{\bf X}}}F$ is the Lie derivative of $F$. Specifically, if $L_{{{\bf X}}}{\cal L}=0$, ${\bf X}$ is a {\it symmetry} for the dynamics derived from ${\cal L}$. As we shall see later on, we will look for a sufficient condition on the form of $f(R)$ in our Lagrangian which makes $L_{{{\bf X}}}{\cal L}$ vanish. Let us consider now a Lagrangian ${\cal L}$ and its Euler-Lagrange equations \begin{equation} \label{07} \frac{d}{d\lambda}\frac{\partial {\cal L }}{\partial\dot{q}^{j}}-\frac{\partial {\cal L}}{\partial q^{j}}=0\,. \end{equation} Let us consider also the vector field (\ref{05}).
Contracting (\ref{07}) with the $\alpha^{i}$'s gives \begin{equation} \label{06a} \alpha^{j}\left( \frac{d}{d\lambda}\frac{\partial {\cal L}}{\partial \dot{q}^j}- \frac{\partial {\cal L }}{\partial q^j}\right)=0\,. \end{equation} Since \begin{equation} \label{06b} \alpha^{j}\frac{d}{d\lambda}\frac{\partial {\cal L}}{\partial \dot{q}^j}= \frac{d}{d\lambda}\left(\alpha^j\frac{\partial {\cal L}}{\partial \dot{q} ^j}\right)- \left(\frac{d \alpha^j}{d\lambda}\right)\frac{\partial {\cal L }}{\partial \dot{q} ^j}\,, \end{equation} holds, from (\ref{06a}) we obtain \begin{equation} \label{08} \frac{d}{d\lambda}\left(\alpha^{i}\frac{\partial {\cal L}}{\partial \dot{q}^i} \right)=L_{\bf X}{\cal L}\,. \end{equation} The immediate consequence is the {\it Noether Theorem}, which states: If $L_{\bf X}{\cal L}=0$, then the function \begin{equation} \label{09} \Sigma_{0}=\alpha^{i}\frac{\partial {\cal L }}{\partial \dot{q}^i} \,, \end{equation} is a constant of motion. Some comments are necessary at this point. Eq.(\ref{09}) can be expressed independently of coordinates as a contraction of ${\bf X}$ by a Cartan one-form \begin{equation} \label{09a} \theta_{\cal L} \buildrel {\rm def} \over = \frac{\partial {\cal L}}{\partial \dot{q}^i}dq^i \; . \end{equation} For a generic vector field $ {\bf Y} = y^i \partial / \partial x^i $ and one-form $\beta = \beta_i d x^i $, we have, by definition, $ {\displaystyle i_{\bf Y} \beta = y^i \beta_i} $. Thus Eq.(\ref{09}) can be written as \begin{equation} \label{09b} i_{\bf X} \theta _{\cal L} = \Sigma_{0} \; . \end{equation} By a point transformation, the vector field ${\bf X}$ becomes \begin{equation} \label{09c} \widetilde{{\bf X}} = (i_{\bf X} d Q^k) \frac{\partial}{\partial Q^k} + \left( \frac{d}{d\lambda} (i_{\bf X} d Q^k)\right) \frac{\partial}{\partial \dot{Q}^k} \; .
\end{equation} We see that $\widetilde{{\bf X}}$ is still the lift of a vector field defined on the ``space of positions.'' If ${\bf X}$ is a symmetry and we choose a point transformation such that \begin{equation} \label{010} i_{\bf X} dQ^1 = 1 \; ; \;\;\; i_{\bf X} dQ^i = 0 \;\;\; i \neq 1 \; , \end{equation} we get \begin{equation} \label{010a} \widetilde{{\bf X}} = \frac{\partial}{\partial Q^1} \;;\;\;\;\; \frac{\partial {\cal L}}{\partial Q^1} = 0 \; . \end{equation} Thus $Q^1$ is a cyclic coordinate and the dynamics is {\it reduced} \cite{arnold,marmo}. Furthermore, the change of coordinates given by (\ref{010}) is not unique, and a clever choice could then be very important. In general, the solution of Eq.(\ref{010}) is not defined on the whole space. It is local in the sense explained above. Besides, it is possible that more than one ${\bf X}$ is found, e.g. ${\bf X}_1$, ${\bf X}_2$. If they commute, i.e. $ [{\bf X}_1, {\bf X}_2] = 0 $, then it is possible to obtain two cyclic coordinates by solving the system \begin{equation} i_{\bf {X_{1}}} dQ^1 = 1; \, i_{\bf {X_{2}}} dQ^2 = 1; \, i_{\bf {X_{1}}} dQ^i = 0;\,i \neq 1;\, i_{\bf {X_{2}}} dQ^i = 0; \, i \neq 2\,. \end{equation} The transformed fields will be $\partial/\partial Q^{1}$, $\partial/\partial Q^{2}$. If they do not commute, this procedure is clearly not applicable, since commutation relations are preserved by diffeomorphisms. If the relation ${\bf X}_3 = [{\bf X}_1, {\bf X}_2]$ holds, ${\bf X}_3$ is also a symmetry, being $L_{\bf{X_{3}}}{\cal L}=L_{\bf{X_{1}}}L_{\bf{X_{2}}}{\cal L}- L_{\bf{X_{2}}}L_{\bf{X_{1}}}{\cal L}=0$. If ${\bf X_{3}}$ is independent of ${\bf X_{1}}$, ${\bf X_{2}}$, we can go on in this way until the vector fields close the Lie algebra. The usual approach to this situation is to make a Legendre transformation, going to the Hamiltonian formalism, and then derive the Lie algebra of Poisson brackets.
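A textbook illustration may be useful before applying the machinery to cosmology. For the toy Lagrangian ${\cal L}=(\dot x^2+\dot y^2)/2-V(y)$ (an assumption made purely for this example), the translation generator $\partial/\partial x$, whose complete lift has $\alpha=(1,0)$ so that its velocity part vanishes, is a symmetry, and the Noether charge (\ref{09}) is the momentum $p_x=\dot x$, conserved on shell:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)
V = sp.Function('V')
L = (x.diff(t)**2 + y.diff(t)**2)/2 - V(y)

alpha = (sp.Integer(1), sp.Integer(0))   # generator X = d/dx
# the complete lift adds nothing here, since d(alpha)/dt = 0
LXL = alpha[0]*L.diff(x) + alpha[1]*L.diff(y)

# Noether charge Sigma_0 = alpha^i dL/dqdot^i
Sigma0 = alpha[0]*L.diff(x.diff(t)) + alpha[1]*L.diff(y.diff(t))
EL_x = (L.diff(x.diff(t)).diff(t) - L.diff(x)).doit()
# on shell (EL_x = 0) the time derivative of Sigma_0 vanishes
conservation = sp.simplify(Sigma0.diff(t) - EL_x)
```

Here $L_{\bf X}{\cal L}=0$ identically, $\Sigma_0=\dot x$, and $d\Sigma_0/dt$ coincides with the $x$ Euler-Lagrange expression, which vanishes on shell.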
If we seek a reduction of the dynamics by cyclic coordinates, we can proceed in the following way: $i)$ we arbitrarily choose one of the symmetries, or a linear combination of them, searching for new coordinates where, as sketched above, the cyclic variables appear. After the reduction, we get a new Lagrangian $\widetilde{{\cal L}}({\bf Q})$; $ii)$ we search again for symmetries in this new configuration space, make a new reduction, and so on until possible; $iii)$ if the search fails, we try again with another of the existing symmetries. Let us now assume that ${\cal L}$ is of the form (\ref{02}). As ${\bf X}$ is of the form (\ref{05}), $L_{\bf X}{\cal L}$ will be a homogeneous polynomial of second degree in the velocities plus an inhomogeneous term in the $q^{i}$. Since such a polynomial has to be identically zero, each coefficient must vanish independently. If $n$ is the dimension of the configuration space, we get $\{1+n(n+1)/2\}$ partial differential equations. The system is overdetermined; therefore, if any solution exists, it will be expressed in terms of integration constants instead of boundary conditions. It is also obvious that an overall constant factor in the Lie vector ${\bf X}$ is irrelevant. In other words, the Noether Symmetry Approach can be used to select the functions which assign the models, and such functions (and then the models) can be physically relevant. Considering the specific case which we are going to discuss, the $f(R)$ cosmology, the situation is the following. The configuration space is ${\cal Q}=\{a,R\}$ while the tangent space for the related tangent bundle is ${\cal TQ}=\{a,\dot{a},R,\dot{R}\}$. The Lagrangian is a map \begin{equation} {\cal L}: {\cal TQ}\longrightarrow \Re\end{equation} where $\Re$ is the set of real numbers.
The generator of the symmetry is \begin{equation} \label{vec}{\bf X}=\alpha\frac{\partial}{\partial a}+\beta\frac{\partial}{\partial R}+\dot{\alpha}\frac{\partial}{\partial \dot{a}}+\dot{\beta}\frac{\partial}{\partial\dot{R}}\,.\end{equation} As discussed above, a symmetry exists if the equation $L_{\bf X}{\cal L}=0$ has solutions. Then there will be a constant of motion on shell, i.e.\ for the solutions of the Euler equations, as stated in the Noether theorem above equation (\ref{09}). In other words, a symmetry exists if at least one of the functions $\alpha$ or $\beta$ in Eq.(\ref{vec}) is different from zero. As a byproduct, the form of $f(R)$, not specified in the point-like Lagrangian (\ref{eqz0}), is determined in correspondence with such a symmetry. \section{Noether symmetries in $f(R)$ cosmology}\label{sec4} For the existence of a symmetry, we can write the following system of equations (linear in $\alpha$ and $\beta$), \begin{eqnarray} f_R\,(\alpha+2a\,\partial_a\alpha)+a\,f_{RR}\,(\beta+a\,\partial_a\beta)&=&0\label{eqz1}\\ a^2\,f_{RR}\,\partial_R\alpha&=&0\label{eqz2}\\ 2\,f_R\,\partial_R\alpha+f_{RR}\,(2\,\alpha+a\,\partial_a\alpha+a\,\partial_R\beta)+a\,\beta\,f_{RRR}&=&0\label{eqz3}\, , \end{eqnarray} obtained by setting to zero the coefficients of the terms $\dot{a}^2$, $\dot{R}^2$ and $\dot{a}\dot{R}$ in $L_{\bf X}{\cal L}=0$. In order to make $L_{\bf X}{\cal L}$ vanish, we will also look for those particular $f$'s which, given the Euler dynamics, also satisfy the constraint \begin{equation} 3\alpha\,(f-R\,f_R)-a\,\beta\,R\,f_{RR}-\frac{6\kappa}{a^2}\,(\alpha f_R+a\,\beta\,f_{RR})\label{eqz4}+\frac{\rho_{r0}\,\alpha}{a^4}=0\, . \end{equation} This procedure is different from the usual Noether symmetry approach, in the sense that now $L_{\bf X}{\cal L}=0$ will be solved not for all dynamics (which solve the Euler-Lagrange equations), but only for those $f$ which allow the Euler solutions to solve also the constraint (\ref{eqz4}).
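The coefficient matching that produces the system can itself be checked symbolically. The sympy sketch below, with $a$, $R$, $\dot a$, $\dot R$ treated as independent symbols and $f$, $\alpha$, $\beta$ kept generic, recovers Eqs.~(\ref{eqz1})--(\ref{eqz3}) as the coefficients of $\dot a^2$, $\dot R^2$ and $\dot a\dot R$ in $L_{\bf X}{\cal L}$, up to overall factors.

```python
import sympy as sp

A, Rs, Ad, Rd, kappa, rho_m0, rho_r0 = sp.symbols(
    'a R adot Rdot kappa rho_m0 rho_r0')
f = sp.Function('f')(Rs)
fR, fRR, fRRR = f.diff(Rs), f.diff(Rs, 2), f.diff(Rs, 3)
alpha = sp.Function('alpha')(A, Rs)
beta = sp.Function('beta')(A, Rs)

# point-like Lagrangian with (a, R) positions, (adot, Rdot) velocities
L = (A**3*(f - fR*Rs) + 6*A**2*fRR*Rd*Ad + 6*fR*A*Ad**2
     - 6*kappa*fR*A - rho_m0 - rho_r0/A)

# complete lift of X: alphadot and betadot via the chain rule
alphadot = Ad*alpha.diff(A) + Rd*alpha.diff(Rs)
betadot = Ad*beta.diff(A) + Rd*beta.diff(Rs)
LXL = sp.expand(alpha*L.diff(A) + beta*L.diff(Rs)
                + alphadot*L.diff(Ad) + betadot*L.diff(Rd))

c_aa = LXL.coeff(Ad, 2)               # -> 6  x Eq. (eqz1)
c_rr = LXL.coeff(Rd, 2)               # -> 6  x Eq. (eqz2)
c_ar = LXL.coeff(Ad, 1).coeff(Rd, 1)  # -> 6a x Eq. (eqz3)

eq1 = fR*(alpha + 2*A*alpha.diff(A)) + A*fRR*(beta + A*beta.diff(A))
eq2 = A**2*fRR*alpha.diff(Rs)
eq3 = (2*fR*alpha.diff(Rs)
       + fRR*(2*alpha + A*alpha.diff(A) + A*beta.diff(Rs))
       + A*beta*fRRR)
```

The remaining, velocity-independent part of $L_{\bf X}{\cal L}$ is precisely what the constraint (\ref{eqz4}) sets to zero on the Euler dynamics.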
Imposing such a constraint on the form of $f$ will turn out to be, as we will show, a sufficient condition to find solutions of the Euler-Lagrange equations which also possess a constant of motion, i.e.\ a Noether symmetry. As we shall see later on, the system (\ref{eqz1}), (\ref{eqz2}) and (\ref{eqz3}) can be solved exactly. Having a non-trivial solution for $\alpha$ and $\beta$ of this system, one finds a constant of motion if the constraint (\ref{eqz4}) is also satisfied. In fact, with these $\alpha$ and $\beta$, only those Euler-Lagrange solutions which also satisfy equation (\ref{eqz4}) will have a constant of motion. However, this will not happen for all $f(R)$'s. The task will be to find such forms of $f$. A solution of (\ref{eqz1}), (\ref{eqz2}) and (\ref{eqz3}) exists if explicit forms of $\alpha$ and $\beta$ are found. If at least one of them is different from zero, a Noether symmetry exists. If $f_{RR}\neq0$, Eq.(\ref{eqz2}) can be immediately solved, giving \begin{equation} \alpha=\alpha(a)\, . \end{equation} The case $f_{RR}=0$ is trivial since it corresponds to standard GR. We can rewrite Eqs.(\ref{eqz1}) and (\ref{eqz3}) as follows \begin{eqnarray} f_R\left(\alpha+2a\,\frac{d\alpha}{da}\right)+a\,f_{RR}\,(\beta+a\,\partial_a\beta)&=&0\label{eqz1b}\\ f_{RR}\left(2\,\alpha+a\,\frac{d\alpha}{da}+a\,\partial_R\beta\right)+a\,\beta\,f_{RRR}&=&0\label{eqz3b}\, . \end{eqnarray} Since $f=f(R)$, we have $\partial f/\partial a=0$ even when we consider $R=R(a)$; it is then possible to solve equation (\ref{eqz3b}) by writing it as \begin{equation} \partial_R(\beta\,f_{RR})=-f_{RR}\left(2\,\frac\alpha a+\frac{d\alpha}{da}\right)\label{eqz3bbis}\, \end{equation} whose general solution can be written as \begin{equation} \beta=-\left[\frac{2\alpha}a+\frac{d\alpha}{da}\right]\frac{f_R}{f_{RR}}+\frac{h(a)}{f_{RR}}\, .
\end{equation} Therefore one finds that Eq.\ (\ref{eqz1b}) gives \begin{equation} f_R\left[\alpha-a^2\,\frac{d^2\alpha}{da^2}-a\,\frac{d\alpha}{da}\right]+a\,\left[h-a\frac{dh}{da}\right]=0\, , \end{equation} which has solution \begin{equation} \alpha=c_1\,a+\frac{c_2}a\,\qquad{\rm and}\qquad h=\frac{\bar c}a\, , \end{equation} where, being $a$ dimensionless, $c_1$ and $c_2$ have the same dimensions. We can further fix $\alpha$ to be dimensionless; this fixes the dimensions of $\beta$ to be $[\beta]=M^2$. Then also $[\bar c]=M^2$, so that we have \begin{equation} \beta=-\left[3\,c_1+\frac{c_2}{a^2}\right]\frac{f_R}{f_{RR}}+\frac{\bar c}{a\,f_{RR}}\, . \end{equation} We can now insert the expressions for $\alpha$ and $\beta$ into Eq.(\ref{eqz4}), obtaining \begin{equation} f_R=\frac{3\,a\,(c_1\,a^2+c_2)\,f-\bar c\,(a^2\,R+6 \kappa) }{2 a (c_2 R-6 c_1 \kappa )}+\frac{\left(c_1 a^2+c_2\right) \rho_{r0}}{2 a^4 (c_2 R-6 c_1 \kappa )}\, , \label{fRab} \end{equation} if $c_2\,R-6\,\kappa\,c_1\neq0$. It is clear that, for a general $f$, it will not be possible to solve the Euler-Lagrange equations and this constraint at the same time. Therefore we have to use the Noether constraint in order to find the subset of those $f$ which make this possible. As we shall see later, it is convenient to look for a parametric solution in the form $\bigl[H(a),f\bigl(R(a)\bigr)\bigr]$. In this case, since $f_R=f'/R'$, the Noether condition corresponds to the following ODE \begin{equation} \frac{f'(a)}{R'(a)}=\frac{3\,a\,(c_1\,a^2+c_2)\,f(a)-\bar c\,(a^2\,R(a)+6 \kappa) }{2 a (c_2\, R(a)-6 c_1 \kappa )}+\frac{\left(c_1 a^2+c_2\right) \rho_{r0}}{2 a^4 (c_2 R(a)-6 c_1 \kappa )}\, . \label{NT1} \end{equation} It should be noted that this change of variable is defined only if $R'\neq0$, that is, if $R$ is not constant during the evolution. When this happens, Eq. (\ref{eqz4}) or (\ref{fRaF}) sets $a=a_0={\rm constant}$, which corresponds to an uninteresting solution. Any Euler-Lagrange solution, by definition, satisfies the Einstein equations.
However, we will show that there are forms of $f(R)$ for which a subset of those solutions will also be Noether solutions. In fact, Eq.(\ref{fRab}) can also be rewritten as \begin{equation} c_1\,a^2\,(\rho_{r0}+3\,a^4\,f+12\,\kappa\,a^2\,f_R)+c_2\,[\rho_{r0}+a^4\,(3\,f-2\,R\,f_R)]=\bar c\,a^3\,(a^2\,R+6\kappa)\, . \label{fRaF} \end{equation} Therefore we look for a family of solutions which, possessing a Noether symmetry, gives a class of $f(R)$ models. This symmetry implies the existence of the following constant of motion \begin{equation} \alpha\,(6\,f_{RR}\,a^2\,\dot R+12\,f_R\,a\,\dot a) +\beta\,(6\,f_{RR}\,a^2\,\dot a)=6\,\mu^3_0={\rm constant},\label{abete} \end{equation} where $\mu_0$ has the dimensions of a mass. Equation (\ref{abete}) can be recast in the form \begin{equation} \frac{d(f_R)}{dt}=f_{RR}\,\dot R=\frac{\mu^3_0}{a\,(c_1\,a^2+c_2)} +\frac{c_1\,a^2-c_2}{c_1\,a^2+c_2}\,f_R\,H -\frac{\bar c\,a}{c_1\,a^2+c_2}\,H\, , \end{equation} or, using the time-parameter $a$, \begin{equation} a H(a) \left(\frac{f''(a)}{R'(a)}-\frac{f'(a) R''(a)}{R'(a)^2}\right)-\frac{\left(a^2 c_1-c_2\right) H(a) f'(a)}{\left(c_1 a^2+c_2\right) R'(a)}=\frac{\mu_0 ^3}{a \left(c_1 a^2+c_2\right)} -\frac{\bar c\,a}{c_1\,a^2+c_2}\,H(a)\, . \label{NT2} \end{equation} Once Eq.\ (\ref{NT1}) is solved, the Noether constraint is satisfied and the solution $\bigl[H(a),f\bigl(R(a)\bigr)\bigr]$ will automatically solve (\ref{NT2}) as well, for a particular $\mu_0$. Equation (\ref{abete}) can be used to reduce the order of the Friedmann equation. In fact, writing Eq.(\ref{energy}) as \begin{equation} f-6\,f_{RR}\,\dot R\,H -6\,f_R\,H^2-f_R\left(R+\frac{6\kappa}{a^2}\right)-\frac{\rho_{m0}}{a^3}-\frac{\rho_{r0}}{a^4}=0\, ,\end{equation} we have \begin{equation} f- \frac{12\,c_1\,a^2}{c_1\,a^2+c_2}\,f_R\,H^2\, -f_R\left(R+\frac{6\kappa}{a^2}\right) +\frac{6\,\bar c\,a}{c_1\,a^2+c_2}\,H^2=\frac{6\,\mu_0^3\,H}{a\,(c_1\,a^2+c_2)}+\frac{\rho_{m0}}{a^3}+\frac{\rho_{r0}}{a^4}\, , \label{reduc} \end{equation} where $f_R$ is given by (\ref{fRab}).
We will use this relation in order to find exact cosmological solutions. Namely, we will search for solutions depending on the constant of motion $\mu_0$ determined by the Noether symmetry. \section{Exact cosmological solutions}\label{sec5} In order to find exact cosmological solutions, let us discuss the Noether condition Eq.(\ref{fRaF}) and the dynamical system (\ref{energy}),(\ref{frd1}) with respect to the values of the integration constants $c_{1,2}$, the structural parameters $k,\rho_{r0}, \rho_{m0}$ and the Noether charge $\mu_0$. Besides the cosmological solutions, the explicit form of $f(R)$ will also be fixed in the various cases. As we shall see later on, analytical solutions can be easily found when both $\bar c$ and $\mu_0$ vanish at the same time. Therefore, throughout this section, except for one subsection, we will set $\bar c=0$. \subsection{Case $c_1=0$} In this case, the Noether condition (\ref{fRaF}) reduces to \begin{equation} 2\,R\,f_R-3\,f=\frac{\rho_{r0}}{a^4}\, . \label{ntc10} \end{equation} \subsubsection{Vacuum and pure dust case} In vacuum, or in the presence of dust only, i.e.\ $\rho_{r0}=0$, we find \begin{equation} f=f_0\left(\frac{R}{R_0}\right)^{\!3/2}\, . \label{fr32} \end{equation} This solution, for the vacuum case $\rho_{r0}=\rho_{m0}=0$, has already been found \cite{lambiase}. The absence of a ghost imposes that $f_R<0$, i.e.\ $f_0>0$ since $R_0<0$. In the case of dust and no radiation ($\rho_{m0}\neq0,\rho_{r0}=0$), one can substitute Eq.(\ref{fr32}) into (\ref{reduc}) to find \begin{equation} \left(\frac R{R_0}\right)^{\!3/2} +\frac{18\kappa}{a^2\,R_0}\left(\frac R{R_0}\right)^{\!1/2} =-\frac{12\mu_0^3\,H}{c_2\,a\,f_0}-\frac{2\rho_{m0}}{a^3\,f_0}\, . \label{frc10R} \end{equation} \begin{enumerate} \item $k=0$. In this case, for consistency, we need the right-hand side of (\ref{frc10R}) to be positive. If $\mu_0=0$ (the case for which analytical solutions can be given), this is impossible since $f_0>0$; therefore there is no ghost-free solution.
For the more general case $\mu_0^3/c_2<0$, there could be a physical solution: the non-linearity of the equations does not allow us to find analytical solutions in this case. Nevertheless, solutions (to be found numerically) may still exist. \item $k\neq0$. The Ricci scalar can be found as the solution of Eq.\ (\ref{frc10R}). For $\mu_0=0$, we have a cubic equation in $(R/R_0)^{1/2}$, for which a real solution always exists (which may not be positive, though). Looking at equation (\ref{frc10R}), the case $\mu_0=0,k=-1$ has no ghost-free solutions ($f_0<0$). Also the case $\mu_0=0,k=1$ has no solution, because we have \begin{equation} \sqrt{\frac R{R_0}}=\left[\frac{\tilde B_0^{1/3}}{f_0 R_0}-\frac{6\kappa\, f_0 }{\tilde B_0^{1/3}}\right]\frac1a\, , \end{equation} where we have defined the constant \begin{equation} \tilde B_0=\sqrt{f_0^4 \rho_{m0}^2 R_0^6 +216 f_0^6\, \kappa^3\, R_0^3}-f_0^2 \rho_{m0} R_0^3\, , \end{equation} which implies that $(f_0/\rho_{m0})^2\,(\kappa/R_0)^3>-1/216$. If so, then, since $R_0<0$, $\tilde B_0>0$. However, this would lead to a negative value for $(R/R_0)^{1/2}$. \end{enumerate} \subsubsection{Dust and radiation case} In this case we have \begin{equation} f_R=\frac32\,\frac{f}{R}+\frac{\rho_{r0}}{2\,a^4\,R}\, .\label{reduc2bis} \end{equation} Once again, in order to have $f_R<0$ and $R<0$ during the evolution of the universe, one requires \begin{equation} f>-\frac{\rho_{r0}}{3\,a^4}\, .\label{constrN1} \end{equation} If we substitute the expression for $f_R$ into the reduced Friedmann Eq.(\ref{reduc}) we find \begin{equation} f=-\frac{12\, \mu^3_0 \,a\,H\,R}{c_2 \left(R\, a^2+18 \kappa\right)}-\frac{6 \kappa\, \rho_{r0}}{a^4 \left(R a^2+18 \kappa\right)}-\frac{3 \rho_{r0} R}{a^2 \left(R a^2+18 \kappa\right)}-\frac{2 \rho_{m0} R}{a \left(R a^2+18 \kappa\right)}\, . \label{fc10} \end{equation} This relation gives $f$ as a function of $a$, since $R=R(a)$. It must be $c_2\neq0$, otherwise the Noether condition becomes trivial.
This expression can be inserted back into (\ref{reduc2bis}). Assuming $R=R(a)$ to be a monotonic function of $a$, one finds that $f_R=(df/da) / (dR/da)$, and equation (\ref{ntc10}) becomes a differential equation for $R(a)$, which can be written as \begin{eqnarray} R'&=& \frac6{a^3 \left(18 a^3 H \mu_0 ^3+4 c_2 \rho_{r0}+3 a c_2 \rho_{m0}\right) \left(R a^2+6 \kappa\right)}\times \{-R^2 \,[2 a^3 \,(H-a H') \mu^3_0\notag\\ &+&c_2(2 \rho_{r0}+a \rho_{m0})] a^4 +6 \kappa R [6 a^3 \mu ^3_0\,(H+a H')-c_2 (4 \rho_{r0}+a \rho_{m0})] a^2 -72 c_2 \kappa^2 \rho_{r0}\}\, , \label{Rpc10} \end{eqnarray} where the prime denotes differentiation with respect to the scale factor $a$. Eq.(\ref{Rpc10}) can be further rewritten as a second order differential equation in $H(a)$, by using equation (\ref{frd1}), \begin{equation} R=-12\,H^2-6\,a\,H\,H'-6\,\frac{\kappa}{a^2}\, . \label{Ra} \end{equation} Substituting (\ref{Ra}) into (\ref{Rpc10}) one finds \begin{eqnarray} H''&=&-\frac1{a^4 H^2 \left(18 a^3 H \mu_0 ^3+4 c_2 \rho_{r0}+3 a c_2 \rho_{m0}\right)}\times\{24 a \kappa^2 \mu_0 ^3+H [a^2 \{a^2 (6 a^3 H \mu_0 ^3+4 c_2 \rho_{r0}\notag\\ &&{}+3 a c_2 \rho_{m0}) {H'}^2+a[12 a \kappa \mu_0 ^3+H (78 a^3 H \mu_0 ^3+32c_2 \rho_{r0}+21 a c_2 \rho_{m0})] H'\notag\\ &&{}+12 H[2 a \kappa \mu_0 ^3+H (2 a^3 H \mu_0 ^3+2 c_2\rho_{r0}+a c_2 \rho_{m0})]\}-8 c_2 \kappa\rho_{r0}]\}\, . \label{CTc10} \end{eqnarray} This differential equation selects those $f(R)$ models which satisfy, at the same time, both the Friedmann equation and the Noether condition. It has to be stressed that, having chosen $a$ as the time variable, finding the $H(a)$'s which solve (\ref{CTc10}) uniquely fixes the metric tensor. Hence, $H(a)$ represents a full exact solution of the Einstein equations. Of course, if one wants to know the link between $a$ and the proper time, $a=a(t)$, one needs to find the integral $t=\int da/(aH)$.
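The last integral can be evaluated numerically once $H(a)$ is known; a minimal sketch (assuming, purely for illustration, a toy matter-dominated history $H(a)=H_0\,a^{-3/2}$, for which $t(a)=\tfrac{2}{3H_0}\,a^{3/2}$ analytically):

```python
from scipy.integrate import quad

H0 = 1.0  # hypothetical Hubble normalization (illustrative value)

def H(a):
    # toy matter-dominated expansion history, for illustration only
    return H0 * a**(-1.5)

# proper time as a function of the scale factor: t(a) = int da / (a H(a))
t_of_1, _ = quad(lambda a: 1.0 / (a * H(a)), 1e-12, 1.0)
# analytic check for this toy H(a): t(1) = 2/(3 H0)
```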
The case $\mu_0=0$ is interesting as it allows us to find analytical solutions, since the differential equation becomes linear (and of second order) in the variable $H^2$. In this case, the solution of the equation will be a family $H=H(a,d_1,d_2,c_2,\mu_0,\kappa,\rho_{r0},\rho_{m0})$, where $d_{1,2}$ are two constants coming from the integration of Eq.(\ref{CTc10}). In turn, by using Eq.(\ref{Ra}), it is possible to define a function $R=R(a,d_1,d_2,c_2,\mu_0,\rho_{r0},\rho_{m0})$, which can then be substituted into Eq.(\ref{fc10}) in order to find the explicit parametric form of $f(R)$, i.e. $f=f(a,d_1,d_2,c_2,\mu_0,\rho_{r0},\rho_{m0})$. In other words, we find the explicit parametric form of $f(R)$, where the parameter used to describe the $f(R)$ is the scale factor $a$ (see also \cite{mimicking} for a comparison with observations; in that case, however, the adopted $f(R)$ models were constructed from phenomenological considerations and not derived from a first principle, such as the existence of symmetries discussed here). We can distinguish some relevant cases. \begin{enumerate} \item $k=0$, $\mu_0=0$. In this case, by exactly integrating equation (\ref{CTc10}), we find \begin{equation} H^2=d_2\,\frac{d_1+8\,a\,\rho_{r0}+3\,\rho_{m0}\,a^2}{a^4}\, , \end{equation} where $d_{1,2}$ are integration constants, with $[d_1]=M^4$ and $[d_2]=M^{-2}$. This expression for $H(a)$, together with (\ref{fc10}) and (\ref{Ra}), forms a solution for the set of ODE's (\ref{FRDA}) and (\ref{NT1}), so that Eq.\ (\ref{NT2}) is satisfied giving $\mu_0=0$. Although this solution is analytical, it cannot be accepted because it allows for a negative Newton constant. In fact, equation (\ref{constrN1}) cannot be satisfied by equation (\ref{fc10}) if $k=0,\mu_0=0$. However, the non-linear case $\mu_0^3/c_2<0$ could actually lead to physical solutions (to be discussed elsewhere in a forthcoming paper). For the same reason, also the case $k=-1,\mu_0=0$ should be rejected. \item $k=1$, $\mu_0=0$.
As far as $R<-18\kappa/a^2$, the second term in the l.h.s.\ of equation (\ref{fc10}) becomes positive, allowing for the possibility of finding a physical solution. The integration of (\ref{CTc10}) leads to \begin{equation} H^2=\left(\sqrt2\,d_1-\frac{32\,\rho_{r0}^2\,\kappa}{9\rho_{m0}^2}\right)\frac1{a^4} +\left(8\,d_2\,\rho_{r0}-\frac{16\rho_{r0}\,\kappa}{3\rho_{m0}}\right)\frac1{a^3} +\frac{3\,d_2\,\rho_{m0}}{a^2}\, , \end{equation} with $[d_1]=M^2$, and $[d_2]=M^{-2}$. In order to find $d_1$ and $d_2$ one can fit this formula with the standard Friedmann equation of GR with only matter, radiation and curvature. Therefore, one has to consider \begin{eqnarray} \sqrt2\,d_1-\frac{32\,\rho_{r0}^2\,\kappa}{9\rho_{m0}^2}&=&H_0^2\,\Omega_{r0}^{\rm eff}\, ,\\ 8\,d_2\,\rho_{r0}-\frac{16\rho_{r0}\,\kappa}{3\rho_{m0}}&=&H_0^2\,\Omega_{m0}^{\rm eff}\, ,\\ 3\,d_2\,\rho_{m0}&=&H_0^2\,\Omega_{k0}^{\rm eff}\, , \end{eqnarray} but this system admits no solutions, as one finds \begin{equation} \kappa=\tfrac12\,H_0^2\,\Omega_{k0}^{\rm eff}-\tfrac3{16}\,\frac{\rho_{m0}}{\rho_{r0}}\,H_0^2\,\Omega_{m0}^{\rm eff}<0\, \end{equation} using today's data \cite{Spergel:2006hy}. \end{enumerate} \subsection{Case $c_2=0$} In this case, the Noether condition (\ref{fRaF}) reduces to \begin{equation} \rho_{r0}+3\,a^4\,f+12\,\kappa\,a^2\,f_R=0\, . \label{ntc20} \end{equation} \subsubsection{Vacuum and dust only case} In this case we have $\rho_{r0}=0$, and a flat universe cannot be a solution, as one would obtain $f=0$. Considering $k\neq0$ one finds \begin{equation} f_R=-\frac{a^2\,f}{4\,\kappa}\, .\label{frigo} \end{equation} Since $f_R<0$, $f$ is positive when $k<0$ and vice versa. Substituting this into the Friedmann equation one finds \begin{equation} \{a^3 c_1 [(12 H^2+R)\, a^2+10 \kappa ]\}\,f=4 \kappa\,(6 H \mu_0^3+c_1 \rho_{m0})\, . \end{equation} Restricting ourselves only to the study of the simple and linear case of a vanishing $\mu_0$, we can distinguish two cases \begin{enumerate} \item $\rho_{m0}=0,\mu_0=0$.
In this case one needs to impose \begin{equation} R=-12H^2-10\,\frac\kappa{a^2}\, , \end{equation} which, together with the definition of $R$, gives \begin{equation} H^2=2 d_1-\frac{2\kappa}{3a^2}\, , \end{equation} where $d_1$ is a constant of integration with dimensions $M^2$. This behavior describes a universe with only a cosmological constant and curvature. Equation (\ref{ntc20}) can now be solved for $f(a)$, giving \begin{equation} f=\frac{d_2}a=d_2\left[-\frac{R+24d_1}{2\kappa}\right]^{1/2}\, , \end{equation} where $d_2$ is a constant of integration with dimensions $M^4$. \item $\rho_{m0}\neq0,\mu_0=0$. In this case the Friedmann equation and (\ref{frigo}) give \begin{equation} f=\frac{4 \kappa \, \rho_{m0}}{\left(12 H^2+R\right) a^5+10 \kappa a^3}\, . \end{equation} Substituting this expression in (\ref{frigo}), and using the definition of $R$ in terms of $H(a)$, one finds a linear 2nd order differential equation in $H^2(a)$, which has solution \begin{equation} H^2=\frac{d_1}{2a^4}+2d_2-\frac{2\kappa}{3a^2}\, , \end{equation} where $d_{1,2}$ are integration constants, and $[d_1]=[d_2]=M^2$. Therefore one has \begin{eqnarray} R&=&-24 d_2-\frac{2\kappa}{a^2}\,,\\ f&=&-\frac{2\kappa\,\rho_{m0}}{3\,a\,d_1}\, . \end{eqnarray} \end{enumerate} \subsubsection{Radiation and dust case} Also in this case, we have three possibilities, according to the values of $k$. \begin{enumerate} \item $k=0.$ In this case one finds that \begin{equation} f=-\frac{\rho_{r0}}{3 a^4}\, . \end{equation} Therefore we have \begin{equation} f_R=\frac{f'}{R'}=\tfrac43\,\frac{\rho_{r0}}{a^5\,R'}\, . \end{equation} A well-behaved background evolution requires, with our conventions, $R'>0$, so that $f_R>0$. This means a negative effective Newton constant, i.e.\ the solution cannot be accepted. \item $k\neq0$.
In this case, using equation (\ref{ntc20}) one finds \begin{equation} f_R=-\frac{\rho_{r0}}{12\,\kappa\,a^2}-\frac{f\,a^2}{4\kappa}\, , \end{equation} and then, using the Friedmann equation (\ref{reduc}), one can solve for $f$, as follows \begin{equation} f = \frac{-c_1 \left(12 H^2+R\right) \rho_{r0} a^2+12 \kappa \left(6 H \mu_0^3+c_1 \rho_{m0}\right) a+6 c_1 \kappa \rho_{r0}}{3 a^4 c_1 \left[\left(12 H^2+R\right) a^2+10 \kappa \right]}\, . \end{equation} By plugging this relation into the Noether condition (\ref{ntc20}), and using the definition of $R$ in terms of $H, H'$, and $a$, one finds the following differential equation for $H(a)$ \begin{eqnarray} H''&=&\frac{a H \left[-\left(18 a H \mu_{0}^3+3 a c_1 \rho_{m0}+4 c_1\rho_{r0}\right) {H'}^2 a^4-3 \left(a H \left(30 a H \mu_{0}^3+5 a c_1 \rho_{m0}+8 c_1 \rho_{r0}\right)-4 \kappa \mu_{0}^3\right) H'\, a^2\right.}{a^5 H^2 \left(18 a H \mu_{0}^3+3 a c_1 \rho_{m0}+4 c_1 \rho_{r0}\right)}\notag\\ &&{}+\frac{\left.4 \kappa \left(6 a H \mu_{0}^3+a c_1 \rho_{m0}+2 c_1 \rho_{r0}\right)\right]-8 \kappa ^2 \mu_{0}^3}{a^5 H^2 \left(18 a H \mu_{0}^3+3 a c_1 \rho_{m0}+4 c_1 \rho_{r0}\right)}\, . \label{eqzc20} \end{eqnarray} In the case $\mu_0=0,\rho_{m0}\neq0$, this differential equation can be exactly integrated to give \begin{equation} H^2= \frac{256 \kappa \rho_{r0}^3}{405 a^5 \rho_{m0}^3}+\frac{16 \kappa \rho_{r0}^2}{27 a^4 \rho_{m0}^2}+\frac{8 d_1 \rho_{r0}}{5 a^5}-\frac{2 \kappa }{3 a^2}+\frac{3 \rho_{m0} d_1}{2 a^4}+2 d_2\, , \end{equation} where $d_{1,2}$ are two constants of integration with dimensions $[d_1]=M^{-2}=[d_2]^{-1}$. It is interesting to note the presence of a new cosmological term in this Friedmann equation, scaling as $a^{-5}$, which would correspond to a matter component with equation-of-state parameter $w=2/3$. If $\mu_0=0,\rho_{m0}=0$, i.e.\ a universe filled with radiation only, equation (\ref{eqzc20}) has the following solution \begin{equation} H^2=2d_2+\frac{2d_1}{5a^5}-\frac{2\kappa}{3a^2}\, , \end{equation} with $[d_1]=[d_2]=M^2$.
\end{enumerate} \subsection{Case $c_1,c_2\neq0$} In this case, one can divide equation (\ref{fRaF}) by $c_1$, finding \begin{equation} f_R=\frac{a^2+c_3}{c_3\,R-6\,k}\,\frac{\rho_{r0}+3\,a^4\,f}{2\,a^4}\, ,\label{fRabis} \end{equation} where $c_3=c_2/c_1\neq0$. This implies that \begin{equation} f_{RR}\,\dot R=\frac{\tilde\mu^3_0}{a\,(a^2+c_3)} +\frac{a^2-c_3}{a^2+c_3}\,f_R\,H\, , \end{equation} where $\tilde\mu_0^3=\mu_0^3/c_1$. The Friedmann equation (\ref{reduc}) can be rewritten as \begin{equation} f- \frac{12\,a^2}{a^2+c_3}\,f_R\,H^2\, -f_R\left(R+\frac{6k}{a^2}\right)=\frac{6\tilde\mu_0^3\,H}{a\,(a^2+c_3)}+\frac{\rho_{m0}}{a^3}+\frac{\rho_{r0}}{a^4}\, . \label{reducbis} \end{equation} By substituting (\ref{fRabis}) into (\ref{reducbis}), and solving for $f$, one finds \begin{eqnarray} f&=& \frac{12\,\tilde \mu_0^3\, a^5\, H\, (6 k-c_3\, R)}{a^4 \left(a^2+c_3\right) [3 \left(12 H^2+R\right) a^4+(30 k+c_3 R)\, a^2+18 c_3 k]}\notag\\ &&{}-\frac{\rho_{r0} \left(12 H^2+R\right) a^4+2 \rho_{m0}\, (c_3 R-6 k)\, a^3+3 \rho_{r0}\, (c_3 R-2 k)\, a^2+6 c_3 k \rho_{r0}}{a^4\, [3 \left(12 H^2+R\right) a^4+(30 k+c_3 R)\, a^2+18 c_3 k]}\, , \label{genFa} \end{eqnarray} which means that the Noether symmetry, combined with the dynamics, determines the form of $f$. In this case $f$ is a function of $a$, since both $R$ and $H$ are functions of $a$. We can still go further by using the same trick used in the previous section, i.e.\ considering $f$ as an implicit function of $a$ in the Noether condition (\ref{fRabis}). Since $f=f(R(a))$ one finds \begin{equation} f_R=\frac{df}{dR}=\frac{da}{dR}\,\frac{df}{da}=\frac{f'}{R'}\, .
\label{dfdae} \end{equation} Plugging Eqs.(\ref{genFa}) and (\ref{dfdae}) into (\ref{fRabis}), one finds a second order differential equation for $H$, as follows \begin{eqnarray} H''&=&\frac1{a^4 (a^2+c_3)\,(3 a^2+c_3)\, H^2 \,[18\, \tilde\mu_0^3\, H\, a^3+(a^2+c_3)\, (4 \rho_{r0}+3 a \rho_{m0})]}\notag\\ &&{}\times\left\{-24 c_3 \,(3 a^2+c_3)\, \tilde\mu_0^3\, H^4 \,a^5-24 \,(a^2+c_3)^2\, k^2\, \tilde\mu_0^3\,a\right.\notag\\ &&-H^2 \left[6 \,(3 a^2+c_3)^2\, \tilde\mu_0^3\, {H'}^2\, a^4 +24 \,(-3 a^4-2 c_3\,a^2+c_3^2)\, k\, \tilde\mu_0^3\right. \notag\\ &&\left.{}+(a^2+c_3)^2 \,(45\, \rho_{m0}\, a^3+72 \,\rho_{r0}\, a^2+21\, c_3\, \rho_{m0}\, a+32\, c_3\,\rho_{r0})\, H'\right] a^3\notag\\ &&{}-6 H^3 \left[(3 a^2+c_3) \,(15 a^2+13 c_3)\, \tilde\mu_0^3 \,H'\, a^4+2 c_3 \,(a^2+c_3)^2\, (2 \rho_{r0}+a\rho_{m0})\right] a^2\notag\\ &&{}-(a^2+c_3)\, H\,\bigl[a^4\, H'\, [12 (c_3-3 a^2)\, k\, \tilde\mu_0^3 +(a^2+c_3)\, (3 a^2+c_3) \, (4 \rho_{r0}+3 a \rho_{m0})\, H']\notag\\ &&\left.{}-4 \,(a^2+c_3)\, k\, (3 \rho_{m0} a^3+6 \rho_{r0} a^2+2 c_3 \rho_{r0})\bigr]\right\}\, . \label{FRDNT} \end{eqnarray} This differential equation defines the dynamics of the Noether solutions for a generic $f(R)$ model compatible with the Noether symmetry. This result is relevant since there is a free parameter $c_3$ which, together with the initial conditions $H_0$ and $H_0'$, uniquely specifies the dynamics. This non-linear ODE is still of second order in $H(a)$, as is the $0,0$-Einstein equation for any $f(R)$ theory. However, there is a huge improvement, as this equation is independent of the explicit form of $f(R)$, having as the only unknown parameters two real numbers, $c_3$ and $\mu_0$, the Noether charge. This also says that for any value of the Noether charge there is a solution, the solution of (\ref{FRDNT}). Therefore all the solutions of (\ref{FRDNT}), as $c_3,\mu_0$ vary, represent the whole set of Noether-charged cosmological solutions of the $f(R)$ theories.
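In practice, equations of this type, $H''=F(a,H,H')$, can be integrated numerically by reduction to a first-order system; a minimal sketch with scipy (the right-hand side below is a simple placeholder, not the full expression, and all numerical values are illustrative assumptions):

```python
from scipy.integrate import solve_ivp

def rhs(a, y):
    # y = [H, H']; the right-hand side H'' = -3 H'/a is a simple placeholder
    # standing in for the full Noether master equation F(a, H, H')
    H, Hp = y
    return [Hp, -3.0 * Hp / a]

# hypothetical initial data H(1) = 1, H'(1) = 1, integrated up to a = 2
sol = solve_ivp(rhs, (1.0, 2.0), [1.0, 1.0], rtol=1e-10, atol=1e-12)
H_end, Hp_end = sol.y[0, -1], sol.y[1, -1]
# for this placeholder, H(a) = 3/2 - 1/(2 a^2), so H(2) = 1.375
```

For the real case, the placeholder right-hand side would be replaced by the full expression, with $c_3$ and the Noether charge entering as parameters of `rhs`.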
\subsubsection{Vacuum and pure dust case} In this case equation (\ref{fRabis}) reduces to \begin{equation} f_R=\frac{3 f \left(a^2+c_3\right)}{2 \left(R\, c_3-6 \kappa \right)}\, , \label{relais1} \end{equation} whereas $f$ can be written as \begin{equation} f=\frac{2 \left(6 \kappa -R c_3\right) \left(\left(6 H \mu_{0}^3+\rho_{m0}\right) a^2+\rho_{m0} c_3\right)}{a \left(a^2+c_3\right) \left(3 \left(12 H^2+R\right) a^4+\left(30 \kappa +R c_3\right) a^2+18 \kappa c_3\right)}\, . \end{equation} The case $\rho_{m0}=0,\mu_0=0$ admits no solutions, therefore, as before, we will only discuss the case $\mu_0=0,\rho_{m0}\neq0$, for which we can recast $f$ in the following form \begin{equation} f=\frac{2 \rho_{m0} \left(6 \kappa -R c_3\right)}{a \left[3 \left(12 H^2+R\right) a^4+\left(30 \kappa +R c_3\right) a^2+18 \kappa c_3\right]}\, . \end{equation} Inserting this relation into (\ref{relais1}) together with the definition of $R$ one finds \begin{equation} H''=\frac{-4 c_3 H^2-a \left(15 a^2+7 c_3\right) H'\,H -a^2 \left(3 a^2+c_3\right) {H'}^2+4 \kappa }{a^2 \left(3 a^2+c_3\right)H}\, , \end{equation} whose general solution reads \begin{equation} H^2=-\frac{c_3 \kappa }{9 a^4}-\frac{2 \kappa }{3 a^2}+\frac{2 d_1}{a^4}+\frac{2 c_3 d_2}{a^2}+3 d_2\, . \end{equation} \subsubsection{Pure radiation case} Once again, restricting Eq.\ (\ref{FRDNT}) to the case $\mu_0=0$ and $\rho_{m0}=0$, we find the following equation \begin{equation} \bigl(H^2\bigr)''=-\frac{18 a^2+8 c_3}{a \left(3 a^2+c_3\right) }\,\bigl(H^2\bigr)' -\frac{12\,c_3\, H^2}{a^2 \left(3 a^2+c_3\right)} +\frac{2k(6 a^2+2c_3)}{a^4 \left(3 a^2+c_3\right)}\, .
\label{genc1c2mu0rhom0} \end{equation} The general solution of this ODE, for $c_3>0$, is \begin{eqnarray} H^2&=&\frac{3 c_3 d_1}{a^4}+\frac{27 d_1}{c_3}+\frac{18 d_1}{a^2}+\frac{5 \sqrt{3} \sqrt{c_3} d_2}{a^3}+\frac{9 \sqrt{3} d_2}{a \sqrt{c_3}}+\frac{4 \kappa }{c_3}+\frac{2 \kappa }{a^2}\notag\\ &&{}+\frac{3 c_3 d_2 \arctan\!\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right)}{a^4}+\frac{27 d_2 \arctan\!\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right)}{c_3}+\frac{18 d_2 \arctan\!\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right)}{a^2}\, , \end{eqnarray} whereas, for $c_3<0$, one finds \begin{eqnarray} H^2&=&\frac{3 c_3 d_1}{a^4}+\frac{27 d_1}{c_3}+\frac{18 d_1}{a^2}-\frac{5 \sqrt{3} \sqrt{-c_3} d_2}{a^3}+\frac{9 \sqrt{3} d_2}{a \sqrt{-c_3}}+\frac{4 \kappa }{c_3}+\frac{2 \kappa }{a^2}\notag\\ &&{}+\frac{3 c_3 d_2 {\rm arctanh}\!\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right)}{a^4}+\frac{27 d_2 {\rm arctanh}\!\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right)}{c_3}+\frac{18 d_2 {\rm arctanh}\!\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right)}{a^2}\, . \end{eqnarray} Either expression for $H(a)$, together with Eq.\ (\ref{genFa}) and Eq.\ (\ref{Ra}), forms a solution for (\ref{FRDA}) and (\ref{NT1}), and possesses vanishing Noether charge $\mu_0=0$. \subsubsection{Matter and Radiation case} Let us restrict our study to the case $\tilde\mu_0=0$, for which we can find analytical solutions. Eq.(\ref{FRDNT}) reduces to \begin{eqnarray} \bigl(H^2\bigr)''&=&-\frac{\left(45 \rho_{m0} a^3+72 \rho_{r0} a^2+21 c_3 \rho_{m0} a+32 c_3 \rho_{r0}\right)}{a \left(3 a^2+c_3\right) (4 \rho_{r0}+3 a \rho_{m0})}\,\bigl(H^2\bigr)'\notag\\ &&{}-\frac{24\,c_3\left( \rho_{m0} a+2 \rho_{r0} \right) H^2}{a^2 \left(3 a^2+c_3\right) (4 \rho_{r0}+3 a \rho_{m0})} +\frac{8k(3 \rho_{m0} a^3+6 \rho_{r0} a^2+2 c_3 \rho_{r0})}{a^4 \left(3 a^2+c_3\right) (4 \rho_{r0}+3 a \rho_{m0})}\, . \label{genc1c2mu0} \end{eqnarray} It is remarkable that this differential equation is linear in $H^2$. This makes the problem of solving it much easier.
In fact, analytical solutions for $k=0,\pm 1$ can be achieved. Let us discuss them. \begin{enumerate} \item $k=0$. The solution of Eq.(\ref{genc1c2mu0}) is \begin{eqnarray} H^2&=&\frac{4 d_1 d_2 c_3^{9/2}}{a^4}+\frac{24 d_1 d_2 c_3^{7/2}}{a^2}-\frac{\rho_{m0} d_2 c_3^{5/2}}{a^4}+36 d_1 d_2 c_3^{5/2}\notag\\ &&{}+\frac{2 \sqrt{3} \rho_{r0} \arctan\!\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) d_2 c_3^2}{a^4}+\frac{10 \rho_{r0} d_2 c_3^{3/2}}{a^3} +\frac{12 \sqrt{3} \rho_{r0} \arctan\!\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) d_2 c_3}{a^2}\notag\\ &&{}+\frac{18 \rho_{r0} d_2 \sqrt{c_3}}{a}+18 \sqrt{3} \rho_{r0} \arctan\!\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) d_2\, , \end{eqnarray} where $d_1$ and $d_2$ are integration constants with dimensions $[d_1]=M^4$ and $[d_2]=M^{-2}$. This is clearly a deviation from standard GR, because there is a $1/a$ term, which leads to an accelerated behavior if it dominates. Furthermore, there are terms, all involving $\rho_{r0}$, which include the arctangent of $a$ (here $c_3$ is assumed to be positive). These terms have different behavior at low and high redshift. In fact, since ${\displaystyle\lim_{a\to0}\arctan(a)\sim a}$, at high redshifts these terms behave as dust, $1/a$ and $a$ respectively, and are subdominant with respect to the radiation. On the other hand, since ${\displaystyle \lim_{a\to\infty}\arctan(a)\sim\pi/2}$ for large and positive $a$, these terms will behave as radiation, curvature and cosmological constant respectively. It is also interesting to notice that, in order to have a true dust matter component at late times, it has to be \begin{equation} 10\,\rho_{r0}\,d_2\,c_3^{3/2}=\frac{8\pi G}3\,\rho_{m0}\, .\label{dmk0} \end{equation} This means that $\rho_{r0}$ behaves as the source of the matter component in this modified Friedmann equation. A cosmological constant term is also present. It is determined by the integration constants of the Noether condition.
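The two limiting behaviors of the arctangent terms quoted above can be checked symbolically (a quick verification sketch):

```python
import sympy as sp

a, c3 = sp.symbols('a c_3', positive=True)
x = sp.sqrt(3) * a / sp.sqrt(c3)

# small-a behavior: arctan(x) ~ x, so each arctangent term gains a power of a
small_a = sp.limit(sp.atan(x) / x, a, 0)
# large-a behavior: arctan(x) -> pi/2, a constant
large_a = sp.limit(sp.atan(x), a, sp.oo)
```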
As for the case $c_3<0$, the solution of Eq.\ (\ref{genc1c2mu0}) can be written as follows \begin{eqnarray} H^2&=&-\frac{4 d_1 d_2 (-c_3)^{9/2}}{a^4}+\frac{24 d_1 d_2 (-c_3)^{7/2}}{a^2}+\frac{\rho_{m0} d_2 (-c_3)^{5/2}}{a^4}-36 d_1 d_2 (-c_3)^{5/2}\notag\\ &&{}+\frac{2 \sqrt{3} \rho_{r0} {\rm arctanh}\!\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) d_2 c_3^2}{a^4}+\frac{10 \rho_{r0} d_2 (-c_3)^{3/2}}{a^3} +\frac{12 \sqrt{3} \rho_{r0} {\rm arctanh}\!\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) d_2 c_3}{a^2}\notag\\ &&{}-\frac{18 \rho_{r0} d_2 \sqrt{-c_3}}{a}+18 \sqrt{3} \rho_{r0} {\rm arctanh}\!\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) d_2\, .\label{solzH} \end{eqnarray} For this solution, as a pedagogical example, more detailed calculations and a link with scalar-tensor theories are given in the appendix. \item $k\neq0$. The general solution is \begin{eqnarray} H^2&=&-\frac{32 \kappa \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right)\rho_{r0}^3}{9 \sqrt{3} a^4 \rho_{m0}^3 \sqrt{c_3}}-\frac{160 \kappa \rho_{r0}^3}{27 a^3 \rho_{m0}^3 c_3}-\frac{64 \kappa \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) \rho_{r0}^3}{3 \sqrt{3} a^2 \rho_{m0}^3 c_3^{3/2}}-\frac{32 \kappa \rho_{r0}^3}{3 a \rho_{m0}^3 c_3^2}-\frac{32 \kappa \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) \rho_{r0}^3}{\sqrt{3} \rho_{m0}^3 c_3^{5/2}}\notag\\ &&{}-\frac{16 \kappa \rho_{r0}^2}{3 a^2 \rho_{m0}^2 c_3}-\frac{8 \kappa \rho_{r0}^2}{27 a^4 \rho_{m0}^2} -\frac{8 \kappa \rho_{r0}^2}{\rho_{m0}^2 c_3^2}+\frac{\sqrt{3} \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) d_2 \rho_{r0}}{2 a^4 c_3^{5/2}}+\frac{5 d_2 \rho_{r0}}{2 a^3 c_3^3}+\frac{3 \sqrt{3} \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) d_2 \rho_{r0}}{a^2 c_3^{7/2}}\notag\\ &&{}+\frac{9 d_2 \rho_{r0}}{2 a c_3^4}+\frac{9 \sqrt{3} \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) d_2 \rho_{r0}}{2 c_3^{9/2}}-\frac{2 \kappa \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) \sqrt{c_3} \rho_{r0}}{\sqrt{3} a^4 \rho_{m0}}-\frac{4 \sqrt{3} \kappa
\arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) \rho_{r0}}{a^2 \rho_{m0} \sqrt{c_3}}\notag\\ &&{}-\frac{10 \kappa \rho_{r0}}{3 a^3 \rho_{m0}}-\frac{6 \kappa \rho_{r0}}{a \rho_{m0} c_3}-\frac{6 \sqrt{3} \kappa \arctan\left(\frac{\sqrt{3} a}{\sqrt{c_3}}\right) \rho_{r0}}{\rho_{m0} c_3^{3/2}}-\frac{2 \kappa }{3 a^2}-\frac{\kappa c_3}{9 a^4}+\frac{6 d_1}{a^2 c_3}+\frac{9 d_1}{c_3^2}+\frac{d_1}{a^4}-\frac{\rho_{m0} d_2}{4 a^4 c_3^2}\, . \end{eqnarray} Also in these cases we have interesting behaviors matching the main cosmological eras. The integration constants $d_{1,2}$ have dimensions respectively $[d_1]=M^2$, and $[d_2]=M^{-2}$. The analysis, for both this and the previous case ($k=0$), of the set of parameters $\{d_1,d_2, c_3\}$ which can be bounded by observations will be done in a forthcoming paper. Eq.\ (\ref{genc1c2mu0}), for the case $c_3<0$, has solution \begin{eqnarray} H^2&=&\frac{32 \kappa {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right)\rho_{r0}^3}{9 \sqrt{3} a^4 \rho_{m0}^3 \sqrt{-c_3}}-\frac{160 \kappa \rho_{r0}^3}{27 a^3 \rho_{m0}^3 c_3}-\frac{64 \kappa {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) \rho_{r0}^3}{3 \sqrt{3} a^2 \rho_{m0}^3 (-c_3)^{3/2}}-\frac{32 \kappa \rho_{r0}^3}{3 a \rho_{m0}^3 c_3^2}+\frac{32 \kappa {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) \rho_{r0}^3}{\sqrt{3} \rho_{m0}^3 (-c_3)^{5/2}}\notag\\ &&{}-\frac{16 \kappa \rho_{r0}^2}{3 a^2 \rho_{m0}^2 c_3}-\frac{8 \kappa \rho_{r0}^2}{27 a^4 \rho_{m0}^2} -\frac{8 \kappa \rho_{r0}^2}{\rho_{m0}^2 c_3^2}-\frac{\sqrt{3} {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) d_2 \rho_{r0}}{2 a^4 (-c_3)^{5/2}}+\frac{5 d_2 \rho_{r0}}{2 a^3 c_3^3}+\frac{3 \sqrt{3} {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) d_2 \rho_{r0}}{a^2 (-c_3)^{7/2}}\notag\\ &&{}+\frac{9 d_2 \rho_{r0}}{2 a c_3^4}-\frac{9 \sqrt{3} {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) d_2 \rho_{r0}}{2 (-c_3)^{9/2}}-\frac{2 \kappa {\rm arctanh}\left(\frac{\sqrt{3} 
a}{\sqrt{-c_3}}\right) \sqrt{-c_3} \rho_{r0}}{\sqrt{3} a^4 \rho_{m0}}+\frac{4 \sqrt{3} \kappa {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) \rho_{r0}}{a^2 \rho_{m0} \sqrt{-c_3}}\notag\\ &&{}-\frac{10 \kappa \rho_{r0}}{3 a^3 \rho_{m0}}-\frac{6 \kappa \rho_{r0}}{a \rho_{m0} c_3}-\frac{6 \sqrt{3} \kappa {\rm arctanh}\left(\frac{\sqrt{3} a}{\sqrt{-c_3}}\right) \rho_{r0}}{\rho_{m0} (-c_3)^{3/2}}-\frac{2 \kappa }{3 a^2}-\frac{\kappa c_3}{9 a^4}+\frac{6 d_1}{a^2 c_3}+\frac{9 d_1}{c_3^2}+\frac{d_1}{a^4}-\frac{\rho_{m0} d_2}{4 a^4 c_3^2}\, . \end{eqnarray} \end{enumerate} It is worth noting that once the free parameters are constrained by the data (the set of allowed parameters might be empty anyhow), one can select physically interesting $f(R)$ models as in \cite{mimicking}. \subsubsection{Non-linear case, $\tilde\mu_0\neq0$} In this more general case, Eq.\ (\ref{FRDNT}) cannot be written as a linear differential equation in $H^2$, and therefore it is not possible to obtain a general analytical solution. However, after fixing initial conditions for $H$ and giving suitable values for the parameters, one can solve it numerically. These initial conditions fix, in turn, the $f(R)$ model and the behavior of $H(a)$. \subsubsection{General non-linear case, $\bar c\neq0$ and $\tilde\mu_0\neq0$} By using Eq. 
(\ref{fRab}) inside Eq.\ (\ref{reduc}) one finds the following expression for $f$ \begin{eqnarray} f&=&\frac{c_1 \bar c R \left(12 H^2+R\right) a^5}{\left(c_1 a^2+c_2\right) \Delta}+\frac{\bar c R \left(12 c_2 H^2+12 c_1 \kappa +c_2 R\right) a^3}{\left(c_1 a^2+c_2\right) \Delta}\notag\\ &&{}+\frac{2 \left(36 c_1 \kappa H \mu_0 ^3-6 c_2 H R \mu_0 ^3+18 c_1 \bar c \kappa ^2+6 c_1^2 \kappa \rho_{m0}+6 c_2 \bar c \kappa R-c_1 c_2 \rho_{m0} R\right) a}{\left(c_1 a^2+c_2\right) \Delta}\notag\\ &&{}-\frac{2 c_2 \left(-18 \bar c \kappa ^2-6 c_1 \rho_{m0} \kappa +c_2 \rho_{m0} R\right)}{\left(c_1 a^2+c_2\right) \Delta\, a}-\frac{\rho_{r0} \left(12 c_1 H^2 a^4+c_1 R a^4-6 c_1 \kappa a^2+3 c_2 R a^2+6 c_2 \kappa \right)}{\Delta\, a^4}\, , \end{eqnarray} where \begin{equation} \Delta=36 c_1 H^2 a^4+3 c_1 R a^4+30 c_1 \kappa a^2+c_2 R a^2+18 c_2 \kappa\, . \end{equation} The Friedmann equation gives us the expression of $f$ in terms of $R(a)$, $H(a)$ and $a$. Eq.\ (\ref{NT1}), which can be rewritten here as \begin{equation} \frac{f'(a)}{R'(a)}=\frac{3\,a\,(c_1\,a^2+c_2)\,f(a)-\bar c\,(a^2\,R(a)+6 \kappa) }{2 a (c_2\, R(a)-6 c_1 \kappa )}+\frac{\left(c_1 a^2+c_2\right) \rho_{r0}}{2 a^4 (c_2 R(a)-6 c_1 \kappa )}\, , \end{equation} giving a dynamics for $f$, defines a second order differential equation for $H$, given by \begin{eqnarray} H''&=&\left[\left(c_1 a^2+c_2\right) H \Gamma\right]^{\!-1} {H'}^2 \left(12 c_1^2 {\bar c} \kappa a^7+9 c_1^3 \rho_{m0} a^7+54 c_1^2 \mu ^3 H a^7+12 c_1^3 \rho_{r0} a^6+24 c_1 c_2 {\bar c} \kappa a^5+21 c_1^2 c_2 \rho_{m0} a^5\right.\notag\\ &&\left.{}+36 c_1 c_2 \mu ^3 H a^5+28 c_1^2 c_2 \rho_{r0} a^4+12 c_2^2 {\bar c} \kappa a^3+15 c_1 c_2^2 \rho_{m0} a^3+6 c_2^2 \mu ^3 H a^3+20 c_1 c_2^2 \rho_{r0} a^2+3 c_2^3 \rho_{m0} a+4 c_2^3 \rho_{r0}\right) \notag\\ &&{} -\left[a \left(c_1 a^2+c_2\right) H \Gamma\right]^{\!-1}{H'}\left(54 c_1^2 {\bar c} H^3 a^9+108 c_1 c_2 {\bar c} H^3 a^7-270 c_1^2 \mu ^3 H^2 a^7-60 c_1^2 {\bar c} \kappa H a^7-45 c_1^3 \rho_{m0} H 
a^7\right.\notag\\ &&\left.{}-72 c_1^3 \rho_{r0} H a^6+36 c_1^2 \kappa \mu ^3 a^5+54 c_2^2 {\bar c} H^3 a^5-324 c_1 c_2 \mu ^3 H^2 a^5-120 c_1 c_2 {\bar c} \kappa H a^5-111 c_1^2 c_2 \rho_{m0} H a^5\right.\notag\\ &&\left.{}-176 c_1^2 c_2 \rho_{r0} H a^4+24 c_1 c_2 \kappa \mu ^3 a^3-78 c_2^2 \mu ^3 H^2 a^3-60 c_2^2 {\bar c} \kappa H a^3-87 c_1 c_2^2 \rho_{m0} H a^3-136 c_1 c_2^2 \rho_{r0} H a^2\right.\notag\\ &&\left.{}-12 c_2^2 \kappa \mu ^3 a-21 c_2^3 \rho_{m0} H a-32 c_2^3 \rho_{r0} H\right) -\left[a^4 \left(c_1 a^2+c_2\right) H^2 \Gamma\right]^{\!-1}4 \left(-18 c_1 c_2 \mu ^3 H^4 a^7-3 c_1^2 c_2 \rho_{m0} H^3 a^7\right.\notag\\ &&\left.{}+18 c_1^2 \kappa \mu ^3 H^2 a^7+6 c_1^2 {\bar c} \kappa ^2 H a^7+3 c_1^3 \kappa \rho_{m0} H a^7-6 c_1^2 c_2 \rho_{r0} H^3 a^6+6 c_1^3 \kappa \rho_{r0} H a^6-6 c_2^2 \mu ^3 H^4 a^5-6 c_1^2 \kappa ^2 \mu ^3 a^5\right.\notag\\ &&\left.{}-6 c_1 c_2^2 \rho_{m0} H^3 a^5+12 c_1 c_2 \kappa \mu ^3 H^2 a^5+12 c_1 c_2 {\bar c} \kappa ^2 H a^5+6 c_1^2 c_2 \kappa \rho_{m0} H a^5-12 c_1 c_2^2 \rho_{r0} H^3 a^4+14 c_1^2 c_2 \kappa \rho_{r0} H a^4\right.\notag\\ &&\left.{}-12 c_1 c_2 \kappa ^2 \mu ^3 a^3-3 c_2^3 \rho_{m0} H^3 a^3-6 c_2^2 \kappa \mu ^3 H^2 a^3+6 c_2^2 {\bar c} \kappa ^2 H a^3+3 c_1 c_2^2 \kappa \rho_{m0} H a^3-6 c_2^3 \rho_{r0} H^3 a^2\right.\notag\\ &&\left.{}+10 c_1 c_2^2 \kappa \rho_{r0} H a^2-6 c_2^2 \kappa ^2 \mu ^3 a+2 c_2^3 \kappa \rho_{r0} H\right)\, , \end{eqnarray} where \begin{eqnarray} \Gamma&=&18 c_1 \bar c H^2 a^7+18 c_2 \bar c H^2 a^5-12 c_1 \bar c \kappa a^5-9 c_1^2 \rho_{m0} a^5 -54 c_1 \mu ^3 H a^5-12 c_1^2 \rho_{r0} a^4-12 c_2 \bar c \kappa a^3-12 c_1 c_2 \rho_{m0} a^3\notag\\ &&{}-18 c_2 \mu ^3 H a^3-16 c_1 c_2 \rho_{r0} a^2-3 c_2^2 \rho_{m0} a-4 c_2^2 \rho_{r0}\, . \end{eqnarray} It is evident that a more detailed (numerical) study of this differential equation, to be pursued elsewhere, is necessary in order to understand the dynamics of these solutions. 
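Although the full second-order equation for $H$ above is too long for closed-form treatment, its numerical study is straightforward once it is viewed in the form $H''=F(a,H,H')$. The following Python sketch (a hypothetical illustration, not part of the paper) shows the scheme with a classical fourth-order Runge--Kutta step; the right-hand side used here is a toy placeholder standing in for the full expression above.

```python
import math

def rk4_second_order(F, a0, a1, H0, Hp0, n=1000):
    """Integrate H'' = F(a, H, H') from a0 to a1 with classical RK4,
    after rewriting it as the first-order system (H, H')' = (H', F)."""
    h = (a1 - a0) / n
    a, y, yp = a0, H0, Hp0
    for _ in range(n):
        k1y, k1p = yp, F(a, y, yp)
        k2y, k2p = yp + h/2*k1p, F(a + h/2, y + h/2*k1y, yp + h/2*k1p)
        k3y, k3p = yp + h/2*k2p, F(a + h/2, y + h/2*k2y, yp + h/2*k2p)
        k4y, k4p = yp + h*k3p, F(a + h, y + h*k3y, yp + h*k3p)
        y  += h/6*(k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6*(k1p + 2*k2p + 2*k3p + k4p)
        a  += h
    return y, yp

# Toy placeholder right-hand side (NOT the full expression of the text):
# H'' = -H with H(1) = 1, H'(1) = 0 has the known solution H(a) = cos(a - 1),
# which makes the integrator easy to check before plugging in the real F.
H_end, _ = rk4_second_order(lambda a, H, Hp: -H, 1.0, 2.0, 1.0, 0.0)
```

With the actual right-hand side substituted for the toy one, the same routine yields $H(a)$ once initial conditions $H(a_0)$, $H'(a_0)$ and values of the parameters are chosen.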
\subsection{Non-Noether solutions} In general it is not possible to find a solution of the Friedmann equations which is also a Noether symmetry since, in principle, such symmetries do not exist for a generic $f(R)$ theory. In general, a solution of the cosmological equations is not a solution compatible with the condition $L_{{{\bf X}}}{\cal L}=0$. This is a peculiar situation which holds only if conserved quantities (Noether's charges) are intrinsically present in the structure of the theory (in our case, the form of $f(R)$). For example, imposing a power law solution, $a\propto t^p$, defines a function of $R=R(a)$, which can be put in the Noether symmetry equations, in order to find $f=f(R(a))$. Finally one can substitute the expressions for $f(a)$, $R(a)$, and $H$ in the Friedmann equations. In doing this, it is easy to show that, for $k=0$, there are no simple power-law solutions compatible with a Noether charge. The method discussed above allows one to discriminate between theories which do or do not admit cosmological solutions compatible with a Noether charge. It is also clear that power-law solutions do exist in general for $f(R)$ models, but they can be found using different methods \cite{noi}. Assuming, in general, a power-law $H(a)$, one finds $R$ as a function of $a$, and then, in principle, $f=f(R(a))$. It is therefore possible to write the Einstein equation as a second order differential equation for $f$ as a function of $a$, whereas all other quantities ($H$ and $R$) are given functions of $a$. The same argument holds for the redshift $z$ \cite{mimicking}. For example, let us rewrite the Friedmann equation (\ref{energy}) as \begin{equation} f-6\,f_{RR}\,\dot R\,H -6\,f_R\,H^2-f_R\left(R+\frac{6k}{a^2}\right)=\frac {\rho_{m0}}{a^3}+\frac{\rho_{r0}}{a^4}\, , \end{equation} and let us consider $H=\bar H(a)$ and $R=\bar R(a)$ as given functions of $a$, where, as above, \begin{equation} \bar R=-12\,\bar H^2-6\,a\,\bar H\,\bar H'-6\,\frac{k}{a^2}\, . 
\label{Rabis} \end{equation} The Friedmann equation can be written as \begin{equation} f'' +\left[ \frac1a -\frac{\bar R''}{\bar R'} +\frac1{6a\,\bar H^2}\left(\bar R+\frac{6k}{a^2}\right) \right]f' -\frac{\bar R'}{6a\,\bar H^2}\,f=-\frac{\rho_{m0}\,a+\rho_{r0}}{6\,a^5\,\bar H^2}\,\bar R'\, . \end{equation} This is a second order linear equation in $f$, whose general solution depends on two parameters, $f_0$ and $f'_0$. Specifically, since the equation is linear, the general solution is the linear combination of two solutions of the homogeneous ODE plus a particular solution. It is then clear that more than one $f(R)$ model can have the same behavior for $H(a)$, i.e.\ more theories share the same cosmological evolution. This situation is due to the fact that one has a fourth-order gravity theory. The singular points of this differential equation are those for which either $\bar H$ or $d\bar R/da$ vanishes. Starting from these considerations, interesting classes of solutions can be found. \subsubsection{Radiation solutions} Let us seek all the $f(R)$ models which have the particular solution $a=\sqrt{t/t_0}$, which means \begin{equation} \bar H=\frac1{2\,t_0\,a^2}=\frac{H_0}{a^2}\, ,\qquad\textrm{so that}\qquad \bar R=-\frac{6k}{a^2}\, , \end{equation} where $H_0\equiv(2\,t_0)^{-1}$. We have three interesting cases. \begin{enumerate} \item For $k=0$, we have $R=0$, leading to the Friedmann equation \begin{equation} f(0)-6\,f_R(0)\,\bar H^2=\frac{\rho_{m0}}{a^3}+\frac{\rho_{r0}}{a^4}\, , \end{equation} which, if $\rho_{m0}\neq0$, cannot be solved for $\bar H\sim a^{-2}$ since $f(0)$ and $f'(0)$ cannot be functions of $a$, but only constants. If $\rho_{m0}=0$, standard GR is of course recovered. 
\item For the case $k=-1$ we have the following differential equation for $f$, \begin{equation} f''+\frac{4}{a}\,f'+\frac{2\,\kappa}{H_0^2}\, f=\frac{2\,\kappa\, (\rho_{r0}+a \rho_{m0})}{H_0^2\,a^4}\, , \end{equation} whose general solution can be written as \begin{eqnarray} R&=&-\frac{6\,\kappa}{a^2}\\ f&=& \frac{\sqrt{\frac{a \sqrt{-\kappa}}{H_0}} d_2 \cos\! \left(\frac{a \sqrt{-2\kappa}}{H_0}\right) H_0^2}{\sqrt[4]{2} a^{7/2} \kappa \sqrt{\pi }} -\frac{\sqrt{\frac{a \sqrt{-\kappa}}{H_0}} d_1 \sin\! \left(\frac{a \sqrt{-2\kappa}}{H_0}\right) H_0^2}{\sqrt[4]{2} a^{7/2} \kappa \sqrt{\pi }} -\frac{\sqrt[4]{2} \sqrt{\frac{a \sqrt{-\kappa}}{H_0}} d_1 \cos\! \left(\frac{a \sqrt{-2\kappa}}{H_0}\right) H_0}{a^{5/2} \sqrt{-\kappa} \sqrt{\pi }}\notag\\ &&{}-\frac{\sqrt[4]{2} \sqrt{\frac{a \sqrt{-\kappa}}{H_0}} d_2 \sin\! \left(\frac{a \sqrt{-2\kappa}}{H_0}\right) H_0}{a^{5/2} \sqrt{-\kappa} \sqrt{\pi }} +\frac{\rho_{m0}}{a^3}+\frac{\rho_{r0} \sqrt{-\kappa} \text{Ci}\!\left(\frac{\sqrt{2} a \sqrt{-\kappa}}{H_0}\right) \sin\! \left(\frac{a \sqrt{-2\kappa}}{H_0}\right)}{\sqrt{2} a^3 H_0}\notag\\ &&{}-\frac{\rho_{r0} \sqrt{-\kappa} \cos\! \left(\frac{ a \sqrt{-2\kappa}}{H_0}\right) \text{Si}\!\left(\frac{a \sqrt{-2\kappa}}{H_0}\right)}{\sqrt{2} a^3 H_0} +\frac{\rho_{r0} \kappa \cos\!\left(\frac{a \sqrt{-2\kappa}}{H_0}\right) \text{Ci}\!\left(\frac{a \sqrt{-2\kappa}}{H_0}\right)}{a^2 H_0^2}\notag\\ &&{}+\frac{\rho_{r0} \kappa \sin\! \left(\frac{a \sqrt{-2\kappa}}{H_0}\right) \text{Si}\!\left(\frac{a \sqrt{-2\kappa}}{H_0}\right)}{a^2 H_0^2}\, , \end{eqnarray} where the sine integral and cosine integral functions, Si and Ci respectively, are defined as \begin{equation} {\rm Si}(x)=\int_0^x \frac{\sin(t)}t\,dt\,\qquad {\rm Ci}(x)=-\int_x^\infty\frac{\cos(t)}{t}\, dt\, . \end{equation} The integration constants $d_{1,2}$ have dimensions $[d_1]=[d_2]=M^4$. 
\item Along the same lines, the case $k=1$ has the following solution \begin{eqnarray} R&=&-\frac{6\,\kappa}{a^2}\\ f&=&\frac{\sqrt{\frac{a \sqrt{\kappa}}{H_0}} d_1 \cosh\! \left(\frac{\sqrt{2 \kappa} a }{H_0}\right) H_0^2}{\sqrt[4]{2} a^{7/2} \kappa \sqrt{\pi }} +\frac{\sqrt{\frac{a \sqrt{\kappa}}{H_0}} d_1 \sinh\! \left(\frac{\sqrt{2 \kappa} a }{H_0}\right) H_0^2}{\sqrt[4]{2} a^{7/2} \kappa \sqrt{\pi }} -\frac{\sqrt[4]{2} \sqrt{\frac{a \sqrt{\kappa}}{H_0}} d_1 \cosh\! \left(\frac{\sqrt{2 \kappa} a}{H_0}\right) H_0}{a^{5/2} \sqrt{\pi \kappa }}\notag\\ &&{}-\frac{\sqrt[4]{2} \sqrt{\frac{a \sqrt{\kappa}}{H_0}} d_1 \sinh\! \left(\frac{\sqrt{2 \kappa} a }{H_0}\right) H_0}{a^{5/2} \sqrt{\pi \kappa }} +\frac{\rho_{m0}}{a^3} -\frac{\rho_{r0} \sqrt{\kappa}\, \text{Chi}\!\left(\frac{\sqrt{2 \kappa} a }{H_0}\right) \sinh\! \left(\frac{\sqrt{2 \kappa} a }{H_0}\right)}{\sqrt{2} a^3 H_0}\notag\\ &&{}+\frac{\rho_{r0} \sqrt{\kappa} \cosh\! \left(\frac{\sqrt{2 \kappa} a}{H_0}\right) \text{Shi}\!\left(\frac{\sqrt{2 \kappa} a}{H_0}\right)}{\sqrt{2} a^3 H_0} -\frac{\rho_{r0} \kappa \cosh\! \left(\frac{\sqrt{2 \kappa} a}{H_0}\right) \text{Chi}\!\left(\frac{\sqrt{2 \kappa} a}{H_0}\right)}{a^2 H_0^2}\notag\\ &&{}+\frac{\rho_{r0} \kappa \sinh\! \left(\frac{\sqrt{2 \kappa} a}{H_0}\right) \text{Shi}\!\left(\frac{\sqrt{2 \kappa} a}{H_0}\right)}{a^2 H_0^2}\, , \end{eqnarray} where the hyperbolic sine integral and cosine integral, Shi and Chi respectively, are defined as \begin{equation} {\rm Shi}(x)=\int_0^x \frac{\sinh(t)}t\,dt\,\qquad {\rm Chi}(x)=\gamma_{E,M}+\ln(x)+\int_0^x\frac{\cosh(t)-1}{t}\, dt\, , \end{equation} and $\gamma_{E,M}\approx0.577$ is the Euler-Mascheroni constant. Both $d_1$ and $d_2$ are integration constants with dimensions $M^4$. 
\end{enumerate} \subsubsection{Matter solutions} In this case, we search for $f(R)$ models which have a dust-matter behavior, that is $a=(t/t_0)^{2/3}$, which gives \begin{equation} \bar H=\frac2{3\,t_0\,a^{3/2}}=\frac{H_0}{a^{3/2}}\, , \qquad\textrm{and}\qquad \bar R=-\frac{2(2/t_0^2+9\,k\,a)}{3\,a^3}\, , \end{equation} where $H_0\equiv 2/(3\,t_0)$. For the case $k=0$, we find the explicit analytic solution \begin{eqnarray} R&=&-\frac4{3\,t_0^2\,a^3}\, ,\\ f(a)&=&a^{-(7+\sqrt{73})/4} \left(d_1 \,a^{\sqrt{73}/2}+d_2\right)+\frac{\rho_{m0}\,a-6\rho_{r0}}{2\,a^4}\, . \end{eqnarray} This is a two-parameter family of solutions, depending on the two integration constants $d_{1,2}$ both with dimensions $M^4$. The Einstein-Hilbert case $f(R)=R$ belongs to this family, when $d_1$, $d_2$, and $\rho_{r0}$ all vanish. \subsubsection{Exponential solutions} In this case, we look for the behavior \begin{equation} \bar H=H_0={\rm constant}\, ,\qquad\textrm{so that}\qquad \bar R=-12\,H_0^2-\frac{6k}{a^2}\, . \end{equation} As above, we have three cases depending on $k$. \begin{enumerate} \item $k=0$. Both $H$ and $R$ are constants, and $R=R_0\equiv-12\,H_0^2$. The Friedmann equation is \begin{equation} f(R_0)-\tfrac12\,f_R(R_0)\,R_0=\frac{\rho_{m0}}{a^3}+\frac{\rho_{r0}}{a^4}\, , \label{infla} \end{equation} and, since $R_0$ is a constant, it has solutions only for $\rho_{m0}=\rho_{r0}=0$ (see also \cite{barrow}). \item $k=1$. In this case, $H$ is still a constant but $R$ is not. One finds \begin{eqnarray} R&=&-12\,H_0^2-\frac{6\,\kappa}{a^2}\, ,\\ f&=&d_1 \cosh\! \left(\frac{\sqrt{2 \kappa}}{H_0\,a}\right)+d_2 \sinh\! \left(\frac{\sqrt{2 \kappa}}{H_0\,a}\right)\notag\\ &&{} +\frac{6 \rho_{r0}\,H_0^4}{\kappa^2}+\frac{3 \rho_{m0} \,H_0^2}{a \,\kappa}+\frac{6 \rho_{r0} \,H_0^2}{a^2 \,\kappa}+\frac{\rho_{r0}}{a^4}+\frac{\rho_{m0}}{a^3}\, . \end{eqnarray} \item $k=-1$. The solution is \begin{eqnarray} R&=&-12\,H_0^2-\frac{6\,\kappa}{a^2}\, ,\\ f&=&d_1 \cos\! \left(\frac{\sqrt{-2 \kappa}}{H_0\,a}\right)+d_2 \sin\! 
\left(\frac{\sqrt{-2 \kappa}}{H_0\,a}\right)\notag\\ &&{}+\frac{6\rho_{r0}\,H_0^4}{\kappa^2} +\frac{3\rho_{m0}\,H_0^2}{a \,\kappa} +\frac{6 \rho_{r0}\,H_0^2}{a^2 \,\kappa} +\frac{\rho_{r0}}{a^4}+\frac{\rho_{m0}}{a^3}\, . \end{eqnarray} \end{enumerate} \subsubsection{$\Lambda$CDM solutions} Let us now look for $f(R)$ models which are compatible with the $\Lambda$CDM expansion history as solutions of the Friedmann equations. This analysis could be extremely important to compare the $f(R)$ approach with observations (see also \cite{leandros}). One defines \begin{equation} \bar H^2=H_0^2\left[\frac{\Omega_{m0}}{a^3}+\frac{\Omega_{r0}}{a^4}+1-\Omega_{m0}-\Omega_{r0}\right]\, . \end{equation} The differential equation to solve is therefore the following \begin{eqnarray} &&f''+\left[\frac{6 \Omega_{m0} H_0^2}{3 \Omega_{m0} H_0^2+4 a k}-\frac{4 (\Omega_{m0}+\Omega_{r0}-1) a^4-7 \Omega_{m0} a-8 \Omega_{r0}}{-(\Omega_{m0}+\Omega_{r0}-1) a^4+\Omega_{m0} a+\Omega_{r0}}\right]\,\frac{f'}{2\,a}\notag\\ &&\qquad{}-\frac{3 \Omega_{m0} H_0^2+4 a k}{2 a \left[-(\Omega_{m0}+\Omega_{r0}-1) a^4+\Omega_{m0} a+\Omega_{r0}\right] H_0^2}\,f\notag\\ &&\qquad{}= -\frac{\left(3 \Omega_{m0} \,H_0^2+4 a k\right) (\rho_{r0}+a \rho_{m0})}{2 a^5 \left[-(\Omega_{m0}+\Omega_{r0}-1) a^4+\Omega_{m0} a+\Omega_{r0}\right] H_0^2}\, . \end{eqnarray} The general integral can be obtained numerically by giving suitable initial conditions for $f_0$, $f'_0$. This analysis will be pursued in a forthcoming paper. \section{Discussion and Conclusions} \label{sec6} In this paper, we have discussed a general method to find exact/analytical cosmological solutions in $f(R)$ gravity. The approach is based on the search for Noether symmetries which allow one to reduce the dynamics and, in principle, to solve the equations of motion more easily. Besides, due to the fact that such symmetries are always related to conserved quantities, such a method can be seen as a physically motivated criterion. 
The main point is that the existence of the symmetry allows one to fix the form of $f(R)$ models assumed in a point-like cosmological action where the FLRW metric is imposed. It is worth noticing that starting from a point-like FLRW Lagrangian, and then deriving the Euler-Lagrange equations of motion, leads exactly to the same equations obtained by imposing the FLRW metric in the Einstein field equations. This circumstance allows one to search ``directly'' for the Noether symmetries in the point-like Lagrangian and then to plug the related conserved quantities into the equations of motion. As a result $i)$ the form of the $f(R)$ is fixed directly by the symmetry existence conditions and $ii)$ the dynamical system is reduced since some of its variables (at least one) are cyclic. The method is useful not only in a cosmological context but works, in principle, every time a canonical, point-like Lagrangian is available (in \cite{stabile}, it has been used to find spherically symmetric solutions in $f(R)$ gravity). In this paper, we have considered a generic $f(R)$ theory where standard fluid matter (dust and radiation) is present. The Noether conditions for symmetry select forms of $f(R)$ depending on a set of cosmological parameters such as $\{\rho_{r0},\rho_{m0},k,H_0\}$ and the effective gravitational coupling. Such a dependence can be easily translated into the more suitable set of observational parameters $\{\Omega_{r0},\Omega_{m0},\Omega_k, H_0\}$ and then matched with data. This situation has a twofold relevance: on the one hand, it could contribute to removing the well-known problem of degeneracy (several dark energy models fit the same data and, essentially, reproduce the $\Lambda$CDM model); on the other hand, since the search for Noether symmetries is a relevant approach to finding conserved quantities in physics, this could be an interesting method to select models motivated at a fundamental level. 
It is worth noticing that the Noether constant of motion, which we have found, has the dimensions of a mass and is directly related to the various sources present in the dynamics. In some sense, the Noether constant ``determines'' the bulk of the various sources such as $\rho_{m0}$, $\rho_{r0}$ and the effective $\rho_{\Lambda}$ and then could greatly contribute to solving the dark energy and dark matter puzzles. In a forthcoming paper, we will directly compare the solutions which we have presented here with observational data. The ``non-Noether solutions'' deserve a final remark. In this case, we do not require a Noether symmetry, but the search for these solutions can be related to the previous general method. We have shown that the standard cosmological behaviors of the usual Einstein-Friedmann cosmology can be achieved also in generic $f(R)$ models, assuming that the cosmological quantities $H$ and $R$ depend on the scale factor $a$. As a result, we find general $f(R(a))$ models in which the standard solutions of the linear $f(R)=R$ case are easily recovered. \bigskip \noindent{\rm Acknowledgment}. We want to thank Prof.\ Ringeval and Prof.\ Fabri for useful discussions and comments. ADF is supported partly by STFC, UK and partly by the Belgian Federal Office for Scientific, Technical and Cultural Affairs through the Interuniversity Attraction Pole P6/11. We also thank the Referee for the fruitful discussion which allowed us to improve the paper.
\section{Introduction} Although pulsar-like stars have many different manifestations, they are populated mainly by rotation-powered radio pulsars. A lot of information about the pulsar radiative process is inferred from the integrated and individual pulses, the sub-pulses, and even the micro-structures of radio pulses. Among the magnetospheric emission models, the user-friendly nature of the Ruderman \& Sutherland (1975; hereafter RS75) model is a virtue not shared by others~\citep{Shukre92}. In RS75 and its modified versions~\citep[e.g.,][]{QL98}, a vacuum gap exists above the polar cap of a pulsar, in which charged particles (electrons and positrons) are accelerated because of ${\bf E \cdot B} \neq 0 $. These accelerated charged particles, moving along the curved magnetic field lines, radiate curvature or inverse-Compton-scattering-induced high-energy photons which are converted to $e^\pm$ while propagating in the strong magnetic field. A follow-up breakdown of the vacuum gap produces secondary electron-positron pair plasma that radiates coherent radio emission. These models with gap-sparking provide a good framework to analyze observational phenomena, especially the drifting~\citep{DC68,DR99,VJ99} and bi-drifting~\citep{QLZXW2004} sub-pulses. However, the RS75-like vacuum gap models work only under strict conditions: a strong magnetic field and a low temperature on the surface of pulsars~\citep[e.g.,][]{GH08,ML07}. The necessary binding energy of positive ions (e.g., ${}_{26}^{56}$Fe) for the RS75 model to work should be higher than $\sim 10$ keV, while calculations showed that the cohesive energy of ${}_{26}^{56}$Fe at the neutron star surface is $<1$ keV~\citep{FLRSHM77,L01}. This binding energy problem could be solved within a partially screened inner gap model~\citep{GM03,GM06,MG09} for normal neutron stars. 
Alternatively, it is noted that the binding energy could be sufficiently high if pulsars are bare strange quark stars~\citep{XQ98,XQZ99,XZQ01}, although strange stars were previously supposed to exist with crusts~\citep{AFO86}. Certainly, solving the binding energy problem with bare quark stars as pulsars would be very meaningful for understanding the elementary strong interaction between quarks and the phases of cold quark matter~\citep{X09,X10}. Though the ideas of solving the binding energy problem in the BSS model were presented and discussed in the literature, no comprehensive study with quantitative calculations has been carried out up to now. In this paper, we are going to investigate the BSS model in quantitative detail and show the physical picture of the binding of particles on a BSS's surface. Our results show that multiple accelerators could occur above the polar cap for (and only for) curvature-radiation-induced (CR-induced) sparking normal pulsars (NPs), but for other cases, such as resonant inverse-Compton-scattering-induced (ICS-induced) sparking NPs and both CR-induced and ICS-induced millisecond pulsars (MSPs), particles on the surface of BSSs are bound strongly enough to form a vacuum gap, and RS75-like models work well if pulsars are BSSs. \section{The accelerators above polar caps of bare strange quark stars} On a BSS's surface, there are positively ($u$-quarks) and negatively ($d$- and $s$-quarks and electrons) charged particles. Quarks are confined by the strong color interaction, whose binding energy could be considered as infinite when compared with the electromagnetic interaction, while electrons are bound by the electromagnetic interaction. Therefore, in this paper we focus on the binding of electrons. Let us first briefly discuss the binding of electrons in the BSS model. 
On one hand, assuming the electric potential at the top of the RS75 vacuum gap is the same as that of the interstellar medium, one could then have a potential barrier for electrons by integrating the gap electric field from top to bottom in the vacuum gap. This potential barrier could then prevent electrons from streaming into the magnetosphere. On the other hand, electrons above the stellar surface of a BSS are described in the Thomas-Fermi model, in which the total energy of electrons on the Fermi surface would be a constant, $\phi_0$. In previous work (e.g. Alcock et al. 1986), this constant is chosen to be zero, $\phi_0=0$, because they did not consider the effect of a spinning BSS with strong magnetic fields. Due to the unipolar generator effect, a potential drop between different magnetic field lines is set up from the pole to the equatorial plane. This potential drop could result in different $\phi_0$ at different polar angles, $\theta$, and the total energy of electrons would then be obtained by choosing a certain magnetic field line of zero potential (i.e., at $\theta_{\rm B}$ or $\theta_{\rm C}$ in Fig.~\ref{antipulsar}). Finally, comparing the total energy of electrons with the height of the potential barrier in the vacuum gap, we can see whether electrons can stream into the magnetosphere freely or not. \subsection{The energy of electrons on Fermi surface} The distribution of electrons in BSSs is described in the Thomas-Fermi model~\citep{AFO86}. In this model, equilibrium of electrons in an external electric field assures that the total energy of each electron on the Fermi surface is a constant, $\phi_0$. 
For the case of an extremely relativistic degenerate electron gas, one has~\citep{AFO86} \begin{equation}\label{FD} \epsilon(\vec{r})=cp_{\rm F}(\vec{r})-e\varphi(\vec{r})=\phi_{0}, \end{equation} where $\epsilon(\vec{r})$ is the total energy, $cp_{\rm F}(\vec{r})$ is the Fermi energy, $-e\varphi(\vec{r})$ is the electrostatic potential energy of electrons and $\phi_{0}$ is a constant, describing the potential energy of electrons in the Thomas-Fermi model at infinity. On the other hand, the potential distribution of electrons on the star's surface due to the electric field induced by the rotating, uniformly magnetized star, for the sake of simplicity, could be assumed and estimated as (Xu et al. 2006, Eq. 2 there) \begin{equation}\label{UGP} V_{\rm i}(\theta) \simeq 3 \times 10^{16} B_{12} R_{6}^{2} P^{-1} \sin^{2} \theta~({\rm V}) +V_{0}, \end{equation} where $B_{12}=B/(10^{12} \; {\rm G})$, and $R_{6}=R/(10^6 \;{\rm cm})$ is the radius of a pulsar, $P=2\pi/\Omega$ is the pulsar period, $\theta$ is the polar angle and $V_{0}$ is another constant. Since the distribution of electrons above the surface of a BSS extends only over thousands of femtometers, the macroscopic potential drop between different magnetic field lines could be thought to be at infinity in the Thomas-Fermi model. The potential energy related to Eq.~\ref{UGP}, $eV_{\rm i}$, could then be regarded as the constant, $\phi_{0}$, in Eq.~\ref{FD}. By choosing a certain magnetic field line of zero potential, we could obtain the total energy of electrons, namely $eV_{\rm i}$. Two scenarios could be possible here. The first scenario is that we choose the critical field lines, whose feet are at the same electric potential as the interstellar medium~\citep{GJ69}, as the zero potential. 
We may also suggest a second choice: the zero potential should be at those magnetic field lines which separate the annular and core regions determined by $S_{\rm AG}=S_{\rm CG}$, where $S_{\rm AG}$ and $S_{\rm CG}$ are the stellar surface areas of the annular region and core region, respectively. The second scenario is based on the idea that if particles with opposite charge stream into the magnetosphere with $\rho_{\rm GJ}$ in both regions, the areas of these two regions should be approximately equal in order to keep the star from charging. The feet of the critical field lines and the magnetic field lines determined by $S_{\rm AG}=S_{\rm CG}$ are designated as C and B, respectively (Fig.~\ref{antipulsar}). For the above two scenarios, the total energy, $\phi_{\rm i}=eV_{\rm i}$, of electrons on the Fermi surface is given by \begin{equation}\label{EEA} \phi_{\rm i,C}(\theta) \simeq -3 \times 10^{10} B_{12} R_{6}^{2} P^{-1}(\sin^{2} \theta - \sin^{2} \theta_{\rm C} ) ~\;{\rm MeV}, \end{equation} and \begin{equation}\label{EEB} \phi_{\rm i,B}(\theta) \simeq -3 \times 10^{10} B_{12} R_{6}^{2} P^{-1}(\sin^{2} \theta - \sin^{2} \theta_{\rm B} ) ~\;{\rm MeV}, \end{equation} respectively, where $\theta_{\rm C}$ and $\theta_{\rm B}$ are the polar angles of C and B (see Fig.~\ref{antipulsar}). Equations~\ref{EEA} and~\ref{EEB} imply that the total energy of electrons is higher at the poles and decreases toward the equator for an \lq antipulsar\rq~ ($\bf \Omega \cdot B > 0$), which means that electrons in different regions above a polar cap may behave differently. \subsection{The potential barrier of electrons in vacuum gap} In the following, we will consider the potential barrier of electrons in the vacuum gap. Unlike RS75, we do calculations for the situation of an \lq antipulsar\rq,~whose magnetic axis is parallel to its spin axis. A schematic representation of an \lq antipulsar\rq~is shown in Fig.~\ref{antipulsar}. 
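As a quick numerical illustration (a hypothetical sketch, not code from the paper), Eqs.~(\ref{EEA}) and (\ref{EEB}) can be evaluated directly for the typical normal-pulsar parameters used below ($B_{12}=R_6=P=1$, $\theta_{\rm A}\simeq0.0145$ rad, with $\theta_{\rm C}=0.76\,\theta_{\rm A}$ and $\theta_{\rm B}=0.69\,\theta_{\rm A}$ taken from Table~\ref{PA}):

```python
import math

def phi_i(theta, theta_ref, B12=1.0, R6=1.0, P=1.0):
    """Total energy (MeV) of surface electrons on the Fermi surface,
    Eqs. (EEA)/(EEB): phi_i ~ -3e10 B12 R6^2 / P (sin^2 theta - sin^2 theta_ref)."""
    return -3e10 * B12 * R6**2 / P * (math.sin(theta)**2 - math.sin(theta_ref)**2)

theta_A = 0.0145            # polar angle of the last open field lines (rad)
theta_C = 0.76 * theta_A    # foot of the critical field lines
theta_B = 0.69 * theta_A    # foot of the line with S_AG = S_CG
```

For an \lq antipulsar\rq, $\phi_{\rm i}$ is positive near the pole, vanishes at the chosen reference line, and turns negative toward the last open field lines, in line with the discussion above.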
Assuming the electric potential at the top of the RS75 vacuum gap is the same as that of the interstellar medium, we could get a potential barrier for electrons by integrating the gap electric field from top to bottom in the vacuum gap. This potential barrier, in the one-dimensional approximation, is (RS75) \begin{equation}\label{EEC} \phi_{\rm p}(Z)=2\pi \times 10^{4} P^{-1} B_{12} (h_{3}-Z_{3})^{2} ~\;{\rm MeV}, \end{equation} where $h_{3}=h/(10^3 \:{\rm cm})$ is the height of the vacuum gap and $Z_{3}=Z/(10^3 \:{\rm cm})$ is the space coordinate measuring height above the quark surface. This potential barrier may prevent electrons from being injected into the pulsar's magnetosphere. The height of this potential barrier mainly depends on the height of the vacuum gap, which is determined by the cascade mechanism of sparking, i.e., the CR-induced cascade sparking and the ICS-induced cascade sparking. In the CR-induced cascade sparking model, the gap height is (RS75) \begin{equation}\label{HCR} h_{\rm CR}=5 \times 10^{3} \rho_{6}^{2/7} B_{12}^{-4/7} P^{3/7} ~\; {\rm cm}, \end{equation} and in the ICS-induced cascade sparking model, it is~\citep{ZHM00} \begin{equation}\label{HICS} h_{\rm ICS}=2.79 \times 10^{4} \rho_{6}^{4/7} B_{12}^{-11/7} P^{1/7} ~\; {\rm cm}. \end{equation} In the previous work of Gil et al. (2006), the heights of the vacuum gap for both the CR-induced and ICS-induced sparking mechanisms (Gil et al. 2006, Eqs. 21 and 22 there) are different from those used in this work. In the PSG model, there was a partial flow of iron ions from the positively charged polar cap which coexists with the production of outflowing electron-positron plasmas. Such a charge-depleted acceleration region is also highly sensitive to both the critical ion temperature and the actual surface temperature of the polar cap~\citep{GM03}. In contrast, in our model, there is no flow of positively charged particles, namely quarks, and it is also insensitive to the actual surface temperature. 
This means that there is no partially screened effect above the polar cap of bare strange quark stars; namely, a pure vacuum gap exists on the polar cap of bare strange quark stars. That is the reason why we use Eqs.~\ref{HCR} and~\ref{HICS} in our calculation. Whether this choice of the vacuum gap height could result in different drift rates of subpulses or not is a complicated problem. We will discuss this problem very briefly in \S 3. The potential barrier of electrons in the gap for the CR-induced cascade sparking model of typical normal pulsars (NPs) is plotted in Fig.~\ref{PB}, in which the total energy of electrons at the stellar surface, namely $\phi_{\rm i}$, is illustrated at different polar angles. The situation of CR-induced cascade sparking of typical millisecond pulsars (MSPs) is similar to that of NPs but with a greater potential barrier height. \begin{figure} \includegraphics[scale=0.4]{f1.eps} \caption{A schematic representation of the geometry of~\lq antipulsars\rq. CFL stands for the critical field lines, NCS for the null charge surface, and LC for the light cylinder. The enlarged arrows with opposite directions in the annular region and core region represent the directions of the electric field in the vacuum gap. ``A'', ``B'' and ``C'' represent the feet of different magnetic field lines (see text).} \label{antipulsar} \end{figure} \begin{figure} \includegraphics[scale=0.3]{f2.eps} \caption{The potential barrier of electrons, $\phi_{\rm P}$, in the vacuum gap of typical NPs ($P = 1 \:{\rm s}$, $B = 10^{12}$ G). The potential energy of electrons at the stellar surface, namely $\phi_{\rm i}(\theta)$, is illustrated with fixed polar angles, for example, with $0.6\theta_{\rm A}$ and $0.8\theta_{\rm A}$, where $\theta_{\rm A}$ is the polar angle of the feet of the last open field lines (Fig.~\ref{antipulsar}).} \label{PB} \end{figure} Comparing the potential barrier with the total energy of electrons, we can explain the behavior of electrons above the polar cap. 
Namely, only electrons with energy greater than the potential barrier can escape into the pulsar's magnetosphere. It is known that the energy of electrons is a function of the polar angle (Eqs.~\ref{EEA} and~\ref{EEB}). As a result, there may be a critical polar angle, $\theta_{0}$, at which the energy of electrons equals the height of this potential barrier. A comparison between the total energy of electrons and the height of the potential barrier on the stellar surface for typical NPs with CR-induced sparking is shown in Fig.~\ref{PEC} ($\theta_{0}$ does not exist for both ICS-induced sparking of NPs and MSPs, see Table~\ref{PA}). The results are as follows: the free-flow state holds in the region [0, $\theta_{0}$] and the vacuum gap in [$\theta_{0}$, $\theta_{\rm A}$] for \lq antipulsars\rq,~where $\theta_{\rm A}$ is the polar angle of the feet of the last open field lines (Fig.~\ref{antipulsar}). We give the results for $\theta_{0}$ in Table~\ref{PA} for both pulsars and \lq antipulsars\rq,~and find that for the special case of CR-induced sparking NPs, free flow and vacuum gap can coexist above the polar cap, which differs from the previous scenario. The general case is that only the vacuum gap exists. \begin{figure} \includegraphics[scale=0.26]{f3.eps} \caption{Comparison of the total energy of an electron on the stellar surface with the height of the potential barrier for typical NPs with the choices $\phi_{\rm i}(\theta_{\rm C})=0$ and $\phi_{\rm i}(\theta_{\rm B})=0$, respectively. 
The solid horizontal line is the height of the potential barrier of electrons, namely $\phi_{\rm P}(Z=0)$.} \label{PEC} \end{figure} \begin{table*} \centering \begin{minipage}{140mm} \caption{The polar angles $\theta_{\rm B}$, $\theta_{\rm C}$ and $\theta_{0}$ for both CR-induced and ICS-induced sparking of typical NPs and MSPs with both choices of zero potential.\label{PA}} \begin{tabular}{@{}lllllllll@{}} \hline & & & & \multicolumn{2}{c}{$\theta_{\rm 0,B}$~($\theta_{\rm A}$)} & \multicolumn{2}{c}{$\theta_{\rm 0,C}$~($\theta_{\rm A}$)} & \\ \cline{5-8}\\ & $\theta_{\rm A}$~(rad) & $\theta_{\rm B}$~($\theta_{\rm A}$) & $\theta_{\rm C}$~($\theta_{\rm A}$) & CR & ICS & CR & ICS & \\ \hline & & & &0.49 &...$^1$ &0.58 &... &$\bf \Omega \cdot B > 0$ \\ NPs & 0.0145 & 0.69 &0.76 &0.84 &2.76$^2$ &0.90 &2.83$^2$& $\bf \Omega \cdot B < 0$ \\ & & & &... &... &... &... &$\bf \Omega \cdot B >0 $ \\ MSPs & 0.145 &0.69 &0.76 &1.49$^2$ &... &1.52$^2$ &... &$\bf \Omega \cdot B < 0$\\ \hline \end{tabular}\\ $^1${$\theta_{0}$~does not exist, which means that the whole polar cap region is a vacuum gap.}\\ $^2${$\theta_{0}$~$>$~$\theta_{\rm A}$,~which means that the whole polar cap region is a vacuum gap.} \end{minipage} \end{table*} \subsection{The effects of thermionic emission and diffusion of electrons} It follows from the previous argument that electrons inside BSSs usually cannot stream into magnetospheres. Is there any other process that may affect the existence of the vacuum gap above the polar cap? Besides the pulling of electrons from the interior of BSSs, two other processes which may also prevent the vacuum gap from forming need to be investigated. One is the thermionic emission of electrons and the other is the diffusion of electrons from the outer edge to the inner region of the polar cap. 
For the first one, if the current density due to the thermionic emission of electrons is much smaller than the Goldreich-Julian current density, the vacuum gap can be maintained as well. This current density is determined by the Richardson-Dushman equation~\citep{UM95} \begin{equation} J_{\rm th}= 1.2 \times 10^{14} T_{6}^{2} \exp(-1.161 \times 10^{4}T_{6}^{-1}\phi_{\rm MeV})\; {\rm A \:cm^{-2}}, \end{equation} where $T_{6} = T/(10^6 \:{\rm K})$ is the temperature and $\phi_{\rm MeV}=\phi/{\rm MeV}$ is the work function of electrons. In the vacuum gap of BSSs, the work function of thermionic electrons is of the order of the difference between the height of the potential barrier and the total energy of electrons at the surface of BSSs. This difference is about $10^6$ MeV. At the same time, the surface temperature of the polar caps of BSSs is of order $10^6$ K. Thus, the thermionic emission current density is $\sim 0$, which means that the thermionic emission of electrons cannot affect the existence of the vacuum gap. The second process is the diffusion of electrons, whose distribution above the BSS surface is~\citep{XZQ01} \begin{equation}\label{EN} n_{\rm e}(Z)= \frac{1.187 \times 10^{32} \phi_{{\rm q},{\rm MeV}}^{3}} {(0.06 \phi_{{\rm q},{\rm MeV}} Z_{11} +4)^{3}} ~\;{\rm cm^{-3}}. \end{equation} Eq.~\ref{EN} implies that the number density of electrons (and so the kinetic energy density, $\epsilon_{\rm k}$) decreases rapidly with increasing distance from the quark matter surface, at which $\epsilon_{\rm k}$ $\gg$~$\epsilon_{\rm B}$, where $\epsilon_{\rm B}$ is the magnetic field energy density. As a result, there is a balance surface where the kinetic energy density equals the magnetic energy density. Below this balance surface, electrons can cross magnetic field lines freely; above it, this motion is prevented. 
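As a quick numerical check of the orders of magnitude quoted above, the script below (our own illustrative sketch, not part of the model) evaluates the CR-induced gap height of Eq.~\ref{HCR}, the barrier height $\phi_{\rm p}(Z=0)$ of Eq.~\ref{EEC}, and the Richardson-Dushman current density for the typical NP parameters assumed in the text ($P = 1$ s, $B_{12} = 1$, $\rho_{6} = 1$):

```python
import math

# Typical NP parameters assumed in the text: P = 1 s, B = 10^12 G, rho_6 = 1.
P, B12, rho6 = 1.0, 1.0, 1.0

# Gap height in the CR-induced sparking model (Eq. HCR), in cm.
h_CR = 5e3 * rho6 ** (2 / 7) * B12 ** (-4 / 7) * P ** (3 / 7)

# Barrier height at the stellar surface, phi_p(Z = 0) (Eq. EEC), in MeV.
phi_p = 2 * math.pi * 1e4 * P ** (-1) * B12 * (h_CR / 1e3) ** 2

# Richardson-Dushman current density, taking the work function to be of the
# order of the barrier height (~10^6 MeV) and T ~ 10^6 K, as argued above.
T6 = 1.0
J_th = 1.2e14 * T6 ** 2 * math.exp(-1.161e4 * phi_p / T6)

print(h_CR)   # 5.0e3 cm
print(phi_p)  # ~1.6e6 MeV, consistent with the ~10^6 MeV quoted in the text
print(J_th)   # underflows to 0.0: thermionic emission is utterly negligible
```

The exponential suppression factor $\exp(-1.161\times 10^{4}\,\phi_{\rm MeV}/T_{6})$ underflows to exactly zero in double precision, which is the quantitative content of the statement $J_{\rm th} \sim 0$.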
The physical picture of the diffusion of electrons is illustrated in Fig.~\ref{DF}. \begin{figure} \includegraphics[scale=0.3]{f4.eps} \caption{A representative illustration of the diffusion of electrons above the polar cap of a bare strange quark star.}\label{DF} \end{figure} Making use of $\epsilon_{\rm k} = \epsilon_{\rm B}$, where $\epsilon_{\rm k} = n_{\rm e} \epsilon_{\rm F}$ ($\epsilon_{\rm F}$ is the Fermi energy of degenerate electrons) and $\epsilon_{\rm B} = B^{2}/{8 \pi}$, we can obtain the height of the balance surface. For NPs, it is $Z_{11}$$\simeq 160$ and for MSPs, it is $Z_{11}$$\simeq 1.7 \times 10^4$, where $Z_{11}=Z/(10^{-11} \:{\rm cm})$. Keep in mind that there is an outward-directed surface electric field above the quark matter surface. This surface electric field is much stronger than the gap electric field but also decreases rapidly with increasing distance, which means that it becomes smaller than the gap electric field above a certain distance, $Z_{0,11}$. For NPs, it is $Z_{0,11}$$\simeq 7000$, and for MSPs, it is $Z_{0,11}$$\simeq$$6.3 \times 10^{6}$ (see Fig.~\ref{DF}). In both cases, $Z_{11}$~$\ll$~$Z_{0,11}$. The diffusion of electrons beneath $Z_{0,11}$ is still confined by the surface electric field, meaning that only the diffusion of electrons above the surface at height $Z_{0,11}$ needs to be considered. The diffusion coefficient, $D_{\rm c}$, is given by~\citep{XZQ01} \begin{equation}\label{DC} D_{\rm c} \simeq \frac{\rho^{2}}{\tau_{\rm F}} = \frac{\pi n_{\rm e} ce^{2}}{B^{2}} = 2.17 \times 10^{-3} B_{12}^{-2} n_{\rm {e},29} ~\;{\rm cm^{2} \,s^{-1}}, \end{equation} where $\rho = \gamma \rho_{\rm L}$ ($\rho_{\rm L}=m_{\rm e}vc/(eB)$ is the Larmor radius) is the cyclotron radius of relativistic electrons, and $\tau_{\rm F} \simeq \gamma m_{\rm e}^{2} v^{3}/(\pi e^{4} n_{\rm e})$ is the mean free flight time of electrons. 
The gradient of electrons along the diffusion direction is approximately \begin{equation}\label{DG} \frac{{\rm d} n_{\rm e}}{{\rm d} x} \simeq \frac{n_{\rm e}}{\rho} = 1.4 \times 10^{38} n_{{\rm e},29}^{2/3} ~\;{\rm cm^{-4}}. \end{equation} Then, the diffusion rate is \begin{equation}\label{DR} I_{\rm df} = 2.77 \times 10^{29} B_{12}^{-1} P^{-1/2} R_{6}^{3/2} \int_{Z_{0,11}}^{\infty} n_{{\rm e},29}^{5/3} \;{\rm dZ_{11}} ~\;{\rm s^{-1}}, \end{equation} where $n_{{\rm e},29}=n_{\rm e}/(10^{29}\;{\rm cm^{-3}})$. For both NPs and MSPs with different $\phi_{\rm q}$, we give the results for the diffusion rate $I_{\rm df}$ and for $I_{\rm GJ}$ in Table~\ref{DRV}, in which the flow with the Goldreich-Julian flux is $I_{\rm GJ}=\pi r_{\rm p}^{2} c n_{\rm GJ} \simeq 1.4 \times 10^{30} P^{-2} R_{6}^{3} B_{12}~\rm s^{-1}$. From Table~\ref{DRV}, we see that $I_{\rm df} \ll I_{\rm GJ}$ for both NPs and MSPs. This means that the diffusion of electrons is also negligible, which guarantees the existence of the vacuum gap. \begin{table*} \centering \begin{minipage}{140mm} \caption{ The typical values of the diffusion rate for NPs and MSPs with different choices of $\phi_{\rm q}$. \label{DRV}} \begin{tabular}{@{}ccccc@{}} \hline &\multicolumn{2}{c}{NPs} &\multicolumn{2}{c}{MSPs} \\ \cline{2-5}\\ $\phi_{\rm q}$ (MeV) &$I_{\rm df}$ ($10^{24} \; {\rm s^{-1}}$) & $I_{\rm GJ}$ ($10^{24}\; {\rm s^{-1}}$) & $I_{\rm df}$ ($10^{17} \; {\rm s^{-1}}$) &$I_{\rm GJ}$ ($10^{17}\; {\rm s^{-1}}$) \\ \hline 1& $\sim$4.75 & &$\sim$7.52& \\ 10&$\sim$4.91& $\sim$$1.4 \times 10^3$& $\sim$7.52& $\sim$$1.4 \times 10^{13}$\\ 20&$\sim$4.93 & & $\sim$7.53& \\ \hline \end{tabular} \end{minipage} \end{table*} \section{ Conclusions and Discussions} In the RS75 model, the binding energy problem is one of the most serious problems for the normal neutron star model of pulsars. 
Arons and Scharlemann (1979) developed an alternative model, the space-charge limited flow (SCLF) model, in which the particles, both iron ions and electrons, can be pulled out freely and form a steady flow~\citep{AS79}. In this SCLF model, the drifting subpulse phenomenon, which is commonly observed in pulsars, can hardly be reproduced. The prerequisite for understanding this phenomenon could be the existence of a vacuum gap. In a very special case, through our calculations, we find that there is a new physical scenario for CR-induced sparking of normal pulsars (NPs): free flow and vacuum gap may coexist above the polar cap. But in other cases, such as ICS-induced sparking of NPs and millisecond pulsars (MSPs), only the vacuum gap exists. In general, if a pulsar is not highly negatively charged~\citep{XCQ06}, the vacuum gap survives at the polar cap as well. One limitation is that our calculation is based on the one-dimensional approximation, and it might fail in some cases of MSPs. As far as we can tell, it is very difficult to deal with the higher-dimensional cases. The one-dimensional approximation provides a good understanding of the geometry of the polar cap of BSSs. In conclusion, the binding energy problem could be solved completely in the BSS model of pulsars as long as BSSs are neutral (or not highly negatively charged), and the structure of the polar cap of BSSs is very different from that of NSs. Detailed information about the geometry of the BSS's polar cap is given in Table~\ref{AG}. \begin{table*} \centering \begin{minipage}{140mm} \caption{The accelerators above the polar caps of BSSs. 
\label{AG}} \begin{tabular}{@{}llllll@{}} \hline &\multicolumn{2}{c}{[0,~$\theta_{\rm 0}$]$^{\dag}$} &\multicolumn{2}{c}{[$\theta_{\rm 0}$,~$\theta_{\rm A}$]} & \\ \cline{2-5}\\ &CR &ICS &CR &ICS & \\ \hline &SCLF &VG &VG &VG &$\bf \Omega \cdot B > 0$ \\ NPs &VG &VG$^{\ddag}$ &SCLF &VG$^{\ddag}$& $\bf \Omega \cdot B < 0$ \\ &VG &VG &VG &VG &$\bf \Omega \cdot B >0 $ \\ MSPs &VG$^{\ddag}$ &VG &VG$^{\ddag}$ &VG &$\bf \Omega \cdot B < 0$\\ \hline \end{tabular}\\ $^{\dag}${$\theta_{0}$ represents $\theta_{\rm 0,B}$ when choosing $\phi_{\rm B}=0$ and $\theta_{\rm 0,C}$ when choosing $\phi_{\rm C}=0$.}\\ $^{\ddag}${for such cases, $\theta_{0}$ $>$~$\theta_{\rm A}$, which represents the structure of the whole polar cap region.} \end{minipage} \end{table*} A more interesting region from pole to equator may lie between the polar angle at which the total energy of an electron equals the potential barrier and the polar angle of the foot of the zero-potential magnetic field line (i.e., [$\theta_{0, \rm C},\theta_{\rm C}$] or [$\theta_{0, \rm B},\theta_{\rm B}$], see Fig.~\ref{PEC}) for CR-induced sparking NPs. After the birth of an NP, a vacuum gap exists in this region. When sparking starts, the potential in the vacuum gap drops rapidly due to screening by electron-positron pairs and may become lower than that at the surface, namely $V_{\rm i}(\theta)$. As a result, the sparking converts the vacuum gap into free flow in this region until the sparking ends; i.e., in [$\theta_{0, \rm C},\theta_{\rm C}$] or [$\theta_{0, \rm B},\theta_{\rm B}$], vacuum gap and free flow work alternately. This argument may have profound implications for distinguishing neutron stars from quark stars via pulsars' magnetospheric activities (e.g., the diversity of pulse profiles). Another issue to be discussed concerns the drift rate of subpulses when we use the height of the pure vacuum gap in this work. The natural explanation of the drifting subpulse phenomena in the vacuum gap is the $\bf E \times B$ drift. 
Unfortunately, these theoretical calculations gave drift rates higher than observed~\citep[e.g.,][]{RS75, DR99, DR01, GM03, GMZ06}. Since it was first observed~\citep{DC68}, the drifting subpulse phenomenon has remained unclear and has been widely regarded as one of the most critical and potentially insightful aspects of pulsar emission~\citep{DR01}. The PSG mechanism~\citep[e.g.,][]{GM03, GM06, GMZ06} could be a way to understand the lower drift rates observed, but some complexities still exist which keep the underlying physics of drifting subpulses complicated and far from clearly understood. (1) In principle, the drifting velocity of subpulses is the ratio of the drifting distance to the duration, while the expected velocity predicted by $\bf E \times B$ is only for electrons in separated emission units, namely the plasma filaments. These two velocities would not be the same if the plasma filaments stop after sparking. When sparking starts, the electric field in the vacuum gap vanishes due to screening by plasmas; when sparking ends, the electric field appears again. Thus, the drifting velocity calculated with $\bf E \times B$ could be higher than that of observations. (2) The so-called aliasing effect: as one observes subpulses only once every rotation period, we can hardly determine their actual speed. The main obstacles in the aliasing problem are the undersampling of subpulse motion and our inability to distinguish between subpulses, especially when the differences between subpulses formed by various subbeams are smaller than the fluctuations in subpulses from one single subbeam~\citep{LSRR03}. In any case, detailed studies are necessary in future work. We assume the potential energy related to Eq.~\ref{UGP}, $eV_{\rm i}$, to be the constant, $\phi_{0}$, in Eq.~\ref{FD}. This assumption could be reasonable. 
For a uniformly magnetized, rotating conducting sphere, the unipolar generator will induce an electric field which is a function of the polar angle, as described in Eq.~\ref{UGP}. In the case of $\bf \Omega \cdot B>0$ (Fig.~\ref{antipulsar}), the potential energy of electrons is highest in the polar region, which means that electrons there can escape more easily. Alternatively, this conclusion can be quantitatively understood as follows: because of the Lorentz force inside the star, more electrons are located in the polar region, so that the Fermi energy of electrons is higher there and they escape into the magnetosphere more easily. \section*{Acknowledgments} We thank Dr. Kejia Lee and other members of the pulsar group of Peking University for their helpful and enlightening discussions. We also thank Prof. Janusz Gil for his helpful comments and suggestions. Junwei Yu is grateful to Dr. Caiyan Li for her helpful assistance. The work is supported by NSFC (10973002,10935001), the National Basic Research Program of China (grant 2009CB824800) and the John Templeton Foundation.
\section{A resource-driven operational semantics} \label{sec:operational} There are many variants of the $\pi$-calculus; here's ours: \[ \begin{array}{c} P \ ::=\ \sum \pi_i. P_i \ \ |\ \ P \oplus Q \ \ |\ \ \textsf{new } x. P \ \ |\ \ P|Q \ \ |\ \ \textsf{rec } X. P \ \ |\ \ X \\ \pi \ ::=\ \ov{e}e' \ \ |\ \ e(x) \qquad \qquad e \ ::=\ x \ \ |\ \ c \end{array} \] We distinguish between external choice ($+$) and internal choice ($\oplus$), which simplifies the liveness semantics~(\secref{liveness}) but is not essential. We also distinguish between channels ($c, d$) and channel variables ($x, y, z$) and include a simple grammar of channel expressions ($e$) ranging over both. A \emph{closed} process has no unbound channel or process variables. Closed processes may, however, refer to channel constants and thereby communicate with an environment. We write $0$ for an empty summation, which is an inert process. \subsection{Generating actions} The operational semantics of closed processes is given in two layers, via two labelled transition systems. In both systems, the labels are (syntactic) \emph{actions}, given by the following grammar: \begin{eqnarray*} \alpha &::=& c!d \ \ |\ \ c?d \ \ |\ \ \nu c \ \ |\ \ \tau \ \ |\ \ \lightning \qquad (\textsc{Action}) \end{eqnarray*} Actions record the concrete channels involved in sending, receiving, and allocating, respectively. The action $\tau$, as usual, represents an internal (unobservable) step on the part of the process. The action $\lightning$ represents a fault, caused by using an unowned channel~(\secref{action-sem}). Communication actions are dual: $\ov{c!d} = c?d$ and $\ov{c?d} = c!d$, while $\ov{\nu c}$, $\ov{\tau}$, and $\ov{\lightning}$ are undefined. 
The first transition system generates all conceivable actions associated with a process, without considering whether those actions are globally plausible: \begin{display}[$P \step{\alpha} Q$]{Operational semantics: action generation} \[ \begin{array}{rcl} \cdots + \ov{c}d.P + \cdots &\step{c!d}& P \tr{& \sr{ASend}}\\ \cdots + c(x).P +\cdots &\step{c?d}& P \{d/x\} \tr{& \sr{ARecv}} \\ P_1 \oplus P_2 &\step{\tau}& P_i \tr{& \sr{AChooseL/R}} \\ \textsf{new } x.P &\step{\nu c}& P\{c/x\} \tr{& \sr{AAlloc}} \\ \textsf{rec } X. P &\step{\tau}& P\{\textsf{rec } X.P/ X\} \tr{\multicolumn{2}{l}{P\{\textsf{rec } X.P/ X\}}\ \sr{ARec}} \end{array} \quad \begin{array}{c} \infer[\tr{AIntL}] {P \step{\alpha} P'} {P|Q \step{\alpha} P'|Q} \qquad \infer[\tr{AIntR}] {Q \step{\alpha} Q'} {P|Q \step{\alpha} P|Q'} \\ \\ \infer[\tr{ACom}] {P \step{\alpha} P' \\ Q \step{\ov{\alpha}} Q'} {P|Q \step{\tau} P'|Q'} \end{array} \] \end{display} According to this semantics, we will have transitions like \[ \textsf{new } x. \textsf{new } y. \ov{x}y. 0 \ \step{\nu c}\ \textsf{new } y. \ov{c}y. 0 \ \step{\nu c}\ \ov{c}c. 0 \ \step{c!c}\ 0 \] where $c$ is allocated twice, and used to communicate with an environment that cannot know it. To filter out such executions, we use resources. \subsection{Resources and action semantics}\label{sec:action-sem} The execution above is intuitively impossible because, after the first $\nu c$ action, the process \emph{already owns} the channel $c$. Similarly, for the process $\textsf{new } x. \ov{x}x . 0$ the trace \[ \textsf{new } x. \ov{x}x. 0 \ \step{\nu c}\ \ov{c}c. 0 \ \step{c!c}\ 0 \] is impossible because the channel $c$, having just been allocated, is unknown to the environment---so no parallel process could possibly be on the other side of the communication, receiving along $c$. 
Formally, resources are elements $\sigma$ of the domain $ \Sigma \triangleq \textsc{Chan} \rightharpoonup \{ \textsf{pub}, \textsf{pri} \} $, where $\textsf{pub}$ and $\textsf{pri}$ are distinct atoms. If a process is executing with resources $\sigma$, it owns the channels $\textrm{dom}(\sigma)$, and $\sigma(c)$ tells, for each $c$, whether that ownership is exclusive. Therefore, if $c \in \textrm{dom}(\sigma)$, the action $\nu c$ is impossible. Likewise, if $\sigma(c) = \textsf{pri}$, the action $c!c$ is impossible. The resources owned at a particular point in time determine not only what is \emph{possible}, but also what is \emph{permissible}. For example, the process $\ov{c}d.0$ immediately attempts a communication along the channel $c$. If this channel is not allocated (\emph{i.e.}, not owned, \emph{i.e.}, not in $\textrm{dom}(\sigma)$) then the process is \emph{faulty}: it is attempting to use a dangling pointer. We interpret actions $\alpha$ as \emph{resource transformers} of type $\Sigma \rightarrow \Sigma^\top_\bot$.\footnote{ The notation $\Sigma^\top_\bot$ denotes the set $\Sigma \cup \{\top, \bot\}$ and implies an ordering $\bot \leq \sigma \leq \top$ for all $\sigma \in \Sigma$. The order structure follows abstract separation logic~\cite{Calcagno2007}, and is related to locality~(\secref{safety}). } Since all nondeterminism is resolved during the generation of actions, these transformers are deterministic. A result of $\top$ or $\bot$ represents that an action is not permissible or not possible, respectively. 
Given the semantics $\ASem{\alpha} : \Sigma \rightarrow \Sigma^\top_\bot$ of actions (defined below), we can define a transition system that \emph{executes} actions according to the currently-owned resources: \begin{display} [$P,\sigma \step{\alpha} P',\sigma'$]{Operational semantics: resource sensitivity} \[ \infer[\tr{RStep}] {P \step{\alpha} P' \\ \ASem{\alpha}{\sigma} = \sigma'} {P,\sigma \rstep{\alpha} P', \sigma'} \qquad \infer[\tr{RFault}] {P \step{\alpha} P' \\ \ASem{\alpha}{\sigma} = \top} {P,\sigma \rstep{\lightning} 0,\sigma} \] \end{display} Successful actions proceed normally, updating the owned resources---note that if $\ASem{\alpha}{\sigma} = \sigma'$ then in particular $\ASem{\alpha}{\sigma} \neq \top, \bot$. Impermissible actions noisily fail, generating the faulting label $\lightning$. Impossible actions silently fail to occur. The semantics of actions is as follows: \begin{display}[$\ASem{\alpha}{} : \Sigma \rightarrow \Sigma^\top_\bot$]{Action semantics} \[ \begin{array}[t]{rcl@{\quad}rcl} \ASem{c!d}{\sigma} &\triangleq& \begin{cases} \top & \{c,d\}\not\subseteq \textrm{dom}(\sigma) \\ \sigma[d \ \textsf{pub}] & \sigma(c) = \textsf{pub} \\ \bot & \textrm{otherwise} \end{cases} & \ASem{c?d}{\sigma} &\triangleq& \begin{cases} \top & c \notin \textrm{dom}(\sigma) \\ \sigma[d \ \textsf{pub}] & \! \begin{array}[t]{l} \sigma(c) = \textsf{pub},\\ \quad \sigma(d) \neq \textsf{pri} \end{array} \\ \bot & \textrm{otherwise} \end{cases} \\ \ASem{\nu c}{\sigma} &\triangleq& \begin{cases} \sigma[c \ \textsf{pri}] & c \notin\textrm{dom}(\sigma) \\ \bot & \textrm{otherwise} \end{cases} & \ASem{\tau}{\sigma} &\triangleq& \sigma \qquad \ASem{\lightning}{\sigma} \ \triangleq\ \top \end{array} \] \end{display} Allocation is always permitted, but is not possible if the channel is already allocated. Allocated channels are initially private. 
Sending a channel publicizes it, but the communication is only possible if performed over an already public channel, and only permitted over an allocated channel. A locally-unknown channel received from the environment is known to the environment, and hence public; a locally-known channel received from the environment cannot possibly have been private. \paragraph{Examples} Consider the process $\textsf{new } x. 0$. We have \[ \textsf{new } x. 0\quad \step{\nu c}\quad 0 \] for every channel $c$. It follows that \[ \textsf{new } x. 0,\ \emptyset\quad \rstep{\nu c}\quad 0,\ [c \mapsto \textsf{pri}] \] for every channel $c$, while executing with more resources \[ \textsf{new } x. 0,\ [c \mapsto \textsf{pri}]\quad \rstep{\nu d}\quad 0,\ [c \mapsto \textsf{pri}] \uplus [d \mapsto \textsf{pri}] \] results in constrained allocation: the $\uplus$ here denotes disjoint union, meaning that $c \neq d$. The fact that $c$ was already allocated pruned one trace (preventing it from taking an impossible step), but introduced no new traces. Similarly, \[ \textsf{new } x. \ov{x}x. 0\quad \step{\nu c} \ov{c}c.0\quad \step{c!c} 0 \] but, taking resources into account, we have \[ \textsf{new } x. \ov{x}x. 0,\ \emptyset\quad \rstep{\nu c}\quad \ov{c}c.0,\ [c \mapsto \textsf{pri}] \] at which point the process is stuck: the action $c!c$ is prevented from occurring, because $\ASem{c!c}[c\mapsto \textsf{pri}] = \bot$. This deadlock is exactly what we expect to see when a process attempts to communicate along a private channel. Finally, we have \[ \textsf{new } x. (\ov{x}x.0 | x(y).\ov{y}x.0) \quad\step{\nu c}\quad \ov{c}c.0 | c(y).\ov{y}c.0 \quad\step{\tau}\quad 0 | \ov{c}c.0 \quad\step{c!d}\quad 0 | 0 \] which, with resources, yields \[ \textsf{new } x. 
(\ov{x}x.0 | x(y).\ov{y}x.0),\ \emptyset \ \ \rstep{\nu c}\ \ \ov{c}c.0 | c(y).\ov{y}c.0,\ [c \mapsto \textsf{pri}] \ \ \rstep{\tau}\ \ 0 | \ov{c}c.0,\ [c \mapsto \textsf{pri}] \] Here we see that \emph{internal} communication along a private channel is both possible and permitted: such internal steps appear as $\tau$ actions to the resource-sensitive stepping relation, and hence always pass through. On the other hand, the internal communication also leaves the ownership of $c$ unchanged. Because it remains private, the final communication $\ov{c}c$ is stuck, as it should be. \subsection{Process safety} With the simple public/private resource model, faulting occurs only when using an unallocated channel. Our semantic framework can accommodate deallocation, but doing so complicates the full abstraction result, and we wish to focus on the standard $\pi$-calculus. Avoiding deallocation allows us to easily characterize ``safe'' processes: we say $\sigma \vdash P\checkmark$ iff $P$ is closed and all channel constants in $P$ are in $\textrm{dom}(\sigma)$, and have: \begin{lemma} If $\sigma \proves P \checkmark$ then $P,\sigma \stackrel{\lightning}{\not\rightarrow}$, and if $P,\sigma \step{\alpha} P',\sigma'$ then $\sigma' \proves P' \checkmark$. \end{lemma} \section{Denotational semantics: safety traces} \label{sec:safety} Resources provide an intriguing refactoring of the operational semantics for $\pi$-calculus, but their real payoff comes in the elementary denotational model they support. We begin with a simple trace model capturing only (some) safety properties, which allows us to focus on the role of resources. Afterwards we incorporate liveness~(\secref{liveness}) and its interaction with resources. 
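Before building the trace model, it is worth noting that the action semantics $\ASem{\alpha} : \Sigma \rightarrow \Sigma^\top_\bot$ of \secref{action-sem} has a direct executable reading. The sketch below (our own encoding: resources as Python dicts, actions as tuples, and $\top$/$\bot$ as sentinel strings) transcribes the four clauses of the definition:

```python
PUB, PRI = "pub", "pri"      # public / private ownership
TOP, BOT = "top", "bot"      # impermissible (fault) / impossible (no step)

def act(alpha, sigma):
    """[[alpha]] : Sigma -> Sigma + {TOP, BOT}; sigma maps channels to PUB/PRI."""
    kind = alpha[0]
    if kind == "tau":                      # internal step: resources unchanged
        return dict(sigma)
    if kind == "fault":                    # faulting is always impermissible
        return TOP
    if kind == "nu":                       # allocation: fresh channels only,
        c = alpha[1]                       # and they start out private
        return BOT if c in sigma else {**sigma, c: PRI}
    c, d = alpha[1], alpha[2]
    if kind == "send":                     # c!d publicizes d over a public c
        if c not in sigma or d not in sigma:
            return TOP                     # using an unowned channel faults
        return {**sigma, d: PUB} if sigma[c] == PUB else BOT
    if kind == "recv":                     # c?d: d arrives from the environment,
        if c not in sigma:                 # so it cannot have been private here
            return TOP
        if sigma[c] == PUB and sigma.get(d) != PRI:
            return {**sigma, d: PUB}
        return BOT
```

For instance, `act(("send", "c", "c"), {"c": PRI})` yields `BOT`, matching the stuck $c!c$ step in the example above, while `act(("send", "c", "d"), {})` yields `TOP`, the fault of communicating on an unallocated channel.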
For the safety model, we have traces $t$, trace sets $T$ and behaviors $B$: \[ \begin{array}{c} \textsc{Trace} \ \triangleq\ \textsc{Action}^* \qquad \textsc{Beh} \ \triangleq\ \Sigma \rightarrow \textsc{TraceSet} \\ \textsc{TraceSet} \ \triangleq\ \{ T\ :\ \emptyset \subset T \subseteq \textsc{Trace},\ T\ \textrm{prefix-closed} \} \end{array} \] Processes will denote behaviors: sets of action traces determined by the initially-available resources. Not every action is observable. We follow standard treatments of $\pi$-calculus~\cite{Sangiorgi2001,Hennessy2002} in considering $\tau$ steps unobservable, and eliding $\nu c$ steps until just before the allocated channel $c$ is sent over a public channel (a ``bound send''). Our denotational semantics shows that the operators of the $\pi$-calculus are congruent for these observables, and the cited works prove that similar observables are fully abstract for yet coarser notions of observation. The observables of an action $\alpha$ are a (possibly empty) trace, depending on the available resources: \begin{display} [$|\alpha|_\sigma : \textsc{Trace}$] {Action observables} \[ \begin{array}{rcl} |\tau|_\sigma &\triangleq& \epsilon \\ |\nu c|_\sigma &\triangleq& \epsilon \end{array} \qquad \begin{array}{rcl} |\lightning|_\sigma &\triangleq& \lightning \\ |c?d|_\sigma &\triangleq& c?d \end{array} \qquad \begin{array}{rcl} |c!d|_\sigma &\triangleq& \begin{cases} \nu d \cdot c!d & \sigma(d) = \textsf{pri} \\ c!d & \textrm{otherwise} \end{cases} \end{array} \] \end{display} We write $t \cdot u$ or $tu$ for trace concatenation, and $\epsilon$ for the empty trace. Although $\nu c$ is not immediately observable, taking a $\nu c$ step affects the resources owned by the process, so exposing $c$ later will cause the $\nu c$ step to reemerge. 
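The observables function $|\alpha|_\sigma$ also has an executable reading; the sketch below (our own encoding, matching the one used for the action semantics: actions as tuples, traces as lists) mirrors the definition, including the bound-send case where the elided $\nu d$ of a private channel reemerges:

```python
PUB, PRI = "pub", "pri"

def observe(alpha, sigma):
    """|alpha|_sigma: the observable part of one action, as a (possibly empty) trace."""
    kind = alpha[0]
    if kind in ("tau", "nu"):        # internal steps and allocations are silent
        return []
    if kind in ("fault", "recv"):    # faults and receives are observed as-is
        return [alpha]
    if kind == "send":               # bound send: exporting a private channel d
        _, c, d = alpha              # makes the elided allocation (nu d) visible
        if sigma.get(d) == PRI:
            return [("nu", d), alpha]
        return [alpha]
```

So a send of a private channel is observed as the two-action trace $\nu d \cdot c!d$, while all other communications are observed as single actions.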
The behavior of a process can be read from its operational semantics: \begin{display}[$\ObB{P} : \textsc{Beh}$]{Safety observation} \[ \infer { } {\epsilon \in \Ob{P}{\sigma}} \tr{\sr{OEps}} \qquad \infer {P,\sigma \rstep{\alpha} P', \sigma' \\ t \in \Ob{P'}{\sigma'}} {|\alpha|_\sigma t \in \Ob{P}{\sigma}} \tr{\sr{OStep}} \] \end{display} The goal of the denotational semantics is to calculate the same traces compositionally over process structure. $\textsc{TraceSet}$ is a complete lattice under the subset order, and behaviors inherit this order structure pointwise: we write $B \sqsubseteq B'$ if $B(\sigma) \subseteq B'(\sigma)$ for all $\sigma$ and have $(B \sqcup B')(\sigma) = B(\sigma) \cup B'(\sigma)$. The semantic operators are monotonic (in fact, continuous), so we are justified in defining \textsf{rec} as a fixpoint. For the safety semantics, which is based on finite observation, it is the least fixpoint. The safety trace model is insensitive to branching behavior of processes~\cite{Glabbeek1988}, so internal and external choice are indistinguishable. We interpret both forms of choice using $\sqcup$, merging behaviors from all the alternatives. For empty summations, $\sqcup$ yields the smallest behavior: $\lambda \sigma. \{ \epsilon \}$. The denotation function is parameterized by an environment $\rho$, here taking channel variables $x$ to channels $c$, and process variables $X$ to behaviors $B$. It uses two additional operators, $\triangleright$ and $\parallel$, which we will define shortly. \begin{display} [$\SemB{P} : \textsc{Env} \rightarrow \textsc{Beh}$] {Denotational semantics (safety)} \[ \begin{array}{r@{\ \ \triangleq \ \ }r@{\ \triangleright\ }l} \Sem{\ov{e}e'.P}{\rho} & \rho e ! \rho e' & \Sem{P}{\rho} \\ \Sem{e(x).P}{\rho} & \bigsqcup_c \rho e ? c & \Sem{P}{\rho[x \mapsto c]} \\ \Sem{\textsf{new } x.P}{\rho} & \bigsqcup_c \nu c & \Sem{P}{\rho[x \mapsto c]} \\ \Sem{\textsf{rec } X.P}{\rho} & \multicolumn{2}{r}{\mu B. 
\Sem{P}{\rho[X \mapsto B]}} \end{array} \qquad \begin{array}{r@{\ \ \triangleq\ \ }l} \Sem{\sum \pi_i.P_i}{\rho} & \bigsqcup_i \Sem{\pi_i.P_i}{\rho} \\ \Sem{P \oplus Q}{\rho} & \Sem{P}{\rho} \sqcup \Sem{Q}{\rho} \\ \Sem{P | Q}{\rho} & \Sem{P}{\rho} \parallel \Sem{Q}{\rho} \\ \Sem{X}{\rho} & \rho(X) \end{array} \] \end{display} The interpretation of prefixed processes resembles the operational semantics: each clause of the denotational semantics generates all locally-reasonable actions, without immediately checking global plausibility. We use $\sqcup$ to join the behaviors arising from each action---once more reflecting nondeterminism---and we update the environment as necessary. The operator $\alpha \triangleright B$ prefixes an action $\alpha$ to a behavior $B$ in a resource-sensitive way, playing a role akin to the second layer of the operational semantics: \begin{display} [$\alpha \triangleright B : \textsc{Beh}$] {Semantic prefixing} \[ \begin{array}{r@{\ \ \triangleq\ \ }l} (\alpha \triangleright B)(\sigma) & \{ \alpha t\ : \ASem{\alpha}{\sigma} = \sigma',\ t \in B(\sigma') \} \ \cup\ \{ \lightning\ :\ \ASem{\alpha}{\sigma} = \top \} \ \cup\ \{ \epsilon \} \end{array} \] \end{display} To maintain prefix-closure, we include $\epsilon$ as a possible trace. A quick example: \[ \Sem{\textsf{new } x. \ov{x}x.0}{\emptyset} \ =\ \bigsqcup_c \nu c \triangleright \Sem{\ov{x}x.0}{x \mapsto c} \ =\ \bigsqcup_c \nu c \triangleright c!c \triangleright \Sem{0}{x \mapsto c} \ =\ \bigsqcup_c \nu c \triangleright c!c \triangleright \lambda \sigma. \{ \epsilon \} \] This expansion of the definition resembles the traces we see from the first layer of the operational semantics, without taking resources into account. The denotation, recall, is a \emph{behavior}: to extract its set of traces, we must apply it to some particular resource $\sigma$. If we use the empty resource, we see that \begin{eqnarray*} \left(\bigsqcup_c \nu c \triangleright c!c \triangleright \lambda \sigma. 
\{ \epsilon \}\right)(\emptyset) &=& \{ \epsilon \} \cup \bigcup_c \left\{ \nu c \cdot t\ :\ t \in \left(c!c\triangleright \lambda \sigma. \{\epsilon\}\right)[c\mapsto \textsf{pri}] \right\} \\ &=& \{ \epsilon \} \cup \bigcup_c \left\{ \nu c \cdot t\ :\ t \in \{ \epsilon \} \right\} \end{eqnarray*} in other words, we have $\Sem{\textsf{new } x. \ov{x}x.0}{\emptyset}(\emptyset) = \{ \epsilon \} \cup \bigcup_c \{ \nu c \}$. Just as in the operational semantics, the fact that $\ASem{c!c}[c \mapsto \textsf{pri}] = \bot$ prevents the $c!c$ step from being recorded. Here, the prefix closure (in particular, the inclusion of $\epsilon$ in every application of $\triangleright$) ensures that we see the trace up to the point that we attempt an impossible action. Finally, we have parallel composition---the most interesting semantic operator. Here we must ask a crucial question for the denotational semantics: if $\sigma$ is the resource belonging to $P|Q$, what resources do we provide to $P$ and $Q$? The question does not come up in the operational semantics, which maintains a single, global resource state, but a compositional semantics must answer it. Consider the process $\textsf{new } x.(\ov{x}c\ |\ x(z))$. When the process reaches the parallel composition, $x$ will still be private. The privacy of $x$ means that the subprocesses can only communicate with each other (yielding $\tau$), not with the external environment of the process. But the subprocesses \emph{are} communicating with environments external to themselves---namely, each other. That is, $x$ is private to $\ov{x}c\ |\ x(z)$, which cannot communicate along it externally, but it is \emph{public} to the \emph{subprocesses} $\ov{x}c$ and $x(z)$, which can. 
Formally, we capture this narrative as follows: \begin{display} [$B_1 \parallel B_2 : \textsc{Beh}$] {Semantic parallel composition} \[ \begin{array}{r@{\ \triangleq\ }l} (B_1 \parallel B_2)(\sigma) & \bigcup_{t_i \in B_i(\widehat{\sigma})} (t_1 \parallel t_2)(\sigma) \end{array} \ \textrm{where}\ \widehat{\sigma}(c)\ \triangleq\ \begin{cases} \textsf{pub} & c \in \textrm{dom}(\sigma) \\ \textrm{undefined} & \textrm{otherwise} \end{cases} \] \end{display} The resource $\sigma$ given to a parallel composition of behaviors is fed in \emph{public-lifted} form ($\widehat{\sigma}$) to the composed behaviors, yielding two sets of traces. For each pair of traces $t_1$ and $t_2$ from these sets, we calculate all interleavings $t_1 \mbox{$\parallel$} t_2$: \begin{display} [$t \parallel u : \textsc{Beh}$] {Trace interleavings} \[ \begin{array}{rcll} t \parallel u &\triangleq& \lambda \sigma.\{\epsilon\} &\textrm{if } t = \epsilon = u \\&\sqcup& \alpha \triangleright (t' \parallel u) &\textrm{if } t = \alpha t' \\&\sqcup& \alpha \triangleright (t \parallel u') &\textrm{if } u = \alpha u' \\&\sqcup& t' \parallel u' &\textrm{if } t = \alpha t',\ u = \ov{\alpha} u' \end{array} \] \end{display} Interleaving at first glance appears standard, but note the use of semantic prefixing $\triangleright$: \emph{the interleavings are not simply another set of traces, they are given as a \emph{behavior} that must be evaluated}. We evaluate with the \emph{original} resources $\sigma$. The effect is that each interleaving is checked with respect to the resources held by the \emph{combined} process. This additional check is the key to making the ``declare everything public'' approach work, allowing us to take into account channels that are private from the point of view of the combined process, but public between the subprocesses. An example helps illuminate the definitions: take the process $\ov{d}c\ |\ d(z)$ with resources $\sigma = [c \mapsto \textsf{pub}][d \mapsto \textsf{pri}]$. 
It is easy to calculate that \[ \begin{array}{l} \Sem{\ov{d}c}{\emptyset}\!(\widehat{\sigma})\ =\ \{ \epsilon, d!c \} \qquad \Sem{d(z)}{\emptyset}\!(\widehat{\sigma})\ =\ \{ \epsilon \} \cup \{ d?e\ :\ e \in \textsc{Chan} \} \\ d!c \parallel d?c \ =\ \left(d!c \triangleright d?c \triangleright \lambda \sigma. \{\epsilon\}\right) \sqcup \left(d?c \triangleright d!c \triangleright \lambda \sigma. \{\epsilon\}\right) \sqcup \left(\lambda \sigma. \{\epsilon\}\right) \end{array} \] The interleaving $d!c \parallel d?c$ includes the case that $d!c$ and $d?c$ are two sides of the same communication (yielding $\lambda \sigma. \{\epsilon\}$) and the two possible orderings if they are not. From the point of view of $\widehat{\sigma}$, which has lost the information that $d$ is private to the combined process, this is the most we can say. However, the interleaving is built using the prefixing operation $\triangleright$, so when we evaluate it with respect to the original $\sigma$, some traces will be silently dropped: \begin{eqnarray*} && (d!c \parallel d?c)(\sigma)\\ &\ =\ & (d!c \triangleright d?c \triangleright \lambda \sigma. \{\epsilon\})(\sigma) \cup (d?c \triangleright d!c \triangleright \lambda \sigma. \{\epsilon\})(\sigma) \cup (\lambda \sigma. \{\epsilon\}) (\sigma) \\ &\ =\ & \{ \epsilon \} \cup \{ \epsilon \} \cup \{ \epsilon \} \end{eqnarray*} In particular, for any $B$ we have $ (d!c \triangleright B)(\sigma) = (d?c \triangleright B)(\sigma) = \{ \epsilon \} $ because $\sigma(d) = \textsf{pri}$. We are left only with traces that could arise from internal communication, as expected. That is, $\Sem{\textsf{new } x.(\ov{x}c|x(y))}{\emptyset}[c \mapsto\textsf{pub}] = \{ \epsilon \}$. More generally, we can show $\Sem{\textsf{new } x.(\ov{x}c|x(y))}{\emptyset}\sigma = \Sem{0}{\emptyset}\sigma$ whenever $c \in \textrm{dom}(\sigma)$. Because $\ASem{\lightning}{\sigma} = \top$, we have $\lightning \triangleright B = \lambda \sigma. 
\{ \lightning, \epsilon \}$ for any $B$. Thus, when a $\lightning$ action is interleaved, the interleaving is terminated with that action. In summary, we calculate the traces of $P|Q$ by calculating the traces of $P$ and $Q$ under conservatively public-lifted resources, then evaluating the interleavings with complete information about what resources $P|Q$ actually owns. \paragraph{Example calculations} Before proving full abstraction, we briefly examine a few of the expected laws. For example, why does $\SemB{\textsf{new } x.0} = \SemB{0}$? Expanding the former, we get $\bigsqcup_c \nu c \triangleright \lambda \sigma . \{ \epsilon \}$. When applied to a particular $\sigma$, this behavior yields the simple set $\{ \epsilon \}$, because $|\nu c|_\sigma = \epsilon$. This simple example sheds light on the importance of action observation $|-|$: it is crucial for ignoring when, or in some cases whether, channels are allocated. A more complex example is the following: \begin{eqnarray*} \Sem{\textsf{new } x.\textsf{new } y.P}{\rho} &=& \bigsqcup_c \nu c \triangleright \Sem{\textsf{new } y.P}{\rho[x \mapsto c]} \\ &=& \bigsqcup_c \nu c \triangleright \bigsqcup_d \nu d \triangleright \Sem{P}{\rho[x \mapsto c, y \mapsto d]} \\ &=& \bigsqcup_{c,d} \nu c \triangleright \nu d \triangleright \Sem{P}{\rho[x \mapsto c, y \mapsto d]} \\ &=& \bigsqcup_{c,d} \nu d \triangleright \nu c \triangleright \Sem{P}{\rho[x \mapsto c, y \mapsto d]} \\ &=& \bigsqcup_d \nu d \triangleright \bigsqcup_c \nu c \triangleright \Sem{P}{\rho[x \mapsto c, y \mapsto d]} \\ &=& \bigsqcup_d \nu d \triangleright \Sem{\textsf{new } x.P}{\rho[y \mapsto d]} \ =\ \Sem{\textsf{new } y.\textsf{new } x.P}{\rho} \end{eqnarray*} The key step is swapping $\nu c$ and $\nu d$, which relies on the lemma $\nu c \triangleright \nu d \triangleright B = \nu d \triangleright \nu c \triangleright B$. The validity of this lemma, again, relies on observability: $|\nu c|_\sigma = |\nu d|_\sigma = \epsilon$ for all $\sigma$. 
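The calculations above can be replayed in a small executable model. The following Python sketch renders behaviors as functions from resource maps to trace sets; the action interpretation \texttt{interp} is an assumption reconstructed from the examples (allocation makes a channel private, communication on a private channel has no external partner, and communication on an unknown channel faults), not the paper's official definition, and the observation map $|-|$ is omitted, so $\nu$-actions remain visible in the computed traces.

```python
# Sketch of behaviors Beh = resources -> set of traces, with semantic
# prefixing (alpha ▷ B) and trace interleaving (t ∥ u).  The interpretation
# `interp` below is an assumption inferred from the worked examples.

FAULT = "fault"                  # the ⊤ outcome of an action interpretation
LIGHTNING = ("lightning",)       # the fault action recorded in traces

def interp(action, sigma):
    """Interpret an action against a resource map.
    Returns updated resources, FAULT, or None (action impossible here)."""
    kind = action[0]
    if kind == "nu":                        # allocation: channel becomes private
        c = action[1]
        if c in sigma:
            return None
        return {**sigma, c: "pri"}
    c, d = action[1], action[2]             # send c!d / recv c?d
    if c not in sigma:
        return FAULT                        # communicating on an unknown channel faults
    if sigma[c] == "pri":
        return None                         # no external partner on a private channel
    return {**sigma, d: "pub"}              # the payload becomes public

def prefix(action, behavior):
    """alpha ▷ B: resource-sensitive prefixing."""
    def b(sigma):
        traces = {()}                       # epsilon keeps the result prefix-closed
        out = interp(action, sigma)
        if out is FAULT:
            traces.add((LIGHTNING,))        # the one-action fault trace
        elif out is not None:
            traces |= {(action,) + t for t in behavior(out)}
        return frozenset(traces)
    return b

NIL = lambda sigma: frozenset({()})         # the behavior of the inert process 0

def co(action):
    kind, c, d = action
    return ("recv" if kind == "send" else "send", c, d)

def interleave(t, u):
    """t ∥ u, rebuilt as a *behavior* with ▷, so evaluation re-checks resources."""
    def b(sigma):
        traces = set()
        if t == () and u == ():
            traces.add(())
        if t:
            traces |= prefix(t[0], interleave(t[1:], u))(sigma)
        if u:
            traces |= prefix(u[0], interleave(t, u[1:]))(sigma)
        if t and u and u[0] == co(t[0]):    # the two sides synchronize
            traces |= interleave(t[1:], u[1:])(sigma)
        return frozenset(traces)
    return b
```

Under these assumptions, the denotation of $\textsf{new } x.\ov{x}x.0$ at the empty resource reproduces the set $\{\epsilon\} \cup \{\nu c\}$ computed above, and the interleaving $d!c \parallel d?c$ collapses to $\{\epsilon\}$ once evaluated at resources where $d$ is private.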
\subsection{Congruence for the basic operators} We prove full abstraction by proving a \emph{congruence} result for each operator in the language. For the operators other than parallel composition, we show: \begin{lemma}[Core congruences] All of the following equivalences on closed processes hold: \begin{enumerate} \item $\ObB{0} = \Sem{0}{\emptyset}$ \item $\ObB{\ov{c}d.P} = c!d \triangleright \ObB{P}$ \item $\ObB{c(x).P} = \bigsqcup_d c?d \triangleright \ObB{P\{d/x\}}$ \item $\ObB{\textsf{new } x.P} = \bigsqcup_c \nu c \triangleright \ObB{P\{c/x\}}$ \item $\ObB{\sum \pi_i.P_i} = \bigsqcup_i \ObB{\pi_i.P_i}$ \item $\ObB{P \oplus Q} = \ObB{P} \sqcup \ObB{Q}$ \end{enumerate} \end{lemma} \noindent These equivalences are straightforward to show; we prove each by showing containment in both directions. For illustration, we give the proof that $\ObB{c(x).P} \subseteq \bigsqcup_d c?d \triangleright \ObB{P\{d/x\}}$: \begin{proof} Let $\sigma \in \Sigma$ and $t \in \Ob{c(x).P}{\sigma}$. We analyze cases on the derivation of $t \in \Ob{c(x).P}{\sigma}$: \pcase{ \infer {\phantom{a} } {\epsilon \in \Ob{c(x).P}{\sigma}} } \vskip 2pt\noindent Let $d$ be a channel. Then $t = \epsilon \in c?d \triangleright \ObB{P\{d/x\}}$ by definition of $\triangleright$. The result follows by monotonicity of $\sqcup$. \pcase{ \infer {c(x).P,\sigma \step{\alpha} P',\sigma' \\ t' \in \Ob{P'}{\sigma'}} {|\alpha|_\sigma t' \in \Ob{c(x).P}{\sigma}} } \vskip 2pt Reasoning by inversion, we see that there are two subcases: \psubcase{\exists d.\ \alpha = c?d,\ \ASem{c?d}{\sigma} = \sigma',\ P' = P\{d/x\}} \vskip 4pt Then $t = \alpha t' \in \bigsqcup_d c?d \triangleright \ObB{P\{d/x\}}$ trivially by the definition of $\triangleright$. \psubcase{\alpha = \lightning,\ c \notin\textrm{dom}(\sigma),\ P' = 0} \vskip 4pt Then $t = \alpha t' = \lightning$ because $\Ob{0}{\sigma'} = \{ \epsilon \}$.
That $\lightning \in \bigsqcup_d c?d \triangleright \ObB{P\{d/x\}}$ again follows easily by the definition of $\triangleright$. \end{proof} \subsection{Congruence for parallel composition} The justification of our treatment of parallel composition goes back to the intuitions from the beginning of the paper: concurrent processes must divide resources amongst themselves, with each process using only those resources it owns. We say $\sigma$ separates into $\sigma_1$ and $\sigma_2$ if the following conditions hold: \begin{display}[$(\sigma_1 \parallel \sigma_2) \subseteq \Sigma$]{Parallel separation} \[ \sigma \in (\sigma_1 \parallel \sigma_2) \ \triangleq\ \left\{ \begin{array}{l} \textrm{dom}(\sigma) = \textrm{dom}(\sigma_1) \cup \textrm{dom}(\sigma_2) \\ \sigma_1(c) = \textsf{pri} \implies \sigma(c)=\textsf{pri},\ c\notin\textrm{dom}(\sigma_2) \\ \sigma_2(c) = \textsf{pri} \implies \sigma(c)=\textsf{pri},\ c\notin\textrm{dom}(\sigma_1) \end{array} \right. \] \end{display} We understand this definition as saying: if $\sigma_1$ and $\sigma_2$ are resources separately held by $P$ and $Q$ respectively, then $\sigma$ is \emph{possibly} the resource held by $P|Q$. The subresources $\sigma_i$ do not uniquely determine a combination $\sigma$ because resources public to the subprocesses may, or may not, be private to the combined process.\footnote{ This means that $\Sigma$ with $\parallel$ does not form a separation algebra~\cite{Calcagno2007}; see~\secref{resources}. } Separation crisply captures the desired meaning of public and private ownership: if one subprocess owns a resource privately ($\sigma_1(c) = \textsf{pri}$), then the other subprocess does not own the resource at all ($c \notin\textrm{dom}(\sigma_2)$), but both processes may own a resource publicly. To show that $\ObB{P_1|P_2} = \ObB{P_1} \mbox{$\parallel$} \ObB{P_2}$, we must show that our strategy of interleaving traces from publicly-lifted resources agrees with the global operational semantics.
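The separation relation is directly executable. The sketch below (Python; resources as finite maps from channel names to \textsf{pub}/\textsf{pri}, a representation chosen purely for illustration) checks the three conditions and also builds the public-lifted form $\widehat{\sigma}$:

```python
# Executable sketch of the parallel-separation relation sigma ∈ (sigma1 ∥ sigma2).
# Resources are dicts mapping channel names to "pub" or "pri" (an assumed
# representation, not the paper's notation).

def separates(sigma, s1, s2):
    """True iff sigma is a possible combination of subresources s1 and s2."""
    if set(sigma) != set(s1) | set(s2):
        return False                      # domains must add up exactly
    for sub, other in ((s1, s2), (s2, s1)):
        for c, own in sub.items():
            # private ownership on one side forces privacy in the whole
            # and absence on the other side
            if own == "pri" and (sigma.get(c) != "pri" or c in other):
                return False
    return True

def public_lift(sigma):
    """The lifted resource sigma-hat: same domain, everything public."""
    return {c: "pub" for c in sigma}
```

Note that two public halves may recombine as either \textsf{pub} or \textsf{pri}, so the combination is not uniquely determined, and that $\sigma \in \widehat{\sigma} \parallel \widehat{\sigma}$ holds for any $\sigma$, which is exactly how the invariant between operational and denotational resources gets off the ground.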
A key idea is that $\sigma \in \sigma_1 \parallel \sigma_2$ constitutes an invariant relationship between the resources owned by subprocesses (in the denotational semantics) and those owned by the composite process (in the operational semantics). The invariant holds initially because $\sigma \in \widehat{\sigma} \parallel \widehat{\sigma}$. The unobservability of $\nu c$ steps complicates matters somewhat: it means there is an additional perspective on resources---call it $\sigma_{\rm den}$---owned by a composite process. Generally, $\sigma_{\rm den}$ underestimates the true resources $\sigma$ of the operational semantics. Consider the denotational interleaving of two traces $t_1$ and $t_2$ from subprocesses $P_1$ and $P_2$ respectively. If $P_1$ allocates a channel, that allocation does not appear immediately in $t_1$, and hence does not appear immediately in the resources $\sigma_{\rm den}$ of the interleaving, while it would appear in $\sigma$ operationally. During denotational interleaving, the same channel can even be owned privately in \emph{both} $\sigma_1$ and $\sigma_2$. The key observation here is that either both subprocesses eventually reveal a given private channel---in which case the denotational interleaving is filtered out---or at least one subprocess does not---in which case its choice of channel is irrelevant. 
Altogether, the four resources---$\sigma_{\textrm{op}}$, $\sigma_{\textrm{den}}$, $\sigma_1$, and $\sigma_2$---are always related: \[ \mathcal{I}(\sigma_{\textrm{op}}, \sigma_{\textrm{den}}, \sigma_1, \sigma_2)\ \triangleq\ \sigma_{\textrm{op}} \in \sigma_1\parallel\sigma_2,\ \sigma_{\rm den} = \sigma_{\textrm{op}} \setminus \{ c\ :\ \sigma_1(c) = \textsf{pri} \vee \sigma_2(c) = \textsf{pri} \} \] Validating parallel composition requires another important lemma, \emph{locality} from abstract separation logic~\cite{Calcagno2007}.\footnote{ For simplicity we avoid the order-theoretic definition here, which requires lifting some of our constructions to $2^\Sigma$ in a way that is not otherwise useful. } \begin{lemma}[Locality] If $\sigma \in \sigma_1 \parallel \sigma_2$ then \begin{itemize} \item if $\ASem{\alpha}{\sigma} = \top$ then $\ASem{\alpha}{\sigma_1} = \top$, and \item if $\ASem{\alpha}{\sigma} = \sigma'$ then $\ASem{\alpha}{\sigma_1} = \top$ or $\ASem{\alpha}{\sigma_1} = \sigma'_1$ for some $\sigma'_1$ with $\sigma' \in \sigma'_1 \parallel \sigma_2$. \end{itemize} \end{lemma} The lemma characterizes the transformations an action can make given some composite resources $\sigma$ in terms of its behavior on subresources $\sigma_1$. Providing additional resources can never introduce new faults, and if the action does not fault given just $\sigma_1$ resources, then the changes it makes to $\sigma$ must only change the $\sigma_1$ portion (framing). Locality was introduced to characterize the frame rule of separation logic~\cite{Calcagno2007}, but we use it here to characterize interleaving steps in parallel composition. We have a related lemma for internal communication steps: \begin{lemma}[Communication] If $\sigma \in \sigma_1 \parallel \sigma_2$, $\ASem{\alpha}{\sigma_1} = \sigma'_1$ and $\ASem{\ov{\alpha}}{\sigma_2} = \sigma'_2$ then $\sigma \in \sigma'_1 \parallel \sigma'_2$. 
\end{lemma} We prove each direction of congruence separately: \begin{lemma} If $\mathcal{I}(\sigma_{\textrm{op}}, \sigma_{\textrm{den}}, \sigma_1, \sigma_2)$, $\sigma_i \vdash P_i \checkmark$ and $t \in \Ob{P_1|P_2}{\sigma_{\textrm{op}}}$ then\\ $t \in (t_1\parallel t_2)(\sigma_{\textrm{den}})$ for some $t_i \in \Ob{P_i}{\sigma_i}$. \end{lemma} \begin{lemma} If $\mathcal{I}(\sigma_{\textrm{op}}, \sigma_{\textrm{den}}, \sigma_1, \sigma_2)$, $\sigma_i \vdash P_i \checkmark$, $t_i \in \Ob{P_i}{\sigma_i}$, and \\ $t \in (t_1\parallel t_2)(\sigma_{\textrm{den}})$ then $t \in \Ob{P_1|P_2}{\sigma_{\textrm{op}}}$. \end{lemma} The first of these two lemmas is easier to prove, because we are given a trace $t$ derived from the operational semantics of the composite process. This means that the subprocesses are guaranteed not to independently allocate the same channel. The second lemma requires more care, using the insights mentioned above about renaming unexposed channels. The assumptions $\sigma_i \vdash P_i \checkmark$ are needed to ensure that the processes we are working with do not fault. The reason that faulting is problematic is seen in the following example: \begin{eqnarray*} && \textsf{new } x.(\ov{c}x.0\ |\ c(y).\ov{c}y.\ov{d}c.0),\ [c \mapsto \textsf{pub}] \\ &\rstep{\nu d}\ \ & \ov{c}d.0\ |\ c(y).\ov{c}y.\ov{d}c.0,\ [c \mapsto \textsf{pub}, d \mapsto \textsf{pri}] \\ &\rstep{\tau}\ \ & 0\ |\ \ov{c}d.\ov{d}c.0,\ [c \mapsto \textsf{pub}, d\mapsto\textsf{pri}] \\ &\rstep{c!d}\ \ & 0\ |\ \ov{d}c.0,\ [c \mapsto \textsf{pub}, d\mapsto\textsf{pub}] \\ &\rstep{d!c}\ \ & 0\ |\ 0,\ [c \mapsto \textsf{pub}, d\mapsto\textsf{pub}] \end{eqnarray*} The uncomfortable aspect of this derivation is that the channel $d$ occurred in the process initially, even though it was not owned. As a result, the process was able to \emph{allocate} $d$, in a sense falsely capturing the constant $d$ that initially appeared.
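This derivation can be traced mechanically. The Python sketch below folds an assumed action interpretation over action sequences (allocation makes the channel private, a $\tau$ step leaves resources unchanged, sending publicizes the payload, and communicating through or sending a channel outside the owned domain faults; these clauses are our reading of the examples, not definitions from the paper):

```python
# Tracing resource evolution along an action sequence, under an assumed
# action interpretation.

FAULT = "fault"

def interp(action, sigma):
    kind = action[0]
    if kind == "tau":
        return dict(sigma)                # internal steps leave resources unchanged
    if kind == "nu":
        c = action[1]
        return None if c in sigma else {**sigma, c: "pri"}
    c, d = action[1], action[2]           # send c!d
    if c not in sigma or d not in sigma:
        return FAULT                      # a channel the process does not own
    if sigma[c] == "pri":
        return None                       # no external communication on a private channel
    return {**sigma, d: "pub"}            # the payload becomes public

def run(trace, sigma):
    """Fold a trace of actions over an initial resource map."""
    for action in trace:
        sigma = interp(action, sigma)
        if sigma is FAULT or sigma is None:
            return sigma
    return sigma
```

The ``lucky'' run above completes with both channels public, while a run in which the allocator picks a fresh $e$ instead faults at the final send along the unowned constant $d$.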
In cases where the process allocates a different channel than $d$, it will fault when it attempts to communicate along the constant channel $d$. But in this ``lucky'' case, the operational semantics allows communication along the constant channel. The denotational semantics, however, \emph{always} generates a fault. It computes the traces compositionally, meaning that a channel $d$ allocated by one subprocess is not immediately available for use by a parallel subprocess. Our full abstraction result applies only to nonfaulty processes, which, fortunately, is a trivial syntactic check. However, this does limit its applicability to languages that include features like deallocation, which makes checking for safety more difficult. \subsection{Full abstraction} To complete the proof of full abstraction, we must deal with recursion. We begin with the usual unwinding lemma, proved in the standard syntactic way: \begin{lemma}[Unwinding] We have $\ObB{\textsf{rec } X.P} = \bigsqcup_n \ObB{\textsf{rec}_n X.P}$, where $\textsf{rec}_0 X.P \triangleq \textsf{rec } X. X$ and $\textsf{rec}_{n+1} X.P \triangleq P\{\textsf{rec}_n X.P/X\}$. \end{lemma} We also have the standard substitution lemmas: \begin{lemma}[Substitution] We have $\Sem{P\{Q/X\}}{\rho} = \Sem{P}{\rho[X \mapsto \Sem{Q}{\rho}]}$ and\\ $\Sem{P\{c/x\}}{\rho} = \Sem{P}{\rho[x \mapsto c]}$. \end{lemma} \noindent Combining these lemmas with the previous congruence results, it is straightforward to show the following theorem relating the observed operational traces to those calculated denotationally: \begin{theorem}[Congruence] If $P$ is closed and $\sigma \proves P \checkmark$ then $\Ob{P}{\sigma} = \Sem{P}{\emptyset}\sigma$. \end{theorem} \noindent To prove this theorem, we must generalize it to deal with open terms. We do this by introducing a \emph{syntactic environment} $\eta$ as a finite map taking channel variables to channels and process variables to closed processes.
Given a syntactic environment $\eta$ the corresponding semantic environment $\widehat{\eta}$ is given by: \[ (\widehat{\eta})(x)\ \triangleq\ \eta(x) \qquad (\widehat{\eta})(X)\ \triangleq\ \ObB{\eta(X)} \] We write $\eta P$ for the application of $\eta$ as a syntactic substitution on $P$. The needed induction hypothesis for congruence is then \begin{center} if $\sigma \proves \eta P \checkmark$ then $\Ob{\eta P}{\sigma} = \Sem{P}{\widehat{\eta}}\sigma$. \end{center} Define $P =_{\textsc{Den}} Q$ iff $\Sem{P}{\rho}\sigma = \Sem{Q}{\rho}\sigma$ for all $\rho$ and $\sigma$ such that $\sigma \proves P\checkmark$ and $\sigma \proves Q\checkmark$. Likewise, let $P =_{\textsc{Op}} Q$ iff $\ObB{C[P]}\sigma = \ObB{C[Q]}\sigma$ for all contexts~$C$ and resources $\sigma$ with $\sigma \proves C[P]\checkmark$ and $\sigma \proves C[Q]\checkmark$. Full abstraction follows by compositionality: \begin{theorem}[Full abstraction] $P =_{\textsc{Den}} Q$ iff $P =_{\textsc{Op}} Q$. \end{theorem} \section{Denotational semantics: adding liveness} \label{sec:liveness} To round out our study of the $\pi$-calculus, we must account for liveness properties. Liveness in process algebra appears under diverse guises, differing in sensitivity to branching behavior and divergence~\cite{Glabbeek1988}. Each account of liveness corresponds to some choice of basic observable: given a process $P$ and a context $C$, what behavior of $C[P]$ matters? The standard observable for the $\pi$-calculus is barbed bisimilarity~\cite{barbed}, which sits quite far on the branching side of the linear-branching time spectrum~\cite{Glabbeek1988}. Here, we choose a treatment more in the spirit of linear time: an adaptation of acceptance traces~\cite{Hennessy2002}. This choice is partly a matter of taste, but it also allows us to stick with a purely trace-theoretic semantics, which keeps the domain theory to a minimum. We do not see any immediate obstacles to applying our resource-based handling of names to a branching-time semantics.
Branching sensitivity and resource-sensitivity seem largely orthogonal, though of course branches may be pruned when deemed impossible given the owned resources. \subsection{Liveness observables} We say that a process \emph{diverges} if it \emph{can} perform an infinite sequence of unobservable (\emph{i.e.}, internal) steps without any intervening interactions with its environment---which is to say, the process can livelock. On the other hand, a process that can make \emph{no} further unobservable steps is blocked (waiting for interaction from its environment). The basic observables in our model are: \begin{itemize} \item A finite sequence of interactions, after which the process diverges or faults; \item A finite sequence of interactions, after which the process is blocked, along with which channels it is blocked on; and \item An infinite sequence of interactions. \end{itemize} \noindent Notice that we have conflated divergence and faulting: we view both as erroneous behavior. In particular, we view any processes that are capable of immediately diverging or faulting as equivalent, regardless of their other potential behavior. This perspective is reasonable---meaning that it yields a congruence---because such behavior is effectively uncontrollable. For example, if $P$ can immediately diverge, so can $P|Q$ for any $Q$. 
Formally, we add a new action $\delta_\Delta$ which records that a process is blocked attempting communication along the finite set of \emph{directions} $\Delta$: \[ \alpha\ ::=\ \cdots \ \ |\ \ \delta_\Delta \qquad \Delta \subseteq_\textrm{fin} \textsc{Dir} \triangleq \{ c!\ :\ c\in\textsc{Chan} \} \cup \{ c?\ :\ c\in\textsc{Chan} \} \] We then define \[ \textsc{LTrace}\ \triangleq\ \textsc{NTAction}^*;\{ \lightning, \delta_\Delta\}\ \cup\ \textsc{NTAction}^\omega \qquad \textsc{LBeh}\ \triangleq\ \Sigma \rightarrow 2^{\textsc{LTrace}} \] where \textsc{NTAction} (for ``non-terminating action'') refers to all actions except for $\lightning$ or blocking actions $\delta_\Delta$. Thus finite liveness traces must end with either a $\delta_\Delta$ action or a $\lightning$ action, whereas neither of these actions can appear in an infinite trace. Each liveness trace encompasses a \emph{complete} behavior of the process: either the process continues interacting indefinitely, yielding an infinite trace, or diverges, faults, or gets stuck after a finite sequence of interactions. Therefore, sets of liveness traces are not prefix-closed. As with the safety traces, we can observe liveness traces from the operational semantics. However, we do so using the \emph{greatest} fixpoint of the following rules: \begin{display}[$\LObB{P} : \textsc{LBeh}$]{Liveness observation} \[ \begin{array}{c} \infer[\tr{LOStep}] {P,\sigma \rstep{\alpha }P',\sigma' \\\\ \alpha \neq \lightning \\ t \in \LOb{P'}{\sigma'}} {|\alpha|_\sigma t \in \LOb{P}{\sigma}} \textrm{gfp} \qquad \infer[\tr{LOFault}] {P,\sigma \rstep{\lightning}} {\lightning \in \LOb{P}{\sigma}} \textrm{gfp} \qquad \infer[\tr{LOBlocked}] {P,\sigma \ \textrm{blocked}\ \Delta} {\delta_\Delta \in \LOb{P}{\sigma}} \textrm{gfp} \end{array} \] \end{display} where $P, \sigma \ \textrm{blocked}\ \Delta$ means that $P, \sigma$ can only take communication steps, and $\Delta$ contains precisely the directions of available communication.
Since the owned resources influence which communications are possible, they also influence the directions on which a process is blocked: \[ \delta_{\{c!\}} \in \LOb{\ov{c}c.0}{[c \mapsto \textsf{pub}]} \qquad \delta_\emptyset \in \LOb{\ov{c}c.0}{[c \mapsto \textsf{pri}]} \] The action $\delta_\emptyset$ reflects a completely deadlocked process, and is for example the sole trace of the inert process $0$. Defining the observations via a greatest fixpoint allows for infinite traces to be observed, but also means that if a process diverges after a trace $t$, its behavior will contain all traces $tu$, in particular $t\lightning$. For example, suppose $P,\sigma \rstep{\tau} P,\sigma$. If $t$ is any liveness trace whatsoever, we can use the first inference rule to show, coinductively, that $t \in \LOb{P}{\sigma}$. We merely assume that $t \in \LOb{P}{\sigma}$, and derive that $|\tau|_\sigma t = t \in \LOb{P}{\sigma}$. Thus, divergence is ``catastrophic'' (as in failures/divergences~\cite{Brookes1984}). An important step toward making these observables coherent is the notion of \emph{refinement}. In general, saying that $P$ refines $Q$ (or $P$ ``implements'' $Q$) is to say that every behavior of $P$ is a possible behavior of $Q$. In other words, $P$ is a more deterministic version of $Q$. We define a refinement order on traces: \[ t \sqsubseteq t \qquad t\delta_\Delta \sqsubseteq t\delta_{\Delta'}\ \textrm{if}\ \Delta' \subseteq \Delta \qquad tu \sqsubseteq t\lightning \] which we lift to sets of traces as: $T \sqsubseteq U$ iff $\forall t \in T.\ \exists u\in U.\ t \sqsubseteq u$. This notion of refinement, which closely follows that of acceptance traces~\cite{Hennessy2002}, says that an implementation must allow at least the external choices that its specification does. It also treats faulting as the most permissive specification: if $Q$ faults, then any $P$ will refine $Q$. Moreover, any two immediately-faulting processes are equivalent. 
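The refinement order on traces is straightforward to render executably. In the Python sketch below, traces are tuples, a blocking action is a pair of the tag \textsf{delta} and a finite set of directions, and faulting is a one-element action; the representation is an assumption for illustration only.

```python
# Sketch of the refinement order on liveness traces: t ⊑ t;
# t·delta_D ⊑ t·delta_D' when D' ⊆ D; and t·u ⊑ t·fault.

FAULTED = ("fault",)

def refines(t, u):
    """True iff trace t refines trace u."""
    if t == u:
        return True
    if u and u[-1] == FAULTED:
        return t[: len(u) - 1] == u[:-1]   # any extension of the prefix refines a fault
    if (t and u and t[-1][0] == "delta" and u[-1][0] == "delta"
            and t[:-1] == u[:-1]):
        return u[-1][1] <= t[-1][1]        # the implementation offers at least Delta'
    return False

def set_refines(T, U):
    """Lift refinement to trace sets: every t is matched by some u."""
    return all(any(refines(t, u) for u in U) for t in T)
```

In particular, a deadlocked $\delta_\emptyset$ does not refine a specification that promises $\delta_{\{c!\}}$, while any trace extending $t$ refines $t\lightning$, making faulting the most permissive specification.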
Since faulting and divergence are treated identically, the same holds for divergent processes. Thus, the simple refinement ordering on traces has an effect quite similar to the closure conditions imposed in failures/divergences semantics. The ordering on trace sets inherits the complete lattice structure of $2^{\textsc{LTrace}}$, as does the pointwise order on \textsc{LBeh}. We again exploit this fact when interpreting recursion. \subsection{Liveness semantics} To complete the semantic story, we need to interpret blocking actions. We define \begin{eqnarray*} \ASem{\delta_\Delta}{\sigma} &\triangleq& \begin{cases} \top & \exists c.\ (c! \in \Delta \vee c? \in \Delta) \wedge c \notin\textrm{dom}(\sigma) \\ \sigma & \textrm{otherwise} \end{cases} \\ |\delta_\Delta|_\sigma &\triangleq& \delta_{\Delta'} \ \textrm{where}\ \Delta' = \Delta\upharpoonright \{c\ :\ \sigma(c) = \textsf{pub} \} \end{eqnarray*} which shows the interaction between resources and blocking: blocking on a private resource is possible, but unobservable (\emph{cf.} projection on $\delta$ in~\cite{Brookes2002}). For example, we have \[ \begin{array}{c@{\qquad}c} \ASem{\delta_{\{c!\}}}{[c \mapsto \textsf{pub}]} = [c \mapsto \textsf{pub}] & |\delta_{\{c!\}}|_{[c \mapsto \textsf{pub}]} = \delta_{\{c!\}} \\ \ASem{\delta_{\{c!\}}}{[c \mapsto \textsf{pri}]} = [c \mapsto \textsf{pri}] & |\delta_{\{c!\}}|_{[c \mapsto \textsf{pri}]} = \delta_\emptyset \end{array} \] The denotational semantics for liveness, $\LSemB{-}$, is largely the same as that for safety, except for the following clauses: \begin{eqnarray*} \LSem{\textsf{rec } X.P}{\rho} &\triangleq&\ \nu B. \LSem{P}{\rho[X \mapsto B]} \\ \LSem{\sum \pi_i. P_i}{\rho} &\triangleq& \left(\bigsqcup_i \LSem{\pi_i. P_i}{\rho} \right) \sqcup \left(\delta_{\{\textrm{dir}(\rho\pi_i)\ :\ i\}} \triangleright \lambda \sigma.\emptyset \right) \end{eqnarray*} Recursion is given by a greatest fixpoint, as expected.
A summation of prefixed actions now generates a corresponding blocking set, recording the external choice (where $\textrm{dir}$ extracts the direction of a prefix). The blocking action is ``executed'' using the prefixing operator $\triangleright$ so that the actual observed action corresponds to the available resources, as in the example above. Finally, we use the following definition of interleaving: \[ \begin{array}{rcll} t \parallel u &\triangleq_{\textrm{gfp}}& \alpha \triangleright (t' \parallel u) &\textrm{if } t = \alpha t',\ \alpha \ \textrm{not blocking} \\&\sqcup& \alpha \triangleright (t \parallel u') &\textrm{if } u = \alpha u',\ \alpha \ \textrm{not blocking} \\&\sqcup& \delta_{\Delta \cup \Delta'} &\textrm{if } t = \delta_\Delta,\ u = \delta_{\Delta'},\ \ov{\Delta} \pitchfork \Delta' \\&\sqcup& t' \parallel u' &\textrm{if } t = \alpha t',\ u = \ov{\alpha} u' \end{array} \] Liveness interleaving is given by a greatest fixpoint. An infinite sequence of internal communications (operationally, an infinite sequence of $\tau$ moves) therefore yields \emph{all} possible traces, including faulting ones, as it should. An interleaved trace is blocked only when both underlying traces are, and only when they do not block in opposite directions ($\ov{\Delta}$ is $\Delta$ with directions reversed, and $\pitchfork$ denotes empty intersection). If two processes are blocked in opposite directions, then their parallel composition is in fact \emph{not} blocked, since they are willing to communicate with each other (\emph{cf.}\ stability~\cite{Brookes1984}). \subsection{Full abstraction} The proof of full abstraction is structured similarly to the proof for the safety semantics. Congruence proofs must take into account blocking actions, which is straightforward in all cases except for parallel composition. There, we require a lemma: \begin{lemma}[Blocking congruence] Suppose $\mathcal{I}(\sigma_{\textrm{op}}, \sigma_{\textrm{den}}, \sigma_1, \sigma_2)$.
Then \begin{itemize} \item If $\delta_{\Delta_i} \in \LOb{P_i}{\sigma_i}$ and $\Delta_1 \pitchfork \ov{\Delta_2}$ then $|\delta_{\Delta_1 \cup \Delta_2}|_{\sigma_{\textrm{den}}} \in \LOb{P_1|P_2}{\sigma_{\textrm{op}}}$. \item If $\delta_\Delta \in \LOb{P_1|P_2}{\sigma_{\textrm{op}}}$ then $\delta_{\Delta_i} \in \LOb{P_i}{\sigma_i}$ for some $\Delta_1$, $\Delta_2$ with $\Delta_1 \pitchfork \ov{\Delta_2}$ and $|\delta_{\Delta_1 \cup \Delta_2}|_{\sigma_{\textrm{den}}} = \delta_\Delta$. \end{itemize} \end{lemma} Defining $=_{\textsc{LDen}}$ and $=_{\textsc{LOp}}$ analogously to the safety semantics, we again have full abstraction: \begin{theorem}[Full abstraction] $P =_{\textsc{LDen}} Q$ iff $P =_{\textsc{LOp}} Q$. \end{theorem} \section{Logic} \label{sec:logic} We now sketch a logic for reasoning about the safety semantics of processes. The logic proves \emph{refinement} between open processes---denotationally, trace containment; operationally, contextual approximation. The refinements are qualified by assertions about owned resources, which is what makes the logic interesting. The basic judgment of the logic is $\Gamma \proves p \blacktriangleright P \sqsubseteq Q$, which says the traces of $P$ are traces of $Q$, as long as the initial resources and environment, respectively, satisfy assertions $p$ and $\Gamma$ (defined below). Resource assertions $p$ are as follows: \[ p\ ::=\ \textsf{true} \ \ |\ \ \textsf{false} \ \ |\ \ p \wedge q \ \ |\ \ p \vee q \ \ |\ \ p * q \ \ |\ \ x\ \textsf{pub} \ \ |\ \ x\ \textsf{pri} \ \ |\ \ x=y \ \ |\ \ x \neq y \] and we let $x\ \textsf{known} \triangleq x\ \textsf{pub} \vee x\ \textsf{pri}$. Satisfaction of assertions depends on both the environment and resources, as in these illustrative cases: \[ \begin{array}{lcl} \rho, \sigma \models x \ \textsf{pub} &\triangleq& \sigma(\rho(x)) = \textsf{pub} \\ \rho, \sigma \models p_1 * p_2 &\triangleq& \exists \sigma_1,\sigma_2. 
\sigma = \sigma_1 \uplus \sigma_2 \textrm{ and } \rho, \sigma_i \models p_i \end{array} \] Resource assertions like $x\ \textsf{pub}$ are intuitionistic~\cite{Reynolds2002}; without deallocation there is no reason to use the classical reading, which can assert nonownership. We are using the standard interpretation of separation logic's $*$ as disjoint separation to enable \emph{sequential} reasoning about resource transformers in our logic. Action interpretations $\ASemB{\alpha}$ are local with respect to $*$, just as they were for $\parallel$. Environment assertions $\Gamma$ constrain process variables: \[ \begin{array}{c} \Gamma\ ::=\ \emptyset \ \ |\ \ \Gamma,\ (p \blacktriangleright X \sqsubseteq P) \\ \rho \models (p \blacktriangleright X \sqsubseteq P) \ \ \triangleq\ \ \forall \sigma.\ (\rho, \sigma \models p) \implies \rho(X)(\sigma) \subseteq \Sem{P}{\rho}{\sigma} \end{array} \] The definition of entailment is thus: \[ \Gamma \models p \blacktriangleright P \sqsubseteq Q\ \ \triangleq\ \ \forall \rho, \sigma.\ (\rho \models \Gamma\ \wedge\ \rho, \sigma \models p) \implies \Sem{P}{\rho}\sigma \subseteq \Sem{Q}{\rho}\sigma \] By qualifying refinements by resource assertions we can incorporate Hoare logic-like reasoning. Take, for example, the rule \[ \infer {\Gamma \proves p * (x\ \textsf{pub} \wedge y\ \textsf{pub}) \blacktriangleright P \sqsubseteq Q} {\Gamma \proves p * (x\ \textsf{pub} \wedge y\ \textsf{known}) \blacktriangleright \ov{x}y.P \sqsubseteq \ov{x}y.Q} \] for sending over a public channel. It is a kind of congruence rule, but we shift resource assumptions for the subprocesses, corresponding to the Hoare triple \[ \{ p * (x\ \textsf{pub} \wedge y\ \textsf{known}) \}\ \ov{x}y\ \{ p * (x\ \textsf{pub} \wedge y\ \textsf{pub}) \} \] The syntactic structure of prefixes (rather than sequential composition) prevents a clean formulation of the logic using Hoare triples. 
This is why the frame $p$ is included, rather than added via a separate frame rule; we are using ``large'' rather than ``small'' axioms~\cite{OHearn2001}. A better treatment is possible if we semantically interpret prefixing as sequential composition, which requires a variables-as-resources model~\cite{Parkinson}. For sending over a private channel, we have an axiom: $\ov{x}y.P$ refines \emph{any} process when $x$ is private, because $\ov{x}y.P$ is stuck. The corresponding Hoare triple is $\{ x\ \textsf{pri} \wedge y\ \textsf{known}\} \ \ov{x}y\ \{\textsf{false}\}$. Here is a fragment of the logic, focusing on resource-sensitive rules: \begin{display}[$\Gamma \proves p \blacktriangleright P \sqsubseteq Q$] {A selection of logical rules for safety behavior} \[ \infer {\Gamma \proves p * (x\ \textsf{pub} \wedge y\ \textsf{pub}) \blacktriangleright P \sqsubseteq Q} {\Gamma \proves p * (x\ \textsf{pub} \wedge y\ \textsf{known}) \blacktriangleright \ov{x}y.P \sqsubseteq \ov{x}y.Q} \quad \infer { } {\Gamma\proves x\ \textsf{pri} \wedge y\ \textsf{known} \blacktriangleright \ov{x}y.P \sqsubseteq Q} \] \[ \infer {\Gamma \proves (p * x\ \textsf{pub}) \wedge y\ \textsf{pub} \blacktriangleright P \sqsubseteq Q \\ y \notin \textrm{fv}(p, \Gamma)} {\Gamma \proves p * x\ \textsf{pub} \blacktriangleright x(y).P \sqsubseteq x(y).Q} \quad \infer { } {\Gamma \proves x\ \textsf{pri} \blacktriangleright x(y).P \sqsubseteq Q} \] \[ \infer {\Gamma \proves p * x\ \textsf{pri} \blacktriangleright P \sqsubseteq Q \\ x \notin \textrm{fv}(p, \Gamma)} {\Gamma \proves p \blacktriangleright \textsf{new } x.P \sqsubseteq \textsf{new } x.Q} \qquad \infer {\Gamma \proves \widehat{p} \blacktriangleright P_i \sqsubseteq Q_i} {\Gamma \proves p \blacktriangleright P_1|P_2 \sqsubseteq Q_1|Q_2} \] \[ \infer {p \blacktriangleright X \sqsubseteq P \in \Gamma } {\Gamma \proves p \blacktriangleright X \sqsubseteq P} \qquad \infer {\Gamma, p \blacktriangleright X \sqsubseteq Q \proves p \blacktriangleright P 
\sqsubseteq Q} {\Gamma \proves p \blacktriangleright \textsf{rec } X. P \sqsubseteq Q} \qquad \infer {p \models p' \\ \Gamma \proves p' \blacktriangleright P \sqsubseteq Q} {\Gamma \proves p \blacktriangleright P \sqsubseteq Q} \] \end{display} The congruence rule for parallel composition performs public-lifting $\widehat{p}$ on resource assertions (by replacing $\textsf{pri}$ by $\textsf{pub}$ in the assertion). Fixpoint induction is resource-qualified as well. We reason about the body $P$ of a recursive definition $\textsf{rec } X.P$ using a hypothetical bound on $X$ as the induction hypothesis. That hypothesis, however, is only applicable under the \emph{same} resource assumptions $p$ that were present when it was introduced---making $p$ the loop invariant. In addition to these resource-sensitive rules, we have the usual laws of process algebra, including the expansion law. Combining those laws with the ones we have shown, we can derive an \emph{interference-free} expansion law, as in this simplified version: $ \Gamma \proves x\ \textsf{pri} \wedge y\ \textsf{known} \blacktriangleright \ov{x}y.P | x(z).Q \equiv P | Q\{y/z\} $. \section{Discussion} \subsection{Future work: richer resources} \label{sec:resources} Our resource model captures exactly the guarantees provided by the $\pi$-calculus: until a channel is exposed, it is unavailable to the environment; afterwards, all bets are off. This property is reflected in the fact that $\Sigma$ is not a separation algebra, since $c\ \textsf{pub} \parallel c\ \textsf{pub}$ can result in $c\ \textsf{pub}$ or $c\ \textsf{pri}$. No amount of public ownership adds up definitively to private ownership. Rather than using resources to model the guarantees of a language, we can instead use them to enforce guarantees we intend of programs, putting ownership ``in the eye of the asserter''~\cite{O'Hearn2007}. 
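To make the failure of additivity concrete, here is a small illustrative sketch in Python (ours, not part of the paper's formalism; in particular, treating any combination involving a \textsf{pri} claim as undefined is an assumption we make for this sketch, since privacy is exclusive):

```python
# Illustrative sketch only: combining two claims on the SAME channel in
# the pi-calculus resource model.  Combining two 'pub' claims is
# nondeterministic -- the result may be 'pub' or 'pri' -- which is why
# the model Sigma is not a separation algebra.

def combine(claim1, claim2):
    """Return the set of possible results of claim1 || claim2,
    where each claim is 'pub' or 'pri'.  An empty set means the
    combination is undefined."""
    if claim1 == 'pri' or claim2 == 'pri':
        # Assumption made for this sketch: a private claim excludes
        # any other claim on the same channel.
        return set()
    # pub || pub: the composed system may now hold every reference to
    # the channel (recovering privacy), or the environment may still
    # know it -- so both outcomes are possible.
    return {'pub', 'pri'}
```

Two \textsf{pub} claims combine to a \emph{set} of possibilities, never definitively to $\lbrace \textsf{pri} \rbrace$: no amount of public ownership adds up to private ownership.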
We can then recover privacy just as Boyland showed~\cite{Boyland2003} how to recover write permissions from read permissions: via a fractional model of ownership: $ \Sigma_{\textsc{Frac}} \triangleq \textsc{Chan} \rightarrow [0, 1] $. Unlike traditional fractional permissions, owning a proper fraction of a channel does not limit what can be done with the channel---instead, it means that the environment is \emph{also} allowed to communicate on the channel. The fractional model yields a separation algebra, using (bounded) summation for resource addition. An easy extension is distinguishing send and receive permissions, so that interference can be ruled out in a direction-specific way. One can also imagine encoding a session-type discipline~\cite{Honda1998} as a kind of resource: $\Sigma_\textsc{Sess} \triangleq \textsc{Chan} \rightharpoonup \textsc{Session}$ where \[ s \in \textsc{Session}\ ::=\ \ell.s \oplus \ell.s \ \ |\ \ \ell.s\ \&\ \ell.s \ \ |\ \ !.s \ \ |\ \ ?.s \ \ |\ \ \textsf{end} \] Separation of session resources corresponds to matching up dual sessions, and actions work by consuming the appropriate part of the session. Ultimately, such resource models could yield rely-guarantee reasoning for the $\pi$-calculus, borrowing ideas from deny-guarantee~\cite{Dodds2009}. A challenge for using these models is managing the ownership protocol in a logic: how are resources consistently attached to channels, and how are resources split when reasoning about parallel composition? We are far from a complete story, but believe our semantics and logic can serve as a foundation for work in this direction. \subsection{Related work} \label{sec:related} Hoare and O'Hearn's work~\cite{Hoare2008} introduced the idea of connecting the model theory of separation logic with the $\pi$-calculus, and provided the impetus for the work presented here. Their work stopped short of the full $\pi$-calculus, modelling only point-to-point communication and only safety properties. 
Our liveness semantics, full abstraction results, and refinement calculus fill out the rest of the story, and they all rely on our new resource model. In addition, our semantics has clearer connections to both Brookes's action trace model~\cite{Brookes2002} and abstract separation logic~\cite{Calcagno2007}. Previous fully abstract models of the $\pi$-calculus are based on functor categories~\cite{Stark2002,Hennessy2002,Fiore2002}, faithfully capturing the traditional role of scope for privacy in the $\pi$-calculus. Those models exploit general, abstract accounts of recursion, nondeterminism, names and scoping in a category-theoretic setting. We have similarly sought connections with a general framework, but have chosen resources, separation and locality as our foundation. An immediate question is: why do we get away with so much less mathematical scaffolding? This question is particularly pertinent in the comparison with Hennessy's work~\cite{Hennessy2002}, which uses a very similar notion of observation. Hennessy's full abstraction result is proved by extracting, from his functor-categorical semantics, a set of acceptance traces, and showing that this extraction is injective and order preserving. The force of this ``internal full abstraction'' is that the functor-categorical meaning of processes is completely determined by the corresponding acceptance traces. But note, these traces are \emph{not} given directly via a compositional semantics: they are extracted only after the compositional, functor-categorical semantics has been applied. What we have shown, in a sense, is that something like acceptance traces for a process can be calculated directly, and compositionally, from process syntax. Beyond providing a new perspective on the $\pi$-calculus, we believe the resource-oriented approach will yield new reasoning techniques, as argued above. We have also emphasized concreteness, giving an elementary model theory based on sets of traces. 
Finally, it is worth noting that substructural type systems have been used to derive strong properties (like confluence) in the $\pi$-calculus~\cite{Kobayashi1999}, just as we derived interference-free expansion. Here, we have used a resource theory to explain the $\pi$-calculus as it is, rather than to enforce additional discipline. But the ideas of~\secref{resources} take us very much into the territory of discipline enforcement. More work is needed to see what that territory looks like for the resource-based approach. \paragraph{Acknowledgements} We are grateful to Paul Stansifer and Tony Garnock-Jones for feedback on drafts of this paper, and to the anonymous reviewers who provided guidance on presentation.
\section{Introduction} Belief revision is the study of how an intelligent agent may replace its current epistemic state by another one which is non-trivial and incorporates new information. In \cite{AlchourronGardenforsMakinson1}, the well-known AGM approach was proposed. An epistemic state is modelled there by a deductively closed set of formulas $K$ and new information by a formula $\alpha$. A revision operator is then a function that transforms $K$ and $\alpha$ into a new set of formulas (intuitively, the revised epistemic state). One of the contributions of the AGM approach is that it provides well-known postulates that any reasonable revision operator should satisfy. These postulates have been defended by their authors. But, doubts have been expressed as to their ``soundness'', e.g.~\cite{KatsunoMendelzon1}, and especially ``completeness'', e.g.~\cite{FreundLehmann1}, \cite{DarwichePearl1}, \cite{Lehmann3}, and \cite{DarwichePearl2}. In particular, to be accepted, an operator is never required to ensure any coherence between the revisions of two different sets $K$ and $K'$. As a consequence, some operators are accepted though they are not well-behaved when iterated. In addition, modelling an epistemic state by just a deductively closed set of formulas has been rejected by many researchers, e.g. \cite{BoutilierGoldszmidt1}, \cite{Boutilier1}, \cite{DarwichePearl2}, \cite{Williams1}, and \cite{NayakFooPagnuccoSattar1}. In \cite{Lehmann3} and \cite{FriedmanHalpern1}, it is argued that this modelling is not sufficient in many AI applications. This provides motivation for another approach, based on distances between any two valuations, introduced in \cite{LehmannMagidorSchlechta2} and investigated further in \cite{LehmannMagidorSchlechta1}. Their approach is in line with the AGM modelling of an epistemic state, but it defines well-behaved iterated revisions. More precisely, suppose we have at our disposal a distance $\cal D$ between any two valuations.
This defines an operator $|_{\cal D}$, called a {\it distance operator}, which transforms any ordered pair $(V,W)$ of sets of valuations into the set $V |_{\cal D} W$ of all those elements of $W$ that are closest to $V$ according to ${\cal D}$. This operator $|_{\cal D}$ naturally defines the revision of $K$ by $\alpha$ as the set of all formulas satisfied in $\M{K} |_{\cal D} \M{\alpha}$ (i.e. the set of all those models of $\alpha$ that are closest to the models of $K$). This constitutes a {\it distance-based revision operator}, which is appealing both for its naturalness and because it is well-behaved when iterated. This is due to the fact that the revisions of the different $K$'s are all defined by the same distance, which ensures a strong coherence between them. Note that this is not the case with other definitions. For instance, with sphere systems \cite{Grove1} and epistemic entrenchment relations \cite{GardenforsMakinson1}, the revision of each $K$ is defined by a different structure without any ``glue'' relating them. In \cite{LehmannMagidorSchlechta1}, several families of distance-based revision operators were characterized by the AGM postulates together with new ones that deal with iterated revisions. However, the latter postulates include a ``loop'' condition of arbitrarily big size. An interesting question is whether it can be replaced by a finite condition. Elements of a negative answer were provided in \cite{Schlechta5}. Roughly, Schlechta calls normal a characterization containing only conditions which are finite, universally quantified (like e.g. the AGM postulates), and simple (i.e. using only elementary operations like e.g. $\cup$, $\cap$, $\setminus$). Then, he showed that for families of distance operators, there is no normal characterization. Now, there is a strong connection between the distance operators (which apply to valuations) and the distance-based revision operators (which apply to formulas).
It is quite reasonable to think that the work of Schlechta can be continued to show that for families of distance-based revision operators, there is no normal characterization either. The families investigated in \cite{LehmannMagidorSchlechta1} might well be concerned with this, which suggests that the arbitrarily big loop condition cannot be replaced by a finite, universally quantified, and simple condition. The contribution of the present paper is to extend the work of Schlechta in two directions. First, we will use the word ``normal'' in a larger sense. Indeed, we will call normal a characterization containing only conditions which are finite and universally quantified, but not necessarily simple (i.e. the conditions can involve complex structures or functions, etc., we are not limited to elementary operations). Then, we will show that the families which Schlechta investigated still do not admit a normal characterization, in our larger sense. This is therefore a generalization of his negative results. Second, we will extend the negative results (always in our sense) to new families of distance operators, in particular to some that respect the Hamming distance. We are quite confident that the present work can be continued, like the work of Schlechta, to show that for families of distance-based revision operators, there is no normal characterization either. But, we will cover more families and with a more general definition of a normal characterization. This is the main motivation. In addition, the impossibility results of the present paper already help to understand more clearly the limits of what is possible in this area. They have therefore an interest of their own. First, we will present the distance-based revision and the characterizations of Lehmann {\it et al.} Second, we will define formally the normal characterizations. Third, we will show the impossibility results. And finally, we will conclude. 
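Before turning to the formal background, the distance operator described in the introduction can be illustrated computationally. The following Python fragment is our own finite illustration, not part of the formalism: valuations are encoded as bit-tuples, and the pseudo-distance is the Hamming count (the names \texttt{hamming} and \texttt{distance\_op} are ours):

```python
from itertools import product

def hamming(v, w):
    # Number of positions (atoms) on which the two valuations differ.
    return sum(1 for a, b in zip(v, w) if a != b)

def distance_op(V, W, d):
    """A sketch of V |_D W: the elements of W closest to V, i.e. those
    w attaining the minimal cost d(v, w) over all pairs in V x W."""
    if not V or not W:
        return set()
    best = min(d(v, w) for v, w in product(V, W))
    return {w for w in W if any(d(v, w) == best for v in V)}
```

For instance, \texttt{distance\_op(\{(0,0,0)\}, \{(0,0,1), (1,1,1)\}, hamming)} returns \texttt{\{(0,0,1)\}}. Note also that with a positive pseudo-distance satisfying $d(v, w) = 0$ iff $v = w$, the operator returns exactly $V \cap W$ whenever $V$ and $W$ intersect, mirroring the AGM postulate stating that revision reduces to conjunction when the two sets are jointly consistent.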
\section{Background}\label{DISbackground} \subsection{Pseudo-distances}\label{DISpseudodist} In many circumstances, it is reasonable to assume that an agent can evaluate, for any two valuations $v$ and $w$, how far the situation described by $w$ is from the situation described by $v$, or how difficult or unexpected the transition from $v$ to $w$ is, etc. In \cite{LehmannMagidorSchlechta1}, this is modelled by pseudo-distances: \begin{definition}\label{DISdiststruc} Let $\cal V$ be a set. \\ ${\cal D}$ is a {\it pseudo-distance} on $\cal V$ iff ${\cal D} = \langle {C}, \prec, d \rangle$, where ${C}$ is a non-empty set, $\prec$ is a strict total order on ${C}$, and $d$ is a function from ${\cal V} \times {\cal V}$ to ${C}$. \end{definition} Intuitively, $\cal V$ is a set of valuations. Each element of ${C}$ represents a ``cost''. $c \prec c'$ means the cost $c$ is strictly smaller than the cost $c'$. And, $d(v, w)$ is the cost of the move from $v$ to $w$. Natural properties that come to mind are those of usual distances. Before introducing them, we need standard notations: \begin{notation} $\cal P$ denotes the power set operator. \\ For every set $S$, $|S|$ denotes the cardinality of $S$. \\ $\mathbb{N}$, $\mathbb{N}^{+}$, $\mathbb{R}$, and $\mathbb{R}^+$ denote respectively the natural, positive natural, real, and positive real numbers. \\ Let $r \in \mathbb{R}$. Then, $abs(r)$ denotes the absolute value of~$r$. \\ Let $n, m \in \mathbb{N}$. Then, $[n, m]$ denotes the set of every $k$ in $\mathbb{N}$ (not in $\mathbb{R}$) such that $n \leq k \leq m$. \end{notation} \begin{definition} Suppose ${\cal D} = \langle {C}, \prec, d \rangle$ is a pseudo-distance on a set $\cal V$. \\ ${\cal D}$ is {\it symmetric} iff $\forall \: v, w \in {\cal V}$, $d(v, w) = d(w, v)$.
\\ ${\cal D}$ is {\it identity respecting} (IR) iff \\ $(1)$ $C = \mathbb{R}$; \\ $(2)$ $\prec$ is the usual strict total order on $\mathbb{R}$; \\ $(3)$ $\forall \: v, w \in {\cal V}$, $d(v, w) = 0$ iff $v = w$. \\ ${\cal D}$ is {\it positive} iff $(1)$, $(2)$, and \\ $(4)$ $\forall \: v, w \in {\cal V}$, $0 \preceq d(v, w)$. \\ ${\cal D}$ is {\it triangle-inequality respecting} (TIR) iff $(1)$, $(2)$, and \\ $(5)$ $\forall \: v, w, x \in {\cal V}$, $d(v, x) \preceq d(v, w) + d(w,x)$. \end{definition} These properties have not been imposed from the start because natural circumstances could then no longer be modelled. For instance, non-symmetric pseudo-distances are useful when moving from $v$ to $w$ may be ``cheaper'' than moving from $w$ to $v$. There are also circumstances where staying the same requires effort, and then non-IR pseudo-distances will be helpful. We can also imagine scenarios where some costs can be seen as ``benefits''; we will then turn to non-positive pseudo-distances, etc. In addition, the costs are not necessarily required to be real numbers. Indeed, for instance, we could need $|\mathbb{N}|$ to model an ``infinite cost'' useful when a move is impossible or extremely difficult. Provided one accepts the infinite cost $|\mathbb{N}|$, we can naturally define ``liberal'' versions of identity respect, positivity, and triangle-inequality respect: \begin{definition} Suppose ${\cal D} = \langle {C}, \prec, d \rangle$ is a pseudo-distance on a set $\cal V$. \\ ${\cal D}$ is {\it liberally} IR iff \\ $(1)$ $C = \mathbb{R} \cup \lbrace |\mathbb{N}| \rbrace$; \\ $(2)$ $\forall \: c, c' \in C$, $c \prec c'$ iff $(c, c' \in \mathbb{R}$ and $c < c')$ or $(c \in \mathbb{R}$ and $c' = |\mathbb{N}|)$; \\ $(3)$ $\forall \: v, w \in {\cal V}$, $d(v, w) = 0$ iff $v = w$. \\ ${\cal D}$ is {\it liberally positive} iff $(1)$, $(2)$, and \\ $(4)$ $\forall \: v, w \in {\cal V}$, $0 \preceq d(v, w)$.
\\ ${\cal D}$ is {\it liberally} TIR iff $(1)$, $(2)$, and \\ $(5)$ $\forall \: v, w, x \in {\cal V}$: if $d(v,x), d(v,w), d(w,x) \in \mathbb{R}$, then $d(v, x) \preceq d(v, w) + d(w,x)$; \\ if $d(v,x) = |\mathbb{N}|$, then $d(v, w) = |\mathbb{N}|$ or $d(w, x) = |\mathbb{N}|$. \end{definition} The Hamming distance between propositional valuations has been considered in \cite{Dalal1} and investigated further by many authors. Respecting this distance is an important property. We first need to present matrices for a propositional language \cite{Urquhart1}: \begin{definition} Let ${\cal L} = \langle {\cal A}, {\cal C} \rangle$ be a propositional language (${\cal A}$ denotes the atoms and ${\cal C}$ the connectives), let $\cal F$ be the set of all well-formed formulas (wffs) of $\cal L$, and $\forall \: \diamond \in {\cal C}$, let $n(\diamond)$ be the arity of $\diamond$. \\ $\cal M$ is a {\it matrix} on $\cal L$ iff ${\cal M} = \langle T, E, f \rangle$, where $T$ is a set, $E$ is a non-empty proper subset of $T$, and $f$ is a function (whose domain is ${\cal C}$) such that $\forall \: \diamond \in {\cal C}$, $f_\diamond$ (i.e.~$f(\diamond)$) is a function from $T^{{n(\diamond)}}$ to $T$. \\ $v$ is an $\cal M$-{\it valuation} iff $v$ is a function from $\cal F$ to $T$ such that $\forall \: \diamond \in {\cal C}$, $\forall \: \alpha_1, \ldots, \alpha_{{n(\diamond)}} \in {\cal F}$, $v(\diamond(\alpha_1, \ldots, \alpha_{{n(\diamond)}})) = f_\diamond(v(\alpha_1), \ldots, v(\alpha_{{n(\diamond)}}))$. \end{definition} Intuitively, $T$ is a set of truth values and $E$ contains all the designated truth values. \begin{definition} Let ${\cal L} = \langle {\cal A}, {\cal C} \rangle$ be a propositional language, $\cal M$ a matrix on $\cal L$, $\cal V$ the set of all $\cal M$-valuations, and ${\cal D} = \langle {C}, \prec, d \rangle$ a pseudo-distance on $\cal V$.
\\ We use the following notation: $\forall \: v, w \in {\cal V}$, \\ $h(v, w) := \lbrace p \in {\cal A} : v(p) \not= w(p) \rbrace$. \\ ${\cal D}$ is {\it Hamming-inequality respecting} (HIR) iff $\forall \: v, w, x \in {\cal V}$, if $|h(v, w)| < |h(v, x)|$, then $d(v, w) \prec d(v, x)$. \end{definition} Recall that $h(v, w)$ may be infinite and thus $<$ should be understood as the usual order on the cardinal numbers. We turn to crucial operators introduced in \cite{LehmannMagidorSchlechta1}. They are central in the definition of the distance-based revision. They transform any two sets of valuations $V$ and $W$ into the set of every element $w$ of $W$ such that a global move from $V$ to $w$ is of minimal cost. Note that concerning this point, \cite{LehmannMagidorSchlechta1} has its roots in \cite{KatsunoMendelzon1} and especially in \cite{Lewis1}. \begin{definition} Suppose ${\cal D} = \langle {C}, \prec, d \rangle$ is a pseudo-distance on a set $\cal V$. \\ We denote by $|_{\cal D}$ the binary operator on ${\cal P}({\cal V})$ such that $\forall \: V, W \subseteq {\cal V}$, we have $V |_{\cal D} W =$ $$ \lbrace w \in W : \exists \: v \in V, \forall \: v' \in V, \forall \: w' \in W, d(v, w) \preceq d(v', w') \rbrace. $$ \end{definition} \subsection{Distance-based revision operators} \label{DISdistbaseopera} The ontological commitments endorsed in \cite{LehmannMagidorSchlechta1} are close to the AGM ones: a classical propositional language is considered and both epistemic states and new information are modelled by consistent sets of formulas (not necessarily deductively closed). \begin{notation} We denote by ${\cal L}_c$ some classical propositional language and by $\vdash_c$, ${\cal V}_c$, $\models_c$, and ${\cal F}_c$ respectively the classical consequence relation, valuations, satisfaction relation, and wffs of ${\cal L}_c$. 
Let $\Gamma, \Delta \subseteq {\cal F}_c$ and $V \subseteq {\cal V}_c$, then: \\ $\Gamma \vee \Delta := \lbrace \alpha \vee \beta : \alpha \in \Gamma, \beta \in \Delta \rbrace$; \\ $\C{\vdash_c}{\Gamma} := \lbrace \alpha \in {\cal F}_c : \Gamma \vdash_c \alpha \rbrace$; \\ $\M{\Gamma} := \lbrace v \in {\cal V}_c : \forall \: \alpha \in \Gamma, v \models_c \alpha \rbrace$; \\ $\T{V} := \lbrace \alpha \in {\cal F}_c : V \subseteq \M{\alpha} \rbrace$; \\ ${\bf C} := \lbrace \Gamma \subseteq {\cal F}_c : \C{\vdash_c}{\Gamma} \not= {\cal F}_c \rbrace$; \\ ${\bf D} := \lbrace V \subseteq {\cal V}_c : \exists \: \Gamma \subseteq {\cal F}_c, V = \M{\Gamma} \rbrace$. \end{notation} In this classical framework, two new properties for pseudo-distances can be defined. They convey natural meanings. Their importance has been put in evidence in \cite{LehmannMagidorSchlechta1}. \begin{definition} Let ${\cal D} = \langle {C}, \prec, d \rangle$ be a pseudo-distance on ${\cal V}_c$. \\ ${\cal D}$ is {\it definability preserving} (DP) iff \\ $\forall \: V, W \in {\bf D}$, $V|_{\cal D}W \in {\bf D}$. \\ ${\cal D}$ is {\it consistency preserving} (CP) iff \\ $\forall \: V, W \in {\cal P}({\cal V}_c) \setminus \lbrace \emptyset \rbrace$, $V|_{\cal D}W \not=\emptyset$. \end{definition} Now, suppose we are given a pseudo-distance $\cal D$ on ${\cal V}_c$. Then, the revision of a consistent set of formulas $\Gamma$ by a second one $\Delta$ can be defined naturally as the set of all formulas satisfied in $\M{\Gamma} |_{\cal D} \M{\Delta}$: \begin{definition} Let $\star$ be an operator from ${\bf C} \times {\bf C}$ to ${\cal P}({\cal F}_c)$. \\ We say that $\star$ is a {\it distance-based revision operator} iff there exists a pseudo-distance ${\cal D}$ on ${\cal V}_c$ such that $\forall \: \Gamma, \Delta \in {\bf C}$, $$\Gamma \star \Delta = \T{\M{\Gamma} |_{\cal D} \M{\Delta}}.$$ In addition, if ${\cal D}$ is symmetric, IR, DP etc., then so is $\star$. 
\end{definition} The authors of \cite{LehmannMagidorSchlechta1} rewrote the AGM postulates in their framework as follows. \\ Suppose $\star$ is an operator from ${\bf C} \times {\bf C}$ to ${\cal P}({\cal F}_c)$. Then, define the following properties: $\forall \: \Gamma, \Gamma', \Delta, \Delta' \in {\bf C}$, \begin{description} \item[$(\star0)$] if $\C{\vdash_c}{\Gamma} = \C{\vdash_c}{\Gamma'}$ and $\C{\vdash_c}{\Delta} = \C{\vdash_c}{\Delta'}$, \\ then $\Gamma \star \Delta = \Gamma' \star \Delta'$; \item[$(\star1)$] $\Gamma \star \Delta \in {\bf C}$ and $\Gamma \star \Delta = \C{\vdash_c}{\Gamma \star \Delta}$; \item[$(\star2)$] $\Delta \subseteq \Gamma \star \Delta$; \item[$(\star3)$] if $\Gamma \cup \Delta \in {\bf C}$, then $\Gamma \star \Delta = \C{\vdash_c}{\Gamma \cup \Delta}$; \item[$(\star4)$] if $(\Gamma \star \Delta) \cup \Delta' \in {\bf C}$, \\ then $\Gamma \star (\Delta \cup \Delta') = \C{\vdash_c}{(\Gamma \star \Delta) \cup \Delta'}$. \end{description} Then, it can be checked that every positive, IR, CP and DP distance-based revision operator~$\star$ satisfies $(\star0)$-$(\star4)$, i.e. the AGM postulates. More importantly, $\star$ also satisfies certain properties that deal with iterated revisions. This is not surprising as the revisions of the different $\Gamma$'s are all defined by a unique pseudo-distance, which ensures a strong coherence between them.
For example, $\star$ satisfies the following two properties: $\forall \: \Gamma, \Delta, \lbrace \alpha \rbrace, \lbrace \beta \rbrace \in {\bf C}$ and $\forall \: \gamma \in {\cal F}_c$, \begin{itemize} \item if $\gamma \in (\Gamma \star \lbrace \alpha \rbrace) \star \Delta$ and $\gamma \in (\Gamma \star \lbrace \beta \rbrace) \star \Delta$, \\ then $\gamma \in (\Gamma \star \lbrace \alpha \vee \beta \rbrace) \star \Delta$; \item if $\gamma \in (\Gamma \star \lbrace \alpha \vee \beta \rbrace) \star \Delta$, \\ then $\gamma \in (\Gamma \star \lbrace \alpha \rbrace) \star \Delta$ or $\gamma \in (\Gamma \star \lbrace \beta \rbrace) \star \Delta$. \end{itemize} These properties are not entailed by the AGM postulates; a counter-example can be found in \cite{LehmannMagidorSchlechta1}. But, they seem intuitively justified. Indeed, take three sequences of revisions that differ only at some step in which the new information is $\alpha$ in the first sequence, $\beta$ in the second, and $\alpha \vee \beta$ in the third. Now, suppose $\gamma$ is concluded after both the first and the second sequences. Then, it should intuitively be the case that $\gamma$ is concluded after the third sequence too. Similar arguments can be given for the second property. Now, to characterize the full distance-based revision, more is needed. This is discussed in the next section. \subsection{Characterizations}\label{DIScharacLMS} The authors of \cite{LehmannMagidorSchlechta1} provided characterizations for families of distance-based revision operators. They proceeded in two steps. First, they defined the distance operators, in a very general framework: \begin{definition} Let $\cal V$ be a set, ${\bf V}, {\bf W}, {\bf X} \subseteq {\cal P}({\cal V})$, and $|$ an operator from ${\bf V} \times {\bf W}$ to ${\bf X}$. \\ $|$ is a {\it distance operator} iff there exists a pseudo-distance ${\cal D}$ on $\cal V$ such that $\forall \: V \in {\bf V}$, $\forall \: W \in {\bf W}$, $V | W = V |_{\cal D} W$.
\\ In addition, if ${\cal D}$ is symmetric, HIR, DP, etc., then so is $|$. \end{definition} Then, they characterized families of such distance operators (with the least possible assumptions about $\bf V$, $\bf W$, and $\bf X$). This is the essence of their work. Here is an example: \begin{proposition}\label{DISalgebraic}{\bf \cite{LehmannMagidorSchlechta1}} \\ Suppose ${\cal V}$ is a non-empty set, ${\bf V} \subseteq {\cal P}({\cal V})$ (such that $\emptyset \not\in {\bf V}$ and $\forall \: V, W \in {\bf V}$, we have $V \cup W \in {\bf V}$ and if $V \cap W \not=\emptyset$, then $V \cap W \in {\bf V}$ too), and $|$ an operator from ${\bf V} \times {\bf V}$ to ${\bf V}$. \\ Then, $|$ is a symmetric distance operator iff $\forall \: k \in \mathbb{N}^+$ and $\forall \: V_0, V_1, \ldots, V_k \in {\bf V}$, we have $V_0|V_1 \subseteq V_1$ and \begin{description} \item[$(| loop)$] if $\left \{ \begin{array}{l} (V_1 | (V_0 \cup V_2)) \cap V_0 \not= \emptyset,\\ (V_2 | (V_1 \cup V_3)) \cap V_1 \not= \emptyset,\\ \ldots,\\ (V_k | (V_{k-1} \cup V_0)) \cap V_{k-1} \not= \emptyset, \end{array} \right .$ \\ then $(V_0 | (V_k \cup V_1)) \cap V_1 \not= \emptyset$. \end{description} \end{proposition} In a second step only, they applied these results to characterize families of distance-based revision operators. For instance, they applied Proposition~\ref{DISalgebraic} to get Proposition~\ref{DIScaracrevclass} below. We should say immediately that they chose a classical framework to define the distance-based revision. But, if we choose now another framework, there are quite good chances that Proposition~\ref{DISalgebraic} can be still applied, thanks to its algebraic nature. \begin{proposition}\label{DIScaracrevclass} {\bf \cite{LehmannMagidorSchlechta1}} \\ Let $\star$ be an operator from ${\bf C} \times {\bf C}$ to~${\cal P}({\cal F}_c)$. 
\\ Then, $\star$ is a symmetric CP DP distance-based revision operator iff $\star$ satisfies $(\star0)$, $(\star1)$, $(\star2)$, and \\ $\forall \: k \in \mathbb{N}^+$, $\forall \: \Gamma_0, \Gamma_1, \ldots, \Gamma_k \in {\bf C}$, \begin{description} \item[$(\star loop)$] if $\left \{ \begin{array}{l} \Gamma_0 \cup (\Gamma_1 \star (\Gamma_0 \vee \Gamma_2)) \in {\bf C},\\ \Gamma_1 \cup (\Gamma_2 \star (\Gamma_1 \vee \Gamma_3)) \in {\bf C},\\ \ldots,\\ \Gamma_{k-1} \cup (\Gamma_k \star (\Gamma_{k-1} \vee \Gamma_0)) \in {\bf C}, \end{array} \right .$ \\ then $\Gamma_1 \cup (\Gamma_0 \star (\Gamma_k \vee \Gamma_1)) \in {\bf C}$. \end{description} \end{proposition} \section{Normal characterizations}\label{DISnormalchar} Let $\cal V$ be a set, ${\cal O}$ a set of binary operators on ${\cal P}({\cal V})$, and $|$ a binary operator on ${\cal P}({\cal V})$. Roughly, in \cite{Schlechta5}, a characterization of ${\cal O}$ is called normal iff it contains only conditions which are universally quantified, apply $|$ only a finite number of times, and use only elementary operations (like~e.g.~$\cup$, $\cap$, $\setminus$); see Section~1.6.2.1 of \cite{Schlechta5} for details. Here is an example of such a condition: \begin{description} \item[$(C1)$] $\forall \: V, W \in {\bf U} \subseteq {\cal P}({\cal V})$, $V | ( (V \cup W) | W ) = \emptyset$. \end{description} Now, we introduce a new, more general, definition with the aim of providing more general impossibility results. Roughly, in the present paper, a characterization of $\cal O$ will be called normal iff it contains only conditions which are universally quantified and apply $|$ only a finite number of times. Then, the conditions can involve complex structures or functions, etc.; we are not limited to elementary operations. More formally: \begin{definition} \label{DISnormalCO} Suppose $\cal V$ is a set and ${\cal O}$ a set of binary operators on ${\cal P}({\cal V})$.
\\ $\cal C$ is a {\it normal characterization} of ${\cal O}$ iff ${\cal C} = \langle n, \Phi \rangle$ where $n \in \mathbb{N}^+$ and $\Phi$ is a relation on ${\cal P}({\cal V})^{3n}$ such that for every binary operator $|$ on ${\cal P}({\cal V})$, we have $| \in {\cal O}$ iff \\ $\forall \: V_1, \ldots, V_n, W_1, \ldots, W_n \subseteq {\cal V}$, \\ $(V_1, \ldots, V_n, W_1, \ldots, W_n, V_1 | W_1, \ldots, V_n | W_n) \in \Phi.$ \end{definition} Note that $\Phi$ is a relation in the purely set-theoretic sense. Now, suppose there is no normal characterization of ${\cal O}$. Here are examples (i.e. $(C1)$, $(C2)$, and $(C3)$ below) that will give the reader a good idea which conditions cannot characterize ${\cal O}$. This will therefore make clearer the range of our impossibility results (Propositions~\ref{DISpascarac} and \ref{DISpascaracham} below). To begin, $(C1)$ cannot characterize $\cal O$. Indeed, suppose it does, i.e. $| \in {\cal O}$ iff $\forall \: V, W \in {\bf U}$, $V | ( (V \cup W) | W ) = \emptyset$. \\ Then, take $n = 4$ and $\Phi$ such that \\ $(V_1, \dots, V_4, W_1, \ldots, W_4, X_1, \ldots, X_4) \in \Phi$ iff \\ $\left \{ \begin{array}{l} V_1, V_2 \in {\bf U},\\ V_3 = V_1 \cup V_2,\\ W_3 = V_2,\\ V_4 = V_1,\\ W_4 = X_3\end{array} \right .$ entail $X_4 = \emptyset$. \\ Then, $\langle 4, \Phi \rangle$ is a normal characterization of ${\cal O}$. We give the easy proof of this, so that the reader can check that a convenient relation $\Phi$ can be found immediately for all simple conditions like $(C1)$. \begin{proof} Direction: ``$\rightarrow$''. \\ Suppose $| \in {\cal O}$. \\ Then, $\forall \: V, W \in {\bf U}$, $V | ( (V \cup W) | W ) = \emptyset$. \\ Let $V_1, \ldots, V_4, W_1, \ldots, W_4 \subseteq {\cal V}$. We show: \\ $(V_1, \ldots, V_4, W_1, \ldots, W_4, V_1 | W_1, \ldots, V_4 | W_4) \in \Phi$. \\ Suppose $V_1, V_2 \in {\bf U}$, $V_3 = V_1 \cup V_2$, $W_3 = V_2$, $V_4 = V_1$, and $W_4 = V_3 | W_3$. 
\\ Then, as $V_1, V_2 \in {\bf U}$, we get $V_1 | ( (V_1 \cup V_2) | V_2 ) = \emptyset$. \\ But, $V_1 | ( (V_1 \cup V_2) | V_2 ) = V_1 | ( V_3 | W_3 ) = V_4 | W_4$. Direction: ``$\leftarrow$''. \\ Suppose $\forall \: V_1, \ldots, V_4, W_1, \ldots, W_4 \subseteq {\cal V}$, \\ $(V_1, \ldots, V_4, W_1, \ldots, W_4, V_1 | W_1, \ldots, V_4 | W_4) \in \Phi$. \\ We show $| \in {\cal O}$. Let $V, W \in {\bf U}$. \\ Take $V_1 = V$, $V_2 = W$, $V_3 = V_1 \cup V_2$, $W_3 = V_2$, $V_4 = V_1$, $W_4 = V_3 | W_3$. Take any values for $W_1$ and $W_2$. \\ Then, $V_1 \in {\bf U}$, $V_2 \in {\bf U}$, $V_3 = V_1 \cup V_2$, $W_3 = V_2$, $V_4 = V_1$, and $W_4 = V_3|W_3$. \\ But, $(V_1, \ldots, V_4, W_1, \ldots, W_4, V_1 | W_1, \ldots, V_4 | W_4) \in \Phi$. \\ Therefore, by definition of $\Phi$, $V_4 | W_4 = \emptyset$. \\ But, $V_4 | W_4 = V_1 | ((V_1 \cup V_2) | V_2) = V | ((V \cup W) | W)$.\qed \end{proof} At this point, we have excluded all those conditions which are excluded by (the nonexistence of a normal characterization of ${\cal O}$ in the sense of) Schlechta, i.e. all conditions like $(C1)$. But actually, more complex conditions are also excluded. For instance, let $f$ be any function from ${\cal P}({\cal V})$ to ${\cal P}({\cal V})$. Then, the following condition: \begin{description} \item[$(C2)$] $\forall \: V, W \in {\bf U}$, $f(V) | ( (V \cup W) | W ) = \emptyset$. \end{description} cannot characterize ${\cal O}$. Indeed, suppose it characterizes ${\cal O}$. Then, take $n = 4$ and $\Phi$ such that \\ $(V_1, \dots, V_4, W_1, \ldots, W_4, X_1, \ldots, X_4) \in \Phi$ iff \\ $\left \{ \begin{array}{l} V_1, V_2 \in {\bf U},\\ V_3 = V_1 \cup V_2,\\ W_3 = V_2,\\ V_4 = f(V_1),\\ W_4 = X_3 \end{array} \right .$ entail $X_4 = \emptyset$. \\ Then, $\langle 4, \Phi \rangle$ is a normal characterization of ${\cal O}$. We leave the easy proof of this to the reader. On the other hand, $(C2)$ is not excluded by Schlechta, if $f$ cannot be constructed from elementary operations.
But, even if there exists such a construction, showing that this is indeed the case might well be a difficult problem. We can even go further, combining universal (not existential) quantifiers and functions like~$f$. For instance, suppose ${\cal G}$ is a set of functions from ${\cal P}({\cal V})$ to ${\cal P}({\cal V})$ and consider the following condition: \begin{description} \item[$(C3)$] $\forall \: f \in {\cal G}$, $\forall \: V, W \in {\bf U}$, $f(V) | ( (V \cup W) | W ) = \emptyset$. \end{description} Then, $(C3)$ cannot characterize ${\cal O}$. Indeed, suppose $(C3)$ characterizes ${\cal O}$. Then, take $n = 4$ and $\Phi$ such that \\ $(V_1, \dots, V_4, W_1, \ldots, W_4, X_1, \ldots, X_4) \in \Phi$ iff \\ $\forall \: f \in {\cal G}$, if $\left \{ \begin{array}{l} V_1, V_2 \in {\bf U},\\ V_3 = V_1 \cup V_2,\\ W_3 = V_2,\\ V_4 = f(V_1),\\ W_4 = X_3, \end{array} \right .$ then $X_4 = \emptyset$. \\ Then, $\langle 4, \Phi \rangle$ is a normal characterization of ${\cal O}$. The easy proof is left to the reader. On the other hand, $(C3)$ is not excluded by Schlechta. Finally, a good example of a condition which is not excluded (neither by us nor by Schlechta) is of course the arbitrarily big loop condition $(| loop)$. \section{Impossibility results}\label{DISnoFiniteNormChar} We provide our first impossibility result. It generalizes Proposition 4.2.11 of \cite{Schlechta5}. Our proof will be based on a slight adaptation of a particular pseudo-distance invented by Schlechta (called the ``Hamster Wheel'').
suppose there is $n \in \mathbb{N}^+$ and a relation $\Phi$ on ${\cal P}({\cal V})^{3n}$ such that \begin{description} \item[$(0)$] for every binary operator $|$ on ${\cal P}({\cal V})$, we have $| \in {\cal O}$ iff \\ $\forall \: V_1, \ldots, V_n$, $W_1, \ldots, W_n \subseteq {\cal V}$, \\ $(V_1, \ldots, V_n$, $W_1, \ldots, W_n$, $V_1 | W_1, \ldots, V_n | W_n) \in \Phi$. \end{description} As $\cal V$ is infinite, there are distinct $v_1, \ldots, v_m$, $w_1, \ldots, w_m$ in ${\cal V}$, with $m = n + 3$. \\ Let $X = \lbrace v_1, \ldots, v_m, w_1, \ldots, w_m \rbrace$. \\ Let ${\cal D}$ be the pseudo-distance on $\cal V$ such that ${\cal D} = \langle \mathbb{R}, <, d \rangle$, where $<$ is the usual order on $\mathbb{R}$ and $d$ is the function defined as follows. Let $v, w \in {\cal V}$. Consider the cases that follow: \\ Case~1: $v = w$. \\ Case~2: $v \not= w$. \\ Case~2.1: $\lbrace v, w \rbrace \not\subseteq X$. \\ Case~2.2: $\lbrace v, w \rbrace \subseteq X$. \\ Case~2.2.1: $\lbrace v, w \rbrace \subseteq \lbrace v_1, \ldots, v_m \rbrace$. \\ Case~2.2.2: $\lbrace v, w \rbrace \subseteq \lbrace w_1, \ldots, w_m \rbrace$. \\ Case~2.2.3: $\exists \: i, j \in [1, m],\; \lbrace v, w \rbrace = \lbrace v_i, w_j \rbrace$. \\ Case~2.2.3.1: $i = j$. \\ Case~2.2.3.2: $abs(i - j) \in \lbrace 1, m-1 \rbrace$. \\ Case~2.2.3.3: $1 < abs(i - j) < m-1$. \\ Then, $$ d(v, w) = \left \{ \begin{array}{l l} 0 & \textrm{if Case~1 holds;}\\ 1 & \textrm{if Case~2.1 holds;}\\ 1.1 & \textrm{if Case~2.2.1 holds;} \\ 1.1 & \textrm{if Case~2.2.2 holds;} \\ 1.4 & \textrm{if Case~2.2.3.1 holds;}\\ 2 & \textrm{if Case~2.2.3.2 holds;}\\ 1.2 & \textrm{if Case~2.2.3.3 holds.}\\ \end{array} \right . $$ Note that ${\cal D}$ is essentially, but not exactly, the Hamster Wheel of \cite{Schlechta5}. The main difference is Case~2.1, which was not treated by Schlechta. The reader can find a picture of $\cal D$ in Figure~1. 
\\ \begin{center} \epsfig{file=./wheel1NB.eps, height=8cm} \hfill \\ Figure 1: A slight adaptation of Hamster Wheel. \end{center} \hfill \\ Let $|$ be the binary operator on ${\cal P}({\cal V})$ such that $\forall \: V, W \subseteq {\cal V}$, $$ V | W = \left \{ \begin{array}{l l} \lbrace w_m \rbrace & \textrm{if}\; V = \lbrace v_m, v_1 \rbrace, W = \lbrace w_m, w_1 \rbrace;\\ \lbrace v_m \rbrace & \textrm{if}\; V = \lbrace w_m, w_1 \rbrace, W = \lbrace v_m, v_1 \rbrace;\\ V |_{\cal D} W & \textrm{otherwise.} \end{array} \right . $$ The difference between $|$ and $|_{\cal D}$ is strong enough so that: \begin{description} \item[$(1)$] $|$ is not a distance operator. \end{description} The proof will be given later. Thus, $| \not\in {\cal O}$. Thus, by $(0)$: \begin{description} \item[$(2)$] $\exists V_1, \ldots, V_n, W_1, \ldots, W_n \subseteq {\cal V}$, \\ $(V_1, \ldots, V_n, W_1, \ldots, W_n, V_1 | W_1, \ldots, V_n | W_n) \not\in \Phi$. \end{description} In addition, we took $m$ sufficiently big so that: \begin{description} \item[$(3)$] $\exists \: r \in [1, m-1]$ such that \\ $\forall \: i \in [1, n]$, $\lbrace V_i, W_i \rbrace \not= \lbrace \lbrace v_r, v_{r+1} \rbrace, \lbrace w_r, w_{r+1} \rbrace \rbrace$. \end{description} We will give the proof later. \\ Let $|'$ be the binary operator on ${\cal P}({\cal V})$ such that $\forall \: V, W \subseteq {\cal V}$, $$ V |' W = \left \{ \begin{array}{l l} \lbrace w_{r+1} \rbrace & \!\!\!\textrm{if}\: V = \lbrace v_r, v_{r+1} \rbrace, W = \lbrace w_r, w_{r+1} \rbrace;\\ \lbrace v_{r+1} \rbrace & \!\!\!\textrm{if}\: V = \lbrace w_r, w_{r+1} \rbrace, W = \lbrace v_r, v_{r+1} \rbrace;\\ V | W & \!\!\!\textrm{otherwise.}\\ \end{array} \right . $$ The difference between $|'$ and $|$ is ``invisible'' for $\Phi$. \\ More formally, $\forall \: i \in [1, n]$, $V_i |' W_i = V_i | W_i$. \\ The proof of this is obvious by $(3)$. 
\\ Therefore, by $(2)$, we get: \\ $(V_1, \ldots, V_n$, $W_1, \ldots, W_n$, $V_1 |' W_1, \ldots, V_n |' W_n) \not\in \Phi$. \\ Thus, by $(0)$, we obtain: \begin{description} \item[$(4)$] $|' \not\in {\cal O}$. \end{description} But, at the same time, there is a convenient pseudo-distance that represents $|'$. Indeed, let ${\cal D}'$ be the pseudo-distance on $\cal V$ such that ${\cal D}' = \langle \mathbb{R}, <, d' \rangle$, where $d'$ is the function such that $\forall \: v, w \in {\cal V}$, $$ d'(v, w) = \left \{ \begin{array}{l l} \!\!\!1.3 & \!\!\!\textrm{if}\: \exists i \in [r+1, m], \lbrace v, w \rbrace = \lbrace v_i, w_i \rbrace; \\ \!\!\!d(v, w) & \!\!\!\textrm{otherwise.} \end{array} \right . $$ Then, we will show: \begin{description} \item[$(5)$] $|' = |_{{\cal D}'}$. \end{description} But, ${\cal D}'$ is obviously symmetric, IR, and positive. \\ In addition, ${\cal D}'$ is TIR, because ${\cal D}'$ is IR and \\ $\forall \: v, w \in {\cal V}$, $d'(v, w) = 0$ or $1 \leq d'(v,w) \leq 2$. \\ Thus, $|'$ is a symmetric IR positive TIR distance operator. \\ Consequently, $|' \in {\cal N}$ and thus \begin{description} \item[$(6)$] $|' \in {\cal O}$. \end{description} So, we get a final contradiction by $(4)$ and $(6)$. \\ \\ {\it Proof of $(1)$}. Suppose the contrary, i.e. suppose there is a pseudo-distance ${{\cal S}} = \langle {C}, \prec, g \rangle$ on $\cal V$ such that $| = |_{{\cal S}}$. \\ Then, we will show: \\ $(1.1)$\quad $\forall \: i \in [1, m-1]$, $g(v_i, w_i) = g(v_{i+1}, w_{i+1})$. \\ On the other hand, we will show: \\ $(1.2)$\quad $g(v_m, w_m) \prec g(v_1, w_1)$. \\ But, by $(1.1)$ and $(1.2)$, we get an obvious contradiction. \\ \\ {\it Proof of $(1.1)$}. Suppose $i \in [1, m-1]$. Then: \\ $\lbrace v_i, v_{i+1} \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace = \lbrace v_i, v_{i+1} \rbrace |_{\cal D} \lbrace w_i, w_{i+1} \rbrace = \lbrace w_i, w_{i+1} \rbrace$. \\ Case~1: $g(v_i, w_i) \prec g(v_{i+1}, w_{i+1})$. 
\\ We have $\lbrace v_i \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace = \lbrace v_i \rbrace |_{\cal D} \lbrace w_i, w_{i+1} \rbrace = \lbrace w_i \rbrace$. \\ Thus, $w_{i+1} \not\in \lbrace v_i \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace$. \\ Therefore, $g(v_i, w_i) \prec g(v_i, w_{i+1})$. \\ Thus, $w_{i+1} \not\in \lbrace v_i, v_{i+1} \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace$, which is impossible. \\ Case~2: $g(v_{i+1}, w_{i+1}) \prec g(v_i, w_i)$. \\ We have $\lbrace v_{i+1} \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace = \lbrace v_{i+1} \rbrace |_{\cal D} \lbrace w_i, w_{i+1} \rbrace = \lbrace w_{i+1} \rbrace$. \\ Therefore, $w_i \not\in \lbrace v_{i+1} \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace$. \\ Consequently, $g(v_{i+1}, w_{i+1}) \prec g(v_{i+1}, w_i)$. \\ Thus, $w_i \not\in \lbrace v_i, v_{i+1} \rbrace |_{{\cal S}}\lbrace w_i, w_{i+1} \rbrace$, which is impossible. \\ Case~3: $g(v_i, w_i) \not\prec g(v_{i+1}, w_{i+1})$ and $g(v_{i+1}, w_{i+1}) \not\prec g(v_i, w_i)$. \\ Then, as $\prec$ is total, $g(v_i, w_i) = g(v_{i+1}, w_{i+1})$. \\ \\ {\it Proof of $(1.2)$}. We have $\lbrace v_m, v_1 \rbrace |_{{\cal S}} \lbrace w_m, w_1 \rbrace =$ $\lbrace v_m, v_1 \rbrace | \lbrace w_m, w_1 \rbrace$ $=$ $\lbrace w_m \rbrace$. \\ Therefore, $w_1 \not\in \lbrace v_m, v_1 \rbrace |_{{\cal S}} \lbrace w_m, w_1 \rbrace$. Thus: \\ $\exists \: v \in \lbrace v_m, v_1 \rbrace$, $\exists \: w \in \lbrace w_m, w_1 \rbrace$, $g(v, w) \prec g(v_1, w_1)$. \\ Case~1: $g(v_m, w_m) \prec g(v_1, w_1)$. We are done. \\ Case~2: $g(v_m, w_1) \prec g(v_1, w_1)$. \\ We have $\lbrace v_m \rbrace |_{{\cal S}}\lbrace w_m, w_1 \rbrace = \lbrace v_m \rbrace |_{\cal D} \lbrace w_m, w_1 \rbrace = \lbrace w_m \rbrace$. \\ Therefore, $w_1 \not\in \lbrace v_m \rbrace |_{{\cal S}}\lbrace w_m, w_1 \rbrace$. \\ Thus, $g(v_m, w_m) \prec g(v_m, w_1)$. \\ Thus, by transitivity of $\prec$, $g(v_m, w_m) \prec g(v_1, w_1)$. \\ Case~3: $g(v_1, w_m) \prec g(v_1, w_1)$. 
\\ Then, $\lbrace v_1 \rbrace |_{{\cal S}}\lbrace w_m, w_1 \rbrace = \lbrace w_m \rbrace$. \\ However, $\lbrace v_1 \rbrace |_{{\cal S}}\lbrace w_m, w_1 \rbrace = \lbrace v_1 \rbrace |_{\cal D} \lbrace w_m, w_1 \rbrace = \lbrace w_1 \rbrace$, which is impossible. \\ Case~4: $g(v_1, w_1) \prec g(v_1, w_1)$. \\ Impossible by irreflexivity of $\prec$. \\ \\ {\it Proof of $(3)$}. For all $s \in [1, m-1]$, define: \\ $I_s := \lbrace i \in [1, n] : \lbrace V_i, W_i \rbrace = \lbrace \lbrace v_s, v_{s+1} \rbrace, \lbrace w_s, w_{s+1} \rbrace \rbrace\rbrace$. \\ Suppose the opposite of what we want to show, i.e. suppose $\forall \: s \in [1, m-1]$, $I_s \not = \emptyset$. \\ As $v_1, \ldots, v_m, w_1, \ldots, w_m$ are distinct, $\forall \: s, t \in [1, m-1]$, if $s \not= t$, then $I_s \cap I_t = \emptyset$. \\ Therefore, $m-1 \leq | I_1 \cup \ldots \cup I_{m-1}|$. \\ On the other hand, $\forall \: s \in [1, m-1]$, $I_s \subseteq [1, n]$. \\ Thus, $| I_1 \cup \ldots \cup I_{m-1}| \leq n$. \\ Thus, $m-1 \leq n$, which is impossible as $m = n+3$. \\ \\ {\it Proof of $(5)$}. Let $V, W \subseteq {\cal V}$. \\ Case~1: $V = \lbrace v_r, v_{r+1} \rbrace$ and $W = \lbrace w_r, w_{r+1} \rbrace$. \\ Then, $V |' W = \lbrace w_{r+1} \rbrace = V |_{{\cal D}'} W$. \\ Case~2: $V = \lbrace w_r, w_{r+1} \rbrace$ and $W = \lbrace v_r, v_{r+1} \rbrace$. \\ Then, $V |' W = \lbrace v_{r+1} \rbrace = V |_{{\cal D}'} W$. \\ Case~3: $V = \lbrace v_m, v_1 \rbrace$ and $W = \lbrace w_m, w_1 \rbrace$. \\ Then, $V |' W = V | W = \lbrace w_m \rbrace = V |_{{\cal D}'} W$. \\ Case~4: $V = \lbrace w_m, w_1 \rbrace$ and $W = \lbrace v_m, v_1 \rbrace$. \\ Then, $V |' W = V | W = \lbrace v_m \rbrace = V |_{{\cal D}'} W$. \\ Case~5: $\lbrace V, W \rbrace \not\in$ \\ $\lbrace \lbrace \lbrace v_r, v_{r+1} \rbrace, \lbrace w_r, w_{r+1} \rbrace \rbrace, \lbrace \lbrace v_m, v_1 \rbrace, \lbrace w_m, w_1 \rbrace \rbrace \rbrace$. \\ Then, $V |' W = V | W = V |_{\cal D} W$. 
\\ Case~5.1: $V = \emptyset$ or $W = \emptyset$. \\ Then, $V |_{\cal D} W = \emptyset = V |_{{\cal D}'} W$. \\ Case~5.2: $V \cap W \not= \emptyset$. \\ Then, $V |_{\cal D} W = V \cap W = V |_{{\cal D}'} W$. \\ Case~5.3: $V \not= \emptyset$, $W \not= \emptyset$, and $V \cap W = \emptyset$. \\ Case~5.3.1: $V \not\subseteq X$. \\ Then, $V |_{\cal D} W = W = V |_{{\cal D}'} W$. \\ Case~5.3.2: $V \subseteq X$. \\ Case~5.3.2.1: $W \not\subseteq X$. \\ Then, $V |_{\cal D} W = W \setminus X = V |_{{\cal D}'} W$. \\ Case~5.3.2.2: $W \subseteq X$. \\ Case~5.3.2.2.1: $V \not\subseteq \lbrace v_1, \ldots, v_m \rbrace$ and $V \not\subseteq \lbrace w_1, \ldots, w_m \rbrace$. \\ Then, $V |_{\cal D} W = W = V |_{{\cal D}'} W$. \\ Case~5.3.2.2.2: $V \subseteq \lbrace v_1, \ldots v_m \rbrace$ and $W \not\subseteq \lbrace w_1, \ldots w_m \rbrace$. \\ Then, $V |_{\cal D} W = W \cap \lbrace v_1, \ldots, v_m \rbrace = V |_{{\cal D}'} W$. \\ Case~5.3.2.2.3: $V \subseteq \lbrace v_1, \ldots v_m \rbrace$ and $W \subseteq \lbrace w_1, \ldots w_m \rbrace$. \\ Case~5.3.2.2.3.1: $\exists \: v_i \in V$, $\exists \: w_j \in W$, \\ $1 < abs(i-j) < m-1$. \\ Then, $V |_{\cal D} W =$ \\ $\lbrace w_j \in W : \exists \: v_i \in V$, $1 < abs(i-j) < m-1 \rbrace = V |_{{\cal D}'} W$. \\ Case~5.3.2.2.3.2: $\forall \: v_i \in V$, $\forall \: w_j \in W$, \\ $abs(i-j) \in \lbrace 0, 1, m-1 \rbrace$. \\ Case~5.3.2.2.3.2.1: $|V \cup W| \geq 5$. \\ As $m \geq 4$, $\exists \: v_i \in V$, $\exists \: w_j \in W$, $1 < abs(i-j) < m-1$, which is impossible. \\ Case~5.3.2.2.3.2.2: $|V \cup W| \in \lbrace 2, 3, 4 \rbrace$. \\ Case~5.3.2.2.3.2.2.1: $\lbrace k \in [1, m] : v_k \in V, w_k \in W \rbrace = \emptyset$. \\ Then, $V |_{\cal D} W = W = V |_{{\cal D}'} W$. \\ Case~5.3.2.2.3.2.2.2: $\exists \: i \in [1, m]$ such that \\ $\lbrace k \in [1, m] : v_k \in V, w_k \in W \rbrace = \lbrace i \rbrace$. \\ Then, $V |_{\cal D} W = \lbrace w_i \rbrace = V |_{{\cal D}'} W$. 
\\ Case~5.3.2.2.3.2.2.3: $\exists \: i, j \in [1, m]$ such that $i < j$ and \\ $\lbrace k \in [1, m] : v_k \in V$ and $w_k \in W \rbrace = \lbrace i, j \rbrace$. \\ Then, $V = \lbrace v_i, v_j \rbrace$ and $W = \lbrace w_i, w_j \rbrace$. \\ Case~5.3.2.2.3.2.2.3.1: $r < i$ or $j \leq r$. \\ Then, $V |_{\cal D} W = \lbrace w_i, w_j \rbrace = V |_{{\cal D}'} W$. \\ Case~5.3.2.2.3.2.2.3.2: $i \leq r < j$. \\ We have $abs(i - j) \in \lbrace 1, m-1 \rbrace$. Thus, $\langle V,\: W \rangle \in$ \\ $\lbrace \langle \lbrace v_r,\: v_{r+1} \rbrace,\: \lbrace w_r,\: w_{r+1} \rbrace \rangle$, $\langle \lbrace v_1,\: v_m \rbrace,\: \lbrace w_1,\: w_m \rbrace \rangle \rbrace$, \\ which is impossible. \\ Case~5.3.2.2.3.2.2.4: $| \lbrace k \in [1, m] : v_k \in V, w_k \in W \rbrace | \geq 3$. \\ Then, $|V \cup W| \geq 6$, which is impossible. \\ Case~5.3.2.2.4: $V \subseteq \lbrace w_1, \ldots, w_m \rbrace$ and $W \not\subseteq \lbrace v_1, \ldots, v_m \rbrace$. \\ Then, $V |_{\cal D} W = W \cap \lbrace w_1, \ldots, w_m \rbrace = V |_{{\cal D}'} W$. \\ Case~5.3.2.2.5: $V \subseteq \lbrace w_1, \ldots, w_m \rbrace$ and $W \subseteq \lbrace v_1, \ldots, v_m \rbrace$. \\ Similar to Case~5.3.2.2.3.\qed \end{proof} We extend the negative results to the ``liberal'' and Hamming properties. The proof will be based on an adaptation of the Hamster Wheel. Note that the Hamming distance is a realistic distance which has been investigated by many researchers. This strengthens the importance of Proposition~\ref{DISpascaracham} below in the sense that not only abstract but also concrete cases do not admit a normal characterization.
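As a sanity check on the construction used in the proof of Proposition~\ref{DISpascarac}, its finite core can be verified mechanically. The Python sketch below is our own illustration, not part of the formal development: it restricts the universe to the finite set $X$, fixes $m = 4$ and $r = 2$ (an arbitrary admissible choice), reads $V |_{\cal D} W$ as the set of elements of $W$ at minimal distance from $V$, and brute-forces claim $(5)$, i.e. $|' = |_{{\cal D}'}$, over all pairs of subsets of $X$. All function and variable names here are ours.

```python
from itertools import chain, combinations

# Finite fragment of the construction: m = n + 3 with n = 1, and r in [1, m-1]
# (our choice of parameters, for illustration only).
m, r = 4, 2
X = [f"v{i}" for i in range(1, m + 1)] + [f"w{i}" for i in range(1, m + 1)]

def d(v, w):
    """The adapted Hamster Wheel distance d, restricted to X."""
    if v == w:
        return 0.0                    # Case 1
    if v[0] == w[0]:                  # both v-points or both w-points (Cases 2.2.1-2.2.2)
        return 1.1
    i, j = int(v[1:]), int(w[1:])     # a pair {v_i, w_j} (Case 2.2.3)
    if i == j:
        return 1.4
    return 2.0 if abs(i - j) in (1, m - 1) else 1.2

def d2(v, w):
    """d', which shrinks the spokes {v_i, w_i} with i in [r+1, m] to 1.3."""
    if v[0] != w[0] and int(v[1:]) == int(w[1:]) and int(v[1:]) >= r + 1:
        return 1.3
    return d(v, w)

D = {(v, w): d(v, w) for v in X for w in X}
D2 = {(v, w): d2(v, w) for v in X for w in X}

def dist_op(dist, V, W):
    """V |_D W, read as: the elements of W at minimal distance from V."""
    if not V or not W:
        return frozenset()
    best = min(dist[v, w] for v in V for w in W)
    return frozenset(w for w in W for v in V if dist[v, w] == best)

def bar(V, W):
    """The operator |: |_D with the value at ({v_m, v_1}, {w_m, w_1}) swapped by hand."""
    if V == {f"v{m}", "v1"} and W == {f"w{m}", "w1"}:
        return frozenset({f"w{m}"})
    if V == {f"w{m}", "w1"} and W == {f"v{m}", "v1"}:
        return frozenset({f"v{m}"})
    return dist_op(D, V, W)

def bar2(V, W):
    """The operator |', which further swaps the pair at position r."""
    if V == {f"v{r}", f"v{r+1}"} and W == {f"w{r}", f"w{r+1}"}:
        return frozenset({f"w{r+1}"})
    if V == {f"w{r}", f"w{r+1}"} and W == {f"v{r}", f"v{r+1}"}:
        return frozenset({f"v{r+1}"})
    return bar(V, W)

subsets = [set(s) for s in chain.from_iterable(combinations(X, k) for k in range(len(X) + 1))]
# Claim (5): |' coincides with |_{D'}, checked on every pair of subsets of X.
assert all(bar2(V, W) == dist_op(D2, V, W) for V in subsets for W in subsets)
```

The check passes on this fragment; of course it replaces neither the case analysis above, which covers arbitrary $V, W \subseteq {\cal V}$, nor the proofs of $(1)$ and $(3)$.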
\begin{proposition}\label{DISpascaracham} Let ${\cal L} = \langle {\cal A}, {\cal C} \rangle$ be a propositional language with ${\cal A}$ infinite and countable, ${\cal M}$ a matrix on $\cal L$, $\cal V$ the set of all $\cal M$-valuations, ${\cal N}$ the set of all symmetric, HIR, liberally IR, liberally positive, and liberally TIR distance operators from ${\cal P}({\cal V})^2$ to ${\cal P}({\cal V})$, and ${\cal O}$ a set of distance operators from ${\cal P}({\cal V})^2$ to ${\cal P}({\cal V})$ such that ${\cal N} \subseteq {\cal O}$. \\ Then, there does not exist a normal characterization of ${\cal O}$. \end{proposition} \begin{proof} Suppose the contrary, i.e. suppose there are $n \in \mathbb{N}^+$ and a relation $\Phi$ on ${\cal P}({\cal V})^{3n}$ such that \begin{description} \item[$(0)$] for every binary operator $|$ on ${\cal P}({\cal V})$, we have $| \in {\cal O}$ iff \\ $\forall \: V_1, \ldots, V_n$, $W_1, \ldots, W_n \subseteq {\cal V}$, \\ $(V_1, \ldots, V_n, W_1, \ldots, W_n, V_1 | W_1, \ldots, V_n | W_n) \in \Phi$. \end{description} As ${\cal A}$ is infinite, there are distinct $p_1, \ldots, p_m,$ $q_1, \ldots, q_m$ in ${\cal A}$, with $m = n + 3$. \\ Write ${\cal M} = \langle T, D, f \rangle$. \\ As $D \not= \emptyset$ and $T \setminus D \not= \emptyset$, there are distinct $0, 1 \in T$. \\ Now, $\forall \: i \in [1, m]$, let $v_i$ be the $\cal M$-valuation that assigns $1$ to $p_i$ and $0$ to each other atom of ${\cal A}$. \\ Similarly, $\forall \: i \in [1, m]$, let $w_i$ be the $\cal M$-valuation that assigns $1$ to $q_i$ and $0$ to each other atom of ${\cal A}$. \\ Let $X = \lbrace v_1, \ldots, v_m, w_1, \ldots, w_m \rbrace$. \\ Note that $\forall \: v, w \in X$, with $v \not= w$, we have $|h(v, w)| = 2$. \\ Finally, let ${\cal D}$ be the pseudo-distance on $\cal V$ such that ${\cal D} = \langle \mathbb{R} \cup \lbrace |\mathbb{N}| \rbrace, \prec, d \rangle$, where $\prec$ and $d$ are defined as follows.
\\ Let $c, c' \in \mathbb{R} \cup \lbrace |\mathbb{N}| \rbrace$. Then, $c \prec c'$ iff $(c, c' \in \mathbb{R}$ and $c < c')$ or $(c \in \mathbb{R}$ and $c' = |\mathbb{N}|)$. \\ Let $v, w \in {\cal V}$ and consider the cases which follow: \\ Case~1: $v = w$. \\ Case~2: $v \not= w$. \\ Case~2.1: $\lbrace v, w \rbrace \not\subseteq X$. \\ Case~2.1.1: $|h(v, w)| = 1$. \\ Case~2.1.2: $|h(v, w)| \geq 2$. \\ Case~2.2: $\lbrace v, w \rbrace \subseteq X$. \\ Case~2.2.1: $\lbrace v, w \rbrace \subseteq \lbrace v_1, \ldots, v_m \rbrace$. \\ Case~2.2.2: $\lbrace v, w \rbrace \subseteq \lbrace w_1, \ldots, w_m \rbrace$. \\ Case~2.2.3: $\exists \: i, j \in [1, m],\; \lbrace v, w \rbrace = \lbrace v_i, w_j \rbrace$. \\ Case~2.2.3.1: $i = j$. \\ Case~2.2.3.2: $abs(i - j) \in \lbrace 1, m-1 \rbrace$. \\ Case~2.2.3.3: $1 < abs(i - j) < m-1$. \\ Then, $$ d(v, w) = \left \{ \begin{array}{l l} 0 & \textrm{if Case~1 holds;}\\ 1.4 & \textrm{if Case~2.1.1 holds;}\\ |h(v, w)| & \textrm{if Case~2.1.2 holds;} \\ 2.1 & \textrm{if Case~2.2.1 holds;} \\ 2.1 & \textrm{if Case~2.2.2 holds;} \\ 2.4 & \textrm{if Case~2.2.3.1 holds;}\\ 2.5 & \textrm{if Case~2.2.3.2 holds;}\\ 2.2 & \textrm{if Case~2.2.3.3 holds.}\\ \end{array} \right . $$ \\ Note that ${\cal D}$ is an adaptation of the Hamster Wheel of \cite{Schlechta5}. The reader can find a picture of $\cal D$ in Figure~2. \\ \begin{center} \epsfig{file=./wheelhammingNB.eps, height=8cm} \hfill \\ Figure 2: An adaptation of Hamster Wheel. \end{center} \hfill \\ Let $|$ be the binary operator on ${\cal P}({\cal V})$ defined as follows. \\ Let $V, W \subseteq {\cal V}$ and consider the cases that follow: \\ Case~1: $\forall v \in V$, $\forall w \in W, \lbrace v, w \rbrace \subseteq X$ or $3 \leq |h(v, w)|$. \\ Case~1.1: $V \cap X = \lbrace v_m, v_1 \rbrace$ and $W \cap X = \lbrace w_m, w_1 \rbrace$. \\ Case~1.2: $V \cap X = \lbrace w_m, w_1 \rbrace$ and $W \cap X = \lbrace v_m, v_1 \rbrace$. 
\\ Case~1.3: $\lbrace V \cap X, W \cap X \rbrace \not= \lbrace \lbrace v_m, v_1 \rbrace, \lbrace w_m, w_1 \rbrace \rbrace$. \\ Case~2: $\exists \: v \in V$, $\exists \: w \in W$, $\lbrace v, w \rbrace \not\subseteq X$ and $|h(v, w)| < 3$. \\ Then, $$ V | W = \left \{ \begin{array}{l l} \lbrace w_m \rbrace & \textrm{if Case~1.1 holds;}\\ \lbrace v_m \rbrace & \textrm{if Case~1.2 holds;}\\ V |_{\cal D} W & \textrm{if Case~1.3 or Case~2 holds.} \end{array} \right . $$ The difference between $|$ and $|_{\cal D}$ is sufficiently strong so that $|$ is not a distance operator. The proof is verbatim the same as for $(1)$ in the proof of Proposition~\ref{DISpascarac}. \\ Consequently, $| \not\in {\cal O}$, thus, by $(0)$, we get that \begin{description} \item[$(1)$] $\exists V_1, \ldots, V_n, W_1, \ldots, W_n \subseteq {\cal V}$, \\ $(V_1, \ldots, V_n, W_1, \ldots, W_n, V_1 | W_1, \ldots, V_n | W_n) \not\in \Phi$. \end{description} Moreover, we chose $m$ sufficiently big so that: \begin{description} \item[$(2)$] $\exists \: r \in [1, m-1]$, $\forall \: i \in [1, n]$, \\ $\lbrace V_i \cap X, W_i \cap X \rbrace \not= \lbrace \lbrace v_r, v_{r+1} \rbrace, \lbrace w_r, w_{r+1} \rbrace \rbrace$. \end{description} The proof is verbatim the same as for $(3)$ in the proof of Proposition~\ref{DISpascarac}, except that $V_i$ and $W_i$ are replaced by $V_i \cap X$ and $W_i \cap X$. \\ Let $|'$ be the binary operator on ${\cal P}({\cal V})$ defined as follows. \\ Let $V, W \subseteq {\cal V}$ and consider the cases that follow: \\ Case~1: $\forall v \in V$, $\forall w \in W$, $\lbrace v, w \rbrace \subseteq X$ or $3 \leq |h(v, w)|$. \\ Case~1.1: $V \cap X = \lbrace v_r, v_{r+1} \rbrace$ and $W \cap X = \lbrace w_r, w_{r+1} \rbrace$. \\ Case~1.2: $V \cap X = \lbrace w_r, w_{r+1} \rbrace$ and $W \cap X = \lbrace v_r, v_{r+1} \rbrace$. \\ Case~1.3: $\lbrace V \cap X, W \cap X \rbrace \not= \lbrace \lbrace v_r, v_{r+1} \rbrace, \lbrace w_r, w_{r+1} \rbrace \rbrace$. 
\\ Case~2: $\exists \: v \in V$, $\exists \: w \in W$, $\lbrace v, w \rbrace \not\subseteq X$ and $|h(v, w)| < 3$. \\ Then, $$ V |' W = \left \{ \begin{array}{l l} \lbrace w_{r+1} \rbrace & \textrm{if Case~1.1 holds;}\\ \lbrace v_{r+1} \rbrace & \textrm{if Case~1.2 holds;}\\ V | W & \textrm{if Case~1.3 or Case~2 holds.}\\ \end{array} \right . $$ The difference between $|'$ and $|$ is ``invisible'' for $\Phi$. \\ More formally, $\forall \: i \in [1, n]$, $V_i |' W_i = V_i | W_i$. \\ The proof is obvious by $(2)$. Thus, by $(1)$, we get: \\ $(V_1, \ldots, V_n$, $W_1, \ldots, W_n$, $V_1 |' W_1, \ldots, V_n |' W_n) \not\in \Phi$. \\ Therefore, by $(0)$, we get: \begin{description} \item[$(3)$] $|' \not\in {\cal O}$. \end{description} But, in parallel, there is a convenient pseudo-distance that represents $|'$. Indeed, let ${\cal D}'$ be the pseudo-distance on $\cal V$ such that ${\cal D}' = \langle \mathbb{R} \cup \lbrace |\mathbb{N}|\rbrace, \prec, d' \rangle$, where $d'$ is the function such that $\forall \: v, w \in {\cal V}$, $$ d'(v, w) = \left \{ \begin{array}{l l} \!\!\!2.3 & \!\!\!\textrm{if}\: \exists i \in [r+1, m], \lbrace v, w \rbrace = \lbrace v_i, w_i \rbrace;\\ \!\!\!d(v, w) & \!\!\!\textrm{otherwise.} \end{array} \right . $$ Note that $\forall \: v, w \in {\cal V}$, we have: \\ $|h(v,w)| \in \mathbb{N}$ iff $d(v,w) \in \mathbb{R}$ iff $d'(v,w) \in \mathbb{R}$. \\ Thus, $|h(v,w)| = |\mathbb{N}|$ iff $d(v,w) = |\mathbb{N}|$ iff $d'(v,w) = |\mathbb{N}|$. \\ Note again that $\forall \: v, w \in {\cal V}$, with $|h(v,w)| \in \mathbb{N}$, we have: \\ $|h(v, w)| \leq d'(v, w) \leq d(v,w) \leq |h(v, w)| + 0.5$. \\ We will show: \begin{description} \item[$(4)$] $|' = |_{{\cal D}'}$. \end{description} But, ${\cal D}'$ is symmetric, liberally IR, and liberally positive. \\ In addition, we will show: \begin{description} \item[$(5)$] ${\cal D}'$ is HIR; \item[$(6)$] ${\cal D}'$ is liberally TIR. 
\end{description} So, $|'$ is a symmetric, liberally IR, liberally positive, liberally TIR, and HIR distance operator. \\ Therefore, $|' \in {\cal N}$ and thus: \begin{description} \item[$(7)$] $|' \in {\cal O}$. \end{description} Finally, $(3)$ and $(7)$ entail a contradiction. \\ \\ {\it Proof of $(4)$}. Let $V, W \subseteq {\cal V}$. \\ Case~1: $\forall v \in V$, $\forall w \in W$, $\lbrace v, w \rbrace \subseteq X$ or $3 \leq |h(v, w)|$. \\ Case~1.1: $V \cap X = \lbrace v_r, v_{r+1} \rbrace$ and $W \cap X = \lbrace w_r, w_{r+1} \rbrace$. \\ Then, $V |' W = \lbrace w_{r+1} \rbrace = V |_{{\cal D}'} W$. \\ Case~1.2: $V \cap X = \lbrace w_r, w_{r+1} \rbrace$ and $W \cap X = \lbrace v_r, v_{r+1} \rbrace$. \\ Then, $V |' W = \lbrace v_{r+1} \rbrace = V |_{{\cal D}'} W$. \\ Case 1.3: $V \cap X = \lbrace v_m, v_1 \rbrace$ and $W \cap X = \lbrace w_m, w_1 \rbrace$. \\ Then, $V |' W = \lbrace w_m \rbrace = V |_{{\cal D}'} W$. \\ Case 1.4: $V \cap X = \lbrace w_m, w_1 \rbrace$ and $W \cap X = \lbrace v_m, v_1 \rbrace$. \\ Then, $V |' W = \lbrace v_m \rbrace = V |_{{\cal D}'} W$. \\ Case~1.5: $\lbrace V \cap X, W \cap X \rbrace \not\in$ \\ $\lbrace \lbrace \lbrace v_m, v_1 \rbrace, \lbrace w_m, w_1 \rbrace \rbrace$, $\lbrace \lbrace v_r, v_{r+1} \rbrace, \lbrace w_r, w_{r+1} \rbrace \rbrace \rbrace$. \\ Then, $V |' W = V | W = V |_{\cal D} W$. \\ Case~1.5.1: $V \cap W \not= \emptyset$. \\ Then, $V |_{\cal D} W = V \cap W = V |_{{\cal D}'} W$. \\ Case~1.5.2: $V \cap W = \emptyset$. \\ Case~1.5.2.1: $V \cap X = \emptyset$ or $W \cap X = \emptyset$. \\ Then, $\forall \: v \in V$, $\forall \: w \in W$, $d'(v, w) = d(v, w)$. \\ Therefore, $V |_{\cal D} W = V |_{{\cal D}'} W$. \\ Case~1.5.2.2: $V \cap X \not= \emptyset$ and $W \cap X \not= \emptyset$. \\ Then, we will show: \begin{description} \item[$(4.1)$] $V |_{\cal D} W = V \cap X |_{\cal D} W \cap X$; \item[$(4.2)$] $V |_{{\cal D}'} W = V \cap X |_{{\cal D}'} W \cap X$. 
\end{description} But, we have $V \cap X |_{\cal D} W \cap X = V \cap X |_{{\cal D}'} W \cap X$. \\ The proof of this is verbatim the same as for Case~5.3.2.2, in the proof of $(5)$, in the proof of Proposition~\ref{DISpascarac}, except that $V$ and $W$ are replaced by $V \cap X$ and $W \cap X$. \\ Case~2: $\exists \: v \in V$, $\exists \: w \in W$, $\lbrace v, w \rbrace \not\subseteq X$ and $|h(v, w)| < 3$. \\ Then, $V |' W = V | W = V |_{\cal D} W$. \\ Case~2.1: $V \cap W \not= \emptyset$. \\ Then, $V |_{\cal D} W = V \cap W = V |_{{\cal D}'} W$. \\ Case~2.2: $V \cap W = \emptyset$. \\ Case~2.2.1: $\exists \: v' \in V$, $\exists \: w' \in W$, $|h(v', w')| = 1$. \\ Then, $V |_{\cal D} W = \lbrace w \in W : \exists \: v \in V, |h(v, w)| = 1 \rbrace = V |_{{\cal D}'} W$. \\ Case~2.2.2: $\forall \: v' \in V$, $\forall \: w' \in W$, $|h(v', w')| \geq 2$. \\ Then, $V |_{\cal D} W =$ \\ $\lbrace w \in W : \exists \: v \in V, \lbrace v, w \rbrace \not\subseteq X$ and $|h(v, w)| = 2 \rbrace = V |_{{\cal D}'} W$.\hfill \\ \\ {\it Proof of $(4.1)$}. Direction: ``$\subseteq$''. \\ Let $w \in V |_{\cal D} W$. \\ Then, $\exists \: v \in V$, $\forall \: v' \in V$, $\forall \: w' \in W$, $d(v,w) \preceq d(v', w')$. \\ Case~1: $\lbrace v, w \rbrace \subseteq X$. \\ Then, $w \in V \cap X |_{\cal D} W \cap X$. \\ Case~2: $\lbrace v, w \rbrace \not\subseteq X$. \\ There are $v' \in V \cap X$ and $w' \in W \cap X$. \\ In addition, $d(v', w') \in \mathbb{R}$ and $d(v', w') \leq 2.5$. \\ Case~2.1: $|h(v,w)| = |\mathbb{N}|$. \\ Then, $d(v,w) = |\mathbb{N}|$. \\ Therefore, $d(v', w') \prec d(v, w)$, which is impossible. \\ Case~2.2: $|h(v,w)| \in \mathbb{N}$. \\ Then, $d(v, w) \in \mathbb{R}$ and $3 \leq |h(v, w)| \leq d(v,w)$. \\ Therefore, $d(v', w') < d(v, w)$. \\ Thus, $d(v', w') \prec d(v, w)$, which is impossible. Direction: ``$\supseteq$''. \\ Let $w \in V \cap X |_{\cal D} W \cap X$.
\\ Then, $\exists \: v \in V \cap X$ such that \\ $\forall \: v' \in V \cap X$, $\forall \: w' \in W \cap X$, $d(v, w) \preceq d(v', w')$. \\ Let $v' \in V$, $w' \in W$. \\ Case~1: $\lbrace v', w' \rbrace \subseteq X$. \\ Then, $d(v, w) \preceq d(v', w')$. \\ Case~2: $\lbrace v', w' \rbrace \not\subseteq X$. \\ As $v, w \in X$, we have $d(v, w) \in \mathbb{R}$ and $d(v, w) \leq 2.5$. \\ Case~2.1: $|h(v', w')| = |\mathbb{N}|$. \\ Then, $d(v', w') = |\mathbb{N}|$. Thus, $d(v, w) \prec d(v', w')$. \\ Case~2.2: $|h(v', w')| \in \mathbb{N}$. \\ Then, $d(v', w') \in \mathbb{R}$ and $3 \leq |h(v', w')| \leq d(v', w')$. \\ Therefore, $d(v,w) < d(v', w')$. \\ Thus, $d(v,w) \prec d(v', w')$. \\ Consequently, in all cases, $d(v, w) \preceq d(v', w')$. \\ Thus, $w \in V |_{\cal D} W$. \\ \\ {\it Proof of $(4.2)$}. Verbatim the proof of $(4.1)$, except that $|_{\cal D}$ and $d$ are replaced by $|_{{\cal D}'}$ and $d'$. \\ \\ {\it Proof of $(5)$}. Let $v, w, x \in {\cal V}$ with $|h(v, w)| < |h(v, x)|$. \\ Case~1: $|h(v, x)| = |\mathbb{N}|$. \\ Then, $|h(v,w)| \in \mathbb{N}$. \\ Thus, $d'(v, w) \in \mathbb{R}$ and $d'(v, x) = |\mathbb{N}|$. \\ Therefore, $d'(v, w) \prec d'(v, x)$. \\ Case~2: $|h(v,x)| \in \mathbb{N}$. \\ Then, $|h(v,w)| \in \mathbb{N}$. \\ Therefore, $d'(v,x) \in \mathbb{R}$, $d'(v,w) \in \mathbb{R}$, and $d'(v, w) \leq |h(v, w)| + 0.5 < |h(v, w)| + 1 \leq |h(v, x)| \leq d'(v, x)$. \\ Thus, $d'(v, w) \prec d'(v, x)$. \\ \\ {\it Proof of $(6)$}. Let $v, w, x \in {\cal V}$. \\ Note that $h(v,x) \subseteq h(v, w) \cup h(w,x)$. \\ Therefore, $|h(v,x)| \leq |h(v, w) \cup h(w,x)|$. \\ Case~1: $d'(v,x) = |\mathbb{N}|$. \\ Then, $|h(v,x)| = |\mathbb{N}|$. \\ Now, suppose $d'(v, w) \in \mathbb{R}$ and $d'(w, x) \in \mathbb{R}$. \\ Then, $|h(v,w)|, |h(w,x)| \in \mathbb{N}$. \\ Thus, $|h(v, w) \cup h(w, x)| \in \mathbb{N}$. \\ Therefore, $|h(v,x)| \in \mathbb{N}$, which is impossible. \\ Thus, $d'(v, w) = |\mathbb{N}|$ or $d'(w, x) = |\mathbb{N}|$. 
\\ Case~2: $d'(v,x), d'(v,w), d'(w,x) \in \mathbb{R}$. \\ Case~2.1: $|h(v, w)| = 0$ or $|h(w, x)| = 0$. Trivial. \\ Case~2.2: $|h(v, w)| \geq 1$ and $|h(w, x)| \geq 1$. \\ Case~2.2.1: $|h(v, w)| \geq 2$ or $|h(w, x)| \geq 2$. \\ Case~2.2.1.1: $|h(v, x)| \in \lbrace 0, 1, 2 \rbrace$. \\ Then, $d'(v, x) \leq |h(v, x)| + 0.5 \leq 2.5 < 3 \leq |h(v, w)| + |h(w, x)| \leq d'(v, w) + d'(w, x)$. \\ Case~2.2.1.2: $|h(v, x)| \geq 3$. \\ Then, $d'(v, x) = |h(v, x)| \leq |h(v, w)| + |h(w, x)| \leq d'(v, w) + d'(w, x)$. \\ Case~2.2.2: $|h(v, w)| = 1$ and $|h(w, x)| = 1$. \\ Case~2.2.2.1: $|h(v, x)| \in \lbrace 0, 1, 2 \rbrace$. \\ Then, $d'(v, x) \leq |h(v, x)| + 0.5 \leq 2.5 < 1.4 + 1.4 = d'(v, w) + d'(w, x)$. \\ Case~2.2.2.2: $|h(v, x)| \geq 3$. \\ Then, $|h(v, x)| > |h(v, w)| + |h(w, x)|$, which is impossible.\qed \end{proof} \section{Conclusion} \label{DISconclu} We focused on the question of whether $(\star loop)$ can be replaced by a finite condition in Proposition~\ref{DIScaracrevclass}. Obviously, the presence of $(\star loop)$ is due to the presence of $(| loop)$. So, to solve the problem one might attack its source, i.e. try to replace $(| loop)$ by a finite condition in Proposition~\ref{DISalgebraic}. But, we showed in the present paper that for families of distance operators, there is no normal characterization. The symmetric family falls within the scope of this result, and therefore $(| loop)$ cannot be replaced by a finite and universally quantified condition. Now, we can go further. Indeed, there is a strong connection between the distance operators and the distance-based revision operators. Lehmann {\it et al.} used this connection to get their results on the latter from their results on the former. It is reasonable to think that the same thing can be done with our negative results, i.e.\ this paper can certainly be continued in future work to show that for families of distance-based revision operators, there is no normal characterization either.
For instance, the family of symmetric, CP, and DP operators might well be concerned by such a result, which suggests that $(\star loop)$ cannot be replaced by a finite and universally quantified condition. In addition, this direction for future work can still be followed if we define the distance-based revision in a non-classical framework. Indeed, as Lehmann {\it et al.} did, we worked in a general framework. For instance, if we define the revision in the $\cal FOUR$ framework ---$\cal FOUR$ is a well-known paraconsistent logic from \cite{Belnap1} and \cite{Belnap2} --- then we can probably use the results of \cite{LehmannMagidorSchlechta1} and our results, respectively, to establish characterizations of revision operators and to show that these cannot really be improved. Moreover, most approaches to belief revision treat inconsistent sets of beliefs in a trivial way (if they treat them at all). However, people may be rational despite inconsistent beliefs (there may be overwhelming evidence for both something and its contrary). There are also inconsistencies that are in principle impossible to eliminate, like the ``Paradox of the Preface'' \cite{Makinson3}. The latter says that a conscientious author has reasons to believe that everything written in his book is true. But, because of human imperfection, he is sure that his book contains errors, and thus that something must be false. Consequently, he has (in the absolute sense) reasons to believe both that everything is true and that something is false. So, principles of rational belief revision must work on inconsistent sets of beliefs. Standard approaches to belief revision (e.g. AGM) all fail to do this as they are based on classical logic. Paraconsistent logics (such as $\cal FOUR$) could be the basis of more adequate approaches. Another advantage of such approaches is that they will not be forced to eliminate a contradiction even when there is no good way to do it.
Contradictions could be tolerated until new information eventually arrives to justify one way of elimination or another. Finally, such approaches would benefit from an extended field of application, including multi-agent systems in which individual agents can hold inconsistent beliefs. Furthermore, it is easy to see that these perspectives for belief revision can be transposed to belief merging. \bibliographystyle{aaai}
\section{Introduction} A Rayleigh-Taylor (RT) system is composed of two superposed layers of a single-phase fluid, with the lower layer lighter than the upper one, subject to an external gravity field $g>0$. In this system, the two layers mix together until the fluid reaches ``equilibrium'', characterized by a completely homogeneous environment with hydrostatic density/temperature profiles. Applications span a wide range of fields, such as astrophysics \cite{Cabot}, quantum physics related to immiscible Bose-Einstein condensates \cite{Sasaki,Kobyakov}, and ocean and atmospheric sciences \cite{Spalart,Gutman}. Historically, the first theoretical work on the stability of a stratified fluid in a gravitational field is due to Rayleigh in 1900 \cite{Rayleigh}, followed about forty years later by Taylor's work on the growth of perturbations between two fluids with different densities \cite{Taylor}. Since then, the Rayleigh-Taylor instability has been intensively studied theoretically, experimentally and numerically (see, e.g., the review of Dimonte {\it et al.} \cite{Dimonte}). Still, many problems remain open. The RT instability amounts to two main physics problems: the initial growth of perturbations between two layers of fluid with different densities, and the mixing problem related to the penetration of the perturbation front through the static fluid. The evolution of the mixing layer length, $L(t)$, follows a ``free-fall'' temporal law, $L(t)=\alpha{g}(At)t^2$, where $At=(\rho_1-\rho_2)/(\rho_1+\rho_2)$ is the {\it Atwood number}, which takes into account the density difference between the upper $(\rho_1)$ and lower $(\rho_2)$ layers, $g$ is the acceleration of gravity, and $\alpha$ is a dimensionless coefficient, the so-called {\it growth rate}. Recent works \cite{Dalziel, Cook} have suggested that the value of $\alpha$ may also depend on the initial conditions.
Besides the large-scale growth of the mixing layer, small-scale statistics have also attracted the attention of many groups in recent years, in 2d, 3d and {\it quasi} 2d-3d geometries \cite{chertkov,Biferale,boffettaprl,boffettapre}. Moreover, different setups have been investigated, including stratification \cite{Biferale2,Scagliarini,spiegel,frolich,robinson} and reaction \cite{biferaleepl,chertkovjfm}. In spite of the progress made so far, most of the work in this area is limited to the classical case of two-density fluids, while complex stratification effects have not been extensively investigated. In this paper we present the study of an RT system that is slightly different from the ones present in the literature: we focus on the spatio-temporal evolution of a single-component fluid initially prepared with three different density layers in hydrostatic unstable equilibrium (see Figure \ref{figure1}). Previous studies on triple-density RT were limited to the case of one unstable and one stable layer \cite{Jacobs}, focusing mainly on the entrainment by the unstable flow inside the stable one. On the contrary, we have a fully unstable density/temperature distribution with two unstable layers, and we need to take into account the nonlinear interactions of rising and falling plumes from each of the two developing mixing layers, in contrast to the case of the interaction of buoyant plumes with an interface \cite{Nokes}. Similarly, our case differs from the case of propagating fronts \cite{Bychkov,Bell,Modestov} because we do not have the extra effects induced by the deflagration velocity. The setup is given by a two-dimensional ($2d$) $L_x{\times}L_z$ tank of fluid split into three sub-volumes at three different --initially homogeneous-- temperatures $T_u<T_m<T_d$, each of them in hydrostatic equilibrium $\partial_{z}p_{0}(z)=-g\rho_{0}(z)$.
We enforce periodic boundary conditions in the horizontal direction.\\ The initial hydrostatic unstable configuration is therefore given by: \begin{equation} \begin{split} \begin{cases} T_{0}(z)=T_d, \hspace{2mm} \rho_0^d(z)=\rho_{d}\exp[-g(z-z_d)/T_d] & -L_z/2<z<-{\delta}/2 \\ T_{0}(z)=T_m, \hspace{2mm} \rho_0^m(z)=\rho_{m}\exp[-g(z-z_m)/T_m] & -{\delta}/2<z<{\delta}/2 \\ T_{0}(z)=T_u, \hspace{2mm} \rho_0^u(z)=\rho_{u}\exp[-g(z-z_u)/T_u] & {\delta}/2<z<L_z/2 \label{seteq} \end{cases} \end{split} \end{equation} where $\delta$ is the width of the middle layer at temperature $T_m$, and $z_u, z_m, z_d$ are three parameters fixing the overall geometry.\\ Assuming that in each domain we have $p_0(z)=T_0{\rho}_0(z)$, equilibrium requires the same pressure on both sides of each interface, which yields the following simple conditions on the above expressions: \begin{equation} \rho_0^{d}(-\delta/2)T_{d}=\rho_0^{m}(-\delta/2)T_{m}, \hspace{2mm} \rho_0^{m}(\delta/2)T_{m}=\rho_0^{u}(\delta/2)T_{u}. \end{equation} Since $T_u<T_m<T_d$, we have $\rho_{u}>\rho_{m}>\rho_{d}$, ensuring that the initial condition is unstable.\\ \begin{center} \begin{figure}[h!] \includegraphics[scale=0.4,angle=90]{Figure_1} \caption{Sketch of the initial configuration for the triple-temperature Rayleigh-Taylor system. The temperature in each of the three regions is constant, while the density follows a hydrostatic profile (eq.(\ref{seteq})). The temperature jump at each interface is smoothed by a tanh profile of the order of ten grid points. The bold and thin solid lines represent the temperature and density profiles, respectively.} \label{figure1} \end{figure} \end{center} From a phenomenological point of view, the problem we are going to study is the interaction between two turbulent fronts (one originating from the upper density jump and one from the lower) when they come into contact and then evolve together as a single mixed front.
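As a concrete check of these matching conditions, the following Python sketch (ours, not part of the simulation code; parameter values are merely illustrative, in the spirit of Table II) builds the three-layer hydrostatic state of eq.(\ref{seteq}) and verifies pressure continuity at the two interfaces together with the unstable density ordering:

```python
import math

# Illustrative parameters (not simulation data)
g, delta = 1.5e-5, 500.0
T_u, T_m, T_d = 0.975, 1.0, 1.025          # T_u < T_m < T_d (unstable setup)
rho_m = 1.0                                 # reference density, middle layer
z_u = z_m = z_d = 0.0                       # free geometric parameters

# Density prefactors fixed by requiring p0 = T0*rho0 continuous at z = +-delta/2
rho_d = rho_m * T_m / T_d * math.exp( 0.5 * g * delta * (1/T_m - 1/T_d))
rho_u = rho_m * T_m / T_u * math.exp(-0.5 * g * delta * (1/T_m - 1/T_u))

def rho0(z):
    """Piecewise hydrostatic density profile of eq. (seteq)."""
    if z < -delta/2:
        return rho_d * math.exp(-g * (z - z_d) / T_d)
    if z < delta/2:
        return rho_m * math.exp(-g * (z - z_m) / T_m)
    return rho_u * math.exp(-g * (z - z_u) / T_u)

def p0(z):
    """Ideal-gas pressure p0 = T0 * rho0 in each layer."""
    T = T_d if z < -delta/2 else (T_m if z < delta/2 else T_u)
    return T * rho0(z)

eps = 1e-9
assert abs(p0(-delta/2 - eps) - p0(-delta/2 + eps)) < 1e-8   # lower interface
assert abs(p0( delta/2 - eps) - p0( delta/2 + eps)) < 1e-8   # upper interface
assert rho_u > rho_m > rho_d                                  # heavy fluid on top
```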
Turbulent fluctuations are the driving force of the RT instability, so it is of interest to study how two turbulent flows interact. Moreover, a comparison with the one-front RT system is made in order to show how the presence of an ``intermediate'' well-mixed turbulent layer of fluid can alter the evolution of typical large-scale quantities, such as the mixing layer length and the velocity-temperature fluctuations. The study is carried out using a Lattice Boltzmann Thermal (LBT) scheme \cite{Biferale, Scagliarini} running on a GPU cluster, so our work is also interesting from the point of view of the several architecture-specific optimization steps that we have applied to our computer code in order to boost its computational efficiency. This paper is organized as follows: in section II we present the equations of motion; in section III we describe the details of the lattice Boltzmann model (LBM) formulation and its implementation on GPUs. Section IV presents the results of a large-scale analysis and compares them with the ``classical'' RT evolution. Conclusions close the paper in section V. \section{Equations of motion} The evolution of a compressible flow in an external gravity field is described by the following Navier-Stokes equations (repeated indices are summed over): \begin{equation} \begin{split} \begin{cases} D_{t}{\rho}=-\partial_{i}{({\rho}u_i)} \\ \rho{D_{t}}u_i=-\partial_{i}P-{\rho}g \, \delta_{i,z}+\mu{\partial_{jj}u_i} \\ \rho{c_p}D_tT-D_tP=k{\partial_{ii}T} \label{NS} \end{cases} \end{split} \end{equation} where $D_t$ is the material derivative, $\mu$ and $k$ are the molecular viscosity and thermal conductivity, respectively, $c_p$ is the specific heat at constant pressure, and $\rho$, $T$, $P$ and ${\bf u}$ are the thermo-hydrodynamical fields of density, temperature, pressure and velocity, respectively.
In the limit of small compressibility, the parameters depend weakly on the local thermodynamic fields, so expanding the pressure around its hydrostatic value, $P=p_0+p$, with $\partial_zp_0=-g\rho$ and $p \ll p_0$, and performing a small Mach number expansion, we can write equations (\ref{NS}) as: \begin{equation} \begin{split} \label{eq:nsf} \begin{cases} D_{t}{\rho}=-\partial_{i}{({\rho}u_i)} \\ D_tu_i=-\partial_ip/\rho+g\theta/\tilde T \delta_{i,z}+\nu{\partial_{jj}u_i}\\ D_tT-u_z\gamma=\kappa\partial_{ii}T \end{cases} \end{split} \end{equation} where $\tilde T$ is the mean temperature averaged over the whole volume, $\nu=\mu/\rho$ is the kinematic viscosity, $\kappa=k/(c_p\rho)$ is the thermal diffusivity and $\gamma=g/c_p$ is the adiabatic gradient for an ideal gas. From this approximation it is clear that only temperature fluctuations $\theta=T-\tilde T$ force the system; assuming the adiabatic gradient is negligible, $\gamma{\sim}0$, it is well known that, starting from an unstable initial condition as shown in Figure \ref{figure1}, any small perturbation will lead to a turbulent mixing of the cold and hot regions, developing along the vertical direction. If the adiabatic gradient is not negligible, the RT mixing does not proceed forever and stops when the mixing length becomes of the order of the stratification length scale, a further complexity that will not be studied here \cite{Biferale2}. \section{Numerical Method} \subsection{Thermal Kinetic Model} In this section, we recall the main features of the lattice Boltzmann model (LBM) employed in the numerical simulations; for full details we refer the reader to the works of \cite{Philippi,Sbragaglia,Scagliarini}.
For an ideal isothermal fluid, LBM \cite{gladrow,benzi,chen} can be derived from the continuum Boltzmann equation in the BGK approximation \cite{bhatnagar}, upon expansion in Hermite velocity space of the single-particle distribution function (PDF) $f(\bm{x},\zeta,t)$, which describes the probability of finding a particle at space-time location $(\bm{x},t)$ with velocity $\zeta$ \cite{he,martys,shan}. Discretization on the lattice is enforced by taking a discrete finite set of velocities $\zeta{\in}[\bm{c}_1,\bm{c}_2,...,\bm{c}_M]$, where the total number $M$ is determined by the embedding spatial dimension and the required degree of isotropy \cite{gladrow}. As a result, the dynamical evolution on a discretized spatial and temporal lattice is described by a set of populations $f_l(\bm{x},t)$ with $l=1,...,M$. In \cite{Philippi,Sbragaglia} it was shown that in two dimensions, with only one set of kinetic populations, we need $M=37$ fields (the so-called {\it D2Q37} model) to recover in the Chapman-Enskog limit the continuum thermal-hydrodynamical evolution given by eq.(\ref{NS}). The set of speeds is shown in Figure \ref{D2Q37}, while the discretized LBM evolution is given by \begin{equation} f_{l}({\bm x}+{\bm c}_l\Delta{t},t+\Delta{t})-f_{l}({\bm x},t)=-\frac{\Delta{t}}{\tau_{LB}}[f_{l}({\bm x},t)-f^{(eq)}_{l}]. \label{LBE} \end{equation} The left-hand side of eq.(\ref{LBE}) represents the streaming step of $f_l$, while the right-hand side represents the relaxation toward a local Maxwellian distribution function $f^{(eq)}_{l}$, with characteristic time $\tau_{LB}$.
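The relaxation on the right-hand side of eq.(\ref{LBE}) can be illustrated with a minimal Python sketch (a toy example with made-up populations and weights, not the D2Q37 scheme): deviations from the local equilibrium shrink geometrically by a factor $|1-\Delta t/\tau_{LB}|$ per step, and the density carried by the populations is conserved whenever the equilibrium carries the same density.

```python
import numpy as np

# Toy BGK collision at a single site; values are illustrative only.
dt, tau_LB = 1.0, 0.8
f_eq = np.array([0.4, 0.1, 0.1, 0.1, 0.1])   # frozen local equilibrium
f    = np.array([0.5, 0.1, 0.0, 0.1, 0.1])   # perturbed populations, same density

def collide(f, f_eq):
    """One BGK step: f <- f - (dt/tau_LB) * (f - f_eq), the rhs of eq. (LBE)."""
    return f - dt / tau_LB * (f - f_eq)

# Deviations decay geometrically with factor |1 - dt/tau_LB| = 0.25 per step
for _ in range(40):
    f = collide(f, f_eq)
assert np.abs(f - f_eq).max() < 1e-9
# Mass is conserved because f and f_eq carry the same density:
assert abs(f.sum() - f_eq.sum()) < 1e-12
```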
A novelty of this algorithm is that the equilibrium distribution function depends directly on the coarse-grained variables plus a shift due to the local body-force term \cite{Scagliarini,Philippi}: \begin{equation} f^{(eq)}_l=f^{(eq)}_l\left[\rho,{\bf u}+{\bf g}\tau_{LB}, T+\frac{\tau_{LB}(\Delta{t}-\tau_{LB})}{d}g^2 \right ]; \end{equation} the macroscopic fields are defined in terms of the lattice Boltzmann populations as follows: \begin{equation} \rho=\sum_{l}{f_{l}}, \hspace{0.2cm} {\rho}\,{\bold{u}}=\sum_{l}{\bold{c}_{l}f_{l}}, \hspace{0.2cm} d\,{\rho}\,T=\sum_{l}{\vert}{\bold{c}_{l}-\bold{u}}{\vert}^{2}f_{l} \end{equation} ($d$ is the space dimensionality). In \cite{Scagliarini,Sbragaglia}, it was shown that in order to avoid spurious terms due to lattice discretization and to recover the correct hydrodynamical description from the discretized lattice Boltzmann variables, momentum and temperature must be renormalized. This is obtained by taking for momentum and temperature the following expressions: $$ \bold{u}^{(H)}=\bold{u}+\frac{\Delta{t}}{2}\bold{g}; \qquad T^{(H)}=T+\frac{(\Delta{t})^2g^2}{4d}. $$ Using these renormalized hydrodynamical fields for a 2d geometry, it is possible to recover the standard thermo-hydrodynamical equations (\ref{eq:nsf}) through a Chapman-Enskog expansion \cite{Scagliarini,Sbragaglia}. \begin{center} \begin{figure}[h!] \includegraphics[scale=0.4,angle=0]{Figure_2} \hspace{-8mm} \caption{Scheme of the discretized set of $37$ velocities $c_l$ used by our LBM to recover the hydrodynamical behavior in the long-wavelength limit.
$r\sim{1.1969}$ is the lattice constant \cite{Scagliarini}.} \label{D2Q37} \end{figure} \end{center} \subsection{GPU optimized algorithm} We have optimized a code that implements the LBM described above, taking into account both performance on a single GPU and scaling on a fairly large number of GPUs; our runs have been performed on a cluster based on NVIDIA C2050/C2070 GPUs. GPUs have a large number of small processing elements working in parallel and performing the same sequence of operations; in CUDA, the programming language that we have used throughout, each sequence of instructions operating on different data is called a {\em thread}. Optimization focuses on three lines: i) organizing data in such a way that it can be quickly moved between GPU and memory; ii) ensuring that a large number of GPU threads operate independently on different data items with as little interference as possible; and iii) organizing data moves among GPUs so as to minimize the idle time of each GPU as it waits for data from another node.\\ Concerning data organization, we first split a lattice of size $L_x \times L_z$ over $N_p$ GPUs along the $x$ dimension; each GPU handles a sublattice of $L_x/N_p \times L_z$ points. Lattice data is stored in memory in column-major order and we keep two copies of the lattice in memory: at each time step, the code reads from one copy and writes to the other. This choice requires more memory, but it allows us to map one GPU thread per lattice site and then process all threads in parallel. Arrays of LB populations are stored in memory one after the other (usually referred to as Structure-of-Arrays [SoA]); this scheme helps {\em coalesce} memory accesses to different lattice sites that the GPU processes in parallel, increasing the effective memory bandwidth.
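The memory-layout argument can be made concrete with a short sketch (the index conventions below are our illustration, not the actual CUDA code): in the SoA layout, threads that process consecutive sites of the same population touch consecutive addresses, whereas an Array-of-Structures (site-major) layout would stride by $M$ between the same accesses, defeating coalescing.

```python
# Hypothetical linear-offset functions for the two layouts; sizes illustrative.
M, NX, NZ = 37, 8, 16

def soa_index(l, x, z):
    """SoA: all M population arrays one after the other, z fastest within each."""
    return l * NX * NZ + x * NZ + z

def aos_index(l, x, z):
    """AoS alternative: the M populations of one site are contiguous instead."""
    return (x * NZ + z) * M + l

# SoA: consecutive z-sites of the same population l -> consecutive words
offsets = [soa_index(5, 3, z) for z in range(NZ)]
assert all(b - a == 1 for a, b in zip(offsets, offsets[1:]))

# AoS: the same accesses stride by M -> uncoalesced loads on the GPU
strided = [aos_index(5, 3, z) for z in range(NZ)]
assert all(b - a == M for a, b in zip(strided, strided[1:]))
```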
\begin{center} \begin{figure} \includegraphics[width=0.6\textwidth]{Figure_3} \caption{(Color online) Tiling of the physical lattice with periodic boundary conditions in the $x$ direction on several processing nodes; the picture shows the halo columns that contain copies of the lattice points processed by the neighboring processors.} \label{fig:lattice} \end{figure} \end{center} The physical lattice is surrounded by halo columns and rows, see Figure \ref{fig:lattice}. For a physical lattice of size $L_x \times L_z$, we allocate a grid of $NX \times NZ$ lattice points, with $NX = H_x +L_x +H_x$ and $NZ = H_z +L_z +H_z$. $H_x, H_z$ are the sizes of the halos used to establish data continuity between GPUs working on adjoining sublattices and to enforce periodic boundary conditions in the $x$ direction. This makes the computation uniform for all sites, so we avoid thread divergences, which would badly impact performance. We have $H_x = 3$ and $H_z = 16$; the halo in $z$ is larger than needed by the algorithm, in order to keep data aligned (data words must be allocated in multiples of 32) and to maintain cache-line alignment in multiples of 128 Bytes.\\ As customary for GPUs, the host starts the execution of each time step, corresponding to four main kernels: first the {\tt periodic boundary conditions} step exchanges the halo columns; then three steps follow that implement (i) the free propagation {\tt (propagate)} expressed by the lhs of (\ref{LBE}), (ii) the rigid boundary conditions {\tt (bc)} at the top and bottom walls, and (iii) the collisions {\tt (collide)} in the rhs of (\ref{LBE}). Step {\tt propagate} moves each population of each site to a different lattice site, according to its velocity vector. In terms of computation, this corresponds to memory accesses at sparse addresses.
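A single-node Python sketch of the periodic halo fill may help fix ideas (our illustration; on the cluster these copies cross GPU boundaries, and only the $x$ halos of width $H_x=3$ are shown):

```python
import numpy as np

# Before streaming, the Hx halo columns on each side are filled with periodic
# copies of the opposite edge of the physical lattice. Sizes are illustrative.
Hx, Lx, NZ = 3, 10, 6
NX = Hx + Lx + Hx
grid = np.zeros((NX, NZ))
grid[Hx:Hx + Lx, :] = np.arange(Lx)[:, None]   # physical columns hold 0..Lx-1

def fill_halos(a):
    """Periodic copy into the x halos (single-node stand-in for the pbc step)."""
    a[:Hx, :]      = a[Lx:Hx + Lx, :]   # left halo  <- rightmost physical columns
    a[Hx + Lx:, :] = a[Hx:2 * Hx, :]    # right halo <- leftmost physical columns

fill_halos(grid)
assert (grid[0, :] == Lx - 3).all() and (grid[Hx - 1, :] == Lx - 1).all()
assert (grid[Hx + Lx, :] == 0).all() and (grid[-1, :] == 2).all()
```

After this fill, the streaming step can treat halo sites exactly like bulk sites, which is what keeps the computation uniform and divergence-free.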
Two options are available: {\em push} moves all populations from the {\em same} site to their appropriate destinations, while {\em pull} gathers populations from neighbor sites into the {\em same} destination; in the first case one has aligned memory reads and misaligned writes, while the opposite is true in the second case. We have tested both options and then settled on {\em pull}, which offers $\simeq 20\%$ higher bandwidth. Step {\tt bc} executes after {\tt propagate} in order to enforce boundary conditions at the top and bottom of the cell; it adjusts population values at sites with coordinates $z=0,1,2$ and $z=L_z -3, L_z-2$ and $L_z-1$; its computational impact is fully negligible, so we do not apply significant optimization steps here. Finally, {\tt collide} performs the collision phase of the LB procedure. Step {\tt collide} executes in parallel on a large number of threads: no synchronization is necessary because all threads read data from one copy of the lattice (the {\tt prv} array) and write to the other copy (the {\tt nxt} array); {\tt prv} and {\tt nxt} swap their roles at the following time step. Some care is needed to find the optimal number of threads: if this value is too small, available computing resources are wasted, while if it is too large there is not enough space to keep all intermediate data items in GPU registers and performance drops quickly.
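The {\em pull} variant of the streaming step can be sketched in a few lines of Python (a periodic toy grid with one illustrative velocity; {\tt np.roll} stands in for the GPU gather):

```python
import numpy as np

# Pull-style streaming: the new value at each site x is gathered from the
# site it streams in from, x - c_l (the lhs of eq. (LBE)).
Lx, Lz = 8, 8
c_l = (1, 0)                                   # one example lattice velocity

def pull_stream(f, c):
    # Destination x reads from source x - c: aligned writes, misaligned
    # reads -- the variant the text settles on for its higher bandwidth.
    return np.roll(f, shift=c, axis=(0, 1))

f = np.zeros((Lx, Lz))
f[0, 0] = 1.0                                  # a single marked "particle"
for _ in range(Lx):                            # Lx periodic hops along x...
    f = pull_stream(f, c_l)
assert f[0, 0] == 1.0 and f.sum() == 1.0       # ...bring it back home
```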
For each lattice site, {\tt collide} executes $\approx 7600$ double-precision floating-point operations (some of which can be optimized away by the compiler); $\approx 72\%$ of them can be cast as FMAs (fused multiply-adds), in which the operation $a \times b +c$ is performed in just one step.\\ \begin{table}[b] \begin{tabular}{l|r|rr|rr} \toprule - & QPACE & 2-WS & C2050 & 2-SB & K20X \\ \hline P (GFlops) & 15 & 60 & 172 & 166 & 412 \\ MLUPS & 1.9 & 7.7 & 22 & 21.7 & 124 \\ E/site ($\mu J$/site) & 56 & 34 & 10 & 12 & 4.2 \\ \hline \label{compare} \end{tabular} \caption{ Performance comparison of our LB code across several architectures, based on the results of \cite{iccs10,parcfd12,parcfd11}. 2-WS (2-SB) are Intel dual-processor systems based on the Westmere (Sandy Bridge) micro-architecture; C2050 (K20X) are NVIDIA GPUs based on the Fermi (Kepler) processors. } \end{table} We now consider the optimization steps for data exchange between different GPUs, i.e., for step {\tt pbc} of the program. In principle one first moves data from a strip of lattice points of one GPU to the halo region of the logically neighboring GPU, and then executes the streaming phase of the program (i.e. the {\tt propagate} function): this means that one has to wait until the data transfer has completed. The key remark is that fresh halo data is only needed when applying {\tt propagate} to a small number of lattice sites (close to the halos). We then divide {\tt propagate} into three concurrent {\em streams} (in CUDA jargon). One stream handles most lattice points (those far away from the halos), while the two remaining streams handle GPU-to-GPU communications followed by {\tt propagate} on the data strips close to the halos (one stream handles the halo columns at the right and the other those at the left).
In this way the complete program follows the time pattern shown in Figure \ref{fig:timeSequence}, and effectively hides almost all communication overheads (up to $32$ GPUs, the size of the machine that we have used for our runs). Over the years we have developed several versions of this code, optimized for a number of HPC systems. As early as 2010 we had a version \cite{iccs10,BiferalePTRS} for the QPACE massively parallel system~\cite{qpace}; more recently, we carefully optimized the code for multi-core commodity CPUs~\cite{parcfd12} and many-core GPUs \cite{parcfd11}. Table (I) summarizes our performance results. This table covers three generations of HPC processors: the early PowerXCell 8i of QPACE (2008), the Intel Westmere CPU and the NVIDIA C2050 (2010), and the Intel Sandy Bridge CPU and NVIDIA K20 (2013). Note that for each processor we consider a code specifically optimized for its architecture, exploiting several levels of parallelism (vector instructions and core parallelism). Table (I) shows the sustained performance of the full code, the number of lattice sites updated per second (MLUPS, a user-friendly figure of merit for performance) and the approximate energy used to update one site. Our numbers refer to the performance of just one processing node: this is a reasonable approximation to actual performance up to $\simeq 32$ nodes, since communication overheads can be successfully hidden, as discussed above. Table (I) shows a significant improvement in performance as newer processors appear; it is interesting to underline that -- at any given time -- GPUs offer 2-3X better performance than multi-core CPUs, while at the same time being roughly 3X more energy efficient. Further optimizations that we have applied to our code after the completion of the present work offer even higher performance: a system of two CPUs and two K20X GPUs now breaks the 1 Tflops barrier for single-node sustained double-precision performance, see \cite{sbacpad13}.
We conclude this section by underlining that the set of optimization steps that we have applied to our 2D code is expected to be equally efficient in 3D for the code running on each processing node. In that case, memory requirements are larger, so it may be necessary to use a larger number of processing nodes, and multi-node scaling would have to be studied carefully. \begin{center} \begin{figure}[b] \includegraphics[width=0.6\textwidth]{Figure_4} \caption{(Color online) Time schedule of the main steps of the complete program; the picture shows that the time associated with data transfer between processing elements can be overlapped with computation.} \label{fig:timeSequence} \end{figure} \end{center} \section{Data and Analysis} Here we present the results obtained from the numerical simulations of the 2d $L_x{\times}L_z$ RT system, placing in the middle of the vertical domain a layer of depth $\delta$ at an intermediate temperature $T_m=(T_u+T_d)/2$. For comparison, we also perform simulations of the usual RT configuration, i.e., with $T=T_u$ on the upper half and $T=T_d>T_u$ on the lower half of the cell. In this study we limit ourselves to the case of negligible stratification. Within each layer, temperature values are chosen such that $At \ll 1$. Nevertheless, it is important to stress that the algorithm is also applicable to strongly stratified flows \cite{BiferalePTRS}. All physical parameters are listed in Table (II).
\begin{table} \begin{tabular}{l*{15}{c}} \toprule {Case} & {\em $At$\/} & {\em $At_u$\/} & {\em $At_d$\/} & {\em $ L_x$\/} & {\em $L_z$\/} & {\em $\delta$\/} & {\em g\/} & {\em $T_u$\/} & {\em $T_m$\/} & {\em $T_d$\/} & {\em $N_{conf}$\/} & {\em $L_{ad}$\/} & {\em $Ra$\/} & {\em $Re$\/} & {\em $Pr$\/}\\ \hline single-front & 0.025 & & & 2400 & 6144 & & $1.5{\times}10^{-5}$ & 0.975 & & 1.025 & 7 & 6666 & $10^{10}$ & $6{\times}10^4$ & 1\\ double-front & 0.025 & 0.0126 & 0.0123 & 2400 & 6144 & 500 & $1.5{\times}10^{-5}$ & 0.975 & 1 & 1.025 & 7 & 6666 & $10^{10}$ & $6{\times}10^4$ & 1\\ \hline \label{tablerun} \end{tabular} \caption{ Parameters of the RT runs. Total Atwood number, $At=(T_d-T_u)/(T_d+T_u)$; upper Atwood number, $At_u=(T_m-T_u)/(T_m+T_u)$; lower Atwood number, $At_d=(T_d-T_m)/(T_d+T_m)$; gravity, $g$; temperature in the upper region, $T_u$; temperature in the middle region, $T_m$; temperature in the lower region, $T_d$; number of independent RT evolutions, $N_{conf}$; adiabatic length, $L_{ad}=2{\Delta}T/g$; maximum Rayleigh number, $Ra=\Delta{T}L_z^3g/{\nu}{\kappa}$; Reynolds number, $Re=VL_z/{\nu}$ ($V=<v_z^2>_x^{1/2}$); Prandtl number, $Pr=\nu/{\kappa}$.} \end{table} The initial width of the intermediate layer is chosen as $\delta{\sim}L_z/10$, in order to avoid possible confining effects of the vertical boundaries during the merging of the middle temperature layer.
\begin{center} \begin{figure*} \includegraphics[scale=0.15,angle=0]{Figure_5} \hspace{4mm} \includegraphics[scale=0.15,angle=0]{Figure_6} \hspace{4mm} \includegraphics[scale=0.15,angle=0]{Figure_7} \hspace{4mm} \includegraphics[scale=0.15,angle=0]{Figure_8} \hspace{4mm} \includegraphics[scale=0.15,angle=0]{Figure_9} \hspace{4mm} \includegraphics[scale=0.15,angle=0]{Figure_10} \caption{(Color online) Snapshots of the temperature (top row), temperature gradients (middle row) and vertical velocity (bottom row) for the double-front (left) and single-front (right) RT at $t/\tau=0,2,3$, with $\tau=\sqrt{L_x/(gAt)}$.} \label{figureT} \end{figure*} \end{center} Snapshots of the temperature, temperature gradient and vertical velocity are shown in Figure \ref{figureT}, taken at three different times, $t=(0,2,3)\,\tau$, during the evolution; ${\tau}=\sqrt{L_x/(gAt)}$ is the typical RT normalization time. Vertical temperature profiles $${\overline{T}}(z)=\frac{1}{L_x}{\int dx T(x,z,t)}$$ are also shown in Figure \ref{profili1}, comparing the double-front RT with the single-front RT. The triple-density fluid starts to mix under the effect of the instability, causing the development of two fronts at the two temperature/density interfaces. These two fronts continue to mix separately the region of the flow between $T_u$ and $T_m$ (referred to as the upper front) and the region between $T_m$ and $T_d$ (referred to as the lower front), until they come into contact at $t {\sim} 2.5 \tau$. At this time, the $T_m$ layer of fluid is greatly eroded and the flows at temperatures $T_u$ and $T_d$ start to interact. Unlike the classical RT system, in this case we also have the action of the turbulent viscosity exerted by one front on the other. This effect slows down the growth of the mixing layer in the double-front experiment with respect to the one-front case. \begin{center} \begin{figure}[h!]
\includegraphics[scale=0.35,angle=270]{Figure_11} \includegraphics[scale=0.35,angle=270]{Figure_12} \caption{Temperature profiles for the double-front case (left) and single-front case (right) at various times.} \label{profili1} \end{figure} \end{center} A useful quantity to describe the extension of the mixing layer is the {\it mixing length} $L(t)$, defined as the region where the mean temperature profile is within a given range, e.g. $\overline{T}(z) \, \in [(1+a)\,T_{u}:(1-a)\,T_{d}]$ (typically $a=0.05(T_d-T_u)$). An alternative way to evaluate $L(t)$ uses the following integral law \cite{Cabot}: \begin{equation} L(t)={\int dz{\Theta}\biggl[\frac{\overline{T}(z,t)-T_u}{T_d-T_u}\biggr]} \end{equation} where ${\Theta}[b]$ is a function with a tent-map profile: \begin{equation} \begin{split} \begin{cases} {\Theta}[b]=2b & 0<b<1/2, \\ {\Theta}[b]=2(1-b) & 1/2<b<1.\\ \end{cases} \end{split} \end{equation} There is a vast literature based on observations, dimensional analysis and self-similarity assumptions \cite{Read,Youngs} showing that the mixing layer length follows a quadratic evolution in time: \begin{equation} L(t)={\alpha} (At) g \, t^{2} \label{eqL} \end{equation} where $\alpha$ is a dimensionless parameter named the ``growth rate''. Squaring the time derivative of eq.(\ref{eqL}), we obtain the following self-similar scaling \cite{Ristorcelli}: \begin{equation} [\dot{L}(t)]^2=4{\alpha}g (At) \,L(t). \label{eqL2} \end{equation} For our double-front problem, we have to write an expression for $L(t)$ that takes into account the different nature of the system with respect to the classical case. For this purpose we use two distinct mixing lengths, one for the upper front, $L_u(t)$, and one for the lower front, $L_d(t)$, defined as: \begin{equation} \begin{split} L_u(t)={\int dz{\Theta}\biggl[\frac{\overline{T}(z,t)-T_u}{T_m-T_u}\biggr]},\\ L_d(t)={\int dz{\Theta}\biggl[\frac{\overline{T}(z,t)-T_m}{T_d-T_m}\biggr]}.
\label{3L} \end{split} \end{equation} Starting from eq.(\ref{eqL}), we can write its double-front counterpart as \begin{equation} L_u(t)={\alpha}_u(At_u)g \, t^{2},\qquad L_d(t)={\alpha}_d(At_d)g \, t^{2} \label{3L2} \end{equation} where $At_u=(T_m-T_u)/(T_m+T_u)$ and $At_d=(T_d-T_m)/(T_d+T_m)$ are the upper and lower Atwood numbers and $\alpha_u$ and $\alpha_d$ are the upper and lower growth rates, respectively. Within the double-mixing-length approach, we can define the total mixing length of the fluid as the sum of the upper and lower components, $L_{tot}(t)=L_u(t)+L_d(t)$. Using the previous expressions (\ref{3L2}) we have \begin{equation} \label{eqtot} L_{tot}(t)=\alpha_ug(At_u)\,t^2+\alpha_dg (At_d)\,t^2 \equiv {\alpha_{tot}}g (At)\,t^2 \end{equation} where we have introduced the total growth rate for this double-layer case, $ \alpha_{tot} = (\alpha_u At_u +\alpha_d At_d)/At$. An important advantage of eq.(\ref{eqtot}) is that it is local in time, so we may extract the coefficient ${\alpha}_{tot}$ by a simple evaluation of the plateau in the ratio $\dot{L}_{tot}^2/L_{tot}$ at each time. \begin{center} \begin{figure}[h!] \includegraphics[scale=0.35,angle=0]{Figure_13} \includegraphics[scale=0.35,angle=0]{Figure_14} \caption{Evolution of the mixing layer length $L_{tot}(t/\tau)$ (left) and of the asymptotic growth rate $\alpha_{tot}(t/\tau)$ (right) for the double-front (squares) and single-front (circles) RT.} \label{mixing} \end{figure} \end{center} In Figure \ref{mixing} we show the temporal evolution of the mixing length $L_{tot}(t)$ and of the growth parameter $\alpha_{tot}(t)$ for the double-front and single-front RT. In order to make the analysis coherent, we apply the double tent-map formulation (\ref{3L}) also to the single-front case, where the middle-temperature layer $T_m$ is now ``virtual''.
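The tent-map mixing-length diagnostic can be sketched in a few lines of Python (our illustration, applied to a synthetic mean profile rather than simulation data; for a mean profile with a linear ramp of half-width $h$ the integral gives exactly $L=h$ in the continuum):

```python
import numpy as np

# Tent-map mixing-length diagnostic applied to a synthetic profile.
T_u, T_d = 0.975, 1.025                 # illustrative values, as in Table II
Lz, Nz = 6144.0, 6144
z = np.linspace(-Lz/2, Lz/2, Nz)
dz = z[1] - z[0]

def theta(b):
    """Tent map: 2b on [0,1/2], 2(1-b) on [1/2,1], zero outside."""
    b = np.clip(b, 0.0, 1.0)
    return np.where(b < 0.5, 2 * b, 2 * (1 - b))

def mixing_length(T_bar):
    """L = integral of Theta[(T_bar - T_u)/(T_d - T_u)] dz (rectangle rule)."""
    b = (T_bar - T_u) / (T_d - T_u)
    return np.sum(theta(b)) * dz

# Idealized mean profile: hot below, cold above, linear ramp of half-width h.
h = 500.0
T_bar = np.interp(z, [-h, h], [T_d, T_u])   # clamps to T_d / T_u outside ramp
L = mixing_length(T_bar)
assert abs(L - h) / h < 1e-2                 # continuum value is exactly L = h
```

The same routine, applied to the two normalized profiles with $T_m$ as intermediate reference, yields $L_u(t)$ and $L_d(t)$; $\alpha_{tot}$ then follows from the plateau of $\dot{L}_{tot}^2/(4 g\, At\, L_{tot})$.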
Inspection of Figure \ref{mixing} shows that -- as long as the two fronts evolve separately -- the mixing length has the same shape in the single- and double-front cases, with similar values of $\alpha_{tot}$, which is good evidence that in this regime the two fronts proceed independently and in a self-similar way. Differences arise after the two fronts come into contact and start to interact. As we can see, the mixing rate of the double-front RT decreases with respect to the single-front one, as is visible from the drops of the $L_{tot}$ and $\alpha_{tot}$ curves in Figure \ref{mixing} around $t/\tau \sim 3$. Evidently, the turbulent viscosity generated by one front acts on the other, slowing its propagation, as the system spends a certain amount of energy to fill the temperature/energy gap present between the two fronts (see Figure \ref{temperature} and Figure \ref{kinetic} for a one-to-one comparison of the temperature and kinetic energy fluctuations for the single-front and double-front cases). For the classical RT system it is well known that temperature fluctuations remain constant during the development of convection (as we can see in the right panel of Figure \ref{temperature}); this also holds for the double-front case as long as the two turbulent volumes are separated, because we can treat each of them as an independent RT system. When they start to merge, the temperature fluctuations in the region at the center of the cell, between the two fronts, must be ``re-ordered'' and brought to the same level as the two peaks, slowing the vertical growth of the mixing layer. After the two fronts are completely merged at $t/\tau{\sim}4$, the double-front system behaves like a classical one-front RT with temperatures $T_u$ and $T_d$, and the mixing layer length starts to increase again at the expected rate, $L_{tot}\, {\sim}\, t^2$. \begin{center} \begin{figure}[h!]
\includegraphics[scale=0.35,angle=0]{Figure_15} \includegraphics[scale=0.35,angle=0]{Figure_16} \caption{Temperature fluctuations $\sqrt{\langle({T-\langle{T}\rangle_x})^2\rangle_x}$ for the double-front (right) and single-front (left) RT.} \label{temperature} \end{figure} \end{center} \begin{center} \begin{figure}[h!] \includegraphics[scale=0.35,angle=0]{Figure_17} \includegraphics[scale=0.35,angle=0]{Figure_18} \caption{Kinetic energy fluctuations $\sqrt{\langle({E_k-\langle{E_k}\rangle_x})^2\rangle_x}$ for the double-front (left) and single-front (right) RT.} \label{kinetic} \end{figure} \end{center} \section{Conclusions} In this paper, we have studied 2d Rayleigh-Taylor turbulence with an intermediate temperature layer in the middle of the vertical domain. The goal of this paper is twofold. First, we have developed a highly optimized GPU-based thermal Lattice Boltzmann algorithm to study hydrodynamical and thermal fluctuations in turbulent single-phase fluids. Second, we have applied it to study the evolution (and collision) of two turbulent RT fronts. Our results clearly show that at the moment of the collision the vertical evolution of the two-front system slows down and that the long-time evolution is recovered only when a well-mixed region is present in the center of the mixing layer. This observation clearly shows the importance of the outer environment for the evolution of any RT system. Different initial temperature profiles would have led to a different time for the collision between the two fronts and to a different duration of the intermediate mixing period (where the slow-down is observed). We have checked that the maximum effect is obtained in the configuration analyzed here, i.e. when two fronts of comparable kinetic energy collide. In the case when one of the two fronts is much stronger, $At_u \gg At_d$, the non-linear superposition becomes less important.
Furthermore, a closer look at the spatial configurations for the one-front and two-front cases presented in Figure \ref{figureT} demonstrates that the large-scale behavior recovers a universal evolution in the asymptotic regime while the temperature and velocity fluctuations at small scales still show important differences, suggesting the possibility of a long-term memory of the initial configuration for high-wave-number modes. For completeness, we point out that the results presented here are related to the case of ``white noise'' initial conditions at the RT unstable interface with low values of the Atwood number. Differences can arise for single-mode initial conditions and if the Atwood number is close to unity; in that case the nonlinear RT instability may be initially described as large-scale bubbles rising up in the heavy fluid with or without mixing, the latter depending on the strength of the secondary Kelvin-Helmholtz instability \cite{Ramaprabhu,Oparin,Bychkov2}. A further generalization of this study to full 3d geometries, together with an analysis of the small-scale properties of the system in the region where the two fronts collide, will be presented in future work. \section*{Acknowledgments} We would like to thank CINECA (Bologna, Italy) for the use of their GPU-based computer resources, in the framework of the ISCRA Access Programme. This work has been supported by the SUMA project of INFN. L.B. acknowledges partial funding from the European Research Council under the European Community's Seventh Framework Programme, ERC Grant Agreement N. 339032.
\section{Introduction} \label{sec:intro} It has long been known, in the case of three light quarks, that there is a transition to a CP-violating phase for non-degenerate quarks when one of the quark masses becomes sufficiently negative~\cite{Dashen:1970et}. For example, using leading order (LO) SU(3) chiral perturbation theory (\raise0.4ex\hbox{$\chi$}PT), and fixing $m_d$ and $m_s$, the transition occurs when $m_u=-m_d m_s/(m_d+m_s)$~\cite{Creutz:2003xu}. The neutral pion becomes massless on the transition line, and within the new phase the chiral order parameter, $\langle\Sigma\rangle$, becomes complex. For physical QCD this is mostly a curiosity, since increasingly accurate determinations of the quark masses indicate clearly that all are positive relative to one another~\cite{Beringer:1900zz,Aoki:2013ldr}. Thus physical QCD, despite the non-degeneracy of the up and down quarks, lies away from the critical line. For lattice QCD (LQCD), however, the situation is less clear. The position of the transition can be shifted closer to the physical point by discretization effects. Indeed, it is well known that, with degenerate Wilson-like fermions,\footnote{% ``Wilson-like'' indicates that the analysis holds for both Wilson fermions and various improvements thereof, in particular for non-perturbatively ${\cal O}(a)$-improved Wilson fermions.} discretization effects can lead to the appearance of a new phase---the Aoki phase---in which isospin is spontaneously broken and $\langle\Sigma\rangle$ is complex~\cite{Aoki:1983qi,Sharpe:1998xm}. In addition, advances in simulations now allow calculations to be done at the physical light-quark masses, including, very recently, the physical non-degeneracy between up and down quarks~\cite{Borsanyi:2014jba}. It is thus natural to ask how, in LQCD with non-degenerate quarks, discretization effects change the position and nature of the CP-violating phase. 
This question is particularly acute in the case of twisted-mass fermions, where additional symmetry breaking is explicitly included. In this paper we address this question for Wilson-like and twisted-mass lattice fermions. We do so using \raise0.4ex\hbox{$\chi$}PT, specifically the versions of \raise0.4ex\hbox{$\chi$}PT\ in which the effects of discretization have been included. Our work also allows us to address a related issue: In what way is the CP-violating phase of the continuum theory related to the Aoki phase of the lattice theory?\footnote{% This issue has been raised previously by Mike Creutz and his conjectured answer is confirmed by the present analysis~\cite{Creutz:2014em}.} Since twisted-mass QCD is only defined for even numbers of fermion flavors~\cite{Frezzotti:2003ni}, a necessary step for our work is to rephrase the continuum SU(3) \raise0.4ex\hbox{$\chi$}PT\ analysis of Ref.~\cite{Creutz:2003xu} in the two-flavor theory obtained by integrating out the strange quark. This requires that the contributions of one of the next-to-leading order (NLO) low-energy coefficients ($\ell_7$) be treated as parametrically larger than the others. Thus we are led to a somewhat non-standard power-counting, but one which reproduces the SU(3) phase diagram, including the CP-violating phase, within SU(2) \raise0.4ex\hbox{$\chi$}PT. This approach has been used before along the line $m_u=-m_d$~\cite{Smilga:1998dh}; here we extend the analysis to arbitrary mass splitting. Similar work has also been done recently in the context of an effective theory including the $\eta$ meson~\cite{Aoki:2014moa}. The organization of this article is as follows. In Sec.~\ref{sec:vacuum} we briefly recall the results for the phase structure and pion masses at LO in SU(2) and SU(3) \raise0.4ex\hbox{$\chi$}PT, and show how they differ. Section~\ref{sec:matching} describes the matching of SU(3) and SU(2) \raise0.4ex\hbox{$\chi$}PT.
In Sec.~\ref{sec:disc}, we recall briefly how discretization effects are incorporated in \raise0.4ex\hbox{$\chi$}PT\ for degenerate Wilson-like fermions, and the resulting phase structure. We then present our first new results: the phase diagram including both discretization effects and non-degeneracy. In Sec.~\ref{sec:twist} we move on to twisted-mass fermions, focusing first on the phase diagram and pion masses in the case of maximal twist, where most simulations have been done because of the property of automatic $\mathcal{O}(a)$ improvement~\cite{Frezzotti:2003ni}. It is nevertheless interesting to understand how the results with untwisted and maximally twisted fermions are connected, and so, in Sec.~\ref{sec:arbtwist}, we discuss the phase diagram for general twist. Up to this stage, our analysis is done using the LO terms due to the average quark mass, discretization effects and non-degenerate quark masses. To understand how robust the results are, we consider, in Sec.~\ref{sec:higher}, the impact of including the next higher order terms in our power counting. Some conclusions are collected in Section \ref{sec:concl}. \section{Continuum Vacuum Structure at leading order in \raise0.4ex\hbox{$\chi$}PT} \label{sec:vacuum} In this section we review the vacuum structure predicted by LO \raise0.4ex\hbox{$\chi$}PT\ for both two and three light flavors. The LO chiral Lagrangian in Euclidean space-time is, for any number of light flavors, \begin{equation} \mathcal{L}_\chi = \frac{f^2}{4}\tr\left[ \partial_\mu \Sigma \partial_\mu \Sigma^\dagger -(\chi\Sigma^\dagger+\Sigma\chi^\dagger)\right]\,, \end{equation} where $\Sigma\in SU(N_f)$ and $\chi=2B_0 M$ (with $M$ the mass matrix), while $f\sim 92\;$MeV and $B_0$ are low-energy constants (LECs). For two light flavors the chiral order parameter can be parametrized as $\langle\Sigma\rangle=\exp(i\theta \hat n\cdot \vec\tau)$.
Although the mass matrix $M={\rm diag}(m_u,m_d)$ has both singlet and triplet components, the leading order potential depends only on the former \begin{equation} \mathcal{V}_{SU(2),\,LO} = -\frac{f^2}{4}\tr\left[\chi\Sigma^\dagger+\Sigma\chi^\dagger\right] = -\frac{f^2}{2}\cos{\theta}\tr[\chi] \equiv -f^2 \cos{\theta}\,\chi_\ell\,. \end{equation} In the last step we have defined the convenient quantity $\chi_\ell=B_0 (m_u+m_d)$. The potential is minimized at $\theta=0$ if $\chi_\ell >0$ and at $\theta=\pi$ if $\chi_\ell<0$, resulting in the phase diagram sketched in Fig.~\ref{fig:SU2LO}. In terms of the behavior of the condensate, this is a first-order phase transition at which the condensate flips sign. This characterization is somewhat misleading, however, because the two sides of the transition are related by a non-anomalous flavor rotation. Such a transformation can change $M \to -M$ and $\Sigma\to-\Sigma$, while leaving physics unchanged. Thus by adding an extra dimension to the phase diagram (as we will do later) one finds that the two sides are connected. Expanding the potential about its minimum, using $\Sigma=\langle \Sigma\rangle\exp(i\vec \pi\cdot \vec \tau/f)$ we find the standard LO result for the pion masses, $m^2_\pi=|\chi_\ell|$. These thus vanish along the phase transition line. That they vanish at the origin follows from Goldstone's theorem due to the spontaneous breaking of the exact axial symmetry. That they vanish away from the origin along the transition line is not expected from symmetry arguments, and indeed holds, as we will see, only at LO in \raise0.4ex\hbox{$\chi$}PT. \begin{figure}[tb!] \centering \includegraphics[scale=.3]{SU2LOlabeled.png} \caption{\label{fig:SU2LO} Phase diagram at lowest order in SU(2) \raise0.4ex\hbox{$\chi$}PT.} \end{figure} The phase diagram of the three-flavor theory has a more interesting structure, as elucidated most extensively by Creutz~\cite{Creutz:2003xu}. 
Since $m_s\gg m_u,m_d$ in nature, it is natural to hold $m_s$ fixed and vary the other two quark masses. The resulting phase diagram at LO is sketched in Fig.~\ref{fig:SU3LO}. The ``normal'' region, in which $\langle\Sigma\rangle=\mathbb 1$, ends at a transition line along which $m_{\pi^0}$ vanishes. This occurs (for fixed $m_s>0$) when one of the other masses, say $m_u$, becomes sufficiently negative. The explicit expression for the neutral pion mass in this phase is \begin{equation} m_{\pi^0\, SU(3)}^2=\frac{2}{3}B_0\left(m_u+m_d+m_s - \sqrt{m_u^2+m_d^2+m_s^2-m_u m_d-m_u m_s-m_d m_s}\right)\,, \label{eq:mpi0su3} \end{equation} which vanishes when $m_u=-m_d m_s/(m_d + m_s)$. The charged pions remain massive throughout the normal phase except at the origin. \begin{figure}[bt!] \centering \includegraphics[scale=.3]{SU3LOlabeled.png} \caption{\label{fig:SU3LO} Phase diagram at lowest order in SU(3) \raise0.4ex\hbox{$\chi$}PT\ with fixed strange quark mass. Equations for the positions of phase transition lines are given in the text.} \end{figure} Moving outside the normal phase one enters a CP-violating phase in which the condensate is complex. The explicit form is \begin{equation} \left<\Sigma\right>=\begin{pmatrix} \exp{i\phi} & 0 & 0 \\ 0 & \exp{i\psi} & 0 \\ 0 & 0 & \exp{-i(\phi+\psi)} \end{pmatrix} \end{equation} where the phases satisfy \begin{equation} m_u\sin{\phi}=m_d\sin{\psi}=-m_s\sin{(\phi+\psi)} \,. \end{equation} In this case there is a genuine phase transition at the boundary. It is of second order: $\langle\Sigma\rangle$ is continuous, and a single pion becomes massless. The phase diagram is symmetric under both $m_u\leftrightarrow m_d$ interchange and inversion through the origin (with $m_s$ fixed). Inversion is brought about by a non-anomalous axial isospin transformation, which also changes the condensate as shown in Fig.~\ref{fig:SU3LO}. 
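As a quick numerical sanity check of Eq.~(\ref{eq:mpi0su3}) (the masses below are illustrative, in arbitrary units, not the physical values), one can verify directly that $m_{\pi^0}^2$ vanishes on the critical line $m_u=-m_d m_s/(m_d+m_s)$ and is positive in the normal phase:

```python
import math

def mpi0_sq_su3(mu, md, ms, B0=1.0):
    """LO SU(3) ChPT neutral pion mass squared, Eq. (mpi0su3)."""
    s = mu + md + ms
    r = math.sqrt(mu*mu + md*md + ms*ms - mu*md - mu*ms - md*ms)
    return (2.0 / 3.0) * B0 * (s - r)

# Illustrative masses in arbitrary units, with ms >> mu, md
md, ms = 5.0, 100.0
mu_crit = -md * ms / (md + ms)      # critical line quoted in the text

print(mpi0_sq_su3(mu_crit, md, ms))     # ~ 0 on the transition line
print(mpi0_sq_su3(2.5, md, ms) > 0.0)   # positive in the normal phase
```

Algebraically, $m_{\pi^0}^2=0$ requires $(m_u+m_d+m_s)^2$ to equal the radicand, i.e.\ $m_um_d+m_um_s+m_dm_s=0$, which is exactly the stated critical line.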
We note that the CP-violating region is of finite width.\footnote{% The theory along the $m_u=-m_d$ diagonal is identical to that with $m_u=m_d$ at $\theta_{\rm QCD}=\pi$, and has been discussed extensively in the literature. In particular, a \raise0.4ex\hbox{$\chi$}PT\ analysis of this theory has been given in Ref.~\cite{Smilga:1998dh}.} Specifically, as one moves away from the origin along the $m_u=-m_d$ diagonal, the width of this region grows proportionally to $(m_u\!-\!m_d)^2/m_s$. As the figure shows, there are additional phase boundaries in the second and fourth quadrants. These occur, however, when $|m_u|,|m_d|> |m_s|$, and thus lie far from the region of physical interest. In the rest of our analysis, we consider only the region in which $|m_u|,|m_d| \ll |m_s|$, and thus zoom in on the vicinity of the origin in Fig.~\ref{fig:SU3LO}. \section{Matching SU(2) and SU(3) \raise0.4ex\hbox{$\chi$}PT\ for non-degenerate quarks} \label{sec:matching} If we choose the quark masses to satisfy $|m_u|,|m_d| \ll |m_s| \ll \Lambda_{\rm QCD}$, then the properties of pions can be simultaneously described by both SU(2) and SU(3) \raise0.4ex\hbox{$\chi$}PT, and the predictions of the two theories must agree. The results of the previous section show that this is not the case if we work to LO in both theories---the CP-violating phase is absent in SU(2) \raise0.4ex\hbox{$\chi$}PT. The discrepancy is resolved by noting that the CP-violating phase has a width proportional to $(m_u\!-\!m_d)^2$, indicating that it arises at NLO in SU(2) \raise0.4ex\hbox{$\chi$}PT. In this section we recall how the two theories are matched, and show how the CP-violating phase can then be obtained in SU(2) \raise0.4ex\hbox{$\chi$}PT\ when including the resulting NLO term. To do the matching, one considers quantities accessible in both SU(2) and SU(3) theories, namely pion masses and scattering amplitudes. 
Expanding the LO SU(3) result in powers of $m_{u,d}/m_s$, the leading terms match with the LO SU(2) result, while the first subleading terms match with an NLO SU(2) contribution. The subleading terms in the SU(3) results are in fact proportional to $(m_u\!-\!m_d)^2$, because they arise from intermediate $\eta$ propagators and involve two factors of the $\pi^0-\eta$ mixing amplitude. The only source of such mass dependence at NLO in the SU(2) theory is the $\ell_7$ term in the NLO potential \begin{equation} \mathcal{V}_{SU(2)\, NLO} = -\frac{\ell_3}{16} [\tr(\chi^\dagger \Sigma + \Sigma^\dagger \chi)]^2 +\frac{\ell_7}{16}[\tr(\chi^\dagger \Sigma - \Sigma^\dagger \chi)]^2\,. \end{equation} Writing $\chi$ as \begin{equation} \chi = \chi_\ell \mathbb 1 + \epsilon \tau_3\,, \ \ {\rm with}\ \ \epsilon= B_0(m_u-m_d)\,, \end{equation} we see that only the $\epsilon$ part contributes to the $\ell_7$ term. Thus this term leads to contributions proportional to $(m_u\!-\!m_d)^2$. Other NLO contributions (i.e. those proportional to different NLO LECs or coming from loops) do not have this mass dependence. The simplest quantity with which to do the matching is the neutral pion mass, and this was used to determine the value of $\ell_7$ in Ref.~\cite{Gasser:1984gg}. The LO SU(3) result [given in Eq.~(\ref{eq:mpi0su3}) above] expands to \begin{equation} m_{\pi^0 SU(3)\, LO}^2=\chi_\ell -\frac{\epsilon^2}{4B_0 m_s} +\mathcal{O}\left(\frac{\epsilon^2 m_{u,d}}{m_s^2}\right)\,. \end{equation} The SU(2) result at NLO is \begin{equation} m_{\pi^0 SU(2)\, NLO}^2=\chi_\ell - \frac{2\ell_7 \epsilon^2 }{f^2} + {\cal O}\left(\frac{\chi_\ell^2}{\Lambda_\chi^2}\right)\,, \end{equation} where $\Lambda_\chi=4\pi f$ is the chiral scale. The $\chi_\ell^2$ contributions arise from terms in the NLO chiral Lagrangian (including $\ell_3$) as well as from chiral logarithms. Equating these two results one finds~\cite{Gasser:1984gg} \begin{equation} \ell_7=\frac{f^2}{8B_0m_s}\,. 
\label{eq:ell7match} \end{equation} One can show that with this value for $\ell_7$, contributions to all pion $n$-point amplitudes proportional to $\epsilon^2/m_s$ agree in the two theories. We stress that in this matching we are not taking into account ``standard'' NLO contributions, i.e. those suppressed relative to LO results by factors of $m_{u,d}/\Lambda_{\rm QCD}\sim (m_\pi/\Lambda_\chi)^2$ (up to logarithms). Such contributions arise in both SU(3) and SU(2) \raise0.4ex\hbox{$\chi$}PT\ and must be included in a full NLO matching. This is not necessary for our purposes since such terms lead to small isospin-conserving corrections to the vacuum structure and pion masses---they do not introduce qualitatively new effects. By contrast, the $\epsilon^2$ terms that we keep lead to isospin breaking, and are the leading order contributions which do so. Indeed, for this reason $\ell_7$ is not renormalized at this order, since, as already noted, one-loop chiral logarithms do not contain a term proportional to $\epsilon^2$. Thus it is consistent to work with the classical potential, rather than the one-loop effective potential. This is not the case for other LECs such as $\ell_3$, which are renormalized and thus scale-dependent~\cite{Gasser:1984gg}. We can formalize this by noting that standard NLO contributions are parametrically smaller than the terms we keep by a factor of $m_s/\Lambda_{\rm QCD}$. This allows the development of a consistent power-counting scheme in which the $\epsilon^2$ terms are larger than generic $m^2$ contributions.\footnote{% The numerical basis for this power-counting is not very strong. For example, $\ell_7$ and $\ell_3(\mu)$ are comparable in size for reasonable values of the scale $\mu$. Thus the numerical size of the standard NLO corrections we are dropping may be comparable to those proportional to $\epsilon^2$ that we are keeping. 
The key point, however, is that we are interested in qualitatively new effects, rather than a precise quantitative description.} We discuss this in the following section. To be consistent, we should also account for NLO contributions in SU(3) \raise0.4ex\hbox{$\chi$}PT\ of size $m_s/\Lambda_{\rm QCD}$ relative to LO terms. These, however, lead only to a renormalization of the SU(2) constants $f$ and $B_0$ relative to their SU(3) counterparts. Since we work henceforth entirely in the SU(2) theory, we choose to leave this renormalization implicit. We now show that the inclusion of the $\ell_7$ term leads to the same phase diagram as found in the LO SU(3) analysis. Given the matching result Eq.~(\ref{eq:ell7match}), we always assume $\ell_7>0$ in the following. Using $\langle\Sigma\rangle=\exp(i\theta \hat n\cdot \vec \tau)$, the potential becomes \begin{equation} \mathcal{V}_{SU(2)} = - f^2 \left( \chi_\ell \cos{\theta} +c_\ell \epsilon^2 n_3^2 \sin^2{\theta}\right)\,, \label{eq:V2NLO} \end{equation} where $c_\ell=\ell_7/f^2$. Since $\ell_7>0$, the potential is always minimized by choosing $|n_3|=1$. Since $n_3=1$ and $n_3=-1$ are related by changing the sign of $\theta$, we can, without loss of generality, set $n_3=1$. The resulting potential is stationary with respect to $\theta$ at the ``normal'' values $\theta=0$ and $\pi$, and in addition at \begin{equation} \cos\theta = \frac{\chi_\ell }{2c_\ell\epsilon^2}\,. \end{equation} This new stationary value always leads to the global minimum of the potential where it is valid, i.e. when $|\cos\theta|\le 1$. Thus, for fixed $\epsilon$, there is a new phase for $-2c_\ell\epsilon^2 \le \chi_\ell \le 2 c_\ell\epsilon^2$, within which $\langle\Sigma\rangle$ is complex and CP is violated. Although $\cos\theta$ is fixed, the sign of $\theta$ is not, with the two possible vacua being related by a CP transformation. This phase matches continuously onto the normal phases with $\cos\theta=\pm1$ at its boundaries.
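This minimization is easy to confirm numerically. The sketch below (with arbitrary illustrative values of $\chi_\ell$ and $c_\ell\epsilon^2$) scans the potential of Eq.~(\ref{eq:V2NLO}) with $n_3=1$ over $\theta\in[0,\pi]$, recovering $\cos\theta=\chi_\ell/(2c_\ell\epsilon^2)$ inside the CP-violating band and the normal vacuum $\theta=0$ outside it:

```python
import numpy as np

def V_over_f2(theta, chi_l, cl_eps2):
    # Eq. (V2NLO) with n3 = 1, in units of f^2
    return -(chi_l * np.cos(theta) + cl_eps2 * np.sin(theta)**2)

def vacuum_angle(chi_l, cl_eps2, n=200001):
    # theta and -theta give degenerate CP-conjugate vacua, so [0, pi] suffices
    theta = np.linspace(0.0, np.pi, n)
    return theta[np.argmin(V_over_f2(theta, chi_l, cl_eps2))]

chi_l, cl_eps2 = 0.3, 0.5          # inside the band: |chi_l| < 2*cl_eps2
theta0 = vacuum_angle(chi_l, cl_eps2)
print(np.cos(theta0))              # ~ chi_l/(2*cl_eps2) = 0.3

print(vacuum_angle(1.5, cl_eps2))  # outside the band: normal vacuum, theta ~ 0
```

The scan also shows the vacuum angle varying continuously as $\chi_\ell$ crosses the band boundaries, in line with the continuity of $\langle\Sigma\rangle$ noted above.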
Thus the phase transition is of second order. \begin{figure}[tb!] \centering \includegraphics[scale=.3]{SU2mNLOlabeled.png} \caption{\label{fig:NLOell7} Phase diagram from SU(2) \raise0.4ex\hbox{$\chi$}PT\ including $\ell_7$ term with $\ell_7>0$. Equations for the positions of phase transition lines are given in the text.} \end{figure} The resulting phase diagram is sketched in Fig.~\ref{fig:NLOell7}. This is not only qualitatively similar to the central portion of the LO SU(3) phase diagram, Fig.~\ref{fig:SU3LO}, but is in fact in complete quantitative agreement at the appropriate order. For example, expanding the SU(3) result for the phase boundary, $m_u=-m_d/(1+m_d/m_s)$, in powers of $m_{u,d}/m_s$, and keeping only the leading non-trivial term, one finds that the boundary occurs at $\chi_\ell=\epsilon^2/(4 B_0 m_s)$. This agrees with the SU(2) result $\chi_\ell=2 \ell_7 \epsilon^2/f^2$ using the matching condition (\ref{eq:ell7match}). We have also checked that the pion masses agree throughout the phase plane. We do not quote results for pion masses here, since they are included in the more general analysis presented below. The fact that the CP-violating phase can be reproduced within SU(2) \raise0.4ex\hbox{$\chi$}PT\ was first explained by Smilga~\cite{Smilga:1998dh}. His work considered only the case $m_u=-m_d$, which, as noted above, is the same as $m_u=m_d$ with $\theta_{\rm QCD}=\pi$. The analysis presented here gives the (very simple) generalization to arbitrary non-degenerate quark masses. There is also a close relation between our analysis and the recent work of Aoki and Creutz~\cite{Aoki:2014moa}. These authors do not use \raise0.4ex\hbox{$\chi$}PT\ {\em per se}, but rather an effective theory containing both pions and the $\eta$ meson. If the $\eta$ were integrated out then their theory would reduce to the one we consider here, including the $\ell_7$ term, plus small corrections.
We think, however, that it is preferable to work in a strict effective theory framework, in which only the light particles are kept as dynamical degrees of freedom. \section{Including discretization effects for Wilson-like fermions} \label{sec:disc} In this section we recall how lattice artifacts can be incorporated into \raise0.4ex\hbox{$\chi$}PT, and study their impact on the phase structure described above at leading non-trivial order. We do this for untwisted Wilson-like fermions---twist will be considered in the following sections. The method leads to the chiral effective theory describing lattice simulations close to the continuum limit. We begin by recalling the analysis for degenerate quarks and then add in non-degeneracy. We work entirely in the two-flavor theory obtained after the strange quark (and the charm quark too, if present) has been integrated out. For untwisted Wilson-like fermions (unlike for twisted-mass fermions), the analysis could also be carried out within SU(3) \raise0.4ex\hbox{$\chi$}PT, but there is no advantage to doing so as the dominant long-distance dynamics lies in the SU(2) sector. Both quark masses and discretization effects break chiral symmetry, and it is important to understand the relative size of these effects. Our focus here is on state-of-the-art simulations, which have $m_{u,d}$ close to their physical values ($m_u\approx 2.5\;$MeV and $m_d\approx 5\;$MeV in the $\overline{\rm MS}$ scheme at $\mu=2\;$GeV), and lattice spacings such that $1/a\approx 3\;$GeV. In this case, the relative size of discretization effects is characterized by $a \Lambda_{\rm QCD}\approx 0.1$ (using $\Lambda_{\rm QCD}=300\;$MeV), so that \begin{equation} a \Lambda_{\rm QCD}^2\approx 30\,{\rm MeV} \gg m_{u,d} \approx a^2 \Lambda_{\rm QCD}^3 \approx 3\,{\rm MeV}\,. \end{equation} The appropriate power-counting is thus (in schematic notation) $a^2 \sim m$. 
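The hierarchy just quoted is simple arithmetic; a one-line check (taking $1/a=3\;$GeV and $\Lambda_{\rm QCD}=300\;$MeV exactly, for illustration):

```python
# Scales from the text: 1/a ~ 3 GeV, Lambda_QCD ~ 300 MeV (all in MeV)
a = 1.0 / 3000.0      # lattice spacing in MeV^-1
Lam = 300.0

print(a * Lam)        # 0.1: relative size of discretization effects
print(a * Lam**2)     # 30 MeV  >> m_{u,d} ~ 3 MeV
print(a**2 * Lam**3)  # 3 MeV   ~  m_{u,d}, hence the counting a^2 ~ m
```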
This is the Aoki regime, in which competition between discretization and mass effects leads to interesting phase structure~\cite{Aoki:1983qi,Sharpe:1998xm}. Discretization effects can be incorporated into \raise0.4ex\hbox{$\chi$}PT\ following the method of Ref.~\cite{Sharpe:1998xm}. For unimproved (or partially improved) Wilson fermions, the dominant discretization effect is proportional to $a$. In the pion sector, however, this contribution can be absorbed entirely into a common shift in all quark masses~\cite{Sharpe:1998xm}, and we assume below that this shift has been made. The first non-trivial discretization effect is that proportional to $a^2$. This changes the LO potential to~\cite{Sharpe:1998xm} \begin{align} \mathcal{V}_{a^2}=&-\frac{f^2}{4} \tr({\chi}^\dagger \Sigma + \Sigma^\dagger {\chi}) - W' [\tr(\hat A^\dagger \Sigma + \Sigma^\dagger \hat A)]^2\,. \end{align} Here we are using the notation of Ref.~\cite{Sharpe:2004ps}, in which $\hat A=2 W_0 a \mathbb 1$ is a spurion field, with dimensions of mass squared, and proportional to the identity matrix in flavor space. $W_0$ and $W'$ are new LECs. \begin{figure}[tb!] \centering \includegraphics[scale=.3]{AokiLOlabeled.png} \caption{\label{fig:AokiLO} Phase diagram in LO SU(2) \raise0.4ex\hbox{$\chi$}PT\ including discretization effects with $w'<0$ (Aoki scenario). Equations for the positions of phase transition lines are given in the text.} \end{figure} The analysis of the vacuum structure for degenerate quarks was given in Ref.~\cite{Sharpe:1998xm}. Since ${\cal V}_{a^2}$ is independent of $\epsilon$, the results are unchanged at LO in the presence of non-degeneracy. To determine the vacuum we must minimize \begin{equation} \mathcal{V}_{a^2} = - f^2 \left(\chi_\ell \cos{\theta} +w' \cos^2{\theta}\right)\,, \end{equation} where $w'={64W'W_0^2 a^2}/{f^2}$. For $w'<0$, the analysis is essentially the same as that for ${\cal V}_{SU(2)}$ with $\ell_7>0$, as given in the previous section.
Stationary points are at $\cos\theta=\pm 1$ and \begin{equation} \cos{\theta}=-\frac{\chi_\ell}{2w'}\,, \end{equation} with the latter being the global minimum where valid ($|\cos\theta|\le 1$). This leads to the phase diagram shown in Fig.~\ref{fig:AokiLO}, with an Aoki phase~\cite{Aoki:1983qi} separated from the normal phases by second-order transitions at $|\chi_\ell|=-2w'$. Strictly speaking, the name ``Aoki phase'' has been applied previously only along the $m_u=m_d$ diagonal, but in the present approximation it holds also for non-degenerate quarks. Within the Aoki phase the potential is independent of the direction of the condensate, $\hat n$, so that there are two massless Goldstone bosons, the charged pions. Parity and flavor are violated within this phase. With the canonical choice of the direction of the condensate, $\hat n=\hat z$, CP is also violated. For $w'>0$, the global minimum lies at $\cos\theta={\rm sign}( \chi_\ell)$, with a first-order transition at $\chi_\ell=0$. The phase diagram is thus identical to that in the continuum, Fig.~\ref{fig:SU2LO}. The only difference is that here the yellow line indicates a genuine first-order transition, since on the lattice there are no symmetries connecting the two sides. This case is referred to as the first-order scenario~\cite{Sharpe:1998xm}. \bigskip We are now ready to combine the effects of non-degeneracy with discretization errors. This requires that we adopt an appropriate power-counting scheme for the relative importance of $\epsilon^2$, $m$ and $a^2$, where $m$ indicates a generic quark mass. Recalling that $\epsilon^2$ terms are enhanced compared to generic $m^2$ terms, we use \begin{equation} m\sim a^2 > \epsilon^2 > m a \sim a^3 > a \epsilon^2 > m^2\sim m a^2 \sim a^4 \dots. \end{equation} This can be thought of as treating $\epsilon\sim a^{1+\delta}$, with $0<\delta < 1/2$.
The utility of this power counting is that it allows us to first add the $\epsilon^2$ term to those proportional to $m$ and $a^2$, and then consider terms of order $ma\sim a^3$ at a later stage (in Sec.~\ref{sec:higher} below). Indeed, we could, for the purposes of this section, set $\delta=0$, and treat the $\epsilon^2$ term as of LO. We do not do so, however, since this would require us to later treat $a\epsilon^2$ terms as of the same size as those proportional to $ma\sim a^3$. Nevertheless, we will loosely describe the inclusion of $m$, $a^2$ and $\epsilon^2$ terms as constituting our LO analysis, while treating the $ma\sim a^3$ terms as being of NLO. Terms of yet higher order will not be considered. With the power counting in hand, we can extend the inclusion of discretization errors into \raise0.4ex\hbox{$\chi$}PT\ to incorporate the effects of non-degeneracy. This leads to the appearance of new operators in the Symanzik effective Lagrangian, and thus, potentially, to new terms in the chiral Lagrangian. The constraints on additional operators in the Symanzik Lagrangian in the presence of non-degeneracy were worked out in Ref.~\cite{WalkerLoud:2005bt}. Using their results within our power-counting scheme, we find that the lowest-order new operator is $\sim a \epsilon^2 \bar\psi\psi$. This is, however, of higher order than we consider here.\footnote{% Furthermore, when mapped to the chiral Lagrangian, it leads to contributions which can be absorbed by making the untwisted mass $m$ have a weak dependence on $\epsilon$. Thus it does not lead to new phases, but only to a small distortion of the phase diagram.} All other operators are of yet higher order. Thus, at the order we work, non-degeneracy only enters our calculation through the continuum $\ell_7$ term.
The LO potential thus becomes \begin{align} {\mathcal{V}_{a^2,\ell_7}}=& -\frac{f^2}{4} \tr(\chi^\dagger \Sigma + \Sigma^\dagger \chi) - W' [\tr(\hat A^\dagger \Sigma + \Sigma^\dagger \hat A)]^2 +\frac{\ell_7}{16}[\tr(\chi^\dagger \Sigma - \Sigma^\dagger \chi)]^2 \,. \label{eq:VA2ell7} \end{align} We stress that it is self-consistent to determine the vacuum structure and pion masses from a tree-level analysis of ${\cal V}_{a^2,\ell_7}$ since loop effects only come in at ${\cal O}(m^2,ma^2,a^4)$. In terms of the parameters of $\langle\Sigma\rangle$, the potential is now given by \begin{equation} - \frac{\mathcal{V}_{a^2,\ell_7}}{f^2} = \chi_\ell\cos{\theta} +c_\ell \epsilon^2 n_3^2 \sin^2{\theta} +w' \cos^2{\theta}\,. \end{equation} As before, we can set $n_3=1$ without loss of generality. The stationary points are at $\cos{\theta}=\pm1$ and \begin{equation} \cos{\theta} = \frac{\chi_\ell}{2(c_\ell \epsilon^2-w')}\,. \label{eq:costhetares} \end{equation} The latter minimizes the potential if $c_\ell\epsilon^2-w'>0$ and is valid for $|\cos\theta|\le 1$. This results in the phase diagrams of Figs.~\ref{fig:NLOAoki} and \ref{fig:NLOFirst} for $w'<0$ and $w'>0$, respectively. In the former case, corresponding to the Aoki phase for degenerate quarks, the second-order transition lines lie at \begin{equation} \chi_\ell= \pm 2 (c_\ell \epsilon^2-w') \label{eq:secondboundary} \,. \end{equation} Thus the width of the phase grows as $|\epsilon|$ increases. Furthermore, comparing to Fig.~\ref{fig:AokiLO}, we see that the continuum CP-violating phase and the Aoki phase are continuously connected.\footnote{% This result is in agreement with Creutz' conjecture~\cite{Creutz:2014em}.} The only subtlety in this connection is that the condensate definitely points in the $n_3$ direction for $\epsilon\ne 0$ (i.e. the direction picked out by the non-degenerate part of the mass term), whereas for $\epsilon=0$ the direction is arbitrary. \begin{figure}[tb!]
\centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.29]{AokimNLOlabeled.png} \caption{\label{fig:NLOAoki} Aoki scenario ($w'<0$).} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.29]{FirstmNLOlabeled.png} \caption{\label{fig:NLOFirst} First-order scenario ($w'>0$).} \end{subfigure} \caption{\centering Phase diagrams including effects of both discretization and non-degeneracy. Blue (yellow) lines indicate second (first) order transitions. Equations for the positions of phase transition lines are given in the text.} \label{fig:untwistPhases} \end{figure} In the first-order scenario, Fig.~\ref{fig:NLOFirst}, the first-order transition along the $m_u=-m_d$ line weakens as $|\epsilon|$ increases, until, at $c_\ell \epsilon^2=w'$, the CP-violating phase appears. The second-order transition lines are then given by $|\chi_\ell|=2 (c_\ell \epsilon^2-w')$, i.e. by the same equation as in the Aoki scenario. We next calculate the pion masses throughout the phase plane, expanding about the vacuum as \begin{equation} \Sigma= \exp(i\theta \tau_3) \exp(i\vec\pi\cdot\vec\tau/f)\,. \end{equation} Outside the CP-violating phase, we find \begin{align} m_{\pi^0}^2&= |\chi_\ell| - 2 (c_\ell \epsilon^2-w')\,, \label{eq:mpi0untwistLO} \\ m_{\pi^\pm}^2&= m_{\pi^0}^2 + 2 c_\ell \epsilon^2\,, \label{eq:mpipuntwistLO} \end{align} while within the CP-violating phase we have \begin{align} m_{\pi^0}^2&= 2( c_\ell \epsilon ^2 -w') \sin^2\theta\,, \label{eq:mpi0CPuntwistLO} \\ m_{\pi^\pm}^2&= 2c_\ell \epsilon^2\,, \label{eq:mpipCPuntwistLO} \end{align} where $\theta$ is given in Eq.~(\ref{eq:costhetares}).
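As a consistency check, the two expressions for $m_{\pi^0}^2$ agree on the second-order lines of Eq.~(\ref{eq:secondboundary}), where both vanish:

```latex
\begin{equation}
|\chi_\ell| = 2(c_\ell\epsilon^2 - w')
\;\Longrightarrow\;
m_{\pi^0}^2 = |\chi_\ell| - 2(c_\ell\epsilon^2 - w') = 0
= 2(c_\ell\epsilon^2 - w')\sin^2\theta\,,
\end{equation}
```

the final equality holding because $|\cos\theta|\to1$ (and hence $\sin\theta\to0$) on these lines by Eq.~(\ref{eq:costhetares}).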
These results are plotted versus $\chi_\ell$ for various characteristic choices of $\epsilon$ and $w'$ in Fig.~\ref{fig:UnTwistPiMasses}. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassLO.png} \caption{$w'=c_\ell\epsilon^2=0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassmNLOlabeled.png} \caption{$w'=0$, $c_\ell\epsilon^2>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassAokilabeled.png} \caption{$w'<0$, $c_\ell\epsilon^2= 0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassFirst.png} \caption{$w'>0$, $c_\ell\epsilon^2=0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassmNLOAokilabeled.png} \caption{$w'<0$, $c_\ell\epsilon^2>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassmNLOFirstCrit.png} \caption{ $c_\ell\epsilon^2=w'>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.3]{PiMassmNLOFirst2labeled.png} \caption{$c_\ell\epsilon^2>w'>0$} \end{subfigure} \caption{\label{fig:UnTwistPiMasses} Pion masses for untwisted Wilson fermions including the effects of both discretization ($w'\ne 0$) and non-degeneracy ($\epsilon\ne 0$). $m_{\pi^0}^2$ is shown by solid (blue) lines, $m_{\pi^\pm}^2$ by dashed (red) lines. Explicit expressions for the masses are given in the text. Vertical scales differ between the figures.} \end{figure} Figures~\ref{fig:UnTwistPiMasses}a and b show the continuum results for degenerate and non-degenerate masses, respectively. The neutral pion mass vanishes along the second-order transition line, as expected. The full degeneracy at $\chi_\ell=0$ is due to the fact that the theory regains flavor symmetry (with $\theta_{\rm QCD}=\pi$) at this point. A characteristic feature of the spectrum at this order is that the charged pion mass is independent of $\chi_\ell$ within the CP-violating phase. 
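The full degeneracy at $\chi_\ell=0$ can be read off directly from the mass formulae: in the continuum ($w'=0$) with $\epsilon\ne0$, Eq.~(\ref{eq:costhetares}) gives $\cos\theta=0$ at this point, so that Eqs.~(\ref{eq:mpi0CPuntwistLO}) and (\ref{eq:mpipCPuntwistLO}) collapse to

```latex
\begin{equation}
m_{\pi^0}^2\big|_{\chi_\ell=0,\;w'=0}
= 2c_\ell\epsilon^2\sin^2\theta
= 2c_\ell\epsilon^2
= m_{\pi^\pm}^2\,,
\end{equation}
```

as required by the flavor symmetry regained at this point.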
This holds also when discretization errors are included. Figures~\ref{fig:UnTwistPiMasses}c and d show the spectrum for degenerate quarks with discretization errors included, respectively for the Aoki and first-order scenarios, reproducing the results of Ref.~\cite{Sharpe:1998xm}. Our new results are those of Figs.~\ref{fig:UnTwistPiMasses}e-g, which include the effects of both discretization errors and non-degeneracy. In this case the charged and neutral pion masses differ in general. Figure~\ref{fig:UnTwistPiMasses}e shows the behavior in the Aoki scenario, where $m_{\pi^0}$ vanishes on the phase transition lines, and rises above $m_{\pi^\pm}$ in the central region of the CP-violating phase. There are thus two values of $\chi_\ell$ where all pions are degenerate, but these are accidental degeneracies and not indicative of any symmetry. For the first-order scenario Fig.~\ref{fig:UnTwistPiMasses}f shows the spectrum when $\epsilon$ is chosen so that the plot passes through the end-point of the second-order transition line, while Fig.~\ref{fig:UnTwistPiMasses}g shows what happens as one moves through the CP-violating phase. In this case, there are no degenerate points. Simulations using Wilson-like fermions at physical masses, including isospin breaking, have recently begun~\cite{Borsanyi:2014jba}. What is the significance of our results for such simulations? The main issue is whether discretization effects can move the CP-violating phase such that it lies closer to, or even includes, the physical point. Clearly one wants to avoid simulating in this phase, since it has a different vacuum structure from the continuum theory. But even lying close to a second-order transition could lead to algorithmic issues due to critical slowing down. What we have found is that the phase does move closer to the physical point in the Aoki scenario, Fig.~\ref{fig:NLOAoki}. In this scenario, the CP-violating phase now includes a region of positive quark masses. 
On the other hand, for the first-order scenario, discretization effects move the CP-violating phase away from the physical point. A positive aspect of our results is that discretization errors lead only to an overall shift in pion masses (outside of the CP-violating phase), so that the difference $m_{\pi^\pm}^2-m_{\pi^0}^2$ takes its continuum value $2c_\ell \epsilon^2$ in both scenarios. \clearpage \section{Twisted-mass fermions at maximal twist} \label{sec:twist} In this section we extend the previous analysis to twisted-mass fermions~\cite{Frezzotti:2000nk} at maximal twist. Such fermions have the important practical property of automatic ${\cal O}(a)$ improvement~\cite{Frezzotti:2003ni}. They are being used to simulate QCD with quarks at or near their physical masses~\cite{Abdel-Rehim:2013yaa,Carrasco:2014cwa}, and isospin breaking is now being included~\cite{deDivitiis:2013xla}. The main question we address here is the same as for untwisted fermions: How do discretization effects change the continuum phase structure and pion masses? In the continuum, twisted mass fermions are obtained by a non-anomalous axial rotation, \begin{equation} \mathcal{L}_{\rm QCD} = \overline{\psi} (\slashed{D}+m_\ell +\epsilon_\ell \tau_3) \psi \rightarrow \overline{\psi} (\slashed{D}+ m_\ell e^{i\gamma_5 \tau_1 \omega} +\epsilon_\ell \tau_3)\psi = \overline{\psi} (\slashed{D}+ m + i \gamma_5 \tau_1 \mu +\epsilon_\ell \tau_3)\psi \,, \end{equation} with $m_\ell=(m_u\!+\!m_d)/2$, $\epsilon_\ell=(m_u\!-\!m_d)/2$, $m=m_\ell\cos\omega$, $\mu=m_\ell\sin\omega$, and $\omega$ the twist angle. Conventionally, $m$ is called the untwisted (average) mass and $\mu$ the twisted (average) mass. Choosing the twist in a direction orthogonal to $\tau_3$ leaves the $\epsilon_\ell$ term unchanged. In the continuum this is a convenience, but not a necessity.
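The second equality in the rotated Lagrangian follows because $(\gamma_5\tau_1)^2=\mathbb{1}$, so the exponential truncates:

```latex
\begin{equation}
m_\ell\, e^{i\gamma_5\tau_1\omega}
= m_\ell\left(\cos\omega + i\gamma_5\tau_1\sin\omega\right)
= m + i\gamma_5\tau_1\,\mu\,,
\qquad
m = m_\ell\cos\omega\,,\quad \mu = m_\ell\sin\omega\,.
\end{equation}
```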
Once one discretizes $\slashed{D}$ with a Wilson term, however, it is mandatory to twist in a direction orthogonal to $\tau_3$ if one wants to keep the fermion determinant real~\cite{Frezzotti:2003xj}.\footnote{In Ref.~\cite{deDivitiis:2013xla}, which studies twisted-mass non-degenerate fermions, the twist is chosen in the $\tau_3$ direction. This leads to a complex fermion determinant, which is avoided in practice by perturbing at linear order around the isospin-symmetric theory. Because the twist is in the $\tau_3$ direction, our present results do not apply to these simulations. We will discuss the generalization to $\tau_3$ twist (along with the inclusion of electromagnetism) in an upcoming work~\cite{inprep}.} By convention, this direction is chosen to be $\tau_1$. The rescaled mass matrix that enters \raise0.4ex\hbox{$\chi$}PT\ is now \begin{equation} \chi=\chi_\ell e^{i \tau_1 \omega} +\epsilon \tau_3 = \chi_\ell \cos{\omega} \mathbb{1} \ +i \chi_\ell \sin{\omega} \tau_1 +\epsilon \tau_3 = \widehat m \mathbb{1} +i \widehat \mu\tau_1 +\epsilon \tau_3\,, \label{eq:twistedchi} \end{equation} and is no longer hermitian. Here we have defined \begin{equation} \widehat m\equiv 2B_0 m=\chi_\ell \cos\omega \ \ {\rm and}\ \ \widehat \mu\equiv 2B_0 \mu=\chi_\ell\sin\omega\,, \end{equation} following Ref.~\cite{Sharpe:2004ps}. To determine the effective chiral theory for twisted-mass lattice QCD, the first step is to identify the additional operators in the Symanzik Lagrangian that are induced by twisting. As in the untwisted case, the form of the allowed operators can be obtained from the analysis of Ref.~\cite{WalkerLoud:2005bt}, which includes both twist and non-degeneracy. In fact, since $\widehat \mu^2$ is smaller than $\epsilon^2$ in our power-counting, the inclusion of twist does not change the result for the untwisted case, namely that the lowest order new operator is $\sim a\epsilon^2$ and is of higher order than that at which we are working.
Thus at LO the extension of \raise0.4ex\hbox{$\chi$}PT\ to include twist and discretization errors is accomplished by simply using the twisted $\chi$ of Eq.~(\ref{eq:twistedchi}) in the potential ${\cal V}_{a^2,\ell_7}$ of Eq.~(\ref{eq:VA2ell7}). Using our standard parametrization of $\langle\Sigma\rangle$ this gives \begin{equation} - \frac{\mathcal{V}_{a^2,\ell_7}}{f^2} = \widehat m\cos{\theta} +\widehat \mu n_1 \sin{\theta} +c_\ell \epsilon^2 n_3^2 \sin^2{\theta} +w'\cos^2{\theta}\,. \label{eq:Vtwist} \end{equation} We focus in this section on the case of maximal twist, $\widehat m=0$, where simple analytic results can be obtained. Even with this simplification, we note that there is competition between terms in three directions in $\Sigma$: the twist direction $n_1$, the non-degeneracy direction $n_3$, and the identity direction ($w'$ term). Thus we can expect a more complicated phase structure than for untwisted Wilson fermions. Furthermore, since non-degenerate twisted-mass quarks completely break the continuous SU(2) flavor symmetry, we expect, in general, that all three pion masses will differ. We find the phase diagrams shown in Fig.~\ref{fig:maxTwistPhases}. Note that we are now plotting the average mass along the vertical axis and the difference horizontally. We do this because $\widehat \mu$ and $\epsilon$ are proportional to parameters that enter the twisted-mass lattice action. To compare to the earlier plots, one should rotate those of Fig.~\ref{fig:maxTwistPhases} by $45^\circ$ in a clockwise direction. We see that, at maximal twist, it is the Aoki scenario which is preferred, in the sense that the CP-violating phase does not move closer to the physical point. Indeed, the phase diagram in this scenario is identical to that in the continuum, Fig.~\ref{fig:NLOell7}, with the replacement $\chi_\ell\to\widehat \mu$.
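Returning to Eq.~(\ref{eq:Vtwist}), it is straightforward to verify its form: writing the vacuum as $\langle\Sigma\rangle=\cos\theta\,\mathbb{1}+i\sin\theta\,\hat n\cdot\vec\tau$ and using $\chi$ from Eq.~(\ref{eq:twistedchi}), the relevant traces evaluate to

```latex
\begin{align}
\tr(\chi^\dagger \Sigma + \Sigma^\dagger \chi)
&= 4\left(\widehat m\cos\theta + \widehat\mu\, n_1\sin\theta\right)\,,
\\
\tr(\chi^\dagger \Sigma - \Sigma^\dagger \chi)
&= 4i\,\epsilon\, n_3\sin\theta\,,
\end{align}
```

so that the first trace produces the mass terms in Eq.~(\ref{eq:Vtwist}), while the square of the second, entering through the $\ell_7$ term, yields the $\epsilon^2 n_3^2\sin^2\theta$ contribution.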
In the first-order scenario, by contrast, there is an additional phase (colored green in Fig.~\ref{fig:maxTwistPhases}b) which brings lattice artifacts closer to the physical point. Thus the relative merits of the two scenarios are interchanged compared to the untwisted case. \begin{figure}[tb!] \centering \begin{subfigure}{0.99\textwidth} \centering \includegraphics[scale=.4]{maxtwistmNLOlabeled.png} \caption{Aoki scenario or continuum ($w'\leq0$)} \end{subfigure} \begin{subfigure}{0.99\textwidth} \centering \includegraphics[scale=.4]{maxtwistFirstmNLOlabeled.png} \caption{First-order scenario ($w'>0$)} \end{subfigure} \caption{\label{fig:maxTwistPhases} Phase diagrams at maximum twist ($\widehat m=0$).} \end{figure} To understand the phase diagrams we first recall the result for the degenerate case, $\epsilon=0$, which has been studied in Refs.~\cite{Munster:2004am,Scorzato:2004da,Sharpe:2004ps}. These works find, for large $|\widehat \mu|$, that the condensate is aligned with the twist, i.e. $n_1=1$ and $\sin\theta={\rm sign}(\widehat \mu)$. This is as in the continuum. In the Aoki scenario ($w'<0$), this alignment holds for all $\widehat \mu$, and there is a first-order transition at $\widehat \mu=0$ where $\sin\theta$ changes sign. In the first-order scenario ($w'>0$), there are second-order transitions at the two points $\widehat \mu=\pm 2w'$, at which one of the pion masses vanishes. For $|\widehat \mu|< 2w'$ the condensate smoothly rotates within the group manifold with $\sin\theta=\widehat \mu/(2w')$. These features are reproduced by our results along the vertical axes in Fig.~\ref{fig:maxTwistPhases}. We now explain how these results are generalized to $\epsilon\ne 0$. We first observe that we can set $n_2=0$. This is because, for any choice of $n_1$, the $c_\ell$ term in Eq.~(\ref{eq:Vtwist}) (with $c_\ell>0$) will be minimized when $n_3^2$ is maximized, i.e. with $n_3^2=1-n_1^2$. Thus there are only two independent variables, $\theta$ and $n_1$. 
Since $n_1$ satisfies $|n_1|\le 1$, we parametrize it as $n_1=\cos\varphi_1$. Since $\langle\Sigma\rangle$ is invariant when $\theta$ and $\vec n$ change sign, we need only consider $n_1\ge 0$, i.e. $0\le\varphi_1\le\pi/2$. The stationary points are obtained from simultaneously solving \begin{align} \frac{\partial \mathcal{V}_{a^2,\ell_7}}{\partial \theta} &\propto \cos\theta\left[\widehat \mu \cos\varphi_1 + 2\sin{\theta}(\sin^2\varphi_1c_\ell\epsilon^2-w')\right] =0\,, \label{eq:partialVtheta} \\ \frac{\partial \mathcal{V}_{a^2,\ell_7}}{\partial \varphi_1} &\propto \sin\theta\sin\varphi_1\left[\widehat \mu -2\sin{\theta}\cos\varphi_1c_\ell\epsilon^2 \right] =0\,. \label{eq:partialVtheta1} \end{align} The solutions are \begin{enumerate} \item $\cos\theta=0$ (so that $\sin\theta=\pm1$) together with $\sin\varphi_1=0$ (so that $n_1=1$). In these cases ${\mathcal{V}_{a^2,\ell_7}}/{f^2} = \mp \widehat \mu$, so that the solution with the lowest energy is that with $\sin\theta={\rm sign}(\widehat \mu)$, giving ${\mathcal{V}_{a^2,\ell_7}}/{f^2} = - |\widehat \mu|$. \item $\sin\theta={\rm sign}(\widehat \mu)$ and $n_1=\cos\varphi_1= {|\widehat \mu|}/({2c_\ell \epsilon^2})$ so that ${\mathcal{V}_{a^2,\ell_7}}/{f^2} = - {\widehat \mu^2}/({4 c_\ell \epsilon^2}) - c_\ell \epsilon^2$. This is only valid when $n_1\le 1$, i.e. $|\widehat \mu|\le 2 c_\ell \epsilon^2$. There are two degenerate solutions, with $n_3=\pm \sin\varphi_1$. \item $\sin\theta=\widehat \mu/(2w')$ and $\varphi_1=0$ (implying $n_1=1$) so that ${\mathcal{V}_{a^2,\ell_7}}/{f^2} =- \widehat \mu^2/(4 w') - w'$. This is only valid when $|\widehat \mu|\le 2 w'$. There are two degenerate solutions, with opposite signs of $\cos\theta$. \item $\cos\theta=\pm1$ and $\widehat \mu n_1=0$, so that $\mathcal{V}_{a^2,\ell_7}/f^2=- w'$. This never has lower energy than the third solution and can be ignored. \end{enumerate} The first solution is the continuum one discussed above. 
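The energy quoted for the second solution, and its relation to the first, can be checked explicitly: substituting $\sin\theta={\rm sign}(\widehat\mu)$, $n_1=|\widehat\mu|/(2c_\ell\epsilon^2)$ and $n_3^2=1-n_1^2$ into Eq.~(\ref{eq:Vtwist}) at $\widehat m=0$ (the $w'$ term drops out since $\cos\theta=0$) gives

```latex
\begin{equation}
-\frac{\mathcal{V}_{a^2,\ell_7}}{f^2}
= |\widehat\mu|\,n_1 + c_\ell\epsilon^2\left(1-n_1^2\right)
= \frac{\widehat\mu^2}{4 c_\ell\epsilon^2} + c_\ell\epsilon^2\,,
\end{equation}
\begin{equation}
\left(\frac{\widehat\mu^2}{4 c_\ell\epsilon^2} + c_\ell\epsilon^2\right) - |\widehat\mu|
= \frac{\left(|\widehat\mu| - 2c_\ell\epsilon^2\right)^2}{4 c_\ell\epsilon^2}
\ \ge\ 0\,,
\end{equation}
```

showing that, wherever it is valid ($|\widehat\mu|\le 2c_\ell\epsilon^2$), the second solution lies at or below the first in energy.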
The second has lower energy than the first where it is valid, and goes over to the CP-violating phase when $w'=0$. The third solution is relevant only for $w'>0$, in which case it has the lowest energy when $c_\ell \epsilon^2 < w'$. The condensate in this phase is independent of $\epsilon$. These considerations lead to the phase diagrams shown in Fig.~\ref{fig:maxTwistPhases}. The potential is continuous throughout the phase planes, as is the condensate except at the junction between the central (green colored) phase and the CP-violating phase in Fig.~\ref{fig:maxTwistPhases}b. Thus we expect the transitions to be of second order. We calculate pion masses using the parametrization \begin{equation} \Sigma= \exp(i\theta\hat{n} \cdot \vec{\tau}/2) \exp(i\vec \pi\cdot \vec\tau/f) \exp(i\theta\hat{n} \cdot \vec{\tau}/2)\,, \qquad \left[\langle\Sigma\rangle= \exp(i\theta\hat{n} \cdot \vec{\tau}) \right]\,. \label{eq:axialparam} \end{equation} Here we are using an axial transformation to rotate from the twisted basis to the physical basis, which ensures, in the continuum, that the pion fields have physical flavors~\cite{Sharpe:2004ny}. In the continuum-like phase (uncolored in the figures), which lies in the regions $|\widehat \mu|\ge {\rm max}(2c_\ell\epsilon^2,2w')$, we find \begin{align} &m_{{\pi}_1}^2= |\widehat \mu|-2 w'\,, & m_{{\pi}_2}^2= |\widehat \mu|\,, && m_{{\pi}_3}^2= |\widehat \mu|-2 c_\ell\epsilon^2\,. \label{eq:mpiwhite} \end{align} These results are consistent with those of Ref.~\cite{Munster:2006yr}, where a \raise0.4ex\hbox{$\chi$}PT\ calculation in this phase is carried out using the different power-counting $m\gtrsim a$. Various aspects of these results are noteworthy. First, all three masses differ. This is expected since flavor symmetry is completely broken. Second, the charged pions are not mass eigenstates; instead, the eigenstates are $\pi_{1,2}$ and the neutral pion. These two points were also noted in Ref.~\cite{Munster:2006yr}. 
Third, one of the pion masses vanishes at each of the phase boundaries: $m_{\pi_3}$ at the boundary with the CP-violating (pink colored) phase, and $m_{\pi_1}$ at the boundary with the central (green colored) phase in the first-order scenario.\footnote{% In the degenerate case ($\epsilon=0$) Refs.~\cite{Munster:2004am,Scorzato:2004da,Sharpe:2004ps} find that it is $m_{\pi_3}$ which vanishes at $|\widehat \mu|=2w'$, rather than $m_{\pi_1}$. This difference arises because we twist in the $\tau_1$ direction rather than the $\tau_3$ direction used in Refs.~\cite{Munster:2004am,Scorzato:2004da,Sharpe:2004ps}.} This is expected since these are continuous transitions at which a $Z_2$ symmetry is broken ($\theta \to -\theta$ for the ``green phase'' and $n_3\to -n_3$ for the CP-violating phase). Finally, in the first-order scenario, there are four tricritical points at which {\em both} $m_{\pi_3}$ and $m_{\pi_1}$ vanish. These occur where all three phases meet, i.e. at $|\widehat \mu|=2c_\ell\epsilon^2=2w'$. In the central (green) phase we find \begin{align} &m_{{\pi}_1}^2= 2w'-\frac{\widehat \mu^2}{2w'}\,, & m_{{\pi}_2}^2= 2w'\,, && m_{{\pi}_3}^2= 2w'-2 c_\ell\epsilon^2\,. \label{eq:mpigreen} \end{align} Thus $m_{\pi_2}$ and $m_{\pi_3}$ are independent of $\widehat \mu$ within this phase. These results agree with those in the normal phase, Eq.~(\ref{eq:mpiwhite}), at the boundaries. They also show that $m_{\pi_3}$ vanishes at the borders with the CP-violating (pink) phases ($c_\ell\epsilon^2=w'$). In the CP-violating phase there is mixing between $\pi_1$ and $\pi_3$, with the mass eigenvectors being \begin{equation} \tilde\pi_1= n_1 \pi_1 + n_3 \pi_3\ \ {\rm and}\ \ \tilde\pi_3= -n_3 \pi_1 + n_1 \pi_3\,, \end{equation} where we recall that $n_1=\widehat \mu/(2c_\ell \epsilon^2)$ and $n_3=\sqrt{1-n_1^2}$. 
The masses are \begin{align} &m_{{\tilde\pi}_1}^2= 2 c_\ell\epsilon^2- 2w'\,, & m_{{\pi}_2}^2= 2 c_\ell\epsilon^2\,, && m_{{\tilde\pi}_3}^2= 2 c_\ell\epsilon^2-\frac{\widehat \mu^2}{2 c_\ell\epsilon^2}\,. \label{eq:mpipink} \end{align} Note that $m_{{\tilde\pi}_1}$ and $m_{\pi_2}$ are independent of $\widehat \mu$, while the $\tilde\pi_3$ mass vanishes along the boundaries with the standard phases. The latter result is consistent with the results above because, on these boundaries $|n_1|=1$ and so $\tilde\pi_3 =\pm \pi_3$. A puzzling feature of these results is what happens at the boundaries between the central (green) and CP-violating (pink) phases. According to Eq.~(\ref{eq:mpigreen}) it is the mass of $\pi_3$ which vanishes there, while Eq.~(\ref{eq:mpipink}) has the mass of $\tilde\pi_1$ vanishing. These appear to be different particles. This is related to a second puzzle, namely that the condensate is discontinuous across the boundary (which lies at $w'=c_\ell \epsilon^2$): \begin{equation} \langle\Sigma\rangle_{\rm Green}^{\rm Boundary} = i \frac{\widehat \mu}{2w'} \tau_1 \pm \sqrt{1-\frac{\widehat \mu^2}{4w'^2}}\mathbb 1 \ \ {\rm vs.}\ \ \langle\Sigma\rangle_{\rm Pink}^{\rm Boundary} = i \frac{\widehat \mu}{2w'} \tau_1 \pm i \sqrt{1-\frac{\widehat \mu^2}{4w'^2}} \tau_3 \,. \label{eq:condjump} \end{equation} Here the $\pm$ signs correspond to the two choices of vacuum state on each side. This situation can be understood by noting that, at the boundary, the vacuum manifold expands to a line which includes all four values of the condensate given in Eq.~(\ref{eq:condjump}): \begin{equation} \langle\Sigma\rangle = i \frac{\widehat \mu}{2w'} \tau_1 + \sqrt{1-\frac{\widehat \mu^2}{4w'^2}} (\cos\phi+i \tau_3 \sin\phi)\,, \end{equation} where $\phi$ is arbitrary. The presence of this flat direction is the reason that one pion is massless, since there is no breaking of a $Z_2$ symmetry to explain the masslessness. 
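One can check that this one-parameter family is unitary for all $\phi$ (the squared coefficients sum to unity) and that it interpolates between the condensates of Eq.~(\ref{eq:condjump}):

```latex
\begin{equation}
\langle\Sigma\rangle\big|_{\phi=0,\,\pi}
= i \frac{\widehat \mu}{2w'} \tau_1 \pm \sqrt{1-\frac{\widehat \mu^2}{4w'^2}}\,\mathbb{1}\,,
\qquad
\langle\Sigma\rangle\big|_{\phi=\pm\pi/2}
= i \frac{\widehat \mu}{2w'} \tau_1 \pm i \sqrt{1-\frac{\widehat \mu^2}{4w'^2}}\,\tau_3\,,
\end{equation}
```

i.e. $\phi=0,\pi$ reproduces the two vacua on the central (green) side of the transition, while $\phi=\pm\pi/2$ reproduces those on the CP-violating (pink) side.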
The orientation of the flat direction, which is the direction of the massless pion, depends on the position along this vacuum manifold, and thus is different on the two sides of the transition. In this way the two puzzles above are simultaneously explained. Results for pion masses are plotted in Fig.~\ref{fig:MaxTwistPiMasses}. We choose the same parameters for the plots as for the untwisted case, Fig.~\ref{fig:UnTwistPiMasses}, so as to allow a clear comparison. The figures illustrate the discussion given above. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassLO.png} \caption{ $w'=c_\ell\epsilon^2=0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassmNLOlabeled.png} \caption{$w'=0$, $c_\ell\epsilon^2>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassAoki.png} \caption{$w'<0$, $c_\ell\epsilon^2= 0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassFirstlabeled.png} \caption{$w'>0$, $c_\ell\epsilon^2= 0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassmNLOAokilabeled.png} \caption{$w'<0$, $c_\ell\epsilon^2>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassmNLOFirstCritlabeled.png} \caption{$w'>0$, $c_\ell\epsilon^2=w'$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.28]{MaxTwistPiMassmNLOFirst3labeled.png} \caption{$w'>0$, $c_\ell\epsilon^2>w'$} \end{subfigure} \caption{\label{fig:MaxTwistPiMasses} Pion masses for maximally twisted fermions including the effects of both discretization ($w'\ne 0$) and non-degeneracy ($\epsilon\ne 0$). $m_{\pi_2}^2$ is shown by solid (blue) lines, $m_{\pi_3}^2$ (and $m_{\tilde\pi_3}^2$) by dotted (red) lines and $m_{\pi_1}^2$ (and $m_{\tilde\pi_1}^2$) by dashed (green) lines.
Not all lines are visible in some figures due to degeneracies. Mixing of pions occurs only within the CP-violating phase in Figs. (e) and (g). Explicit expressions for masses and mixing are given in the text. Vertical scales differ between the figures.} \end{figure} \section{Arbitrary Twist} \label{sec:arbtwist} In this section we give a brief discussion of the phase diagram at arbitrary twist. This allows us to understand how the phase diagrams presented above for untwisted and maximally-twisted quarks are related to one another. We focus on the phase diagram, and in particular, the position of the critical manifold where one or more pions are massless. For arbitrary twist, the potential is given in Eq.~(\ref{eq:Vtwist}). As before, minimization leads to $n_2=0$, so the potential depends only on $\theta$ and $\varphi_1$ (defined by $\cos\varphi_1=n_1$). The equations for stationary points are \begin{equation} - \widehat m \sin{\theta} + \cos\theta\left[\widehat \mu \cos\varphi_1 + 2\sin{\theta}(c_\ell\epsilon^2\sin^2\!\varphi_1-w')\right] = 0\,, \label{eq:partialVthetab} \end{equation} and Eq.~(\ref{eq:partialVtheta1}). We focus on the case when both $\widehat m$ and $\widehat \mu$ are non-zero, since the special cases when one of these vanish have been discussed above. When $|\widehat \mu|,|\widehat m|\gg c_\ell\epsilon^2,|w'|$ the solution which minimizes the potential has \begin{equation} n_1=\cos\varphi_1=1,\ \ n_3=\sin\varphi_1,\ \ \tan{\theta}\approx\frac{\widehat \mu}{\widehat m}\,. \end{equation} The last equation becomes an equality in the continuum limit, and simply describes how the condensate twists to compensate the twist in the mass. Discretization errors (here proportional to $w'$) lead to a small deviation in $\theta$ from this continuum result. We do not quote the analytic form as it is not illuminating. 
In fact, the result for $\theta$ turns out to be independent of the non-degeneracy $\epsilon$, so the results for the condensate given for the degenerate theory in Refs.~\cite{Munster:2004am,Scorzato:2004da,Sharpe:2004ps} remain valid in this phase. This phase is the extension of the ``uncolored'' phases in Figs.~\ref{fig:untwistPhases} and \ref{fig:maxTwistPhases} to arbitrary twist. At a general position in this phase, the mass eigenstates are $\pi_1$, $\pi_2$ and $\pi_3$ [using the parametrization of Eq.~(\ref{eq:axialparam})] and all have different masses. As $\epsilon^2$ increases, we expect, based on the results of the previous two sections, that we will enter a phase which is connected to the CP-violating (pink) phases found above. This should have a condensate having components in both $n_1$ and $n_3$ directions, and $\theta$ taking non-extremal values. Indeed, if $\sin\theta$ and $\sin\varphi_1$ are both non-zero, Eq.~(\ref{eq:partialVtheta1}) is solved by \begin{equation} \sin{\theta}\cos\varphi_1 =\frac{\widehat \mu}{2c_\ell \epsilon^2}\,. \label{eq:arbtwist1} \end{equation} This requires that $2c_\ell\epsilon^2\ge|\widehat \mu|$. Inserting this in Eq.~(\ref{eq:partialVthetab}) then yields \begin{equation} \cos{\theta}=\frac{\widehat m}{2(c_\ell \epsilon^2-w')}\,, \label{eq:arbtwist2} \end{equation} which is valid if $|\widehat m|\le 2(c_\ell\epsilon^2-w')$. The solution given by Eqs.~(\ref{eq:arbtwist1}) and (\ref{eq:arbtwist2}) turns out to give the absolute minimum of the potential where it is valid. Its boundary with the continuum-like phase occurs when $|\cos\varphi_1|=1$, and is thus described by \begin{equation} \left(\frac{\widehat m}{2(c_\ell\epsilon^2-w')}\right)^2 + \left(\frac{\widehat \mu}{2c_\ell\epsilon^2}\right)^2 = 1 \,. \end{equation} For fixed $\epsilon$, this is an ellipse in the $\widehat m$, $\widehat \mu$ plane. One pion ($\pi_3$) is massless along this critical surface.
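Explicitly, the elliptical boundary follows by setting $\cos\varphi_1=\pm1$ in Eq.~(\ref{eq:arbtwist1}) and eliminating $\theta$ using Eq.~(\ref{eq:arbtwist2}):

```latex
\begin{equation}
1 = \sin^2\theta + \cos^2\theta
= \left(\frac{\widehat\mu}{2c_\ell\epsilon^2}\right)^2
+ \left(\frac{\widehat m}{2(c_\ell\epsilon^2-w')}\right)^2\,,
\end{equation}
```

which is the critical surface quoted above.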
Within the CP-violating phase all pions are massive, with the mass eigenstates being $\pi_2$ and a mixture of $\pi_1$ and $\pi_3$. The general expressions for these masses are uninformative, and we quote only the results along the boundary of this phase. Here, in addition to the massless $\pi_3$, we find \begin{align} &m_{{\pi}_1}^2= 2c_\ell\epsilon^2- \frac{2w'\widehat \mu^2}{(2c_\ell \epsilon^2)^2}\,, && m_{\pi_2}^2= 2c_\ell\epsilon^2\,. \end{align} \bigskip The only other critical lines are those we found at maximal twist, namely at $\widehat m=0$, $\widehat \mu=\pm 2w'$ and $c_\ell\epsilon^2\le w'$. \bigskip The position of the critical manifold resulting from these considerations is shown in Fig.~\ref{fig:generalTwistPhases} for both scenarios and in the continuum. The CP-violating phases lie within the (distorted) cone-shaped regions. The contour plots show how the circular contours of the continuum are distorted by discretization effects into ellipses. We note that, in the first-order scenario shown in Fig.~\ref{fig:generalTwistPhases}c, if one passes through any point in the rectangular region in the $(\widehat m,\epsilon)$ plane between the two critical lines, there is a first-order transition at which the condensate changes discontinuously.
\vfill \newpage \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.6]{ManifoldAoki.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourAoki.png} \caption{Aoki scenario ($w'<0$)} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.6]{ManifoldCont.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourCont.png} \caption{Continuum ($w'=0$)} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.6]{ManifoldFirst.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourFirst.png} \caption{First-order scenario ($w'>0$)} \end{subfigure} \caption{\label{fig:generalTwistPhases} Location of the critical manifold for arbitrary twist. Results are shown only for $\epsilon>0$ since the phase diagrams are symmetric under reflection in the $\epsilon=0$ plane. The left panels show 3-d plots, the right panels contour plots. For $w'>0$, the contour plots do not include the two critical lines which reach down to the $\epsilon=0$ plane. The scale used for $\widehat m$ and $\widehat \mu$ is the same, while that for $\epsilon$ is arbitrary. See text for the equations describing the critical manifold.} \end{figure} \section{Higher order} \label{sec:higher} In this section we consider the effect on the previous results of the inclusion of the next highest order terms in our power counting, i.e. those scaling as $a^3\sim m a$. At this order we can still determine the vacuum using the classical potential of the chiral theory. The ${\cal O}(m a)$ term in this potential is standard, see, e.g. Ref.~\cite{Rupak:2002sm}. The ${\cal O}(a^3)$ terms have been discussed for $\epsilon=0$ in Ref.~\cite{Sharpe:2005rq}; the results carry over unchanged to $\epsilon\ne0$ since the first additional term involving $\epsilon$ scales as $a\epsilon^2$ and is of higher order in our power-counting. 
The relevant additional terms entering the potential are \begin{equation} \mathcal{V}_{a^3}= -\frac{w f^2}{32 W_0 a} \tr(\chi^\dagger \Sigma + \Sigma^\dagger \chi) \tr(A^\dagger \Sigma + \Sigma^\dagger A) - \frac{w_3 f^2}{(8 W_0 a)^3} \left[\tr(A^\dagger \Sigma +\Sigma^\dagger A)\right]^3 \,, \end{equation} where $w$ and $w_3$ are new LECs. There is also a term proportional to $\tr(A^\dagger A)\tr(A^\dagger \Sigma +\Sigma^\dagger A)$, but this can be removed by (yet another) redefinition of $\chi$. Inserting our standard parametrization $\left<\Sigma\right>= \exp(i\theta\vec{n} \cdot \vec{\tau})$, and combining the results with that from the LO potential, we obtain \begin{equation} -\frac{\mathcal{V}_{a^2,\ell_7,a^3}}{f^2} = (\widehat m\cos{\theta}+\widehat \mu n_1 \sin{\theta})(1+w \cos{\theta}) +c_\ell \epsilon^2 n_3^2 \sin^2{\theta} +w'\cos^2{\theta} + w_3 \cos^3{\theta} \,. \end{equation} The new LECs should satisfy $|w|\ll 1$ and $|w_3|\ll |w'|,|c_\ell\epsilon^2|$ in order to be consistent with our power counting. We begin by considering the untwisted theory, $\widehat \mu=0$, where the phase diagram and pion masses can be determined analytically. In this case $\widehat m=\chi_\ell$. As previously, the potential is minimized with $n_3=1$, so that \begin{equation} -\frac{\mathcal{V}_{a^2,\ell_7,a^3}}{f^2} \longrightarrow \chi_\ell\cos{\theta}(1+w\cos{\theta}) +c_\ell \epsilon^2 \sin^2{\theta} +w'\cos^2{\theta} +w_3\cos^3{\theta} \,. \end{equation} The stationary points satisfy \begin{equation} \sin\theta\left[ \chi_\ell -2(c_\ell \epsilon^2 - w' - \chi_\ell w)\cos{\theta} +3w_3 \cos^2{\theta}\right]=0\,, \label{eq:higherorderstat} \end{equation} which is solved by $\sin\theta=0$ (i.e. giving the usual continuum solutions with $\cos\theta=\pm 1$) and by the solutions to the quadratic function of $\cos\theta$ in parentheses. The latter will lead to CP-violating vacua. \begin{figure}[tb!]
\centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.35]{Oa3Aokiwlabeled.png} \caption{Aoki Scenario ($w'<0$)} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.35]{Oa3Firstwlabeled.png} \caption{First-Order Scenario ($w'>0$)} \end{subfigure} \caption{\label{fig:oa3wphases} Phase diagrams for untwisted Wilson quarks including the NLO ${\cal O}(ma)$ term proportional to $w$. Compare to LO results in Fig.~\ref{fig:untwistPhases}.} \end{figure} To simplify the discussion we consider the impact of the new terms separately. We first set $w_3=0$. Then we can take $w>0$ without loss of generality, since simultaneously changing $w\to -w$, $\theta\to\theta+\pi$ and $\chi_\ell\to -\chi_\ell$ leaves the potential unaffected. As the $w$ contribution to Eq.~(\ref{eq:higherorderstat}) leaves the function in parentheses linear in $\cos\theta$, the analysis is little changed from that at LO (see Sec.~\ref{sec:disc}). We find that the CP-violating solution, \begin{equation} \cos{\theta}=\frac{\chi_\ell}{2(c_\ell \epsilon^2 - w'-\chi_\ell w)} \,, \end{equation} minimizes the potential where it is valid, i.e. wherever $|\cos\theta|< 1$. The endpoints of this phase give second-order transitions occurring at masses \begin{equation} \chi_\ell = \pm \frac{2(c_\ell \epsilon^2 - w')}{1\pm 2 w}\,. \end{equation} Thus the phase boundaries are no longer symmetric with respect to $\chi_\ell=0$. As in the LO case, if $w'>0$ and $c_\ell \epsilon^2 < w'$, the transition becomes first order (with the $w$ term having no impact since the transition occurs at $\chi_\ell=0$). The resultant phase diagrams are shown in Fig.~\ref{fig:oa3wphases}. We have also calculated the pion masses. In the CP-conserving phases the results are \begin{align} m_{\pi^0}^2&= |\chi_\ell|(1 + {\rm sign}(\chi_\ell) 2w) - 2 (c_\ell \epsilon^2-w')\,, \\ m_{\pi^\pm}^2&= m_{\pi^0}^2 + 2 c_\ell \epsilon^2\,. 
\end{align} The only change from the LO results, Eqs.~(\ref{eq:mpi0untwistLO}) and (\ref{eq:mpipuntwistLO}), is that the slope with respect to $\chi_\ell$ is no longer symmetric when $\chi_\ell$ changes sign. In the CP-violating phases we find \begin{equation} m_{\pi^0}^2= 2( c_\ell \epsilon ^2 -w'-\chi_\ell w) \sin^2\theta \ \ {\rm and}\ \ m_{\pi^\pm}^2= 2c_\ell \epsilon^2\,, \end{equation} where again only the former result is changed. The resulting pion masses are shown in Fig.~\ref{fig:UnTwistOa3wPiMasses}, and show clearly the above-mentioned asymmetry. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3AokiwsmallepPiMasslabeled.png} \caption{$w'<0$, $c_\ell \epsilon^2=0$, $w>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3FirstwsmallepPiMass.png} \caption{$w'>0$, $c_\ell \epsilon^2=0$, $w>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3AokiwlargeepPiMasslabeled.png} \caption{$w'<0$, $c_\ell \epsilon^2>0$, $w>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3FirstwcritepPiMass.png} \caption{ $c_\ell \epsilon^2=w'>0$, $w>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3FirstwlargeepPiMasslabeled.png} \caption{ $c_\ell \epsilon^2>w'>0$, $w>0$} \end{subfigure} \caption{\label{fig:UnTwistOa3wPiMasses} Pion masses for untwisted Wilson fermions including the effects of the NLO $w$ term with $w>0$ (but with $w_3=0$). The figures should be compared to the LO results in Figs.~\ref{fig:UnTwistPiMasses}(c-g), respectively. See Fig.~\ref{fig:UnTwistPiMasses} also for notation.} \end{figure} We now consider the impact of the $w_3$ term, setting $w=0$. Again, without loss of generality, we can assume $w_3>0$. 
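To make the subsequent algebra explicit: on the CP-violating branch, $\sin\theta\ne 0$, setting $w=0$ and writing $x\equiv\cos\theta$, the stationarity condition reduces to the quadratic
\begin{equation}
3 w_3\, x^2 - 2 \left(c_\ell \epsilon^2 - w'\right) x + \chi_\ell = 0 \,,
\end{equation}
whose two roots are the CP-violating stationary points.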
The CP-violating stationary points are now obtained from Eq.~(\ref{eq:higherorderstat}) by solving a quadratic equation, leading to the solutions \begin{equation} \cos{\theta_\pm}= \frac{(c_\ell \epsilon^2 - w') \pm \sqrt{(c_\ell \epsilon^2 - w')^2 - 3\chi_\ell w_3}}{3w_3} \,. \label{eq:twoCPsol} \end{equation} It is straightforward to see from the properties of a cubic that, since $w_3>0$, only the $\theta_-$ solution can lead to a minimum of the potential. Whether it does lead to a minimum is a more subtle question than in the LO analysis. We begin by discussing the limit of small $|w_3|$. Specifically, if we assume $|c_\ell \epsilon^2-w'|\sim |\chi_\ell| \gg |w_3|$, the square root in Eq.~(\ref{eq:twoCPsol}) can be expanded in powers of $w_3$. It is then straightforward to show that one recovers the LO results aside from small corrections proportional to $|w_3/(c_\ell \epsilon^2-w')|$. In particular, if $c_\ell \epsilon^2-w' > 0$ there is a CP-violating phase ending in second-order transitions to continuum-like phases, while if $c_\ell \epsilon^2-w' < 0$ there is a first-order transition. The positions of these transitions are, however, shifted slightly by the $w_3$ term. The boundaries of the CP-violating phase occur when $\cos\theta_-=\pm 1$ which gives \begin{equation} \chi_\ell= \pm 2 (c_\ell \epsilon^2-w') - 3 w_3\,, \end{equation} without any ${\cal O}(w_3^2)$ corrections. In words, the boundaries are simply offset from the LO result, Eq.~(\ref{eq:secondboundary}), by $-3 w_3$. In the first-order scenario, the transition occurs at the value of $\chi_\ell$ such that the potentials at $\cos\theta=\pm1$ agree. This happens when \begin{equation} \chi_\ell=-w_3\,, \end{equation} so that the first-order transition line is offset from the LO result $\chi_\ell=0$ by $-w_3$ (again, without any higher-order corrections). More interesting changes occur when $|c_\ell\epsilon^2-w'|\sim |w_3|$. 
Note that this does not require that $w_3$ be large, but rather that there is a cancellation between the $c_\ell\epsilon^2$ and $w'$ terms. Here we encounter a phenomenon first noted at $\epsilon=0$ in Ref.~\cite{Sharpe:2005rq}: one can have a {\em first-order} transition from the continuum-like phase into the CP-violating phase, followed by a second-order transition to the other continuum-like phase. This occurs when the local minimum at $\theta_-$ (with $|\cos\theta_-|<1$ and $\cos\theta_-$ real) has the same potential as that at $\cos\theta=1$. Then, as $\chi_\ell$ is reduced, $\theta$ jumps from $\theta=0$ to $\theta_-$. This is possible with a cubic potential, but not with a quadratic. Solving \begin{equation} \mathcal{V}_{a^2,\ell_7,a^3}(\theta_-)= \mathcal{V}_{a^2,\ell_7,a^3}(0) \end{equation} leads to the following equation for the first-order boundary \begin{equation} \chi_\ell = \frac{(w'-c_\ell \epsilon^2-3w_3)(w'-c_\ell\epsilon^2+w_3)} {4 w_3}\,. \label{eq:newfirstboundary} \end{equation} As one moves along this boundary $\cos\theta_-$ varies. The boundary ends when either $\cos\theta_-=1$, so there is no jump in $\theta$, and the transition becomes second-order, or when $\cos\theta_-=-1$, so there is only a first-order transition without the subsequent CP-violating phase. Combining Eqs.~(\ref{eq:twoCPsol}) and (\ref{eq:newfirstboundary}) we find that the transition becomes second-order at \begin{equation} \chi_\ell = c_\ell \epsilon^2-w'= 3 w_3\,, \end{equation} while it becomes first-order at \begin{equation} \chi_\ell = c_\ell \epsilon^2-w'= -w_3\,. \end{equation} The first of these equations can be satisfied if $w'> -3 w_3$, and so reaches from the first-order scenario ($w'>0$) into a small region of the Aoki scenario. The second requires $w'> w_3$ and thus occurs only in the first-order scenario. These results lead to the phase diagrams shown in Fig.~\ref{fig:untwistnumericalOa3Phases1}. 
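The degeneracy condition defining this first-order boundary can be checked by direct evaluation of the untwisted potential. A minimal sketch in Python (the LEC values are arbitrary illustrations chosen in the regime $|c_\ell\epsilon^2-w'|\sim w_3$ with $w=0$, not values used for the figures) verifies that, on the boundary, the CP-violating minimum at $\theta_-$ is exactly degenerate with the continuum-like minimum at $\theta=0$:

```python
import math

# Illustrative values in the regime |c_eps2 - w'| ~ w_3 (with w = 0):
w3, wp, c_eps2 = 0.01, 0.005, 0.0
A = c_eps2 - wp                      # the combination (c_l eps^2 - w')

# First-order boundary, Eq. (newfirstboundary):
chi = (wp - c_eps2 - 3 * w3) * (wp - c_eps2 + w3) / (4 * w3)

# CP-violating minimum: the theta_- branch of Eq. (twoCPsol):
x_minus = (A - math.sqrt(A * A - 3 * chi * w3)) / (3 * w3)

def P(x):
    """-V/f^2 as a function of x = cos(theta) (untwisted, w = 0)."""
    return chi * x + c_eps2 * (1 - x * x) + wp * x * x + w3 * x ** 3

assert -1 < x_minus < 1
# On the boundary the two minima of V are degenerate...
assert abs(P(1.0) - P(x_minus)) < 1e-12
# ...and both are global maxima of P (= minima of V) on [-1, 1]:
grid_max = max(P(-1 + i / 50_000) for i in range(100_001))
assert grid_max <= P(1.0) + 1e-9
```

For these values one finds $\cos\theta_-=-0.75$, so the order parameter jumps from $\theta=0$ directly into the interior of the CP-violating phase, as described in the text.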
We see that the changes due to the $w_3$ term are more substantive than those due to the $w$ term. \begin{figure}[tb!] \centering \begin{subfigure}{1.0\textwidth} \centering \includegraphics[scale=.35]{Oa3Aoki2w3labeled.png} \caption{\centering Aoki scenario with $w' < -3 w_3 < 0$} \end{subfigure} \begin{subfigure}{1.0\textwidth} \centering \includegraphics[scale=.35]{Oa3Aoki1w3labeled.png} \caption{\centering Aoki or first-order scenario with $-3 w_3 < w' < w_3$ (and $w_3>0$)} \end{subfigure} \begin{subfigure}{1.0\textwidth} \centering \includegraphics[scale=.35]{Oa3Firstw3labeled.png} \caption{\centering First-order scenario with $w'>w_3>0$; $\cos\theta$ in pink region is as is in (a) and (b)} \end{subfigure} \caption{\label{fig:untwistnumericalOa3Phases1} Phase diagrams for untwisted Wilson fermions including the NLO ${\cal O}(a^3)$ term proportional to $w_3$. Compare to LO results in Fig.~\ref{fig:untwistPhases}.} \end{figure} We show the corresponding pion masses in Figs.~\ref{fig:UnTwistOa3w3PiMassesA}-\ref{fig:UnTwistOa3w3PiMassesC}; for the sake of brevity we do not quote the analytic forms. Fig.~\ref{fig:UnTwistOa3w3PiMassesA} shows two ``slices'' through the phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}a. These should be compared to the LO results in Figs.~\ref{fig:untwistPhases}c and \ref{fig:untwistPhases}e, respectively. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Aoki2w3smallepPiMasslabeled.png} \caption{ $c_\ell \epsilon^2=0$, $-w'<-3w_3<0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Aoki2w3largeepPiMasslabeled.png} \caption{ $c_\ell \epsilon^2>0$, $-w'<-3w_3<0$} \end{subfigure} \caption{\label{fig:UnTwistOa3w3PiMassesA} NLO pion masses for untwisted Wilson fermions with $w_3>0$ and $w=0$. Results are for the Aoki scenario with $w' < -3 w_3 < 0$, corresponding to the phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}a. 
Notation as in Fig.~\ref{fig:UnTwistPiMasses}.} \end{figure} In Fig.~\ref{fig:UnTwistOa3w3PiMassesB} we show two slices through the phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}b. The first, at $\epsilon=0$, shows the first-order transition, at which all pion masses are discontinuous. The charged pions become massless in the CP-violating/Aoki phase, while the neutral pion is massive. In the second slice, for which $\epsilon$ satisfies $0<c_\ell \epsilon^2< w'+3w_3$, the discontinuities remain, but all pions are massive in the CP-violating phase (except at the lower boundary where the neutral pion mass vanishes). Once $c_\ell\epsilon^2\ge w'+3 w_3$, the pion masses behave as in Fig.~\ref{fig:UnTwistOa3w3PiMassesA}b. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Aoki1w3smallepPiMasslabeled.png} \caption{$c_\ell \epsilon^2=0$, $-3 w_3 < w' < w_3$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Aoki1w3midepPiMasslabeled.png} \caption{$0<c_\ell \epsilon^2< w'+3w_3$, $-3 w_3 < w' < w_3$} \end{subfigure} \caption{\label{fig:UnTwistOa3w3PiMassesB} Examples of NLO pion masses for untwisted Wilson fermions with $w_3>0$ and $w=0$. Results are for $-3 w_3 < w' < w_3$, corresponding to the phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}b.} \end{figure} In Fig.~\ref{fig:UnTwistOa3w3PiMassesC} we show four slices through the phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}c. The first (at $\epsilon=0$) shows how the $w_3$ term leads to a discontinuity in the pion masses at the first-order transition, unlike at LO. This was first observed in Ref.~\cite{Sharpe:2005rq}. For non-zero $\epsilon$, the charged and neutral pions are no longer degenerate, and both have a discontinuity. When one reaches $c_\ell \epsilon^2=w'-w_3$, the neutral pion becomes massless at the transition point, as shown in the second slice. This is the beginning of the CP-violating phase.
As $\epsilon^2$ increases further, one has both first and second-order transitions, as shown in the third slice. The final slice shows the value of $\epsilon^2$ at which the first-order transition turns into a second-order transition. For larger values of $\epsilon^2$ the pion masses behave as in Fig.~\ref{fig:UnTwistOa3w3PiMassesA}b. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Firstw3smallepPiMasslabeled.png} \caption{ $c_\ell \epsilon^2=0$, $w'> w_3>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Firstw3crit1epPiMasslabeled.png} \caption{ $c_\ell \epsilon^2=w'-w_3$, $w'>w_3>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Firstw3midepPiMasslabeled.png} \caption{ $w'-w_3<c_\ell \epsilon^2< w'+3w_3$, $w'>w_3>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.32]{Oa3Firstw3crit2epPiMasslabeled.png} \caption{ $c_\ell \epsilon^2=w'+3w_3$, $w'>w_3>0$} \end{subfigure} \caption{\label{fig:UnTwistOa3w3PiMassesC} NLO pion masses for untwisted Wilson fermions with $w_3>0$ and $w=0$. Results are for the first-order scenario with $w' > w_3$, corresponding to the phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}c.} \end{figure} The higher-order analysis in the twisted case is more complicated. Maximal twist no longer occurs, in general, at $\widehat m=0$, so one is forced to do the analysis for both $\widehat m$ and $\widehat \mu$ non-vanishing. In practice, this requires numerical minimization of the potential. The resulting phase diagram and pion masses for $\epsilon=0$ have been discussed in detail in Ref.~\cite{Sharpe:2005rq}. The addition of isospin-breaking leads both to small quantitative changes and to qualitative changes in small regions of the phase plane. We restrict ourselves here to showing how the NLO terms impact the critical manifold (on which at least one pion is massless).
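As an illustration of this numerical minimization, a minimal brute-force sketch is given below (the grid resolutions and LEC values are arbitrary illustrations, not those used for the figures; $n_2$ is set to zero, which is justified for $c_\ell\epsilon^2\ge 0$ since only $n_1$ and $n_3^2$ enter the potential and the $n_3^2$ term is then maximized):

```python
import math

def potential(theta, n1, m_hat, mu_hat, c_eps2=0.0, wp=0.0, w=0.0, w3=0.0):
    """Returns -P, minus the bracket of the chiral potential, so that the
    vacuum is a minimum.  Uses n2 = 0 and n3^2 = 1 - n1^2 (valid for
    c_eps2 >= 0, where the n3^2 term is maximized)."""
    ct, st = math.cos(theta), math.sin(theta)
    n3sq = 1.0 - n1 * n1
    P = ((m_hat * ct + mu_hat * n1 * st) * (1.0 + w * ct)
         + c_eps2 * n3sq * st * st + wp * ct * ct + w3 * ct ** 3)
    return -P

def vacuum(m_hat, mu_hat, **lecs):
    """Brute-force grid minimization over theta in [0, pi], n1 in [-1, 1]."""
    best = None
    for i in range(4001):
        theta = math.pi * i / 4000
        for j in range(101):
            n1 = -1.0 + j / 50
            v = potential(theta, n1, m_hat, mu_hat, **lecs)
            if best is None or v < best[0]:
                best = (v, theta, n1)
    return best[1], best[2]

# LO check: with all NLO LECs zero, tan(theta) = mu_hat/m_hat and n1 = 1.
theta0, n1_0 = vacuum(0.3, 0.4)
assert abs(theta0 - math.atan2(0.4, 0.3)) < 2e-3 and n1_0 == 1.0

# With w != 0 the vacuum angle at m_hat = 0 is no longer pi/2,
# illustrating that maximal twist is shifted away from m_hat = 0:
theta_w, _ = vacuum(0.0, 0.4, w=0.05)
assert abs(math.cos(theta_w)) > 0.02
```

This grid search is only a stand-in for the actual minimization used to produce the figures; it suffices to reproduce the qualitative statement that the $w$ term displaces the point of maximal twist.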
The Aoki and first-order scenarios are shown, respectively, in Figs.~\ref{fig:numbManifoldAoki} and \ref{fig:numbManifold}. \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ManifoldAoki0.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourAoki0.png} \caption{$w_3=w=0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ManifoldAokiw.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourAokiw.png} \caption{$w_3=0$, $w>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ManifoldAokiw3.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourAokiw3.png} \caption{$w'< -3w_3<0$, $w=0$} \end{subfigure} \caption{\label{fig:numbManifoldAoki} Location of the critical manifold in the Aoki scenario ($w'<0$) including NLO terms. Notation as in Fig.~\ref{fig:generalTwistPhases}.} \end{figure} \begin{figure}[tb!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ManifoldFirst0.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourFirst0.png} \caption{$w_3=w=0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ManifoldFirstw.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourFirstw.png} \caption{$w_3=0$, $w>0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ManifoldFirstw3.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[scale=.5]{ContourFirstw3.png} \caption{$w'>w_3>0$, $w=0$} \end{subfigure} \caption{\label{fig:numbManifold} Location of the critical manifold in the first-order scenario ($w'>0$) including NLO terms. Notation as in Fig.~\ref{fig:generalTwistPhases}.} \end{figure} The main effect is a distortion of the elliptical cross sections of the critical manifold. 
In addition, the two vertical critical lines in the first-order scenario are shifted slightly in position. The most significant qualitative change is the appearance of a hole in the manifold when $w'>w_3>0$, which is (barely) visible above the $\widehat \mu=0$ axis in the right panel of Fig.~\ref{fig:numbManifold}c. This occurs because of the extended first-order transition region seen in the untwisted phase diagram of Fig.~\ref{fig:untwistnumericalOa3Phases1}c. We end this section by addressing the question of whether higher-order effects move unphysical phases closer to the point with physical masses. The answer depends on the signs of $w$ and $w_3$. For untwisted fermions, the results of Figs.~\ref{fig:oa3wphases}-\ref{fig:numbManifold} show that positive $w$ and $w_3$ move unphysical phases away from the physical point. Conversely, negative values of these LECs would move the phases closer. For twisted-mass fermions the answer is more complicated, depending on the choice of twist angle. \clearpage \section{Conclusions} \label{sec:concl} In this work we have studied how using non-degenerate up and down quarks changes the phase structure caused by competition between quark mass and discretization effects. We draw two main conclusions. First, the continuum CP-violating phase is continuously connected to the Aoki phase induced by discretization effects. Second, discretization effects can move the theory with physical quark masses closer to, or even into, unphysical phases. Whether this happens depends both on the twist angle and on the details of the discretization (the latter impacting the values of the LECs $w'$, etc.). Our overall message is that a complicated phase structure lies in the vicinity of the physical point and simulations should be careful to avoid unphysical phases. For twisted-mass fermions our results for pion masses extend those of Ref.~\cite{Munster:2006yr} into the Aoki regime ($m\sim a^2$).
In the continuum-like phase, with both twisting and non-degeneracy, the eigenstates are $\pi_1$, $\pi_2$ and $\pi_3$, with all three pions having different masses. Our formulae may be of use in removing the discretization effects from masses determined in simulations, although we stress again that ${\cal O}(m^2)$ terms dropped in our power counting may be important if precision fitting is required. One shortcoming of this work is that it does not include electromagnetic effects. In the pion sector, these lead to isospin breaking that is generically larger than that from quark non-degeneracy, and can also impact the phase structure.\footnote{% Another generalization that one can consider is the inclusion of an isospin chemical potential. This has been discussed recently for degenerate quarks in Ref.~\cite{Kieberg:lat14}.} We will discuss the impact of electromagnetism in an upcoming work~\cite{inprep}, building upon the recent analysis of Ref.~\cite{Golterman:2014yha}. \section*{Acknowledgments} We thank Mario Kieberg and Jac Verbaarschot for discussions and comments. This work was facilitated through the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington. This work was supported in part by the United States Department of Energy grants DE-FG02-96ER40956 and DE-SC0011637.
\section{Introduction} \label{sec:Intr} The Green-function (GF) method is a powerful tool for solving the nuclear many-body problem (see Refs. \cite{Migdal67,SWW77}). General equations of this method are inherently non-linear. However, in practical applications, linearized versions of these equations are commonly used. In particular, the linearized Bethe-Salpeter equation for the response function is equivalent, under some additional assumptions, to the well-known random-phase approximation (RPA). The non-linearity in the nuclear-structure models is frequently contained only at the mean-field level where Hartree-Fock (HF) or Hartree-Fock-Bogoliubov (HFB) approximations and the energy density functional (EDF) theory are applied. Nevertheless, several approaches were developed where effects of non-linearity beyond the mean-field level were considered within the so-called self-consistent RPA (SCRPA, see Refs. \cite{Schuck1973,Dukelsky1990,Dukelsky1998,Schuck2016}) and within the standard GF method in the Bethe-Salpeter and in the Dyson equations (see, e.g., Refs. \cite{Immele1977b,Muraviev1991,Litvinova2015}). In the mean-field approach, e.g. in the RPA, the model space is restricted to one-particle--one-hole ($1p1h$) configurations. This is a reliable method to calculate energies and transition probabilities of low-lying collective states as well as the mean energies and total strengths of the high-lying giant resonances. For more details, like the spreading width of giant resonances, one has to extend the configuration space by considering e.g. two-particle--two-hole correlations \cite{Drozdz} or by coupling phonons to the $1p1h$ configurations in a shell model approach or in a self-consistent way, see Ref.~\cite{Tselyaev2016} and references therein. In the phonon-coupling models used so far, the results depend on the number of phonons considered; moreover, non-collective states may enter the phonon space, which leads to a violation of the Pauli principle.
In a recent publication \cite{Lyutorovich2017} we presented a scheme to select the most relevant phonons in order to avoid these shortcomings to some extent. This model is a modification of the \emph{Time Blocking Approximation} (TBA), Refs.~\cite{TBA89,KTT97,KST04}, which is one of the versions of the extended RPA developed within the GF method and the particle-phonon coupling model. In the present paper we develop a non-linear form of the TBA which we name \emph{Configuration Blocking Approximation} (CBA). This approach is connected with the self-consistent determination of the phonon space of the model. It assumes some additional restrictions imposed on the phonons included in this space. In our study we focus on the analysis of the influence of these restrictions on the convergence of solutions of the model equations with respect to enlarging the phonon space. In Sec.~\ref{sec:RenTBA} the formalism of the non-linear version of the TBA is introduced. In the first subsection \ref{sec:RPA} we summarize the conventional self-consistent RPA which is the basis for all extended models. Our new approach in its general form is presented in subsection \ref{sec:nTBA}. At first glance the equations look identical to the TBA equations. There is indeed only one decisive difference: The phonons which couple to the single-particle propagators are not the solutions of the RPA equation but the solutions of the TBA itself, which introduces the non-linearity. The first modification concerns the CBA which is presented in Sec.~\ref{sec:CBA}. Here we augment the non-linear model with additional restrictions imposed on the phonon space. The reduced form of this non-linear model is presented in Sections \ref{sec:CBAdiag} and \ref{sec:CBAjust}. The method of construction of the phonon space of the model is described in Sec.~\ref{sec:RPAspace}. In the short subsection \ref{sec:TBAlim} the relation between CBA and TBA is explicitly demonstrated.
In Sec.~\ref{sec:res} numerical results of our new approach are presented. We compare previous results with the present ones with the emphasis on the dependence on the size of the phonon configuration space. The conclusions are given in the last Section. \section{The formalism of non-linear TBA } \label{sec:RenTBA} \subsection{RPA as starting point and basis} \label{sec:RPA} Before presenting the involved TBA theories, we summarize here the RPA which serves as starting point for the further development and provides an appropriate basis of the description. Doing so, we introduce ``en passant'' also the basic notations used henceforth. RPA determines the excitation spectrum of a many-body system in the $1p1h$ vicinity of the ground state. There are several ways to write the RPA equations. We use here a formulation in terms of the response operator $R^{\mbss{RPA}}_{\vphantom{1}}\equiv{R}^{\mbss{RPA}}_{12,34}$ which plays the role of a one-body operator in $1p1h$ space (the numerical indices stand for the sets of the quantum numbers of some single-particle basis). 
This then reads \begin{subequations} \begin{eqnarray} R^{\mbss{RPA}}_{\vphantom{1}}(\omega) &=& -\bigl(\,\omega - \Omega^{\mbss{RPA}}_{\vphantom{1}}\bigr)^{-1} M^{\mbss{RPA}}_{\vphantom{1}}, \label{rfrpa} \\ \Omega^{\mbss{RPA}}_{12,34} &=& \Omega^{(0)}_{12,34} + \sum_{56} M^{\mbss{RPA}}_{12,56}\,{V}^{\vphantom{*}}_{56,34}\,, \label{omrpa} \\ \Omega^{(0)}_{12,34} &=& h^{\vphantom{*}}_{13}\,\delta^{\vphantom{*}}_{42} - \delta^{\vphantom{*}}_{13}\,h^{\vphantom{*}}_{42}\,, \label{omrpaz} \\ M^{\mbss{RPA}}_{12,34} &=& \delta^{\vphantom{*}}_{13}\,\rho^{\vphantom{*}}_{42} - \rho^{\vphantom{*}}_{13}\,\delta^{\vphantom{*}}_{42}\,, \label{mrpa} \\ h^{\vphantom{*}}_{12} &=& \frac{\delta E[\rho]}{\delta\rho^{\vphantom{*}}_{21}}\,, \\ {V}^{\vphantom{*}}_{12,34} &=& \frac{\delta^2 E[\rho]} {\delta\rho^{\vphantom{*}}_{21}\,\delta\rho^{\vphantom{*}}_{34}}\,, \label{screl} \end{eqnarray} \end{subequations} where $E[\rho]$ is the EDF of the model, the $\Omega$ are Hamiltonian matrices in $1p1h$ space (thus carrying four indices), $M^{\mbss{RPA}}_{\vphantom{1}}$ is the RPA norm matrix, $\rho$ is the single-particle density matrix, $h$ is the single-particle Hamiltonian, and $V$ is the residual interaction in the particle-hole channel. In the following we will also use the indices $p$ and $h$ to label the single-particle states of particles and holes, respectively, in the basis in which the density matrix $\rho$ and the Hamiltonian $h$ are diagonal (so that $h^{\vphantom{*}}_{pp}=\varepsilon^{\vphantom{*}}_{p}$ and $h^{\vphantom{*}}_{hh}=\varepsilon^{\vphantom{*}}_{h}$). The poles of $R^{\mbss{RPA}}_{\vphantom{1}}(\omega)$ determine the RPA spectrum of eigenfrequencies $\omega^{\vphantom{*}}_{n}$. 
The RPA response operator can be expressed explicitly in terms of the spectral representation \begin{equation} R^{\mbss{RPA}}_{\vphantom{1}}(\omega) = -\sum_{n} \frac{\,\sigma^{\vphantom{*}}_{n}|\,Z^{n}\rangle \langle Z^{n}|} {\omega - \omega^{\vphantom{*}}_{n}}\;, \label{rfrpaexp} \end{equation} where $\sigma^{\vphantom{*}}_{n}=\mbox{sgn}(\omega^{\vphantom{*}}_{n})$ and $|\,Z^{n} \rangle$ is the $n$-th RPA eigenstate, whose details are given by the explicit $1p1h$ expansion coefficients $Z^{n}_{12}$. Inserting the spectral expansion into the response equation (\ref{rfrpa}) yields the RPA equations explicitly in terms of the expansion coefficients \begin{equation} \sum_{34} \Omega^{\mbss{RPA}}_{12,34}\,Z^{n}_{34} = \omega^{\vphantom{*}}_{n}\,Z^{n}_{12}\,. \label{rpaeve} \end{equation} Both forms, (\ref{rfrpa}) as well as (\ref{rpaeve}), are used in practice for the solution of the RPA equations. The set of all RPA eigenstates constitutes a basis in $1p1h$ space which is orthonormal \begin{subequations} \begin{equation} \langle\,Z^{n'}\,|\,M^{\mbss{RPA}}_{\vphantom{1}}|\,Z^{n}\rangle = \delta_{nn'}^{\vphantom{*}} \mbox{sgn}(\omega^{\vphantom{*}}_{n}) \label{zmz} \end{equation} and complete according to the closure relation \begin{equation} \sum_{n}\sigma^{\vphantom{*}}_{n}|\,Z^{n} \rangle \langle Z^{n}| = M^{\mbss{RPA}}_{\vphantom{1}} \;. \label{rpacr} \end{equation} \end{subequations} The RPA basis also allows a complete representation of matrix operators. Any matrix $A\equiv A^{\vphantom{*}}_{12,34}$ in the $1p1h$ space can be written as \begin{subequations} \label{eq:reprA} \begin{eqnarray} A &=& \sum_{nn'}|\,Z^{n} \rangle A^{\vphantom{*}}_{nn'} \langle Z^{n'}|\;, \label{rpaexp} \\ A^{\vphantom{*}}_{nn'} &=& \sigma^{\vphantom{*}}_{n}\sigma^{\vphantom{*}}_{n'}\, \langle Z^{n}|M^{\mbss{RPA}}_{\vphantom{1}} A\,M^{\mbss{RPA}}_{\vphantom{1}} |\,Z^{n'}\rangle\;.
\label{rnndef} \end{eqnarray} \end{subequations} \subsection{The non-linear TBA equations} \label{sec:nTBA} The RPA equations were forcefully closed by the quasi-boson approximation \cite{Rowebook,RingSchuck}. Releasing this restriction, they will couple to higher configurations which can be efficiently expanded in terms of $1p1h\otimes$phonon configuration, where ``phonon'' stands for a subset of RPA eigenstates which have large collective strength and so couple most strongly to the pure $1p1h$ states in the expansion. An explicit expansion in the full $1p1h\otimes$phonon space is extremely costly because it includes too many unimportant contributions. The phonon-coupling models simplify that task by maintaining a $1p1h$ expansion while including the temporary detours through $1p1h\otimes$phonon space by modifying the RPA interaction matrix. We use the phonon-coupling model here in the form of the TBA \cite{TBA89,KTT97,KST04}. In fact, we consider it here, in extension of previous applications, in its self-consistent form. This gives rise to a non-linear equation for the eigenstates $\nu$ in terms of $1p1h$ coefficients $z^{\nu}_{12}$ and eigenfrequency $\omega^{\vphantom{*}}_{\nu}$ which reads \begin{subequations} \label{eq:SCTBA} \begin{equation} \sum_{34} \Omega^{\mbss{CBA}}_{12,34} (\omega^{\vphantom{*}}_{\nu})\,z^{\nu}_{34} = \omega^{\vphantom{*}}_{\nu}\,z^{\nu}_{12} \,, \label{tbaeve} \end{equation} with \begin{eqnarray} \Omega^{\mbss{CBA}}_{12,34}(\omega) &=& \Omega^{\mbss{RPA}}_{12,34} +\sum_{56} M^{\mbss{RPA}}_{12,56}\,\bar{W}^{\vphantom{*}}_{56,34}(\omega) \,, \label{omtba} \\ \bar{W}^{\vphantom{*}}_{12,34}(\omega) &=& {W}^{\vphantom{*}}_{12,34}(\omega) - {W}^{\vphantom{*}}_{12,34}(0) \,. 
\label{wsub} \end{eqnarray} \end{subequations} The new ingredient in the non-linear TBA (the superscript CBA in the matrix $\Omega^{\mbss{CBA}}_{\vphantom{1}}(\omega)$ will be explained in Sec.~\ref{sec:CBA}) is the matrix $\bar{W}(\omega)$ which represents the induced interaction generated by the intermediate $1p1h\otimes$phonon configurations. The subtraction of $W(0)$ in Eq.~(\ref{wsub}) is necessary to avoid perturbation of the mean-field ground state \cite{Toe88,Gue93} and to ensure stability of solutions of the TBA eigenvalue equation (see \cite{Tselyaev2013}). The key point is that ${W}(\omega)$ is expanded in terms of the $1p1h\otimes$phonon configurations as \begin{subequations} \label{eq:Winduc} \begin{eqnarray} {W}^{\vphantom{*}}_{12,34}(\omega) &=& \sum_{c,\;\sigma}\,\frac{\sigma\,{F}^{c(\sigma)}_{12} {F}^{c(\sigma)*}_{34}} {\omega - \sigma\,\Omega^{\vphantom{*}}_{c}} \,, \label{wdef} \\ \Omega^{\vphantom{*}}_{c} &=& \varepsilon^{\vphantom{*}}_{p'} - \varepsilon^{\vphantom{*}}_{h'} + \omega^{\vphantom{*}}_{\nu} \,, \quad \omega^{\vphantom{*}}_{\nu}>0 \,, \label{omcdef} \end{eqnarray} \end{subequations} where $\sigma = \pm 1$, $\,c = \{p',h',\nu\}$ is a combined index for the $1p1h\otimes$phonon configurations, and $\nu$ is the phonon index. We emphasize that these are the TBA phonons as they emerge from the fully fledged TBA equation. These phonons enter the induced interaction through the $F$ amplitudes \begin{subequations} \begin{eqnarray} {F}^{c(+)}_{ph} &=& \delta^{\vphantom{*}}_{pp'}\,g^{\nu}_{h'h}\!-\! \delta^{\vphantom{*}}_{h'h}\,g^{\nu}_{pp'} \,, \label{fcdef} \\ g^{\nu}_{12} &=& \sum_{34} {V}^{\vphantom{*}}_{12,34}\,{z}^{\nu}_{34} \,, \label{gndef} \end{eqnarray} which obey the symmetry relations \begin{equation} {F}^{c(-)}_{12}={F}^{c(+)*}_{21},\qquad {F}^{c(-)}_{ph}={F}^{c(+)}_{hp}=0 \,.
\label{fcrel} \end{equation} \end{subequations} It is important to note that the larger expansion space which is implicitly contained in TBA has an impact on the normalization of the $z$ coefficients. The RPA norm (\ref{zmz}) is extended to \begin{subequations} \begin{eqnarray} \langle\,{z}^{\nu}\,|\,M^{\mbss{RPA}}_{\vphantom{1}} - W^{\prime}_{\nu}\,|\,{z}^{\nu} \rangle &=& \mbox{sgn}(\omega^{\vphantom{*}}_{\nu}) \,, \label{zmwz} \\ W^{\prime}_{\nu} &=& \biggl(\frac{d\,W(\omega)}{d\,\omega}\biggr)_{\omega\,=\,\omega_{\nu}} \,. \label{wnn2} \end{eqnarray} \end{subequations} The second term $\propto W^{\prime}_{\nu}$ is not connected with the non-linearity of the TBA but arises already in the conventional TBA. It accounts for the probability carried over to the space of $1p1h\otimes$phonon configurations. Correspondingly, the first term, covering the content of pure $1p1h$ states, is relatively reduced. These non-linear TBA equations are self-consistent because the ${z}^{\nu}_{\vphantom{1}}$ amplitudes and the energies $\omega^{\vphantom{*}}_{\nu}$ which emerge from the eigenvalue equation (\ref{tbaeve}) are fed back into the phonon coupling amplitudes ${F}$ and the energies $\Omega^{\vphantom{*}}_{c}$. This approach is thus superior to standard TBA and can be considered as the first iteration toward the full scheme. However, self-consistency involves subtle complications which inhibit immediate, naive solution of Eqs.~(\ref{eq:SCTBA},\ref{eq:Winduc}). The problem is solved by the configuration blocking outlined in the next subsection. At the same time, this delivers as an extra bonus an unambiguous and very efficient rule for confining the intermediate states to the most relevant phonons. \subsection{Configuration blocking approximation (CBA)} \label{sec:CBA} In the non-linear TBA described above, the following contradiction arises.
Equations of the ordinary TBA include configurations of the type $1p1h$ and $1p1h\otimes$phonon, where the phonons are determined within the RPA. If we replace the RPA phonons in the matrix $W(\omega)$ by the solutions of the TBA equation (\ref{tbaeve}), the resulting equations will implicitly include configurations of the type $3p3h$ and higher (see \cite{TBA89,KTT97,KST04}). This will be reflected in the spectrum of the TBA solutions, which acquires the huge spectral density of complex configurations. Feeding this back into the induced interaction $W$ becomes intractable. And, more importantly, it becomes contradictory, as configurations of this type go beyond the framework of the TBA. Already at the level of the $1p1h$ configurations of the RPA, we have to select the few most collective phonons to render the TBA manageable and consistent (see section \ref{sec:RPAspace}). But now we obtain a swarm of states which have more strength in higher configurations, i.e. in the $W^\prime$ term in the norm (\ref{zmwz}), than in their $1p1h$ heads. These states are clearly to be excluded. To formalize this decision, let us rewrite the normalization (\ref{zmwz}) in the form \begin{subequations} \begin{eqnarray} &&(z^{\nu})^2_{\mbsu{RPA}} + (z^{\nu})^2_{\mbsu{CC}} = 1 \,, \label{zmwz2} \\ &&(z^{\nu})^2_{\mbsu{RPA}} = \mbox{sgn}(\omega^{\vphantom{*}}_{\nu})\, \langle\,{z}^{\nu}\,|\,M^{\mbss{RPA}}_{\vphantom{1}}|\,{z}^{\nu} \rangle\,, \label{zzrpa} \\ &&(z^{\nu})^2_{\mbsu{CC}} = -\mbox{sgn}(\omega^{\vphantom{*}}_{\nu})\, \langle\,{z}^{\nu}\,|\,W^{\prime}_{\nu}\,|\,{z}^{\nu} \rangle \,. \label{zzcc} \end{eqnarray} \end{subequations} The term $(z^{\nu})^2_{\mbsu{RPA}}$ in Eq. (\ref{zmwz2}) represents the contribution of the $1p1h$ (RPA) components to the norm. The term $(z^{\nu})^2_{\mbsu{CC}}$ represents the contribution of the complex configurations. 
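To make the bookkeeping of this decomposition concrete, the following minimal numerical sketch verifies that the two contributions defined above add up to the extended norm (\ref{zmwz}). All quantities here are invented toy data (a diagonal metric and a diagonal $W^{\prime}$), not output of an actual TBA calculation:

```python
# Toy 1p1h space with two ph and two hp components (all numbers invented).
# The RPA metric M_RPA is +1 on ph components and -1 on hp components;
# the energy derivative W' of the induced interaction is taken diagonal
# and negative definite, so the complex-configuration content is positive.
m_rpa = [1.0, 1.0, -1.0, -1.0]
w_prime = [-0.25, -0.25, -0.25, -0.25]

# A toy eigenvector with positive eigenfrequency, normalized according to
# the extended norm <z| M_RPA - W' |z> = sgn(omega_nu) = +1, Eq. (zmwz).
z = [0.9, 0.3, 0.2, 0.1]
norm = sum((m - w) * x * x for m, w, x in zip(m_rpa, w_prime, z))
z = [x / norm ** 0.5 for x in z]

sgn = 1.0  # sgn(omega_nu) for a positive-frequency state
z2_rpa = sgn * sum(m * x * x for m, x in zip(m_rpa, z))    # Eq. (zzrpa)
z2_cc = -sgn * sum(w * x * x for w, x in zip(w_prime, z))  # Eq. (zzcc)

print(z2_rpa, z2_cc)  # the two contributions add up to 1, Eq. (zmwz2)
```

By construction the sum equals one for any toy vector; the relative size of the two terms is what decides the fate of the state in the blocking scheme discussed next.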
It is obvious that all states with dominant $(z^{\nu})^2_{\mbsu{CC}}$ must be discarded from the set of TBA phonons because they no longer contain sufficient collectivity to contribute to the induced interaction. To block these contributions, which are dominated by complex configurations, from entering the phonon space of the TBA we impose the condition \begin{subequations} \begin{equation} (z^{\nu})^2_{\mbsu{RPA}} > (z^{\nu})^2_{\mbsu{CC}}\,, \label{czz1} \end{equation} which together with Eq. (\ref{zmwz2}) means that \begin{equation} (z^{\nu})^2_{\mbsu{RPA}} > \zeta^2_{\mbsu{min}} \;,\quad \zeta^2_{\mbsu{min}}=\frac{1}{2} \;. \label{czz2} \end{equation} \end{subequations} We introduce the parameter $\zeta^2_{\mbsu{min}}$ to make the impact of the condition (\ref{czz2}) visible throughout the formalism. Only TBA states which satisfy Eq.~(\ref{czz2}) will be included in the induced interaction of the TBA. We refer to this model as the configuration blocking approximation (CBA). It is a combination of the non-linear TBA and the norm blocking determined by Eq.~(\ref{czz2}). The blocking condition (\ref{czz2}) can be formulated in terms of a blocking factor $f_{\nu}$ which we have to introduce into Eq.~(\ref{fcdef}), which now reads: \begin{subequations} \label{eq:fcph} \begin{equation} {F}^{c(+)}_{ph} = f_{\nu}\left(\delta^{\vphantom{*}}_{pp'}\,g^{\nu}_{h'h}\!-\! \delta^{\vphantom{*}}_{h'h}\,g^{\nu}_{pp'}\right) \,. \label{fcdefa} \end{equation} To automatically embody the blocking condition we put \begin{equation} f^2_{\nu} = \theta \bigl((z^{\nu})^2_{\mbsu{RPA}} - \zeta^2_{\mbsu{min}}\bigr) \,, \label{fcutdef} \end{equation} \end{subequations} where $\theta$ is the Heaviside step function. Thus, in a sense, one can consider $f^2_{\nu}$ as an occupation number for the phonons. At this point, however, the following difficulty arises. The TBA equations (\ref{eq:SCTBA}--\ref{eq:Winduc}) combined with the blocking condition (\ref{czz2}) pose a highly non-linear problem. 
It is thus not guaranteed that a unique solution exists. In fact, one may find several solutions. In the spirit of the dominance of $1p1h$ states, we select as the preferred solution the one which maximizes the total $1p1h$ content of the TBA {\it active} phonons [i.e. the phonons which enter the matrix $W(\omega)$]. We recall that the $1p1h$ content is defined as the contribution $(z^{\nu})^2_{\mbsu{RPA}}$ to the norm (\ref{zmwz}). Thus we additionally impose the criterion that we select the TBA solution which yields \begin{equation} \sum_{\nu_a} (z^{\nu_a})^2_{\mbsu{RPA}} = \mbox{max} \label{sumza} \end{equation} where the summation runs over the TBA active phonons only, i.e. those states $\nu_a$ which obey condition (\ref{czz2}). \subsection{CBA in diagonal approximation} \label{sec:CBAdiag} Even with the above-discussed restrictions imposed on the space of the TBA phonons, the exact solution of the system of Eqs. (\ref{eq:SCTBA}), (\ref{eq:Winduc}), (\ref{eq:fcph}), (\ref{gndef}), (\ref{fcrel}), and (\ref{zzrpa}) remains a rather difficult task. To simplify these equations further we make use of a diagonal approximation to the induced interaction $\bar{W}(\omega)$. Similarly to the case of the RPA, the CBA response function \begin{subequations} \begin{equation} R^{\mbss{CBA}}_{\vphantom{1}}(\omega) = -\bigl[\,\omega - \Omega^{\mbss{CBA}}_{\vphantom{1}}(\omega)\,\bigr]^{-1} M^{\mbss{RPA}}_{\vphantom{1}} \label{rftba} \end{equation} has a spectral representation \begin{equation} R^{\mbss{CBA}}_{\vphantom{1}}(\omega) = -\sum_{\nu} \frac{\,\sigma^{\vphantom{*}}_{\nu} |\,z^{\nu} \rangle \langle z^{\nu}|} {\omega - \omega^{\vphantom{*}}_{\nu}} \;, \label{rfcbaexp} \end{equation} \end{subequations} where $\sigma^{\vphantom{*}}_{\nu}=\mbox{sgn}(\omega^{\vphantom{*}}_{\nu})$. As stated in section \ref{sec:RPA}, any matrix $A$ in the $1p1h$ space can be written in terms of the RPA amplitudes $Z^{n}_{12}$ according to the representation (\ref{eq:reprA}). 
We apply this to the CBA response matrix. First, we recast the definition (\ref{rftba}) of the response operator into a defining equation \begin{equation} \bigl[\,\Omega^{\mbss{CBA}}_{\vphantom{1}}(\omega) - \omega\,\bigr] R^{\mbss{CBA}}_{\vphantom{1}}(\omega) = M^{\mbss{RPA}}_{\vphantom{1}} \label{rftbaeq} \end{equation} and write it explicitly as a matrix equation in the basis of the RPA states \begin{eqnarray} &&\sum_{n''}\bigl[(\omega - \omega^{\vphantom{*}}_{n})\,\delta^{\vphantom{*}}_{n,n''} - \sigma^{\vphantom{*}}_{n}\bar{W}^{\vphantom{*}}_{nn''}(\omega)\bigr]\, R^{\mbss{CBA}}_{n''n'}(\omega) \nonumber\\ &&= -\sigma^{\vphantom{*}}_{n}\,\delta^{\vphantom{*}}_{n,n'}\,, \label{rfnneq} \\ &&\bar{W}^{\vphantom{*}}_{nn'}(\omega) = \langle Z^{n}|\,\bar{W}^{\vphantom{*}}_{\vphantom{1}}(\omega)\,|Z^{n'} \rangle \,. \label{wnndef} \end{eqnarray} This equation is now solved in the diagonal approximation, yielding the approximate response \begin{equation} {\tilde R}^{\mbss{CBA}}_{nn'}(\omega) = - \frac{\sigma^{\vphantom{*}}_{n}\,\delta^{\vphantom{*}}_{n,n'}} {\omega - \omega^{\vphantom{*}}_{n} - \sigma^{\vphantom{*}}_{n}\bar{W}^{\vphantom{*}}_{nn}(\omega)} \,. \label{sold1} \end{equation} Furthermore, we note that the matrix element $\bar{W}^{\vphantom{*}}_{nn}(\omega)$ is composed, according to Eqs. (\ref{eq:Winduc}), as a sum of two terms \begin{equation} \bar{W}^{\vphantom{*}}_{nn}(\omega) = \sum_{\sigma = \pm 1} \bar{W}^{(\sigma)}_{nn}(\omega) \;. \label{wnnsum} \end{equation} It is known already from the RPA that the terms with negative frequency are very small for stable ground states \cite{Rowebook}, and only such situations are considered here. We assume that these negative-frequency contributions are not larger than the off-diagonal terms which are omitted in the diagonal approximation, and neglect them altogether. According to Eq.~(\ref{fcrel}), this means that the term $\bar{W}^{(\sigma)}_{nn}(\omega)$ with $\sigma = - \sigma^{\vphantom{*}}_{n}$ can be neglected. 
Thus we assume \begin{equation} \bar{W}^{\vphantom{*}}_{nn}(\omega) = \bar{W}^{(\sigma^{\vphantom{*}}_{n})}_{nn}(\omega) \,. \label{wndef} \end{equation} \subsection{Justification of the blocking value $\zeta^2_{\mbsu{min}}$} \label{sec:CBAjust} The diagonal approximation simplifies the mathematical structure of the response poles to an extent that we can substantiate the choice $\zeta^2_{\mbsu{min}}=1/2$ made in condition (\ref{czz2}) on formal grounds. For the following derivations, we exploit the fact that the TBA corrections are small compared to the leading RPA structure and label specifically $\nu\rightarrow(n,q)$ where $n$ stands for a certain RPA state which becomes the ``bandhead'' of the subsequent TBA structure and $q$ labels the many sub-states in the structure. Skipping the sum over $\sigma$ in Eq. (\ref{wnnsum}) simplifies the structure of the response function ${\tilde{R}}^{\mbss{CBA}}_{nn}(\omega)$ such that its poles $\tilde{\omega}^{\vphantom{*}}_{n,q}$ become the roots of the equation \begin{subequations} \label{poleq3} \begin{equation} \sigma^{\vphantom{*}}_n\tilde{\omega}^{\vphantom{*}}_{n,q}\biggl[ 1 + \sum_{c} \frac{|{\tilde F}^{c(\sigma_n)}_{n}|^2} {{\tilde \Omega}^{\vphantom{*}}_{c}\, ({\tilde \Omega}^{\vphantom{*}}_{c} - \sigma^{\vphantom{*}}_n\tilde{\omega}^{\vphantom{*}}_{n,q})}\biggr] = |\,\omega^{\vphantom{*}}_{n}|\,, \label{poleq} \end{equation} \begin{eqnarray} {\tilde F}^{c(\sigma)}_{n} &=& \sum_{12} Z^{n*}_{12} F^{c(\sigma)}_{12}, \label{fcndef}\\ {\tilde \Omega}^{\vphantom{*}}_{c} &=& \varepsilon^{\vphantom{*}}_{p'} - \varepsilon^{\vphantom{*}}_{h'} + \sigma^{\vphantom{*}}_{n'} \tilde{\omega}^{\vphantom{*}}_{n',q'} \label{tomega} \end{eqnarray} \end{subequations} where we use the combined index $c\!\equiv\!(p',h',\nu'\!=\!(n',q'))$. From this it follows that $\sigma^{\vphantom{*}}_{n}\tilde{\omega}^{\vphantom{*}}_{n,q} > 0$ for all $n$ and $q$. 
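The structure of Eq.~(\ref{poleq}) can be illustrated by a minimal numerical sketch with invented parameters: a single RPA state coupled to one complex configuration fragments into two poles, both with $\sigma_n\tilde{\omega}_{n,q}>0$, one below and one above $\tilde{\Omega}_c$. The $1p1h$ weights of the fragments, anticipating Eq.~(\ref{andef}) below, add up to one, and only the lower fragment passes the blocking condition (\ref{czz2}):

```python
# Toy model: one RPA state (omega_n = 10, sigma_n = +1) coupled to a single
# 1p1h x phonon configuration with energy Omega_c = 12 and coupling |F|^2 = 4.
# All numbers are invented for illustration only.
omega_n, Omega_c, F2 = 10.0, 12.0, 4.0

def secular(w):
    # Left-hand side minus right-hand side of the pole equation (poleq),
    # written for sigma_n = +1.
    return w * (1.0 + F2 / (Omega_c * (Omega_c - w))) - omega_n

def bisect(f, a, b, tol=1e-12):
    # Plain bisection; f changes sign on [a, b].
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One root below Omega_c, one above: the RPA state fragments into two poles.
w1 = bisect(secular, 1e-6, Omega_c - 1e-6)
w2 = bisect(secular, Omega_c + 1e-6, 2.0 * Omega_c)

# 1p1h weight of each fragment, cf. Eq. (andef) below.
zeta2 = [1.0 / (1.0 + F2 / (Omega_c - w) ** 2) for w in (w1, w2)]

print(w1, w2)      # both positive, as sigma_n * omega > 0
print(sum(zeta2))  # the weights add up to 1, cf. the sum rule derived below
```

For these toy numbers the two roots are $\tilde{\omega}=9$ and $\tilde{\omega}=40/3$, with weights $9/13$ and $4/13$, so only the lower fragment would be kept as an active phonon.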
Now, the pole expansion of the (diagonal) response (\ref{sold1}) has the form \begin{subequations} \begin{eqnarray} {\tilde R}^{\mbss{CBA}}_{nn'}(\omega) &=& - \sum_{q} \frac{\sigma^{\vphantom{*}}_{n}\,\zeta^2_{n,q} \,\delta^{\vphantom{*}}_{n,n'}} {\omega - \tilde{\omega}^{\vphantom{*}}_{n,q}} \,, \label{rdexp} \\ \zeta^2_{n,q} &=& \biggl[ 1 + \sum_{c} \frac{|{\tilde F}^{c(\sigma_n)}_{n}|^2} {({\tilde \Omega}^{\vphantom{*}}_{c} - \sigma^{\vphantom{*}}_n\tilde{\omega}^{\vphantom{*}}_{n,q})^2} \biggr]^{-1} \,. \label{andef} \end{eqnarray} \end{subequations} This allows us to derive a sum rule for the coefficients $\zeta^2_{n,q}$ by comparing the first terms of the expansions in powers of $1/\omega$ of the right-hand sides of Eqs. (\ref{sold1}) and (\ref{rdexp}). Namely, we have for all $n$: \begin{equation} \sum_{q}\zeta^2_{n,q} = 1\,. \label{ansum} \end{equation} From Eqs. (\ref{rpaexp}) and (\ref{rdexp}) we obtain \begin{equation} {\tilde R}^{\mbsu{CBA}}(\omega) = - \sum_{n,q} \frac{\sigma^{\vphantom{*}}_{n}\,\zeta^2_{n,q} \,|\,Z^{n} \rangle \langle Z^{n}|} {\omega - \tilde{\omega}^{\vphantom{*}}_{n,q}}\,. \label{rfrentbaexp} \end{equation} Comparing Eqs. (\ref{rfcbaexp}) and (\ref{rfrentbaexp}), we confirm that in the diagonal approximation we have $\nu = (n,q)$ and \begin{equation} |\,z^{\nu} \rangle = \zeta^{\vphantom{*}}_{n,q}\,|\,Z^{n} \rangle\,,\quad \omega^{\vphantom{*}}_{\nu} = \tilde{\omega}^{\vphantom{*}}_{n,q}\,,\quad \sigma^{\vphantom{*}}_{\nu} = \sigma^{\vphantom{*}}_{n}\,. \label{cbarentba} \end{equation} From Eqs. (\ref{zzrpa}) and (\ref{cbarentba}) it follows that in this case we have \begin{equation} (z^{\nu})^2_{\mbsu{RPA}} = \zeta^2_{n,q}\,. \label{zzrpaz2} \end{equation} Altogether, condition (\ref{czz2}) in the diagonal approximation takes the form \begin{equation} \zeta^2_{n,q} > \zeta^2_{\mbsu{min}} = 1/2\,. 
\label{czetda} \end{equation} We can now finally argue in favor of the choice of $\zeta^2_{\mbsu{min}}$ for the blocking criterion: From Eq. (\ref{ansum}) we see that \\ (i) there exists no more than one pole of the function ${\tilde R}^{\mbss{CBA}}_{nn}(\omega)$ for which $\zeta^2_{n,q} > 1/2$, and \\ (ii) two or more poles of this function can satisfy the condition $\zeta^2_{n,q} > \zeta^2_{\mbsu{min}}$ if $\zeta^2_{\mbsu{min}} < 1/2$. \\ Thus one can consider the value $\zeta^2_{\mbsu{min}} = 1/2$ as a threshold below which the fragmentation of the RPA state $|\,Z^{n} \rangle$ becomes significant. If there are no poles of the function ${\tilde R}^{\mbss{CBA}}_{nn}(\omega)$ with $\zeta^2_{n,q} > 1/2$, then all the TBA fragments of the RPA state $|\,Z^{n} \rangle$ have a structure going beyond the $1p1h$ approximation. This conclusion corroborates the reasoning used in the derivation of the condition (\ref{czz2}). In the calculations presented below we use the CBA scheme in the diagonal approximation. In this scheme, the {\it active} TBA phonons (see Sec.~\ref{sec:CBA}) are found from the solution of the system of equations (\ref{poleq3}) together with (\ref{andef}) and (\ref{cbarentba}). According to the last equation, this scheme can also be called renormalized TBA. Eq. (\ref{tbaeve}) is solved in the CBA using the full $1p1h$ space, i.e. without the diagonal approximation. To solve the system (\ref{poleq3}), an iterative procedure is employed using an exclusion method in which the space of the active TBA phonons may only decrease starting from a certain iteration. This eventually yields a convergent procedure. \subsection{Construction of the space of the RPA phonons} \label{sec:RPAspace} It is a strength of the CBA that it implies a natural criterion for the selection of those phonons which are active in the induced interaction $W$. There remains, nonetheless, an issue of efficiency. 
The diagonal approximation outlined in sections \ref{sec:CBAdiag} and \ref{sec:CBAjust} proceeds through a representation in terms of RPA phonons, and these have very different impacts on $W$. Thus it is useful to restrict the summation to the most important phonons. This is, at a technical level, the same quest for finding the most collective phonons as in standard TBA, for which many different recipes are used in the literature on phonon coupling models. In connection with standard TBA, we introduced in Ref. \cite{optim2017} an efficient criterion for the selection of the collective RPA phonons. The idea is to take the average strength $\langle\,V\,\rangle^{\vphantom{*}}_n$ of the RPA residual interaction in state $n$ as a measure of collectivity. This is plausible because it is the residual interaction which mixes the pure $1p1h$ states into a coherent superposition of many states. Moreover, states with large $\langle\,V\,\rangle^{\vphantom{*}}_n$ are generally strong-coupling states and thus will also contribute dominantly to the induced interaction. Considering this strength relative to the excitation energy then leads to the dimensionless measure of collectivity \begin{subequations} \begin{eqnarray} v^{\vphantom{*}}_n &=& \langle\,V\,\rangle^{\vphantom{*}}_n /\,|\,\omega^{\vphantom{*}}_{n}|\,, \label{vndef} \\ \langle\,V\,\rangle^{\vphantom{*}}_n &=& \langle\,Z^{n}\,|\,V\,|\,Z^{n} \rangle \nonumber \\ &=&\!\! \sum_{ph} [(\omega^{\vphantom{*}}_{n}\!-\!\varepsilon^{\vphantom{*}}_{ph})|Z^{n}_{ph}|^2 -(\omega^{\vphantom{*}}_{n}\!+\!\varepsilon^{\vphantom{*}}_{ph})|Z^{n}_{hp}|^2] , \nonumber \end{eqnarray} where $\varepsilon^{\vphantom{*}}_{ph}=\varepsilon^{\vphantom{*}}_{p}-\varepsilon^{\vphantom{*}}_{h}$. We include in the phonon basis of the TBA only the phonons with \begin{equation} |\,v^{\vphantom{*}}_n\,| > v_{\mbss{min}} \end{equation} \end{subequations} for some given value of $v_{\mbss{min}}$. This criterion was tested extensively in \cite{optim2017}. 
Plotting the distribution of $v^{\vphantom{*}}_n$, we found a clear threshold value of $v^{\vphantom{*}}_n=0.05$ below which the distribution rapidly becomes diffuse, and we took this as a physically sound cutoff criterion. However, the criterion has two mild drawbacks. First, taking simply the average residual interaction $\langle\,V\,\rangle^{\vphantom{*}}_n$ overlooks a few collective states having strong coupling matrix elements which unfortunately happen to compensate each other in the average. Second, the simple inverse energy weight gives much emphasis to high-energy phonons, while we expect the strongest contributions to $W$ from the low-energy collective modes. This leads us to propose here an improved criterion based on the average of the square of the residual interaction in state $n$, which can be reduced to \begin{subequations} \begin{eqnarray} \langle\,V^2\rangle^{\vphantom{*}}_n &=& \langle\,Z^{n}\,|\,V^2|\,Z^{n} \rangle \nonumber \\ &=& \sum_{ph}\bigl[(\,\omega^{\vphantom{*}}_{n} - \varepsilon^{\vphantom{*}}_{ph})^2 |Z^{n}_{ph}|^2 \nonumber\\ &&\qquad + (\,\omega^{\vphantom{*}}_{n} + \varepsilon^{\vphantom{*}}_{ph})^2 |Z^{n}_{hp}|^2 \bigr] \,, \label{v2mn} \end{eqnarray} where the Hermitean property $V^{\dag}=V$ is taken into account. From this, we form the dimensionless quantity \begin{equation} \kappa_n = \frac{\gamma^2_n}{1+\gamma^2_n} \label{kappdef} \;,\quad \gamma^2_n = \langle\,V^2\rangle^{\vphantom{*}}_n/\,\omega^2_n \end{equation} \end{subequations} as a new measure of collectivity. We now see that $\kappa_n=0$ requires that the RPA eigenfrequency coincide exactly with one of the $\pm\varepsilon^{\vphantom{*}}_{ph}$ and thus be strictly uncorrelated. Consequently, small values of $\kappa_n$ signal small collectivity, i.e. coupling strength, throughout. Moreover, the energy weight $\omega^{-2}_n$ puts the desired weight on low-energy states. 
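As a schematic illustration of this selection criterion (the amplitudes and energies below are invented toy values, not RPA output), one can evaluate $\langle V^2\rangle_n$ from Eq.~(\ref{v2mn}) and the resulting $\kappa_n$ for two model states, a low-lying collective one and a pure $1p1h$ one:

```python
# Toy RPA states: (omega_n, list of (eps_ph, |Z_ph|^2, |Z_hp|^2)).
# All numbers are invented, chosen so that each state satisfies the RPA
# normalization sum(|Z_ph|^2 - |Z_hp|^2) = 1.
states = [
    (5.0, [(7.0, 0.75, 0.05), (9.0, 0.35, 0.05)]),  # low-lying, collective
    (30.0, [(30.0, 1.0, 0.0)]),                     # pure 1p1h: omega = eps_ph
]

def kappa(omega, comps):
    # <V^2>_n from Eq. (v2mn), then gamma^2 and kappa from Eq. (kappdef).
    v2 = sum((omega - eph) ** 2 * zph2 + (omega + eph) ** 2 * zhp2
             for eph, zph2, zhp2 in comps)
    g2 = v2 / omega ** 2
    return g2 / (1.0 + g2)

kappas = [kappa(w, c) for w, c in states]
selected = [i for i, k in enumerate(kappas) if k > 0.05]  # a preselection cutoff

print(kappas)  # the pure 1p1h state has kappa = 0: strictly uncorrelated
```

The collective toy state comes out with $\kappa_n\approx 0.5$, i.e. in the regime of the archetype collective modes discussed below, while the pure $1p1h$ state gives exactly zero and is dropped by any cutoff.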
By noting that the amplitudes of the quasiparticle-phonon interaction $g^{n}_{12}$ in the RPA are defined as \begin{equation} g^{n}_{12} = \sum_{34} {V}^{\vphantom{*}}_{12,34}\,Z^{n}_{34}\,, \label{grpadef} \end{equation} we obtain from equations (\ref{v2mn}) and (\ref{grpadef}) \begin{equation} \langle\,V^2\rangle^{\vphantom{*}}_n = \langle\,{g}^{n}\,|\,{g}^{n} \rangle\,. \label{v2mngg} \end{equation} In the macroscopic approach (see, e.g., Refs.~\cite{BM2,BBB83}) the amplitudes $g^{n}_{12}$ are proportional to the deformation parameters $\beta^{\vphantom{*}}_n$ of the respective vibrational modes. So, in this approach $\gamma^2_n \propto \beta^2_n$. This shows explicitly that the selection of the phonons with the largest values of $\gamma^2_n$ corresponds to the selection of the low-energy vibrational modes with the largest deformation parameters, having the strongest coupling to the single-particle states. Finally, note that the states which are usually referred to as the collective vibrational modes ($3^-_1$ levels in $^{16}$O and $^{40}$Ca, $2^+_1$, $3^-_1$ and $4^+_1$ levels in $^{48}$Ca, $2^+_1$, $3^-_1$, $4^+_1$, $5^-_1$ and $6^+_1$ levels in $^{208}$Pb) have $\gamma^2_n\gtrsim 1$ and, consequently, $\kappa_n \gtrsim 0.5$. These archetype collective modes set the benchmark for collectivity. States with $\kappa_n\approx 0.1$ are still acceptably strong phonons. Substantially smaller values signal non-collective states. \subsection{Standard TBA as limit of CBA} \label{sec:TBAlim} As stated above, standard TBA can be obtained as the first iteration of the CBA. It amounts to replacing the TBA spectrum in the equations (\ref{eq:Winduc}) for the induced interaction $W$ by the mere RPA spectrum, i.e. by simply identifying in the $1p1h\otimes$phonon summations \begin{equation} c^{CBA} = \{p',h',\nu\} \longrightarrow c^{RPA} = \{p',h',n\} \end{equation} where we emphasize the transition symbolically through the upper indices $CBA$ and $RPA$. 
The discussion of cutoff criteria in the selection of RPA states, see the previous section, is of particular importance for standard TBA. A proper selection of a few most collective states is compulsory in any phonon coupling model to avoid double counting of complex configurations and violation of the Pauli principle. \section{Results and discussion} \label{sec:res} Before presenting the results, a few words about the calculations are in order. We have computed the same test cases with standard TBA and with the CBA in comparison, using the same numerical procedures which are explained in detail in Refs.~\cite{optim2017,Tselyaev2016,Lyutorovich2017}. The maximum energy of the single-particle states of the $1p1h$ basis was taken as 100~MeV. For $^{16}$O and $^{40}$Ca, we did not use an energy cutoff on the phonon space, while for $^{208}$Pb, the phonon basis was restricted by a maximum phonon energy of 100~MeV. For all cases we use the Skyrme parametrization SV-m64k6 \cite{Lyutorovich2012} which, due to its low effective mass, provides a particularly critical test of the impact of stability and convergence. It is standard practice to check the properties of standard TBA under variation of the cutoff parameter, $v_n$ or $\kappa_n$ respectively. This is not so obvious in the CBA because the proper cutoff is set by theoretical considerations. Nonetheless, we use the same sort of cutoff in the RPA space as a preselection of the expansion basis. Thus one can very well present results as a function of this preselection cutoff $\kappa_\mathrm{min}$ and so study the convergence of the method. This is what we will do in several of the following figures. In Fig. \ref{fige3f} we present the results for the energies of the first $3^-$ levels in $^{16}$O, $^{40}$Ca, and $^{208}$Pb. The value $\kappa_\mathrm{min}=1$ corresponds to the RPA. 
The CBA results show nice convergence with the expansion space $1/\kappa_\mathrm{min}$, and the converged result agrees perfectly with the results of our previous analysis \cite{optim2017}. The standard TBA result, however, shows an almost constant slope (on the logarithmic $\kappa_\mathrm{min}$ scale). Here we again need a separate analysis of the distribution of $\kappa_n$ to find the optimum value of $\kappa_n$. Doing so, we find an optimum cutoff $\kappa_\mathrm{min}$ in the interval 0.05--0.1 which yields excitation energies again close to the previous results (fine dashed horizontal line) and to the CBA. The interesting message is that the CBA arrives at the correct result without a separate decision on cutoff parameters. Of course, one still wants to check the convergence with $\kappa_n$, but this is a technical aspect. \begin{figure} \centerline{\includegraphics[width=1.05\linewidth,trim={20mm 45mm 10mm 10mm},clip]{e3f3vcrit40.pdf}} \caption{\label{fige3f} Dependence of the energy of the $3^-_1$ state calculated in the CBA with $\zeta^2_{\mbsu{min}} = 0.5$ (red solid lines) and in the standard TBA (black dashed lines) on the value of the inverse cutoff parameter $1/\kappa_{\mbsu{min}}$. The fine dashed horizontal line indicates the energy as found in a previous paper using standard TBA with optimized cutoff parameter $v_{\mbsu{min}} = 0.05$ \cite{optim2017}. The experimental values are given in Table~\ref{tab:e3f}.} \end{figure} \begin{figure} \centerline{\includegraphics[width=1.2\linewidth]{e3f3BEexp40.pdf}} \vskip -3.00cm \caption{\label{fige3f3BE3} Same as in Fig. \ref{fige3f} but for the reduced probabilities $B(E3; 0^+_{\mbsu{g.s.}} \rightarrow 3^-_1)$ in units of their experimental values.} \end{figure} In Fig. \ref{fige3f3BE3} the transition probabilities of the $3^-_1$ states in these three nuclei are presented. These quantities are among the most sensitive properties for any nuclear structure model as they are directly connected with the collective wave function. 
First of all, one notices that the values converge within our newly developed theory, whereas the conventional TBA results do not show this behavior. For $^{208}$Pb the agreement with the data is very good, for $^{40}$Ca fair, but for $^{16}$O we are off by more than a factor of two. Obviously, in light nuclei one does not have enough $1p1h$ configurations for creating collective states. \begin{figure} \centerline{\includegraphics[width=1.05\linewidth,trim={20mm 65mm 10mm 10mm},clip]{e1f3vcrit40.pdf}} \caption{\label{fige1f} Same as in Fig. \ref{fige3f} but for the mean energies (Lorentzian parameter $E_0$) of the giant dipole resonance.} \end{figure} Fig.~\ref{fige1f} shows the analogous results for the mean energies of the giant dipole resonance (GDR). The mean energies were defined as the values of the Lorentzian parameter $E_0$ determined by equating the energy-weighted moments of the calculated strength functions with the respective moments of the Lorentzian function. The moments were calculated within the following finite energy intervals whose centers approximately coincide with $E_0$: 0--40 MeV for $^{16}$O, 10--30 MeV for $^{40}$Ca, and 7--21 MeV for $^{208}$Pb. The trends, observations, and conclusions are exactly the same as in the previous figures. The detailed GDR strength distributions (the photoabsorption cross sections) in $^{16}$O, $^{40}$Ca, and $^{208}$Pb are shown in Figs. \ref{o16gdr}--\ref{pb208gdr}. In all calculations of the GDR strength in $^{16}$O and $^{40}$Ca the single-particle continuum was included as described in \cite{Tselyaev2016}. The smearing parameters were 200~keV in $^{16}$O and 400~keV in $^{40}$Ca and $^{208}$Pb. Standard TBA calculations, which were done for comparison, use the optimized cutoff parameter $v_{\mbsu{min}}=0.05$. The value $\kappa_\mathrm{min}=0.01$ was used in the CBA for limiting the RPA expansion basis, which is well on the safe side as we see from Figs. \ref{fige3f}--\ref{fige1f}. 
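The moment matching used for $E_0$ can be sketched in its simplest variant, in which only the moments $m_0$ and $m_1$ are equated so that $E_0$ reduces to the centroid $m_1/m_0$ over the chosen energy window; the strength function below is a toy Lorentzian with invented parameters, not one of the calculated spectra, and the full fit in the paper matches further moments:

```python
# Toy Lorentzian strength function with invented resonance parameters (MeV).
E0_true, gamma = 15.0, 4.0
emin, emax, n = 7.0, 23.0, 2000  # finite integration window around E0_true
PI = 3.141592653589793

def strength(e):
    # Lorentzian profile centered at E0_true with width gamma.
    return (gamma / (2.0 * PI)) / ((e - E0_true) ** 2 + gamma ** 2 / 4.0)

# Midpoint-rule moments m0 and m1 over the finite window.
de = (emax - emin) / n
grid = [emin + (i + 0.5) * de for i in range(n)]
m0 = sum(strength(e) for e in grid) * de
m1 = sum(e * strength(e) for e in grid) * de

E0 = m1 / m0
print(E0)  # close to E0_true for a window centered on the resonance
```

For a window symmetric about the resonance, the centroid reproduces the Lorentzian parameter exactly; for asymmetric windows, as used for $^{16}$O above, the two differ slightly, which is why the window centers are chosen to approximately coincide with $E_0$.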
\begin{figure} \centerline{\includegraphics[angle=90,height=0.78\linewidth,trim={23mm 53mm 10mm 15mm},clip]{O16GDRvcrit.pdf}} \caption{\label{o16gdr} The giant dipole resonance (GDR) in $^{16}$O calculated in the CBA (red solid line), in standard TBA (black dashed line), and in RPA (blue dotted line). Experimental data from Ref.~\cite{o16GDRdata} are shown by the green circles.} \end{figure} \begin{figure} \centerline{\includegraphics[angle=90,height=0.78\linewidth,trim={23mm 53mm 10mm 15mm},clip]{Ca40GDRvcrit.pdf}} \caption{\label{ca40gdr} Same as in Fig. \ref{o16gdr} but for $^{40}$Ca. Experimental data are taken from Ref.~\cite{ca40GDRdata}.} \end{figure} \begin{figure} \centerline{\includegraphics[angle=90,height=0.78\linewidth,trim={23mm 53mm 10mm 15mm},clip]{Pb208GDRvcrit.pdf}} \caption{\label{pb208gdr} Same as in Fig. \ref{o16gdr} but for $^{208}$Pb. Experimental data are taken from Ref.~\cite{pb208GDRdata}.} \end{figure} Figs. \ref{o16gdr}--\ref{pb208gdr} show the same relative trends for all three nuclei. The spectra from the CBA agree well with those of standard TBA, and both differ significantly from RPA. TBA induces a small down-shift and, more importantly, smooths the spectra significantly. This smoothing is important to bring the theoretical distributions close to the experimental profile. As can be seen from Figs. \ref{fige3f}--\ref{fige1f}, the convergence in the CBA results is achieved at $\kappa_{\mbsu{min}} \approx \gamma^2_{\mbsu{min}}$ in the interval $0.02-0.05$, which approximately corresponds to the boundary of the non-collective phonons as assessed by plotting the density of phonon states \cite{optim2017}. The results of the ordinary TBA do not show a tendency toward convergence. The reason for the convergence in the CBA is a decrease in the number of phonons as compared with the ordinary TBA, owing to an additional criterion for the selection of the phonons, Eq. (\ref{czetda}), determined by the parameter $\zeta^2_{\mbsu{min}}$. 
This decrease becomes very strong when $\gamma^2_{\mbsu{min}}$ and $\kappa_{\mbsu{min}}$ are very small (see Fig.~\ref{fignphon}). At $0.02 \leqslant \gamma^2_{\mbsu{min}} \leqslant 0.05$ the number of the active TBA phonons in the CBA is in the intervals 10--22 for $^{16}$O, 24--45 for $^{40}$Ca, and 77--105 for $^{208}$Pb. In practice, all the phonons with $\omega_n > \omega_{\mbsu{max}}$ for some value of $\omega_{\mbsu{max}}$ appear to be strongly fragmented, that is, they have $\zeta^2_n < \zeta^2_{\mbsu{min}}=1/2$ and for this reason are excluded from the basis. The value of $\omega_{\mbsu{max}}$ is different for different nuclei. In the calculations presented in Figs. \ref{fige3f}--\ref{fignphon}, it is approximately 30 MeV for $^{16}$O, 26 MeV for $^{40}$Ca, and 14 MeV for $^{208}$Pb. It is highly satisfying to see that the new selection criterion provides the same cutoff in phonon space as the previous, external cutoff criterion developed from the density of phonon states \cite{optim2017}, while the new criterion is inherent in the scheme and thus more natural. \begin{figure} \centerline{\includegraphics[width=1.05\linewidth,trim={20mm 55mm 10mm 10mm},clip]{nphon3nucl40.pdf}} \caption{\label{fignphon} Dependence of the number of the active TBA phonons in the CBA with $\zeta^2_{\mbsu{min}} = 0.5$ (red solid lines) and the number of the RPA phonons in the ordinary TBA (black dashed lines) on the value of the cutoff parameter $\kappa_{\mbsu{min}}$. } \end{figure} The additional effect of the renormalization, Eq. (\ref{cbarentba}), consists in a decrease of the phonon energies, because, as a rule, $\tilde{\omega}^{\vphantom{*}}_{n}<\omega^{\vphantom{*}}_{n}$ for positive $\omega^{\vphantom{*}}_{n}$. In most cases, this decrease improves the agreement with experiment. For the nuclei considered in the present work, the calculated and experimental values of the energies of the $3^-_1$ states are listed in Table~\ref{tab:e3f}. 
As before, the Skyrme parametrization SV-m64k6 was used. The CBA results were obtained with $\kappa_{\mbsu{min}} = 0.01$ and $\zeta^2_{\mbsu{min}} = 0.5$. The results in the diagonal approximation CBA(D) employ the approximation described in sections \ref{sec:CBAdiag} and \ref{sec:CBAjust}, which concerns the active TBA phonons used in the CBA (see Sec.~\ref{sec:CBA}). As can be seen from this Table, the diagonal approximation keeps the deviation from the exact results for these states within 0.4~\%. The central question in our investigation concerns the convergence as a function of the size of the phonon space, and the results are indeed very convincing. \begin{table} \caption{\label{tab:e3f} The energies of the $3^-_1$ states in $^{16}$O, $^{40}$Ca, and $^{208}$Pb calculated within the RPA, the CBA, and the CBA in the diagonal approximation CBA(D). The experimental data are given in the last column.} \begin{ruledtabular} \begin{tabular}{rcccc} & \multicolumn{4}{c}{Energy of the $3^-_1$ state (MeV)} \\ & RPA & CBA & CBA(D) & experiment \\ \hline $^{16}$O & 8.21 & 7.95 & 7.94 & 6.13 \\ $^{40}$Ca & 4.00 & 3.84 & 3.83 & 3.74 \\ $^{208}$Pb & 3.29 & 3.10 & 3.09 & 2.61 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \centerline{\includegraphics[angle=90,width=1.28\linewidth]{O16E3z2s.pdf}} \caption{\label{fige3z2o16} Differences between the energies of the $3^-_1$ state in the nucleus $^{16}$O as calculated in the CBA with different values of the cutoff $\zeta^2_{\mbsu{min}}$, namely $\Delta \omega (3^-_1) = \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.6) - \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.5)$ (red solid lines) and $\Delta \omega (3^-_1) = \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.7) - \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.5)$ (black dashed lines). } \end{figure} \begin{figure} \centerline{\includegraphics[angle=90,width=1.28\linewidth]{Ca40E3z2s.pdf}} \caption{\label{fige3z2ca40} Same as in Fig. \ref{fige3z2o16} but for the nucleus $^{40}$Ca. 
} \end{figure} \begin{figure} \centerline{\includegraphics[angle=90,width=1.28\linewidth]{Pb208E3z2s.pdf}} \caption{\label{fige3z2pb208} Same as in Fig. \ref{fige3z2o16} but for the nucleus $^{208}$Pb. } \end{figure} As mentioned above, the value $\zeta^2_{\mbsu{min}} = 0.5$ is a threshold below which the fragmentation of the RPA state becomes significant. To study the sensitivity of the results to an increase of $\zeta^2_{\mbsu{min}}$, we have calculated the energies of the first $3^-$ levels in $^{16}$O, $^{40}$Ca, and $^{208}$Pb in the CBA with $\zeta^2_{\mbsu{min}}$ equal to 0.6 and 0.7. The results are shown in Figs. \ref{fige3z2o16}--\ref{fige3z2pb208} where the energy differences $\Delta \omega (3^-_1) = \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.6) - \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.5)$ (red solid lines) and $\Delta \omega (3^-_1) = \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.7) - \omega (3^-_1; \zeta^2_{\mbsu{min}} = 0.5)$ (black dashed lines) are plotted versus the values of $1/\kappa_{\mbsu{min}}$. As can be seen from Figs. \ref{fige3z2o16} and \ref{fige3z2ca40}, these energy differences have a tendency to decrease in $^{16}$O and $^{40}$Ca in the region near $\kappa_{\mbsu{min}} = 10^{-3}$. For $^{208}$Pb, they are stabilized in the region slightly above $\kappa_{\mbsu{min}} = 10^{-2}$. The value of $\Delta \omega (3^-_1)$ at $\zeta^2_{\mbsu{min}} = 0.6$ and $\kappa_{\mbsu{min}} = 10^{-2}$ amounts to 7~keV for $^{208}$Pb (which corresponds to 4~\% of the overall shift of the CBA $3^-_1$ energy with respect to the RPA one). The values of $\Delta \omega (3^-_1)$ at $\zeta^2_{\mbsu{min}} = 0.6$ and $\kappa_{\mbsu{min}} = 10^{-3}$ amount to 2~keV for $^{40}$Ca (1~\% of the overall shift) and 22~keV for $^{16}$O (8~\% of the overall shift). Thus, one can conclude that a small increase of $\zeta^2_{\mbsu{min}}$ above the value $\zeta^2_{\mbsu{min}} = 0.5$ affects the results only slightly. 
\section{Conclusions} \label{sec:Conc} We have studied phonon-coupling models for nuclear resonances within the time blocking approximation (TBA). A generalized version of the TBA is developed in which the self-consistency principle is extended to the phonon space of the model. This leads to a non-linear equation for the energies and transition amplitudes of the nuclear excited states (phonons). The most general version of this non-linear equation is simplified in two steps. First, the space of phonons included in the expansion is limited by the natural requirement that only phonons with dominant $1p1h$ contributions are selected; we call this scheme the configuration blocking approximation (CBA). The formalism implies the precise criterion that the $1p1h$ content must be larger than 50\%. Second, one invokes a diagonal approximation in the representation of the complete set of solutions of the RPA equations. It turns out that, in this diagonal approximation, the CBA is equivalent to a renormalization of the amplitudes of the phonons entering the phonon basis of the model, so the new scheme can alternatively be described as a renormalized TBA. The CBA is analyzed in calculations of the first $3^-$ states and the giant dipole resonances in the magic nuclei $^{16}$O, $^{40}$Ca, and $^{208}$Pb. It is shown that the CBA produces a natural convergence of the results with respect to enlarging the phonon space of the model. This is an advantage over the ordinary TBA, where additional, external criteria are needed to limit the phonon expansion basis in a reasonable manner; previously this was done by reading off a cutoff from the density of phonon states. It is highly satisfying that the new, implicit cutoff in the CBA produces converged results in agreement with the previous selection scheme. \begin{acknowledgements} V.T. and N.L. acknowledge financial support from the Russian Science Foundation (project No. 16-12-10155).
This research was supported by the Computer Center of St. Petersburg State University. This work has been supported by contract Re322-13/1 from the DFG. \end{acknowledgements}
\section{Introduction} Cardiovascular diseases are among the main causes of premature deaths worldwide~\cite{naghavi2017}. Some pathological conditions can be diagnosed at an early stage by electrocardiography (ECG), which measures the electrical excitation of the heart. In such cases quick interventions may lead to better outcomes and even save lives~\cite{McMurray_etal12, Gottlieb_etal88, Gottlieb_etal86}. However, to detect such early indicators in undiagnosed patients it would be necessary to carry out continuous health monitoring over extended periods of time. Until now this has been impractical for several reasons. First of all, long-term monitoring generates large amounts of data, much of which is likely to be non-pathological. Searching such data for indicative segments is a tedious task, which must be performed by trained medical professionals. Furthermore, to reduce inconvenience for patients, recording devices should be portable --- ideally wearable --- and have a long battery life to avoid frequent recharging or replacement. This is in conflict with the requirement of the device to perform high accuracy recordings and store or transmit the large amounts of recorded data for off-line processing, as these operations typically consume significant energy. This conflict can be resolved by devices that locally run algorithms to distinguish and filter out irrelevant data continuously and in real-time, without resorting to external- or cloud-computing resources. In this way the amount of data to be stored or transmitted can be drastically reduced and subsequent in-depth analysis by a diagnosing professional is facilitated. Such a system could, for instance, be part of a wearable ECG monitoring device that is provided to patients with a suspected cardiovascular condition (see Fig.~\ref{fig:ecg_monitor}). The patient's electrocardiogram is continuously measured through sensor electrodes and analyzed by our proposed computing system. 
In this system, the majority of components can be in hibernation for low-power operation (shaded components). If a suspicious ECG rhythm is detected, an interrupt is sent to a microcontroller or CPU, which will cause the ECG signal to be recorded onto non-volatile memory or to be transmitted to a server via a wireless connection. In addition, an alarm may be raised if indicative patterns are detected repeatedly, making timely medical treatment more likely. Such a system is designed to perform only preliminary analysis, but can drastically reduce the diagnostic load on the medical professional. \begin{figure}[htbp] \centerline{\includegraphics[width=0.75\textwidth]{images/Health-wearable-device.pdf}} \caption{Possible integration of the proposed system into a wearable ECG monitor. A patient's ECG is measured with differential ECG electrodes and continuously analyzed. If an indicative pattern is detected, the system wakes up a microcontroller or CPU to record the relevant data onto non-volatile storage or transmit it to a server via a wireless connection. Solid lines: ECG signal pathways; dashed lines: command signals; shaded blocks: components in hibernation mode until a pathological signal is detected.} \label{fig:ecg_monitor} \end{figure} Automated ECG arrhythmia detection has received much interest in the literature. Typical approaches involve neural networks~\cite{liu_etal2013, kiranyaz_etal2016, Wang_etal2019}, support vector machines~\cite{ubeyli2007}, fuzzy cognitive maps~\cite{Cardenas_etal2019} or the extraction and analysis of \textit{a priori} defined features~\cite{gradl_etal}. While many of these algorithms achieve detection accuracies beyond 90 per cent, they are generally not applicable for ultra-low power hardware implementation.
Algorithms based on spiking neural networks (SNNs) have been suggested to be well suited for sub-milliwatt ECG heart-rate~\cite{Das_etal2019} and arrhythmia~\cite{Amirshahi_Hashemi_2019} detection; however, these networks have only been implemented in simulation on conventional computing hardware. An alternative cloud computing approach involves sending data to an external, more powerful processor~\cite{marzencki_etal2010, Iliev_etal2019}. In spite of advances in data compression~\cite{mamaghanian_etal2011}, continuous data transmission itself adds considerable power demand to the system. Moreover, users may be limited in their mobility if they need to remain within range of a receiving device or of a wireless network. \begin{figure*}[htbp] \centerline{\includegraphics[width=0.95\textwidth]{images/system_sketch.pdf}} \caption{System for real-time ECG anomaly detection. Analog sensor inputs are amplified and filtered in a preprocessing stage and then converted to events using Lebesgue sampling. Resulting event trains are processed by a spiking neural network implemented on a neuromorphic processor. For each type of anomalous pattern a readout unit low-pass filters a weighted sum of neuron firing activities to estimate the likelihood of an anomaly being present in the ECG input. A final binary output signal indicates whether any readout unit is above its detection threshold.} \label{fig:system-overview} \end{figure*} Here we propose a scalable, always-on system architecture for continuous signal processing at microwatt power levels (see Fig.~\ref{fig:system-overview}), which exploits the ultra-low power characteristics of Very Large Scale Integration (VLSI) neuromorphic processors~\cite{Chicca_etal14}. Input is evaluated continuously and in real time. No segmentation of the signal or synchronization to heartbeats is required.
In the approach proposed, analog signals are sensed, amplified, filtered, and then converted to trains of events using Lebesgue sampling via a sigma-delta encoder~\cite{Corradi_Indiveri15}. The resulting event trains are expanded by a spiking recurrent neural network (sRNN) of leaky integrate-and-fire (LIF) neurons, implemented on the neuromorphic hardware~\cite{Indiveri_etal11}. Neural firing activities are combined and filtered by an event-based linear readout layer that has been trained to detect the presence of specific anomalous patterns in the ECG input. Finally, a binary output is generated that indicates the presence or absence of a pathological event. We validate the architecture proposed by implementing the sRNN on the fully asynchronous, mixed-signal DYNAP-SE device~\cite{Moradi_etal18}, and show experimental results performing real-time ultra-low power analysis of a two-channel ECG signal. In particular, we show that anomalies are detected with high reliability and demonstrate the feasibility of the real-world application of a system as described above. Similar approaches have recently been applied to classify EMG signals with a feed-forward SNN~\cite{Donati_etal2019} and to classify ECG anomalies with an sRNN~\cite{Corradi_etal2019}, using the same type of hardware. For other processing stages of the system we refer to existing implementations. Their specifications, together with experimental findings from the DYNAP-SE implementation, allow us to estimate the power consumption of a complete system. To the best of our knowledge, this work presents the first full-system description of an ECG anomaly detector based on a neuromorphic processor and one of the first implementations of a real-world application on the DYNAP-SE system. \section{Methods} \subsection{The DYNAP-SE system}\label{dynapse} The sRNN was implemented on a fully asynchronous, mixed-signal DYNAP-SE chip, first presented and fully characterized in~\cite{Moradi_etal18} (see Fig.~\ref{fig:die}).
The DYNAP-SE chip comprises four cores of 256 Adaptive Exponential Integrate-and-Fire (AdExp) neurons, excitatory and inhibitory dynamic synapses implemented using a ``Differential-Pair Integrator'' (DPI) log-domain circuit~\cite{Bartolozzi_Indiveri07a}, and a hierarchical routing architecture for transmitting neural spiking events within cores, among multiple cores and across multiple chips. Every DYNAP-SE AdExp silicon neuron can subscribe to events from up to 64 presynaptic neurons. For each inter-neuron connection one of four DPI synapse types can be selected, two of which are excitatory and two inhibitory, each with individually tunable characteristics. Neural and synaptic dynamics can be adjusted by setting 25 different circuit parameters, programmed via a temperature-compensated on-chip bias generator~\cite{Delbruck_etal10}, shared by neurons on the same core. The same holds for the presynaptic weights: all synapses of a given type on a core share one programmed weight setting. To achieve stronger synaptic weights, multiple connections can be implemented between a pair of neurons, thereby effectively multiplying the weight by the number of connections. Due to the analog nature of the silicon neurons and their device mismatch, temporal and functional characteristics, such as firing thresholds, synaptic efficacy, and time constants, vary between individual neurons and synapses. This intrinsic inhomogeneity is exploited to introduce variability in neuron parameters and enhance signal dimensionality, as explained in Section~\ref{ss:architecture}. For this work, a DYNAP-SE development kit was used, which comprises four DYNAP-SE chips and a Field Programmable Gate Array (FPGA), providing a USB interface to a standard PC for configuring circuit parameters, setting up synaptic connections, sending input events and reading out neural firing activity (see Fig.~\ref{fig:devkit}).
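The weight-multiplication trick can be illustrated with a small bookkeeping sketch. The helper names and the synapse-type strings are hypothetical; the only hardware facts assumed from the description above are the shared per-core base weight and the 64-event fan-in limit:

```python
FAN_IN_LIMIT = 64  # a DYNAP-SE neuron can subscribe to at most 64 presynaptic sources


def connect(conn_table, pre, post, synapse_type, multiplicity=1):
    """Add `multiplicity` identical pre -> post connections.

    All synapses of one type on a core share a single programmed base
    weight, so duplicating a connection is the way to scale the effective
    weight for an individual neuron pair.
    """
    entries = conn_table.setdefault(post, [])
    if len(entries) + multiplicity > FAN_IN_LIMIT:
        raise ValueError("fan-in limit of 64 exceeded")
    entries.extend([(pre, synapse_type)] * multiplicity)


def effective_weight(conn_table, pre, post, synapse_type, base_weight):
    """Effective weight = shared base weight x number of duplicate entries."""
    count = sum(1 for p, s in conn_table.get(post, []) if p == pre and s == synapse_type)
    return base_weight * count
```

For example, connecting a pair of neurons with multiplicity 4 and base weight $w$ yields an effective weight of $4w$, while the fan-in check mirrors the 64-source subscription limit.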
\begin{figure}[htbp] \centering \subfloat[]{\includegraphics[width=0.45\textwidth]{images/dynapsedie.pdf}\label{fig:die}} \hfill \subfloat[]{\includegraphics[width=0.45\textwidth]{images/devkit.pdf}\label{fig:devkit}} \caption{a) Die photo of the DYNAP-SE multi-core neuromorphic processor~\cite{Moradi_etal18}. The chip is fabricated in standard 180 nm CMOS technology and comprises four cores with 256 adaptive exponential integrate-and-fire neurons each. b) DYNAP-SE development kit with USB interface to a standard PC.} \label{fig:dynapse} \end{figure} \subsection{ECG data set and conversion to spikes}\label{dataset} The set of ECG signals used to evaluate performance in this work is taken from the MIT-BIH Arrhythmia Database~\cite{MIT-BIH}, provided by Massachusetts Institute of Technology and Beth Israel Hospital through PhysioNet~\cite{PhysioNet}. The data set consists of 48 half-hour excerpts from a set of 4000 ambulatory two-channel ECG recordings from 47 subjects. Every ECG rhythm is labeled either as normal or as exhibiting one of 18 anomalous conditions. Twenty-three of the recordings were picked at random, while the remaining 25 were selected to include less common anomaly types and parts of low signal quality to challenge arrhythmia detectors. The signal is band-pass filtered between 0.1~Hz and 100~Hz and digitized with a sampling rate of 360~Hz. \begin{figure}[htbp] \centerline{\includegraphics[width=0.95\textwidth]{images/plot_beats.pdf}} \caption{Examples of the ECG signal for normal heartbeats (upper left plot) and each of the five anomaly types used in this work. Blue curves correspond to the first, orange curves to the second channel of the respective recordings.} \label{fig:beats} \end{figure} Some anomalies occur very rarely in the data set. In this work only normal rhythms and the five most prevalent anomaly types were considered, which accounts for 95.6~\% of total recording time.
Examples of normal heartbeats and the anomalies used in this work are shown in Fig.~\ref{fig:beats}. Three disjoint subsets of the input data were used for training, validation and testing of the system. For the training set, recordings were segmented into individual heartbeats. From these beats 30000 were then drawn at random and concatenated to form the training input. For validation and test sets, recordings were split into longer segments, each comprising five to ten contiguous heartbeats of the same label. As with the training set, segments were then selected at random and concatenated to form the input signals. For all three sets, samples were drawn such that normal rhythms made up about 75~\% of each set and any of the included anomaly types 5~\%. Table~\ref{tab:num_beats} lists the number of beats in the three subsets for each label, and Table~\ref{tab:num_segs} the number of contiguous anomalous segments in the validation and test sets. \bgroup \begin{table}[htbp] \small \begin{center} \begin{tabular}{ |c|c|c|c| } \hline \textbf{Beat type}&\multicolumn{3}{c|}{\textbf{Subset}} \\ \cline{2-4} \textbf{(expert label)} & \textbf{Train.} & \textbf{Valid.} & \textbf{Test.} \\ \hline \textit{Normal rhythm} & 22,500 & 2,315 & 1,569 \\ \hline \textit{Left bundle branch block beat} & 1,500 & 175 & 104\\ \hline \textit{Right bundle branch block beat} & 1,500 & 176 & 103\\ \hline \textit{Premature ventricular contraction} & 1,500 & 154 & 100\\ \hline \textit{Paced beat} & 1,500 & 153 & 98\\ \hline \textit{Atrial premature beat} & 1,500 & 168 & 104\\ \hline \textbf{Total} & 30,000 & 3,116 & 2,078\\ \hline \end{tabular} \caption{Number of heartbeats per label} \label{tab:num_beats} \end{center} \end{table} \bgroup \begin{table}[htbp] \small \begin{center} \begin{tabular}{ |c|c|c| } \hline \textbf{Anomaly type}&\multicolumn{2}{c|}{\textbf{Subset}} \\ \cline{2-3} \textbf{(expert label)} & \textbf{Validation} & \textbf{Testing} \\ \hline \textit{Left bundle
branch block beat} & 24 & 15\\ \hline \textit{Right bundle branch block beat} & 22 & 17\\ \hline \textit{Premature ventricular contraction} & 22 & 14\\ \hline \textit{Paced beat} & 21 & 16\\ \hline \textit{Atrial premature beat} & 22 & 14\\ \hline \textbf{Total} & 111 & 76\\ \hline \end{tabular} \caption{Number of segments per anomaly type} \label{tab:num_segs} \end{center} \end{table} ECG signals are converted to trains of events through a sigma-delta encoding scheme~\cite{Corradi_Indiveri15}. For every ECG channel there are two pulse outputs, emitting events either when the input signal increases by a specified amount (up-events) or decreases by a given amount (down-events). \subsection{sRNN architecture}\label{ss:architecture} The architecture of the sRNN is illustrated in Fig.~\ref{fig:network} and is inspired by the paradigm of reservoir computing~\cite{Maass_etal02, Jaeger02}. In these architectures the weights of the recurrent network are initialized randomly and learning takes place only in the output layer, which reads out the state of a hidden layer sampled from the recurrently connected neurons. The network consists of three neuron populations that are implemented on the DYNAP processor: a feed-forward input expansion layer as well as a recurrent excitatory and a non-recurrent inhibitory group that together form the reservoir layer. Random connections and hardware mismatch (see Section~\ref{dynapse}) serve to project the original signal to a high-dimensional state space. Recurrent connections allow for the state to also encode information about the recent past of the input. \begin{figure} \centerline{\includegraphics[width=0.9\textwidth]{images/network.pdf}} \caption{Recurrent sRNN architecture. Dimensionality of the spiking input is expanded by a feed-forward input expansion layer with post-synaptic connections to the excitatory population of a partitioned reservoir layer.
The latter has recurrent connections to itself and additionally stimulates a feed-forward inhibitory population that in turn controls activation of the excitatory population.} \label{fig:network} \end{figure} The input expansion layer consists of 128 neurons. Each has between 1 and 64 presynaptic excitatory connections, with the number drawn uniformly at random, to one of the input channels. Together with the mismatch between the hardware neurons, this connection scheme ensures that each neuron responds differently to the input spike trains, thereby increasing the dimensionality of the signal. The reservoir layer is partitioned into 512 excitatory and 128 inhibitory neurons. Each excitatory neuron receives presynaptic input through 16 excitatory connections from the input expansion layer, 32 recurrent connections from other excitatory neurons of the same layer and 16 connections from the inhibitory population. Each inhibitory neuron receives presynaptic connections from 64 of the excitatory neurons. This results in the following connectivities between the three neuron populations: \begin{align*} & Input \ expansion\text{ to }excitatory: & 12.5 \% \\ & Excitatory\text{ to }excitatory: & 6.25 \% \\ & Excitatory\text{ to }inhibitory: & 12.5 \% \\ & Inhibitory\text{ to }excitatory: & 3.1 \% \end{align*} All connections are drawn at random. Multiple connections between the same pair of neurons may exist. The three above-mentioned neuron populations are implemented on different cores of the neuromorphic processor, which allows setting hardware parameters individually for each population and facilitates tuning the full-network dynamics. Ideally the network is close to the edge of chaos~\cite{Langton90}, where dynamics are rich in response to an input signal but return to a stable state after external stimulation ends. \subsection{Learning and readout} Spiking activities of all hardware neurons are constantly monitored with a PC.
Spike trains are low-pass filtered by convolution with an exponential kernel \(\kappa(t) = \exp\left(-\frac{t}{\tau_{out}}\right)\) with time constant \(\tau_{out} = 0.175~\mathrm{s}\): \[ \vec{x}(t) = \big(\vec{s} * \kappa \big) (t) \] Here, \(\vec{s}(t)\) is a vector containing the spike trains of all hardware neurons and \(\vec{x}(t)\) a vector of the filtered spike trains. For each anomaly type in the input signal one readout unit is trained to detect it by taking a weighted sum of the filtered spiking activities. If this sum exceeds a fixed threshold, the anomaly counts as detected. Training of readout weights and thresholds is done independently for each anomaly type. Readout weights are trained using linear least-squares approximation, such that for all readout units \(i\) \[ \|X \cdot \vec{w}^{(i)} - \vec{y}^{(i)} \| \] is minimized. Here, \(\vec{w}^{(i)}\) is a vector holding input weights for unit \(i\). The rows of matrix \(X\) are the filtered spike trains \(\vec{x}\), sampled over time. The vector \(\vec{y}^{(i)}\) is the target at corresponding time points. It is 1 whenever an anomaly of type \(i\) is present in the input signal and 0 otherwise. During a validation run, a subset of the ECG data is used to find optimal detection thresholds for the readout units. Thresholds are chosen such that the number of false negatives for the respective anomaly and the number of false positives are minimized over the validation set. \section{Results} \subsection{Network performance} As described in Section~\ref{dataset}, the input signal to the network is assembled from segments of contiguous heartbeats. Within a segment, beats share the same label, which is either \textit{normal} or one of the five considered anomalous patterns. The task for the sRNN is to indicate the presence of anomalous conditions in the ECG. Classification of the specific anomaly type is not required.
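The low-pass filtering and least-squares readout training described in the Methods can be sketched as follows; the spike data, time step, and targets in any usage are synthetic stand-ins, not the recorded hardware activity:

```python
import numpy as np


def filter_spikes(spikes, dt, tau_out=0.175):
    """Low-pass filter spike trains (rows: time bins, columns: neurons)
    by convolution with the causal exponential kernel exp(-t / tau_out)."""
    decay = np.exp(-dt / tau_out)
    x = np.zeros(spikes.shape)
    acc = np.zeros(spikes.shape[1])
    for t in range(spikes.shape[0]):
        acc = acc * decay + spikes[t]  # exponentially decaying trace per neuron
        x[t] = acc
    return x


def train_readout(X, y):
    """Least-squares readout weights w minimizing ||X w - y||."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

At run time each readout unit computes \(X \vec{w}^{(i)}\) and compares it against its validated threshold.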
Network output can be interpreted as binary, being positive whenever any of the readout units indicates a detected anomaly and negative otherwise. Network performance is quantified by means of \(sensitivity\), \(specificity\), \(PPV\) (positive predictive value), and \(NPV\) (negative predictive value), which are defined as follows: \begin{align*} sensitivity &= \frac{\# \ \text{true positives}}{\# \ \text{true positives} + \# \ \text{false negatives}} \\ specificity &= \frac{\# \ \text{true negatives}}{\# \ \text{true negatives} + \# \ \text{false positives}} \\ PPV &= \frac{\# \ \text{true positives}}{\# \ \text{true positives} + \# \ \text{false positives}} \\ NPV &= \frac{\# \ \text{true negatives}}{\# \ \text{true negatives} + \# \ \text{false negatives}} \\ \end{align*} A true positive (false negative) is any segment of anomalous heartbeats during which the presence of an anomaly is (is not) indicated by any of the readout units. A true negative (false positive) is any normal heartbeat during which the network does not (does) indicate the presence of an anomaly. Because only the combined output of all readout units is considered, a unit indicating the presence of an anomaly which does not correspond to its own target label still counts as correct detection. Overall, 91.3~\% of anomalous segments in the test set were detected. Of the normal heartbeats, 2.4~\% were falsely indicated as being anomalous. This means that for a normal ECG input, on average there would be a false positive every 31.5 seconds. As shown in Table~\ref{tab:performance}, for three of the five presented anomaly types the network detected all of the anomalous segments reliably, for another type sensitivity was 88~\%. Only for one anomaly, \textit{premature ventricular contraction}, detection was significantly less reliable with 71.43~\%. There is no apparent correlation between individual subjects or ECG recordings and sensitivity for this anomaly. 
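For concreteness, the four metrics can be computed directly from the confusion counts; a trivial helper, included only to make the definitions executable:

```python
def metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }
```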
Specificity is above 99~\% for all anomaly types but particularly high for \textit{left bundle branch block beat} and \textit{atrial premature beat}, which resulted in only two and one false positives over the whole test set, respectively. Values for PPV are lower than the corresponding sensitivities, the lowest being 45.5~\% for \textit{premature ventricular contraction}, whereas NPV is high for all conditions. \begin{table*}[t] \begin{adjustwidth}{-0.5in}{} \small \begin{center} \begin{tabular}{ |c|c|c|c|c|c| } \hline \textbf{Anomaly type} & \textbf{Sensitivity} & \textbf{Specificity} & \textbf{PPV} & \textbf{NPV} & \textbf{MTBFP$^{\mathrm{a}}$} \\ \textbf{(expert label)} & \textbf{(in \%)} & \textbf{(in \%)} & \textbf{(in \%)} & \textbf{(in \%)} & \textbf{(in seconds)} \\ \hline \textit{Left bundle branch block beat} & 100.0 & 99.87 & 88.24 & 100.0 & 582.8 \\ \hline \textit{Right bundle branch block beat} & 88.24 & 99.36 & 60.0 & 99.87 & 116.56\\ \hline \textit{Premature ventricular contraction} & 71.43 & 99.24 & 45.45 & 99.74 & 97.1\\ \hline \textit{Paced beat} & 100.0 & 99.24 & 57.14 & 100.0 & 97.1\\ \hline \textit{Atrial premature beat} & 100.0 & 99.94 & 93.33 & 100.0 & 1165.6\\ \hline \textbf{Overall} & 92.11 & 97.6 & 65.42 & 99.61 & 31.5\\ \hline \multicolumn{6}{l}{\footnotesize $^{\mathrm{a}}$Mean time between false positives during normal ECG input} \end{tabular} \caption{Anomaly detection performance on test set} \label{tab:performance} \end{center} \end{adjustwidth} \end{table*} \subsection{Power consumption} Based on the figures provided in~\cite{Moradi_etal18}, the firing activity and architecture of the sRNN on the DYNAP-SE chip translate into a dynamic power consumption of 286.1~\(\mu\)W. Assuming a static power draw of around 230~\(\mu\)W, total power consumption amounts to 516.1~\(\mu\)W. 
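The chip-level total, and the system-level bound reported in Table~\ref{tab:power}, can be checked with simple arithmetic (component bounds as listed in that table):

```python
# DYNAP-SE figures quoted above (all values in microwatts).
dynap_dynamic = 286.1                       # from firing activity and architecture
dynap_static = 230.0                        # assumed static draw
dynap_total = dynap_dynamic + dynap_static  # 516.1 uW

# Upper bounds for the remaining stages: two amplifiers (< 50 uW each),
# four filters (< 4 uW each), and the readout (< 90 uW).
system_total = dynap_total + 2 * 50 + 4 * 4 + 90  # < 722.1 uW
```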
Together with the power estimates for those parts of the system that were simulated on a desktop computer (see Section~\ref{ssec:feasibility}), the total power consumption of the proposed system is less than 722.1~\(\mu\)W. An overview of the power figures for different components can be found in Table~\ref{tab:power}. \bgroup \begin{table*}[t] \small \begin{center} \begin{tabular}{ p{5mm}|c|c|l } \cline{2-3} & \textbf{Component} & \textbf{Power} \\ \cline{2-3} & \textit{DYNAP-SE total} & 516.1 \(\mu\)W \\ \cline{2-3} & \textit{Amplifiers} & \( < 2 \ \times \ 50 \ \mu\)W$^{\mathrm{a}}$ \\ \cline{2-3} & \textit{Filters} & \( < 4 \ \times\ 4 \ \mu\)W$^{\mathrm{b}}$ \\ \cline{2-3} & \textit{Readout} & \( < 90 \ \mu\)W \\ \cline{2-3} & \textbf{Total} & \(\bm{< 722.1 \ \mu}\)\textbf{W} \\ \cline{2-3} \multicolumn{4}{l}{\footnotesize $^{\mathrm{a}}$One amplifier per ECG channel} \\ \multicolumn{4}{l}{\footnotesize $^{\mathrm{b}}$One low-pass and one notch filter per ECG channel} \end{tabular} \caption{Power consumption} \label{tab:power} \end{center} \end{table*} \section{Discussion} \subsection{Performance metric} In this work network performance is evaluated by means of two key values, sensitivity and specificity, which quantify, on the one hand, the ability to detect anomalous patterns in the ECG input and, on the other hand, the proneness to produce erroneous detection events. In the data set, anomalous patterns usually do not occur in an isolated fashion but appear multiple times in close succession. Therefore, missing one pathological heartbeat is not problematic as long as others are detected correctly. Hence, sensitivity analysis is done over segments of rhythms with the same label and any detection counts for the whole segment. However, even for the same anomaly type there may be variations between individual beats, in particular for recordings from different subjects.
The network might be able to detect the anomaly with high reliability for some subjects and for others not. To make sure that such a scenario does not go undetected, segments always consist of contiguous heartbeats from single recordings. The ratio of erroneous detection events to the number of normal heartbeats provides an easy way to quantify specificity. Together with the combined duration of all normal beats it can be easily translated into the expected mean time between two false positives for an input that is free of anomalies, a number that gives an intuitive idea of specificity. \subsection{Network performance} The network reliably detects most of the presented anomalies, except for \textit{premature ventricular contraction}. There is no apparent correlation between individual subjects and sensitivity for this label and it seems that this pattern is particularly difficult for the network to detect, indicating that performance depends to some extent on the type of anomaly that is to be found. Which level of detection accuracy is adequate strongly depends on the individual application. For expert systems that autonomously classify heartbeats, high precision is crucial. On the other hand, in an assisted diagnosis scenario where the system only serves as a filter prior to diagnosis by a human expert, precision requirements may be reduced.
For instance, long-term monitoring of bio-signals from out-patients traditionally generates large amounts of data, which raises challenges that can be alleviated by the suggested system: diagnosing professionals can be directed to relevant sections without having to sift through the complete recording. The amount of data that needs to be stored and further processed can be reduced by filtering out irrelevant non-pathological sequences. This also allows for more compact devices with long battery life. Regarding mean time between false positives (MTBFP) and specificity, overall values are lower than for individual anomalies. The reason is that taking into account detection events for all classes simultaneously implies that false positives will accumulate over classes. PPV is slightly low for some anomalies, suggesting that positive results are not to be interpreted as a definite diagnosis but rather serve to initiate and support further analysis. It should be noted, however, that because true positives and false negatives are defined with respect to segments of multiple heartbeats, their numbers are lower than those for true negatives and false positives, which refer to individual heartbeats. This results in a disproportionately small PPV. The chosen trade-off between accuracy and MTBFP can depend on the precise use case. For example, a diagnosing professional can analyze an ECG much more efficiently if only two instead of 70 to 80 rhythms per recorded minute need to be considered. In other scenarios, sensitivity may be weighed against the number of false alarms by changing detection thresholds. Furthermore, specificity can be improved by only searching for specific anomaly types, e.g. if a specific pathological state is suspected in advance. For example, if only \textit{atrial premature beats} are considered, MTBFP is above 19 minutes with the current setup.
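The reported MTBFP values follow from dividing the duration of normal-ECG input by the number of false positives. The total normal duration used below is inferred from the per-class figures in Table~\ref{tab:performance} (an assumption; the text does not state this number explicitly):

```python
def mtbfp(normal_duration_s, n_false_positives):
    """Mean time between false positives for anomaly-free ECG input."""
    return normal_duration_s / n_false_positives


# Duration of normal ECG in the test set, inferred from the single false
# positive for 'atrial premature beat' (MTBFP 1165.6 s).
T_NORMAL = 1165.6
```

With this figure, the two false positives for \textit{left bundle branch block beat} give \(1165.6 / 2 = 582.8\)~s, and the overall MTBFP of 31.5~s corresponds to 37 false positives over the normal portion of the test set.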
Nevertheless, promising techniques have been proposed that may improve performance significantly but whose implementation would have exceeded the scope of this work. Instead of using random recurrent connections, network dynamics can be tailored to a given task by spectral analysis of the network~\cite{aceituno_17}, by techniques for balancing excitation and inhibition~\cite{deneve_13, Alemi_etal2018}, by in-reservoir learning algorithms such as~\cite{wilten_clopath17}, or by more traditional approaches like Backpropagation Through Time (BPTT)~\cite{Rumelhart_etal86a} to train a non-spiking recurrent neural network and then transfer the weights as in~\cite{he2019}. Finally, while the linear regression algorithm used in this work allows for efficient training of the readout weights, other learning models such as support-vector machines (SVMs)~\cite{Cortes_Vapnik95} may yield more suitable weights for this task. One may even consider replacing the layer of readout units by a multilayer perceptron (MLP) trained with backpropagation~\cite{rumelhart1985}. \subsection{Feasibility of preprocessing and readout stages}\label{ssec:feasibility} While the spiking neural network in this work was implemented on neuromorphic hardware, other parts of the system, namely the conversion from analog input to events, the readout stage, and the generation of the binary output signal, were simulated on a desktop computer. Furthermore, the ECG signal that was used had already been amplified and filtered, so that no preprocessing was necessary. In the following we argue that an implementation of these stages, and therefore of the full proposed system, is feasible. \subsubsection{Preprocessing and conversion to events} Typical raw ECG sensor signals have an amplitude of a few millivolts and are generally amplified and noise-filtered.
Suitable low-power, low-noise amplifiers have been proposed, for instance, in~\cite{yang_etal11} and~\cite{ghamati_nejad13}, each with a power consumption of less than 50~\(\mu\)W. Similarly, low-pass and notch filters are proposed in~\cite{kumar2016} and~\cite{yehoshuva_etal16}, requiring less than 4~\(\mu\)W of power. Implementations of asynchronous sigma-delta encoders that efficiently convert the analog signal to events are described in~\cite{Corradi_Indiveri15} and~\cite{Lichtsteiner_etal08}. \subsubsection{Readout} The readout units have not been set up as neurons on the neuromorphic processor for two reasons: the limited fan-in of the hardware neurons and the fact that all presynaptic weights of a neuron are the same for a given synapse type. The fan-in is critical insofar as a readout unit ideally has access to the firing activity of all neurons in the hidden layers of the network. However, it is certainly possible to design neuromorphic processors with readout units that can subscribe to significantly larger numbers of presynaptic neurons. One example is the Reconfigurable On-line Learning Spiking Neuromorphic Processor (ROLLS)~\cite{Qiao_etal15}, which is similar to the DYNAP-SE used in this work and can be configured such that a single neuron has up to 130k synapses. Furthermore, at the algorithmic level the required fan-in can be reduced by variable-selection methods and regularization rules that encourage a large number of zero weights, such as LASSO~\cite{tibshirani96}. Regarding weight quantization, synapses with individually tunable strengths and a precision of a few bits have been implemented in neuromorphic devices like the Dynap-SEL~\cite{Thakur_etal18}. Although it may not suffice to quantize previously trained high-precision weights, there are learning algorithms such as~\cite{zhu2016},~\cite{hubara2017} and~\cite{Helwegen_etal19} that are designed to find suitable low-precision weights. 
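As a toy illustration of the precision constraint (this is naive post-hoc quantization, not the cited algorithms, which learn low-precision weights directly; the weight values are invented), the sketch below maps a trained readout weight vector onto the symmetric grid of levels representable with a few bits:

```python
def quantize(weights, bits):
    """Map each weight to the nearest of 2**(bits-1) - 1 symmetric levels
    per sign (plus zero), spanning [-m, m] with m = max |w|."""
    m = max(abs(w) for w in weights)
    half = 2 ** (bits - 1) - 1      # e.g. 7 positive levels for 4 bits
    step = m / half                 # grid spacing
    return [max(-half, min(half, round(w / step))) * step for w in weights]

# Hypothetical trained readout weights, quantized to 4-bit precision.
w = [0.80, -0.33, 0.05, -0.71]
w4 = quantize(w, 4)  # small weights may collapse to zero
```

Weights much smaller than the grid spacing (here 0.05) collapse to zero, which is one reason the text notes that quantizing previously trained high-precision weights may not suffice.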
Another potential solution is the application of transfer methods as suggested in~\cite{he2019}. Assuming similar power figures as for DYNAP-SE, we estimate that an implementation of a readout as described above, that produces a binary output indicating whether one of the detection thresholds is exceeded, would consume less than 90~\(\mu\)W. \section{Conclusion} In this work we propose an always-on ultra-low power system that uses asynchronous neuromorphic hardware to perform real-time anomaly detection on a multi-channel ECG signal with a mean power consumption below one milliwatt. We implement a spiking recurrent neural network on a DYNAP-SE chip and demonstrate that it is able to reliably detect anomalous patterns in the ECG. Remaining parts of the proposed system can be realized with existing circuitry as part of a wearable health monitoring device. \section*{Acknowledgment} This work is supported in part by EU-H2020 grant NeuRAM3 Cube (NEUral computing aRchitectures in Advanced Monolithic 3D-VLSI nano-technologies); by H2020 FET Proactive grant SYNCH (824162); by H2020 ECSEL grant TEMPO (826655); by the European Research Council under the Grant Agreement No. 724295 (NeuroAgents); and by aiCTX AG. Mr Bauer and Dr Muir performed this work as part of their duties at aiCTX AG. \bibliographystyle{IEEEtran}
\section*{Acknowledgments} We thank the anonymous PLDI reviewers and our shepherd, Xiaodi Wu, for their feedback. This work was partially supported by the \grantsponsor{NSF}{National Science Foundation}{nsf.gov} under grant numbers~% \grantnum{NSF}{CCF-2115104}, \grantnum{NSF}{CCF-2119352}, and \grantnum{NSF}{CCF-2107241}. \section{Detailed Results} \label{sec:extended} \begin{table*}[hbt!] \setlength{\tabcolsep}{2pt} \small \caption{Gate count results for the Nam gate set when using $(n,q)$-complete ECC sets\xspace with varying values of $n$ and $q$, and a search timeout of 24 hours. ``Pr.'' shows the result when $n=0$ or $n=1$: the ECC set\xspace is empty, so the results match the ``Quartz Preprocess'' column of \Cref{tab:nam}. When $q=1$, the results are the same when $2 \le n \le 7$.} \include{appendix-tabular} \label{tab:nam:full} \vspace{4em} \end{table*} \begin{table*}[hbt!] \caption{Metrics for Quartz\xspace's generator, when generating $(n,q)$-complete ECC sets\xspace for $2 \le n \le 7$ and $1 \le q \le 4$ for the Nam gate set. $|\m{T}|$ is the resulting number of transformations. 
The characteristics (see definition in \Cref{subsec:complexity}) for $q=1,2,3,4$ are 7, 16, 27, 40, respectively.} \begin{threeparttable} \begin{tabular}{c|rrrr|rrrr|rrrr} \hline & \multicolumn{4}{c|}{\bf $|\m{T}|$} & \multicolumn{4}{c|}{\bf Verification Time (s)} & \multicolumn{4}{c}{\bf Total Time (s)} \\ \hline $n$ & \multicolumn{1}{c}{$q=1$} & \multicolumn{1}{c}{$q=2$} & \multicolumn{1}{c}{$q=3$} & \multicolumn{1}{c|}{$q=4$} & \multicolumn{1}{c}{$q=1$} & \multicolumn{1}{c}{$q=2$} & \multicolumn{1}{c}{$q=3$} & \multicolumn{1}{c|}{$q=4$} & \multicolumn{1}{c}{$q=1$} & \multicolumn{1}{c}{$q=2$} & \multicolumn{1}{c}{$q=3$} & \multicolumn{1}{c}{$q=4$} \\ \hline 2 & 14 & 38 & 62 & 78 & 0.5 & 0.7 & 1.2 & 2.5 & 0.5 & 0.7 & 1.3 & 2.8 \\ 3 & 14 & 90 & 196 & 208 & 0.8 & 1.3 & 2.6 & 7.8 & 0.8 & 1.5 & 3.7 & 12.0 \\ 4 & 44 & 332 & 1,304 & 2,988 & 1.1 & 2.3 & 8.5 & 48.0 & 1.2 & 3.8 & 21.4 & 118.3 \\ 5 & 78 & 1,334 & 8,002 & 27,942 & 1.5 & 8.8 & 49.5 & 917.0 & 1.6 & 19.2 & 174.7 & 2,452.0 \\ 6 & 120 & 5,794 & 56,152 & 273,532 & 1.9 & 52.5 & 370.3 & 5,802.6 & 2.2 & 138.9 & 1,400.4 & 19,448.0 \\ 7 & 164 & 21,824 & 379,864 & \multicolumn{1}{c|}{--\tnote{$\dagger$}} & 2.3 & 71.8 & 2,673.6 & \multicolumn{1}{c|}{--\tnote{$\dagger$}} & 2.8 & 222.9 & 10,461.2 & \multicolumn{1}{c}{--\tnote{$\dagger$}} \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[$\dagger$] We were unable to generate a $(7, 4)$-complete ECC set\xspace using \SI{512}{GB} of RAM. \end{tablenotes} \end{threeparttable} \label{tab:complexity:full} \end{table*} \Cref{tab:nam:full} shows the final gate count for each circuit for varying $n$ and $q$ after a 24-hour search timeout, with the best results of each circuit highlighted. Interestingly, $q=3$ with $3\le n \le 6$ covers the best optimization results for all circuits obtained among all configurations considered in the table. 
As circuits are sorted in the order of original size in the table, we can see that when $q=3$, small circuits tend to be better optimized with larger values of $n$, and larger circuits tend to be better optimized with smaller values of $n$. \Cref{tab:complexity:full} shows the run time of the entire generation procedure, and the portion of that time spent in verification, for each of the ECC sets\xspace used in \Cref{tab:nam:full}. The table also lists the number of resulting circuit transformations $|\m{T}|$ for each $n$ and $q$, and the characteristic for each $q$ (see definition in \Cref{subsec:complexity}). \Cref{fig:adder_8,fig:barenco_tof_3,fig:barenco_tof_4,fig:barenco_tof_5,fig:barenco_tof_10,fig:csla_mux_3,fig:csum_mux_9,fig:gf2^4_mult,fig:gf2^5_mult,fig:gf2^6_mult,fig:gf2^7_mult,fig:gf2^8_mult,fig:gf2^9_mult,fig:gf2^10_mult,fig:mod5_4,fig:mod_mult_55,fig:mod_red_21,fig:qcla_adder_10,fig:qcla_com_7,fig:qcla_mod_7,fig:rc_adder_6,fig:tof_3,fig:tof_4,fig:tof_5,fig:tof_10,fig:vbe_adder_3} show plots akin to \Cref{fig:scalability} and \Cref{fig:time} for each circuit. In each figure, the left plot shows the optimization result with $(n,q)$-complete ECC sets\xspace for varying $n$ and $q$ after a 24-hour search timeout. The right plot shows the optimization result over time for $q=3$ and $2\le n \le 7$. Each 24-hour result on the right plot corresponds to a green round marker ($q=3$) on the left plot. Among these figures, \Cref{fig:mod5_4} shows the median run for \texttt{mod5\_4}, defined as the run whose final gate count is the median of the 7 runs. We present the results of all 7 runs for \texttt{mod5\_4} for $q=3$ and $3 \le n \le 7$ in \Cref{fig:mod54:full}. \clearpage \input{appendix-plot} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/appendix/mod54_time_plot_all.pdf} \caption{7 runs of \texttt{mod5\_4} with an $(n, 3)$-complete ECC set for each $3 \le n \le 7$. Each marker denotes one run. 
For example, the three markers on the 57.1\%-reduction line of $n=5$ show three runs resulting in 27 gates after 24 hours.} \label{fig:mod54:full} \end{figure*} \section{Conclusion and Future Work} We have presented Quartz\xspace, a quantum circuit superoptimizer that automatically generates and verifies circuit transformations for arbitrary gate sets with symbolic parameters. While Quartz\xspace shows that a superoptimization-based approach to optimizing quantum circuits is practical, we believe there are many opportunities for further improvement. As discussed in \Cref{subsec:scalability}, Quartz\xspace's current search algorithm limits the number of transformations that can be effectively utilized. Improving the search algorithm may therefore lead to better optimization using $(n,q)$-complete ECC sets for larger values of $n$ and $q$, which may also require improving the generator. Another limitation of Quartz\xspace that suggests an opportunity for future work is that it only targets the logical circuit optimization stage and does not consider qubit mapping. Applying superoptimization to jointly optimize circuit logic and qubit mapping is both challenging and promising. \section{Symbolic Quantum Circuits} \label{sec:ecc} To support parametric gates, Quartz\xspace introduces \emph{symbolic} quantum circuits and circuit transformations. The latter are represented compactly using \emph{equivalent circuit classes} (ECCs\xspace). This section introduces these concepts. \paragraph{Quantum circuits.} Quantum programs are represented as {\em quantum circuits}~\cite{Nielsen00}, as shown in \Cref{fig:conventional_circuit}, where each horizontal wire represents a {\em qubit}, and boxes on these wires represent {\em quantum gates}. The semantics of a quantum circuit over $q$ qubits is given by a $2^q \times 2^q$ unitary complex matrix. 
This matrix can be computed from matrices of individual gates in a compositional manner, using matrix multiplications (for sequential composition of subcircuits that operate on the same qubits) and tensor products (for parallel composition of subcircuits that operate on different qubits). For example, the matrix for the circuit of \Cref{fig:conventional_circuit} is $(CNOT \otimes I) \cdot (U_2(\frac{\pi}{2},\pi) \otimes CNOT) \cdot (U_1(-\pi) \otimes H \otimes H)$. A circuit $C'$ is a \emph{subcircuit} of $C$ if, for some qubit permutation, the matrix computation for $C$ can be structured as $\mbox{\tt\char`\_} \cdot (M_{C'} \otimes I \otimes \cdots \otimes I) \cdot \mbox{\tt\char`\_}$, where $M_{C'}$ is the matrix for $C'$. For example, the green box in \Cref{fig:conventional_circuit} highlights a subcircuit, while the red dashed area is not a subcircuit. The subcircuit notion is invariant under qubit permutation; e.g., the $X$ and $U_1$ gates in \Cref{fig:conventional_circuit} also form a subcircuit. A circuit's matrix is invariant under replacing one subcircuit with another that has the same matrix (but possibly different gates), which underpins peephole optimization for quantum circuits. Many gates supported by modern quantum devices take real-valued parameters. For example, the IBM quantum device supports the $U_1$ gate, which takes one parameter and \nolinebreak rotates a qubit about the $z$-axis (on the Bloch sphere), and the $U_2$ gate, which takes two parameters for rotating about the $x$- and $z$-axes. 
The matrix representations of $U_1$ and $U_2$ are: \begin{equation} \label{eqn:u1} U_1(\theta) = \begin{pmatrix} 1 & 0\\ 0 & e^{i\theta} \end{pmatrix} \quad U_2(\phi,\lambda) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -e^{i\lambda}\\ e^{i\phi} & e^{i(\phi+\lambda)} \end{pmatrix} \end{equation} \begin{figure} \centering \begin{subfigure}{0.48\linewidth} \centering \includegraphics[scale=\pptscale]{figures/conventional_circuit3-crop.pdf} \caption{Quantum circuit.} \label{fig:conventional_circuit} \end{subfigure} \hfill \begin{subfigure}{0.48\linewidth} \centering \vspace{4pt} % \includegraphics[scale=\pptscale]{figures/our_circuit2-crop.pdf} \caption{Symbolic Quantum circuit.} \label{fig:our_circuit} \end{subfigure} \\ \vspace{1em} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=\pptscale]{figures/cz_transformation-crop.pdf} \caption{Transformation fusing two $R_z$ gates across $CZ$ and $X$ gates.} \label{fig:transformation_example} \end{subfigure} \caption{Quantum circuits and transformations. The green highlighted box in (a) is a subcircuit, while the red dashed area is not; the $U_1(-\pi)$ and $X$ gates also form a subcircuit. } \label{fig:circuit} \end{figure} \paragraph{Symbolic circuits.} To support superoptimization of circuits with parametric gates, Quartz\xspace discovers transformations between \emph{symbolic quantum circuits}, as shown in \Cref{fig:our_circuit}, which include (symbolic) parameters ($\theta$, $\phi$, $\lambda$, $\delta$, etc.) and arithmetic operations on these parameters, and are formalized below. Using such circuits, Quartz\xspace can represent transformations such as the one illustrated in \Cref{fig:transformation_example}. The semantics of a \emph{symbolic quantum circuit}, denoted $\sem{\cdot}$, has type $\sem{C} \colon \mathbb{R}^{m} \rightarrow \mathbb{C}^{2^q\times 2^q}$ where $C$ is a circuit over $m$ (symbolic) parameters and $q$ qubits. 
For a vector of parameter values $\vec{p}\in\mathbb{R}^m$, $\sem{C}(\vec{p})$ is a $2^q \times 2^q$ unitary complex matrix representing a (concrete) quantum circuit over $q$ qubits. For example, \cref{eqn:u1} can be seen as defining the semantics of $U_1$ and $U_2$ as single-gate symbolic quantum circuits. The semantics of a multi-gate symbolic circuit (e.g., \Cref{fig:our_circuit}) is derived from that of single-gate circuits using matrix multiplications and tensor products exactly as for concrete circuits. Henceforth, we use \emph{circuits} to mean symbolic quantum circuits. \paragraph{Circuit equivalence and transformations.} In quantum computing, the states $\ket{\psi}$ and $e^{i\beta}\ket{\psi}$ ($\beta\in\mathbb{R}$) are \emph{equivalent up to a global phase}, and from an observational point of view they are identical~\cite{Nielsen00}. This leads to the following circuit-equivalence definition that underlies Quartz\xspace's optimization. \begin{definition}[Circuit Equivalence] Two symbolic quantum circuits $C_1$ and $C_2$ are \emph{equivalent} if: \begin{equation} \label{eqn:equiv} \forall \vec{p} \in \mathbb{R}^m. \; \exists \beta \in \mathbb{R}. \; \sem{C_1}(\vec{p}) = e^{i\beta} \sem{C_2}(\vec{p}). \end{equation} \end{definition} That is, two circuits are equivalent if for every valuation of the parameters they differ only by a phase factor. The phase factor may in some cases be constant, but generally it may be different for different parameter values. For example, the equivalence between $U_1$ and $R_z$ gates ($U_1(\theta)=e^{i \theta / 2} R_z(\theta)$) requires a parameter-dependent phase factor. % Crucially for peephole optimization, circuit equivalence is invariant under replacing a subcircuit with an equivalent subcircuit. 
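The $U_1$/$R_z$ example can be checked numerically. Assuming the standard definition $R_z(\theta)=\mathrm{diag}(e^{-i\theta/2},e^{i\theta/2})$, the sketch below confirms that $U_1(\theta)$ and $R_z(\theta)$ satisfy \cref{eqn:equiv} with the parameter-dependent phase factor $\beta = \theta/2$, while their matrices are not equal outright:

```python
import cmath

def u1(t):
    # U_1(t) = diag(1, e^{it}), as in the matrix definition above.
    return [[1, 0], [0, cmath.exp(1j * t)]]

def rz(t):
    # Standard z-rotation: R_z(t) = diag(e^{-it/2}, e^{it/2}).
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def equal_up_to_phase(a, b):
    """True if a = e^{i beta} b for some real beta.
    Uses a[0][0]/b[0][0] as the candidate phase (valid when b[0][0] != 0)."""
    phase = a[0][0] / b[0][0]
    return (abs(abs(phase) - 1) < 1e-12 and
            all(abs(a[r][c] - phase * b[r][c]) < 1e-12
                for r in range(2) for c in range(2)))

for t in (0.3, 1.0, 2.5):
    assert equal_up_to_phase(u1(t), rz(t))          # equivalent circuits
    assert abs(u1(t)[1][1] - rz(t)[1][1]) > 1e-3    # but unequal matrices
```

Because the phase $e^{i\theta/2}$ varies with the parameter, no single constant $\beta$ works for all $\theta$, which is exactly why \cref{eqn:equiv} quantifies $\beta$ inside the universal quantifier over $\vec{p}$.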
A {\em circuit transformation} $(C_T, C_R)$ is a pair of distinct equivalent circuits, where $C_T$ is a {\em target circuit} to be matched with a subcircuit of the circuit being optimized, and $C_R$ is a {\em rewrite circuit} that can replace the target circuit while maintaining equivalence of the optimized circuit and the input circuit. \Cref{fig:transformation_example} illustrates a circuit transformation. \begin{figure} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[scale=\pptscale]{figures/hadamard_removing-crop.pdf} \caption{Common optimization removing four Hadamard gates.} \label{fig:hadamard_removing} \end{subfigure} \\ \vspace{1em} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=\pptscale]{figures/hadamard_removing_sequence-crop.pdf} \caption{The above optimization as a circuit rewriting using transformations.} \label{fig:hadamard_removing_sequence} \end{subfigure} \\ \vspace{1em} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=\pptscale]{figures/hadamard_removing_eccs-crop.pdf} \caption{An ECC set\xspace that includes the transformations used above.} \label{fig:hadamard_removing_eccs} \end{subfigure} \caption{\label{fig:hadamard}% Illustrating optimization via transformations. } \end{figure} \paragraph{Equivalent circuit classes.} Quartz\xspace uses {\em equivalent circuit classes} (ECCs\xspace) to represent many circuit transformations compactly. An ECC\xspace is a set of equivalent circuits. A transformation is \emph{included} in an ECC\xspace if both its target and rewrite circuits are in the ECC\xspace. ECCs\xspace provide a compact representation of circuit transformations: an ECC\xspace with $x$ circuits includes $x(x-1)$ transformations. A \emph{circuit rewriting} is a sequence of applications of circuit transformations, which Quartz\xspace uses for optimization as illustrated in \Cref{fig:hadamard}. 
\Cref{fig:hadamard_removing} shows a common optimization that removes four Hadamard gates (i.e., $H$) by flipping a \texttt{$CNOT$}\xspace gate. \Cref{fig:hadamard_removing_sequence} shows how to perform this optimization as a circuit rewriting consisting of three (more basic) circuit transformations, which are instances of the two transformations specified by the ECC set\xspace in \Cref{fig:hadamard_removing_eccs}. \paragraph{Completeness for ECC sets\xspace.} For any number of gates $n$, qubits $q$, and parameters $m$, we assume a \emph{finite} set $\m{C}^{(n,q,m)}$ of % circuits that can be constructed with at most $n$ gates over $q$ qubits and $m$ parameters. The collection $\{\m{C}^{(n,q,m)}\}_{n,q,m\in\mathbb{N}}$ is determined by the \emph{gate set} $\m{G}$ (finite set of possibly parametric gates) as well as a specification $\Sigma$ of how parameter expressions may be constructed; e.g., a finite set of arithmetic operations and some bounds on the depth of expressions or the number of times each parameter can be used, ensuring finiteness of every $\m{C}^{(n,q,m)}$. Henceforth, we fix the gate set $\m{G}$, the parameter-expression specification $\Sigma$, and the number of parameters $m$; and elide $m$ by writing $\m{C}^{(n,q)}$. A transformation $(C_T, C_R)$ is \emph{subsumed} by an ECC set\xspace if there is a circuit rewriting from $C_T$ to $C_R$ that only uses transformations included in the ECC set\xspace. \begin{definition}[$(n,q)$-Completeness] \label{def:nq} Given $\m{G}$, $\Sigma$, and $m$ as above, an ECC set\xspace is \emph{$(n,q)$-complete} (for $\m{G}$, $\Sigma$, and $m$) if it subsumes all circuit transformations over $\m{C}^{(n,q)}$. \end{definition} An $(n,q)$-complete ECC set\xspace can be used to rewrite any two equivalent circuits with at most $n$ gates over $q$ qubits to each other. Any ECC set\xspace is by default $(0,q)$-complete because any transformation involves at least one gate in the target or rewrite circuits. 
An ECC set\xspace is $(1,q)$-complete if it subsumes all possible transformations between single gates. \Cref{sec:generator,sec:verifier,sec:pruning} describe our approach for constructing a verified $(n,q)$-complete ECC set\xspace. \section{Implementation and Evaluation} \label{sec:eval} We describe our implementation of Quartz\xspace and evaluate the performance of the generator, the verifier, and the optimizer. Quartz\xspace is publicly available as an open-source project~\cite{quartz_github} and also in the artifact supporting this paper~\cite{quartz_zenodo}. \subsection{Implementation} \label{sec:impldetails} \paragraph{Floating-point arithmetic.} The \textproc{RepGen}\xspace{} algorithm as presented in \Cref{sec:generator} uses real-valued fingerprints, where two equivalent circuits always have the same fingerprint. Our implementation of \textproc{RepGen}\xspace{} uses floating-point arithmetic, which introduces some imprecision that can potentially lead to different fingerprints for equivalent circuits. To account for this imprecision, the implementation assumes there exists an {\em absolute error threshold} $E_{max}$, such that fingerprints of equivalent circuits differ by at most $E_{max}$ when computed with floating-point arithmetic. The implementation therefore computes, using floating-point arithmetic, the integer $\left\lfloor\frac{\Call{FingerPrint}{C}}{2 E_{max}}\right\rfloor$, and uses it as the key for storing circuit $C$ in $\m{D}$. Under our assumption, equivalent circuits may still have different integer hash keys, but they may differ only by $1$. Therefore, the implementation introduces an additional step after line~\ref{alg1:eccfy} of \Cref{alg1}, in which ECCs\xspace that correspond to circuits with hash keys $h$ and $h+1$ are checked for equivalence and merged if found equivalent. In our experiments we set $E_{max}=10^{-15}$ based on preliminary exploration of the floating-point errors that occur in practice. 
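The bucketing scheme described above can be sketched as follows. The fingerprint values are made up for illustration, and where the real generator checks ECCs\xspace in adjacent buckets for equivalence before merging, this sketch merges adjacent-key buckets unconditionally:

```python
import math
from collections import defaultdict

E_MAX = 1e-15  # assumed absolute error bound on floating-point fingerprints

def hash_key(fingerprint):
    # Equivalent circuits have fingerprints within E_MAX of each other,
    # so their integer keys differ by at most 1.
    return math.floor(fingerprint / (2 * E_MAX))

# Hypothetical fingerprints: c1 and c2 stand for equivalent circuits whose
# fingerprints differ only by floating-point noise; c3 is unrelated.
fps = {"c1": 0.123456789, "c2": 0.123456789 + 4e-16, "c3": 0.987654321}
buckets = defaultdict(list)
for name, fp in fps.items():
    buckets[hash_key(fp)].append(name)

# Post-processing: buckets with adjacent keys may hold equivalent circuits,
# so they are combined (unconditionally here, after verification in Quartz).
merged, keys = [], sorted(buckets)
for k in keys:
    if merged and k == merged[-1][0] + 1:
        merged[-1] = (k, merged[-1][1] + buckets[k])
    else:
        merged.append((k, list(buckets[k])))
```

Whether c1 and c2 land in the same bucket or in adjacent ones depends on where their fingerprints fall relative to a bucket boundary; the adjacent-bucket merge step makes the outcome the same either way.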
\begin{table}[t] \footnotesize \centering \caption{Gate sets used in our evaluation.} \label{tab:gate_set} \vspace{0em} \begin{tabular}{l|l} \hline {\bf Name} & {\bf Supported Gates} \\ \hline Nam~\cite{nam2018automated,VOQC} & $H, \;\; X, \;\; R_z(\lambda),\;\; CNOT$ \\ IBM~\cite{dumitrescu2018cloud} & $U_1(\theta), \;\;U_2(\phi, \lambda),\;\; U_3(\theta, \phi,\lambda), \;\; CNOT$ \\ Rigetti~\cite{rigetti_agave} & $R_x(\frac{\pi}{2}), \;\; R_x(-\frac{\pi}{2}), \;\; R_x(\pi)=X, \;\; R_z(\lambda), \;\; CZ$ \\ \hline \end{tabular} \end{table} \paragraph{Supported gate sets.} Quartz\xspace is a generic quantum circuit optimizer supporting arbitrary gate sets, and it accepts a gate set $\m{G}$ as part of its input. In our experiments, input circuits are given over the ``Clifford + T'' gate set: $H$, $T$, $T^\dagger$, $S$, $S^\dagger$, and $CNOT$; and output (optimized) circuits are in one of the three gate sets listed in \Cref{tab:gate_set}: Nam, IBM, and Rigetti. Nam is a gate set commonly used in prior work~\cite{nam2018automated,VOQC}, IBM is derived from the IBMQX5 quantum processor~\cite{dumitrescu2018cloud}, and Rigetti is derived from the Rigetti Agave quantum processor~\cite{rigetti_gates,rigetti_agave}. To generate and verify circuit transformations for a new gate set, Quartz\xspace only requires, for each gate, a specification of its matrix representation as a function of its parameters, such as \cref{eqn:u1}. % To optimize circuits, a translation procedure of input circuits to the new gate set is also required unless input circuits are provided in the new gate set. \paragraph{Rotation merging and Toffoli decomposition.} Before invoking Quartz\xspace's optimizer, Quartz\xspace preprocesses circuits by applying two optimizations: \emph{rotation merging} and \emph{Toffoli decomposition}~\cite{nam2018automated}. 
Our preliminary experiments showed that an approach solely based on local transformations and a cost-based search cannot reproduce these optimization passes for large circuits. Rotation merging combines multiple $R_z$/$U_1$ gates that may be arbitrarily far apart (separated by $X$ or $CNOT$ gates), and appears difficult to represent as local circuit transformations. Toffoli decomposition decomposes a Toffoli gate into the Nam gate set, which involves simultaneous transformation of 15 quantum gates and interacts with rotation merging~\cite[p.~11]{nam2018automated}. We therefore implement these two optimization passes from prior work~\cite{nam2018automated} as a preprocessing step. Toffoli decomposition requires selecting a polarity for each Toffoli gate, which is computed heuristically by prior work~\cite{nam2018automated}. Instead, we use a greedy approach: we process the Toffoli gates sequentially and for each gate we consider both polarities and greedily pick the one that results in fewer gates after rotation merging. For the Nam and IBM gate sets, Quartz\xspace directly applies rotation merging and Toffoli decomposition as a preprocessing step before the optimizer. For the Rigetti gate set, which includes $CZ$ rather than $CNOT$, the algorithm from prior work~\cite{nam2018automated} is not directly applicable; therefore, Quartz\xspace uses several additional preprocessing steps, as follows. Rather than directly transpiling an input circuit to Rigetti, Quartz\xspace first transpiles it to Nam and applies Toffoli decomposition and rotation merging. Next, Quartz\xspace rewrites each $CNOT$ gate to a sequence of $H$, $CZ$, $H$ gates, cancels out adjacent $H$ or $CZ$ pairs, and then fully converts the circuit to Rigetti by transforming $X$ to $R_x(\pi)$ and $H$ to $R_x(\pi) \cdot R_z(\frac{\pi}{2}) \cdot R_x(\frac{\pi}{2}) \cdot R_z(-\frac{\pi}{2})$. Ultimately, Quartz\xspace invokes the optimizer, using a suitable $(n,q)$-complete ECC set for Rigetti. 
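Assuming the standard rotation-gate matrices $R_x(\theta)$ and $R_z(\theta)$, the rewriting of $H$ used above can be verified numerically. Note that the product reproduces $H$ only up to a global phase of $-i = e^{-i\pi/2}$, which is exactly the equivalence-up-to-phase notion defined earlier:

```python
import cmath
import math

def rx(t):
    # Standard x-rotation matrix.
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -1j * s], [-1j * s, c]]

def rz(t):
    # Standard z-rotation matrix.
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# The product R_x(pi) . R_z(pi/2) . R_x(pi/2) . R_z(-pi/2) from the text.
m = matmul(matmul(rx(math.pi), rz(math.pi / 2)),
           matmul(rx(math.pi / 2), rz(-math.pi / 2)))

# Equal to H up to the global phase -i = e^{-i pi/2}.
assert all(abs(m[r][c] - (-1j) * H[r][c]) < 1e-12
           for r in range(2) for c in range(2))
```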
Note that elimination of adjacent $H$ or $CZ$ pairs during the translation from Nam to Rigetti leads to more optimized circuits: a pair of adjacent $H$ gates (that are canceled out) would otherwise be transformed into a sequence of eight $R_x$ and $R_z$ gates that cannot be canceled out by Quartz\xspace since the cancellation is only correct for specific parameter values, while Quartz\xspace considers symbolic transformations valid for arbitrary parameter values. \paragraph{Symbolic parameter expressions.} As explained in \Cref{sec:ecc}, Quartz\xspace assumes a fixed number of parameters $m$ and a specification $\Sigma$ for parameter expressions used in circuits. Quartz\xspace takes $m$ as input, and supports a flexible form for $\Sigma$ defined by a finite set of parameter expressions and either allowing or disallowing parameters to be used more than once in a circuit. Our experiments use $m=2$ for the Nam and Rigetti gate sets, and $m=4$ for the IBM gate set because it contains gates with up to three parameters. For $\Sigma$, we consider the expressions $p_{i}$, $2p_i$ and $p_{i}+p_{j}$ where $0 \leq i < m$ and $i < j < m$ (recall that $\vec{p}\in\mathbb{R}^m$ is the vector of parameters), and restrict each parameter to be used at most once in a circuit. This restriction significantly reduces the number of circuits \textproc{RepGen}\xspace{} considers, especially for the IBM gate set because the $U_3$ gate requires three parameter expressions. As explained in \Cref{sec:verifier}, the verifier searches for phase factors of the form $\vec{a} \cdot \vec{p} + b$. In our experiments we used $a \in \{-2, -1, 0, 1, 2\}^m$ and $b \in \{0, \frac{\pi}{4}, \frac{2\pi}{4}, \dots,\frac{7\pi}{4}\}$, which proved to be useful and sufficient in our preliminary experimentation with various gates. We later found that for the gate sets of \Cref{tab:gate_set}, $\vec{a}=\vec{0}$ is actually sufficient. 
That is, these gate sets do not admit any transformations with parameter-dependent phase factors for the circuits we considered; they do however need various constant phase factors. \subsection{Experiment Setup} We compare Quartz\xspace with existing quantum circuit optimizers on a benchmark suite of 26 circuits developed by prior work \cite{nam2018automated, amy2014}. The benchmarks include arithmetic circuits (e.g., adding integers), multiple controlled $X$ and $Z$ gates (e.g., $CCX$ and $CCZ$), the Galois field multiplier circuits, and quantum Fourier transformations. We use Quartz\xspace to optimize the benchmark circuits to the three gate sets of \Cref{tab:gate_set}. As in prior work~\cite{nam2018automated, VOQC}, we measure cost of a circuit in terms of the total gate count. We therefore define the \textproc{Cost} function in \Cref{alg:search} as the number of gates in a circuit.\footnote{Quartz\xspace can in principle be used to optimize for other metrics, e.g. number of $CNOT$ or $T$ gates, but here we focus on total gate count.} Setting $n$ and $q$ for generating an $(n,q)$-complete ECC set\xspace determines the resulting transformations. Our experiments use: $n=6,q=3$ for the Nam gate set; $n=4,q=3$ for the IBM gate set; and $n=3,q=3$ for the Rigetti gate set, which provided good results for our benchmarks. \Cref{sec:eval:generator,subsec:scalability} discuss the impact of $n$ and $q$ on Quartz\xspace's performance. Quartz\xspace's backtracking search (\Cref{alg:search}) is controlled by the hyper-parameter $\gamma$ and the timeout threshold. Our experiments use $\gamma = 1.0001$, which yields good results for our benchmarks. This value for $\gamma$ essentially means we consider cost-preserving transformations but not cost-increasing ones. For the search timeout, we use 24 hours. \Cref{subsec:scalability} discusses the timeout threshold and how it interacts with the settings for $n$ and $q$. 
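To make the role of $\gamma$ concrete, here is a schematic of a cost-guided search over a toy rewrite system (not Quartz\xspace's actual implementation; circuit representation, neighbor generation, and cost function are all stand-ins): a candidate is explored only if its cost is at most $\gamma$ times the best cost found so far.

```python
import heapq

def search(initial, neighbors, cost, gamma=1.0001, max_steps=1000):
    """Cost-guided best-first search with a gamma-bounded frontier."""
    best = initial
    queue = [(cost(initial), initial)]
    seen = {initial}
    for _ in range(max_steps):
        if not queue:
            break
        _, circ = heapq.heappop(queue)
        for nxt in neighbors(circ):
            # Discard candidates whose cost exceeds gamma * best cost.
            if nxt in seen or cost(nxt) > gamma * cost(best):
                continue
            seen.add(nxt)
            if cost(nxt) < cost(best):
                best = nxt
            heapq.heappush(queue, (cost(nxt), nxt))
    return best

# Toy stand-in: "circuits" are strings, cost is length, and the only
# transformations delete an adjacent "hh" or "xx" pair (self-inverse gates).
def neighbors(s):
    for i in range(len(s) - 1):
        if s[i] == s[i + 1] and s[i] in "hx":
            yield s[:i] + s[i + 2:]

assert search("chhxxc", neighbors, len) == "cc"
```

With $\gamma$ just above 1, cost-preserving rewrites remain admissible (they satisfy $\mathrm{cost} \le \gamma \cdot \mathrm{best}$) while cost-increasing ones are pruned, matching the setting described above.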
To stop the search from consuming too much memory, whenever the priority queue of \Cref{alg:search} contains more than 2,000 circuits we prune it and keep only the top 1,000 circuits. Our preliminary experimentation with this pruning suggested that it does not affect Quartz\xspace's results. All experiments were performed on an \texttt{m6i.32xlarge} AWS EC2 instance with a 128-core CPU and \SI{512}{GB} RAM.% \subsection{Circuit Optimization Results} \label{sec:eval:results} \begin{table} { \centering \caption{% Gate count results for the Nam gate set. The best result for each circuit is in bold. ``Quartz\xspace Preprocess'' lists gate count after Quartz\xspace's preprocessor (\Cref{sec:impldetails}). } \label{tab:nam} \vspace{0em} \footnotesize \resizebox{\columnwidth}{!} {% \begin{threeparttable}[t] \begin{tabular}{l|rrrr|rr} \hline {\bf Circuit} & {\bf Orig.} & {\bf Qiskit} & {\bf Nam} & {\bf \textproc{voqc}\xspace} & {\bf \rotatebox[origin=c]{90}{\begin{tabular}{@{}c@{}}Quartz\xspace\\Preprocess\end{tabular}}} & {\bf \rotatebox[origin=c]{90}{\begin{tabular}{@{}c@{}}Quartz\xspace\\End-to-end\end{tabular}}} \\ \hline {\tt adder\_8} & 900 & 869 & {\bf 606} & 682 & 732 & 724 \\ {\tt barenco\_tof\_3} & 58 & 56 & 40 & 50 & 46 & {\bf 38}\\ {\tt barenco\_tof\_4} & 114 & 109 & 72 & 95 & 86 & {\bf 68}\\ {\tt barenco\_tof\_5} & 170 & 162 & 104 & 140 & 126 & {\bf 98}\\ {\tt barenco\_tof\_10} & 450 & 427 & 264 & 365 & 326 & {\bf 262}\\ {\tt csla\_mux\_3} & 170 & 168 & 155 & 158 & 164 & \textbf{154}\\ {\tt csum\_mux\_9} & 420 & 420 & \textbf{266} & 308 & 308 & 272\\ {\tt gf2\^{}4\_mult} & 225 & 213 & 187 & 192 & 186 & {\bf 177}\\ {\tt gf2\^{}5\_mult} & 347 & 327 & 296 & 291 & 287 & {\bf 277}\\ {\tt gf2\^{}6\_mult} & 495 & 465 & 403 & 410 & 401 & {\bf 391}\\ {\tt gf2\^{}7\_mult} & 669 & 627 & 555 & 549 & 543 & {\bf 531}\\ {\tt gf2\^{}8\_mult} & 883 & 819 & 712 & 705 & {\bf 703} & {\bf 703}\\ {\tt gf2\^{}9\_mult} & 1095 & 1023 & 891 & 885 & 879 & {\bf 873}\\ {\tt 
gf2\^{}10\_mult} & 1347 & 1257 & 1070 & 1084 & 1062 & {\bf 1060} \\ {\tt mod5\_4} & 63 & 62 & 51 & 56 & 51 & {\bf 26}$^{\dag}$ \\ {\tt mod\_mult\_55} & 119 & 117 & 91 & {\bf 90} & 105 & 93 \\ {\tt mod\_red\_21} & 278 & 261 & {\bf 180} & 214 & 236 & 202 \\ {\tt qcla\_adder\_10} & 521 & 512 & {\bf 399} & 438 & 450 & 422 \\ {\tt qcla\_com\_7} & 443 & 428 & {\bf 284} & 314 & 349 & 292 \\ {\tt qcla\_mod\_7} & 884 & 853 & -$^{\dag\dag}$ & 723 & 727 & {\bf 719} \\ {\tt rc\_adder\_6} & 200 & 195 & {\bf 140} & 157 & 174 & 154 \\ {\tt tof\_3} & 45 & 44 & {\bf 35} & 40 & 39 & {\bf 35} \\ {\tt tof\_4} & 75 & 73 & {\bf 55} & 65 & 63 & {\bf 55} \\ {\tt tof\_5} & 105 & 102 & {\bf 75} & 90 & 87 & {\bf 75} \\ {\tt tof\_10} & 255 & 247 & {\bf 175} & 215 & 207 & {\bf 175} \\ {\tt vbe\_adder\_3} & 150 & 146 & 89 & 101 & 115 & {\bf 85} \\ \hline {\bf \begin{tabular}{@{}c@{}}Geo. Mean\\Reduction\end{tabular}} & - & 3.9\% & 27.3\% & 18.7\% & 18.6\% & 28.7\% \\ \hline \end{tabular}% \begin{tablenotes} \footnotesize \item[$\dag$] Computed as the median of seven runs: 25, 26, 26, 26, 32, 32, 32. % \item[$\dag\dag$] Nam generates an incorrect circuit for \texttt{qcla\_mod\_7}~\cite[Table~1]{kissinger2019reducing}. \end{tablenotes} \end{threeparttable} } } \end{table} \paragraph{Nam gate set.} \Cref{tab:nam} compares Quartz\xspace to Qiskit\xspace~\cite{qiskit}, Nam~\cite{nam2018automated}, and \textproc{voqc}\xspace~\cite{VOQC} for the Nam gate set. (The performance of t$|$ket$\rangle$\xspace~\cite{tket} for this gate set is similar to Qiskit\xspace, see ~\cite{VOQC}.) The table also shows the gate count following Quartz\xspace's preprocessing steps (rotation merging and Toffoli decomposition, see \Cref{sec:impldetails}). 
Quartz\xspace outperforms Qiskit\xspace and \textproc{voqc}\xspace on almost all circuits, indicating that it discovers most transformations used in these optimizers and also explores new optimization opportunities arising from new transformations and from the use of a cost-guided backtracking search (rather than a greedy approach, e.g., see \Cref{fig:cnot_transformations}). Quartz\xspace achieves on-par performance with Nam~\cite{nam2018automated}, a circuit optimizer highly tuned for this gate set. Nam applies a set of carefully chosen heuristics such as floating $R_z$ gates and canceling one- and two-qubit gates (see \cite{nam2018automated} for more detail). While Quartz\xspace's preprocessor implements two of Nam's optimization passes, the results of the preprocessor alone are not close to Nam.\footnote{We observe that for the \texttt{gf2\^{}n\_mult} circuits, Quartz\xspace' preprocessor outperforms Nam. We attribute this difference to our greedy Toffoli decomposition, discussed in~\Cref{sec:impldetails}, which happens to work well for these circuits.} By using the automatically generated transformations, Quartz\xspace is able to perform optimizations similar to some of Nam's other hand-tuned optimizations, and even outperform Nam on roughly half of the circuits. For \texttt{mod5\_4}, we observed significant variability between runs, caused by randomness in ordering circuits with the same cost in the priority queue ($\m{Q}$ in \Cref{alg:search}). Therefore, \Cref{tab:nam} reports the median result from seven runs as well as individual results. This variability also suggests that Quartz\xspace's performance can be improved by running the optimizer multiple times and taking the best discovered circuit, or by applying more advanced stochastic search techniques~\cite{DBLP:conf/pldi/KoenigPA21}. \begin{table}[t] \centering \caption{% Gate count results for the IBM gate set. The best result for each circuit is in bold. 
``Quartz\xspace Preprocess'' lists gate count after Quartz\xspace's preprocessor (\Cref{sec:impldetails}). } \label{tab:ibmq} \vspace{0em} \small \resizebox{\columnwidth}{!} {% \begin{tabular}{l|rrrr|rr} \hline {\bf Circuit} & {\bf Orig.} & {\bf Qiskit} & {\bf t|ket$\rangle$} & {\bf \textproc{voqc}\xspace} & {\bf \rotatebox[origin=c]{90}{\begin{tabular}{@{}c@{}}Quartz\xspace\\Preprocess\end{tabular}}} & {\bf \rotatebox[origin=c]{90}{\begin{tabular}{@{}c@{}}Quartz\xspace\\End-to-end\end{tabular}}} \\ \hline {\tt adder\_8} & 900 & 805 & 775 & 643 & 736 & {\bf 583} \\ {\tt barenco\_tof\_3} & 58 & 51 & 51 & 46 & 46 & {\bf 36}\\ {\tt barenco\_tof\_4} & 114 & 100 & 100 & 89 & 86 & {\bf 67}\\ {\tt barenco\_tof\_5} & 170 & 149 & 149 & 135 & 126 & {\bf 98}\\ {\tt barenco\_tof\_10} & 450 & 394 & 394 & 347 & 326 & {\bf 253}\\ {\tt csla\_mux\_3} & 170 & 153 & 155 & 148 & 164 & {\bf 139}\\ {\tt csum\_mux\_9} & 420 & 382 & 361 & {\bf 308} & 364 & 340\\ {\tt gf2\^{}4\_mult} & 225 & 206 & 206 & 190 & 186 & {\bf 178}\\ {\tt gf2\^{}5\_mult} & 347 & 318 & 319 & 289 & 287 & {\bf 275}\\ {\tt gf2\^{}6\_mult} & 495 & 454 & 454 & 408 & 401 & {\bf 388}\\ {\tt gf2\^{}7\_mult} & 669 & 614 & 614 & 547 & 543 & {\bf 530}\\ {\tt gf2\^{}8\_mult} & 883 & 804 & 806 & 703 & 703 & {\bf 692}\\ {\tt gf2\^{}9\_mult} & 1095 & 1006 & 1009 & 882 & 879 & {\bf 866}\\ {\tt gf2\^{}10\_mult} & 1347 & 1238 & 1240 & 1080 & 1062 & {\bf 1050} \\ {\tt mod5\_4} & 63 & 58 & 58 & 53 & 55 & {\bf 51} \\ {\tt mod\_mult\_55} & 119 & 106 & 102 & {\bf 83} & 109 & 91 \\ {\tt mod\_red\_21} & 278 & 227 & 224 & {\bf 191} & 246 & 205 \\ {\tt qcla\_adder\_10} & 521 & 460 & 460 & 409 & 450 & {\bf 372} \\ {\tt qcla\_com\_7} & 443 & 392 & 392 & 292 & 349 & {\bf 267} \\ {\tt qcla\_mod\_7} & 884 & 778 & 780 & 666 & 726 & {\bf 594} \\ {\tt rc\_adder\_6} & 200 & 170 & 172 & {\bf 141} & 186 & 151 \\ {\tt tof\_3} & 45 & 40 & 40 & 36 & 39 & {\bf 31} \\ {\tt tof\_4} & 75 & 66 & 66 & 58 & 63 & {\bf 49} \\ {\tt tof\_5} & 105 & 92 & 92 & 80 & 
87 & {\bf 67} \\ {\tt tof\_10} & 255 & 222 & 222 & 190 & 207 & {\bf 157} \\ {\tt vbe\_adder\_3} & 150 & 133 & 139 & 100 & 115 & {\bf 82} \\ \hline {\bf \begin{tabular}{@{}c@{}}Geo. Mean\\Reduction\end{tabular}} & - & 11.0\% & 11.2\% & 23.1\% & 17.4\% & 30.1\% \\ \hline \end{tabular}% } \end{table} \paragraph{IBM gate set.} \Cref{tab:ibmq} compares Quartz\xspace with Qiskit\xspace~\cite{qiskit}, t$|$ket$\rangle$\xspace~\cite{tket}, and \textproc{voqc}\xspace~\cite{VOQC} on the IBM gate set. Qiskit\xspace and t$|$ket$\rangle$\xspace include a number of optimizations specific to this gate set, such as merging any sequence of $U_1$, $U_2$, and $U_3$ gates into a single gate~\cite{qiskit_optimize1qgates} and replacing any block of consecutive 1-qubit gates by a single $U_3$ gate~\cite{qiskit_consolidateblocks}. Quartz\xspace is able to automatically discover some of these gate-specific optimizations by representing them each as a sequence of transformations. Overall, Quartz\xspace outperforms these existing compilers. \begin{table}[t] \centering \caption{% Gate count results for the Rigetti gate set. The best result for each circuit is in bold. ``Quartz\xspace Preprocess'' lists gate count after Quartz\xspace's preprocessor (\Cref{sec:impldetails}). 
} \label{tab:rigetti} \vspace{0em} \footnotesize \begin{threeparttable} \begin{tabular}{l|rrr|rr} \hline {\bf Circuit} & {\bf Orig.} & {\bf Quilc\xspace} & {\bf t|ket$\rangle$} & {\bf \rotatebox[origin=c]{90}{\begin{tabular}{@{}c@{}}Quartz\xspace\\Preprocess\end{tabular}}} & {\bf \rotatebox[origin=c]{90}{\begin{tabular}{@{}c@{}}Quartz\xspace\\End-to-end\end{tabular}}} \\ \hline {\tt adder\_8} & 5324 & 3345 & 3726 & 4244 & {\bf 2553} \\ {\tt barenco\_tof\_3} & 332 & 203 & 207 & 256 & {\bf 148} \\ {\tt barenco\_tof\_4} & 656 & 390 & 408 & 500 & {\bf 272} \\ {\tt barenco\_tof\_5} & 980 & 607 & 609 & 744 & {\bf 386} \\ {\tt barenco\_tof\_10} & 2600 & 1552 & 1614 & 1964 & {\bf 960} \\ {\tt csla\_mux\_3} & 1030 & {\bf 614} & 641 & 864 & 654 \\ {\tt csum\_mux\_9} & 2296 & 1540 & 1542 & 1736 & {\bf 1100} \\ {\tt gf2\^{}4\_mult} & 1315 & 809 & 827 & 1020 & {\bf 796} \\ {\tt gf2\^{}5\_mult} & 2033 & 1301 & 1277 & 1573 & {\bf 1231} \\ {\tt gf2\^{}6\_mult} & 2905 & 1797 & 1823 & 2235 & {\bf 1751} \\ {\tt gf2\^{}7\_mult} & 3931 & 2427 & 2465 & 3021 & {\bf 2371} \\ {\tt gf2\^{}8\_mult} & 5237 & 3208 & 3276 & 4033 & {\bf 3081} \\ {\tt gf2\^{}9\_mult} & 6445 & 4070 & 4037 & 4933 & {\bf 3986} \\ {\tt gf2\^{}10\_mult} & 7933 & 4977 & {\bf 4967} & 6048 & 5039 \\ {\tt mod5\_4} & 369 & 211 & 238 & 293 & {\bf 197} \\ {\tt mod\_mult\_55} & 657 & 420 & 452 & 531 & {\bf 361} \\ {\tt mod\_red\_21} & 1480 & 880 & 1020 & 1166 & {\bf 738} \\ {\tt qcla\_adder\_10} & 3079 & -$^\dagger$ & 1884 & 2464 & {\bf 1615} \\ {\tt qcla\_com\_7} & 2512 & 1540 & 1606 & 1954 & {\bf 1095} \\ {\tt qcla\_mod\_7} & 5130 & 3164 & 3202 & 4029 & {\bf 2525} \\ {\tt rc\_adder\_6} & 1186 & 706 & 747 & 984 & {\bf 606} \\ {\tt tof\_3} & 255 & 150 & 160 & 201 & {\bf 135} \\ {\tt tof\_4} & 425 & 271 & 270 & 333 & {\bf 199} \\ {\tt tof\_5} & 595 & 354 & 380 & 465 & {\bf 271} \\ {\tt tof\_10} & 1445 & 878 & 930 & 1125 & {\bf 631} \\ {\tt vbe\_adder\_3} & 900 & 534 & 557 & 705 & {\bf 366} \\ \hline {\bf 
\begin{tabular}{@{}c@{}}Geo. Mean\\Reduction\end{tabular}} & - & 38.6\% & 36.3\% & 21.9\% & 49.4\% \\ \hline \end{tabular}% \begin{tablenotes} \footnotesize \item[$\dagger$] Quilc\xspace supports up to 32 qubits while \texttt{qcla\_adder\_10} has 36. \end{tablenotes} \end{threeparttable} \end{table} \paragraph{Rigetti gate set.} \Cref{tab:rigetti} compares Quartz\xspace with Quilc\xspace~\cite{quilc} and t$|$ket$\rangle$\xspace~\cite{tket} on the Rigetti gate set. Quartz\xspace significantly outperforms t$|$ket$\rangle$\xspace and Quilc\xspace on most circuits, even though Quilc\xspace is highly optimized for this gate set. We also note that while we employ some simplifications in the preprocessing phase for the Rigetti gate set (see \Cref{sec:impldetails}), most of the reduction in gate count comes from the optimization phase. \subsection{Analyzing Quartz\xspace's Generator and Verifier} \label{sec:eval:generator} \input{table_eval_generator} We now examine Quartz\xspace's circuit generator and circuit equivalence verifier. \Cref{tab:complexity} shows the run times of the entire generation procedure, and also the time out of that spent in verification, for each of the three gate sets and for varying values of $n$, while fixing $q=3$. The table also lists the number of resulting circuit transformations $|\m{T}|$, the size of the resulting representative set $|\m{R}_n|$, and the characteristic (see \Cref{alg1} and \Cref{thm:complexity}). For all gate sets, $|\m{T}|$ and $|\m{R}_n|$ grow exponentially with $n$. In spite of this exponential growth, the generator and verifier can generate, in a reasonable run time of a few hours, an $(n,q)$-complete ECC set\xspace for values of $n$ and $q$ that are sufficiently large to be useful for circuit optimization. The growth in the number of transformations significantly affects the optimizer. For Nam and IBM, our selected values of $n=6$ and $n=4$ result in a similar order of magnitude for $|\m{T}|$. 
For Rigetti, we use $n=2$, resulting in much smaller $\m{T}$. % This choice is related to the fact that circuits in the Rigetti gate set are larger by roughly an order of magnitude compared to Nam and IBM (compare ``Orig.'' in \Cref{tab:rigetti} with \Cref{tab:nam,tab:ibmq}; see discussion in \Cref{subsec:scalability}). We now evaluate the effectiveness of \textproc{RepGen}\xspace{} and the pruning techniques described in~\Cref{sec:pruning} for reducing the number of circuits Quartz\xspace must consider (which is closely correlated with the number of resulting transformations). To evaluate the relative contribution of each technique, \Cref{tab:eval_generator} reports the number of circuits considered when applying: (i)~\textproc{RepGen}\xspace{} without additional pruning, (ii)~\textproc{RepGen}\xspace{} combined with ECC\xspace simplification, and (iii)~\textproc{RepGen}\xspace{} combined with both ECC\xspace simplification and common subcircuit pruning; and compares each of these to a brute force approach of generating all possible circuits with up to $q$ qubits and $n$ gates. Both \textproc{RepGen}\xspace{} and the pruning techniques play an important role in eliminating redundant circuits while preserving $(n,q)$-completeness. Ultimately, \textproc{RepGen}\xspace{} and the pruning techniques reduce the number of transformations the optimizer must consider by one to three orders of magnitude.% \subsection{Analyzing Quartz\xspace's Circuit Optimizer} \label{subsec:scalability} We now examine Quartz\xspace's circuit optimizer when using an $(n,q)$-complete ECC\xspace set for varying values of $n$ and $q$. For this study we focus on the Nam gate set, and compare different values for $n$ and $q$ by the \emph{optimization effectiveness} they yield, defined as the reduction in geometric mean gate count over all circuits (as in the bottom line of \Cref{tab:nam}). 
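To make the metric concrete, the following is a minimal Python sketch of the effectiveness computation; the gate counts in the example are hypothetical and not the benchmark numbers from the tables:

```python
import math

def effectiveness(original_counts, optimized_counts):
    """Reduction in geometric mean gate count over a benchmark suite.

    Equivalently 1 - geomean(optimized)/geomean(original), since the
    geometric mean is multiplicative over per-circuit ratios.
    """
    ratios = [opt / orig for orig, opt in zip(original_counts, optimized_counts)]
    geo_mean_ratio = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return 1.0 - geo_mean_ratio

# Hypothetical gate counts for three circuits (original vs. optimized).
reduction = effectiveness([100, 200, 400], [80, 150, 300])  # ≈ 0.234
```

A value of 0.287 under this definition corresponds to the 28.7\% end-to-end reduction reported in the bottom line of \Cref{tab:nam}.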
For \texttt{mod5\_4}, when $q=3$ and $3 \leq n \leq 7$, we use the median of 7 runs % due to the variability discussed in \Cref{sec:eval:results}. As we increase $n$ and $q$ we expect Quartz\xspace's optimizer to: (i)~be able to reach more optimized circuits, and (ii)~require more time per search iteration. Both of these follow from the fact that increasing $n$ and $q$ yields more transformations. Under a fixed search time budget, we expect the increased cost of search iterations to reduce the positive impact of larger $n$'s and $q$'s. Because each iteration (\Cref{alg:search}) considers a candidate circuit $C$ and computes $\Call{Apply}{C,T}$ for each transformation $T\in\m{T}$, the cost per iteration scales linearly with the number of transformations $|\m{T}|$. Since $|\m{T}|$ varies dramatically as $n$ and $q$ change,\footnote{ \ifarxiv For example: with $q=3$, $|\m{T}|=196$ for $n=3$ and $|\m{T}|=56,152$ for $n=6$; with $q=4$, $|\m{T}|=208$ for $n=3$ and $|\m{T}|=273,532$ for $n=6$ (see \Cref{tab:complexity:full}). We were unable to generate a $(7,4)$-complete ECC set using \SI{512}{GB} of RAM. \else For example: with $q=3$, $|\m{T}|=196$ for $n=3$ and $|\m{T}|=56,152$ for $n=6$ (\Cref{tab:complexity}); with $q=4$, $|\m{T}|=208$ for $n=3$ and $|\m{T}|=273,532$ for $n=6$~\cite{extended}. We were unable to generate a $(7,4)$-complete ECC set using \SI{512}{GB} of RAM. \fi } we expect the second effect (slowing down the search) to be significant, especially for large circuits which typically require more search iterations (and additionally increase \textproc{Apply}'s running time). \begin{figure} \centering \includegraphics[width=\evalfigfrac\linewidth]{figures/scalability_plot.pdf} \vspace{0em} \caption{ Optimization effectiveness with $(n,q)$-complete ECC sets\xspace for varying $n$ and $q$ after a 24-hour search timeout. % For $n=0$ there are no transformations and the results match the ``Quartz\xspace Preprocess'' column of \Cref{tab:nam}. 
} \label{fig:scalability} \end{figure} \Cref{fig:scalability} shows optimization effectiveness (reduction in geometric mean gate count) for varying values of $n$ and $q$, under a search timeout of 24 hours. The figure supports the tradeoff discussed above. % Using too small values for $n$ and $q$ results in low effectiveness, and as we increase $n$ or $q$ effectiveness increases but then starts decreasing, as the negative impact of the large number of transformations starts outweighing their benefit. \ifarxiv (See \Cref{tab:complexity:full} for $|\m{T}|$ in each configuration.) \else (See~\cite{extended} for details about $|\m{T}|$ for each configuration.) \fi As expected, the optimal setting for $n$ and $q$ generally varies across circuits---smaller circuits tend to be better optimized with larger values of \ifarxiv $n$ (\Cref{tab:nam:full}). \else $n$~\cite{extended}. \fi Still, \Cref{fig:scalability} shows that there are several settings that yield good overall results: $3 \le n \le 6$ for $q = 3$, and $3 \le n \le 4$ for $q = 4$.% \footnote{ Interestingly, $q=3;3 \le n \le 6$ cover the best optimization results for all circuits obtained among all configurations considered in \ifarxiv \Cref{fig:scalability} (\Cref{tab:nam:full}). \else \Cref{fig:scalability}~\cite{extended}. \fi } \begin{figure} \centering \includegraphics[width=\evalfigfrac\linewidth]{figures/time_plot.pdf} \vspace{0em} \caption{ Optimization effectiveness over time ($q{=}3$; $2{\le} n {\le} 7$). % For each time point, ``best'' computes the reduction in geometric mean gate count obtained by selecting the most effective value for $n$ at that time point for each circuit (i.e., different circuits may use different $n$'s, and the same circuit may use different $n$'s at different time points). } \label{fig:time} \end{figure} \Cref{fig:time} shows how the search time impacts optimization for different choices for $n$ (focusing on $q=3$). 
For each value of $n$, we observe a quick initial burst, followed by a gentle increase. At the end of the initial burst, effectiveness monotonically decreases as $n$ increases, for all $3 \le n \le 6$. As time progresses the gaps diminish and eventually the order is reversed: at around 21 hours $n=6$ surpasses $n=3$. The settings $n=2$ and $n=7$ yield poor effectiveness: $n=2$ does not contain an adequate number of transformations and quickly saturates the search time, while $n=7$ contains too many transformations and progresses too slowly. \Cref{fig:time} also shows the effectiveness of a hypothetical run constructed by taking the best setting \emph{for each circuit at each time}. This ``best'' curve considerably outperforms the others, because the best setting for $n$ varies across circuits with different sizes. \ifarxiv See \Cref{sec:extended} for more details, including plots akin to \Cref{fig:scalability} and \Cref{fig:time} for each circuit. \else See~\cite{extended} for more details, including plots akin to \Cref{fig:scalability} and \Cref{fig:time} for each circuit and a detailed results table. \fi \section{Circuit Generator} \label{sec:generator} Quartz\xspace builds an $(n,q)$-complete ECC set\xspace using the \textproc{RepGen}\xspace{} algorithm, developed in this section, which interleaves circuit generation and equivalence verification (see \Cref{fig:overview}). \subsection{The \textproc{RepGen}\xspace{} Algorithm} \label{sec:repgen} A straightforward way to generate an $(n, q)$-complete ECC set\xspace is to examine all circuits in $\mathcal{C}^{(n,q)}$, % but there are exponentially many such circuits. To tackle this challenge, \textproc{RepGen}\xspace{} uses {\em representative-based circuit generation}, which significantly reduces the number of circuits considered. 
The key idea is to % extend a $(j,q)$-complete ECC set\xspace to a $(j+1,q)$-complete one by selecting a representative circuit for each ECC\xspace and using these representatives to build larger circuits. \input{alg-regen} \begin{sloppypar} \paragraph{Sequence representation for circuits.} \textproc{RepGen}\xspace{} represents a circuit as a sequence of instructions that reflects a topological ordering of its gates (i.e., respecting dependencies). E.g., a possible sequence for the circuit in \Cref{fig:our_circuit} is: {\tt% U1 \nolinebreak $\theta$ \nolinebreak 0; \nolinebreak U2 \nolinebreak $2\phi$ \nolinebreak $\lambda{+}\delta$ \nolinebreak 0; \nolinebreak H \nolinebreak 1; \nolinebreak X \nolinebreak 2; \nolinebreak CNOT \nolinebreak 1 \nolinebreak 2; \nolinebreak CNOT \nolinebreak 0 \nolinebreak 1% }.\linebreak We write $()$ for the empty sequence and $L.(g\ \iota)$ for appending gate $g\in\m{G}$ with arguments $\iota$ (parameter expressions and qubit indices) to sequence $L$; e.g., for the second instruction above $g=\texttt{U2}$ and $\iota = (2\phi, \lambda{+}\delta, 0)$. Different sequences may represent the same circuit; e.g. the \texttt{U2} and \texttt{X} instructions above can be swapped. \textproc{RepGen}\xspace eliminates some of this representation redundancy by the same mechanism it uses for avoiding redundancy due to circuit equivalence. \end{sloppypar} We use $|L|$, $\dropfirst{L}$, and $\droplast{L}$ to denote the number of gates in a circuit $L$, its suffix with $|L|-1$ gates, and its prefix with $|L|-1$ gates. Note that each of the latter two represents a subcircuit. We fix an arbitrary total order over single-gate circuits (i.e., $\m{C}^{(1,q)}$), and lift it to a total order of circuits (i.e., sequences). \begin{definition}[Circuit Precedence] We say $L_1$ \emph{precedes} $L_2$, written $L_1 \prec L_2$, if $|L_1| < |L_2|$, or if $|L_1| = |L_2|$ and $L_1$ is lexicographically smaller than $L_2$. 
\label{def:order} \end{definition} \Cref{alg1} lists the \textproc{RepGen}\xspace{} algorithm, which proceeds in rounds and maintains a database $\m{D}$ of circuits grouped by fingerprints (defined below), a $(j,q)$-complete ECC set\xspace $\m{S}_j$, and a \emph{representative set} (set of representatives) $\m{R}_j$ for ECCs in $\m{S}_j$. The $j$-th round produces a $(j, q)$-complete ECC set\xspace from the $(j-1, q)$-complete ECC set\xspace generated in the previous round. Each round proceeds in two steps. \paragraph{Step 1: Constructing circuits.} Before the first round, the initial ECC set\xspace is $\m{S}_0 = \emptyset$ and the representative set is $\m{R}_0 = \{()\}$, i.e., a singleton set consisting of the empty circuit (over $q$ qubits). In the $j$-th round, \textproc{RepGen}\xspace{} uses the $(j{-}1,q)$-complete ECC set\xspace $\m{S}_{j-1}$ and its representative set $\m{R}_{j-1}$ computed previously, and constructs possible size-$j$ circuits by appending a single gate to each circuit in $\m{R}_{j-1}$ with size $j-1$. \textproc{RepGen}\xspace{} enumerates all possible gates $g$ and arguments $\iota$ according to $\m{G}$ and $\Sigma$. % For each generated circuit $L'$, \textproc{RepGen}\xspace checks if $\dropfirst{L'}$ is a representative from the previous round. If so, \textproc{RepGen}\xspace concludes that $L'$ extends existing representatives and must be considered in generating $\m{S}_{j}$. Otherwise, circuit $L'$ is considered redundant and ignored. % We prove the correctness of \textproc{RepGen}\xspace{} in \Cref{thm_representative}. To identify potentially equivalent circuits, \textproc{RepGen}\xspace computes circuit \emph{fingerprints}, and uses them as keys for storing circuits in the hash table $\m{D}$. The fingerprint is computed using fixed, randomly selected parameter values and quantum states. 
Recall that for a circuit $C$ over $q$ qubits and $m$ parameters, and parameter values $\vec{p}\in\mathbb{R}^m$, $\sem{C}(\vec{p})$ is a (concrete) $2^q \times 2^q$ complex matrix. The fingerprint of a circuit $C$ is \begin{equation} \label{eqn:fingerprint} \Call{FingerPrint}{C} = \left| \bra{\psi_0} \sem{C}(\vec{p_0}) \ket{\psi_1} \right|, \end{equation} where the parameter values $\vec{p_0}$ and quantum states $\ket{\psi_0}$ and $\ket{\psi_1}$ are fixed and randomly selected, and $|\cdot|$ denotes modulus of a complex number. With infinite precision, \cref{eqn:equiv,eqn:fingerprint} ensure that equivalent circuits have identical fingerprints. This section presents and analyzes \textproc{RepGen}\xspace{} assuming infinite precision, while \Cref{sec:impldetails} presents an adaptation for finite-precision floating-point arithmetic. \paragraph{Step 2: Examining circuits with equal fingerprints.} In Step 1, \textproc{RepGen}\xspace generates circuits and stores them in the hash table $\m{D}$ grouped by their fingerprints. In Step 2, \textproc{RepGen}\xspace{} partitions each set $\gamma = \m{D}[f]$ of potentially equivalent circuits into a verified ECC set\xspace using the function \Call{Eccify}{}. % \Call{Eccify}{} considers each circuit in $\gamma$ and checks if it is equivalent to some existing ECC\xspace in $\gamma$ by querying the verifier (Section~\ref{sec:verifier}); the circuit is then either added to the matching ECC\xspace, or becomes a new singleton ECC\xspace. \textproc{RepGen}\xspace{} then combines the ECC sets\xspace for each $\gamma$ to get the ECC set\xspace $\m{S}_j$. Having constructed an ECC set\xspace $\m{S}_j$, \textproc{RepGen}\xspace{} computes $\m{S}_j$'s representative set $\m{R}_j$, which is the set of representatives of the ECCs\xspace in $\m{S}_j$ (\Cref{alg1} line~\ref{alg1:rn}). The representative of an ECC\xspace is its $\prec$-minimum circuit (\Cref{def:order}). 
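As an illustration of the fingerprint mechanism, the sketch below evaluates \cref{eqn:fingerprint} on concrete matrices (this is a simplified stand-in, not Quartz\xspace's implementation; the example gates, states, and tolerance are our own): two equivalent circuits, such as $Z$ and $H X H$ on one qubit, receive equal fingerprints up to floating-point error.

```python
import random

def fingerprint(unitary, psi0, psi1):
    """|<psi0| U |psi1>| for a concrete 2^q x 2^q matrix U, with the
    circuit's parameters assumed already substituted by fixed values."""
    # U |psi1>
    u_psi1 = [sum(row[j] * psi1[j] for j in range(len(psi1))) for row in unitary]
    # <psi0| (U |psi1>): conjugate the bra entries.
    amp = sum(psi0[i].conjugate() * u_psi1[i] for i in range(len(psi0)))
    return abs(amp)

def matmul(a, b):
    # Naive dense matrix product, enough for this 2x2 example.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Z and H·X·H denote the same unitary, hence the same fingerprint.
Z = [[1, 0], [0, -1]]
H = [[1 / 2 ** 0.5, 1 / 2 ** 0.5], [1 / 2 ** 0.5, -1 / 2 ** 0.5]]
X = [[0, 1], [1, 0]]
HXH = matmul(H, matmul(X, H))

random.seed(0)  # fixed, randomly selected states
psi0 = [complex(random.random(), random.random()) for _ in range(2)]
psi1 = [complex(random.random(), random.random()) for _ in range(2)]
assert abs(fingerprint(Z, psi0, psi1) - fingerprint(HXH, psi0, psi1)) < 1e-9
```

In the actual system the states are not renormalized fingerprint-by-fingerprint; only equality of fingerprints matters, since equal semantics implies equal fingerprints under infinite precision.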
During the operation of \textproc{RepGen}\xspace{}, singleton ECCs\xspace are important: their representatives must be considered when generating circuits in the next round, and they may grow to non-singleton ECCs\xspace as more circuits are generated. However, singleton ECCs\xspace in $\m{S}_n$ ultimately yield no transformations, so we remove them from the result of the \textproc{RepGen}\xspace{} algorithm (line~\ref{alg1:return}). \subsection{Correctness of \textproc{RepGen}\xspace{}} \label{subsec:representative_pruning} When constructing circuits of size $j$ (in round $j$), \textproc{RepGen}\xspace only considers circuits $L'$ that extend previously constructed representatives, i.e., $\droplast{L'}\in\m{R}_{j-1}$, and only when the extension leads to $\dropfirst{L'}\in\m{R}_{j-1}$. In this section we prove that in spite of that, \textproc{RepGen}\xspace{} always generates an $(n,q)$-complete ECC set\xspace. Below we use $\m{D}_j$ to denote the value of $\m{D}$ after Step 1 in \textproc{RepGen}\xspace{}'s $j$-th round (i.e., at \Cref{alg1} line~\ref{alg1:eccfy}) and $\m{D}_0$ for the initial value of $\m{D}$ (line~\ref{alg1:fingerprint}). \begin{lemma} \label{lem:alg1inv} \Cref{alg1} maintains the following invariants (for any $1 \leq j \leq n$, and writing $\sqsubseteq$ for Hoare ordering, i.e., $\m{X \sqsubseteq Y} \equiv \forall X \in \m{X}.\, \exists Y\in\m{Y}.\, X \subseteq Y$): \begin{enumerate} \item\label{lem:alg1inv:monotone} $\m{D}_{j-1} \sqsubseteq \m{D}_{j}$, $\m{S}_{j-1} \sqsubseteq \m{S}_{j}$, and $\m{R}_{j-1} \subseteq \m{R}_{j}$; \item\label{lem:alg1inv:d} for any $L\in\m{C}^{(n,q)}$, $L\in\bigcup\m{D}_j$ iff $|L| \leq j$ and either $L=()$ or $\dropfirst{L},\droplast{L}\in\m{R}_{j-1}$; and \item\label{lem:alg1inv:r} for any $L\in\m{C}^{(n,q)}$, $L\in\m{R}_{j}$ iff $|L| \leq j$ and $L$ does not have a $\prec$-smaller equivalent circuit in $\m{C}^{(n,q)}$. 
\end{enumerate} \end{lemma} \begin{proof} For \cref{lem:alg1inv:monotone}, $\m{D}_{j-1} \sqsubseteq \m{D}_{j}$ and $\m{S}_{j-1} \sqsubseteq \m{S}_{j}$ follow from the monotonic updating of $\m{D}$ (i.e., circuits are only added), and from monotonicity (w.r.t. Hoare ordering) of \Call{Eccify}{}. To see that $\m{R}_{j-1} \subseteq \m{R}_{j}$, observe that in the $j$-th round, all circuits constructed are of size $j$, so any circuit added to an existing ECC\xspace has more gates than its representative in $\m{R}_{j-1}$, which will therefore remain its representative in $\m{R}_{j}$ (recall that if $|L|<|L'|$ then $L \prec L'$). We prove \cref{lem:alg1inv:d} by induction on $j$. Both the base case ($j=1$) and the induction step follow from \Cref{alg1} lines~\ref{alg1:enum_circuit}--\ref{alg1:add_circuit}, combined with \cref{lem:alg1inv:monotone} and either the definition of $\m{D}_0$ or the induction hypothesis. We prove \cref{lem:alg1inv:r} by induction on $j$. Slightly generalizing from the statement above, we take the base case to be $j=0$, which follows from line~\ref{alg1:initr}. In the induction step, for sufficiency, consider a circuit $L$, $1 \leq |L| \leq j$, with no $\prec$-smaller equivalent circuit. (The $|L|=0$ case follows from line~\ref{alg1:initr} and \cref{lem:alg1inv:monotone}.) Both $\dropfirst{L}$ and $\droplast{L}$ are of size $\leq j-1$ with no $\prec$-smaller equivalent circuits (if either had a $\prec$-smaller equivalent circuit, we could use it to construct a $\prec$-smaller equivalent circuit for $L$). By the induction hypothesis, $\dropfirst{L},\droplast{L}\in\m{R}_{j-1}$, so by \cref{lem:alg1inv:d}, $L \in \bigcup\m{D}_j$. By lines~\ref{alg1:eccfy}--\ref{alg1:rn}, $\m{R}_j$ includes the $\prec$-minimal element of each class of equivalent circuits in $\bigcup\m{D}$, so it must include $L$. 
Necessity follows from sufficiency, combined with the fact that two circuits in $\m{R}_j$ cannot be equivalent and that $\m{R}_j$ does not contain circuits of size greater than $j$. \end{proof} \begin{theorem}[\textproc{RepGen}\xspace] \label{thm_representative} In \Cref{alg1}, every $\m{S}_j$ ($0 \leq j \leq n$) is a $(j,q)$-complete ECC set\xspace, and the algorithm returns an $(n,q)$-complete ECC set\xspace. \end{theorem} \begin{proof} We proceed by contradiction. Let $j$ be the smallest value such that $\m{S}_j$ is not $(j,q)$-complete. We must have $j>0$, with $\m{S}_{j-1}$ $(j{-}1,q)$-complete, and by \Cref{lem:alg1inv} \cref{lem:alg1inv:monotone} $\m{S}_{j}$ is also $(j{-}1,q)$-complete. (As $\m{S}_{j-1} \sqsubseteq \m{S}_{j}$, $\m{S}_{j}$ only includes more transformations.) Let $(L,L')$ be the minimal (under the pairwise lexicographic lifting of $\prec$) pair of equivalent circuits of size $\leq j$ that cannot be rewritten to each other using transformations included in $\m{S}_j$. We must have $|L'|=j$, since otherwise $|L|,|L'|\leq j-1$, but $\m{S}_{j}$ is $(j{-}1,q)$-complete. If $\dropfirst{L'}$ has a $\prec$-smaller equivalent circuit, then $\m{S}_{j}$ can rewrite $L'$ to a $\prec$-smaller equivalent circuit, which it must also not be able to rewrite to $L$, contradicting the minimality of $L'$. Therefore, $\dropfirst{L'}$ does not have a $\prec$-smaller equivalent circuit; the same argument works for $\droplast{L'}$. Hence, by \Cref{lem:alg1inv} \cref{lem:alg1inv:r} we get $\dropfirst{L'},\droplast{L'}\in\m{R}_{j-1}$, and by \Cref{lem:alg1inv} \cref{lem:alg1inv:d}, $L'\in\bigcup\m{D}_j$. But if $L'\in\bigcup\m{D}_j$ then either $\m{S}_j$ includes a transformation that rewrites $L'$ to a $\prec$-smaller equivalent circuit, which it cannot rewrite to $L$, contradicting the minimality of $L'$; or $L'$ does not have a $\prec$-smaller equivalent circuit, contradicting the definition of the pair $(L,L')$.
\end{proof} \subsection{Complexity of \textproc{RepGen}\xspace} \label{subsec:complexity} We analyze the time complexity of \textproc{RepGen}\xspace (its space complexity is the same). First, observe that the number of single-gate circuits $|\m{C}^{(1,q)}|-1$, which is determined by the gate set $\m{G}$, parameter-expression specification $\Sigma$, number of qubits $q$, and parameters $m$, provides an upper bound for the number of single-gate extensions of any existing circuit. (The $-1$ is due to $()\in\m{C}^{(1,q)}.)$ This \emph{characteristic} of $\m{G}$, $\Sigma$, $q$, and $m$, denoted $\mathrm{ch}(\m{G}, \Sigma, q, m) = |\m{C}^{(1,q)}|-1$, bounds the number of iterations of the loops in \Cref{alg1} lines~\ref{alg1:enum_gate} and~\ref{alg1:enum_input} in each round. (The bound may not be tight as $\Sigma$ may impose more restrictions, e.g., single use of parameters.) While $\sum_{j=0}^{n}\mathrm{ch}(\m{G}, \Sigma, q, m)^j$ provides a trivial upper bound on the complexity of \textproc{RepGen}\xspace{}, the following theorem shows that \textproc{RepGen}\xspace{}'s running time can be bounded using the number of \emph{resulting representatives} $|\m{R}_n|$. In practice, this number is significantly smaller than $\mathrm{ch}(\m{G}, \Sigma, q, m)^n$ (see \Cref{tab:complexity}). \begin{theorem}[Complexity of \textproc{RepGen}\xspace{}] \label{thm:complexity} The time complexity of \Cref{alg1}, excluding the verification part (line~\ref{alg1:eccfy}), is \begin{equation*} O\big(|\m{R}_{n\xspace}| \cdot \mathrm{ch}(\m{G}, \Sigma, q, m) \cdot n\xspace\big). \end{equation*} \end{theorem} \begin{proof} The $j$-th round of \Cref{alg1} considers circuits from $\m{R}_{j-1}$ with size $j-1$, and for each one it considers at most $\mathrm{ch}(\m{G}, \Sigma, q, m)$ possible extensions. It takes $O(n\xspace)$ to construct a new circuit and add it to $\m{D}$. (We assume $O(1)$ complexity for hash table insert and lookup, i.e., we use average and amortized complexity.) 
Summing over all rounds of \Cref{alg1}, and recalling that $\m{R}_{j}\subseteq\m{R}_n$ (\Cref{lem:alg1inv} \cref{lem:alg1inv:monotone}): % \begin{equation*} \begin{aligned} &\sum_{j=1}^{n\xspace} |\{L \in \m{R}_{j-1} : |L| = j-1\}| \cdot \mathrm{ch}(\m{G}, \Sigma, q, m) \cdot n\xspace \\ \leq\ & |\m{R}_{n\xspace}| \cdot \mathrm{ch}(\m{G}, \Sigma, q, m) \cdot n\xspace. \end{aligned} \end{equation*} \end{proof} Note that if $\m{G}$, $\Sigma$, $q$, and $m$ are considered constant then the time complexity of \Cref{alg1} is $O(|\m{R}_{n\xspace}| \cdot n\xspace)$. \Cref{tab:complexity} lists some empirical $\mathrm{ch}(\m{G}, \Sigma, q, m)$ and $|\m{R}_n|$ values. \section{Introduction} \label{sec:intro} Quantum computing comes in many shapes and forms. There are over a dozen proposals for realizing quantum computing in practice, and nearly all of these proposals support different kinds of quantum operations, i.e., instruction set architectures (ISAs). The increasing diversity in quantum processors makes it challenging to design optimizing compilers for quantum programs, since the compilers must consider a variety of ISAs and carry out optimizations specific to different ISAs. To reduce the execution cost of a quantum circuit, the most common form of optimization is {\em circuit transformations} that substitute a subcircuit matching a specific pattern with a functionally equivalent new subcircuit with improved performance (e.g., using fewer quantum gates). Existing quantum compilers generally rely on circuit transformations manually designed by experts and applied greedily. For example, Qiskit~\cite{qiskit} and t$|$ket$\rangle$\xspace~\cite{tket} use greedy rule-based strategies to optimize a quantum circuit and perform circuit transformations whenever applicable. \textproc{voqc}\xspace~\cite{VOQC} formally verifies circuit transformations but still requires users to specify them manually.
Although rule-based transformations can reduce the cost of a quantum circuit, they have two key limitations. First, because existing optimizers rely on domain experts to design transformations, they require significant human effort and may also miss subtle optimizations that are hard to discover manually, resulting in sub-optimal performance. Second, circuit transformations designed for one quantum device do not directly apply to other devices with different ISAs, which is problematic in the emerging diverse quantum computing landscape. For example, IBMQX5~\cite{dumitrescu2018cloud} supports the \texttt{$U_1$}\xspace, \texttt{$U_2$}\xspace, \texttt{$U_3$}\xspace and \texttt{$CNOT$}\xspace gates, while Rigetti Agave~\cite{rigetti_agave} supports the $R_x(\pm\frac{\pi}{2})$, $R_x(\pi)$, \texttt{$R_z(\lambda)$}\xspace, and \texttt{$CZ$}\xspace gates. As a result, circuit transformations tailored for IBMQX5 cannot optimize circuits on Rigetti Agave, and vice versa. Recently, Quanto~\cite{quanto} proposed to automatically discover transformations by computing concrete matrix representations of circuits. Its main restriction is that it does not discover symbolic transformations, which are needed to deal with common \emph{parametric} quantum gates in a general way. This paper presents Quartz\xspace, a quantum circuit superoptimizer that automatically generates and verifies symbolic circuit transformations for arbitrary gate sets, including parametric gates. Quartz\xspace provides two key advantages over existing quantum circuit optimizers. First, for a given set of gates, Quartz\xspace generates symbolic circuit transformations and formally verifies their correctness in a fully {\em automated} way, without any manual effort to design or implement transformations. 
Second, Quartz\xspace explores a more comprehensive set of circuit transformations by discovering {\em all} possible transformations up to a certain size, outperforming existing optimizers with manually designed transformations. \paragraph{ECC sets.} We introduce {\em equivalent circuit classes} (ECCs\xspace) as a compact way to represent circuit transformations. Each ECC\xspace is a set of functionally equivalent circuits, and two circuits from an ECC\xspace form a valid transformation. We say that a transformation is \emph{subsumed} by an {\em ECC set\xspace} (a set of ECCs\xspace) if the transformation can be decomposed into a sequence of transformations, each of which is a pair of circuits from the same ECC\xspace in the ECC set\xspace. We use {\em $(n,q)$-completeness} to assess the comprehensiveness of an ECC set\xspace---an ECC set\xspace is $(n,q)$-complete if it subsumes {\em all} valid transformations between circuits with at most $n$ gates and $q$ qubits. \begin{figure} \centering \includegraphics[width=\introfigfrac\linewidth]{figures/overview9-crop.pdf} \vspace{0em} \caption{Quartz\xspace overview. } \label{fig:overview} \end{figure} \begin{sloppypar} \paragraph{Overview.} \Cref{fig:overview} shows an overview of Quartz\xspace, which uses an {\em interleaving} approach: it iteratively generates candidate circuits, eliminates redundancy, and verifies equivalences. In the $j$-th iteration, Quartz\xspace generates a $(j,q)$-complete ECC set\xspace based on the $(j{-}1,q)$-complete ECC set\xspace from the previous iteration. The generated ECC set\xspace may contain redundant transformations. % We introduce \textproc{RepGen}\xspace{}, a representative-based circuit generation algorithm that uses a $(j{-}1,q)$-complete ECC set\xspace to generate circuits for a $(j,q)$-complete ECC set\xspace with fewer redundancies. 
The circuits are sent to the {\em circuit equivalence verifier}, which formally verifies equivalence between circuits and produces a $(j,q)$-complete ECC set\xspace. After generating an $(n,q)$-complete ECC set\xspace, Quartz\xspace employs several pruning techniques to further eliminate redundancies. % Finally, Quartz\xspace's {\em circuit optimizer} applies the discovered transformations to optimize an input circuit. \end{sloppypar} \paragraph{Circuit Generator} Given a gate set and a circuit size $n$, Quartz\xspace's {\em circuit generator} generates candidate circuits of size at most $n$ using the \textproc{RepGen}\xspace{} algorithm, which avoids generating all possible circuits (of which there are exponentially many) while ensuring $(n,q)$-completeness. To this end, \textproc{RepGen}\xspace{} iteratively constructs ECC sets\xspace, from smaller to larger. % For each ECC\xspace, \textproc{RepGen}\xspace{} selects a \emph{representative circuit} and constructs larger circuits by extending these representatives. To discover equivalences between circuits, \textproc{RepGen}\xspace{} uses random inputs to assign a {\em fingerprint} (i.e., a hash) to each circuit and checks only the circuits with the same fingerprint. We prove an upper bound on the running time of \textproc{RepGen}\xspace{} in terms of the number of representatives generated. For the gate sets considered in our evaluation, \textproc{RepGen}\xspace{} reduces the number of circuits in an ECC set\xspace by one to three orders of magnitude while maintaining $(n,q)$-completeness. \begin{sloppypar} \paragraph{Circuit Equivalence Verifier} Quartz\xspace's {\em circuit equivalence verifier} checks if two potentially equivalent circuits are indeed functionally equivalent. A major challenge is dealing with gates that take one or multiple parameters (e.g., \texttt{$U_1$}\xspace, \texttt{$U_2$}\xspace, and \texttt{$U_3$}\xspace in IBMQX5, and $R_z$ in Rigetti Agave).
For candidate equivalent circuits, Quartz\xspace checks whether they are functionally equivalent for % arbitrary combinations of parameter assignments and quantum states. To this end, Quartz\xspace computes symbolic matrix representations of the circuits. The resulting verification problem involves trigonometric functions and, in the general case, a quantifier alternation; Quartz\xspace soundly eliminates both and reduces circuit equivalence checking to SMT solving for quantifier-free formulas over the theory of nonlinear real arithmetic. The resulting SMT queries are efficiently solved by the Z3~\cite{z3} SMT solver. \end{sloppypar} \paragraph{Circuit Pruning} Having generated an $(n, q)$-complete ECC set\xspace, Quartz\xspace optimizes circuits by applying the transformations specified by the ECC set\xspace. To improve the efficiency of this optimization step, described next, Quartz\xspace applies several pruning techniques to eliminate redundant transformations. \paragraph{Circuit Optimizer} Quartz\xspace's {\em circuit optimizer} uses a cost-based backtracking search algorithm adapted from TASO~\cite{jia2019taso} to apply the verified transformations. The search is guided by a cost model that compares the performance of different candidate circuits (in our experiments the cost is given by the number of gates). Quartz\xspace targets the \emph{logical optimization} stage in quantum circuit compilation. That is, Quartz\xspace operates before {\em qubit mapping}, where logical qubits are mapped to physical qubits while respecting hardware constraints~\cite{dhj+magic-2018,wdd+tilt-2021}. \paragraph{Evaluation} Our evaluation on three gate sets derived from existing quantum processors shows that Quartz\xspace can generate and verify circuit transformations for different gate sets in under 30 minutes (using 128 cores). For logical circuit optimization, Quartz\xspace matches and often outperforms existing optimizers.
On a benchmark of 26 circuits, Quartz\xspace obtains average gate count reductions of 29\%, 30\%, and 49\% for the Nam, IBM, and Rigetti gate sets; the corresponding reductions by existing optimizers are 27\%, 23\%, and 39\%. \section{Circuit Optimizer} \label{sec:optimizer} \begin{figure} \centering \includegraphics[scale=\pptscale]{figures/graph_representation-crop.pdf} \caption{Graph representation for \Cref{fig:conventional_circuit}'s circuit. The green box (subcircuit, also convex subgraph) and red dashed area (not a subcircuit, also non-convex subgraph) match those of \Cref{fig:conventional_circuit}.} \label{fig:graph_representation} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/cnot_transformations3-crop.pdf} \vspace{0em} \caption{A transformation sequence applied by Quartz\xspace that reduces the total gate count in the {\tt gf2\^{}4\_mult} circuit by swapping the control and target qubits of three $CNOT$ gates. Note that the first three transformations do not reduce gate count. } \label{fig:cnot_transformations} \end{figure*} Quartz\xspace's {\em circuit optimizer} applies the verified transformations generated by the generator to find an optimized equivalent circuit for a given input circuit (see \Cref{fig:overview}). A key step is computing $\textproc{Apply}(C, T)$, the set of circuits that can be obtained by applying transformation % $T=(C_T,C_R)$ to circuit $C$. This involves finding all possible ways to match $C_T$ with a subcircuit of $C$. Quartz\xspace's optimizer uses a \emph{graph representation for circuits}, explained below, to implement this operation. 
In the graph representation, subcircuits correspond to convex subgraphs,\footnote{For a graph $G$, $G'$ is a convex subgraph of $G$ if for any two vertices $u$ and $v$ in $G'$, every path in $G$ from $u$ to $v$ is also contained in $G'$.} and Quartz\xspace adapts the graph-matching procedure from TASO~\cite{jia2019taso} to find all matches between $C_T$ and a convex subgraph of $C$. In the graph representation, a circuit $C$ over $q$ qubits is represented as a directed graph $G$, where each gate over $d$ qubits is a vertex with in- and out-degree $d$. Edges are labeled to distinguish between qubits in multi-qubit gates (e.g., the control and target qubits of a $CNOT$ gate). $G$ also includes $q$ sources and sinks, one for each qubit. % \Cref{fig:graph_representation} illustrates the graph representation of \Cref{fig:conventional_circuit}'s circuit. As the figure also illustrates, subcircuits correspond to convex subgraphs. The optimizer first converts an $(n,q)$-complete ECC set\xspace into a set of transformations (in the graph representation). For each ECC\xspace with $x$ equivalent circuits $C_1,\ldots,C_x$, the optimizer considers a pair of transformations between the representative and each other circuit. For example, if $C_1$ is the representative circuit in the ECC\xspace, then the optimizer considers transformations $C_1\rightarrow C_i$ and $C_i\rightarrow C_1$ for $1<i\leq x$. These $2(x - 1)$ transformations guarantee that any two circuits from the same ECC\xspace are reachable from each other. To optimize an input circuit using the above transformations, the optimizer uses a {\em cost-based backtracking search} algorithm adapted from TASO~\cite{jia2019metaflow, jia2019taso}. The search is guided by a cost function $\Call{Cost}{\cdot}$ that maps circuits to real numbers. In our evaluation, the cost is given by the number of gates in a circuit, but other cost functions are possible. % \begin{algorithm}[t] \caption{Cost-Based Backtracking Search Algorithm. 
} \label{alg:search} { \small \begin{algorithmic}[1] \State {\bf Inputs:} Verified transformations $\m{T}$, a cost model $\Call{Cost}{\cdot}$, a hyper-parameter $\gamma$, and an input circuit $C_{in}$. \State {\bf Output:} an optimized circuit $C_{best}$ \State {\em // $\m{Q}$ is a priority queue of circuits sorted by their $\Call{Cost}{\cdot}$.} \State $\m{Q} = \{C_{in}\}$ \State $C_{best} = C_{in}$ \State $\m{D}_{seen} = \{C_{in}\}$ \While{$\m{Q} \neq \emptyset$ and the search has not timed out} \State $C=\m{Q}$.dequeue() \If{$\Call{Cost}{C} < \Call{Cost}{C_{best}}$} \State $C_{best} = C$ \EndIf \For{each transformation $T \in \m{T}$} \For{each $C_{new} \in \Call{Apply}{C, T}\setminus \m{D}_{seen}$} \If{$\Call{Cost}{C_{new}} < \gamma \cdot \Call{Cost}{C_{best}}$} \State $\m{Q}$.enqueue($C_{new}$) \State $\m{D}_{seen} = \m{D}_{seen} \cup \{C_{new}\}$ \EndIf \EndFor \EndFor \EndWhile \State \Return $C_{best}$ \end{algorithmic}} \end{algorithm} \Cref{alg:search} shows the pseudocode of our search algorithm. To find an optimized circuit, candidate circuits are maintained in a priority queue $\m{Q}$. At each iteration, the lowest-cost circuit $C$ is dequeued, and Quartz\xspace applies all transformations to get equivalent new circuits $C_{new}$, which are enqueued into $\m{Q}$ for further exploration. Circuits considered in the past are ignored using $\m{D}_{seen}$. The search is controlled by a hyper-parameter $\gamma$. Quartz\xspace ignores candidate circuits whose cost is greater than $\gamma$ times the cost of the current best circuit $C_{best}$. The parameter $\gamma$ trades off between search time and the search's ability to avoid local minima. For $\gamma=1$, \Cref{alg:search} becomes a greedy search that only accepts transformations that strictly improve cost. On the other hand, a higher value for $\gamma$ enables application of transformations that do not immediately improve the cost, which may later lead to otherwise inaccessible optimization opportunities. 
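To make the search concrete, the loop of the backtracking algorithm can be sketched in a few lines of Python. This is an illustration only, not Quartz\xspace's implementation: ``circuits'' are plain strings, the cost is the string length, and the rewrite rules (commuting two adjacent gates and cancelling a \texttt{bb} pair) are hypothetical stand-ins for verified transformations.

```python
import heapq

def backtracking_search(c_in, transforms, apply_fn, cost, gamma=1.05, max_steps=10_000):
    """Cost-based backtracking search: keep a priority queue of candidate
    circuits ordered by cost, and admit a new circuit only if its cost is
    below gamma times the cost of the best circuit found so far."""
    counter = 0                              # tie-breaker for the heap
    queue = [(cost(c_in), counter, c_in)]
    seen = {c_in}
    best = c_in
    for _ in range(max_steps):
        if not queue:
            break
        c_cost, _, c = heapq.heappop(queue)
        if c_cost < cost(best):
            best = c
        for t in transforms:
            for c_new in apply_fn(c, t):
                # gamma > 1 admits temporarily worse circuits, which lets
                # the search escape local minima (gamma == 1 is greedy).
                if c_new not in seen and cost(c_new) < gamma * cost(best):
                    counter += 1
                    heapq.heappush(queue, (cost(c_new), counter, c_new))
                    seen.add(c_new)
    return best

# Hypothetical toy rules: commute adjacent gates, cancel a "bb" pair.
RULES = [("ab", "ba"), ("ba", "ab"), ("bb", "")]

def apply_rule(s, rule):
    """All circuits obtained by applying `rule` at one position of `s`."""
    lhs, rhs = rule
    out, i = set(), s.find(lhs)
    while i != -1:
        out.add(s[:i] + rhs + s[i + len(lhs):])
        i = s.find(lhs, i + 1)
    return out
```

On the toy input \texttt{abab}, a greedy search ($\gamma=1$) finds no strictly improving rule and returns the input unchanged, whereas $\gamma=1.5$ first commutes gates at no gain and then cancels \texttt{bb}, ending at \texttt{aa}.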
For example, \Cref{fig:cnot_transformations} depicts a sequence of five transformations that reduce the total gate count in the {\tt gf2\^{}4\_mult} (see \Cref{sec:eval:results}) circuit by four, via flipping three $CNOT$ gates; note that the first three transformations do not reduce the gate count at all. Quartz\xspace's circuit optimizer is designed to optimize circuits before \emph{mapping}. {\em Circuit mapping} converts a quantum circuit to an equivalent circuit that satisfies hardware constraints of a given quantum processor. These constraints include connectivity restrictions between qubits and the directions to perform multi-qubit operations. While transformations discovered by Quartz\xspace are also applicable to circuits after mapping, applying them naively may break hardware constraints. Therefore, we leave it as future work to build an optimizer for after-mapping circuits using Quartz\xspace's transformations. \section{Pruning Redundant Transformations} \label{sec:pruning} \begin{figure} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[scale=\pptscale]{figures/subst_example-crop.pdf} \caption{A circuit transformation.} \label{fig:subst_example} \end{subfigure} \\ \vspace{1em} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=\pptscale]{figures/redudant_subst-1-crop.pdf} \caption{Redundant circuit transformation (common subcircuit).} \label{fig:redundant_subst-1} \end{subfigure} \caption{Illustrating a redundant circuit transformation.} \label{fig:common_subcircuit} \end{figure} Quartz\xspace applies two pruning steps after \textproc{RepGen}\xspace{} generates an $(n,q)$-complete ECC set\xspace to further eliminate redundancy. These steps maintain $(n,q)$-completeness while reducing the number of transformations the optimizer needs to consider. \subsection{ECC\xspace Simplification} \label{subsec:circuit_simplification} All ECCs\xspace generated by \textproc{RepGen}\xspace{} have circuits with exactly $q$ qubits. 
For each ECC\xspace, a qubit (or a parameter) is {\em unused} if no circuit in the ECC\xspace operates on that qubit (or parameter). An {\em ECC\xspace simplification} pass removes all unused qubits and parameters from each ECC\xspace. After this pass, some ECCs\xspace may become identical, among which only one is kept. Because there is no specific order on parameters in a circuit, Quartz\xspace also finds ECCs\xspace that are identical under a permutation of the parameters and maintains only one of them. \subsection{Common Subcircuit Pruning} \label{subsec:common_subcircuit_pruning} Quartz\xspace eliminates transformations whose target and rewrite circuits include a common subcircuit at the beginning or the end. \Cref{fig:common_subcircuit} illustrates this \emph{common subcircuit pruning}; the common subcircuit is highlighted in grey. % \Cref{thm_common} explains why such transformations are always redundant. \begin{definition} A subset of gates $C'$ in a circuit $C$ is a \emph{subcircuit at the beginning} of $C$ if all gates in $C'$ are topologically before all gates in $C\setminus C'$. Similarly, a subset of gates $C'$ in a circuit $C$ is a \emph{subcircuit at the end} of $C$ if all gates in $C'$ are topologically after all gates in $C\setminus C'$. \end{definition} \begin{theorem} \label{thm_common} For any two quantum circuits $C_1$ and $C_2$ with a common subcircuit at the beginning or the end, if $C_1$ and $C_2$ are equivalent, then eliminating the common subcircuit from $C_1$ and $C_2$ generates two equivalent circuits. \end{theorem} \begin{proof} Recall that $\sem{C} (\vec{p})$ (for all $\vec{p}$---we elide $\vec{p}$ in this proof) denotes the matrix representation of circuit $C$. Let $\sem{C}^\dagger$ be the conjugate transpose of $\sem{C}$, and recall that as $\sem{C}$ is unitary, we have $\sem{C}^\dagger \sem{C} = \sem{C} \sem{C}^\dagger = I$. Let $C_{s}$ denote the common subcircuit shared by $C_1$ and $C_2$.
Let $C_1'$ and $C_2'$ represent the new circuits obtained by removing $C_{s}$ from $C_1$ and $C_2$. When $C_{s}$ is a common subcircuit at the beginning of $C_1$ and $C_2$, the matrix representations for the new circuits are $\sem{C_i'} = \sem{C_{s}}^\dagger \sem{C_i}$, where $i=1,2$. Equivalence between $C_1$ and $C_2$ implies the existence of $\beta$ such that $\sem{C_1} = e^{i\beta} \sem{C_2}$, therefore $\sem{C_1'} = e^{i\beta} \sem{C_2'}$. The case where $C_s$ is a common subcircuit at the end is similar. \end{proof} \Cref{thm_common} shows that every transformation pruned in common subcircuit pruning must be subsumed by other transformations (assuming initial $(n,q)$-completeness). Observe that if two circuits have a common subcircuit at the beginning (resp. the end), then they must have a common gate at the beginning (resp. the end). Therefore, to implement common subcircuit pruning, Quartz\xspace only checks for a single common gate at the beginning or the end. \section{Related Work} \label{sec:related} \paragraph{Quantum circuit compilation.} Several optimizing compilers for quantum circuits have been recently introduced and are being actively developed: Qiskit\xspace~\cite{qiskit} and t$|$ket$\rangle$\xspace~\cite{tket} support generic gate sets; Quilc\xspace~\cite{quilc} is tailored to Rigetti Agave quantum processors; \textproc{voqc}\xspace~\cite{VOQC} is formally verified in Coq. CertiQ~\cite{shi2020certiq} is a framework for writing and verifying Qiskit\xspace compiler passes. Nam et al.\xspace~\cite{nam2018automated} develop heuristics tailored to the $\{H, X, R_z, CNOT\}$ gate set. Unlike Quartz\xspace, these systems rely on quantum-computing experts to design, implement, and verify transformations. Quanto~\cite{quanto} automatically discovers transformations by computing concrete matrix representations of circuits. 
It supports parameters only by considering concrete values, and unlike Quartz\xspace, it does not discover or verify symbolic transformations, which are the source of many of the challenges Quartz\xspace deals with. Quanto uses floating-point matrix equality to identify equivalence between circuits, while Quartz uses a combination of fingerprinting, SMT-based verification, the \textproc{RepGen}\xspace{} algorithm, and other pruning techniques, which are needed since symbolic parameters greatly increase the number of possible circuits in the generation procedure. Different from the aforementioned quantum optimizers that consider circuit transformations, PyZX~\cite{pyzx} employs ZX-diagrams as an intermediate representation for quantum circuits and uses a small set of complete rewrite rules in ZX-calculus~\cite{hadzihasanovic18, jeandel18} to simplify ZX-diagrams, which are finally converted back into quantum circuits. While our approach builds on some of the techniques developed in prior work, Quartz\xspace is the first quantum circuit optimizer that can automatically generate and verify symbolic circuit transformations for arbitrary gate sets. \paragraph{Superoptimization.} Superoptimization is a compiler optimization technique originally designed to search for an optimal sequence of instructions for an input program~\cite{massalin1987}. Our approach to generating quantum circuit transformations by tracking equivalent classes of circuits is inspired by prior work in automatically generating peephole optimizations for the X86 instruction set~\cite{heule2016, bansal2006} and generating graph substitutions for tensor algebra~\cite{jia2019taso, tensat, wang2021pet}. TASO~\cite{jia2019taso} is a tensor algebra superoptimizer that optimizes computation graphs of deep neural networks using automatically generated graph substitutions. TENSAT~\cite{tensat} reuses the graph substitutions discovered by TASO and employs equality saturation for tensor graph superoptimization. 
While Quartz\xspace draws inspiration from TASO and uses a similar search procedure, it is significantly different from prior superoptimization works because it targets quantum computing, which leads to a different semantics (i.e., using complex matrices) as well as a different notion of program equivalence (i.e., up to a global phase). Verifying quantum circuit transformations therefore uses different techniques compared to other superoptimization contexts. Applying equality saturation as in TENSAT~\cite{tensat} for optimizing quantum circuits is an interesting avenue for future work. \section{Circuit Equivalence Verifier} \label{sec:verifier} Given two circuits $C_1$ and $C_2$ over $q$ qubits and $m$ parameters, the verifier checks if they are equivalent (i.e., up to a global phase). Recalling \cref{eqn:equiv}, that means checking if $ \forall \vec{p} \in \mathbb{R}^m. \; \exists \beta \in \mathbb{R}. \; \sem{C_1}(\vec{p}) = e^{i\beta} \sem{C_2}(\vec{p})$. Note that the equality here is between two $2^q \times 2^q$ complex matrices. There are two challenges in automatically checking \cref{eqn:equiv}. One is the quantifier alternation, which may be needed to account for global phase; the other is the use of trigonometric functions, which are common in quantum gates' matrix representations. For example, the $U_3$ gate supported by the IBM quantum processors has the following matrix representation: \begin{equation} \label{eqn:u3} \sem{U_3}(\theta, \phi,\lambda) = \begin{pmatrix} \cos(\frac{\theta}{2}) & -e^{i\lambda} \sin(\frac{\theta}{2})\\ e^{i\phi}\sin(\frac{\theta}{2}) & e^{i(\phi+\lambda)} \cos(\frac{\theta}{2}) \end{pmatrix}. \end{equation} While some SMT solvers support quantifiers and trigonometric functions~\cite{cvc4,DBLP:conf/cade/CimattiGIRS17}, our preliminary attempts showed they cannot directly prove \cref{eqn:equiv} for the circuit transformations generated by Quartz\xspace.
Our verification approach is therefore to reduce \cref{eqn:equiv} to a quantifier-free formula over nonlinear real arithmetic by eliminating both the quantification over $\beta$ and the trigonometric functions. The resulting verification conditions are then checked using the Z3~\cite{z3} SMT solver. This approach can efficiently verify all circuit transformations generated in our experiments (\Cref{sec:eval:generator}). \paragraph{Phase factors} To eliminate the existential quantification over the phase $\beta$, we search over a finite space of linear combinations of the parameters $\vec{p}$ for a value that can be used for $\beta$. We consider $\beta(\vec{p}) = \vec{a} \cdot \vec{p} + b$, where $\vec{a} \in A$ and $b \in B$ for some \emph{finite} sets $A \subseteq \mathbb{R}^m$ and $B \subseteq \mathbb{R}$. (Our experimentation with various combinations of quantum gates suggested that $\vec{a}\neq\vec{0}$ is sometimes needed, so we develop the mechanism with this generality; however, in the experiments reported in \Cref{sec:eval}, constant phase factors, i.e. $\vec{a}=\vec{0}$, turned out to be sufficient for the three gate sets and the parameter specifications used.) Given circuits $C_1$ and $C_2$, we find candidates for the coefficients $\vec{a}$ and $b$ using an approach similar to the one we use for generating candidate transformations. 
We select random parameter values $\vec{p_0}$ and quantum states $\ket{\psi_0}$ and $\ket{\psi_1}$ and find all combinations of $\vec{a}$ and $b$ as above that satisfy the following equation up to a small floating-point error (note that unlike \cref{eqn:fingerprint}, $|\cdot|$ is not used): \begin{equation} \label{eqn:phaseab} \bra{\psi_0} \sem{C_1}(\vec{p_0}) \ket{\psi_1} = e^{i(\vec{a} \cdot \vec{p_0} + b)} \cdot \bra{\psi_0} \sem{C_2}(\vec{p_0}) \ket{\psi_1}. \end{equation} For each such pair of candidate coefficients $\vec{a}$ and $b$, we then attempt to verify the following equation, \begin{equation} \label{eqn:verifier-qf} \forall \vec{p} \in \mathbb{R}^m. \; \sem{C_1}(\vec{p}) = e^{i(\vec{a} \cdot \vec{p} + b)} \sem{C_2}(\vec{p}), \end{equation} which, unlike \cref{eqn:equiv}, does not existentially quantify over $\beta$. If \cref{eqn:verifier-qf} holds for some candidate coefficients, then $C_1$ and $C_2$ are verified to be equivalent. Otherwise, we consider the transformation given by $C_1$ and $C_2$ to fail verification, but that case did not occur in our experiments. \paragraph{Trigonometric functions} Matrices of the parametric quantum gates we encountered only use their parameters inside arguments to $\sin$ or $\cos$ (after applying Euler's formula). Under this assumption, we reduce \cref{eqn:verifier-qf} to nonlinear real arithmetic in three steps. First, we eliminate expressions such as $\frac{\theta}{2}$ that occur in some quantum gates (e.g., \cref{eqn:u3}) by introducing a fresh variable $\theta'=\frac{\theta}{2}$ and substituting $\theta'+\theta'$ for $\theta$. After this step, all arguments to $\sin$ and $\cos$ are linear combinations of variables and constants (e.g., from phase factors) with integer coefficients.
Second, we exhaustively apply Euler's formula $e^{i\phi} = \cos\phi + i\sin\phi$, % and trigonometric identities for parity and sum of angles: $\sin(-x)=-\sin(x)$, $\cos(-x)=\cos(x)$, $\sin(x + y) = \sin(x) \cos(y) + \cos(x) \sin(y)$, and $\cos(x + y) = \cos(x) \cos(y) - \sin(x) \sin(y)$. After these steps, $\sin$ and $\cos$ are only applied to atomic terms (variables and constants). For each constant $c$, we require precise symbolic expressions for $\sin(c)$ and $\cos(c)$ (e.g., $\sin(\frac{\pi}{4}) = \frac{\sqrt{2}}{2}$), and eliminate $\sin$ and $\cos$ over constants using these expressions. Third, for every variable $t$ such that $\sin(t)$ or $\cos(t)$ is used we substitute $s_t$ for $\sin(t)$ and $c_t$ for $\cos(t)$, where $s_t$ and $c_t$ are fresh variables with a constraint $s_t^2 + c_t^2 = 1$, which fully eliminates trigonometric functions. Ultimately, Z3 can check the transformed version of \cref{eqn:verifier-qf} using the theory of quantifier-free nonlinear real arithmetic. During the development of Quartz\xspace we occasionally encountered verification failures, but these were due to implementation bugs, and the counterexamples obtained from Z3 were useful in the debugging process. Thus, verification is useful not only to ensure the ultimate correctness of the generated transformations, but also in the development process. %
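As a concrete illustration of the phase-factor search (a toy numpy reconstruction of the idea, not Quartz\xspace's code), consider $R_z(\theta)=\mathrm{diag}(e^{-i\theta/2},e^{i\theta/2})$ and $U_1(\theta)=\mathrm{diag}(1,e^{i\theta})$, which are equivalent with phase factor $\beta(\theta)=-\theta/2$. Sampling random parameters and states as in \cref{eqn:phaseab} quickly filters a small candidate grid of coefficients $(a,b)$:

```python
import numpy as np

def rz(theta):
    """Rz gate: diag(e^{-i theta/2}, e^{i theta/2})."""
    return np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])

def u1(theta):
    """U1 (phase) gate: diag(1, e^{i theta})."""
    return np.diag([1.0, np.exp(1j * theta)])

def phase_candidates(c1, c2, a_grid, b_grid, trials=20, tol=1e-8):
    """Keep the (a, b) pairs for which <psi0| C1(p) |psi1> equals
    e^{i(a*p + b)} <psi0| C2(p) |psi1> for random p, psi0, psi1
    (single parameter, single qubit, for simplicity)."""
    rng = np.random.default_rng(0)
    cands = {(a, b) for a in a_grid for b in b_grid}
    for _ in range(trials):
        p = rng.uniform(-np.pi, np.pi)
        psi0 = rng.normal(size=2) + 1j * rng.normal(size=2)
        psi1 = rng.normal(size=2) + 1j * rng.normal(size=2)
        lhs = psi0.conj() @ c1(p) @ psi1
        rhs = psi0.conj() @ c2(p) @ psi1
        # discard candidates that violate the relation on this sample
        cands = {(a, b) for (a, b) in cands
                 if abs(lhs - np.exp(1j * (a * p + b)) * rhs) < tol}
    return cands
```

On the grid $a\in\{-1,-\tfrac12,0,\tfrac12,1\}$, $b\in\{0,\pm\tfrac{\pi}{2},\pi\}$ only $(a,b)=(-\tfrac12,0)$ survives the filtering, matching $R_z(\theta)=e^{-i\theta/2}\,U_1(\theta)$; such a surviving candidate would then be handed to the symbolic check of \cref{eqn:verifier-qf}.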
\section*{Acknowledgments} \noindent This work is part of the research programme of the Netherlands Organisation for Scientific Research (NWO). We thank Femius Koenderink for stimulating discussions. S.R.K.R. acknowledges an ERC Starting Grant with project number 852694. \\
\section{Introduction} The XXZ Heisenberg chain is defined by the Hamiltonian \begin{equation} \label{Ham} H_{XXZ}={V\over a}\sum_{n=1}^N \left(\left(S_n^xS_{n+1}^x+S_n^yS_{n+1}^y\right)+ \rho\left(S_n^zS_{n+1}^z-1/4\right)\right)\,. \end{equation} The Hilbert space is the tensor product of $N$ spaces furnishing the doublet representation of SU(2). $S_n^{x,y,z}$ are the spin operators acting on the $n$th site and in (\ref{Ham}) the $(N\!\!+\!\!1)$th site is identified with the first one. We consider the easy axis region defined by \begin{equation} \rho=\cosh\gamma>1\,. \end{equation} The $V/a$ factor with $a$ being the lattice spacing and $V=2/\pi$ is included for the sake of proper normalization. The Hamiltonian (\ref{Ham}) can be diagonalized by Bethe Ansatz (BA) \cite{Be,Or,desClGa,Yang}, and the BA equations (BAE) were analyzed in \cite{BVV,ViWo,deVW}. Due to these studies it is well known by now that the chain with an even number of sites has two states in which there are no free parameters \cite{ViWo}. These are the ground state and the first excited state, and as the difference in their energy disappears exponentially fast with the length of the chain $N$ \cite{deVW,Bax}, they are often referred to as the two ground states, although it is more precise to call them the two vacua. The other excited states are described in terms of a few parameters characterizing some kind of dressed particles. These possess a gap, and their parameters satisfy BA type equations, the so-called higher level Bethe Ansatz equations (HLBAE) \cite{BVV,ViWo}. The excited states form two groups which can be associated with the two ground states \cite{ViWo}. The model has two relativistic continuum limits \cite{McCWu}. One is constructed by setting $\gamma=0$ first and taking the $a\to0$ (together with $N\to\infty$ but $Na=L$) continuum limit afterwards, resulting in a massless SU(2) conformal field theory.
To obtain the other relativistic limit one has to perform the limits $\gamma\to0$ and $a\to0$ simultaneously, keeping the gap in the properly normalized spectrum constant. The aim of the present work is to study the details of this second limit, and analyze the limiting theory. Our results are as follows. \begin{itemize} \item The limiting theory is a massive relativistic theory with a mass $M$, if the continuum limit is performed in such a way that \begin{equation}\label{rellim} \gamma\to0\,,\quad\quad a\to0\,,\quad\quad {4\over a}\exp\left\{-{\pi^2\over2\gamma}\right\}\to M\,. \end{equation} As the length of the chain $L=Na$ is kept finite, also the number of sites must be adjusted: \begin{equation}\label{rellim'} N={LM\over4}\exp\left\{{\pi^2\over2\gamma}\right\}\to \infty\,. \end{equation} Our expression for the mass $M$ differs from that found in \cite{McCWu} by a prefactor of the exponential. \item In the above limit the difference in the energies of the two `ground states' (physical vacua) stays finite but exponentially small in the length of the system: \begin{equation}\label{endifiscl} \Delta E_0=\sqrt{{8M\over\pi L}}e^{-LM}\,. \end{equation} \item The excited states of the system can be described in terms of excitations (dressed particles). Each particle is characterized by a rapidity we denote by $\vartheta$. The energy and momentum contributions of the particles are the sums of the contributions of the individual particles \begin{equation} E-E_0=\sum\varepsilon(\vartheta)\,,\quad\quad P=\sum p(\vartheta)\,, \end{equation} with \begin{equation} \varepsilon(\vartheta)=M\cosh\vartheta\,,\quad\quad p(\vartheta)=M\sinh\vartheta\,.
\end{equation} The rapidities of the particles must satisfy a set of BA type equations: \begin{mathletters}\label{ge} \begin{eqnarray}\label{gea} Lp(\vartheta_h)&=&2\pi({\cal I}_h+I_0)-\sum_l^{n(\vartheta)} \phi\left({\vartheta_h-\vartheta_l\over\pi}\right) +\sum_{\alpha}^{n(\kappa)} 2\tan^{-1}{\vartheta_h-\kappa_{\alpha}\over\pi/2}\,,\nonumber\\ {\cal I}_h&=&{1\over2}\left({n(\vartheta)-2n(\kappa)\over2} \right)\,({\rm mod}\,1)\,,\quad\quad I_0=0\ {\rm or}\ 1/2\,, \end{eqnarray} and \begin{eqnarray}\label{geb} \sum_h^{n(\vartheta)}2\tan^{-1}{\kappa_{\alpha}- \vartheta_h\over\pi/2} &=&2\pi{\cal J}_{\alpha}+\sum_{\beta}^{n(\kappa)} 2\tan^{-1}{\kappa_{\alpha}-\kappa_{\beta}\over\pi}\,,\nonumber\\ {\cal J}_{\alpha}&=&\left({n(\kappa)-n(\vartheta)+1\over2} \right)\,({\rm mod}\,1)\,. \end{eqnarray} \end{mathletters} Here $n(\vartheta)$ is the number of particles, the set of variables $\kappa$ is needed to describe the internal symmetry of the states, their number $n(\kappa)$ obeys $n(\vartheta)-2n(\kappa)\geq0$, and \begin{equation}\label{phifv} \phi(x)={1\over i}{\rm ln}{\Gamma\left({1\over2}-i{x\over2}\right) \Gamma\left(1+i{x\over2}\right)\over \Gamma\left({1\over2}+i{x\over2}\right) \Gamma\left(1-i{x\over2}\right)}\ . \end{equation} \item The excited states form SU(2) multiplets in which the energy is the same. Each {\em multiplet} is characterized by one solution of the equations (\ref{ge}). The number of states belonging to one multiplet is $(n(\vartheta)-2n(\kappa))+1$. Within a multiplet the states are labeled by the value of $S^z$, taking the values $n(\vartheta)/2\!-\! n(\kappa), \ n(\vartheta)/2\!-\! n(\kappa)\smi1,\ \ldots,\ -n(\vartheta)/2\!+\! n(\kappa)$. This indicates that the particles have SU(2) symmetry or a symmetry producing the same multiplet structure, and that the $z$ component of the spin connected to this symmetry coincides with the $z$ component of the real spins building up the original chain.
\item The solutions of (\ref{ge}) form two groups, one for $I_0=0$ and one for $I_0=1/2$. We argue that the two sets of excited states are the excitations of the two vacua. In both sets the $S^z=0$ states, including the vacua, are eigenstates of flipping all spins, but with different eigenvalues. In the case of the two particle excitations the symmetry of the singlets is the same as that of the corresponding vacuum, and that of the triplets is the opposite. This is the same structure as found in the $XXX$ chain, where it is a consequence of the SU(2) symmetry of the Hamiltonian (and this structure can be interpreted in terms of SU(2) with a modified coproduct, or equivalently in terms of a $q$-deformed SU(2) at $q=-1$). \item The two particle scattering matrix can be given up to an overall phase as \begin{equation}\label{smm1} \hat S(\Delta\vartheta)=-\exp\left\{i\left(2\pi I_0+ \phi\left({\Delta\vartheta\over\pi}\right) \right)\right\} \left(\hat{P}_{tr}+{\Delta\vartheta+ i\pi\over \Delta\vartheta-i\pi}\hat{P}_{s}\right)\,, \end{equation} where $\hat{P}_{tr}$ and $\hat{P}_{s}$ are the projectors on the triplet resp.~singlet subspace of the two spins. \end{itemize} All this shows that the scaling limit (SL) of the XXZ chain yields a massive relativistic theory (in duplicate) which is of the same structure as the massive sector of the theory obtained through the SL of the attractive Hubbard chain and identified as a regularization of the SU(2) symmetric chiral invariant Gross-Neveu (CGN) model \cite{WoFo1,WoFo2}. It is widely accepted that the XYZ chain is a lattice regularization of the sine-Gordon (SG) theory or the massive Thirring model (MTM) \cite{Lut} (as these latter two are equivalent \cite{Col}). It is also known that the MTM/SG theory at a special value of the coupling is equivalent to the CGN model (up to a free massless boson field); moreover, this value of the coupling corresponds to the antiferromagnetic (our $\rho$=1) point of the XYZ chain.
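As a quick numerical orientation for the limit (\ref{rellim})--(\ref{endifiscl}), the following sketch (an illustration we add here; the values of $M$, $L$ and $\gamma$ are arbitrary) evaluates the lattice spacing, the number of sites and the vacuum splitting:

```python
import math

# Illustrative parameters (arbitrary choices): physical mass M, system size L,
# and a small but finite anisotropy parameter gamma of the lattice model.
M, L, gamma = 1.0, 10.0, 0.5

# Lattice spacing from (rellim):  M = (4/a) exp(-pi^2 / (2 gamma))
a = (4.0 / M) * math.exp(-math.pi**2 / (2.0 * gamma))

# Number of sites from (rellim'): N = (L M / 4) exp(pi^2 / (2 gamma))
N = (L * M / 4.0) * math.exp(math.pi**2 / (2.0 * gamma))

# The two definitions are consistent: N = L / a identically.
print(N, L / a)

# Vacuum splitting (endifiscl): exponentially small in L, far below the gap M.
dE0 = math.sqrt(8.0 * M / (math.pi * L)) * math.exp(-L * M)
print(dE0)
```

Already at the moderate value $\gamma=0.5$ the chain must contain about $5\times10^4$ sites; as $\gamma\to0$ the required $N$ grows beyond any practical size, which is why the limit has to be taken analytically.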
This way recovering the massive sector of the CGN model is not unexpected. It is remarkable, however, that the SL of the XXZ chain is taken through couplings not corresponding to stable MTM/SG theories. Our detailed analysis also raises some questions connected with the mass formula and the symmetry of the vacua. The mass formula (\ref{rellim}) is not of the form found in the case of the Hubbard chain (a prefactor $\sqrt{\gamma}$ is `missing'), and it defines a $\beta$-function different from the one expected based on perturbation calculations. This may be connected with the fact that the $\rho$=1 point in the space of the couplings of the XYZ model is singular (in the sense that certain quantities like the ground-state energy of the XXZ chain are singular at this point \cite{desClGa}), thus approaching this point from different directions may lead to different results. In our SL the isotropic point is approached along the XXZ line while the limit taken in \cite{Lut} involves the XY-type anisotropy. Another interesting point is the existence of two very similar but not completely identical limiting theories. The two theories are distinguished by the symmetry of the vacuum with respect to reversing all spins. One has to note, however, that measuring this symmetry in the continuum limit may encounter difficulties, as the SL of the corresponding operator cannot be constructed directly. This symmetry has not been studied in other representations of the CGN model although it may be present also in other cases. In the SL of the half-filled Hubbard chain it is even stronger in the sense that the vacuum is a singlet of both the spins and the isospins, and it is symmetric under both the spin and isospin reversal. The paper is organized as follows. In Section \ref{sec:BA} we summarize those properties of the XXZ chain we need to construct its scaling limit and we also propose a formula to calculate the symmetry with respect to flipping all spins.
In Section \ref{sec:skalalim} the SL is constructed and the properties of the limiting theory are analyzed. Specifically, we construct the limiting process resulting in a relativistic dispersion (\ref{sec:spectr}), analyze the vacua in this limit (\ref{sec:vakumok}), derive the secular equations of the limiting theory (\ref{sec:seceq}), give the SU(2) multiplet structure of the eigenstates (\ref{sec:multipl}), derive the two particle $S$-matrix (\ref{sec:Smatrix}), study the reflection symmetry of the two particle $S^z=0$ states (\ref{sec:reflsim}), and argue that the boundary condition obeyed by the particles of the limiting theory is connected to the parity properties of $N$ kept unchanged in the $N\to\infty$ limit (\ref{sec:boundary}). The more technical details are collected in appendices. The properties of the elliptic functions we use are listed in Appendix \ref{sec:lista}. The behavior of the two vacua is calculated in Appendix \ref{sec:alapok}, and in Appendix \ref{sec:strstr} we check whether the approximations used to derive the HLBAE of the lattice model remain valid also in the SL. In Appendix \ref{sec:solstr} we describe the structure of the solutions of the HLBAE and give the classes of solutions which become degenerate in the SL. Finally, some details of our numerical calculations are given in Appendix \ref{sec:numerics}. \section{Bethe Ansatz solution of the XXZ chain} \label{sec:BA} \subsection{The BA equations} This section is intended to summarize the BA solution of the XXZ chain \cite{BVV,ViWo,deVW} with emphasis on those points which are relevant to the construction of the SL. First let us recall some properties of the Hamiltonian (\ref{Ham}). In the literature two versions of the XXZ model are commonly found which are distinguished by the relative sign of the $(S_n^xS_{n+1}^x+S_n^yS_{n+1}^y)$ and $(S_n^zS_{n+1}^z)$ terms.
The two theories are equivalent if $N$ is even; the corresponding transformation on the Hilbert spaces is generated by $\prod_{k}\sigma^z_{2k}$ which relates the spin operators as: $\{S_{2k}^x,S_{2k}^y,S_{2k}^z\}\rightarrow \{-S_{2k}^x,-S_{2k}^y,S_{2k}^z\}$. As this is a symmetry of the Hamiltonian, the energy spectrum does not change, but it does have an effect on the eigenvalues of the momentum operator which is relevant to the connection to the continuum limit. The momentum of every spin wave is shifted by $\pi$ which affects all the states with an odd number of spin waves, including the ground state whenever $N/2$ is an odd integer. There is another useful operator, which commutes with the Hamiltonian, but whose eigenvalues are affected: \begin{equation}\label{tukor} \hat\Sigma=\prod_{n=1}^N\,\sigma_n^x\,, \end{equation} which represents a reflection on the $x$-axis. (In a basis given by the products of the $S^z$ eigenstates of the individual spins, $\hat\Sigma$ simply flips all the spins: the up ones down and the down ones up. The symmetry connected with this operation we call spin reversal or reflection symmetry.) The $S^z=0$ BA eigenstates of (\ref{Ham}) are expected to also be eigenstates of (\ref{tukor}). For the sake of definiteness we have chosen the positive sign in (\ref{Ham}) and note that when $N$ is an integer multiple of four, the results should agree with the ones obtained from the opposite convention. The Hamiltonian (\ref{Ham}) and $S^z$ can be simultaneously diagonalized by the Bethe Ansatz (BA) and due to the symmetry corresponding to inverting the spins it is sufficient to consider only $S^z\geq 0$ states.
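Both statements, the equivalence of the two sign conventions for even $N$ and the spin-reversal symmetry of the ground state, can be checked by brute-force diagonalization of a short chain. The sketch below is an added illustration: it assumes the normalization $H=\sum_n[\pm(S_n^xS_{n+1}^x+S_n^yS_{n+1}^y)+\rho\,S_n^zS_{n+1}^z]$ with periodic boundary conditions and the overall factor $V/a$ set to one (the precise form of (\ref{Ham}) is not reproduced here, so this normalization is our assumption):

```python
import numpy as np
from functools import reduce

# Pauli matrices; spin operators are S = sigma/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    mats = [id2] * N
    mats[j] = op
    return reduce(np.kron, mats)

def xxz(N, rho, sign_xy):
    """Assumed normalization (V/a = 1, periodic boundary conditions):
    H = sum_n [ sign_xy*(Sx Sx + Sy Sy) + rho Sz Sz ]."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for n in range(N):
        m = (n + 1) % N
        H += sign_xy * 0.25 * (site_op(sx, n, N) @ site_op(sx, m, N)
                               + site_op(sy, n, N) @ site_op(sy, m, N))
        H += rho * 0.25 * site_op(sz, n, N) @ site_op(sz, m, N)
    return H

N, rho = 4, 2.0
Hp, Hm = xxz(N, rho, +1.0), xxz(N, rho, -1.0)

# U = prod_k sigma^z_{2k} flips S^{x,y} on every second site and maps one
# sign convention onto the other, so the two spectra coincide for even N.
U = reduce(lambda A, B: A @ B, [site_op(sz, j, N) for j in range(0, N, 2)])
print(np.allclose(U @ Hm @ U.conj().T, Hp))       # True

# For rho > 1 and N a multiple of four, the unique ground state should be an
# eigenstate of Sigma-hat = prod_n sigma^x_n with eigenvalue +1.
w, v = np.linalg.eigh(Hp)
gs = v[:, 0]
Sigma = reduce(lambda A, B: A @ B, [site_op(sx, j, N) for j in range(N)])
print(np.allclose(Sigma @ gs, gs))                # True
```

The first check is exact operator algebra; the second reproduces, for $N=4$, the eigenvalue $\Sigma=+1$ expected for the true ground state.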
The eigenvectors are explicitly given in terms of a wave number set $\left\vert\{k_1\ldots k_r\}\right\rangle$, for the states with $S^z=N/2-r$, ($r\leq{N/2}$) \begin{equation} \label{BAstate} \left\vert\{k_1\ldots k_r\}\right\rangle= \sum_{n_1<\ldots<n_r} \left( \sum_{\cal P} \exp\left\{i\sum_\alpha k_{{\cal P}_\alpha} n_\alpha + {i\over2}\sum_{\alpha<\beta} \Psi_{{\cal P}_\alpha {\cal P}_\beta}\right\}\prod_{n_\alpha}\sigma_{n_\alpha}^- \left\vert F\right\rangle\right), \end{equation} where the summation is over all permutations $\cal P$ of $\{1,\ldots, r\}$ and the wave numbers $k_\alpha$ together with the phase shifts $\Psi_{\alpha\beta}$ satisfy the equations \begin{equation} \label{fazis} \cot{\Psi_{\alpha,\beta}\over2}=-\rho {\cot{k_{\alpha}\over2}-\cot{k_{\beta}\over2}\over (1-\rho)\cot{k_{\alpha}\over2}\cot{k_{\beta}\over2}-(1+\rho)}\,, \end{equation} \begin{equation}\label{bea1} \exp\left\{iNk_{\alpha}\right\}= \exp\left\{i\sum_{\beta(\not=\alpha)} \Psi_{\alpha,\beta}\right\}\,. \end{equation} The energy and the momentum of the state are \begin{equation}\label{em} E={V\over a}\sum_{\alpha}\left(\cos k_{\alpha}-\rho\right)\,; \quad\quad Q=\sum_{\alpha} k_{\alpha}\,. \end{equation} The equation (\ref{fazis}) can be solved for the phase shifts by parameterizing $k_\alpha$'s in terms of new variables, $v_\alpha$ \cite{Yang} which are commonly called rapidities: \begin{equation} k_{\alpha}=\Phi(v_{\alpha},\gamma/2)\,,\quad\quad \Psi_{\alpha,\beta}=\Phi(v_{\alpha}-v_{\beta},\gamma)\,, \end{equation} with \begin{equation} \Phi(z,\delta) \equiv {1\over i} \ln{\sin(z+i\delta)\over\sin(z-i\delta)}\,. \end{equation} When expressing (\ref{bea1}) in terms of the rapidities it turns out to be useful to take the logarithm of the equation. In order to do this one has to define the branch cuts of $\Phi(z,\delta)$.
We choose $\Phi(z,\delta)$ to be a continuous function of $z$ with $\Phi(0,\delta)=\pi$ in the strip $\vert{\rm Im}\,z\vert<\delta$, while in the regions ${\rm Im}\,z>\delta$ and ${\rm Im}\,z<-\delta$ we choose those levels in which $\Phi\to\mp2i\delta$ if ${\rm Im}\,z\to\pm\infty$, respectively. This results in cuts running along the lines ${\rm Im}z=\pm\delta$, ${\rm Re}z\leq0$ and ${\rm Re}z\geq\pi$, and corresponds to choosing $f_1=f_2=0$ in \cite{ViWo}. Taking the logarithm of (\ref{bea1}), it becomes: \begin{equation}\label{BAe} N\Phi(v_{\alpha},\gamma/2)=2\pi I_{\alpha}+ \sum_{\beta}\Phi(v_{\alpha}-v_{\beta},\gamma)\,, \end{equation} where the $I_{\alpha}$ are half-odd integers and turn out to be very useful quantum numbers for characterizing the states with a macroscopic number of spin waves \cite{Yang}. The energy and the momentum in terms of the rapidities are \begin{equation}\label{emphi} E={V\over a}{\sinh\gamma\over2}\sum_{\alpha} \Phi^{\prime}(v_{\alpha},\gamma/2)\,, \quad\quad Q=\sum_{\alpha}\Phi(v_{\alpha},\gamma/2)\,. \end{equation} For technical reasons it is useful to restrict the real part of the rapidities to an interval of length $\pi$. We choose $0<{\rm Re}\,v_{\alpha}\leq\pi$, but note that for our results it is only important that the interval contains the point $\pi/2$ in its interior. Note also that the wave function, the energy and the momentum are independent of the particular choice of this interval and branch cuts of $\Phi(z,\delta)$ (the latter modulo $2\pi$). \subsection{The ground states} In the ground state(s) of the XXZ model with $N$ even, there are $N/2$ rapidities ($S^z\!=\!0$), all of them are real and all the $I_j$ quantum numbers are {\em consecutive} half-odd integers satisfying $I_{j+1}-I_j=-1$ for $\lambda_{j+1}>\lambda_j$ \cite{Yang}. (Here and in the following we denote the real rapidities by $\lambda_j$.)
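The branch conventions chosen above for $\Phi(z,\delta)$ can be verified numerically; for real $z$ in $(0,\pi)$ and in the asymptotic regions used below, the principal branch of the logarithm suffices (this minimal sketch is an illustration we add here):

```python
import numpy as np

def Phi(z, delta):
    """Phi(z, delta) = (1/i) ln[ sin(z + i delta) / sin(z - i delta) ]; the
    principal branch of the logarithm reproduces the convention of the text for
    0 < Re z < pi inside the strip and in the asymptotic checks below."""
    return np.log(np.sin(z + 1j * delta) / np.sin(z - 1j * delta)) / 1j

delta = 0.4

print(Phi(1e-9, delta))        # ~ pi: Phi(0, delta) = pi
print(Phi(np.pi / 2, delta))   # ~ 0: Phi decreases from pi to -pi across (0, pi)
print(Phi(0.3 + 20j, delta))   # ~ -2i*delta for Im z -> +infinity
print(Phi(0.3 - 20j, delta))   # ~ +2i*delta for Im z -> -infinity
```

On the real axis $\Phi$ is real (the two sines are complex conjugates), consistent with its use as a phase in (\ref{BAe}).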
For large $N$ the distribution of $\lambda$'s can be well approximated by a smooth density $\sigma_0(\lambda)$ satisfying the linear integral equation \cite{desClGa,Yang,BVV,ViWo}: \begin{eqnarray} \label{BAinte} -\Phi^{\prime}(\lambda,\gamma/2)&=&\sigma_0(\lambda)-{1\over2\pi} \int\limits_{\Lambda}^{\Lambda+\pi} \Phi^{\prime}(\lambda-\lambda^{\prime},\gamma) \sigma_0(\lambda^{\prime})d\lambda^{\prime}\,, \\ \label{BAeLambda} \mbox{with} \;\;\;\;\;\; N\Phi(\Lambda,\gamma/2) &=&2\pi\left(I_1+1/2\right)+ \sum_{j}^{N/2}\Phi({\Lambda}-\lambda_{j},\gamma)\,. \end{eqnarray} Eq.(\ref{BAinte}) can be solved by Fourier transformation leading to \begin{equation}\label{sigma0} \sigma_0(\lambda)={K\over\pi^2}\,{\rm dn}\! \left({2K\over\pi}\lambda,k\right)\;\;\;\;\; \mbox{with} \;\;\;\;\; {K^{\prime}\over K}={\gamma\over\pi}\,, \end{equation} where ${\rm dn}(w)$ is the Jacobian elliptic function and $K$ is the complete elliptic integral of the first kind with modulus $k$. (See Appendix \ref{sec:lista} for notations and some properties of the elliptic functions.) By means of this density the sum over $j$ in (\ref{BAeLambda}) can be calculated, and the rapidity set can be reconstructed. One finds that there are {\em two nonequivalent} sets of $N/2$ real $\lambda_j$'s satisfying (\ref{BAe}) with $I_{j+1}-I_j=-1$ \cite{ViWo}. These are: \begin{mathletters} \begin{equation} \lambda_j={\pi\over2K}F\left({2\pi\over N}(j-I_0),k\right)\,, \quad I_j=2I_0-1/2-j\,, \quad j=1,2,\ldots N/2 \end{equation} for $N/2=$even and \begin{equation} \lambda_j={\pi\over2K}F\left({2\pi\over N}(j-1/2+I_0),k\right)\,, \quad I_j=1/2-2I_0-j\,, \quad j=1,2,\ldots N/2 \end{equation} \end{mathletters} for $N/2=$odd, with $F(w,k)$ being the elliptic integral of the first kind. The two sets are distinguished by $I_0$ taking the values of $1/2$ and $0$, respectively. These two states of lowest energy are almost degenerate.
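The density (\ref{sigma0}) is directly computable with standard elliptic-function routines. As a consistency check, $\sigma_0$ must integrate to $1/2$ over a period, since $N\sigma_0$ accounts for the $N/2$ real rapidities (a sketch we add here; the value $\gamma=1$ is arbitrary):

```python
import numpy as np
from scipy.special import ellipk, ellipj
from scipy.optimize import brentq
from scipy.integrate import quad

gamma = 1.0  # illustrative anisotropy parameter

# Fix the modulus k through K'/K = gamma/pi (scipy uses the parameter m = k^2).
m = brentq(lambda m: ellipk(1.0 - m) / ellipk(m) - gamma / np.pi,
           1e-12, 1.0 - 1e-12)
K = ellipk(m)

def sigma0(lam):
    """Ground-state density of real rapidities, eq. (sigma0)."""
    dn = ellipj(2.0 * K * lam / np.pi, m)[2]   # ellipj returns (sn, cn, dn, ph)
    return (K / np.pi**2) * dn

# N*sigma0 counts the N/2 real rapidities, so sigma0 integrates to 1/2.
total, _ = quad(sigma0, 0.0, np.pi)
print(total)   # ~ 0.5
```

The normalization follows analytically from $\int_0^{2K}{\rm dn}(u)\,du={\rm am}(2K)=\pi$; the quadrature merely confirms it.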
The true ground state corresponds to $I_0=1/2$ and the difference between their energies is exponentially small in the length of the chain \cite{Bax,deVW}. The leading term is: \begin{equation}\label{endifi} \Delta E_0=E(I_0\!=\!0)-E(I_0\!=\!1/2)= {V\over a}{\sqrt{8k^{\prime}}\over\pi^{3/2}}\sinh\gamma K{k_1^{N/2}\over N^{1/2}}\ , \end{equation} where $k'=\sqrt{1-k^2}$ and $\sqrt{k_1}=(1-k')/k$. As $\Delta E_0$ is exponentially small in the length of the chain and -- as we shall see -- all the other states are separated from the lowest two by a finite gap, we call these the two ground states or the physical vacua. The two vacua can be distinguished, for instance, by their momentum $Q$ and the eigenvalue $\Sigma$ of the operator (\ref{tukor}) \cite{deVW}: \begin{eqnarray} \label{pi0} Q(I_0,N)&=&2\pi(I_0-1/2+N/4)\,({\rm mod}\,2\pi) \\ \label{tukorertek} \Sigma(I_0,N)&=&(-1)^{(2I_0-1+N/2)}\,. \end{eqnarray} \subsection{The excited states} The states other than the two vacua are called excited states and those of not too high energy can be described in terms of some dressed particles. These are created by leaving holes in the sequence of the $\lambda_j$'s, and by introducing complex rapidities. The smooth density $\sigma(\lambda)$ of the real rapidities is still a good approximation but (\ref{BAinte}) receives contributions from both the holes and the complex rapidities. When one solves for $\sigma(\lambda)$, one is left with a few equations relating the parameters of the excitations only \cite{BVV,ViWo}: \begin{eqnarray} \label{hlbae1a} 1\!&=& \exp i\!\left\{\!N\!\left(\!{\rm am}\!\left({2K\over\pi}\theta_h,k\!\right) \!-\!{\pi\over2}\right) \!-\! 2\pi I_0 \!+\! \sum_{b=1}^{n(\theta)}\left(\!{\cal F} (\theta_h\!-\!\theta_b,\gamma)\!-\!\frac{\pi}{2}\right) \!+\! \sum_{\alpha}^{n(\chi)} \Phi\!\left(\!\theta_h\!-\!\chi_{\alpha},\frac{\gamma}{2}\right)\!\right\}\!
\\ \label{hlbae1b} 1\!&=& \exp i\!\left\{\sum_{h=1}^{n(\theta)} \Phi(\chi_{\alpha}-\theta_h,\gamma/2) - \sum_{\beta=1}^{n(\chi)}\Phi(\chi_{\alpha}-\chi_{\beta},\gamma) - \pi \right\} \end{eqnarray} where \begin{equation} {\cal F}(x,\gamma)\equiv x+\sum_{m=1}^{\infty}{e^{-\gamma m}\over m\cosh(\gamma m)}\sin(2mx)\,, \end{equation} and $I_0$ is again either $1/2$ or 0. The positions of the holes in the real-rapidity distribution are denoted by $\theta_h$ and their number is denoted by $n(\theta)$, whose parity is the same as that of $N$. The variables $\chi$ represent the set of complex rapidities $z$ in the following way. If for a $\chi$, $\vert{\rm Im}\chi\vert<\gamma/2$, it represents a close pair \begin{mathletters}\label{komplexek} \begin{equation}\label{kozeli} z^{\pm}=\chi\pm i\gamma/2+\ordo{e^{-N\Omega(z^+,\gamma)}} \end{equation} with $\Omega(z,\gamma)$ being a positive-valued function of the order of unity (as long as $\gamma>0$). If $\vert{\rm Im}\chi\vert>\gamma/2$, the $\chi$ represents a wide root: \begin{equation}\label{tavoli} z=\chi+i\,{\rm sgn}({\rm Im}\chi)\gamma/2\,. \end{equation} \end{mathletters} As the $\chi$s are either real, or form complex conjugate pairs, the typical complex rapidity configurations are the 2-strings (the two $z$ represented by a real $\chi$), the quartets (the four complex rapidities represented by a complex conjugate pair with $\vert{\rm Im}\chi\vert<\gamma/2$), and wide pairs (corresponding to a complex conjugate pair of $\chi$s with $\vert{\rm Im}\chi\vert>\gamma/2$). The total number of the complex parameters $\chi_{\alpha}$ is denoted by $n(\chi)$.
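For orientation, the classification of the $\chi$s into 2-strings, quartets and wide pairs can be encoded in a small hypothetical helper (an illustration we add; it is not part of the solution procedure):

```python
def classify_chi(chis, gamma, tol=1e-10):
    """Hypothetical helper (illustration only): name the string content of a
    chi set whose members are real or come in complex conjugate pairs."""
    labels, seen = [], set()
    for i, c in enumerate(chis):
        if i in seen:
            continue
        if abs(c.imag) < tol:
            labels.append("2-string")   # real chi: close pair z = chi +/- i gamma/2
            continue
        # locate the complex conjugate partner of this chi
        j = next(k for k, d in enumerate(chis)
                 if k != i and k not in seen and abs(d - c.conjugate()) < tol)
        seen.add(j)
        labels.append("quartet" if abs(c.imag) < gamma / 2 else "wide pair")
    return labels

print(classify_chi([0.5 + 0j, 0.2 + 0.1j, 0.2 - 0.1j, 1.0 + 0.9j, 1.0 - 0.9j], 1.0))
# ['2-string', 'quartet', 'wide pair']
```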
The energy and the momentum of the excited states can be expressed in terms of the parameters of the holes only and are given as: \begin{eqnarray} \label{epstheta} E=E_0+\sum_h\epsilon(\theta_h) &\;\;\;\;\mbox{with}\;\;\;\;& \epsilon(\theta)={V\over a}{K\over\pi}\sinh\gamma \,{\rm dn}\left({2K\over\pi}\theta,k\right)\,, \\ \label{ptheta} Q=Q(I_0,N)+\sum_h q(\theta_h) &\;\;\;\;\mbox{with}\;\;\;\;& q(\theta)={\rm am}\!\left({2K\over\pi}\theta,k\right)-{1\over2}\pi, \end{eqnarray} with $E_0$ being the ground state energy, and $Q(I_0,N)$ is given in (\ref{pi0}). The appearance of $Q(I_0,N)$ in the momentum suggests that the $I_0=1/2$ resp.\ $I_0=0$ excited states should be considered as excitations above the $I_0=1/2$ resp.\ $I_0=0$ vacua. The relation between the functions $\epsilon(\theta)$ and $q(\theta)$ gives the dispersion relation of the particles: \begin{equation}\label{ep} \epsilon(q)={V\over a}{K\over\pi}\sinh\gamma\sqrt{1-k^2\cos^2q}\,. \end{equation} The spin of the state characterized by a solution is \begin{equation}\label{spin} S^z={1\over2}n(\theta)-n(\chi)\,. \end{equation} The states with $S^z<0$ are obtained by flipping all spins, for example using the operator $\hat\Sigma$ of (\ref{tukor}). The $S^z=0$ BA eigenstates are expected to be eigenstates of this operation: otherwise certain points of the spectrum would be twofold degenerate (with the two states connected by $\hat\Sigma$). This can happen {\em accidentally} but is not forced by the symmetry, as $\hat\Sigma$ has one-dimensional representations only. Recently Doikou and Nepomechie have proposed a formula for the parity of the $S^z=0$ states in the planar ($\rho<1$) region \cite{DoNe}. Now we {\em conjecture} that in the easy axis region we study, the spin reversal symmetry of the $S^z=0$ states is, in analogy with their formula, given as \begin{equation}\label{sumall} \Sigma=(-1)^{\mu},\quad\quad{\rm with}\quad\quad \mu=\frac{N}{2}+\frac{2}{\pi}\sum^{N/2}_{\alpha=1}v_{\alpha}\,.
\end{equation} We have not proved this formula, but performed numerical calculations to check this symmetry (Appendix \ref{sec:numerics}): solving the BA equations numerically for a number of $S^z=0$ states has shown that the wave function is either symmetric or antisymmetric under $\hat\Sigma$, and the value of $\Sigma$ is correctly given by (\ref{sumall}). In the large $N$ limit (\ref{sumall}) can be transformed into a simpler form using the density of the real roots \cite{ViWo}: \begin{equation}\label{simsum} \mu=(2I_0+1)+\frac{N}{2}+\frac{2}{\pi}\left\{ \sum_{\alpha}^{n(\chi)}\chi_{\alpha}-\frac{1}{2}\sum_h^{n(\theta)}\theta_h \right\}\,({\rm mod}\,2). \end{equation} In summary, the structure of the states can be described as follows: for even $N$ the system has two vacua, one of them is symmetric, the other is anti-symmetric under (\ref{tukor}); one of them has zero (lattice) momentum, while for the other this quantity is $\pi$. Above each vacuum there is a set of excited states which can be considered as scattering states of dressed particles. The momenta of the particles are quantized through a set of equations of the BA type (\ref{hlbae1a}-\ref{hlbae1b}) (HLBAE). The excitation energy is the sum of the energy contributions of the individual particles and the momentum is the sum of the contribution of the individual particles added to the momentum of the corresponding vacuum. The $S^z=0$ states are eigenstates of the operation flipping all spins, and the eigenvalue of this operation is related to the chain length, quantum number $I_0$ and the rapidities describing the excitations. \section{The scaling limit} \label{sec:skalalim} \subsection{The relativistic spectrum} \label{sec:spectr} Before constructing the scaling limit we note that certain quantities (the momentum $Q$ and the reflection $\Sigma$) depend on the parity of $N$ (and $N/2$) but not on its magnitude.
To make the procedure uniquely defined we carry out the $N\to\infty$ limit through $N$s being integer multiples of four, in which case both in $Q$ and $\Sigma$ the $N$ can be replaced by zero, and at the end of the section we discuss the consequences of other possible choices. First, we show that (\ref{rellim}) defines the limit in which the dispersion relation (\ref{ep}) describes relativistic particles. Note that the continuum momentum ($P$, $p$) is obtained from the lattice momentum ($Q$, $q$) after division by the lattice spacing $a$: \begin{equation} \epsilon^2=\left({V\over a}{K\over\pi}\sinh\gamma\right)^2 \left(1-k^2\cos^2(ap)\right) \approx \left({VKk\sinh\gamma\over\pi}\right)^2 \left(\left({k'\over ka}\right)^2+p^2\right)\,. \end{equation} Here we expanded $\cos(ap)$ about 1 and now observe that in the continuum limit ($a\rightarrow 0$) a relativistic dispersion relation emerges, provided $\gamma\to0$ together with \begin{equation} \left({VKk\sinh\gamma\over\pi}\right)=1\quad{\rm and}\quad \left({k'\over ka}\right)=M\,. \end{equation} These requirements imply $V=2/\pi$ and (\ref{rellim}) (see Appendix \ref{sec:lista}). It will prove to be useful to introduce the scaled rapidity variable, $\vartheta$ which also makes the limiting procedure more transparent. It is defined in terms of the $\theta$ and $\gamma$ as (see also (\ref{sigma0})): \begin{equation}\label{vartheta} K+\vartheta \equiv K\frac{2\theta}{\pi}\,, \end{equation} while the energy $\varepsilon(\vartheta)=\epsilon(\theta(\vartheta))$ and momentum $p(\vartheta)={1\over a}q(\theta(\vartheta))$ can be expressed as \begin{equation}\label{sclr} \varepsilon(\vartheta) = \frac{2K\sinh\gamma}{\pi^2}\frac{k'}{a}\frac{1}{{\rm dn}(\vartheta)} \\ \;\;;\hspace{.5in} \frac{\sin(ap(\vartheta))}{a} = \frac{k'}{a}\frac{{\rm sn}(\vartheta)}{{\rm dn}(\vartheta)}\,. 
\end{equation} In the scaling limit $\vartheta/K$ approaches 0 as $k\rightarrow 1$ which yields \begin{equation}\label{vt} \pi\frac{\theta-\pi/2}{\gamma}\rightarrow\vartheta \end{equation} \begin{equation} \varepsilon(\vartheta)\rightarrow\frac{k'}{a}\cosh(\vartheta) \;\;;\hspace{.5in} p(\vartheta)\rightarrow\frac{k'}{a}\sinh(\vartheta), \end{equation} in accord with ${k'}/{a}\rightarrow M =$ constant. The total momentum also contains a macroscopic quantity (\ref{ptheta}) and can be written in the continuum limit as \begin{equation}\label{mom1} P={Q(I_0)\over a}+\sum_h M\sinh(\vartheta_h)\,. \end{equation} In view of (\ref{mom1}) the two sectors corresponding to $I_0=1/2$ and $I_0=0$ become infinitely separated in momentum space lending strong support to interpreting the $I_0=1/2$ resp.~$I_0=0$ excited states as excitations above the corresponding ($I_0=1/2$ resp.~$I_0=0$) vacua. Technically, while (\ref{mom1}) gives the true momentum for the $I_0=1/2$ sector in the $a\to0$ limit, the continuum momentum of the $I_0=0$ sector needs to be redefined as \begin{equation} \label{mom2} P={Q(I_0)+\pi\over a}+\sum_h M\sinh(\vartheta_h)\,. \end{equation} As an alternative, we might keep both sets by redefining the lattice so that two sites form one elementary cell. In this case the lattice momenta $0$ and $\pi$ are equivalent, and the $Q(I_0)$ term can be dropped. In any case the energy and momentum measured relative to those of the corresponding vacuum are \begin{equation}\label{relem} E-E_0=\sum_h M\cosh(\vartheta_h)\,, \quad\quad P=\sum_h M\sinh(\vartheta_h)\,. \end{equation} \subsection{The vacua in the scaling limit} \label{sec:vakumok} The two lowest-lying states are characterized purely by the density of rapidities, $\sigma_0(\lambda)$, whose behavior in the scaling limit is crucial for our arguments.
The reason is that our considerations about the excited states are based on the existence of a non-vanishing background of rapidities (see also Appendix \ref{sec:strstr}). One can show that although $\sigma_0(\lambda)$ vanishes around $\pi/2$ (and diverges at $0$ and $\pi$), the density of the $\vartheta$'s is non-vanishing as required. The density $\varrho_0(\vartheta)$ of the rescaled rapidities (\ref{vartheta}) can be expressed as \begin{equation} N\sigma_0\left(\frac{\pi}{2K}(\vartheta+K)\right)d\lambda= L\varrho_0(\vartheta)d\vartheta\,, \end{equation} provided $d\vartheta=2Kd\lambda/\pi$ (here the l.h.s.~is the number of $\lambda$'s in the interval $\{\lambda,\lambda+d\lambda\}$ at $\lambda=\pi/2+\pi\vartheta/2K$, and the r.h.s.~gives the number of $\vartheta$'s in the interval $\{\vartheta,\vartheta+d\vartheta\}$), which leads to \begin{equation} \varrho_0(\vartheta)={1\over2\pi}M\cosh(\vartheta)\,. \end{equation} The difference (\ref{endifiscl}) in the energies of the two vacua is obtained directly by evaluating the scaling limit of (\ref{endifi}). We have to note, however, that (\ref{endifi}) is the leading term in a large $N$ expansion, and as the expansion coefficients are functions of $\gamma$ one has to check whether the neglected terms behave properly. In Appendix \ref{sec:alapok} we show that (\ref{endifiscl}) is the leading term of a large $L$ expansion of the same quantity and the neglected terms decay faster. Finally we note that the reflection symmetry is not affected by the limiting process, i.e.~$\Sigma=\pm1$ in the two states. \subsection{Secular equations in the scaling limit} \label{sec:seceq} We define the secular equations of the limiting theory as the SL of the HLBAE of the lattice model. This limit exists, but whether it is meaningful is a delicate question: in deriving the HLBAE two approximations have been used, integrating instead of summing over the rapidities, and neglecting the exponentially small corrections to the close pairs.
These have to be justified even in the SL. Replacing the sums with integrals requires that the density of rapidities does not vanish in the scaling limit (see the previous section); moreover, it can be shown that the error introduced remains negligible. Also the corrections to the close pairs remain exponentially small even in the SL, as shown in Appendix \ref{sec:strstr}. To derive the scaling limit of the HLBAE we first observe that the energy and momentum diverge unless all $\theta$ scale to $\pi/2$ as $\gamma\to0$ (i.e.~all $\vartheta$ are finite). In this case we also expect some of the $\chi$'s to scale to $\pi/2$. For convenience we introduce their rescaled versions as \begin{equation} \label{kappadef} \kappa=\pi\frac{\chi-\pi/2}{\gamma}\,. \end{equation} It is straightforward to determine how the functions ${\rm am}\!$ , ${\cal F}$ and $\Phi$ behave in the scaling limit: \begin{eqnarray} {1\over a}\left({\rm am}\!(\vartheta+K)-{\pi\over2}\right) &\stackrel{\gamma\rightarrow 0}{\longrightarrow}& p(\vartheta)=M\sinh\vartheta\,,\\ {\cal F}\left({\Delta\vartheta\gamma\over\pi},\gamma\right) &\stackrel{\gamma\rightarrow 0}{\longrightarrow}& \phi\left({\Delta\vartheta\over\pi}\right)\,,\\ \Phi\left({\Delta\kappa\gamma\over\pi},{\gamma\over m}\right) &\stackrel{\gamma\rightarrow 0}{\longrightarrow}& \pi-2\tan^{-1}\left({\Delta\kappa\over\pi/m}\right)\,, \\ \Phi\left({\Delta\chi},{\gamma\over m}\right) &\stackrel{\gamma\rightarrow 0}{\longrightarrow}& 0\hspace{1.5in}{\rm if}\;\; \Delta\chi\to{\rm finite.} \end{eqnarray} After substituting the above expressions into the HLBAE, their scaling limit is obtained. The complex rapidities which remain at a finite distance from $\pi/2$ drop out from the equations, although they do affect the $S^z$ of the state, as we will see in the next section.
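The second of these limits, ${\cal F}(\Delta\vartheta\gamma/\pi,\gamma)\to\phi(\Delta\vartheta/\pi)$, can be illustrated numerically, evaluating $\phi$ of (\ref{phifv}) through the complex log-Gamma function and the series for ${\cal F}$ in a numerically stable form (an added sketch; the test value of $\Delta\vartheta$ and the truncation are arbitrary choices):

```python
import numpy as np
from scipy.special import loggamma

def phi(x):
    """phi(x) of (phifv); since loggamma(conj z) = conj(loggamma(z)), the log of
    the Gamma-function ratio reduces to twice an imaginary part."""
    return 2.0 * np.imag(loggamma(0.5 - 0.5j * x) + loggamma(1.0 + 0.5j * x))

def F(x, gamma):
    """Truncated series for the lattice phase F(x, gamma); e^{-gm}/cosh(gm) is
    rewritten as 2 e^{-2gm}/(1 + e^{-2gm}) to avoid overflow. The truncation at
    m ~ 50/gamma leaves a tail of order e^{-100}."""
    m = np.arange(1, int(50 / gamma) + 1)
    q = np.exp(-2.0 * gamma * m)
    return x + np.sum(2.0 * q / (m * (1.0 + q)) * np.sin(2.0 * m * x))

dtheta = 1.3  # arbitrary rapidity difference
for gamma in (0.2, 0.05, 0.01):
    print(gamma, F(dtheta * gamma / np.pi, gamma) - phi(dtheta / np.pi))
```

The printed deviation shrinks with $\gamma$, consistent with the limit stated above (the small-$x$ slopes also agree analytically: both sides behave as $2\ln2\cdot\Delta\vartheta/\pi$).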
The equations for the holes and the complex parameters approaching $\pi/2$ become \begin{eqnarray}\label{geea} Lp(\vartheta_h)&=&2\pi({\cal I}_h+I_0)-\sum_l^{n(\vartheta)}\phi\left( {\vartheta_h-\vartheta_l\over\pi}\right) +\sum_{\alpha}^{n(\kappa)} 2\tan^{-1}{\vartheta_h-\kappa_{\alpha}\over\pi/2}\,,\\ &&{\cal I}_h={1\over2}\left({n(\vartheta)- 2n(\kappa)\over2}\right)\,({\rm mod}\,1)\,, \\ \label{geeb} \sum_h^{n(\vartheta)}2\tan^{-1}{\kappa_{\alpha}-\vartheta_h\over\pi/2} &=&2\pi{\cal J}_{\alpha}+\sum_{\beta}^{n(\kappa)} 2\tan^{-1}{\kappa_{\alpha}-\kappa_{\beta}\over\pi}\,,\\ &&{\cal J}_{\alpha}=\left({n(\kappa)- n(\vartheta)+1\over2}\right)\,({\rm mod}\,1)\,. \end{eqnarray} It is to be noted that the quantum number $I_0$ directly appears in the secular equations of the excited states. As we have discussed before, we interpret the two sets of solutions distinguished by the value of $I_0$ as the excited states of the corresponding vacua. \subsection{Multiplet structure of the states} \label{sec:multipl} A few simple solutions of the above secular equations are given and the structure of solutions is discussed in Appendix \ref{sec:solstr}. Here we discuss the degeneracies developing in the spectrum in the SL. As both the energy and the momentum depend on the rapidities of the holes only, we expect degenerate multiplets corresponding to solutions differing in their $\chi$ content only. As (\ref{geea}) and (\ref{geeb}) contain both the $\vartheta$ and $\kappa$ sets, the solutions becoming degenerate can differ in the $\chi$'s not scaling to $\pi/2$ (which have disappeared from the equations). In Appendix \ref{sec:solstr} we discuss the types of different solutions of the HLBAE (\ref{hlbae1b}) and relate them to the solutions of (\ref{geeb}). We argue that, at a given number of holes, $n(\vartheta)$, each having the form $\theta_h=\gamma\vartheta_h/\pi+\pi/2$, to any number $0\leq n(\kappa)\leq n(\vartheta)/2$ (\ref{hlbae1b}) have $n(\vartheta)/2\!-\! n(\kappa)\spl1$ different solutions in which $n(\kappa)$ of the $\chi$s scale to $\pi/2$ like $\chi_{\alpha}=\gamma\kappa_{\alpha}/\pi+\pi/2$ with the common $\kappa_{\alpha}$ set being one solution of (\ref{geeb}). These solutions differ in the number of $\chi$s not scaling to $\pi/2$. It is clear then that the number of different {\em solutions} of (\ref{hlbae1a}-\ref{hlbae1b}) becoming degenerate in the SL and given by one solution of the system (\ref{geea}-\ref{geeb}) is $n(\vartheta)/2\!-\! n(\kappa)\spl1$. The number of $\chi$s not scaling to $\pi/2$ can be $0,1,\ldots,n(\vartheta)/2\!-\! n(\kappa)$, and the corresponding states have spins $S^z=n(\vartheta)/2\!-\! n(\kappa),\ldots,1,0$ respectively. Completing this set of states with the states $S^z=-1,-2,\ldots,-(n(\vartheta)/2\!-\! n(\kappa))$ which are obtained by flipping all spins we see that the degeneracy of the {\em states} given by one solution of the equations (\ref{geea}-\ref{geeb}) is $n(\vartheta)\!-\! 2n(\kappa)\spl1$. This corresponds exactly to SU(2) multiplets: the states are characterized by two quantum numbers, $l=n(\vartheta)/2\!-\! n(\kappa)$ and $m$ with $l\geq m\geq-l$, and $m$ does not influence the energy. This way we may consider the particles as `spin' 1/2 particles forming SU(2) eigenstates with quantum numbers $l$ and $m$. (This picture will be refined when discussing the spin reversal symmetry.) Apparently the equations (\ref{geea}) and (\ref{geeb}) give the $l=m$ states directly (as usual in SU(2) BA systems), and to have the complete description of the $m<l$ states one should find the $\chi$s not scaling to $\pi/2$; nevertheless, these parameters do not enter the spectrum. This interpretation is consistent with the expectation for the number of different solutions: the number of nonequivalent solutions of (\ref{geeb}) given by (\ref{izoszam}) is just the number of different $l=n(\vartheta)/2\!-\! n(\kappa)$, $l=m$ states of $n(\vartheta)$ spin 1/2 particles.
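As a combinatorial consistency check of this counting (an illustration we add): if the number of inequivalent solutions of (\ref{geeb}) at fixed $n(\kappa)$ equals the standard multiplicity $\binom{n}{n(\kappa)}-\binom{n}{n(\kappa)-1}$ of spin-1/2 multiplets with $l=n/2-n(\kappa)$, which is what we assume (\ref{izoszam}) gives, then the multiplets of dimension $n-2n(\kappa)+1$ must exhaust all $2^n$ states:

```python
from math import comb

def multiplet_count(n, nk):
    """Number of spin-1/2 multiplets with l = n/2 - nk for n spins (the standard
    SU(2) counting, assumed here to coincide with (izoszam))."""
    return comb(n, nk) - (comb(n, nk - 1) if nk >= 1 else 0)

# Multiplets of dimension n - 2*nk + 1 must exhaust the 2^n spin states.
for n in range(1, 11):
    total = sum(multiplet_count(n, nk) * (n - 2 * nk + 1)
                for nk in range(n // 2 + 1))
    assert total == 2 ** n
print("multiplet dimensions saturate 2^n for n(theta) = 1..10")
```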
One has to note, however, that while the quantum number $m$ coincides with the $S^z$ of the system, it cannot be seen whether $S^2=l(l+1)$, or whether the quantity given by $l(l+1)$ differs from the square of the total spin. Note also that this SU(2) symmetry develops in the SL, but it is not present in the initial model. \subsection{The two particle scattering matrix} \label{sec:Smatrix} The two particle scattering phase shifts of the physical particles can be reconstructed from the secular equations. The method for this is based on the idea that the deviations of the particle momenta from the free values can be interpreted as the phase shifts of the particles scattering on each other \cite{AnLo2,Kor,DeLo1}. Consider a two particle scattering state on a ring. If the momenta are $p_1$ and $p_2$ and the particles obey twisted boundary conditions with a twisting angle $\varphi$ ($\varphi=0$ for periodic and $\varphi=\pi$ for antiperiodic boundary conditions), then the boundary condition requires $Lp_1+\delta_{12}=2\pi n_1+\varphi$ and $Lp_2-\delta_{12}=2\pi n_2+\varphi$ with $n_1$ and $n_2$ integers and $\delta_{12}$ being the phase shift \cite{HaQuiBa,SuSha}. Writing the equations (\ref{geea}) in this form the phase shifts can be found. A weakness of this procedure is that in most of the interesting cases $\varphi$ is not known (as the boundary conditions for the dressed particles can differ from those of the bare ones). The twisting could be seen directly in the equation describing one single particle. Since, however, the parity of the particle number is usually determined by that of the number of elements in the chain, the sectors with even and odd numbers of particles are disjoint in the sense that a given system represents either one or the other, and one cannot see whether the twisting is the same in both sectors.
(A trivial example for particle number dependent boundary conditions is given by a spin chain where the spin waves are described by Fermions through a Jordan-Wigner transformation.) Now we {\em suppose} periodic boundary conditions, i.e.\ $\varphi=0$, but we keep in mind, that our results hold up to a rapidity independent constant (see also \cite{EssKo}). A triplet state of two particles is described by two $\vartheta$s and no $\kappa$s. For such a state Eq.\ (\ref{geea}) yields \begin{equation}\label{deltatr} \delta^{tr}_{12}=\phi\left({\Delta\vartheta\over\pi}\right)+ \pi-2\pi I_0\,, \quad \Delta\vartheta=\vartheta_1-\vartheta_2\,, \end{equation} where the $\pi$ comes from the parity prescription for the parameters ${\cal I}_h$. A singlet state is characterized in addition to $\vartheta_1$ and $\vartheta_2$ by a $\kappa$ for which Eq.\ (\ref{geeb}) yields $\kappa=(\vartheta_1+ \vartheta_2)/2$. For this case the above reasoning leads to \begin{equation}\label{deltasing} \delta^{s}_{12}=\phi\left({\Delta\vartheta\over\pi}\right)- 2\tan^{-1}\left({\Delta\vartheta\over\pi}\right)-2\pi I_0\,. \end{equation} From the phase shifts (\ref{deltatr}) and (\ref{deltasing}) the two particle $S$-matrix can be given as \begin{equation}\label{smm} \hat S(\Delta\vartheta)=-\exp\left\{i\left(2\pi I_0+ \phi\left({\Delta\vartheta\over\pi}\right)\right)\right\} \left(\hat{P}_{tr}+{\Delta\vartheta+ i\pi\over \Delta\vartheta-i\pi}\hat{P}_{s}\right)\,, \end{equation} where $\hat{P}_{tr}$ and $\hat{P}_{s}$ are the projectors on the triplet resp.~singlet subspace of the two spins. This $S$-matrix is consistent with the SU(2) symmetry of the limiting model detected in the degenerations. The scattering matrices (\ref{smm}) of the two sets of solutions differ in an overall sign due to the appearance of $I_0$. 
Deciding whether this difference is significant, in the sense that it has physical consequences for the nature of the particles in the two sets of states, requires further consideration. \subsection{Spin reversal symmetry} \label{sec:reflsim} Finally we have to discuss the question of the symmetry of the excited states with respect to $\hat\Sigma$ of (\ref{tukor}). The two vacua are eigenstates with eigenvalues $\pm1$. As we have already mentioned, checking numerically a number of $S^z=0$ excited states of the finite lattice model (with finite $\gamma$) we have seen that these states are also eigenstates of $\hat\Sigma$ with the eigenvalue $\Sigma$ given by (\ref{sumall}-\ref{simsum}), but at that level we could not see a simple rule for the eigenvalues. We have found some indications, however, that there is a correlation between the spin structure of the dressed particles and the spin reversal symmetry. We have examined a number of solutions of different types with two holes and one close pair of the $N=20$ lattice. It has been found that, in accordance with Appendix \ref{sec:solstr}, the solutions form two classes: in class 1 the real part of the close pair is in the vicinity of $(\theta_1+\theta_2)/2$, while in class 2 it is in the neighborhood of $(\theta_1+\theta_2\pm\pi)/2$. In the case of $I_0=1/2$ the solutions in class 1 are symmetric and the others, in class 2, are antisymmetric under $\hat\Sigma$, while just the opposite is true for $I_0=0$ (Appendix~\ref{sec:numerics}). In the scaling limit the solutions of class 1 resp.~class 2 satisfy (\ref{sing}) resp.~(\ref{trip}), representing singlet and triplet states. We believe that the symmetry properties do not change as $N$ is increased, and we conclude that in this type of state the singlets have the symmetry of the vacuum, while the symmetry of the triplets is the opposite.
Accepting the validity of (\ref{sumall}) (and so that of (\ref{simsum})) we can obtain $\Sigma$ also for the $n(\theta)=4$, $n(\chi)=2$ states. Taking the results of Appendix \ref{sec:solstr} we find that \begin{equation}\label{genszigma} \Sigma=\Sigma(I_0,N,l)=(-1)^{(2I_0-1)+N/2-l}=\Sigma(I_0,N)(-1)^l \end{equation} with $l=0,\,1,\,2,$ corresponding to singlet, triplet and quintuplet configurations, respectively. We think this is an indication that (\ref{genszigma}) is generally true. If we perceive the reflection symmetry as the product of the symmetry of the vacuum and that of the spin structure of the excitations, the latter should be \begin{equation}\label{spart} \Sigma=(-1)^l \end{equation} i.e.\ we now have to consider the singlets to be symmetric and the triplets anti-symmetric. This is just the opposite of the case of two ordinary SU(2) spins, and resembles a $q$-deformed SU(2) structure at $q=-1$ \cite{Balog}. It can be shown in general \cite{Wo} that defining the spin operators for a system of particles as \begin{equation} \sigma^{\pm}_2=\sum(-1)^{(j-1)}\sigma^{\pm}_j\quad\quad \sigma^z_2=\sum\sigma^z_j \end{equation} (with $\sigma_j$ being the Pauli matrices acting on the spin labeled by $j$) results in an SU(2) structure (equivalent to a $q$-deformed SU(2) at $q=-1$) in which the $S^z=0$ members of the multiplets have the eigenvalue (\ref{spart}). In particular, in the case of two spins the $\sigma_2^z=0$ member of the triplet is $\frac{1}{\sqrt2}(|\uparrow\downarrow\rangle- |\downarrow\uparrow\rangle)$, while the singlet is $\frac{1}{\sqrt2}(|\uparrow\downarrow\rangle+ |\downarrow\uparrow\rangle)$. We have to note that the symmetry properties of the limiting theory are in strong analogy with those of an ordinary XXX Heisenberg chain. The eigenstates of this model are $S^2$ and $S^z$ eigenstates with eigenvalues $l(l+1)$ and $m$, respectively, with $l\geq|m|$ integers if $N$ is even.
It is known that the $m=0$ states of an even number of spins are eigenstates of $\hat\Sigma$ with an eigenvalue analogous to (\ref{genszigma}) \begin{equation}\label{genszigmap} \Sigma_{XXX}=(-1)^{N/2-l}\,. \end{equation} This is a consequence of the fact that any such state can be given as a linear combination of states built up as products of $l$ triplet and $N/2-l$ singlet pairs \cite{SuFa}. The ground state (the vacuum) is a singlet, thus all the singlet excitations have the same symmetry as the vacuum and the triplets have the opposite. Thus we may consider the degenerate states as forming normal SU(2) multiplets of the $N$ spins of the chain, but equally well we may think of these states as the above-described multiplets of the {\em dressed particles}, and all this is a direct consequence of the SU(2) symmetry of the XXX Hamiltonian. In the case of our limiting theory there is no Hamiltonian, and the symmetry is recognized in an indirect way: the SL is taken through states not showing the SU(2) structure, but this symmetry appears as degeneracies characteristic of it develop. Now we see that the reflection symmetry properties complete this structure. \subsection{Boundary conditions in the limiting theory} \label{sec:boundary} In the previous part of the Section we supposed that the $N\to\infty$ limit is taken through integer multiples of four ($N=4\cal N$). Actually, in order to define the limit of the momentum uniquely, and to have the parameters in the limiting equations together with the properties of the excited states properly defined, we have to fix the way the $N\to\infty$ limit is taken; there are four different possibilities: $N=4\cal N+\nu$, $\cal N\to\infty$ and $\nu=0,1,2,3$.
In these cases (\ref{pi0}) and (\ref{genszigma}) are \begin{eqnarray} \label{pi02} Q(I_0,N)&\equiv&Q(I_0,\nu)=2\pi(I_0-1/2+\nu/4)\,, \\ \label{tukorertek2} \Sigma(I_0,N,l)&=&\Sigma(I_0,\nu,l)=(-1)^{(2I_0-1)+\nu/2+l} \end{eqnarray} (the latter applying for $\nu=0,2$ only, as $\hat\Sigma$ has no eigenstates among the BA states of the $N$=odd chain). Now we want to investigate the $\nu=1,2,3$ cases. First consider the $\nu=2$ case. Now we have to define the momentum in the continuum limit as \begin{equation}\label{mom3} P={Q(I_0,2)-\pi\over a}+\sum_h M\sinh(\vartheta_h) \end{equation} or \begin{equation}\label{mom4} P={Q(I_0,2)\over a}+\sum_h M\sinh(\vartheta_h) \end{equation} to pick the $I_0=1/2$ or $I_0=0$ states, respectively. In this procedure all the results are exactly the same as before, except those concerning the reflection symmetry (and, as we shall see, the $S$-matrices must be affected too). Now, according to (\ref{tukorertek2}) ((\ref{tukorertek})), the secular equations (\ref{geea}) with $I_0=0$ describe the excitations of the reflection symmetric vacuum, and $I_0=1/2$ gives the excitations of the antisymmetric one (just the opposite of the $\nu=0$ case). The nature of the excitations should be given by the symmetry of the corresponding vacuum, and the secular equations are the quantizations of the momenta of the individual particles. Now the fact that the same set of particles (say a singlet pair above one of the vacua) may obey two different quantization conditions can be interpreted as meaning that in the two cases the {\em boundary conditions} are different: if they are periodic in one case, they are anti-periodic in the other (the difference in the twisting is $\pi$). The two particle scattering matrices given by (\ref{smm}) are obtained supposing periodic boundary conditions.
From the very same secular equations, but supposing anti-periodic boundary conditions, the same $S$-matrices multiplied by $(-1)$ are obtained, i.e.~interpreting the limiting theories for $\nu=0$, $I_0=0(1/2)$ and $\nu=2$, $I_0=1/2(0)$ as being the same ones but with different boundary conditions is consistent with the $S$-matrices obtained. It is also consistent with the reflection symmetry of the $S^z=0$ states: we checked numerically (for $N=18$) that also in the $\nu=2$ case the two particle states scaling to singlets have the same symmetry as the vacuum. Now it is not hard to see that with slight modifications of the prescription of the limiting process for the momentum we can consistently define the scaling limit also for $\nu=1$ and $\nu=3$. This way, together with the two possible values of $I_0$, we can obtain four cases. These too are described by the equations (\ref{geea}-\ref{geeb}), but with the restriction that $n(\vartheta)$ must be odd. In these cases too, the eigenstates form SU(2) multiplets corresponding to the different combinations of $n(\vartheta)$ spins of length $1/2$, and Eqs.~(\ref{geea}-\ref{geeb}), just as in the previously discussed cases, give directly the $l=m=n(\vartheta)/2-n(\kappa)$ states. We interpret these four sets of states as the excited states with odd numbers of particles above the two vacua, both with two different boundary conditions. Unfortunately, as none of these sets contains the corresponding vacuum, we cannot tell which values of the parameters $I_0$ and $\nu$ correspond to which vacuum. Now, however, the boundary conditions can be seen directly: the equations of the one-particle states read \begin{equation} Lp(\vartheta)=2\pi(I_0+1/4+{\rm integer}) \end{equation} indicating that these particles obey twisted boundary conditions with twisting angle $2\pi(I_0+1/4)$ \cite{HaQuiBa,SuSha}. A possible explanation of the fact that for different $\nu$s we get different limits can be the following.
In many cases, in order to define the local operators in a naive continuum limit, one has to group the lattice points into elementary cells of a certain length \cite{WoEcTr,WoFo1,WoFo2}, which will be the points of the continuum. If the chain length is not an integer multiple of the size of the elementary cell, boundary terms necessarily occur in the limit, which can cause a rapidity-independent shift in the phase (twisting) as a particle is taken around the ring. It now seems that the size of the elementary cell is four, and so there are four different possibilities as far as the `surface terms' are concerned. In addition to this, the number of particles must have the same parity as the chain length (the vacua are spinless, while the particles are spin 1/2 particles). These together add up to give the variety of sectors discussed above. \section{Acknowledgment} We are grateful to J.~Balog, P.~Forg\'acs, L.~Palla and J.~S\'olyom for illuminating discussions. T.H.~was supported in part by the U.S.\ Department of Energy under contract \#DE-FC02-94ER40818 and by the Hungarian National Science Fund OTKA, grant No.~T19917. F.W.\ was supported by the Hungarian National Science Fund OTKA, grants No.~T014443 and T022607.
\section{Introduction} Clustering in metal alloys is known as the very early stage of first-order transformations within a bulk crystal, and it largely influences the mechanical properties of quenched/aged materials. Coherent with the matrix, small clusters of solute atoms have a significantly lower nucleation barrier than the terminal second phase of a completely different crystal structure. Early clusters nucleate and grow in size, forming the so-called GP-zones known to be a metastable precursor of the equilibrium phase~\cite{christian}. The formation and growth mechanisms of the early clusters are presently poorly understood due to the lack of direct atomistic observations of structural changes during the transition process. However, inspired by the observations made on the quenched structures using transmission electron microscopy (TEM)~\cite{marceau10,marceau10-2,nutyen67,ozawa70}, 3D atom probe~\cite{sato04,esmaeili07,marceau10,marceau10-2,biswas11}, and positron annihilation~\cite{somoza02,somoza10,marceau10-2} techniques, the formation of early clusters has been empirically associated with so-called quenched-in defects. Formed within the bulk crystal upon quenching, excess vacancies and/or dislocation loops have been presumed to decrease the energy barrier for nucleation, facilitating cluster formation~\cite{esmaeili07,ozawa70,somoza10,marceau10,marceau10-2}. Understanding solute clustering mechanisms is of crucial importance for designing an effective age-hardening process producing the desired mechanical properties in alloys~\cite{christian}. To our knowledge, no systematic study of the clustering mechanisms has been done using atomic-scale simulation methods such as molecular dynamics (MD) and Monte Carlo (MC) simulations, due to their restricted ability to access the relevant time scales of diffusional transformations.
Dynamical calculations using classical density functional theory (CDFT) are also inefficient due to the high spatial resolution required to resolve the sharp density spikes in solid phases~\cite{jaatinen09}. The recently developed phase field crystal (PFC) method~\cite{elder04,elder07,wu10,greenwood10,greenwood11} has shown promise for simulating structural transformations on diffusive time scales. This new formalism carries the essential physics of CDFT without the need to resolve the sharp atomic density peaks. In the most recent PFC formalism developed by Greenwood et al.~\cite{greenwood10,greenwood11,greenwood11-2}, various crystal symmetries can be easily stabilized by construction of relevant correlation kernels. This approach preserves the numerical efficiency of the original PFC model and is able to dynamically simulate the precipitation of solid phases within a parent phase of different crystal symmetry~\cite{greenwood10} and/or chemical composition~\cite{greenwood11-2}. This letter proposes a new approach to study the clustering phenomenon that relies on atomic-scale simulations using the previously developed alloy PFC model of ref.~\cite{greenwood11-2}. We explore the formation and growth mechanisms of early clusters in a quenched bulk lattice of a supersaturated Al-Cu alloy initially containing quenched-in defects such as dislocations. \section{Model Structure} We start with the free energy functional in the binary PFC model~\cite{greenwood11-2}, \begin{align} \frac{\Delta F}{kT\rho^o} = \int f\, dr = \int \bigg\{&\frac {n^2}{2}-\eta\frac {n^3}{6}+\chi\frac {n^4}{12}+(n+1)\Delta F_{mix} \nonumber\\ &-\frac 1{2}n \int dr'\,C^n_{eff}(|r-r'|)\,n'+\alpha|\vec{\nabla}c|^2\bigg\}\, dr \label{PFCalEnrgy2} \end{align} where $n$ and $c$ represent the reduced dimensionless atomic number density and solute concentration fields, respectively.
$\eta$ and $\chi$ are coefficients added to fit the ideal energy to a polynomial expansion ($\eta=\chi=1$ describes a Taylor series expansion of the bulk free energy around the reference density) and \begin{align} \Delta F_{mix}=\omega\{c\ln(\frac{c}{c_o})+(1-c)\ln(\frac{1-c}{1-c_o})\} \label{Fmix} \end{align} represents the energy density associated with the entropy of mixing. The coefficient $\omega$ is introduced to fit the entropic energy away from the reference composition $c_0$. The parameter $\alpha$ is the gradient energy coefficient (taken as 1 in this study). These parameters are discussed further in ref.~\cite{greenwood11-2}. For a binary alloy, Greenwood et al.~\cite{greenwood11-2} introduced the correlation function \begin{align} C^n_{eff}=X_1(c)C^{AA}_2 + X_2(c)C^{BB}_2 \label{CorrEff} \end{align} where $X_1(c)=1-3c^2+2c^3$ and $X_2(c)=1-3(1-c)^2+2(1-c)^3$. $C^{AA}_2$ and $C^{BB}_2$ are correlation functions representing, respectively, contributions to the excess free energy for the situations where A atoms are in the preferred crystalline network of B atoms and where B atoms are in a structure preferred by A atoms. The correlation functions $\hat{C}^{ii}_2(\vec{k})$ are defined to have reciprocal space peaks (i.e.\ $k_j$, corresponding to the inverse of the interplanar spacings) determined by the main families of planes in the equilibrium crystal unit cell structure of the $i^{th}$ component. Each peak is represented by the following Gaussian form of width $\alpha_j$, modulated for temperature $\sigma$ by a Debye-Waller prefactor which accounts for an effective transition temperature $\sigma_{Mj}$~\cite{greenwood11-2}: \begin{align} \hat{C}^{ii}_{2j}=e^{-\frac{\sigma^2}{\sigma^2_{Mj}}}e^{-\frac{(k-k_j)^2}{2\alpha^2_j}} \label{CorrF} \end{align} The equations of motion of the total density and concentration fields follow dissipative dynamics~\cite{archer05}.
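As an illustration, the Gaussian-peak kernel above can be evaluated numerically. Combining the peaks through a maximum envelope is an assumption of this sketch (only the single-peak form is specified above), and the peak data are the square-lattice values quoted later for the phase-diagram construction.

```python
import numpy as np

# Sketch of the reciprocal-space correlation kernel C^ii_2(k).
# Each peak: Debye-Waller prefactor exp(-sigma^2/sigma_Mj^2) times a
# Gaussian of width alpha_j centered at k_j.  Taking the maximum over
# the peaks as the envelope is an assumption of this sketch.
def C2_hat(k, peaks, sigma):
    k = np.asarray(k, dtype=float)
    vals = [np.exp(-sigma**2 / sM**2) *
            np.exp(-(k - kj)**2 / (2.0 * aj**2))
            for (kj, aj, sM) in peaks]
    return np.max(vals, axis=0)

# (k_j, alpha_j, sigma_Mj) for the two peak families of the square Al
# phase, using the values quoted for the phase-diagram construction.
peaks_Al = [(2*np.pi,            2.4,            0.55),
            (np.sqrt(2)*2*np.pi, np.sqrt(2)*2.4, 0.55)]
print(C2_hat(2*np.pi, peaks_Al, sigma=0.04))  # = exp(-0.04^2/0.55^2)
```

At a peak position the kernel reduces to the Debye-Waller factor alone, which is how temperature suppresses the excess-energy contribution of that family of planes.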
The total mass density and total reference density per unit volume are defined as $\rho=\rho_A+\rho_B$ and $\rho^o=\rho_A^o+\rho_B^o$, respectively. Thus, the equations of motion can be written for $n$($=\rho/\rho^o-1$) and $c$($=\rho_B/\rho$) as $\frac{\partial n}{\partial t}=\vec{\nabla}.\{M_n\vec\nabla(\frac{\delta \Delta F}{\delta n})\}+\eta_n(\sigma,t)$ and $\frac{\partial c}{\partial t}=\vec{\nabla}.\{M_c\vec\nabla(\frac{\delta \Delta F}{\delta c})\}+\eta_c(\sigma,t)$, respectively~\cite{greenwood11-2}. $M_n$ and $M_c$ are dimensionless kinetic mobility parameters (equal to 1 in this study). $\eta_n(\sigma,t)$ and $\eta_c(\sigma,t)$ are stochastic noise variables subsuming the role of fast atomic vibrations in the density and concentration fields, respectively. \section{Results} \subsection{Phase diagram reconstruction} To examine the equilibrium properties of this binary PFC model for a 2D Al-Cu system, we construct the phase diagram for the coexistence of two square phases. The coexistence lines between the respective phases are obtained by a common tangent construction of the free energy curves of solid and liquid at the reference density ($\bar{n}=0$). Following Greenwood et al.~\cite{greenwood11-2}, the free energy curves of the square phases are calculated using the two-mode approximation of the density fields, defined by \begin{align} n_i(\vec{r})=\sum_{j=1}^{N_i}A_j\sum_{l=1}^{N_j}e^{2\pi\mathbf{i}\vec{k}_{l,j}.\vec{r}/a_i} \label{Density} \end{align} where the subscript $i$ denotes a particular solid phase with a lattice spacing $a_i$, and the index $j$ counts the $N_i$ modes of the $i$-phase. $A_j$ is the amplitude of mode $j$ and $l$ is the index over the group of $N_j$ reciprocal space peaks corresponding to mode $j$. Accordingly, $\vec{k}_{l,j}$ is the reciprocal lattice vector normalized to a lattice spacing of 1, corresponding to each index $l$ in the family $j$.
The free energy curve for each phase can be calculated as a function of the composition $c$ by substituting the above density field approximation into Eq.~(\ref{PFCalEnrgy2}) and integrating over the unit cell. The resulting crystal free energy is then minimized with respect to the amplitudes $A_j$. For the liquid phase, the amplitude $A_j$ is set to zero and the density is taken as the reference density ($\bar{n}=0$). A more detailed description of this methodology is provided in ref.~\cite{greenwood11-2}. \begin{figure}[htbp] \resizebox{3.3in}{!}{\includegraphics{PhaseDiagram}} \caption{The constructed phase diagram for a square-square system, with the inset showing the Al-rich side of the experimental phase diagram of the Al-Cu system taken from Ref.~\cite{baker}. The parameters for the ideal free energy contribution were $\eta=1.4$ and $\chi=1$, while $\omega=0.005$ and $c_0=0.5$ for the entropy of mixing. Widths of the correlation peaks are $\alpha_{11Al}=2.4$, $\alpha_{10Al}=\sqrt{2}\alpha_{11Al}$ (the required ratio to introduce isotropic elastic constants in a square phase~\cite{greenwood11-2}), $\alpha_{11\theta}=2.4$ and $\alpha_{10\theta}=\sqrt{2}\alpha_{11\theta}$. The peak positions for pure Al correspond to $k_{11Al}=2\pi$, $k_{10Al}=\sqrt{2}k_{11Al}$, $k_{11\theta}=(81/38)\pi$ and $k_{10\theta}=\sqrt{2}k_{11\theta}$. The effective transition temperatures are set to $\sigma_{M11Al}=0.55$, $\sigma_{M10Al}=0.55$, $\sigma_{M11\theta}=0.55$ and $\sigma_{M10\theta}=0.55$. The concentration $c$ is rescaled considering the Cu-content in the $\theta$-phase.} \label{fig:PhaseDiagram} \end{figure} On the Al-rich side of the experimental Al-Cu phase diagram, shown in the inset of Fig.~\ref{fig:PhaseDiagram}, there is a eutectic transition between the Al-rich $\alpha$-fcc phase and an intermediate phase $\theta$ (containing $\approx 32.5at.\%$ Cu) with a tetragonal crystal structure.
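A minimal sketch of the two-mode square-lattice density of Eq.~(\ref{Density}) follows; the two reciprocal-vector families and the amplitudes are illustrative choices, not the minimizing values.

```python
import numpy as np

# Sketch of the two-mode density field of a square phase: the first
# family is {(+-1,0),(0,+-1)}, the second {(+-1,+-1)}.  The amplitudes
# A1, A2 are illustrative, not minimized values.
def n_square(x, y, A=(0.1, 0.05), a=1.0):
    A1, A2 = A
    fam1 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    fam2 = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
    n = np.zeros(np.broadcast(x, y).shape, dtype=complex)
    for Aj, fam in ((A1, fam1), (A2, fam2)):
        for kx, ky in fam:
            n += Aj * np.exp(2j*np.pi*(kx*x + ky*y)/a)
    return n.real   # the +-k pairs make the sum real

x = y = np.linspace(0.0, 2.0, 101)
X, Y = np.meshgrid(x, y)
field = n_square(X, Y)
print(field.max())   # maximum at the lattice sites: 4*A1 + 4*A2
```

Substituting such a field into the free energy functional and minimizing over the amplitudes is what produces the solid free energy curves used in the common tangent construction.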
For 2D simulations, in order to approximate these equilibrium properties, we reconstruct the binary phase diagram of Al and $\theta$, both with a square symmetry but differing in Cu-content. The lattice constant (and thus the reciprocal space peaks) of $\theta$ is approximated by interpolating between those of pure Al and Cu. The solid phase free energy is calculated with a variable lattice constant weighted by concentration $c$ using the interpolation functions $X_1$ and $X_2$. The polynomial fitting parameters in Eq.~(\ref{PFCalEnrgy2}) (namely $\eta$, $\chi$ and $\omega$) and the widths of the various peaks ($\alpha_j$) in the correlation kernel $\hat{C}^{ii}_{2j}$ are then chosen so as to obtain the same compositions for the $\alpha$-phase solubility limit and the eutectic point as those in the experimental phase diagram. \subsection{Simulation of clustering} With the equilibrium properties obtained above, simulations of clustering were performed on a rectangular mesh with grid spacing $dx=0.125$ and time step $dt=1$. Considering the lattice parameter of 1, each atomic spacing is resolved by 8 mesh spacings. The dynamical equations were solved semi-implicitly in Fourier space for higher efficiency. The initial conditions were chosen to study the proposed dominant role of quenched-in dislocation-type defects in the bulk crystal during early-stage precipitation in dilute Al-Cu alloys quenched from a solutionizing temperature~\cite{nutyen67,ozawa70,somoza02,somoza10,desorbo58}. According to this hypothesis, dislocation loops, generated by excess vacancies, are responsible for local lattice distortions facilitating segregation and diffusion of Cu-atoms, while also driving the system towards a more thermodynamically stable state~\cite{nutyen67,ozawa70,somoza10,marceau10,marceau10-2}. Therefore, as initial conditions, we use a crystal lattice of uniform composition distorted by introducing dislocations.
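The semi-implicit Fourier-space update can be sketched for a single conserved density field with one correlation peak; the implicit/explicit splitting shown is a common choice for PFC-type dynamics rather than the exact scheme of this letter, and all parameter values are illustrative.

```python
import numpy as np

# Sketch of a semi-implicit Fourier-space step for conserved PFC-type
# dynamics  dn/dt = lap(mu),  mu = n - C*n + g(n),
# with the linear part (1 - C_hat(k)) treated implicitly and the
# nonlinear ideal-energy part g(n) = -eta*n^2/2 + chi*n^3/3 explicitly.
# Single Gaussian correlation peak; parameter values are illustrative.
N, dx, dt = 64, 0.125, 1.0
eta, chi = 1.4, 1.0
sigma, sigma_M, k0, alpha = 0.04, 0.55, 2*np.pi, 2.4

k = 2*np.pi*np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing='ij')
k2 = KX**2 + KY**2
C_hat = np.exp(-sigma**2/sigma_M**2)*np.exp(-(np.sqrt(k2)-k0)**2/(2*alpha**2))

def step(n):
    g_hat = np.fft.fft2(-eta*n**2/2 + chi*n**3/3)
    n_hat = (np.fft.fft2(n) - dt*k2*g_hat) / (1.0 + dt*k2*(1.0 - C_hat))
    return np.real(np.fft.ifft2(n_hat))

rng = np.random.default_rng(0)
n = 0.01*rng.standard_normal((N, N))
m0 = n.mean()
for _ in range(50):
    n = step(n)
print(abs(n.mean() - m0))  # the k=0 mode is untouched: mass is conserved
```

Because the stiff linear part sits in the denominator, the scheme remains stable at the large time step $dt=1$ quoted above, which is the point of solving semi-implicitly in Fourier space.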
\begin{figure}[htbp] \resizebox{3.3in}{!}{\includegraphics{Clustering}} \caption{(colour online) PFC simulation of clustering phenomena on a system of 256$\times$256 atoms after 225,000 time steps, containing clusters of various sizes and concentrations; (a) the developed structure of a long-lived cluster; (b) the initially distorted structure. For graphical illustration, the concentration field is superimposed on the density field, and ranges from dark blue to dark red as the Cu-content increases.} \label{fig:Clustering} \end{figure} A PFC simulation is performed for the quench/aging of Al-2at.$\%$Cu from the solutionizing temperature of $\sigma=0.17$ to $\sigma=0.04$ with the initial conditions shown in Fig.~\ref{fig:Clustering}(b). During the simulation, small clusters first form with a slightly higher Cu-content than that of the matrix. As time progresses, some of these clusters shrink in size and concentration and a few become stabilized (e.g.\ the cluster shown in Fig.~\ref{fig:Clustering}(a)). In contrast, as expected, quenching the same initial structure from the solutionizing temperature of $\sigma=0.17$ to a temperature within the single-phase $Sq$-$Al$ region, i.e., $\sigma=0.16$, leads to complete removal of the distortion.
Also, tracing vacancy clusters by positron annihilation, Somoza et al.~\cite{somoza02,somoza10} have proposed that vacancy-Cu pairs are present in the quenched state of an Al-1.74at.$\%$Cu alloy. To our knowledge, our PFC simulations are the first atomic-scale simulations to support the above hypothesis of vacancy/dislocation-mediated solute clustering and nucleation mechanisms of early stage precipitation. \subsection{Analysis of work of formation} We further investigated the above mechanisms of cluster formation and growth by analyzing the system energetics for a long-lived cluster. To avoid possible finite size effects, a test with the same conditions as those of the above simulation was performed on a larger system of 512$\times$512 atoms. The strain field caused by the dislocations' displacement fields is evaluated by \begin{align} \epsilon = \sum_{i=1}^{N_{tri}} \sum_{j=1}^{3}\bigg(\frac{a_{ij}-a_o}{a_o}\bigg) \label{Strain} \end{align} which is calculated over the triangulated density peaks using the Delaunay triangulation method. $N_{tri}$ is the number of triangles in the field, $a_o$ is the dimensionless equilibrium lattice parameter (the number of grid points resolving one lattice spacing, i.e., 8), and $a_{ij}$ is the length of the $j^{th}$ side of the $i^{th}$ triangle. Small clusters, each accompanied by at least one dislocation, appear to be in local equilibrium with the matrix (Fig.~\ref{fig:StrainField}(a)). During the simulation, following Fig.~\ref{fig:StrainField}(b) and (c), cluster ``a'' continues to grow while, simultaneously, its accompanying dislocation climbs up towards nearby dislocations, creating larger local strain fields (i.e.\ 0.001, 0.0016 and 0.014 for cluster ``a'' in Fig.~\ref{fig:StrainField}(a), (b) and (c), respectively).
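The strain measure above can be sketched with SciPy's Delaunay triangulation; the peak positions, the value of $a_o$, and the jitter used to avoid a degenerate triangulation of a perfect square lattice are all illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sketch of the strain measure: triangulate the density-peak positions
# and sum the relative deviation of every triangle side from the
# equilibrium lattice spacing a0.
def total_strain(peaks, a0):
    tri = Delaunay(peaks)
    eps = 0.0
    for simplex in tri.simplices:
        p = peaks[simplex]
        for j in range(3):
            a_ij = np.linalg.norm(p[j] - p[(j + 1) % 3])
            eps += (a_ij - a0) / a0
    return eps

# Illustrative check: a uniformly stretched lattice accumulates more
# strain than the unstretched one (a small jitter avoids a degenerate
# triangulation of the perfect square lattice).
rng = np.random.default_rng(1)
g = np.arange(10, dtype=float)
pts = np.array([(i, j) for i in g for j in g]) + 1e-3*rng.standard_normal((100, 2))
print(total_strain(1.05*pts, a0=1.0) > total_strain(pts, a0=1.0))  # True
```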
This mechanism of stress relaxation through solute segregation has been shown through phase-field studies by Leonard and Haataja~\cite{leonard05} to be the main cause of alloy destabilization by structural spinodal decomposition in the presence of dislocations. Also, PFC studies of thin layer deposition by Muralidharan and Haataja~\cite{muralidharan10} indicated that, due to the above mechanism, some immiscible alloys exhibit a miscibility gap around the inter-layer interface in the presence of coherency stresses. \begin{figure*}[htbp] \resizebox{5.1in}{!}{\includegraphics{StrainField}} \caption{(colour online) (a-c) Snapshots taken at 3 different times showing the structural changes during the formation of cluster ``a''; (d) work of formation (evaluated from Eq.~(\ref{Nucl.En})) vs.\ $R$ for increasing dislocation strain fields, i.e., increasing $\Sigma b_i^2$; the dashed curve represents the work of formation for direct homogeneous nucleation of clusters in the absence of dislocations, i.e., when $\Sigma b_i^2=0$; (e) the variation of the numerically evaluated total energy, $\Delta G_{tot}$, and the weighted average of the Burgers vectors, $\Sigma b_i^2$, due to the formation of cluster ``a'' in the above box.} \label{fig:StrainField} \end{figure*} The effect of dislocations on the nucleation of clusters is investigated by considering the following form of the work of formation: \begin{align} W &= 2\pi R\gamma + \pi R^2 (-\Delta f + \Delta G_s) - \Delta G_{sr} + \Delta G_d \label{Nucl.En} \end{align} where $R$ is the cluster radius in terms of the number of lattice spacings and \begin{align} \gamma = \frac {\int_{Area} {\alpha|\vec{\nabla}c|^2 dr}}{L} \label{Surf.En} \end{align} is a Cahn-Hilliard type interfacial free energy per unit length of the interface in 2D. $Area$ represents the area of the surface containing the cluster, and $L$ is the circumferential length of a round cluster of radius $R$.
Assuming a low dislocation density in the system, the interfacial free energy is taken to be solely chemical, neglecting the structural contributions~\cite{Turnbull}. \begin{align} \Delta f=f^b-\mu_c^b|_{c^b}(c^b-c^{cl})-f^{cl} \label{DrivingForce} \end{align} is the bulk driving force for nucleation of a cluster at a given concentration, where the superscripts `$b$' and `$cl$' denote the bulk matrix and cluster ``phase'' quantities, respectively. \begin{align} \Delta G_s = 2 G_A \delta^2 \frac{K_B}{K_B+G_A} \label{StrainEnergy} \end{align} represents the strain energy for a coherent nucleus~\cite{hoyt}, where $\delta$ is the misfit strain and $G_A$ and $K_B$ are the 2D shear and bulk moduli, respectively, calculated from the PFC 2D mode approximation~\cite{greenwood11-2}. \begin{align} \Delta G_{sr} = \eta^2\chi_d E A \ln(R) \label{StressRelaxE} \end{align} is defined as the stress relaxation term due to segregation of solute into dislocations~\cite{cahn57}, where $A= \frac{G_A \Sigma b_i^2}{4 \pi (1-\nu)}$, $\nu = \frac{E}{2 G_A}-1$, $\eta=\frac{1}{a}\frac{\partial a}{\partial c}$ is the linear expansion coefficient with respect to concentration, $E$ is the 2D Young's modulus~\cite{greenwood11-2}, $\chi_d=(\frac{\partial^2 f}{\partial c^2})^{-1}$, $\Sigma b_i^2$ represents a weighted average of the Burgers vectors around the dislocations accompanying the cluster, and $a$ is the lattice parameter. The prefactor of the logarithmic term, $\eta^2\chi_d E A$, approximates how the strain energy is reduced due to solute segregation around a dislocation~\cite{larché85}. \begin{align} \Delta G_d = \zeta A \label{Disl.E} \end{align} accounts for the increase in the total system energy due to the presence of dislocations, where $\zeta$ is a prefactor of order ten giving the average amount of energy per dislocation core~\cite{Hull}.
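A numerical sketch of the work of formation above follows; all parameter values are illustrative (not the fitted PFC values), chosen only to show how increasing the dislocation strength $\Sigma b_i^2$ lowers and eventually removes the barrier.

```python
import numpy as np

# Sketch of the work of formation W(R) of a 2D cluster of radius R
# assisted by dislocations.  Illustrative parameters only: gamma is the
# interfacial energy per unit length, dfv the bulk driving force, dGs
# the coherency-strain energy, c_sr the prefactor eta^2*chi_d*E of the
# stress-relaxation term, zeta the energy per dislocation core, and A
# plays the role of G_A*Sum(b_i^2)/(4*pi*(1-nu)).
gamma, dfv, dGs, c_sr, zeta = 1.0, 0.5, 0.1, 1.0, 10.0

def work_of_formation(R, A):
    return (2*np.pi*R*gamma + np.pi*R**2*(-dfv + dGs)
            - c_sr*A*np.log(R) + zeta*A)

R = np.linspace(0.05, 10.0, 4000)

def barrier(A):
    """Height of the nucleation barrier above the local minimum."""
    W = work_of_formation(R, A)
    imin = np.argmin(W[R < 1.0])        # local minimum at small R
    return W[imin:].max() - W[imin]

for A in (0.5, 2.0, 5.0):
    print(A, barrier(A))  # the barrier shrinks as A grows
```

For large enough $A$ the curve slopes downhill everywhere, i.e.\ the barrier is completely eliminated, which is the behaviour the plots in the figure illustrate.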
Fig.~\ref{fig:StrainField}(d) plots the evaluation of the above work of formation (Eq.~(\ref{Nucl.En})) for cluster ``a'' at different mean concentrations, up to that of the largest cluster shown in Fig.~\ref{fig:StrainField}(c). The mean concentration of each cluster is estimated within a radius $R$, defined by radially averaging the radius of the concentration field bounded by a threshold of $[c^b+\frac{\sum^N {c-c^b}}{N}]$. The dashed curve in Fig.~\ref{fig:StrainField}(d) represents the work of formation for direct homogeneous nucleation of clusters in the absence of dislocations, i.e., $\Sigma b_i^2=0$. The energy barrier for homogeneous nucleation seems to be smaller than that of dislocation-assisted clustering by a single dislocation, i.e., $\Sigma b_i^2=1$. However, according to the plots shown in Fig.~\ref{fig:StrainField}(d), as Cahn~\cite{cahn57} also pointed out, the barrier for the formation of clusters on dislocations can be significantly reduced or even completely eliminated by increasing the magnitude of the strain field around the dislocations (i.e.\ increasing $\Sigma b_i^2$). Notably, the local minimum also shifts to larger nucleus sizes until it vanishes (i.e., the work of formation slopes down continuously with $R$). It is noteworthy that, in the absence of quenched-in defects, nucleation of the second phase requires the introduction of a thermally activated noise to produce fluctuations in both the density and concentration fields. Assuming dislocations are present in the bulk matrix of a supersaturated quenched alloy, in this study we demonstrate how elasticity itself can drive the system into a phase transition. The influence of a thermally activated noise on the transformation kinetics will be investigated in a future study through the use of a well-defined noise algorithm.
We have, however, observed in our simulations that in the case of a mismatch between the two species, such as in Al-Cu alloys, introducing a Gaussian noise to both density and concentration fields will not have a major impact on the overall path of the transformation. In other words, the phase transformation is mainly driven by the interactions between the elastic fields of the dislocations and the solute atoms. The total work of formation, $\Delta G_{tot}$, is also estimated numerically by measuring the change in the grand potential within a box engulfing cluster ``a" during its formation and growth in the bulk matrix, i.e., \begin{align} \Delta G_{tot} &= \int_V\Omega-\int_V\Omega^b \nonumber\\ &=\int_V {[f-\mu_c c - \mu_n n]} -\int_V {[f^b -\mu_c^b c^b - \mu_n^b n^b]} \label{TotalE} \end{align} Here, $\mu_c=\frac{\partial f}{\partial c}$ and $\mu_n=\frac{\partial f}{\partial n}$ are diffusion potentials of concentration and density fields, respectively, and $V$ is the total volume. The above work of formation has contributions from the interfacial energy and driving force for formation of clusters (i.e., $\Delta G_{tot}=\Delta G_{\gamma}-\Delta G_v$), both of which include the elastic effects. Since the above box contains only one cluster, the calculated change in the grand potential accounts for the structural and compositional changes during the formation and growth of only cluster ``a". While the growth of cluster ``a" raises the local free energy, other parts of the system may undergo a process of annihilation and/or shrinkage of sub-critical clusters and their accompanying dislocations leading to an overall decrease in the free energy of the system. As can be seen in Fig.~\ref{fig:StrainField}(e), the total work of formation increases with the growth of cluster ``a" until a maximum value, after which it starts to decrease.
Also, as can be seen in this figure, the estimated values of $\Sigma b_i^2$ at various sizes of cluster ``a" closely correspond to its analytical relationship with the cluster size at the local minima mapped on the energy plots of Fig.~\ref{fig:StrainField}(d). Like the work of formation, the value of $\Sigma b_i^2$ reaches a maximum at the critical size of cluster ``a" during its formation and growth. According to our data, cluster ``a" continuously grows in the presence of dislocations, implying that, at each sub-critical cluster size, the system is sitting at a local energy minimum. Since cluster ``a" at each sub-critical size is in a local equilibrium with the matrix, we call it a metastable precursor to the critical-size cluster. This is analogous to previous PFC studies of crystal solidification which show that metastable amorphous precursors emerge first due to their lower nucleation barrier than that of a crystalline solid~\cite{toth11,tegze09}. In our case, the nucleation barrier is lowered by the effect of locally straining a sub-critical cluster (as a result of local accumulation of the dislocations' Burgers vectors, as illustrated in Fig.~\ref{fig:Clustering}(a)), making it thermodynamically favorable for the cluster to receive more solute atoms from the matrix and grow in size. \begin{figure}[htbp] \resizebox{3.4in}{!}{\includegraphics{StrainELand}} \caption{(colour online) Common tangent construction using mean-field free energy curves of unstrained (solid curve) and strained solid phases (dashed curves).} \label{fig:StrainELand} \end{figure} \subsection{Metastable phase coexistence} The metastable coexistence between a sub-critical cluster ``a" and the matrix at the quench/aging temperature is elucidated by evaluating the mean field free energy of a system comprising an unstrained matrix phase and strained solid phases with different magnitudes of distortion, i.e., a uniform strain.
The free energy-concentration curve of a strained solid phase, at a given temperature, can be obtained by calculating the peaks of the correlation kernel $\hat{C}_{2j}^{ii}$ at locations slightly off those of the equilibrium density peaks, $k_j$, for a square structure. The introduced amount of strain is defined in Fourier space by \begin{align} \epsilon=|k-k_j|/k_j \label{FourierStrain} \end{align} where index $j$ denotes one family of planes in reciprocal space. As can be inferred from Fig.~\ref{fig:StrainELand}, increasing the amount of strain from 0.0016 to 0.014 (corresponding approximately to the average strain within cluster ``a" shown in Fig.~\ref{fig:StrainField}(b) and (c), respectively) raises the free energy in the strained solid. The free energy wells also shift to different concentrations of solute. Such a configuration admits a common tangent between the free energy curves of the unstrained matrix (e.g. the solid curve) and the distorted ones (e.g. dashed curves)~\cite{larché85}, leading to a (metastable) multiphase coexistence with a lower free energy (as demonstrated in Fig.~\ref{fig:StrainELand}). In other words, at each level of local strain, there is a thermodynamic driving force for a transformation from a single-phase structure of a strained matrix to a phase-coexistence between a strained cluster and an unstrained matrix. On the other hand, despite the fact that the above transformation is thermodynamically favorable, the configuration of the energy plots in Fig.~\ref{fig:StrainELand} implies that the driving force for nucleation is lower for the strained cluster (i.e., using the definition of $\Delta f$ in Eq.~(\ref{Nucl.En})). However, since the energy curves in this figure are derived from a mean-field PFC approximation, the illustrated phase-coexistence does not carry the effect of interfacial energy and only includes a mean-field sense of the misfit strain.
These factors have a significant impact on the thermodynamics of phase-coexistence at cluster sizes smaller than that of the critical nucleus. In fact, the previously described stress relaxation term in the definition of the work of formation (Eq.~(\ref{StressRelaxE})), $\Delta G_{sr}$, overcompensates for the effect of the reduced driving force for formation of strained clusters. \subsection{Clustering mechanism} Based on our PFC simulations, we propose the following mechanism of clustering: (1) stress relaxation by segregation of solute atoms into highly-strained areas in the matrix, such as around dislocations, (2) strain-aided nucleation of sub-critical clusters at concentrations higher than that of the matrix and (3) subsequent growth and enrichment of sub-critical clusters to overcritical sizes, which overcomes the nucleation barrier only if a sufficient strain field is preserved. The above mechanism is consistent with the experimentally observed formation and enrichment of highly-strained coherent GP-zones in quenched-aged dilute Al-Cu alloys~\cite{biswas11}, proposed as the initial step before precipitation of the semi-coherent and incoherent equilibrium $\theta$-phases~\cite{christian}. GP-zones in dilute binary Al alloys are normally known as coherent/semi-coherent particles often with a crystal structure and composition similar to those of the final equilibrium precipitate~\cite{christian,hoyt}. Our clusters possess the same chemical composition and lattice parameter as those of the equilibrium $\theta$-phase, pre-set by the relevant peaks in our correlation functions. Thus, they would represent an early-stage evolution of the so-called GP-zones. An investigation of the transformation of GP-zones into the subsequent metastable and equilibrium precipitates will be pursued in a future study in 3D with more complex crystal structures. We expect to observe a gradual loss of coherency as GP-zones grow in size, as dictated by the energy arguments.
We also note that we expect our results to hold qualitatively in 3D, since the same type of elastic effects are expected to appear around the dislocations regardless of their dimension and any possible partial splitting of dislocations around the clusters. \section{Summary} In summary, we showed that the alloy phase field crystal model of Ref.~\cite{greenwood11-2}, which stabilizes different crystal structures, can be used to simulate and analyze the mechanisms of the clustering phenomenon in the bulk lattice of quenched/aged alloys. In accordance with the existing experimental observations, our simulations suggest that quenched-in defects, such as dislocations, significantly lower the energy barrier for nucleation of clusters. Furthermore, analyses of the overall system energy and local energy changes reveal that the formation and growth of sub-critical clusters are thermodynamically favorable in conjunction with quenched-in mobile dislocations. Consistent with existing experiments, our simulations shed significant light on the elusive energetic mechanism of the growth and enrichment of early clusters, which are the precursors of bulk precipitation. \begin{acknowledgements} We acknowledge the financial support received from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Ontario Ministry of Research and Innovation (Early Researcher Award Program) and the Clumeq High Performance Centre. \end{acknowledgements}
\section{INTRODUCTION\label{sec:intro}} Jupiter's regular satellites have nearly coplanar orbits with small eccentricities, and probably originated in an orbiting circumplanetary disk of dust and gas --- a Solar nebula in miniature \citep{1982Icar...52...14L}. As Jupiter approached its present mass, its tides opened a gap in the solar nebula \citep{1986ApJ...307..395L, 1993prpl.conf..749L}. Incoming gas then had too much angular momentum to fall directly onto the planet, and instead went into orbit, forming a circumjovian disk \citep{1999ApJ...526.1001L}. The disk governed the flow of material to the planet and provided the environment in which the satellites formed. A key question is therefore how quickly the orbital angular momentum was redistributed within the disk, allowing some material to accrete on the planet and some to spiral outward where it may have been removed by Solar gravity or by photoevaporation. Also, was the flow laminar or turbulent? Did the released gravitational potential energy become heat in the interior, or was it dissipated in the disk atmosphere? And what did the resulting internal temperatures, densities and flow fields mean for the processing of the moon-forming ices and silicates? As with the much larger disks orbiting young stars \citep{1974MNRAS.168..603L, 1981ARA&A..19..137P, 1995ARA&A..33..199B, 2011ARA&A..49..195A}, circumplanetary disks' evolution is controlled by the transport of orbital angular momentum. In other astrophysical disks, magnetic forces carry angular momentum outward in the turbulence resulting from magneto-rotational instability or MRI \citep{1991ApJ...376..214B, 1998RvMP...70....1B}. The instability can work only if the disk material is ionized enough to couple to the magnetic fields. In the disks around young giant planets, as in protostellar disks, the low temperatures mean thermal ionization is ineffective except very near the central body. 
Ionization by radioactive isotopes' decay, lightning, bolide impacts and planetesimal ablation is also weak \citep{1996Icar..123..404T}. Adequate ionization might be produced by interstellar cosmic rays if not for rapid recombination on the surfaces of dust grains \citep{2011ApJ...743...53F}. Our purpose here is to find whether the disk around Jupiter is ionized enough for MRI turbulence if an additional ionization process is considered: the X-rays from the young Sun \citep{1999ApJ...518..848I}. Below we compute the magnetic coupling, which depends on the ionization state, which in turn depends on the distribution of densities and temperatures. A variety of models has been proposed for the circumjovian disk. We consider typical examples from two broad classes. In the minimum-mass models \citep{1982Icar...52...14L, 2003Icar..163..198M}, all the ingredients for the satellites are present from an early stage. The gases are eventually dispersed while all the solids are incorporated into the satellites. The disk surface density, obtained by augmenting the rock and ice of the Galilean satellites with gases to Solar or near-Solar composition, is about $10^7$~g~cm$^{-2}$ at the surface of the planet with a power-law radial falloff. The large mass column means few cosmic rays or X-rays penetrate the interior. Recombination is rapid, and the minimum-mass disk couples poorly to magnetic fields \citep{1996Icar..123..404T}. The second class of models is gas-starved \citep{2002AJ....124.3404C, 2006Natur.441..834C}. Gas and dust trickle into the disk from the surrounding Solar nebula. While some of the solids accumulate into larger solid bodies, much material is lost to the planet through the effective viscosity of the gas and the gravitational torques exerted by the gas on the proto-satellites. In this picture, today's moons are the last generation to form before the gas dispersed. 
The disk surface densities in this model are less than $1\,000$~g~cm$^{-2}$, low enough that some cosmic rays can reach the midplane \citep{2011ApJ...743...53F}. While disks of gas and dust have been used to explain the moons of both Jupiter and Saturn \citep{2010ApJ...714.1052S}, other classes of model may be required given that Jupiter has four large satellites with a gradient in density, while Saturn has just one large satellite. Saturn's smaller inner moons may have grown from ring particles transported out across the Roche limit, with the more distant experiencing more mergers \citep{2012Sci...338.1196C}. However it is unclear whether Titan formed the same way. Another mechanism, gas-poor planetesimal capture \citep{1986sate.conf...89S, 2006Icar..181..486E}, involves collisions among a swarm of planetesimals. This picture has not so far yielded a quantitative accounting for the Galilean moons' large masses and their decrease in density with distance from the planet. We therefore focus on a circumplanetary gas and dust disk as the most promising model for the origins of Jupiter's large moons. The paper is laid out as follows. The minimum-mass circumplanetary disk models are described in section~\ref{sec:mmcjd}, the gas-starved models in section~\ref{sec:gssn}. The chemical reaction network used to compute the magnetic diffusivities is laid out in section~\ref{sec:ionization} and the MRI turbulence criteria in section~\ref{sec:mri}. The resulting distributions of magnetic activity in the disks are shown in section~\ref{sec:deadzones}. Implications for the evolution of the dust and the growth of satellites are discussed in section~\ref{sec:solids}, and our conclusions are presented in section~\ref{sec:conclusions}. \section{MINIMUM MASS CIRCUMJOVIAN DISK MODELS \label{sec:mmcjd}} The minimum-mass models of the circumjovian disk are built in a similar way to minimum-mass Solar nebula models. 
The satellite system's mass of $2.1\times10^{-4} M_J$ is combined with enough hydrogen and helium to reach Solar composition. The resulting disk has a few percent of Jupiter's mass $M_J$, and extends from inside the present orbit of Io at $5.9 R_J$ to at least the orbit of Callisto at $26 R_J$ (where $R_J$ is the radius of Jupiter). Temperatures in the disk's outer reaches must remain below the water sublimation threshold to account for the ice-rich makeup of Ganymede and Callisto. The release of gravitational energy as disk material spirals toward the planet may raise temperatures too high unless the accretion-stress-to-gas-pressure ratio $\alpha<10^{-5}$ \citep{2003Icar..163..198M}. Stresses near or above this danger level potentially arise from the damping of the wakes raised in the disk gas by satellitesimals \citep{2001ApJ...552..793G} and from the stellar tides periodically forcing the disk \citep{2012A&A...548A.116R}. However in this paper we focus on whether magnetic forces can yield still larger stresses. Another constraint comes from observing that Callisto appears to be only partly differentiated \citep[moment of inertia $I/MR^2 \approx 0.355$;][]{2001Icar..153..157A} though we note that it would be desirable to have the partly-differentiated interpretation confirmed \citep{1997Icar..130..540M, 2013Icar..226.1185G}. Keeping ice and rock mixed is feasible only if the ice never melted during the moon's assembly. The gravitational potential energy of the component parts must then have been released as heat over a period of 0.6~Myr or longer \citep{2008Icar..198..163B}. To slow Callisto's growth, it may be helpful to drop the circumjovian disk's surface density sharply between Ganymede and Callisto \citep{2003Icar..163..198M}. 
Finally, to avoid its ice melting in the heat released by short-lived radionuclide decay, Callisto also must have finished accreting at least 4~Myr after the formation of the refractory calcium-aluminum-rich inclusions \citep{2008Icar..198..163B}. The raw materials must persist in orbit around Jupiter until at least this date. Each circumjovian disk model is specified by the radial profiles of gas surface density $\Sigma(r)$, solids-to-gas mass ratio $\phi(r)$ and midplane temperature $T_c(r)$. From these we obtain the scale height and density using \begin{equation}\label{eq:scaleheight} H(r) = c_s/\Omega = \left({\cal R} T_c r^3 \over \mu G M_J\right)^{1/2} \end{equation} and \begin{equation} \rho(r,z) = {\Sigma(r) \over \sqrt{2\pi} H} \exp[-z^2/(2 H^2)], \end{equation} where $c_s$ is the isothermal sound speed, $\Omega$ the orbital frequency, ${\cal R}$ the gas constant and $\mu=2.3$ the mean molecular weight, and the density varies with the cylindrical coordinates $(r, z)$. For each model we consider versions in which the solids (1) take the form of sub-micron dust grains, and (2) are locked up in bodies of 1~cm or larger. Particles this big are few enough that their combined cross-section for recombination is too low to affect the abundances of free charges. We set the sub-micron grains' dust-to-gas mass ratio $\epsilon(r)$ equal to $\phi(r)$ in the first, dusty case and zero in the second, dust-free case. \subsection{Takata \& Stevenson (1996) Model --- MM96} We include a minimum-mass model very similar to that used by \cite{1996Icar..123..404T} to facilitate comparison with their ionization results. This model, which we call MM96, has the simple surface density profile \begin{equation} \Sigma(r) = \Sigma_0 (R_J/r) \end{equation} with $\Sigma_0 = 10^7$\,g\,cm$^{-2}$, and the temperature profile \begin{eqnarray} T_c(r) &=& 3600 (R_J/r)\, {\rm K} \qquad {\rm for}\ r/R_J \le 30 \\ &=& 120\, {\rm K} \qquad {\rm for}\ r/R_J \ge 30. 
\end{eqnarray} The model differs from \cite{1996Icar..123..404T} in that we compute the density scale height from the temperature via eq.~\ref{eq:scaleheight}, yielding $H=0.086 r$ within $30R_J$ and $H\sim r^{3/2}$ beyond, while they simply took $\sqrt{2} H\approx 0.1 r$. We have checked that the two density distributions yield similar magnetic diffusivities under X-ray ionization and dust surface recombination. Our MM96 model includes a 1\% mass fraction of solid material. The surface density and midplane temperature profiles of the MM96 model are plotted in figure~\ref{fig:sd} along with those of the six other models described below. \begin{figure}[htb!] \epsscale{0.54} \plotone{f1a.pdf}\\ \plotone{f1b.pdf} \caption{\sf The seven subnebula models' radial profiles of surface density (top) and midplane temperature (bottom). The minimum-mass models appear at left, the gas-starved models in the center and right panels. Each model's gas and dust surface densities are shown by two matching curves, with the gas the larger one. The three minimum-mass models at left are MM96 (solid), MM03 (long-dashed) and SEMM (dotted). Note that the MM03 and SEMM models overlap throughout in solid surface density and temperature. The gas-starved models at center differ in opacity and are K0 (solid) and K-4 (dashed), while those at right differ in the growth timescale and are TG50 (solid) and TG05 (dashed). Grey shading in the surface density panels indicates the range penetrated by X-rays (darker) and cosmic rays (lighter). The shadings in the temperature panels indicate the approximate water ice stability range at the minimum (darker) and maximum (lighter) pressures found in the gas-starved models \citep{2002AJ....124.3404C}. 
\label{fig:sd}} \end{figure} \subsection{Mosqueira \& Estrada (2003) Model --- MM03} The MM03 model has a more complex surface density profile, \begin{eqnarray} \Sigma(r) &=& \Sigma_{\rm in}^0 (R_{\rm in}/r) \qquad {\rm for}\ r \le r_1 \\ &=& a_1 (r_1/r)^{b_1} \qquad {\rm for}\ r_1 \le r \le r_2 \\ &=& \Sigma_{\rm out}^0 (R_{\rm out}/r) \qquad {\rm for}\ r \ge r_2 \end{eqnarray} where $\Sigma_{\rm in}^0 = 51 \times 10^4$\,g\,cm$^{-2}$, $\Sigma_{\rm out}^0 = 0.31 \times 10^4$\,g\,cm$^{-2}$, $r_1 = 20 R_J$, $r_2 = 26 R_J$, $R_{\rm in} = 14 R_J$, $R_{\rm out} = 87 R_J$, \begin{equation} a_1 = \Sigma_{\rm in}^0 R_{\rm in}/r_1 = 35.7 \times 10^4\,{\rm g}\,{\rm cm}^{-2}, \end{equation} and \begin{equation} b_1 = \ln[(\Sigma_{\rm in}^0 R_{\rm in} r_2)/(\Sigma_{\rm out}^0 R_{\rm out} r_1)]/\ln(r_2/r_1) = 13.4871. \end{equation} Note that the values of $a_1$ and $b_1$ in Table~2 of \cite{2003Icar..163..198M} are not exact and that there is a missing $r_1$ in their eq.~6 for $\Sigma$ in the transition region. The temperature in Kelvins, based on fitting their Figure~3, is \begin{eqnarray} T_c(r) &=& 3750 (R_J/r) \qquad {\rm for}\ r/R_J \le 23 \\ &=& 655.55 (r/R_J)^{-1/2} + 26.35 \qquad {\rm for}\ 23 \le r/R_J \le 40 \\ &=& 130 \qquad {\rm for}\ r/R_J \ge 40. \end{eqnarray} Like MM96, our MM03 model includes a 1\% mass fraction of solid material. \subsection{Solids-Enhanced Minimum Mass Model --- SEMM} The solids-enhanced minimum mass model preferred by \cite{2003Icar..163..232M} and \cite{2009euro.book...27E} differs from the MM03 model in having 90\% of the gas removed within $r_1 = 20 R_J$. The gas surface density is unchanged outside $r_2 = 26 R_J$, and in the transition zone between 20 and $26 R_J$ varies smoothly as $3.57\times 10^4\left(r/r_1\right)^{-4.711}$~g~cm$^{-2}$. The surface density of the solids is left unchanged throughout. In this sense, the model is not solids-enhanced but gas-depleted. 
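The quoted MM03 parameters and the SEMM transition-zone profile can be cross-checked for continuity directly from the numbers above; a short arithmetic sketch (surface densities in g\,cm$^{-2}$, radii in units of $R_J$):

```python
import math

# Consistency check of the MM03 surface-density profile parameters and the
# SEMM transition-zone profile, using the values quoted in the text.
Sig_in0, Sig_out0 = 51e4, 0.31e4      # g/cm^2
R_in, R_out = 14.0, 87.0              # R_J
r1, r2 = 20.0, 26.0                   # R_J

a1 = Sig_in0 * R_in / r1              # should be 35.7e4 g/cm^2
b1 = (math.log((Sig_in0 * R_in * r2) / (Sig_out0 * R_out * r1))
      / math.log(r2 / r1))            # should be 13.4871

# The bridge a1*(r1/r)^b1 joins the inner profile at r1 by construction;
# it must also join the outer profile Sig_out0*(R_out/r) at r2:
bridge_at_r2 = a1 * (r1 / r2) ** b1
outer_at_r2 = Sig_out0 * R_out / r2

# SEMM: 90% of the gas removed inside r1, bridged by 3.57e4*(r/r1)^(-4.711),
# which should meet 0.1*a1 at r1 and the unchanged outer profile at r2:
semm_at_r1 = 3.57e4
semm_at_r2 = 3.57e4 * (r2 / r1) ** (-4.711)
```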
\section{IMPROVED GAS-STARVED SUBNEBULA MODEL \label{sec:gssn}} In the gas-starved models, only a fraction of the material needed to form the satellites orbits the planet at any given instant. The subnebula is replenished by the slow inflow of gas and solids after Jupiter opens a gap in the Solar nebula. An approximate overall balance between the growth of new satellites and loss by migrating into the planet regulates the mass fraction of the satellite system to $\sim 10^{-4}$ \citep{2006Natur.441..834C}. Gas-starved models are constructed assuming material from the Solar nebula falls steadily on the circumplanetary disk \citep{2002AJ....124.3404C}. Hydrodynamical calculations treating the vertical structure show the solar nebula gas approaches the planet and its disk from above and below \citep{2008ApJ...685.1220M, 2012ApJ...747...47T, 2012MNRAS.427.2597A}. The circumplanetary disk structure is insensitive to the distribution of the injected Solar nebula gas once a steady-state is reached, depending instead on the disk's angular momentum balance \citep{2011MNRAS.413.1447M}. This contrasts with the minimum-mass models, where the size is fixed by the angular momentum of the gas at the time the disk is assembled. Orbital angular momentum is transferred through the gas-starved subnebula by an unspecified process that yields accretion stresses equal to a constant, $\alpha$, times the gas pressure \citep{1973A&A....24..337S}. The temperature is determined by the resulting release of gravitational energy, together with the illumination from Jupiter and from the surrounding Solar nebula, balanced by radiative losses. Regarding the circumplanetary disk's size, we can say that the outer edge lies within 40\% of the planet's Hill radius $r_H$, since at greater distances the stellar tide is strong and periodic ballistic orbits cross \citep{2011MNRAS.413.1447M}. A smaller disk can expand to $0.4r_H$ under magnetic stresses \citep{2013MNRAS.428.2668L}. 
On the other hand, photoevaporation is capable of truncating circumplanetary disks to a small fraction of the Hill radius \citep{2011AJ....142..168M}. The maximum size of $0.4r_H$ for Jupiter corresponds to about $300R_J$. Several further constraints apply to conditions inside Jupiter's disk. Ganymede's composition requires the water ice sublimation point to lie inside this moon's orbit when the last generation of satellites form. Slow growth of the planet before the Solar nebula starts to dissipate is ruled out because the stellar tides raise a two-armed spiral wave in hydrodynamical models of an inviscid circumplanetary disk, setting a floor on the accretion torques that yields a slowest allowed planet mass doubling time of 5~Myr \citep{2012A&A...548A.116R}. This suggests the maximum $\tau_G=10^8$~years used by \cite{2002AJ....124.3404C} applies only after the Solar nebula starts to dissipate. Yet another constraint comes from \cite{2005A&A...439.1205A} who found that rocky satellites within 10$R_J$ survive migration if $\alpha > 2\times 10^{-4}$ and temperatures remain low enough for long enough to form Callisto if $\alpha < 10^{-3}$. Note that the heating in their models is distributed in the disk interior. Releasing the heat in a magnetically-active surface layer at lower optical depth yields cooler midplane temperatures \citep{2011ApJ...732L..30H}. We therefore do not attempt to meet the last constraint. In the gas-starved models of \cite{2002AJ....124.3404C}, the inflowing material is assumed to be deposited uniformly in the region extending to distance $r_c$ from the planet, with the total rate of mass inflow equal to $F_\ast$. The gas component of the disk spreads viscously, both onto the planet and out to some assumed outer edge at $r_d$. Three parameters distinguish the gas-starved models of \cite{2002AJ....124.3404C}. 
These are the stress-to-pressure ratio $\alpha$; the opacity of the disk to its own radiation, assumed independent of temperature, density and position; and the rate at which mass falls on the planet, measured by the planet growth timescale $\tau_G=M_J/{\dot M_J}$ (where ${\dot M_J} \approx F_\ast$). We construct versions of these models with three improvements, (1) making the opacities temperature-dependent, (2) properly treating optically-thin disk annuli and (3) more accurately computing the illumination by Jupiter. Due to the first of these, we replace their constant opacity parameter $K$ by a dust depletion factor, for which we use the symbol $f_{\rm opac}=\epsilon/0.01$. The temperature-dependent opacities are taken from \cite{1994ApJ...421..615P}. The second and third improvements are made by using midplane temperatures from the analytic vertical structure model of \cite{1990ApJ...351..632H} for viscous dissipation and isotropic solar nebula irradiation, with the extension for irradiation by a central source (i.e.\ Jupiter) by \cite{2001A&A...379..515M}. The temperature-dependent opacity for undepleted grain composition is taken from Figure 6 of \cite{1994ApJ...421..615P}, which includes contributions from silicates, troilite, metallic iron, organics, and water ice. It increases from $0 \,{\rm cm}^2 \,{\rm g}^{-1}$ at $0\,$K to $6.50 \,{\rm cm}^2 \,{\rm g}^{-1}$ at $174\,$K, and shows multiple local minima and maxima at higher temperatures, ranging from $1.95 \,{\rm cm}^2 \,{\rm g}^{-1}$ at $700\,$K to $6.28 \,{\rm cm}^2 \,{\rm g}^{-1}$ at $425\,$K. The product of the disk gas surface density $\Sigma$ and viscosity $\nu$ is determined by the mass inflow model and independent of the vertical structure. 
So \begin{equation} \nu \Sigma = {4 F_* \over 15 \pi} \cases{ {{\textstyle 5} \over {\textstyle 4}} - \sqrt{{\textstyle r_c} \over {\textstyle r_d}} - {{\textstyle 1} \over {\textstyle 4}} \left({\textstyle r} \over {\textstyle r_c}\right)^2 & for $r < r_c$ , \cr & \cr \sqrt{{\textstyle r_c} \over {\textstyle r}} - \sqrt{{\textstyle r_c} \over {\textstyle r_d}} & for $r \ge r_c$ ,} \end{equation} as in \cite{2002AJ....124.3404C}. We also follow \cite{2002AJ....124.3404C} in adopting the $\alpha$ prescription for the viscosity: \begin{equation} \nu = \alpha c_s^2/\Omega, \end{equation} where $c_s$ is the midplane sound speed. We consider heating by viscous dissipation in the disk, incoming isotropic radiation at the ambient nebular temperature $T_{\rm neb}$, and incoming radiation from Jupiter. According to order-of-magnitude estimates, radial heat advection is unimportant. The irradiation from our central source, Jupiter, is highly directional, with the cosine of the characteristic angle (measured from the inward directed normal to the disk surface) at which the light enters the disk equal to \citep{1997ApJ...490..368C} \begin{equation} \mu_J = {4 \over 3\pi} \left(R_J \over r\right) + \left(H \over r\right) \left({d\ln H \over d\ln r} - 1\right) , \end{equation} and the flux intercepted by a surface element (either top or bottom) of the disk equal to \begin{equation} 4\pi H_J(0) = \left(\mu_J \over 2\right) \left(R_J \over r\right)^2 \sigma_{\rm SB} T_J^4 , \label{eq:HJ} \end{equation} where $\sigma_{\rm SB}$ is the Stefan-Boltzmann constant. Equation (\ref{eq:HJ}) was derived by \cite{1991ApJ...375..740R} in the limit that $H/r \ll 1$ and $r \gg R_J$ (see also \citealt{1970PThPh..44.1580K}). It differs from $4\pi H_J(0) = (dH/dr) (R_J/r)^2 \sigma_{\rm SB} T_J^4$ used by \cite{2002AJ....124.3404C}. Energy balance requires the outgoing flux on the disk surface to equal the sum of the incoming flux and the emission from viscous dissipation.
Thus the {\it net} (outgoing minus incoming) flux on the surface of the disk is just the emission from viscous dissipation. We define the accretion temperature $T_d$ in terms of the net flux: \begin{equation} 2 \sigma_{\rm SB} T_d^4 = {9 \over 4} \Omega^2 \nu \Sigma . \end{equation} To determine the midplane and surface temperatures, we use the analytic model of the vertical structure developed by \cite{1990ApJ...351..632H} for the treatment of viscous dissipation and isotropic irradiation by the ambient nebula, and we use the extension of this model by \cite{2001A&A...379..515M} for the treatment of irradiation by a central source (Jupiter in our case). We use the simplest form of this model with the following approximations. We assume that the different forms of mean opacities are all equal to the Rosseland mean opacity. We use the same mean opacity for the disk's own radiation, the radiation from the ambient nebula, and the radiation from Jupiter. We assume that the extinction is dominated by absorption. Then the temperature at optical depth $\tau$ from the surface is given by (see eq.~3.11 of \citealt{1990ApJ...351..632H} and eq.~61 of \citealt{2001A&A...379..515M}) \begin{eqnarray}\label{eq:Ttau} T^4(\tau) &=& {3 \over 4} \left[\tau \left(1 - {\tau \over 2 \tau_c}\right) + {1 \over \sqrt{3}} + {1 \over 3 \tau_c}\right] T_d^4 + T_{\rm neb}^4 \cr & & \cr & & + {3 \over 4} \left[\mu_J \left(1 - e^{-\tau/\mu_J}\right) + {1 \over \sqrt{3}} + {1 \over 3\mu_J} e^{-\tau/\mu_J}\right] \left(4\pi H_J(0) \over \sigma_{\rm SB}\right), \end{eqnarray} where $\tau_c$ is the optical depth to the midplane: \begin{equation} \tau_c = \kappa \Sigma/2, \end{equation} and we use for $\kappa$ the Rosseland mean opacity at the midplane temperature.
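The vertical profile of Eq.~(\ref{eq:Ttau}) is straightforward to evaluate numerically; the sketch below uses arbitrary illustrative parameter values (not taken from any of our disk models) and checks that setting $\tau=\tau_c$ reproduces the midplane combination $(3/4)[\tau_c/2 + 1/\sqrt{3} + 1/(3\tau_c)]\,T_d^4$ plus the nebula and Jupiter irradiation terms.

```python
import math

# Sketch of the vertical temperature profile T(tau) of Eq. (eq:Ttau),
# with arbitrary placeholder parameter values (not fitted to any model).
SQRT3 = math.sqrt(3.0)

def T4(tau, tau_c, mu_J, Td, Tneb, TJ, RJ_over_r):
    """T^4 at optical depth tau from the surface (Eq. eq:Ttau)."""
    jup_flux = (mu_J / 2.0) * RJ_over_r**2 * TJ**4   # 4*pi*H_J(0)/sigma_SB
    visc = 0.75 * (tau * (1.0 - tau / (2.0 * tau_c))
                   + 1.0 / SQRT3 + 1.0 / (3.0 * tau_c)) * Td**4
    irr = 0.75 * (mu_J * (1.0 - math.exp(-tau / mu_J))
                  + 1.0 / SQRT3
                  + math.exp(-tau / mu_J) / (3.0 * mu_J)) * jup_flux
    return visc + Tneb**4 + irr

# At tau = tau_c the profile must reduce to the midplane form:
# (3/4)[tau_c/2 + 1/sqrt(3) + 1/(3*tau_c)] Td^4 + Tneb^4 + Jupiter term.
tau_c, mu_J, Td, Tneb, TJ, x = 10.0, 0.02, 120.0, 150.0, 500.0, 0.1
midplane = (0.75 * (tau_c / 2.0 + 1.0 / SQRT3 + 1.0 / (3.0 * tau_c)) * Td**4
            + Tneb**4
            + 0.75 * (mu_J * (1.0 - math.exp(-tau_c / mu_J)) + 1.0 / SQRT3
                      + math.exp(-tau_c / mu_J) / (3.0 * mu_J))
            * (mu_J / 2.0) * x**2 * TJ**4)
```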
From eq.~\ref{eq:Ttau}, the midplane temperature (at $\tau = \tau_c$) is given by \begin{eqnarray}\label{eq:Tctau} T_c^4 &=& {3 \over 4} \left[{\tau_c \over 2} + {1 \over \sqrt{3}} + {1 \over 3 \tau_c}\right] T_d^4 + T_{\rm neb}^4 \cr & & \cr & & + {3 \over 4} \left[\mu_J \left(1 - e^{-\tau_c/\mu_J}\right) + {1 \over \sqrt{3}} + {1 \over 3\mu_J} e^{-\tau_c/\mu_J}\right] \left(\mu_J \over 2\right) \left(R_J \over r\right)^2 T_J^4 , \end{eqnarray} while the surface temperature (at $\tau = 0$) is given by \begin{equation} T_s^4 = {3 \over 4} \left[{1 \over \sqrt{3}} + {1 \over 3 \tau_c}\right] T_d^4 + T_{\rm neb}^4 + {1 \over 8} \left(R_J \over r\right)^2 T_J^4 \end{equation} for $\mu_J \ll 1$. The term in Equation (\ref{eq:Tctau}) with $T_d^4$ due to viscous heating differs from the expression used by \cite{2002AJ....124.3404C} in the numerical coefficients and especially in the $1/\tau_c$ dependence in the optically thin limit ($\tau_c \ll 1$). \cite{2009euro.book...59C} corrected the \cite{2002AJ....124.3404C} expression for viscous heating for the optically thin regime, but their expression also differs from Equation (\ref{eq:Tctau}) in the numerical coefficients. If $\tau_c \ll \mu_J \ll 1$, \begin{equation} T_c^4 \approx T_s^4 \approx {1 \over 4\tau_c} T_d^4 + T_{\rm neb}^4 + {1 \over 8} \left(R_J \over r\right)^2 T_J^4 . \end{equation} If $\tau_c \ll 1$ but $\tau_c \not\ll \mu_J$, the last term of eq.~\ref{eq:Tctau} cannot be reduced to $(1/8) (R_J/r)^2 T_J^4$. If $\tau_c \gg 1$, \begin{equation} T_c^4 \approx {3 \tau_c \over 8} T_d^4 + T_{\rm neb}^4 + {\sqrt{3} \over 4} \left(\mu_J \over 2\right) \left(R_J \over r\right)^2 T_J^4 , \end{equation} and \begin{equation} T_s^4 \approx {\sqrt{3} \over 4} T_d^4 + T_{\rm neb}^4 + {1 \over 8} \left(R_J \over r\right)^2 T_J^4 . 
\end{equation} We consider four gas-starved models, two of which are similar to the high ($K=1 \,{\rm cm}^2 \,{\rm g}^{-1}$) and low ($K=10^{-4} \,{\rm cm}^2 \,{\rm g}^{-1}$) opacity models considered by \cite{2002AJ....124.3404C}. For the $f_{\rm opac}=1$ model, which is optically thick out to $\sim 60 R_J$, we only need to decrease $\tau_G$ slightly to produce surface density and midplane temperature profiles that are similar to those shown in Figure~6 of \cite{2002AJ....124.3404C}. For the $f_{\rm opac}=10^{-4}$ model, almost the entire subnebula is optically thin, and we obtain significantly higher midplane temperatures with our improved vertical structure model (which radiates away the accretion power inefficiently in the optically thin regime), if we take the parameters from Figure 5 of \cite{2002AJ....124.3404C}. To place the ice sublimation front near Ganymede's orbit, we therefore use a lower stress parameter $\alpha$ and a longer growth timescale $\tau_G$. Since the improved and \citet{2002AJ....124.3404C} models have similar surface density and midplane temperature profiles after suitably adjusting $\alpha$ and $\tau_G$, the models are expected to have similar ionization states. However the satellites' orbital migration as they interact with the disk can be very different in the improved models, due to sharp jumps in the local power-law indices of the surface density and midplane temperature profiles resulting from the temperature-dependent opacity (Li \& Lee, in preparation). In addition, we consider a scenario where the total rate of mass inflow to the disk is initially nearly constant at $F_\ast(0)$ and then decays exponentially with time, i.e., $F_\ast(t) = F_\ast(0) \exp(-t/\tau_{\rm in})$, due to the dispersal of the Solar nebula \citep{2006Natur.441..834C, 2008Icar..198..163B, 2012ApJ...753...60O}. During the exponential decay, the total mass delivered to the disk after time $t$ is $F_\ast(t) \tau_{\rm in}$. 
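The statement that the mass still to be delivered after time $t$ equals $F_\ast(t)\,\tau_{\rm in}$ is just the integral of the exponential tail; a quick numerical check, in arbitrary units:

```python
import math

# Exponentially decaying inflow: F(t) = F0 * exp(-t / tau_in).
# Mass delivered after time t is the integral of F from t to infinity,
# which for an exponential equals F(t) * tau_in.
F0, tau_in = 1.0, 1.0e6   # arbitrary units; tau_in ~ 1 Myr in years

def inflow(t):
    return F0 * math.exp(-t / tau_in)

def mass_after(t, n=100000):
    """Midpoint-rule integral of F from t to t + 20 tau_in (tail negligible)."""
    dt = 20 * tau_in / n
    return sum(inflow(t + (i + 0.5) * dt) for i in range(n)) * dt

t = 2.5e6
```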
The last generation of satellites has total mass $M_T$ and forms after time $t_s$, where $F_\ast(t_s) \tau_{\rm in} = M_T/\Phi$ and the overall solids-to-gas ratio in the inflow $\Phi \approx 0.01$. Note that $\Phi$ is a surface-integrated measure of the infalling material and need not match $\phi(r)$, the disk's internal radial profile. So $\tau_G(t_s) = M_J/F_\ast(t_s) = (M_J\Phi/M_T) \tau_{\rm in}$. For the jovian satellites, if $\tau_{\rm in} \approx 1\,$Myr, $\tau_G(t_s) \approx 50\,$Myr. Thus we consider two models with $\tau_G (0) = 5\,$Myr and $\tau_G (t_s) = 50\,$Myr. The four specific gas-starved models we construct have the parameters listed in Table~\ref{tab:gs}, as well as $r_c = 30 R_J$, $r_d = 150 R_J$, $T_{\rm neb} = 150\,{\rm K}$, and $T_J = 500\,{\rm K}$, and their surface density and midplane temperature profiles are shown in Figure~\ref{fig:sd}. Treating the opacities' temperature dependence leads to several new features. For example, the steep $T_c$ drop near $27R_J$ in the K-4 model results from a sharp decline in the opacity at 174~K where water ice sublimates. The model is marginally optically-thin at this location. Temperatures are significantly higher in the $\tau_G (0) = 5\,$Myr model, which is allowed as there are no compositional constraints on the earlier generations of satellites lost by migration into the planets. \begin{table}[tb] \caption{\sf The four gas-starved subnebula models. 
\vspace*{3mm} \label{tab:gs}} \begin{center} \begin{tabular}{llrl} \hline Name & $\alpha$ & $\tau_G$ (Myr) & $f_{\rm opac}$ \\ \hline \hline K0 & 0.005 & 70 & 1 \\ K$-$4 & 0.0009 & 20 & 10$^{-4}$ \\ TG05 & 0.001 & 5 & 0.1 \\ TG50 & 0.001 & 50 & 0.1 \\ \hline \end{tabular} \end{center} \end{table} \section{IONIZATION STATE\label{sec:ionization}} \subsection{Chemical Network\label{sec:network}} The ionization state is calculated by integrating a chemical network treating in simplified form the most important gas-phase pathways: molecular ionization, dissociative molecular recombination, charge transfer to metal atoms, and radiative recombination of metal ions. The closed and balanced set of reactions related to the representative molecular ion HCO$^+$ is \begin{eqnarray} {\rm H}_2 + X &\rightarrow& {\rm H}_2^+ + e^-\\ {\rm H}_2^+ + {\rm H}_2 &\rightarrow& {\rm H}_3^+ + {\rm H}\\ {\rm H}_3^+ + {\rm CO} &\rightarrow& {\rm HCO}^+ + {\rm H}_2\\ 2 {\rm H} + G &\rightarrow& {\rm H}_2 + G\\ {\rm HCO}^+ + e^- &\rightarrow& {\rm CO} + {\rm H} \label{eq:dr} \end{eqnarray} Here every species (except the X-rays, $X$ and grains, $G$) is created in at least one reaction and destroyed in at least one other. Over the whole set, no species is produced or consumed on balance. The subset producing the ions and electrons, eqs. 26-29, boils down to \begin{equation} 2{\rm H}_2 + 2X + 2{\rm CO} \rightarrow {\rm H}_2 + 2{\rm HCO}^+ + 2 e^- \label{eq:molecularion}. \end{equation} That is, each X-ray striking a hydrogen molecule yields one ion and one electron. We follow \citet{2006A&A...445..205I} and others in approximating eqs.~\ref{eq:molecularion} and~\ref{eq:dr} by \begin{eqnarray} {\rm H}_2 + X &\rightarrow& {\rm HCO}^+ + e^-\label{eq:mi}\\ {\rm HCO}^+ + e^- &\rightarrow& {\rm H}_2,\label{eq:dr2} \end{eqnarray} neglecting the fact that the molecular ion holds just one hydrogen atom. 
This approximation is justified because HCO$^+$ is orders of magnitude less abundant than H$_2$, so forming the ions leaves the H$_2$ density essentially unchanged. Similarly, we do not follow CO destruction and reformation, since the ion is so much less abundant than the molecule. To eqs.~\ref{eq:mi} and~\ref{eq:dr2} we add the charge transfer and radiative recombination reactions involving the representative metal magnesium: \begin{eqnarray} {\rm HCO}^+ + {\rm Mg} &\rightarrow& {\rm HCO} + {\rm Mg}^+\label{eq:ct}\\ {\rm Mg}^+ + e^- &\rightarrow& {\rm Mg} + h\nu. \end{eqnarray} The product radical in eq.~\ref{eq:ct} readily breaks apart into H and CO. Simplifying by again taking into account the large abundances of H$_2$ relative to H, and CO relative to HCO$^+$, we arrive at the reduced network \begin{eqnarray} {\rm H}_2 + X &\rightarrow& {\rm HCO}^+ + e^-\\ {\rm HCO}^+ + e^- &\rightarrow& {\rm H}_2\label{eq:molrec}\\ {\rm HCO}^+ + {\rm Mg} &\rightarrow& {\rm H}_2 + {\rm Mg}^+\\ {\rm Mg}^+ + e^- &\rightarrow& {\rm Mg} + h\nu. \end{eqnarray} Whether the carbon is oxidized or reduced makes little difference here because (1) the rate coefficient for pathway~\ref{eq:molrec} at $3\times 10^{-6}/\sqrt{T}$~cm$^3$~s$^{-1}$ \citep{2006A&A...445..205I} is similar to that for the corresponding methane ion, $10^{-6}$~cm$^3$~s$^{-1}$ \citep{1996Icar..123..404T}, and (2) in any case, gas-phase recombination proves less important than the grain surface pathway near the dead zone boundary in our cases with dust. Also treated in the network are grain charging and discharging \citep{2006A&A...445..205I} through collisions with ions and electrons, and charge exchange in grain-grain collisions. Grain charges from $-2$ to $+2$ are considered. Additionally, the metal atoms are allowed to thermally adsorb on and desorb from the grains.
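The equilibrium of the reduced network can be made concrete by balancing each ion's production and loss under charge neutrality. The sketch below is illustrative only: the dissociative recombination coefficient is the text's $3\times10^{-6}/\sqrt{T}$, but the charge-transfer and radiative-recombination coefficients and the densities are assumed placeholder values, and grain processes are omitted.

```python
import math

# Illustrative parameters (placeholders, not taken from the disk models)
n_H2  = 1.0e10                  # cm^-3
zeta  = 1.0e-17                 # ionizations per H2 per second
T     = 100.0                   # K
beta  = 3.0e-6 / math.sqrt(T)   # HCO+ dissociative recombination (text value)
k_ct  = 3.0e-9                  # HCO+ + Mg charge transfer (assumed typical rate)
alpha = 3.0e-11                 # Mg+ radiative recombination (assumed)
n_Mg  = 7.4e-7 * n_H2           # 1% of Solar Mg in the gas, per H2

def residual(n_e):
    """Charge-neutrality residual at trial electron density n_e (cm^-3)."""
    n_m = zeta * n_H2 / (beta * n_e + k_ct * n_Mg)   # HCO+ balance
    n_M = k_ct * n_m * n_Mg / (alpha * n_e)          # Mg+ balance
    return n_m + n_M - n_e, n_m, n_M

# The residual decreases monotonically with n_e, so bisect in log space
lo, hi = 1e-6, 1e6
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if residual(mid)[0] > 0:
        lo = mid
    else:
        hi = mid
n_e = math.sqrt(lo * hi)
res, n_m, n_M = residual(n_e)
```

In this example the metal ions end up carrying nearly all the positive charge, illustrating why the gas-phase metal abundance matters for the ionization fraction.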
The reactions and their rate coefficients are described by \cite{2006A&A...445..205I}, with the electron sticking probabilities revised to include the grain charge following \cite{2011ApJ...739...50B}. The magnesium locked up inside grains is assumed to be 99\% of the Solar abundance of $3.7\times 10^{-5}$ per hydrogen atom, with the remaining 1\% available to participate in the recombination network, either in the gas phase or adsorbed on grain surfaces. The gas-phase magnesium abundance had little effect on the magnetic activity above a threshold level of $10^{-6}$ times Solar, in protostellar disk models by \citet{2007ApJ...659..729T}. We solve the kinetic equations describing the reaction network using a semi-implicit extrapolation method. While bringing the network to equilibrium we record the recombination time $t_{\rm rec}$ needed to reach an electron fraction within 1\% of the equilibrium value. We include monodisperse grains $a=0.1$~$\mu$m in radius with internal density $\rho_d=2$~g~cm$^{-3}$. This yields a geometric cross-section per unit dust mass similar to that of the size distribution used to compute the opacities by \cite{1985Icar...64..471P} and \cite{1994ApJ...421..615P}. Furthermore the same dust-to-gas ratios are used for the opacities and the grain surface recombination in the dusty versions of our gas-starved models. \subsection{Ionization Processes} The chemical reaction network is driven by the ionization from interstellar cosmic rays, radioisotope decay and protosolar X-rays. The cosmic rays yield an ionization rate $10^{-17}$~s$^{-1}$ well outside the Solar nebula. They strike our material isotropically over the upper hemisphere and their secondary particles are absorbed over a column 96~g cm$^{-2}$ following \cite{1981PASJ...33..617U} and \cite{2009ApJ...690...69U}. We consider two radioisotope ionization scenarios. 
Long-lived isotopes such as potassium-40 yield an ionization rate $6.9\times 10^{-23}(\epsilon/0.01)$~s$^{-1}$, while short-lived isotopes such as aluminum-26, if present, yield a much higher rate, $3.7\times 10^{-19}(\epsilon/0.01)$~s$^{-1}$, where $\epsilon$ is the dust-to-gas mass ratio \citep{1992Icar...97..130S, 1996Icar..123..404T, 2009ApJ...690...69U, 2009Icar..204..658C}. The Solar nebula for most of its lifetime blocks direct sightlines so that the protosolar X-rays reach the planet's vicinity entirely through scattering. Jupiter and its disk at one time lay in a gap in the Solar nebula \citep{1993prpl.conf..749L} and the geometry of the gap surely influenced the flux of X-rays reaching the planet. Furthermore, toward the end of the Solar nebula's evolution the gas interior to Jupiter's orbit cleared first, judging from the central holes observed in the so-called transitional systems found among protostellar disks today \citep{2005ApJ...630L.185C, 2010ApJ...708.1107M, 2011ApJ...732...42A}. Jupiter and surrounding material were then directly exposed to protosolar X-rays. However, lacking detailed information about the X-ray transfer in either of these geometries, we use the ionization rates vs.\ column in the Solar nebula derived from Monte Carlo transfer calculations by \citet{1999ApJ...518..848I}, taking the case with the 5~keV thermal spectrum from their figure~3 and scaling the luminosity to $2\times 10^{30}$~erg~s$^{-1}$, the median observed in young Solar-mass stars in the Orion Nebula Cluster \citep{2000AJ....120.1426G}. The scattered X-rays are absorbed in a column of about 8~g~cm$^{-2}$. The ionization rate contributions from all the non-thermal processes are shown as functions of the mass column in figure~\ref{fig:ioniz}. \begin{figure}[htb!]
\epsscale{0.6} \plotone{f2.pdf} \caption{\sf Ionization rates per H nucleus vs.\ the column of overlying material, resulting from X-rays (solid curve), cosmic rays (long-dashed curve) and short- and long-lived radionuclides (dashed and dotted lines). Only the X-rays and cosmic rays arriving from above are included. The X-ray ionization rate is from Monte Carlo calculations of the transfer through the Solar nebula of photons with a 5~keV thermal spectrum \citep{1999ApJ...518..848I}. The X-rays are extrapolated past 80~g~cm$^{-2}$ using the $e$-folding depth of their scattered component, 8~g~cm$^{-2}$ \citep{2008ApJ...679L.131T} while the cosmic rays' $e$-folding depth is 96~g~cm$^{-2}$ \citep{1981PASJ...33..617U, 2009ApJ...690...69U}. The radionuclide abundances correspond to the interstellar dust-to-gas mass ratio~$\epsilon=0.01$. \label{fig:ioniz}} \end{figure} Finally we treat the thermal ionization of the low-ionization-potential element potassium, which becomes important at the temperatures above $1\,000$~K reached inside Io's orbit \citep{1996Icar..123..404T}. We solve the Saha equation using potassium's ionization energy of 4.3407~eV and assuming 99\% of the Solar potassium abundance is locked up inside the grains, with the remaining 1\% in the gas and available for collisional ionization. The steep temperature dependence of the thermal ionization means the dead zone boundary is insensitive to this choice. \section{MAGNETO-ROTATIONAL TURBULENCE\label{sec:mri}} Analytic and numerical results indicate that the criterion for magneto-rotational instability to drive turbulence is \begin{equation}\label{eq:elsasser} \Lambda \equiv {v_{Az}^2\over\eta\Omega} > 1, \end{equation} where the dimensionless Elsasser number $\Lambda$ depends on the Alfv\'{e}n speed $v_{Az}$ for the vertical component of the magnetic field, along with the magnetic diffusivity $\eta$ and orbital frequency $\Omega$. 
This means the instability must grow faster than the magnetic fields can diffuse across its fastest-growing wavelength \citep{1996ApJ...457..798J, 1999ApJ...515..776S, 2001ApJ...561L.179S, 2002ApJ...577..534S, 2007ApJ...659..729T}. In the diffusivity we include the contributions from the induction equation's Ohmic and ambipolar terms, added in quadrature. A further requirement for the instability to grow near its top rate is that the background toroidal magnetic fields have a pressure less than the gas pressure \citep{2000ApJ...540..372K}. We compute the diffusivity $\eta$ including the current densities from all the charged species in the chemical network described in section~\ref{sec:ionization}, following eqs.~21 to~31 of \cite{2007Ap&SS.311...35W}. Both the Ohmic and ambipolar terms in the induction equation are included. Ohmic diffusion occurs at densities high enough for the main charged species to couple to the neutrals through collisions, while ambipolar drift is important when densities are low enough and collisions rare enough that the neutrals slip through the plasma which remains tied to the magnetic fields by Lorentz forces. At intermediate densities a third non-ideal effect, the Hall term, is important \citep{1999MNRAS.303..239W}. We neglect the Hall term because it affects the turbulence threshold and saturation level only slightly when comparable to the Ohmic term \citep{2002ApJ...577..534S}. However, dramatic effects appear in unstratified non-linear calculations when the Hall term dominates \citep{2013MNRAS.434.2295K}. Stratified calculations are urgently needed. The maximum possible accretion stress depends on the Elsasser number $\Lambda$. When the diffusivity is dominated by the ambipolar term, the stress can reach about 1\% of the gas pressure if $\Lambda\approx 1$, and 10\% if $\Lambda\approx 10$, according to 3-D unstratified shearing-box MHD results \citep{2011ApJ...736..144B}. 
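A pointwise evaluation of this criterion, with the Ohmic and ambipolar terms added in quadrature, takes only a few lines; the numbers below are illustrative, not drawn from any of the disk models.

```python
import math

def elsasser(v_Az, eta_O, eta_A, Omega):
    """Elsasser number Lambda = v_Az^2 / (eta * Omega), with the Ohmic
    and ambipolar diffusivities added in quadrature."""
    return v_Az**2 / (math.hypot(eta_O, eta_A) * Omega)

def mri_unstable(v_Az, eta_O, eta_A, Omega):
    """Turbulence criterion: Lambda > 1."""
    return elsasser(v_Az, eta_O, eta_A, Omega) > 1.0

# Illustrative cgs values: Alfven speed 0.1 km/s, orbital frequency 1e-4 /s
v_Az, Omega = 1.0e4, 1.0e-4
```

With these values and $\eta_{\rm O} = 10^{11}$~cm$^2$~s$^{-1}$, $\Lambda = 10$, enough by the stress scalings above for accretion stresses near 10\% of the gas pressure; raising either diffusivity term a hundredfold quenches the turbulence.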
The higher of these stress levels can occur starting one decade above our magnetic activity threshold, or one contour level in the plots in section~\ref{sec:deadzones} below. \subsection{Magnetic Fields} The magneto-rotational instability grows from initially-weak magnetic fields into long-lived turbulence in both local and global MHD calculations \citep{2000ApJ...534..398M, 2006A&A...457..343F}. Over a wide range of seed field strengths, the pressure in the fields' vertical component saturates between $10^{-4}$ and $10^{-2}$ times the midplane gas pressure \citep{2000ApJ...534..398M, 2006A&A...457..343F, 2010ApJ...708.1716S, 2010MNRAS.409.1297F, 2011ApJ...742...65O}. Owing to the fields' buoyancy, the magnetic pressure declines more slowly with height than the gas pressure. We therefore compute the Elsasser number at each point assuming that the pressure in the vertical component of the magnetic field is simply 0.1\% of the midplane gas pressure, independent of height. Note that this measures not just the net vertical or seed magnetic field delivered with the gas arriving from the Solar nebula, but the overall vertical field including the part generated locally in the turbulence. We seek places in the circumjovian disk where turbulence can be sustained. Choosing $10^{-3}$ for the midplane pressure ratio means the magnetic field's vertical component has pressure greater than the gas above 3.7~density scale heights. In saturated MRI turbulence, the toroidal magnetic field has a pressure at least ten times the vertical component \citep{2000ApJ...534..398M}, giving a total magnetic pressure exceeding the gas pressure above about $3H$. The MRI's linear growth rate is reduced at low plasma beta \citep{2000ApJ...540..372K} so turbulence would be increasingly weaker above this height. 
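The heights just quoted follow directly from the assumed height-independent field pressure together with the Gaussian gas pressure profile of a vertically isothermal layer; a quick numerical check:

```python
import math

def beta_unity_height(pressure_ratio):
    """Height (in scale heights H) where a height-independent magnetic
    pressure equal to `pressure_ratio` times the midplane gas pressure
    matches the gas pressure of an isothermal Gaussian atmosphere,
    P_gas(z) = P_mid * exp(-z^2 / (2 H^2))."""
    return math.sqrt(2.0 * math.log(1.0 / pressure_ratio))

z_vertical = beta_unity_height(1e-3)  # vertical field component alone
z_total    = beta_unity_height(1e-2)  # total field, ~10x the vertical pressure
```

These reproduce the 3.7$H$ crossing for the vertical component and the $\sim$3$H$ crossing for the total field.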
However, there would be less weakening if we included (1) the field strength's fall-off above a few scale-heights, and (2) the gas pressure profile's extended tail resulting from magnetic support. Both these effects increase the plasma beta over our simple picture, and both are observed in the 3-D numerical calculations cited above. Jupiter's magnetic field can safely be neglected since it is weaker than the MRI-generated fields in all our disks. This is true if the planet's field strength is 10~Gauss at its surface, located at 2$R_J$, and falls off like a dipole in proportion to the inverse cube of the radius. \section{DEAD ZONES\label{sec:deadzones}} \subsection{Tests} As a test, we begin by replicating the \cite{1996Icar..123..404T} findings under ionization by short-lived radioisotopes and cosmic rays. Their gas-phase reaction network differs from ours in lacking charge transfer to metal atoms, including instead charge transfer to methane, ammonia and water molecules. They consider grains 1~cm in radius, which contribute negligibly to the overall recombination cross-section. Our network can be made very similar to theirs by removing the dust and replacing the magnesium with a generic molecule having abundance $10^{-3}$ per hydrogen atom and recombination rate coefficient $10^{-6}$~cm$^3$~s$^{-1}$. The resulting ionization fractions in the MM96 model disk closely follow figures~4(b), (c), and (f) of \cite{1996Icar..123..404T}. Next we restore the gas-phase reaction network described in section~\ref{sec:ionization}, including the metal ions, which are long-lived due to their much smaller recombination coefficient. The corresponding ionization fractions in the disk interior are about three orders of magnitude greater. This is consistent with the picture in protostellar disks, where the metal atoms play a significant role when the dust abundance is low \citep{2002MNRAS.329...18F, 2013ApJ...765..114D}.
\subsection{Fiducial Models} We then consider the most favorable situation for the MRI, with ionization by X-rays, cosmic rays and short-lived radionuclides, looking at one fiducial model each from the minimum-mass and gas-starved classes. The minimum-mass MM96 model is shown in figure~\ref{fig:deadupclose2}. With recombination on grains (top panel), the dead zone extends to five scale heights and above, where the low gas densities mean the boundary is set by ambipolar diffusion. Combined with the low plasma beta above $5H$, this means MRI turbulence is weak or absent throughout. In contrast, without dust (bottom panel) the dead zone extends only up to $3H$, leaving a small fraction of the mass column magnetically active. The active layer's lower boundary is set by Ohmic diffusion over its whole length. The two recombination scenarios shown in figure~\ref{fig:deadupclose2} yield similar outcomes when applied to the steeper surface density profile $\Sigma=10^7(R_J/R)^{1.3}$~g~cm$^{-2}$ originally suggested by \cite{1982Icar...52...14L}. \begin{figure}[tb!] \epsscale{.5} \plotone{f3a.pdf}\\ \plotone{f3b.pdf} \caption{\sf The minimum-mass MM96 model has a substantial dead zone even in the most favorable ionization scenario with X-rays, cosmic rays and short-lived radionuclides. The dusty case is above, the dust-free case below. In the grey shaded region the Ohmic diffusivity is greater than the ambipolar diffusivity. The contours show Elsasser numbers computed from the quadrature sum of the two. The Elsasser number is unity on the heavy green contour. Other contours are spaced by an order of magnitude, with the solid green ones on the MRI-unstable side and the dashed red ones on the MRI-stable side. The uppermost dashed contour in the dusty case is for Elsasser number~0.1. Blue dots mark where turbulent mixing is capable of affecting the dead zone's diffusivity. 
At the smallest blue dots, the overlying column of free electrons is sufficient to lift the diffusivity above the threshold for turbulence if instantaneously well-mixed. At the medium blue dots, the recombination time is also at least 10\% of the turbulent mixing time. At the largest blue dots, recombination is slower than mixing. \label{fig:deadupclose2}} \end{figure} As a fiducial gas-starved disk we choose the K-4 model shown in figure~\ref{fig:deadupclose}. In contrast to the fiducial minimum-mass model, the disk here resembles the Solar nebula in having a substantial magnetically-active surface layer overlying an interior dead zone. Over almost the whole radial extent of our calculation, the boundary between the two layers lies below the height of $3H$ where the plasma beta falls to unity. The MRI can therefore grow at near its maximum rate. Over a similar radial range, the Ohmic term dominates the magnetic diffusivity at the boundary. Essentially the whole mass column is active beyond $67R_J$ with dust, or $48R_J$ without. \begin{figure}[tb!] \epsscale{.5} \plotone{f4a.pdf}\\ \plotone{f4b.pdf} \caption{\sf Magnetically-active layer and dead zone in the gas-starved K-4 model with ionization by X-rays, cosmic rays and short-lived radionuclides. Recombination on grains is included in the top panel only. Contours, shading and symbols are as in figure~\ref{fig:deadupclose2}. \label{fig:deadupclose}} \end{figure} \subsection{Turbulent Mixing} We also consider turbulent mixing, which alters the resistivity if the mixing is faster than the chemical reactions. In geometrically-thin accretion disks, vertical gradients are generally steeper than radial gradients, so the greatest effects come from mixing in the vertical direction. Representing the mixing as a diffusion process, we can write the mixing time as the ratio of the squared density scale height to the diffusion coefficient.
The diffusion coefficient in MRI turbulence is approximately the mean squared velocity dispersion divided by the shear rate $\frac{3}{2}\Omega$, and the velocities are roughly equal to the Alfv\'{e}n speeds. The vertical mixing timescale is therefore \begin{equation} {t_{\rm mix}} = {3\over 8\pi} \left({2c^2\over v_{Az}^2}\right) {2\pi\over\Omega}. \end{equation} Using the result that MRI turbulence leads to tangled magnetic fields in which the vertical component contributes around 10\% of the magnetic pressure \citep{2000ApJ...534..398M}, we can say that the number of orbits needed to mix through one scale height is about equal to the plasma beta parameter $\beta=2c^2/v_A^2$ for the total magnetic field. This result lets us estimate the importance of mixing relative to recombination in the circumjovian disk, compared with the nearby Solar nebula. The ratio of the turbulent mixing timescale $\sim\beta/\Omega$ to the recombination timescale $\sim 1/\rho^2$ is greater in the small disk in proportion to $\rho^2/\Omega \sim (\Sigma/H)^2/\Omega \sim (1/10^{-3})^2/10^3 = 1000$. That is, mixing is less effective in the circumjovian disk. Here we obtained a lower bound by using a surface density $\sim 100$~g~cm$^{-2}$ from the gas-starved disks, where recombination is slowest among the circumplanetary models. Also, we took similar chemical compositions so that recombination rates are simply proportional to density squared, and we assumed that the MRI turbulence saturates at similar plasma beta values in the two situations, giving comparable mixing timescales when measured in local orbits. For a more thorough evaluation taking into account the different chemical composition, consider a blob of gas high in the disk atmosphere where the ionization is substantial. The question is whether, as the blob is carried downward in the turbulence, the electron fraction remains greater than the ambient equilibrium value. 
To answer, we approximate the reacting flow by moving the blob instantaneously to the interior point of interest. If recombination brings the ionization fraction to equilibrium at the new location over a time at least comparable to the turbulent mixing time, then the mixing can change the ionization state. Sometimes we can avoid even the complication of integrating the chemical network. If mixing the overlying column thoroughly and instantaneously would yield a magnetic diffusivity too high for MRI turbulence, the point of interest will remain dead. We focus on the electrons' mixing, since the transport shifts only the turbulent layer's bottom boundary, which typically is set by the Ohmic diffusivity, and the Ohmic term is controlled by the electron fraction. The procedure in detail is as follows: \begin{enumerate} \item Determine the local chemical equilibrium ionization states at all heights in some annulus of the model disk. \item Consider a height $z_1$ where the electron fraction is below the MRI threshold or critical value $x_c(z_1)$. In local chemical equilibrium, the point $z_1$ is magnetically dead. Integrate the columns of free electrons and of neutrals above $z_1$ to find the lowest height $z_2>z_1$ such that the mean electron fraction between $z_1$ and $z_2$ exceeds the critical value $x_c$. If no $z_2$ within the model disk yields a mean electron fraction exceeding the critical value, then even instantaneous mixing would not be effective, and point $z_1$ will remain dead. \item Take the abundances of all species $j$ from $z_2$ and insert them at $z_1$. That is, the number density $n_{1j}$ is equal to $n_{2j}(n_{1n}/n_{2n})$, where the subscripts $n$ indicate the background neutral component. Find the recombination time by integrating the reaction network at $z_1$ till the electron fraction drops below the critical value $x_c$. 
\item The mixing time between the two heights is $(z_2-z_1)^2/D$, where $D=v_{Az}^2/\Omega$ is the turbulent diffusion coefficient measured at $z_1$. The diffusion coefficient generally increases with height, so the value at the bottom determines the mixing timescale. \item Mixing can alter the ionization fraction at $z_1$ if recombination is not much faster than mixing. We require $t_{\rm rec}>0.1\, t_{\rm mix}$. \end{enumerate} Similar results come from extending step~2 to consider all heights, and not just the lowest point $z_2$ with a mean electron fraction exceeding the critical value $x_c$. This is because the largest ratio of recombination to mixing time typically is found at or just above $z_2$. \subsection{Parameter Survey} Putting together the turbulence criterion and the mixing timescale, we infer that the magnetic fields drive turbulence where either (1) the equilibrium ionization is strong enough by eq.~\ref{eq:elsasser}, or (2) MRI occurs in the overlying layers in the equilibrium ionization state, and the resulting mixing is fast enough to raise local ionization levels to the threshold. Below we calculate the extent of the turbulence for several models of the Jovian subnebula. The seven circumjovian disk models from sections~\ref{sec:mmcjd} and~\ref{sec:gssn} are shown together with their dust-free versions in the next three figures. Each figure corresponds to one ionization scenario. Figure~\ref{fig:deadxrcrsr} has X-rays, cosmic rays and short-lived radionuclides. Figure~\ref{fig:deadcrsr} has the X-rays switched off. In figure~\ref{fig:deadxrsr} the X-rays are restored, while the cosmic rays are switched off.
The first of these three scenarios is most favorable for magnetic activity, the second is relevant if no protosolar X-rays reach the Solar nebula gap where Jupiter and its disk reside \citep[as assumed by][]{2011ApJ...743...53F}, and the third covers the possibility that the wind from the young Sun screens out the 0.1-1~GeV cosmic rays contributing most to the ionization \citep{2011ApJ...735....8P}. Each figure has fourteen panels, corresponding to the seven model disks with and without dust. In figure~\ref{fig:deadxrcrsr} the left top and bottom panels are identical to figure~\ref{fig:deadupclose2}, while the fifth panels in the top and bottom rows reiterate figure~\ref{fig:deadupclose}. Our calculations extend from 4$R_J$ inside Io's present orbit out to 80$R_J$ or a little more than 10\% of the planet's Hill radius, and from the equatorial plane up to 5$H$. \begin{figure}[htb!] \epsscale{1} \plotone{f5.pdf} \caption{\sf Magnetically-active layers and dead zones under ionization by X-rays, cosmic rays and short-lived radionuclides, in the three minimum-mass (left side) and four gas-starved Jovian subnebula models (right side), with and without recombination on dust (top and bottom rows, respectively). Symbols are as in figure~\ref{fig:deadupclose2}. \label{fig:deadxrcrsr}} \end{figure} \begin{figure}[htb!] \epsscale{1} \plotone{f6.pdf} \caption{\sf As figure~\ref{fig:deadxrcrsr} except that the X-ray ionization is omitted. \label{fig:deadcrsr}} \end{figure} \begin{figure}[htb!] \epsscale{1} \plotone{f7.pdf} \caption{\sf As figure~\ref{fig:deadxrcrsr} except that the cosmic ray ionization is omitted. \label{fig:deadxrsr}} \end{figure} In interpreting these figures, recall that X-rays ionize the uppermost few tens of grams per square centimeter, the cosmic rays the next few hundred and the radionuclides dominate in the deeper interior (figures~\ref{fig:sd} and~\ref{fig:ioniz}). 
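The layering just described can be illustrated with a schematic rate-versus-column function built from the $e$-folding depths quoted in the text; the X-ray rate at the surface is an assumed placeholder value, and the angle-averaged attenuation is collapsed into simple exponentials.

```python
import math

ZETA_XR0 = 1.0e-15   # s^-1 at the disk surface (illustrative placeholder)

def zeta_xray(col):
    """Scattered protosolar X-rays, absorbed over ~8 g/cm^2."""
    return ZETA_XR0 * math.exp(-col / 8.0)

def zeta_cr(col):
    """Interstellar cosmic rays, absorbed over ~96 g/cm^2."""
    return 1.0e-17 * math.exp(-col / 96.0)

def zeta_slr(eps=0.01):
    """Short-lived radionuclides, set by the dust-to-gas ratio eps."""
    return 3.7e-19 * (eps / 0.01)

def dominant(col):
    """Which process dominates at overlying column `col` (g/cm^2)?"""
    rates = {'xray': zeta_xray(col), 'cr': zeta_cr(col), 'slr': zeta_slr()}
    return max(rates, key=rates.get)
```

Under these assumptions the X-rays dominate the uppermost few tens of g~cm$^{-2}$, the cosmic rays the next few hundred, and the radionuclides the deep interior, as in the figures.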
Comparing figures~\ref{fig:deadxrcrsr} and~\ref{fig:deadcrsr}, we see that the X-ray ionization is crucial for magnetic activity. With X-rays, all gas-starved models show activity at least in their outer reaches. Without X-rays, TG05 is the only one of the seven dusty models with a green contour, because temperatures near $4R_J$ are high enough for thermal ionization. Removing the X-rays greatly reduces the ion density near the disk surface, pushing down the active layer's ambipolar-diffusion-dominated top boundary in many cases so far that it meets the bottom boundary and the activity is cut off completely. Note that since the blue dots show turbulent mixing's effects on the Ohmic resistivity, the dots are offset from the dead zone edges in locations where those edges are determined by ambipolar diffusion. Mixing is effective only in locations where the blue dots are adjacent to layers subject to MRI turbulence. In contrast, the cosmic rays have little effect on the dead zone's size. Figure~\ref{fig:deadxrsr} differs from figure~\ref{fig:deadxrcrsr} mostly in having more contours in the disks' interiors, indicating the dead zones are more thoroughly dead. As \cite{2011ApJ...743...53F} found, cosmic rays by themselves generally provide too little ionization to support the MRI in the circumjovian disk. An exception arises in the minimum-mass models with no dust. Here the dead zone boundary lies near the cosmic ray penetration depth, and is shifted upward a few tenths of a scale height with the cosmic rays excluded. We also experimented with replacing the short-lived radionuclides by the less-ionizing long-lived radionuclides (not shown). The minimum-mass models' deep interiors are even more diffusive, but the active layer boundaries, with ionization controlled by X-rays or cosmic rays, are unaffected.
Considering the three figures~\ref{fig:deadxrcrsr}-\ref{fig:deadxrsr} together, we see that in all cases with substantial activity (i.e.\ where the Elsasser number exceeds ten somewhere) the active layer's bottom boundary lies in or near the Ohmic-dominated region. On the other hand, in all cases where the active layer has an upper boundary lying on our grid, that boundary is set by ambipolar diffusion. The upper boundary in many cases also experiences slow MRI growth because it lies above the heights of 3 and 3.7$H$ where the total and vertical plasma beta fall to unity. Again considering the three figures together, we see that local chemical equilibrium is mostly a good approximation. Mixing is capable of changing the dead zone boundary by only a few tenths of a scale height. The thickest mixing layer occurs in the MM96 model without dust, where X-ray ionization yields enough free electrons to activate the MRI a half scale-height into the equilibrium dead zone. Looking separately at the minimum-mass models, we see that all with dust are quite dead. Magnetic activity is possible only in a low-surface-density zone outside the orbit of Callisto like that advocated by \cite{2003Icar..163..198M}. However the dust-free versions of the three minimum-mass models all have more substantial MRI-unstable upper atmospheres. Far from the planet these even extend below $3H$, where the gas pressure exceeds the magnetic pressure. The situation is quite different in the gas-starved models, which have an active layer in every scenario with X-rays. The dead zone's size varies among the dusty gas-starved models owing to the differing dust abundances and surface densities. The dusty K0 model is MRI-stable near the planet due to ambipolar diffusion associated with its low densities, while the dusty K-4 model's active layers comfortably reach the planet if X-rays are included. 
The TG50 model has a similar gas surface density to the K-4, but a hundred times greater dust abundance and thus a smaller active region. The TG05 model, with its stronger accretion heating due to a higher mass flow rate, is hot enough inside $6R_J$ for collisional ionization in a surface layer. The results are otherwise insensitive to the details of the temperature and density structure: the dead zones are little-changed when we make the gas-starved disks vertically isothermal at the accretion temperature. \subsection{Magnetic Field Strength} Other choices for the magnetic field strength will change the picture as follows. The Ohmic diffusivity is independent of the field strength, while the ambipolar diffusivity is generally proportional to the magnetic pressure \citep{2007Ap&SS.311...35W}. The Elsasser number, which by eq.~\ref{eq:elsasser} depends on the ratio of the magnetic pressure to the diffusivity, thus varies in proportion to the magnetic pressure if the Ohmic term dominates, and is field-strength-independent in the ambipolar regime. As an example, if the vertical magnetic field has a pressure ten times greater than we assumed above, reaching 1\% of the midplane gas pressure, then (1) where the Ohmic term dominates, the Elsasser number is an order of magnitude larger. The circumplanetary disk's interior is better-coupled. (2) Where the ambipolar term dominates, the Elsasser numbers are unchanged. The active layer's top edge typically does not move. (3) The Ohmic-to-ambipolar transition shifts deeper by about one contour. In some cases the dead zone's lower boundary switches from the Ohmic to the ambipolar regime. \section{SOLID MATERIAL\label{sec:solids}} In this section we discuss the implications of the circumjovian disk's magnetic activity for the evolution of the solid material inside. 
The abundance of solids in the material delivered to the disk could be less than suggested by Jupiter's overall heavy-element abundance of about three times Solar \citep{2003NewAR..47....1Y}. The nearby Solar nebula may have become depleted in solids during the assembly of the planet's core \citep{2005Icar..179..415H}. Also, once the planet grows massive enough to open a gap in the Solar nebula, only the fraction of the solid mass contained in small grains is readily carried to the vicinity of the planet according to hydrodynamical results from \cite{2007A&A...462..355P}. Grains smaller than about 10~$\mu$m are accreted on the planet and its circumplanetary disk, and particles of intermediate size are captured by gas drag in the pressure maxima immediately inside and outside the planet's orbit. Large bodies become trapped in orbital resonances \citep{1985Icar...62...16W}. We saw above that solids in the form of fine dust particles can prevent magneto-rotational turbulence through their large recombination cross-section. Removing the dust can restore turbulence in the layers where the ionizing radiation is absorbed. We would therefore like to know how long it takes the dust to settle out if turbulence is absent. The settling time is the distance to the equatorial plane divided by the grains' terminal speed. In the Epstein regime, where the particles are smaller than the gas molecules' mean free path and drift through the gas slower than the sound speed, the settling time $t_S = (\Omega^2 t_D)^{-1}$ where the gas drag stopping time $t_D = (\rho_d/\rho)(a/c_s)$ \citep[e.g.][]{2010ApJ...708..188T}. Recalling that our grains are $a=0.1$~$\mu$m in radius with internal density $\rho_d=2$~g~cm$^{-3}$ we obtain the timescales shown in figure~\ref{fig:settle}. \begin{figure}[tb!] \epsscale{1.15} \plottwo{f8a.pdf}{f8b.pdf} \caption{\sf Settling time for 0.1-$\mu$m grains as functions of position in the three minimum-mass (left) and four gas-starved models (right). 
The contours are logarithmic with spacing of one decade. Settling times are 0.1~year and less on the dotted contours, 1 to $10^5$~years on the thin solid contours and a million years and longer on the heavy solid contours. \label{fig:settle}} \end{figure} In the minimum-mass models (left panels), which are stable against MRI turbulence if dusty, the grains settle below 3$H$ in under a million years. Note that we have neglected coagulation. Grains that stick together to form compact aggregates on colliding settle still faster \citep{2005A&A...434..971D}. Since the minimum-mass models by construction receive no resupply from outside, it seems likely the atmosphere will become dust-depleted. Good coupling to the fields (figures~\ref{fig:deadxrcrsr}-\ref{fig:deadxrsr}) will then let some of the atmosphere accrete on the planet, leaving the disk overall gas-depleted. In the gas-starved models (right panels), the settling is so fast that the atmosphere would become dust-free down to $3H$ in less than a thousand years --- except that in these models, dust and gas are resupplied across much of the disk surface. The atmosphere's dust abundance will therefore be determined by external factors. Near the disk midplane, the dust is resupplied faster than it settles. The resupply time is the disk surface density divided by the incoming mass flux. Resupply is slowest at the inner boundary where its timescale is under 2000~years in all four gas-starved models -- lower than the midplane settling time in all cases. Consider also that good magnetic coupling extends to the midplane in the models' outermost annuli at low dust abundances (figures~\ref{fig:deadxrcrsr}-\ref{fig:deadxrsr}). MRI turbulence can then loft grains from the interior into the atmosphere. The most likely outcome for the gas-starved disks is thus an atmosphere containing some dust overlying an interior with a higher dust-to-gas ratio. 
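The Epstein-regime settling time used for figure~\ref{fig:settle}, $t_S=(\Omega^2 t_D)^{-1}$ with $t_D=(\rho_d/\rho)(a/c_s)$, is straightforward to evaluate; a minimal Python sketch with the grain properties quoted in the text and purely illustrative gas parameters (the actual model profiles enter through $\rho$, $c_s$, and $\Omega$):

```python
YEAR = 3.156e7  # seconds per year

def settling_time(a, rho_d, rho_gas, c_s, Omega):
    """Epstein-regime settling time t_S = 1/(Omega^2 t_D),
    with drag stopping time t_D = (rho_d / rho_gas) * (a / c_s).  CGS units."""
    t_drag = (rho_d / rho_gas) * (a / c_s)
    return 1.0 / (Omega**2 * t_drag)

# 0.1-micron grain with internal density 2 g/cm^3 (values from the text);
# gas density, sound speed, and orbital frequency are illustrative only.
t_s_years = settling_time(a=1.0e-5, rho_d=2.0, rho_gas=1.0e-11,
                          c_s=1.0e5, Omega=1.0e-5) / YEAR
```

Because $t_S \propto \rho c_s/(\rho_d a \Omega^2)$, settling is fastest at fixed grain properties in the low-density upper layers, consistent with the short timescales on the dotted contours in the figure.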
Both minimum-mass and gas-starved models contain layered dead zones where the local MRI cannot drive turbulence. The dead zones provide favorable quiescent environments for growing larger bodies \citep{2012MNRAS.422.1140G} which could eventually be assembled into the regular satellites \citep{2013MNRAS.428.2668L}. The Epstein regime applies throughout figure~\ref{fig:settle}, with two minor exceptions. First, the grains settle at a terminal speed that exceeds the sound speed in a fraction of the topmost scale height in the gas-starved models. This simply means these low-density regions quickly lose their dust. Second, the gas mean free path is less than the grain size at the midplane in the innermost annulus of the MM96 model. This densest point of all in the seven models is dead with or without dust, so the settling time there is irrelevant for our purposes. \section{SUMMARY AND CONCLUSIONS\label{sec:conclusions}} We examined the prospects for magnetic activity in seven different models of the circumjovian disk, spanning a range of surface densities and including both minimum-mass and gas-starved models. The gas-starved models were refined to include temperature-dependent opacities and properly treat annuli of low optical depth. For each model we computed the ionization state, treating gas-phase recombination, charge transfer to long-lived metal atoms, and adsorption of the free charges on grains. Where grains are present, the last of these is the main recombination channel everywhere that the ionization fraction is low enough for the grains to remain within one or two electrons of neutral. From the abundances of all the charged species we computed the magnetic diffusivities, including the contributions from both Ohmic diffusion and ambipolar drift. 
The magnetic forces can drive turbulence if the distance the field diffuses per orbit is less than the fastest-growing wavelength of the MRI -- that is, if the dimensionless Elsasser number $\Lambda = v_{Az}^2/(\eta\Omega)$ is bigger than unity. To see where magneto-rotational turbulence is possible, we plotted contours of the Elsasser number vs.\ distance from the planet and height above the equatorial plane, including both Ohmic and ambipolar terms in the diffusivity $\eta$ and choosing a magnetic field whose vertical component has a pressure 0.1\% of the midplane gas pressure. We investigated two limiting cases: the stellar X-rays either (a) reach the circumplanetary disk with the same flux as at the corresponding column in the nearby Solar nebula, or (b) are blocked completely by the nebula. All the dusty minimum-mass models we considered are thoroughly magnetically dead, with or without X-ray ionization. MRI turbulence is unlikely to occur near the midplane in the dense, cold parts of a minimum-mass circumjovian disk. However if turbulence is absent, the dust will settle out of the upper layers in a relatively short time. Removing the dust greatly reduces the recombination rate, allowing MRI turbulence in a surface layer reaching down to near the cosmic ray penetration depth, or if cosmic rays are excluded, the X-ray penetration depth. In particular, surface layer angular momentum transport by magnetic forces is possible in models where the surface density falls off steeply beyond Callisto. These models might need to be modified to include stronger accretion stresses in the outer part. More generally, if the dust-depleted surface layers accrete on the planet or are removed through photoevaporation, the disk will be left gas-poor overall. It seems possible that minimum-mass models turn into something resembling the solids-enhanced minimum-mass models put forward by \cite{2003Icar..163..198M, 2003Icar..163..232M} and \cite{2009euro.book...27E}. 
By contrast, all the gas-starved models, whether dusty or dust-free, have an accreting surface layer, with one group of exceptions: in the dusty cases without X-rays, the ion densities are so low that ambipolar diffusion prevents magnetic fields from acting on the bulk neutral gas. Another consequence of the gas-starved models' low densities is that grains rapidly settle out in the absence of turbulence. A supply of fresh dust and gas from the Solar nebula is assumed in constructing these models. The resupply is fast enough to keep the interior dusty, but too slow to prevent settling from partially depleting the atmosphere. Furthermore, we saw that turbulence is capable of reaching the midplane in outer annuli, so small grains can be returned to the atmosphere through mixing. The most likely outcome is an atmosphere with a reduced dust content. In summary, both minimum-mass and gas-starved models of the circumjovian disk have conductivities generally sufficient for magnetic forces to provide the assumed accretion stresses. However, a key quantity is the X-ray flux reaching the neighborhood of the planet. Without the X-rays, the dusty gas-starved models couple to magnetic fields too poorly for magneto-rotational turbulence to operate. The minimum-mass models' internal conductivities, on the other hand, are low with or without X-rays, as required to keep the material in place. Both classes of models can develop layered dead zones, which could provide a favorable quiescent environment for assembling regular satellites \citep{2013MNRAS.428.2668L}. The magnetic coupling maps point to several more areas where our understanding is lacking. Future minimum-mass modeling may need to treat the loss of dust-depleted gas from the surface layers. In the gas-starved models, the stress-to-pressure ratio ought to increase with radius. Consequently the material will pile up at locations where the inflow slows. 
Episodic accretion outbursts will result if some additional angular momentum transport process switches on when the disk surface density grows large enough. The trigger can be the gravitational instability if the accretion bottleneck fills up so much that a gas-starved model approaches the surface densities of a minimum-mass model \citep{2012ApJ...749L..37L}. For the future development of the gas-starved models it is important to address the issue of weak magnetic coupling in the absence of stellar X-rays. This motivates more careful calculations of the transfer of the X-rays into the gap opened in the Solar nebula by Jupiter's tides. The X-rays could have reached the circumplanetary disk at full strength if the Solar nebula interior to the planet's orbit was cleared away, as appears to have happened in the so-called transitional disks observed around some young stars today \citep{2005ApJ...630L.185C, 2010ApJ...708.1107M, 2011ApJ...732...42A}. Also, planets lying nearer their stars can have better-ionized disks owing to the greater X-ray intensities. Further constraints on the conditions in the circumjovian disk can potentially be derived from the Laplace resonance. The three inner Galilean moons, Io, Europa and Ganymede, have orbital periods nearly in the ratio 1:2:4. \cite{2002Sci...298..593P}, \cite{2010ApJ...714.1052S} and \cite{2012ApJ...753...60O} demonstrated that resonances can be assembled outside-in during satellite formation and migration in gas-starved subnebula models. Whether this works in circumjovian disk models with magnetic stresses remains to be seen. We have focused on the cold parts of the disk. The thermally-ionized zone near the planet could be important for its role in regulating the planet's spin \citep{2011AJ....141...51L}, launching bipolar jets \citep{2003A&A...411..623F, 2006ApJ...649L.129M} and determining whether the planet begins its life cold or warm \citep{2007ApJ...655..541M}. 
The strong temperature dependence of the ionization in this regime suggests that better thermodynamical models are needed. \begin{acknowledgments} This work was supported by the NASA Outer Planets Research program through grant 07-OPR07-0065, by the Hong Kong Research Grants Council through grant HKU 7024/08P, and by the Center for Planetary Science at Kobe University under the auspices of the MEXT Global COE program titled ``Foundation of International Center for Planetary Science.'' The work was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2013. All rights reserved. \end{acknowledgments} \bibliographystyle{apj}
\section{Supplementary Material} In this supplementary material, we describe the numerical method used in our article, provide absolute values of experimental observables and discuss the numerical errors. \section{Numerical method} We employ Matrix Product States (MPS) in 1D and Projected Entangled Pair States (PEPS) in 2D to compute ground states and time evolution~\cite{WhiteDMRGSupp,DMRGSupp,MPS1Supp,MPS2Supp,PEPS1Supp,PEPS2Supp}. For a chain of $N$ quantum systems, each having physical dimension $d$, a MPS has the form \begin{eqnarray*} |\psi_{MPS}\rangle & = & \sum_{s_{1}, s_{2}, \ldots, s_{N}} \mathrm{tr}(A[1]^{s_{1}} A[2]^{s_{2}} \ldots A[N]^{s_{N}}) |s_{1} s_{2} \ldots s_{N}\rangle \qquad , \end{eqnarray*} where the $s_{i}$ run from $1$ to $d$ and the $A[i]^{s_{i}}$, for a fixed $s_{i}$, are matrices of dimension \mbox{$D \times D$}. The number of variational parameters in the ansatz is determined by the bond dimension, $D$, which also bounds the entanglement of the MPS. The MPS family constitutes a very good approximation to ground states of gapped local Hamiltonians in one dimension~\cite{GSmpsSupp}, and it has become a very successful tool for the study of quantum many-body systems. PEPS constitute the natural generalization of the MPS ansatz to larger dimensions~\cite{PEPS1Supp,PEPS2Supp}, and they are also known to be good approximations for ground and thermal states of gapped local Hamiltonians~\cite{GSpepsSupp}. The basic algorithms for the study of many-body systems using these families of states are the variational search for the ground state and the simulation of time evolution. A detailed description of these algorithms, which are similar for 1D and 2D systems, can be found in the review paper~\cite{PEPS3Supp} (see also references therein). The computational cost of MPS algorithms scales as \mbox{$\mathcal{O}(ND^{3})$}, which allows the simulation of large chains and bond dimensions. 
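The MPS amplitude formula above translates directly into code; a minimal numpy sketch (the tensor layout and names are our own, and the dimensions are illustrative rather than those used in the simulations):

```python
import numpy as np

def mps_amplitude(tensors, config):
    """Amplitude <s_1 ... s_N | psi_MPS> = tr(A[1]^{s_1} ... A[N]^{s_N}).

    tensors[k] has shape (d, D, D): physical index first, then the two
    virtual (bond) indices.  The bond dimension D caps the entanglement
    and sets the number of variational parameters, N * d * D**2.
    """
    D = tensors[0].shape[1]
    M = np.eye(D)
    for A, s in zip(tensors, config):
        M = M @ A[s]
    return np.trace(M)

# A D=1 MPS is a product state: each A[k]^s is a 1x1 "matrix"
up = np.array([[[1.0]], [[0.0]]])    # site fixed in the s=0 state
down = np.array([[[0.0]], [[1.0]]])  # site fixed in the s=1 state
neel = [up, down, up, down]
```

For the N\'eel-like product state above the amplitude is 1 for the matching configuration and 0 otherwise; increasing $D$ admits entangled states, with the standard algorithms then costing \mbox{$\mathcal{O}(ND^{3})$}.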
PEPS algorithms scale as \mbox{$\mathcal{O}(ND^{10})$}, so that 2D simulations are much more demanding, and only smaller values of the virtual bond dimension $D$ can be considered. In the variational search for ground states, the numerical error comes from limiting the bond dimension to a certain maximum value. Comparing the results with those obtained after running the search with a larger $D$ gives an estimation of the magnitude of this truncation error. Typically, the algorithm is run repeatedly with increasing bond dimension until convergence is achieved within the desired numerical precision. The simulation of time evolution of MPS (or PEPS) is based on a Suzuki-Trotter decomposition of the evolution operator~\cite{DMRGSupp}. This introduces another source of numerical error, in addition to the truncation of the bond dimension. The magnitude of this Trotter error can be controlled by decreasing the size of the time step, $\delta t$, or using a higher order decomposition of the exponential. In particular, we use a second order Trotter decomposition in the case of one-dimensional simulations, while a first order decomposition is used in 2D. For each time step, the evolution operator for $\delta t$ is applied and an optimal MPS or PEPS approximation to the evolved state is found as described in~\cite{MPS2Supp}. The magnitude of the Trotter error is controlled by comparing results for various values of $\delta t$, and the truncation error, as in the ground state search, is estimated from the comparison of results for different $D$. \section{Absolute values of experimental observables} The relative quantities shown in the main text result from normalizing the computed expectation values at the end of the evolution to the corresponding values in the true AFM ground state. For completeness, we show in this section the actual absolute values obtained for each observable, as well as the reference AFM values, and discuss the convergence of the numerical results. 
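The second-order Trotter scheme described above can be sketched on a toy Hamiltonian split into two non-commuting parts; a minimal numpy illustration (small random Hermitian matrices stand in for the actual bond Hamiltonians):

```python
import numpy as np

def expm_herm(H, t):
    """exp(-i H t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def trotter2_step(H_even, H_odd, dt):
    """Second-order Trotter step with per-step error O(dt^3):
    exp(-i H_e dt/2) exp(-i H_o dt) exp(-i H_e dt/2)."""
    half = expm_herm(H_even, dt / 2)
    return half @ expm_herm(H_odd, dt) @ half

rng = np.random.default_rng(0)

def rand_herm(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A + A.conj().T

H_e, H_o = rand_herm(4), rand_herm(4)

def step_error(dt):
    return np.linalg.norm(expm_herm(H_e + H_o, dt) - trotter2_step(H_e, H_o, dt))
```

Halving $\delta t$ reduces the per-step error by roughly a factor of eight, compared with a factor of four for a first-order splitting; this is the control knob mentioned above.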
As explained in the article, we model our system by a bipartite \mbox{$t-J$} Hamiltonian, that, in 1D, reads: \begin{eqnarray}\label{eq:tJSupp} H \! = \! - \! t_{\mathrm{e}} \! \sum_{k \in \mathrm{e},\,\sigma} \! ( c_{k, \sigma}^{\dag} c_{k+1, \sigma} + \mathrm{h.c.} ) \! + \! J_{\mathrm{e}} \! \sum_{k \in \mathrm{e}} \! \Big( \mathbf{S}_{k} \mathbf{S}_{k+1} - \frac{n_{k} n_{k+1}}{4} \Big) \! - \! t_{\mathrm{o}} \! \sum_{k \in \mathrm{o},\,\sigma} \! ( c_{k, \sigma}^{\dag} c_{k+1, \sigma} + \mathrm{h.c.} ) \! + \! J_{\mathrm{o}} \! \sum_{k \in \mathrm{o}} \! \Big( \mathbf{S}_{k} \mathbf{S}_{k+1} - \frac{n_{k} n_{k+1}}{4} \Big) \, , \end{eqnarray} where the subscripts e and o denote even and odd sites and double occupancy of sites is forbidden, as implicitly assumed for the \mbox{$t-J$} model. The couplings on even links are constant, $t_{\mathrm{e}}=t$ and $J_{\mathrm{e}}=J$, while the time-dependent odd couplings are increased from $0$ to their final values, $t$ and $J$, during a total ramping time $T$, according to \mbox{$t_{\mathrm{o}}(\tau) = t \cdot \sqrt{\tau / T}$} and \mbox{$J_{\mathrm{o}}(\tau) = J \cdot \tau / T$}. We set \mbox{$J=1$}. The two-dimensional system consists of several such chains, coupled in the transverse direction by $t_{\mathrm{o}}(\tau)$, $J_{\mathrm{o}}(\tau)$. \subsection{Reference values in the AFM ground state} We use the algorithms described above to compute numerically the true AFM ground state for various lattices, as shown in Tab.~\ref{tab:1Supp} (for chains of lengths $N=22-82$). The numerical convergence is checked by comparing the values obtained using bond dimensions $D=80$ and $100$. As can be seen from the values in the table, the maximum relative error is of the order $10^{-5}$ for the magnetization. Note that the energy is correct up to $10^{-9}$. We use these values as the reference for adiabaticity of the total chain of length $N$. 
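The ramp profiles defined above are simple enough to state as code; a minimal Python sketch (the function name is ours):

```python
import math

def ramp_couplings(tau, T, t=1.0, J=1.0):
    """Time-dependent odd-link couplings during the ramp:
    t_o(tau) = t * sqrt(tau / T),  J_o(tau) = J * tau / T."""
    return t * math.sqrt(tau / T), J * tau / T
```

Note that this choice keeps $J_{\mathrm{o}} \propto t_{\mathrm{o}}^{2}$ throughout the ramp, since \mbox{$J_{\mathrm{o}}/J = \tau/T = (t_{\mathrm{o}}/t)^{2}$}; at $\tau=T$ both odd couplings reach their final homogeneous values $t$ and $J$.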
\begin{table} \begin{tabular}{ c || c | c | c | c | c | c } N & $M_{\mathrm{stag}}^{2}(D=80)$ & $M_{\mathrm{stag}}^{2}(D=100)$ & $P_{0}(D=80)$ & $P_{0}(D=100)$ & $E_{\mathrm{spin}}(D=80)$ & $E_{\mathrm{spin}}(D=100)$ \\ \hline \hline 22 & 0.139654860817 & 0.139654869116 & 0.807121935054 & 0.807121935023 & -0.434912539817 & -0.434912539817 \\ \hline 42 & 0.086355374176 & 0.086355479294 & 0.775723907194 & 0.775723905709 & -0.438751678570 & -0.438751678582 \\ \hline 62 & 0.064239765116 & 0.064240094937 & 0.760657976140 & 0.760657965104 & -0.440148664874 & -0.440148664972 \\ \hline 82 & 0.051802232654 & 0.051803243348 & 0.751408113523 & 0.751408046065 & -0.440871682329 & -0.440871682728 \\ \end{tabular} \caption{\label{tab:1Supp} Squared staggered magnetization $M_{\mathrm{stag}}^{2}$, mean singlet fraction per double well $P_{0}$, and mean spin energy per site $E_{\mathrm{spin}}$, for the AFM of total length $N$ obtained from ground state computation with MPS of bond dimension \mbox{$D=80$} and \mbox{$D=100$}.} \end{table} \begin{table} \begin{tabular}{ c || c | c } L & $M_{\mathrm{stag}}^{2}(D=80)$ & $M_{\mathrm{stag}}^{2}(D=100)$ \\ \hline \hline 22 & 0.141157102848 & 0.141157444444 \\ \hline 42 & 0.088248384680 & 0.088248983897 \\ \hline 62 & 0.065508480733 & 0.065509337169 \\ \end{tabular} \caption{\label{tab:2Supp} Squared staggered magnetization $M_{\mathrm{stag}}^{2}$ for middle sublattices of length $L=22$, $42$, and $62$, for an AFM of total length $N=82$ obtained from ground state computation with MPS of bond dimension \mbox{$D=80$} and \mbox{$D=100$}.} \end{table} \begin{table} \begin{tabular}{ c || c | c } L & $M_{\mathrm{stag}}^{2}(D=80)$ & $M_{\mathrm{stag}}^{2}(D=100)$ \\ \hline \hline 22 & 0.141068047422 & 0.141069521513 \\ \hline 42 & 0.088395582256 & 0.088397878145 \\ \hline 62 & 0.065999995175 & 0.066003286606 \\ \end{tabular} \caption{\label{tab:3Supp} Squared staggered magnetization $M_{\mathrm{stag}}^{2}$ for middle sublattices of length $L=22$, 
$42$, and $62$, for an AFM of total length $N=142$ obtained from ground state computation with MPS of bond dimension \mbox{$D=80$} and \mbox{$D=100$}.} \end{table} As discussed in the main text, for long enough chains, we observe that antiferromagnetic order develops in a middle sublattice faster than on the total chain. We find that the timescales for observables measured on the sublattice are controlled by the range of the observable itself, as far as finite size effects can be ignored. Therefore, in order to quantify this observation, we need to compare the observables in the evolved sublattice with the corresponding AFM values for a sublattice of the same size in an infinite chain. The thermodynamic mean singlet fraction $P_{0, \mathrm{TD}} = \ln(2) \approx 0.693$ and mean spin energy $E_{\mathrm{spin}, \mathrm{TD}} = 1/4-\ln(2) \approx -0.44315$ are well known \cite{HulthenSupp}. The squared staggered magnetization on a finite sublattice, however, cannot be computed exactly, so that we approximate the reference value by the numerical estimation in a long chain ($N=82$), shown in Tab.~\ref{tab:2Supp}. Increasing the chain length, the reference values do not change significantly, as one can see in Tab.~\ref{tab:3Supp} for $N=142$. In 2D, the corresponding values of the AFM were obtained with Quantum Monte Carlo by means of the ALPS code~\cite{ALPSSupp}, and they are listed in Tab.~\ref{tab:4Supp}. 
\begin{table} \begin{tabular}{ c || c | c } N & $M_{\mathrm{stag}}^{2}$ & $E_{\mathrm{spin}}$ \\ \hline \hline 4 $\times$ 4 & 0.233090090642115 & -0.574325441574560 \\ \hline 6 $\times$ 6 & 0.164620512963000 & -0.603311944444444 \\ \hline 8 $\times$ 8 & 0.135035306250000 & -0.618890781250000 \\ \hline 10 $\times$ 10 & 0.121902426000000 & -0.625150439774003 \\ \end{tabular} \caption{\label{tab:4Supp} Squared staggered magnetization $M_{\mathrm{stag}}^{2}$ and mean spin energy per site $E_{\mathrm{spin}}$ for the AFM of total size $N$ obtained from Quantum Monte Carlo using the ALPS code~\cite{ALPSSupp}.} \end{table} \subsection{Adiabatically evolved states} In the following, we present the absolute values of our observables for the state obtained at the end of the adiabatic ramping, in the same sequence as they appear in the main text. \subsubsection{Absence of holes} Fig.~\ref{fig:1Supp} and \ref{fig:2Supp} show the computed expectation values at the end of the protocol, as a function of the total ramping time, for the case of no holes. Convergence of the numerical results is checked by comparing the results for bond dimension \mbox{$D=40$} and \mbox{$60$} and for Trotter steps \mbox{$\delta t = 0.02$} and $0.005$. The largest relative error found (for the largest system and the longest ramping time) is of the order of $10^{-4}$, which ensures enough precision for our analysis. \begin{figure}[h] \centering \begin{tabular}{c c c} \includegraphics[width=0.32\textwidth]{picsSupp/fig1aSupp.eps} & \includegraphics[width=0.32\textwidth]{picsSupp/fig1bSupp.eps} & \includegraphics[width=0.333\textwidth]{picsSupp/fig1cSupp.eps} \end{tabular} \caption{\label{fig:1Supp} $M_{\mathrm{stag}}^{2}$, $P_{0}$, and $E_{\mathrm{spin}}$, as functions of the ramping time $T$, for chains of length $N=22$ (solid blue), $42$ (dashed red), $62$ (dash-dotted green), and $82$ (dash double-dotted brown). 
The results were obtained with MPS of bond dimension $D=60$ and Trotter step $\delta t = 0.02$ (lines), $D=40$ and $\delta t = 0.02$ (circles), $D=40$ and $\delta t = 0.005$ (crosses), and $D=60$ and $\delta t = 0.005$ (squares).} \end{figure} \begin{figure}[h] \centering \begin{tabular}{c c c} \includegraphics[width=0.32\textwidth]{picsSupp/fig2aSupp.eps} & \includegraphics[width=0.32\textwidth]{picsSupp/fig2bSupp.eps} & \includegraphics[width=0.333\textwidth]{picsSupp/fig2cSupp.eps} \end{tabular} \caption{\label{fig:2Supp} Same quantities as in Fig.~\ref{fig:1Supp}, evaluated on sublattices of length $L=22$ (solid blue), $42$ (dashed red), and $62$ (dash-dotted green), of a total lattice of length $N=82$.} \end{figure} \subsubsection{Effect of holes} Injecting holes into the sample results in a substantial drop of the squared staggered magnetization, and an increase in the energy (Fig.~\ref{fig:3Supp}). The latter implies that the system gets excited and the numerical simulation via MPS becomes more demanding. Now, the relative error after the longest ramping time, $T=30$, for the case of 4 holes on $N=82$ sites shown in the main text, is of the order of $0.01-0.1$, but it becomes significantly smaller for shorter times. This worst-case error does not affect our conclusions, since the main effect we observe, the drop of the magnetization value upon hole arrival, is much larger ($\approx 30\%$) and occurs already at much shorter times ($T \approx 20$), for which the numerical error is only of the order of $10^{-3}$. The figure also shows that the effect of 2 holes is more dramatic on $42$ sites than on $82$ sites, and that the negative effect of holes increases with their number. 
\begin{figure}[h] \centering \begin{tabular}{c c c} \includegraphics[width=0.32\textwidth]{picsSupp/fig3aSupp.eps} & \includegraphics[width=0.312\textwidth]{picsSupp/fig3bSupp.eps} & \includegraphics[width=0.333\textwidth]{picsSupp/fig3cSupp.eps} \end{tabular} \caption{\label{fig:3Supp} $M_{\mathrm{stag}}^{2}$, $P_{0}$, and $E_{\mathrm{spin}}$, as functions of the ramping time $T$, evaluated on the middle $L=42$ site sublattice, for 2 (solid) and 4 holes (dashed) on $N=82$ sites (thick blue) and $N=42$ sites (thin red). The holes are initially located at the boundaries, and \mbox{$t=2$}. The results correspond to $D=60$ and Trotter step $\delta t = 0.02$ (lines), $D=40$ and $\delta t = 0.02$ (circles), $D=40$ and $\delta t = 0.005$ (crosses), and $D=60$ and $\delta t = 0.005$ (squares).} \end{figure} \subsubsection{Harmonic trap} By including a harmonic trap \mbox{$V_{\mathrm{t}} \sum_{k} (k - k_{0})^{2} \hat{n}_{k}$}, the holes can be confined outside the sample. We consider 10 holes to the left and 10 holes to the right of a sample of size $82$, and successively increase the trap strength, as shown in Fig.~\ref{fig:4Supp}. Consistent with the results in the previous section, our simulation is most demanding for the weakest trap, when holes can still enter the sample and excite the system. The largest relative error in that case is of the order of $0.01-0.1$, but it decreases significantly if the trap strength is increased. Again, this worst-case error does not affect any of our conclusions. 
\begin{figure}[h] \centering \begin{tabular}{c c c} \includegraphics[width=0.32\textwidth]{picsSupp/fig4aSupp.eps} & \includegraphics[width=0.304\textwidth]{picsSupp/fig4bSupp.eps} & \includegraphics[width=0.326\textwidth]{picsSupp/fig4cSupp.eps} \end{tabular} \caption{\label{fig:4Supp} $M_{\mathrm{stag}}^{2}$, $P_{0}$, and $E_{\mathrm{spin}}$, as functions of the ramping time $T$, evaluated on the middle $L=82$ site sublattice, for 10 holes initially on each boundary of $82$ fermions, with a harmonic trap of strength \mbox{$V_{\mathrm{t}}=0.004$} (dash-dotted green), \mbox{$V_{\mathrm{t}}=0.006$} (dashed red), and \mbox{$V_{\mathrm{t}}=0.02$} (solid blue), and \mbox{$t=3$}. The inset in b) shows the occupation $n$ of lattice site $l$ after ramping time $T=30$, and we find that the holes delocalize precisely $\pm 2t$ at the boundaries of the trap. Again, the results correspond to $D=60$ and Trotter step $\delta t = 0.02$ (lines), $D=40$ and $\delta t = 0.02$ (circles), $D=40$ and $\delta t = 0.005$ (crosses), and $D=60$ and $\delta t = 0.005$ (squares).} \end{figure} \subsection{Two dimensional case} In 2D, the time evolution is done with PEPS of bond dimension \mbox{$D=2$}, \mbox{$D=3$} and \mbox{$D=4$}, and the largest relative error is of the order of $10^{-2}$, for the largest system in Fig.~\ref{fig:5Supp}. Our results suggest that qualitative insight can already be gained from PEPS with $D=2$. Therefore, Fig.~\ref{fig:6Supp} shows the effect of holes and harmonic trap for $D=2$ without convergence check. Just as in 1D [Fig.~\ref{fig:3Supp}(a)], the staggered magnetization decreases significantly with increasing number of holes, and the harmonic trap confines the holes on the outside. 
\begin{figure}[h] \centering \begin{tabular}{c c c} \includegraphics[width=0.32\textwidth]{picsSupp/fig5aSupp.eps} & \includegraphics[width=0.314\textwidth]{picsSupp/fig5bSupp.eps} & \includegraphics[width=0.327\textwidth]{picsSupp/fig5cSupp.eps} \end{tabular} \caption{\label{fig:5Supp} $M_{\mathrm{stag}}^{2}$, $P_{0}$, and $E_{\mathrm{spin}}$, as functions of the ramping time $T$, for $N=4 \times 4$ (solid blue), $6 \times 6$ (dashed red), $8 \times 8$ (dash-dotted green), and $10 \times 10$ (dash double-dotted brown). The results were obtained with PEPS of bond dimension $D=4$ and Trotter step $\delta t = 0.03$ (lines), $D=2$ (circles), and $D=3$ (crosses).} \end{figure} \begin{figure}[h] \centering \begin{tabular}{c c} \includegraphics[width=0.32\textwidth]{picsSupp/fig6aSupp.eps} & \includegraphics[width=0.322\textwidth]{picsSupp/fig6bSupp.eps} \end{tabular} \caption{\label{fig:6Supp}(a) $M_{\mathrm{stag}}^{2}$ as a function of the ramping time $T$, for $N=8 \times 8$ without holes (solid blue), with 1 hole (dashed red), and 2 holes (dash-dotted green), where the holes are initially localized at the boundary, and $t=2.5$. (b) $N=8 \times 8$ with 4 holes initially distributed at the boundary, with no harmonic trap (dash-dotted green), and with a trap of strength \mbox{$V_{\mathrm{t}}=0.25$} (dashed red), and \mbox{$V_{\mathrm{t}}=2.5$} (solid blue), and again \mbox{$t=2.5$}. The results correspond to $D=2$ and Trotter step $\delta t = 0.03$.} \end{figure}
\section{Introduction} Interacting quantum theories in low energy or semiclassical regimes can be described by effective equations which amend the classical ones by correction terms. Compared to the full quantum description an analysis of such effective equations is much easier once they have been derived. In addition to the simpler mathematical structure, difficult conceptual and interpretational problems of wave functions can be evaded, still allowing one to compute potentially observable effects. Technical and conceptual problems are even more severe in quantum gravity, in particular in background independent formulations. Yet, especially in this case observational guidance would be of invaluable help. Since the high energy regimes of cosmology are commonly considered as the best access to such guidance, an effective description for fields relevant for early universe cosmology is needed. In this paper we use an effective framework of perturbative loop quantum gravity around a spatially flat isotropic background space-time to derive correction terms to the classical constraints. Our analysis will be done for scalar metric modes in longitudinal gauge as this can be used to simplify the perturbative basic variables. They can then be chosen to be of diagonal form, although now fully inhomogeneous, which is the main reason for simplifications as they have been used extensively in symmetric models \cite{IsoCosmo,HomCosmo,SphSymm,LivRev}. The main constructions of these models can thus be extended, in a similarly explicit form, to inhomogeneous situations without assuming any symmetry. This allows us to compute explicit corrections to effective constraints which, in combination with the Hamiltonian analysis of cosmological perturbation theory in \cite{HamPerturb}, leads to corrected perturbation equations and new effects \cite{InhomEvolve}. 
A physical regime is selected by introducing a background geometry in the background independent quantization through a specific class of states \cite{InhomLattice}. This keeps the characteristic background independent features of the quantum theory, such as its spatial discreteness, intact while bringing the theory to a form suitable for perturbative expansions and applications. In the perturbative regime, we will make use of special structures provided by the geometrical background which can usually be chosen to allow symmetries, e.g.\ isotropy for cosmological perturbations around a Friedmann--Robertson--Walker model. In particular, we use this to introduce regular lattice states with a spacing (in geometrical variables rather than embedding coordinates) whose size determines at which scales quantum effects become important. The geometrical spacing thus specifies on which scales physical fields are being probed by a given class of states. On such a lattice, explicit calculations can be done. We demonstrate this by providing higher curvature terms as well as corrections to inverse powers of metric components. Several issues that arose in isotropic models will be clarified. Finally, we discuss general aspects of effective equations and the semiclassical limit of loop quantum gravity. The article thus consists of two parts, an explicit scheme to derive correction terms presented in Sec.~\ref{s:HamConstr} and \ref{EffHamDiscuss}, and a more general discussion of effective equations and the classical limit in Sec.~\ref{Effective}. \section{Basic variables and operators} The basic variables of interest for a canonical formulation of gravity \cite{ADM} are the spatial metric $q_{ab}$ occurring in the space-time metric \begin{equation} \label{metric} {\mathrm{d}} s^2=-N^2{\mathrm{d}} t^2+q_{ab}({\mathrm{d}} x^a+N^a{\mathrm{d}} t) ({\mathrm{d}} x^b+N^b{\mathrm{d}} t)\,. 
\end{equation} (or equivalent quantities such as a triad $e^i_a$ with $e^i_ae^i_b=q_{ab}$ or its inverse $e^a_i$), extrinsic curvature $K_{ab}=(2N)^{-1}(\dot{q}_{ab}-D_aN_b-D_bN_a)$ (or related objects such as the Ashtekar connection) and matter fields with their momenta. The components $N$ (lapse function) and $N^{a}$ (shift vector) of the space-time metric are not dynamical, and thus do not have momenta, but are important for selecting the space-time slicing or the gauge. Because of their role in loop quantum gravity, we will use Ashtekar variables \cite{AshVar,AshVarReell}, which are a densitized triad $E^a_i=|\det e^j_b| e^a_i$ and the connection $A_a^i=\Gamma_a^i-\gamma K_a^i$ with the spin connection \begin{equation} \label{SpinConnFull} \Gamma_a^i= -\epsilon^{ijk}e^b_j (\partial_{[a}e_{b]}^k+ {\textstyle\frac{1}{2}} e_k^ce_a^l\partial_{[c}e_{b]}^l) \end{equation} compatible with the triad, $K_a^i=K_{ab}e^b_i$, and the positive Barbero--Immirzi parameter $\gamma$ \cite{AshVarReell,Immirzi}. We use them here in perturbative form on a flat isotropic metric background, focusing on scalar modes. This means, as explained in more detail in \cite{HamPerturb}, that the unperturbed metric as well as its perturbations can be assumed to be diagonal, $E^a_i=\tilde{p}^{(i)}(x)\delta^a_i$, which simplifies calculations. For scalar modes, all diagonal components of the metric $q_{ab}=a^2(1-2\psi(x))\delta_{ab}$ are in fact equal, but we will see that this restriction is not general enough for formulating a loop quantization. Moreover, we can choose a vanishing shift vector $N^a=0$, implying that extrinsic curvature $K_a^i=\tilde{k}_{(i)}(x)\delta_a^i$ is diagonal, too. (The Ashtekar connection, on the other hand, will not be diagonal because it has non-diagonal contributions from the spin connection. 
It is of the form $A_a^i=\tilde{k}_{(i)}(x)\delta_a^i+\psi_I(x)\epsilon^{iI}_a$ where the non-diagonal part $\psi_I$ arising from the spin connection computed in Sec.~\ref{SpinConn} can be dealt with perturbatively.) Our calculations will thus be done in a given gauge, and would be more complicated in others. Nevertheless, we are including the general perturbations of metric and matter variables relevant for scalar modes, without the stronger restrictions that could arise in other gauges. \subsection{Gauge choices and their implications for a quantization} In general, the space-time gauge is determined by prescribing the behavior of the lapse function $N$ and shift vector $N^a$ occurring in a metric (\ref{metric}). The lapse function, as we will see, can be chosen arbitrarily in our calculations, but the shift vector is restricted for a diagonal perturbation to be realized. We are thus using a particular class of gauges in setting up our calculations, although we do not explicitly make use of the form of gauge transformations. This is important because the canonical constraints, most importantly the Hamiltonian constraint $H$ in addition to the diffeomorphism constraint $D_a$, and thus also the gauge transformations $\delta_{\xi}f=\{f,\xi^0H+\xi^aD_a\}$ they generate will be corrected by quantum effects. Classical properties of the gauge transformations should thus not be used before one computes quantum corrections. It is then a priori unclear which particular gauge choices, other than fixing lapse and shift directly, are allowed. Some gauges implicitly refer to gauge transformation equations to relate metric to matter perturbations, or to select the space-time slicing, such as for the flat gauge. In this example, one would make use of gauge transformations to set the spatial metric perturbation equal to zero, which allows one to focus calculations on the simpler matter part. 
In this process, one solves gauge transformation equations of the metric perturbation, depending on lapse and shift, such that the transformed perturbation vanishes. This determines a gauge to be chosen, but makes use of explicit gauge transformation equations which are not guaranteed to remain unchanged with quantum corrections. Our choice of vanishing shift, on the other hand, is harmless because it does not refer to explicit gauge transformation equations. We are thus working at a more general level, keeping metric and matter perturbations independent. A gauge-invariant combination of the two perturbations can be determined once the quantum corrected gauge transformations have been computed. When constraints are modified, manifest covariance of the resulting equations becomes an issue, as discussed in more detail in \cite{HamPerturb}. Such quantum corrections are derived from effective constraints of gravity which are defined as expectation values of quantum gravity operators in general states \cite{EffAc}. We motivate the procedure here briefly, and will provide some further information in Sec.~\ref{Effective}; for details we refer to \cite{EffAc,Karpacz,EffectiveEOM}. If constraint operators satisfied the classical constraint algebra, covariance would be manifest for the expectation values. But there is an additional step involved in deriving effective equations: the expectation values depend on infinitely many quantum variables, such as the spreads of states, which do not have classical analogs. Effective equations are obtained by truncating this to finitely many fields (similarly to the derivative expansion in low energy effective actions), resulting in equations of motion of the classical form corrected by quantum terms. Indeed, any quantum theory is based on states which are not just determined by expectation values of the basic variables such as $A_a^i$ and $E^a_i$ in loop quantum gravity. 
Expectation values of the basic variables would correspond to the classical values in constraint expressions, but there are infinitely many further parameters such as the spread and deformations of the state from a Gaussian. These additional, quantum variables can suitably be parameterized in the form \begin{equation} \label{QuantVars} G^{a,n}_q=\langle(\hat{q}-\langle\hat{q}\rangle)^{n-a} (\hat{p}_q-\langle\hat{p}_q\rangle)^a\rangle_{\rm Weyl} \end{equation} for any degree of freedom $(q,p_q)$ present in the classical theory. Here, $1<n\in{\mathbb N}$, $0\leq a\leq n$, and the subscript ``Weyl'' denotes totally symmetric ordering. Every classical degree of freedom thus does not only give rise to expectation values $\langle\hat{q}\rangle$ and $\langle\hat{p}_q\rangle$ but to infinitely many additional quantum variables. All of these variables are dynamical, and are in general coupled to each other. Moreover, expectation values of most operators, including Hamiltonians, in general states depend on all these infinitely many variables. This is to be reduced to a finite set for an effective description which introduces quantum correction terms into the classical equations. In particular, spread and deformation parameters are usually assumed to be subdominant compared to expectation values. Without explicitly constructing semiclassical states satisfying such conditions, one can make semiclassicality assumptions for those parameters to be negligible. This is what we will do in this paper as a shortcut to deriving effective expressions from a full quantum theory. Since special quantization steps are involved in the construction of operators which reformulate classical expressions, corrections in effective equations will result which are not sensitive to the precise form of semiclassical states. 
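As a concrete illustration of the quantum variables (\ref{QuantVars}) (a numerical sketch of ours, not part of the construction above), the following computes the expectation values and the lowest moments $G^{0,2}$ and $G^{2,2}$ for a Gaussian wavepacket with $\hbar=1$; the grid, peak position $q_0$, momentum $p_0$, and width $s$ are arbitrary choices.

```python
import numpy as np

# Gaussian wavepacket on a grid; hbar = 1 throughout.
q = np.linspace(-10.0, 14.0, 4001)
dq = q[1] - q[0]
q0, p0, s = 2.0, 1.5, 1.0
psi = (np.pi * s**2) ** -0.25 * np.exp(-(q - q0) ** 2 / (2 * s**2) + 1j * p0 * q)

# Expectation values of q and p (p via -i d/dq, central differences):
rho = np.abs(psi) ** 2
mean_q = np.sum(q * rho) * dq
dpsi = np.gradient(psi, dq)
mean_p = np.real(np.sum(np.conj(psi) * (-1j) * dpsi) * dq)

# Lowest quantum variables: the spreads G^{0,2} and G^{2,2}
G02 = np.sum((q - mean_q) ** 2 * rho) * dq
G22 = np.sum(np.abs(dpsi) ** 2) * dq - mean_p**2
```

For this state the spreads saturate the uncertainty bound, $G^{0,2}G^{2,2}=\hbar^2/4$; semiclassicality in the sense used above means that such moments stay subdominant compared to $\langle\hat{q}\rangle$ and $\langle\hat{p}_q\rangle$, which is the assumption made when truncating to effective equations.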
\subsection{Lattice states and basic operators} We are thus able to implement all degrees of freedom needed for inhomogeneities in a way which is accessible to explicit calculations. While the general kinematical arena of loop quantum gravity is based on discrete spatial structures built on arbitrary graphs with possibly high-valent vertices, we will use regular lattices with 6-valent vertices. Regularity of the lattice is implemented by making use of symmetries of the background we are perturbing around: The three independent generators of translational symmetry define lattice directions. In explicit constructions of lattice states, a scale $\ell_0$ will appear which is the coordinate length of lattice links measured in a given, fixed embedding.\footnote{The coordinate length need not be the same for all links, but can be chosen this way without loss of generality.} But this parameter is independent of the quantum variables assigned to each link we will be using, which means that the quantum theory will be defined on ``freely floating'' lattices as in the full theory, respecting diffeomorphism invariance. The scale $\ell_0$ will only become important in the continuum limit, when making contact between the quantum variables and classical continuous fields. This obviously breaks manifest diffeomorphism covariance, just as classical perturbation theory in basic fields rather than gauge-invariant combinations does, since the classical perturbations are written with respect to a background space-time. Compared to the full theory, we are restricting states by assuming regularity and thus allowing, e.g., only unknotted links and vertices of valence at most six. This turns out to be sufficient to include all relevant perturbative degrees of freedom. While the general graphs of loop quantum gravity allow more freedom, the meaning of this freedom is not known and it appears redundant in our application. 
\subsubsection{Holonomies and fluxes} The canonical fields are given by $(A_a^i,E^b_j)$ which are to be turned into operators on a suitable Hilbert space. To set this up, we need to choose a functional representation of states, which is conveniently done in the connection representation where states are functionals of $A_a^i$. According to loop quantum gravity, lattice graphs then label states and determine their expressions as functions of connections: a state associated with a given graph depends on the connection only through holonomies \[ h_e(A)={\cal P}\exp(\smallint_e{\mathrm{d}} t A^i_a\dot{e}^a\tau_i) \] along its edges. Here $\tau_j=-\frac{i}{2}\sigma_j$ are the $SU(2)$-generators in terms of Pauli matrices $\sigma_j$ and ${\cal P}$ denotes path ordering. That those are the basic objects represented on a Hilbert space together with fluxes \begin{equation}\label{flux} F_S(E)=\int_S{\mathrm{d}}^2y E^a_i\tau^in_a \end{equation} for surfaces $S$ with co-normal $n_a$ is the basic assumption of loop quantum gravity \cite{LoopRep}. Our corrections to cosmological perturbation equations will be implications of this fact and thus test the theory directly. Using the perturbative form of $A_a^i$, we can split off perturbatively the non-diagonal part (composed of spin connection components) in an expansion and exploit the diagonality of the remaining part to obtain $h_{v,I}=\exp (\gamma\tau_I\int_{e_{v,I}}{\mathrm{d}} t \tilde{k}_I(e_{v,I}(t)))$. Similarly, fluxes will be of the form $F_{v,I}=\int_{S_{v,I}}{\mathrm{d}}^2y \tilde{p}^I(y)$. A lattice link starting at a vertex $v$ in direction $I$ in a fixed orientation is denoted $e_{v,I}$, and a lattice plaquette transversal to this edge and centered at its midpoint as $S_{v,I}$. (Here and in the following set-up we closely follow \cite{InhomLattice} to which we refer the reader for more details; note, however, that $\gamma$ has been absorbed there in $\tilde{k}_I$.) 
Matrix elements of the variables $h_{v,I}$ together with $F_{v,I}$ form the basic objects of loop quantum gravity in this setting. They are thus elementary degrees of freedom, comparable to atoms in condensed matter. Classical fields will, as we display in detail later, emerge from these objects in suitable regimes and limits only. Even in such regimes where one can recover the usual metric perturbations there will in general be correction terms, examples of which we aim to compute below. Correction functions will then also depend on the basic objects $h_{v,I}$ and $F_{v,I}$ directly, which can be expressed through the classical metric perturbations in a secondary step.\footnote{This already indicates that difficulties which were sometimes perceived in isotropic models, where corrections seemed to depend on the scale factor whose total scale is undetermined, do not occur in this inhomogeneous setup.} To recover the correct semiclassical behavior one has to make sure that effective equations of motion can indeed be written in a form close to the classical ones. Since classical Hamiltonians are local functionals of extrinsic curvature and densitized triad components, it must then be possible to approximate the non-local, integrated objects $h_{v,I}$ and $F_{v,I}$ by local values of $\tilde{k}_I$ and $\tilde{p}^I$ evaluated at single points. This is indeed possible if we assume that $\tilde{k}_I$ is approximately constant along any edge, whose coordinate length is $\ell_0=\int_{e_{v,I}}{\mathrm{d}} t$. 
We can then write \begin{equation}\label{hol} h_{v,I}=\exp (\smallint_{e_{v,I}}{\mathrm{d}} t \gamma\tilde{k}_I\tau^I)\approx \cos(\ell_0\gamma\tilde{k}_{I}(v+I/2)/2)+ 2\tau_I\sin(\ell_0\gamma\tilde{k}_{I}(v+I/2)/2) \end{equation} where $v+I/2$ denotes, in a slight abuse of notation, the midpoint of the edge, which we use as the most symmetric relation between holonomies and continuous fields, and \begin{equation} \label{ScalarFlux} F_{v,I}= \int_{S_{v,I}} \tilde{p}^I(y){\mathrm{d}}^2y\approx \ell_0^2\tilde{p}^I(v+I/2) \end{equation} (note that the surface $S_{v,I}$ is defined to be centered at the midpoint of the edge $e_{v,I}$). This requires the lattice to be fine enough, which will be true in regimes where fields are not strongly varying. For more general regimes this assumption has to be dropped and non-local objects appear even in effective approximations since a function $\tilde{k}_I$ will be underdetermined in terms of the $h_{v,I}$. Since the recovered classical fields must be continuous, this means that they can arise only if quantizations of $h_{v,I}$ and $F_{v,I}$, respectively, for nearby lattice links do not have too strongly differing expectation values in a semiclassical state. If this is not satisfied, continuous classical fields can only be recovered after a process of coarse graining as we will briefly discuss in Sec.~\ref{Coarse}. In addition to the assumption of slowly varying fields on the lattice scale, we have also made use of the diagonality of extrinsic curvature which allows us to evaluate the holonomy in a simple way without taking care of the factor ordering of su(2)-values along the path. We can thus re-formulate the theory in terms of U(1)-holonomies \begin{equation} \eta_{v,I} = \exp(i\smallint_{e_{v,I}}{\mathrm{d}} t \gamma\tilde{k}_I/2)\approx \exp(i\ell_0 \gamma\tilde{k}_I(v+I/2)/2) \end{equation} along all lattice links $e_{v,I}$. 
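For constant $\tilde{k}_I$ along an edge, the path-ordered exponential collapses to an ordinary matrix exponential, and the closed form in (\ref{hol}) follows from $\tau_I^2=-\frac{1}{4}{\mathbb 1}$. A quick numerical check (our own illustration; the value of $\theta=\ell_0\gamma\tilde{k}_I$ is arbitrary):

```python
import numpy as np

# Pauli matrices and SU(2) generators tau_j = -(i/2) sigma_j
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [-0.5j * s for s in sigma]

def expm(A, terms=40):
    """Matrix exponential via truncated power series (fine for 2x2)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

theta = 0.7  # theta = ell_0 * gamma * k_I(v + I/2), assumed constant on the edge
for t in tau:
    h = expm(theta * t)
    h_closed = np.cos(theta / 2) * np.eye(2) + 2 * np.sin(theta / 2) * t
    assert np.allclose(h, h_closed)

# U(1) reduction: the eigenvalues of h are the phases exp(+-i theta/2)
eigs = np.linalg.eigvals(expm(theta * tau[2]))
assert np.allclose(sorted(eigs.imag), sorted([np.sin(-theta / 2), np.sin(theta / 2)]))
```

The eigenphases $e^{\pm i\theta/2}$ in the last step are exactly what the U(1)-holonomies $\eta_{v,I}$ retain after the reduction to the diagonal sector.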
On the lattice, a basis of all possible states is then given by specifying an integer label $\mu_{v,I}$ for each edge starting at vertex $v$ in direction $I$ and defining \begin{equation}\label{hol_action} \langle \tilde{k}(x)|\ldots,\mu_{v,I},\ldots\rangle:= \prod_{v,I} \exp (i\mu_{v,I}\smallint_{e_{v,I}}{\mathrm{d}} t \gamma\tilde{k}_I/2) \end{equation} as the functional form of the state $|\ldots,\mu_{v,I},\ldots\rangle$ in the $k$-representation. The form of the states is a consequence of the representation of holonomies. States are functions of U(1)-holonomies, and any such function can be expanded in terms of irreducible representations which for U(1) are just integer powers. This would be more complicated if we allowed all possible, also non-diagonal, curvature components as one is doing in the full theory. In such a case, one would not be able to reduce the original SU(2)-holonomies to simple phase factors and more complicated multiplication rules would have to be considered. In particular, one would have to make sure that matrix elements of holonomies are multiplied with each other in such a way that functions invariant under SU(2)-gauge rotations result \cite{RS:Spinnet}. This requires additional vertex labels which we do not need in the perturbative situation. For the same reason we have simple multiplication operators given by holonomies associated with lattice links, \begin{equation} \hat{\eta}_{v,I}|\ldots,\mu_{v',J},\ldots\rangle = |\ldots,\mu_{v,I}+1,\ldots\rangle\,. \end{equation} There are also derivative operators with respect to $\tilde{k}_I$, quantizing the conjugate triad components. Just as holonomies are obtained by integrating the connection or extrinsic curvature, densitized triad components are integrated on surfaces, then called fluxes (\ref{flux}), before they can be quantized. 
For a surface $S$ of lattice plaquette size intersecting a single edge $e_{v,I}$ outside a vertex, we have \begin{equation} \hat{F}_{v,I} |\ldots,\mu_{v',J},\ldots\rangle= 4\pi\gamma\ell_{\mathrm P}^2\mu_{v,I} |\ldots,\mu_{v',J},\ldots\rangle \end{equation} or \begin{equation} \label{FluxVert} \hat{{\cal F}}_{v,I} |\ldots,\mu_{v',J},\ldots\rangle= 2\pi\gamma\ell_{\mathrm P}^2(\mu_{v-I,I}+\mu_{v,I}) |\ldots,\mu_{v',J},\ldots\rangle \end{equation} if the intersection happens to be at the vertex. The Planck length $\ell_{\rm P}=\sqrt{G\hbar}$ arises through a combination of $G$ from the basic Poisson brackets and $\hbar$ from a quantization of momenta as derivative operators. Here, in a notation similar to the above, $v-I$ denotes the vertex preceding $v$ along direction $I$ in the given orientation. We will later call such labels simply $\mu_{v-I,I}=\mu_{v,-I}$ as illustrated in Fig.~\ref{eminusI}. These operators quantize integrated triad components (\ref{ScalarFlux}). This shows that all basic degrees of freedom relevant for us can be implemented without having to use the more involved SU(2)-formulation. \begin{figure} \centerline{\includegraphics[width=8cm,keepaspectratio]{eminusI}} \caption{Edges adjacent to a vertex $v$ in a given direction $I$. For the edge oriented oppositely to the chosen one for direction $I$, the labels ``$v-I,I$'' and ``$v,-I$'' can be chosen interchangeably, defining in this way negative values for the label $I$. \label{eminusI}} \end{figure} Note that even for scalar perturbations which classically have triads proportional to the identity, distinct $\tilde{p}^I(v)$-components have to be treated differently at the quantum level. One cannot assume all edge labels around any given vertex to be identical while still allowing inhomogeneity. Moreover, operators require local edge holonomies which change one edge label $\mu_{v,I}$ independently of the others. 
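These operator actions can be mimicked by a simple data structure: a basis state is a map from links $(v,I)$ to integer labels $\mu_{v,I}$, holonomies raise one label by one, and fluxes act diagonally. The following toy model (the function names and the one-dimensional periodic lattice are our own choices; $\gamma=\ell_{\mathrm P}=1$) is a sketch, not part of the quantization itself:

```python
import numpy as np

# A basis state |..., mu_{v,I}, ...> as a dictionary {(vertex, direction): mu}.
GAMMA, L_P2 = 1.0, 1.0  # Barbero-Immirzi parameter and Planck length squared

def eta_hat(state, v, I):
    """Link holonomy operator: raises the label mu_{v,I} by one."""
    new = dict(state)
    new[(v, I)] = new.get((v, I), 0) + 1
    return new

def flux_edge(state, v, I):
    """F_hat eigenvalue for a plaquette cutting edge (v, I) outside a vertex."""
    return 4 * np.pi * GAMMA * L_P2 * state.get((v, I), 0)

def flux_vertex(state, v, I, shift):
    """Vertex flux eigenvalue, Eq. (FluxVert); shift(v, I) is the vertex v - I."""
    return 2 * np.pi * GAMMA * L_P2 * (state.get((shift(v, I), I), 0)
                                       + state.get((v, I), 0))

# Periodic chain of four vertices, one direction (I = 0):
shift = lambda v, I: (v - 1) % 4
state = {(v, 0): 2 for v in range(4)}
raised = eta_hat(state, 1, 0)
```

For the homogeneous labels chosen here, $\hat{\cal F}_{v,I}$ has eigenvalue $2\pi\gamma\ell_{\mathrm P}^2(2+2)=8\pi$, and acting with $\hat{\eta}_{1,0}$ changes only the single label $\mu_{1,0}$, in accordance with the locality of edge holonomies emphasized above.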
Similarly, corresponding operators $\hat{F}_{v,I}$ and $\hat{F}_{v,J}$ ($I\neq J$) act on different links coming out of a vertex $v$ and thus have independent eigenvalues in general. To pick a regime of scalar modes, one will choose a state whose edge fluxes are peaked close to the same triad value in all directions and whose holonomies are peaked close to the same exponentiated extrinsic curvature values, thus giving effective equations for a single scalar mode function. But this restriction to equal values in different directions cannot be imposed at the level of operators. These basic operators $h_{v,I}$ and $F_{v,I}$ will appear in more complicated ones and in particular in the constraints. As we can see, they do not depend directly on the classical fields $\tilde{p}^I(x)$ and $\tilde{k}_I(x)$ but, in local approximations, on quantities $p^I(x):=\ell_0^2\tilde{p}^I(x)$ and $k_I(x):=\ell_0\tilde{k}_I(x)$ rescaled by factors of the lattice link size $\ell_0$. This re-scaling occurring automatically in our basic variables has two advantages: It makes the basic variables independent of coordinates and provides them unambiguously with dimensions of length squared for $p$ while $k$ becomes dimensionless. (Otherwise, one could choose to put dimensions in coordinates or in metric components which would make arguments for the expected relevance of quantum corrections more complicated.) This also happens in homogeneous models \cite{Bohr}, but in that case, especially in spatially flat models, there was sometimes confusion about the meaning of even the re-scaled variables. This is because the scale factor, for instance, as the isotropic analog of $\tilde{p}^I$ could be multiplied by an arbitrary constant and thus the total scale would have no meaning even when multiplied by the analog of $\ell_0^2$. Thus, correction functions depending on this quantity in an isotropic model require an additional assumption on how the total scale is fixed. This is not necessary in inhomogeneous situations. 
Here, the quantities $p^I$ will appear in quantum corrections and their values determine unambiguously when corrections become important. The corresponding fluxes are the relevant quantum excitations, and when they are close to the Planck scale quantum corrections will unambiguously become large. On the other hand, if the $p^I$ become too large, approaching the Hubble length squared or a typical wave length squared, discreteness effects become noticeable even in usual physics. As we will see in more detail in Sec.~\ref{magnitude}, this allows one to estimate orders of magnitudes of corrections to be expected even without detailed calculations \cite{InhomEvolve}. Although the size of the $p^I$ is coordinate independent, unlike the value of the scale factor, say, its relation to the classical field depends on $\ell_0$ and thus on the lattice size. It may thus appear that $p^I$ is coordinate dependent, but this is clearly not the case because it derives directly from a coordinate independent flux. The lattice values are defined independently of coordinates, just by attaching labels $\mu_{v,I}$ to lattice links. Once they have been specified and the lattice has been embedded in a spatial manifold, their relation to classical metric fields can be determined. It is, of course, the classical fields such as metric components which depend on the coordinate choice when they are tensorial. The relation between $p^I$ and the classical metric depends on the lattice spacing measured in coordinates because the representation of the classical metric itself depends on which coordinates have been chosen. Thus, our basic quantities are coordinate independent and coordinates enter only when classical descriptions are recovered in a semiclassical limit. \subsubsection{Volume} An important ingredient to construct constraints is the volume operator. 
Using the classical expression $V=\int{\mathrm{d}}^3x \sqrt{|\tilde{p}^1\tilde{p}^2\tilde{p}^3|}$ we introduce the volume operator $\hat{V}=\sum_{v} \prod_{I=1}^3 \sqrt{|\hat{\cal F}_{v,I}|}$ which, using (\ref{FluxVert}), has eigenvalues \begin{equation}\label{V_action} V(\{\mu_{v,I}\})= \left(2\pi\gamma\ell_{\mathrm P}^2\right)^{3/2} \sum_{v} \prod_{I=1}^3\sqrt{ |\mu_{v,I}+\mu_{v-I,I}|}\,. \end{equation} While densitized triad components are directly implemented through basic fluxes, the process of quantizing triad or co-triad components is more indirect. While they are uniquely determined from the densitized triad classically, one needs to take inverse components. With flux operators having discrete spectra containing zero, this is not possible in a direct manner at the quantum level. Nevertheless \cite{QSDI}, one can construct operators for co-triad components based on the classical identity \begin{equation} \label{cotriad} \left\{A_a^i,\int\sqrt{|\det E|}\mathrm{d}^3x\right\}= 2\pi\gamma G \epsilon^{ijk}\epsilon_{abc} \frac{E^b_jE^c_k}{\sqrt{|\det E|}} =4\pi\gamma G e_a^i\,. \end{equation} On the left hand side, no inverse appears and we just need to express connection components in terms of holonomies, use the volume operator and replace the Poisson bracket by a commutator. Resulting operators are then of the form $h_e[h_e^{-1},\hat{V}]$ for SU(2)-holonomies along suitable edges $e$, e.g.\ \begin{equation} \label{commPoiss} \mathop{\mathrm{tr}}\nolimits(\tau^ih_{v,I}[h_{v,I}^{-1},\hat{V}_v])\sim -\frac{1}{2}i\hbar\ell_0 \widehat{\{A_a^i,V_v\}} \end{equation} for $h_{v,I}$ as in (\ref{hol}). This shows that factors of the link size $\ell_0$ are needed in reformulating Poisson brackets through commutators with holonomies, which, as will become clear below, are provided by the discretized integration measure in spatial integrations such as they occur in the Hamiltonian constraint. 
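The eigenvalue formula (\ref{V_action}) is straightforward to evaluate on small lattices. The following sketch (our own illustration, with $\gamma=\ell_{\mathrm P}=1$ and a periodic $2^3$ lattice as an arbitrary example) reproduces $V=N^3(4\pi\gamma\ell_{\mathrm P}^2)^{3/2}$ for homogeneous labels $\mu_{v,I}=1$:

```python
import itertools
import numpy as np

def volume_eigenvalue(mu, vertices, shift, gamma=1.0, l_P2=1.0):
    """Eigenvalue (2 pi gamma l_P^2)^(3/2) sum_v prod_I sqrt|mu_{v,I} + mu_{v-I,I}|
    of Eq. (V_action); shift(v, I) returns the vertex preceding v in direction I."""
    pref = (2 * np.pi * gamma * l_P2) ** 1.5
    total = 0.0
    for v in vertices:
        prod = 1.0
        for I in range(3):
            prod *= np.sqrt(abs(mu.get((shift(v, I), I), 0) + mu.get((v, I), 0)))
        total += prod
    return pref * total

# Periodic 2 x 2 x 2 lattice with all labels equal to one:
vertices = list(itertools.product(range(2), repeat=3))
shift = lambda v, I: tuple((v[d] - (d == I)) % 2 for d in range(3))
mu = {(v, I): 1 for v in vertices for I in range(3)}
vol = volume_eigenvalue(mu, vertices, shift)
```

Each vertex contributes $(2\pi\gamma\ell_{\mathrm P}^2)^{3/2}\cdot 2^{3/2}$ here; note that a vertex with $\mu_{v,I}+\mu_{v-I,I}=0$ in some direction contributes zero, which illustrates why inverse triad components require the indirect construction via (\ref{cotriad}).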
\section{Hamiltonian constraint} \label{s:HamConstr} Holonomies, the volume operator and commutators between them are finally used to define Hamiltonian constraint operators. We will briefly describe the general procedure and then derive resulting correction terms in effective equations for both gravitational and matter contributions to the constraint. \subsection{Gravitational part} The gravitational contribution to the Hamiltonian constraint is given by \begin{eqnarray}\label{HamConstr} H[N] &=& \frac{1}{16\pi G} \int_{\Sigma} \mathrm{d}^3x N\left|\det E\right|^{-1/2} \left(\epsilon_{ijk}F_{ab}^iE^a_jE^b_k\right.\\ && -\left.2(1+\gamma^{-2}) (A_a^i-\Gamma_a^i)(A_b^j-\Gamma_b^j)E^{[a}_iE^{b]}_j\right)\nonumber \end{eqnarray} in terms of Ashtekar variables with the curvature $F^i_{ab}=2\partial_{[a}A^i_{b]}+\epsilon^{ijk}A_a^jA_b^k$. The second term, quadratic in extrinsic curvature components $K_a^i=\gamma^{-1}(\Gamma_a^i-A_a^i)$, is in general more complicated to deal with due to the appearance of spin connection components as functionals of $E^a_i$ through (\ref{SpinConnFull}). One usually starts with quantizing the first term and then uses the identity \cite{QSDI} \begin{equation} \label{Kcomm} K_a^i=\gamma^{-1}(A_a^i-\Gamma_a^i) \propto \left\{A_a^i,\!\left\{\int{\mathrm{d}}^3x F_{ab}^i \frac{\epsilon^{ijk}E^a_jE^b_k}{\sqrt{|\det E|}},\int{\sqrt{|\det E|}}\mathrm{d}^3x\right\}\right\} \end{equation} which allows one to express the second contribution in terms of the first. In the first term, then, the densitized triad components including the inverse determinant can be quantized using (\ref{cotriad}), and the curvature components $F_{ab}^i$ can be obtained through holonomies around appropriately chosen small loops \cite{RS:Ham}. On our regular lattices, natural loops based at a given vertex are provided by the adjacent lattice plaquettes. 
After replacing the Poisson brackets by commutators, the resulting first part of the Hamiltonian operator, $\hat{H}^{(1)}=\sum_v\hat{H}^{(1)}_v$, has non-zero action only at vertices of a lattice state, each contribution being of the form \begin{eqnarray} \label{Hone} \hat{H}_v^{(1)} &=& \frac{1}{16\pi G}\frac{2i}{8\pi\gamma G\hbar}\frac{N(v)}{8} \sum_{IJK}\sum_{\sigma_I\in\{\pm1\}} \sigma_1\sigma_2\sigma_3 \epsilon^{IJK}\\ &&\times\mathop{\mathrm{tr}}\nolimits(h_{v,\sigma_II}(A) h_{v+\sigma_II,\sigma_JJ}(A) h_{v+\sigma_JJ,\sigma_II}(A)^{-1} h_{v,\sigma_JJ}(A)^{-1} h_{v,\sigma_KK}(A) [h_{v,\sigma_KK}(A)^{-1},\hat{V}]) \nonumber \end{eqnarray} summed over all non-planar triples of edges in all possible orientations. (There are 48 terms in the sum, but we need to divide only by 8 since a factor of six arises in the contraction of basic fields occurring in the constraint.) The combination \[ h_{v,\sigma_II}(A) h_{v+\sigma_II,\sigma_JJ}(A) h_{v+\sigma_JJ,\sigma_II}(A)^{-1} h_{v,\sigma_JJ}(A)^{-1} \] gives a single plaquette holonomy with tangent vectors $e_{v,\sigma_II}$ and $e_{v,\sigma_JJ}$ as illustrated in Fig.~\ref{Plaquette}. \begin{figure} \centerline{\includegraphics[width=6cm,keepaspectratio]{Plaquette}} \caption{Elementary lattice plaquette with holonomies around a closed loop. \label{Plaquette}} \end{figure} When expanded in $\ell_0$ assuming sufficiently small edges, the leading term is of the order $\ell_0^3$ which automatically results in a Riemann sum representation of the first term in (\ref{HamConstr}). This justifies $\hat{H}^{(1)}$ as a quantization of the classical expression. As seen from the argument, one needs to assume that the lattice is sufficiently fine for classical values of the fields $A_a^i$. Thus, there are states corresponding to coarser lattices on which stronger quantum corrections can result. As usual, semiclassical behavior is not realized on all states but only on a select class. 
For any low-curvature classical configuration, one can make sure that a chosen lattice leads only to small quantum corrections such that sufficiently many semiclassical states exist. \subsubsection{Quantization} The required calculations for SU(2) holonomies and their products usually do not allow explicit diagonalizations of operators. But some physical regimes allow one to decouple the matrix components at least approximately. This is realized for several symmetric models and also for perturbations at least of some metric modes around them. In particular, after splitting off the non-diagonal part of the connection in the perturbative expansion considered here, we can take the trace explicitly and reduce the expression to U(1). Since the diagonal part of the Ashtekar connection for our perturbations is contributed entirely by extrinsic curvature, we are effectively using ``holonomies'' computed for extrinsic curvature rather than the Ashtekar connection. Although extrinsic curvature is a tensor rather than a connection, it is meaningful to use it in expressions resembling holonomies, denoted here simply as $h_{v,I}$, on a given metric background. This has the additional advantage of easily combining the remaining quadratic terms in $K_a^i$ with the first term of the constraint (\ref{HamConstr}) without using squares of multiple commutators from quantizing (\ref{Kcomm}). 
Writing \begin{eqnarray} \label{Curv}% F_{ab}^i &=& 2\partial_{[a}\Gamma_{b]}^i +2 \gamma\partial_{[a} K_{b]}^i+\epsilon_{ijk}\left(\Gamma_a^j+\gamma K_a^j\right)\left(\Gamma_b^k+\gamma K_b^k\right) \nonumber \\ &=&2\partial_{[a}\Gamma_{b]}^i +2 \gamma\partial_{[a} K_{b]}^i+\gamma \epsilon_{ijk} \left(\Gamma_a^j K_b^k+\Gamma_b^k K_a^j \right)+\epsilon_{ijk}\left(\Gamma_a^j\Gamma_b^k+\gamma^2 K_a^j K_b^k\right) \end{eqnarray}% we obtain a term $2 \gamma\partial_{[a} K_{b]}^i+\gamma^2 K_a^j K_b^k$ resembling ``curvature'' $F_{ab}^i(K)$ as computed from extrinsic curvature alone, a curvature term of the spin connection as well as cross-terms $\epsilon_{ijk} \left(\Gamma_a^j K_b^k+\Gamma_b^k K_a^j \right)$. In our context due to the diagonality conditions the cross-terms disappear \cite{HamPerturb} and we only have the $K$-curvature term and spin connection terms to quantize. The first term can then be combined with the term quadratic in $K_a^i$ in (\ref{HamConstr}), removing the need to use double commutators. We denote this contribution to the constraint as \begin{equation} H_K[N]:= \frac{1}{8\pi G} \int_{\Sigma} \mathrm{d}^3x N \left|\det E\right|^{-1/2} \left(\epsilon_{ijk} \gamma\partial_aK_b^i -K_a^j K_b^k \right)E^{[a}_jE^{b]}_k \end{equation} (since also $\partial_aK_b^i$ drops out as used later, the constraint is $\gamma$-independent) and the remaining term as \begin{equation} H_{\Gamma}[N]:=\frac{1}{8\pi G} \int_{\Sigma} \mathrm{d}^3x N \left|\det E\right|^{-1/2} \left(\epsilon_{ijk}\partial_a\Gamma_b^i+ \Gamma_a^j \Gamma_b^k\right)E^{[a}_jE^{b]}_k\,. \end{equation} Then, $H[N]=H_K[N]+H_{\Gamma}[N]$ is the constraint for scalar modes in longitudinal gauge. Both terms can rather easily be dealt with, using holonomies around a loop for the first term (this subsection) and direct quantizations of $\Gamma_a^i$ for the second (Sec.~\ref{SpinConn}). 
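The regrouping of the quadratic terms in (\ref{Curv}) is purely algebraic. A quick numerical check with random components (our own sketch; the derivative terms are omitted since they appear identically on both lines):

```python
import numpy as np

rng = np.random.default_rng(1)

# Levi-Civita symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

Gam = rng.normal(size=(3, 3))  # spin connection components Gamma_a^j
K = rng.normal(size=(3, 3))    # extrinsic curvature components K_a^j
g = 0.2375                     # Barbero-Immirzi parameter, value arbitrary here

def quad(X, Y):
    """epsilon_{ijk} X_a^j Y_b^k, contracted over the internal indices."""
    return np.einsum('ijk,aj,bk->iab', eps, X, Y)

# Left side: epsilon_{ijk} (Gamma + gamma K)_a^j (Gamma + gamma K)_b^k
lhs = quad(Gam + g * K, Gam + g * K)
# Right side: Gamma-Gamma, cross, and gamma^2 K-K terms as grouped in (Curv)
rhs = quad(Gam, Gam) + g * (quad(Gam, K) + quad(K, Gam)) + g**2 * quad(K, K)
assert np.allclose(lhs, rhs)
```

The middle contribution is the cross term $\gamma\epsilon_{ijk}(\Gamma_a^jK_b^k+\Gamma_b^kK_a^j)$; it is this piece that drops out under the diagonality conditions, leaving only the $K$- and $\Gamma$-parts that are quantized separately in $H_K$ and $H_{\Gamma}$.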
The split-off spin connection components are thus quantized separately, which is possible in the perturbative treatment on a background, and then added on to the operator. Note also that as a further simplification the derivative term of extrinsic curvature disappears from the constraint for diagonal variables as assumed here. This will automatically happen also from holonomies around loops. We emphasize that the quantization procedure followed is special to the given context of scalar perturbations on a flat isotropic background. Nevertheless, it mimics essential steps of the full constructions as discussed in more detail in Sec.~\ref{QuantProc}. Its main advantage is that it allows explicit derivations of all necessary terms and thus explicit effective equations to be confronted with observations. Moreover, it is far from clear that the constructions currently done in the full setting will remain unchanged with further developments. We thus evaluate the key features of the scheme without paying too close attention to current details. Following the general procedure, we thus obtain vertex contributions \begin{eqnarray}\label{Kcontrib}% \hat{H}_{K,v}&=& -\frac{1}{16\pi G}\frac{2i}{8\pi\gamma^3 G\hbar} \frac{N(v)}{8} \sum_{IJK}\sum_{\sigma_I\in\{\pm1\}} \sigma_1\sigma_2\sigma_3 \epsilon^{IJK}\\ &&\times\mathop{\mathrm{tr}}\nolimits \left(h_{v,\sigma_II} h_{v+\sigma_II,\sigma_JJ} h_{v+\sigma_JJ,\sigma_II}^{-1} h_{v,\sigma_JJ}^{-1} h_{v,\sigma_KK} \left[h_{v,\sigma_KK}^{-1},\hat V_v\right]\right)\,.\nonumber \end{eqnarray} As before, $h_{v,I}$ denotes a $K$-holonomy along the edge oriented in the positive $I$-direction and starting at vertex $v$, but we also include the opposite direction $h_{v,-I}$ in the sum to ensure rotational invariance. Note that following our convention, such holonomies are identified with $h_{v-I,I}^{-1}$. In some of the holonomies, $v+I$ is again the vertex adjacent to $v$ in the positive $I$-direction. 
The $\{IJK\}$-summation is taken over all possible orientations of the ${IJ}$-loop and a transversal $K$-direction. Also, for notational brevity, we introduce \begin{equation}% c_{v,I}:=\frac{1}{2}\mathop{\mathrm{tr}}\nolimits(h_{v,I}) \quad,\quad s_{v,I}:=-\mathop{\mathrm{tr}}\nolimits(\tau_{(I)}h_{v,I}) \end{equation}% such that (\ref{hol}) becomes $h_{v,I}=c_{v,I}+2\tau_I s_{v,I}$. In a continuum approximation, we have \begin{equation} \label{contapprox} c_{v,I}=\cos(\gamma k_I(v+I/2)/2) \quad,\quad s_{v,I}=\sin(\gamma k_I(v+I/2)/2) \end{equation}% where $k_I(v)=\ell_0 \tilde k_I(v)$. After substituting this expression into (\ref{Kcontrib}) and making use of the identity\footnote{Here the fundamental representation of $\tau_I$ has been used: $\mathop{\mathrm{tr}}\nolimits({\rm 1\:\!\!\! I})=2, \,\mathop{\mathrm{tr}}\nolimits(\tau_I \tau_J)=-\frac{1}{2}\delta_{IJ}$} (for some fixed $I,J,K$ and numbers $x_i$ and $y_i$) \begin{eqnarray}% \epsilon_{IJK}\mathop{\mathrm{tr}}\nolimits\left[(x_1 {\rm 1\:\!\!\! I}+2y_1\tau_I )(x_2 {\rm 1\:\!\!\! I}+ 2y_2\tau_J)(x_3{\rm 1\:\!\!\! I}+2y_3\tau_K) \right]&=&\epsilon_{IJK} \mathop{\mathrm{tr}}\nolimits (x_1 x_2 x_3 {\rm 1\:\!\!\! 
I} )+8\epsilon_{IJK} \mathop{\mathrm{tr}}\nolimits (y_1 y_2 y_3 \tau_I \tau_J \tau_K) \nonumber \\&=&2 (x_1 x_2 x_3 - y_1 y_2 y_3 \epsilon_{IJK})\epsilon_{IJK}, \nonumber \end{eqnarray}% any one term of the sum in (\ref{Kcontrib}) becomes \begin{eqnarray}\label{one_sum}% &&\frac{i}{8\pi\gamma G\hbar} \mathop{\mathrm{tr}}\nolimits(h_{v,I} h_{v+I,J} h_{v+J,I}^{-1} h_{v,J}^{-1} h_{v,K}[h_{v,K}^{-1},\hat{V}_v])\\ &=&-\epsilon_{IJK} \left\{\left[(c_{v,I} c_{v+J,I}+s_{v,I} s_{v+J,I})c_{v,J} c_{v+I,J}+(c_{v,I} c_{v+J,I}-s_{v,I} s_{v+J,I})s_{v,J} s_{v+I,J}\right]\hat A_{v,K}\right\} \nonumber \\ &&+\epsilon_{IJK}^2 \left\{\left[(c_{v,I} s_{v+J,I}-s_{v,I} c_{v+J,I})s_{v,J} c_{v+I,J}+(s_{v,I} c_{v+J,I}+c_{v,I} s_{v+J,I})c_{v,J} s_{v+I,J}\right]\hat B_{v,K}\right\},\nonumber \end{eqnarray}% where \begin{eqnarray}\label{AB_def}% \hat A_{v,K} &:=& \frac{1}{4\pi i \gamma G\hbar} \left(\hat V_v - c_{v,K} \hat V_v c_{v,K} - s_{v,K}\hat V_v s_{v,K}\right)\,,\nonumber\\ \hat B_{v,K} &:=& \frac{1}{4\pi i \gamma G\hbar} \left(s_{v,K} \hat V_v c_{v,K} - c_{v,K}\hat V_v s_{v,K}\right)\,. \end{eqnarray} In the first line of (\ref{one_sum}), the expression inside the curly braces is symmetric in the indices $I$ and $J$, hence vanishes when contracted with $\epsilon_{IJK}$. 
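As a quick consistency check on the algebra, the footnoted trace identity can be evaluated explicitly in the fundamental representation with $\tau_I=-\frac{i}{2}\sigma_I$: for the fixed assignment $(I,J,K)=(1,2,3)$ it states $\mathop{\mathrm{tr}}\nolimits[\ldots]=2(x_1x_2x_3-y_1y_2y_3)$. A minimal sympy sketch:

```python
import sympy as sp

# Pauli matrices and tau_I = -i sigma_I / 2
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
tau = [-sp.I * s / 2 for s in (s1, s2, s3)]

x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3')
Id = sp.eye(2)

# tr[(x1 + 2 y1 tau_1)(x2 + 2 y2 tau_2)(x3 + 2 y3 tau_3)], with eps_123 = 1
M = (x1 * Id + 2 * y1 * tau[0]) \
    * (x2 * Id + 2 * y2 * tau[1]) \
    * (x3 * Id + 2 * y3 * tau[2])
trace = sp.expand(M.trace())
```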
Therefore only the second line contributes, and the extrinsic curvature part of the gravitational constraint is \begin{eqnarray}\label{K_operator}% \hat{H}_{K,v} &=& \frac{-N(v)}{64\pi\gamma^2 G} \sum_{IJK}\sum_{\sigma_I\in\{\pm1\}} \{[(c_{v,\sigma_II} s_{v+\sigma_JJ,\sigma_II}-s_{v,\sigma_II} c_{v+\sigma_JJ,\sigma_II})s_{v,\sigma_JJ} c_{v+\sigma_II,\sigma_JJ}\nonumber\\ &&\qquad\qquad+(s_{v,\sigma_II} c_{v+\sigma_JJ,\sigma_II}+c_{v,\sigma_II} s_{v+\sigma_JJ,\sigma_II})c_{v,\sigma_JJ} s_{v+\sigma_II,\sigma_JJ} ]\hat B_{v,\sigma_KK}\} \\ &=& \frac{-N(v)}{64\pi\gamma^2 G} \sum_{IJK}\sum_{\sigma_I\in\{\pm1\}} \{[ s_{v,\sigma_II,\sigma_JJ}^- s_{v,\sigma_JJ} c_{v+\sigma_II,\sigma_JJ} + s_{v,\sigma_II,\sigma_JJ}^+ c_{v,\sigma_JJ} s_{v+\sigma_II,\sigma_JJ}]\hat B_{v,\sigma_KK}\}\,, \nonumber \end{eqnarray}% where in the last line trigonometric identities have been used to express products of sines and cosines through \[ s_{v,\sigma_II,\sigma_JJ}^{\pm}:= \sin\left(\frac{\gamma}{2}\left(k_{\sigma_II}(v+\sigma_II/2)\pm k_{\sigma_II}(v+\sigma_JJ+\sigma_II/2)\right)\right)\,. \] In this expression, all functions $k_I$ are, as before, evaluated at midpoints of the corresponding edges. We see that in the homogeneous case the first term in the sum vanishes and the leading contribution is \begin{equation} \label{K_hom} 4\sin (\gamma k_I/2) \cos (\gamma k_I/2) \sin (\gamma k_J/2) \cos (\gamma k_J/2) \hat B_{v,K},\end{equation} in agreement with \cite{HomCosmo}. \subsubsection{Higher curvature corrections} \label{HigherCurv} There are two types of corrections visible from this expression: Using commutators to quantize inverse densitized triad components implies eigenvalues of $\hat{B}_{v,I}$ which differ from the classical expectation at small labels $\mu_{v,I}$. Moreover, using holonomies contributes higher order terms in extrinsic curvature together with higher spatial derivatives when sines and cosines are expanded in small curvature regimes.
We will now compute the next-leading terms of higher powers and spatial derivatives of $\tilde{k}_I(v)$ before dealing with inverse power corrections in the following subsection. First, recall the usual expectation that quantum gravity gives rise to low energy effective actions with higher curvature terms such as $\int{\mathrm{d}}^4x \sqrt{|\det g|} \ell_{\rm P}^2R^2$ or $\int{\mathrm{d}}^4x \sqrt{|\det g|}\ell_{\rm P}^2R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$ added to the Einstein--Hilbert action $\int{\mathrm{d}}^4x\sqrt{|\det g|}R$. Irrespective of details of numerical coefficients, there are two key aspects: The Planck length $\ell_{\rm P}=\sqrt{G\hbar}$ must be involved for dimensional reasons in the absence of any other length scale, and higher spatial as well as time derivatives arise with higher powers of $R_{\mu\nu\rho\sigma}$. In canonical variables, one expects higher powers and higher spatial derivatives of extrinsic curvature and the triad, together with components of the inverse metric necessary to define scalar quantities from higher curvature powers (which forces one to raise indices on the Riemann tensor, for instance). Higher time derivatives, on the other hand, are more difficult to see in a canonical treatment and correspond to the presence of independent quantum variables without classical analog \cite{Karpacz}. Any quantization such as that followed here starts from the purely classical action where $\hbar$ and thus $\ell_{\rm P}$ vanishes. In effective equations of the resulting quantum theory, quantum corrections depending on $\hbar$ will nevertheless emerge. As a first step in deriving such effective equations, we have non-local holonomy terms in a Hamiltonian operator which through its expectation values in semiclassical states will give rise to similar contributions of the same functional form of $k_I(v)$. 
At first sight, however, the expressions above do not agree with expectations from higher curvature actions: One can easily see that in (\ref{K_operator}) there are higher powers of extrinsic curvature by expanding the trigonometric functions, and higher spatial derivatives of extrinsic curvature by Taylor expanding the discrete displacement involved, e.g., in $k_I(v+I/2)$. Moreover, higher spatial derivatives of the triad arise from similar non-local terms in the spin connection contribution $\hat{H}_{\Gamma}$ discussed later. But there are no factors of the Planck length in such higher powers (all factors of $G$ and $\hbar$ are written out explicitly and not ``set equal to one''). In fact, by definition $k_I(v)$ is dimensionless since it is obtained by multiplying the curvature component $\tilde{k}_I(v)$ with $\ell_0$ in which all possible dimensions cancel. Higher power terms here thus do not need any dimensionful prefactor. Moreover, there are no components of the inverse metric (which would be $1/\tilde{p}^{I}(v)$ for our diagonal triads) in contrast to what is required in higher curvature terms. \paragraph{Curvature expansion.} To see how this is reconciled, we expand the Hamiltonian explicitly in $\ell_0$ after writing $k_I=\ell_0\tilde k_I$. This corresponds to a slowly varying field approximation with respect to the lattice size. For the $(+I,+J)$-plaquette, a single term in the sum (\ref{Kcontrib}) becomes \begin{eqnarray} \label{K_expansion} && 2s_{v,I,J}^- s_{v,J} c_{v+I,J}+ 2s_{v,I,J}^+ c_{v,J} s_{v+I,J} = \gamma^2\ell_0^2 \tilde{k}_I\tilde{k}_J+ \frac{1}{2}\gamma^2 \ell_0^3\left(\tilde{k}_I \tilde{k}_{J,J}+\tilde{k}_J \tilde{k}_{I,I}+2\tilde{k}_J \tilde{k}_{I,I}\right)\nonumber\\ && \qquad\qquad+\frac{1}{8}\gamma^2\ell_0^4\left(\tilde k_I \tilde k_{J,JJ} +\tilde k_J \tilde k_{I,II}+4(\tilde k_I\tilde k_{J,II}+\tilde k_I\tilde k_{J,IJ}+\tilde k_{I,I}\tilde k_{J,I}+\tilde k_{I,J}\tilde k_{J,I}) \right.\nonumber \\ && \qquad\qquad\left. 
+2\tilde k_{I,I}\tilde k_{J,J}- \frac{4}{3}\gamma^2\tilde k_I\tilde k_J(\tilde k_I^2+\tilde k_J^2)\right)+ O(\ell_0^5)\,. \end{eqnarray} (Commas on the classical field $\tilde{k}_I$ indicate partial derivatives along a direction given by the following index.) For a fixed direction $K$ there are in total eight terms to be included in the sum (\ref{Kcontrib}). They are obtained from (\ref{K_expansion}) by taking into account the four plaquettes in the $(I,J)$-plane meeting at vertex $v$ (Fig. \ref{4plaquette}) and considering both orientations in which each plaquette can be traversed. While the latter merely boils down to symmetrization over $I$ and $J$, the former requires some care, noting that in the Hamiltonian constraint (\ref{Kcontrib}) $h_{v,-I}$ means $h_{v-I,I}^{-1}$. The contribution (\ref{K_expansion}) corresponds to plaquette 1 of Fig. \ref{4plaquette} and has $\sigma_I=\sigma_J=\sigma_K=1$. Accounting for the overall sign dictated by the $\sigma$-factors, one can obtain the expressions for the three remaining plaquettes 2, 3 and 4 following the recipe below. \[ \begin{array}{c|cccc|l} \mbox{plaquette} & \multicolumn{4}{c|}{\mbox{extrinsic curvature components}} & \mbox{sign}\\\hline (1)& k_I(v+I/2)& k_J(v+I+J/2)& -k_I(v+I/2+J)& -k_J(v+J/2)& \\ (2)& -k_I(v-I/2)& k_J(v-I+J/2)& k_I(v-I/2+J)& -k_J(v+J/2)&(-1)\\ (3)& -k_I(v-I/2)& -k_J(v-I-J/2)& k_I(v-I/2-J)& k_J(v-J/2)&(-1)^2 \\ (4)& k_I(v+I/2)& -k_J(v+I-J/2)& -k_I(v+I/2-J)& k_J(v-J/2)&(-1) \end{array} \] The first column designates a plaquette number, whereas the last one indicates the overall sign factor. The other four columns show the correspondence between the relevant link labels. 
\setlength{\unitlength}{0.01\linewidth}% \setlength{\fboxsep}{0pt}% \setlength{\fboxrule}{1.5pt} \begin{figure} \begin{center} \begin{picture}(20,20) \linethickness{1pt} \put(10,10){\circle*{2}} \put(0,0){\circle*{1}} \put(0,10){\circle*{1}} \put(10,0){\circle*{1}} \put(20,0){\circle*{1}} \put(0,20){\circle*{1}} \put(10,20){\circle*{1}} \put(20,10){\circle*{1}} \put(20,20){\circle*{1}} \put(0,0){\line(1,0){20}} \put(0,0){\line(0,1){20}} \put(0,10){\line(1,0){20}} \put(10,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \put(0,20){\line(1,0){20}} \put(+11,8){$v$} \put(-6,10){$v\!-\!I$} \put(+21,10){$v\!+\!I$} \put(7,-3){$v\!-\!J$} \put(7,21){$v\!+\!J$} \put(+21,20){$v\!+\!\!I\!\!+\!\!J$} \put(+21,0){$v\!+\!\!I\!\!-\!\!J$} \put(-8,20){$v\!\!-\!\!I\!\!+\!\!J$} \put(-8,0){$v\!\!-\!\!I\!\!-\!\!J$} \put(14.5,14){\large{\bf 1}} \put(4.5,14){\large{\bf 2}} \put(4.5,4){\large{\bf 3}} \put(14.5,4){\large{\bf 4}} \put(10,10){\vector(1,0){6}} \put(10,10){\vector(-1,0){6}} \put(20,10){\vector(0,1){6}} \put(20,10){\vector(0,-1){6}} \put(0,10){\vector(0,1){6}} \put(0,10){\vector(0,-1){6}} \put(0,0){\vector(1,0){6}} \put(0,20){\vector(1,0){6}} \put(10,0){\vector(0,1){6}} \put(10,20){\vector(0,-1){6}} \put(20,0){\vector(-1,0){6}} \put(20,20){\vector(-1,0){6}} \end{picture} \end{center} \caption{Four plaquettes adjacent to vertex $v$ in the $(I,J)$-plane. The arrows indicate the directions in which the relevant holonomies are traversed.
\label{4plaquette}} \end{figure} After the symmetrization over all four plaquettes (traversed in both directions), the cubic terms drop out \begin{eqnarray} \label{K_ave} \gamma^2\ell_0^2 \tilde{k}_I\tilde{k}_J &\!\!-\!\!&\frac{\gamma^4 \ell_0^4}{6} \tilde k_I \tilde k_J (\tilde k_I^2+\tilde k_J^2) \\&\!\!+\!\!&\frac{\gamma^2 \ell_0^4}{8}\left(\tilde k_I \tilde k_{J,JJ} +\tilde k_J \tilde k_{I,II}+2(\tilde k_I\tilde k_{J,II}+\tilde k_J\tilde k_{I,JJ}+\tilde k_{I,I}\tilde k_{J,I}+\tilde k_{I,J}\tilde k_{J,J}) \right) + O(\ell_0^5)\,.\nonumber \end{eqnarray} Note that the link labels $\tilde k$ were introduced as values of the extrinsic curvature components evaluated at midpoints of edges in the continuum approximation (\ref{contapprox}) of our basic non-local variables. The expression above is written in terms of just two components $\tilde k_I(v)$ and $\tilde k_J(v)$ (and their partial spatial derivatives) Taylor expanded around the vertex $v$. The first term, when combined with $\hat{B}_{v,K}$ and summed over all triples $IJK$, reproduces the correct classical limit of the constraint $H_K$. This limit is obtained in two steps: we first performed the continuum approximation by replacing holonomies with mid-point evaluations of extrinsic curvature components. This would still give us a non-local Hamiltonian since each vertex contribution now refers to evaluations of the classical field at different points. In a second step we then Taylor expanded these evaluations around the central vertex $v$, which gives a local result and corresponds to a further, slowly-varying field approximation. \paragraph{Comparison with higher curvature terms.} Here, the factor $\ell_0^2$ in the leading term together with a factor $\ell_0$ from $\hat{B}_{v,K}$ through (\ref{commPoiss}) combines to give the Riemann measure of the classical integral. Higher order terms, however, come with additional factors of $\ell_0$ in (\ref{K_ave}) which are not absorbed in this way. 
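As a check on the expansion, the homogeneous limit of (\ref{K_ave}) can be reproduced symbolically: for constant $\tilde k_I$ the plaquette contribution collapses to $\sin(\gamma\ell_0\tilde k_I)\sin(\gamma\ell_0\tilde k_J)$, whose series in $\ell_0$ must match the non-derivative terms of (\ref{K_ave}). A sympy sketch:

```python
import sympy as sp

l0, g, kI, kJ = sp.symbols('ell0 gamma kI kJ', positive=True)

# Homogeneous plaquette contribution: s^- vanishes and the holonomy
# product collapses to sin(gamma*l0*kI) * sin(gamma*l0*kJ)
expr = sp.sin(g * l0 * kI) * sp.sin(g * l0 * kJ)

# Expand in the lattice scale l0 (slowly varying field approximation)
series = sp.series(expr, l0, 0, 6).removeO()

# Non-derivative terms of the symmetrized expansion (K_ave)
expected = g**2 * l0**2 * kI * kJ \
    - sp.Rational(1, 6) * g**4 * l0**4 * kI * kJ * (kI**2 + kJ**2)
```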
The result is certainly independent of coordinates since the whole construction (\ref{K_operator}) in terms of $k_I$ is coordinate independent. But for a comparison with higher curvature terms we have to formulate corrections in terms of $\tilde{k}_I$ and $\tilde{p}^I$ as these are the components of classical extrinsic curvature and densitized triad tensors. Higher order terms in the expansion are already formulated with $\tilde{k}_I$ in coordinate independent combinations with $\ell_0$-factors. It remains to interpret the additional $\ell_0$ factors appropriately for a comparison with low energy effective actions. This can be done quite simply in a way which removes the above potential discrepancies between our expansions and higher curvature terms in low energy effective actions. We simply use (\ref{ScalarFlux}) to write $\ell_0^2=p^I/\tilde{p}^I$ which is the only well-defined possibility to express $\ell_0$ in terms of the fields. Thus, inverse metric components $1/\tilde{p}^I$ directly occur in combination with $\tilde{k}_J$ factors as required for higher curvature terms. The fact that the cubic term in $\ell_0$ in (\ref{K_ave}) drops out is also in agreement with higher curvature corrections since in that case only even powers of the length scale $\ell_{\rm P}$ occur. Moreover, there are now factors of $p^I$ multiplying the corrections. These are basic variables of the quantum theory determining the fundamental discreteness. Thus, factors of the Planck length occurring in low energy effective actions are replaced by the state specific quantities $p^I$. While the Planck length $\ell_{\rm P}=\sqrt{G\hbar}$ is expected to appear for dimensional reasons without bringing in information about quantum gravity (it can just be computed using classical gravity for $G$ and quantum mechanics for $\hbar$), the $p^I$ are determined by a state of quantum gravity. 
If expressed through labels $\mu_{v,I}$, the Planck length also appears, but it can be enlarged when $\mu_{v,I}>1$. Moreover, the lattice labels are dynamical (and in general inhomogeneous) and can thus change in time in contrast to $\ell_{\rm P}$. Although the form of corrections is analogous to those for low energy effective actions, the conceptual as well as dynamical appearance of correction terms is thus quite different. The terms considered so far could not give rise to higher time derivatives of the spatial metric. In general, higher time derivatives describe the effect of quantum variables (\ref{QuantVars}) of the field theory, which appear in the expectation value of the Hamiltonian constraint in a generic state. Quantum variables are thus, in a certain sense, analogous to higher time derivatives in effective actions \cite{Karpacz}, which indicates that the correction terms they imply should combine with those in (\ref{K_ave}) obtained by expanding sines and cosines to higher powers of space-time curvature components. All corrections of these types should thus be considered together since they will eventually be mixed up despite their different derivations. A computation of terms containing quantum variables requires more detailed information about the expectation value of the constraint operator in an arbitrary state. These terms are thus more difficult to compute, which also makes an interpretation of the remaining higher curvature terms alone, especially concerning their possible covariance, more difficult.\footnote{It is sometimes tempting to ``sum the whole perturbation series'' of higher order terms simply by using the left hand side of (\ref{K_expansion}) directly in effective equations without an expansion. However, this is in general not a consistent approximation since arbitrarily small higher order terms are included but other types of corrections such as higher time derivatives or quantum variables are completely ignored.
There is currently only one known model where the procedure is correct since all quantum variables have been shown to decouple from the expectation values \cite{BouncePert}. But this model, a free isotropic scalar in a certain factor ordering of the constraint operator, is very special. Under modifications such as a scalar potential quantum variables do not decouple and their motion implies further correction terms in effective equations not captured in the trigonometric functions arising from holonomies.} From now on we will thus focus on corrections coming from commutators $\hat{B}_{v,K}$ to quantize inverse powers which are independent of the higher order corrections and even give rise to non-perturbative terms. Moreover, in Sec.~\ref{magnitude} below we will demonstrate that those corrections are expected to be dominant in cosmological perturbation theory. \subsubsection{Inverse triad corrections} A direct calculation using (\ref{hol_action}) and (\ref{V_action}) shows that $\hat B_{v,K}$ commutes with all flux operators and thus has flux eigenstates as eigenbasis, as also happens in homogeneous models \cite{LivRev}. The action \begin{eqnarray} \hat B_{v,K}|\ldots,\mu_{v,K},\ldots\rangle &:=& \left(2\pi\gamma \ell_{\mathrm P}^2\right)^{1/2}\sqrt{|\mu_{v,I}+\mu_{v,-I}| |\mu_{v,J}+\mu_{v,-J}|} \\ &\times&\left(\sqrt{|\mu_{v,K}+\mu_{v,-K}+1|}- \sqrt{|\mu_{v,K}+\mu_{v,-K}-1|}\right) |\ldots,\mu_{v,K},\ldots\rangle \nonumber \end{eqnarray} directly shows the eigenvalues which do not agree exactly with the classical expectation $e_{K}(v)=\sqrt{|p^I(v)p^J(v)/p^K(v)|}\sim \sqrt{|\mu_{v,I}\mu_{v,J}/\mu_{v,K}|}$ (indices such that $\epsilon^{IJK}=1$) for the co-triad (\ref{cotriad}) which appears as a factor in the Hamiltonian constraint. But for large values $\mu_{v,I}\gg1$ the classical expectation is approached as an expansion of the eigenvalues shows. Inverse triad corrections are obtained by extracting the corrections which $B_{v,K}$ receives on smaller scales.
We introduce the correction function as a factor $\alpha_{v,K}$, depending on the lattice labels $\mu_{v,I}$, such that $B_{v,K}=\alpha_{v,K} e_{v,K}$ and $\alpha_{v,K}\rightarrow 1$ classically, i.e.\ for $\mu_{v,K}\gg1$. Comparing the eigenvalues of $\hat B_{v,K}$ with those of flux operators in the combination $\sqrt{|{\cal F}_{v,I}{\cal F}_{v,J}/{\cal F}_{v,K}|}$, we find \begin{equation} \label{alpha_corr} \alpha_{v,K}=\sqrt{|\mu_{v,K}+\mu_{v,-K}|} \left(\sqrt{|\mu_{v,K}+\mu_{v,-K}+1|}- \sqrt{|\mu_{v,K}+\mu_{v,-K}-1|}\right)\end{equation} After having computed the operators and their eigenvalues, we can specialize the correction function to perturbations of the scalar mode. We reduce the number of independent labels by imposing $\mu_{v,I}+\mu_{v,-I}=\mu_{v,J}+\mu_{v,-J}$ for arbitrary $I$ and $J$. This corresponds to a metric proportional to the identity $\delta_{ab}$ for a scalar perturbation. We then assign a new variable $p(v)=2\pi\gamma\ell_{\rm P}^2(\mu_{v,I}+\mu_{v,-I})$ to each vertex $v$, which is independent of the direction of the edge $I$ and describes the diagonal part of the triad. Quantum numbers in eigenvalues of the lattice operators can then be replaced by $p(v)$, and the resulting functions compared with the classical ones. The remaining subscript $v$ indicates that the physical quantities are vertex-dependent, i.e.\ inhomogeneous. Then the averaging over the plaquette orientations in the constraint becomes trivial and the total correction reads \begin{equation}\label{alpha_iso} \alpha_v=2\sqrt{|\mu_v|}\left(\sqrt{|\mu_v+1/2|}-\sqrt{|\mu_v-1/2|}\right) \end{equation} i.e.\ \begin{equation} \alpha[p(v)]=\frac{\sqrt{|p(v)|}}{2\pi\gamma\ell_{\rm P}^2} \left(\sqrt{|p(v)+2\pi\gamma\ell_{\rm P}^2|}- \sqrt{|p(v)-2\pi\gamma\ell_{\rm P}^2|}\right)\,. \end{equation} We will continue analyzing these correction functions in Sec.~\ref{CorrProp} after having discussed how such functions also enter the spin connection and matter terms. 
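The approach to the classical limit can be made quantitative: substituting $\mu_v=1/u$ in (\ref{alpha_iso}) and expanding for small $u$ gives $\alpha_v=1+1/(32\mu_v^2)+O(\mu_v^{-4})$, so on large scales the correction function approaches one from above. A sympy sketch of this expansion:

```python
import sympy as sp

u = sp.symbols('u', positive=True)   # u = 1/mu_v, small on large scales

# alpha_v = 2*sqrt(mu)*(sqrt(mu+1/2) - sqrt(mu-1/2)) rewritten with mu = 1/u
alpha = (2 / u) * (sp.sqrt(1 + u / 2) - sp.sqrt(1 - u / 2))

# Large-mu (small-u) expansion of the correction function
ser = sp.series(alpha, u, 0, 4).removeO()
```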
\subsubsection{Spin connection} \label{SpinConn} So far, the holonomies we used only contributed the extrinsic curvature terms to the Hamiltonian but no spin connection terms at all. In the procedure followed here, we thus have to quantize $\Gamma_a^i[E]$ directly which is possible in the perturbative regime where line integrals of the spin connection have covariant meaning. This gives rise to one further correction function in the effective expression of the spin connection \begin{equation}% \Gamma_I^i=-\epsilon^{ijk} e_j^b \left(\partial_{[I} e_{b]}^k+\frac{1}{2}e_k^c e_a^l \partial_{[c} e_{b]}^l\right), \end{equation} as it also contains a co-triad (\ref{cotriad}). Since the triad and its inverse have a diagonal form \begin{equation}% e_i^I\equiv\frac{E_i^I}{\sqrt{|\det E|}}=e^{(I)}\delta_i^I, \quad e_I=e_{(I)}\delta_I^i \end{equation}% with the components given by% \begin{equation} \label{e_comp} e^{I}=\frac{p^I}{\sqrt{|\det E|}}=\left( e_{I}\right)^{-1} ,\quad \det E = p^I p^J p^K, \end{equation}% the spin-connection simplifies to \begin{equation} \label{Spinscalar} \Gamma_I^i=\epsilon^{ic}_I e^{(c)}\partial_c e_{(I)}.\end{equation} In terms of components of a densitized triad it reads \begin{equation}\label{spin_class}% \Gamma_I^i=\frac{1}{2}\epsilon^{ij}_I\frac{p^{(j)}}{p^{(I)}} \left(\sum_J\frac{\partial_j p^J}{p^J}-2 \frac{\partial_j p^I}{p^I}\right).% \end{equation}% Since there are many alternative choices in performing the quantization of such an object, but not much guidance from a potential operator in the full theory, we first discuss general aspects one can expect for the quantization of the spin connection in a simple version. It includes corrections of inverse densitized triad components by correction functions in each term of (\ref{spin_class}). 
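The equivalence of (\ref{Spinscalar}) and (\ref{spin_class}) can be checked component by component: for diagonal triads one needs $e^{(c)}\partial_c e_{(I)}=\frac{1}{2}(p^c/p^I)\left(\sum_J\partial_cp^J/p^J-2\partial_cp^I/p^I\right)$ for each fixed $c$ and $I$. A symbolic sketch treating the $p^I$ as generic functions:

```python
import sympy as sp

x = sp.symbols('x1:4', real=True)
p = [sp.Function(f'p{L + 1}')(*x) for L in range(3)]
detE = p[0] * p[1] * p[2]

residuals = []
for I in range(3):
    for c in range(3):
        # left side: e^c d_c e_I with e^c = p^c/sqrt(det E), e_I = sqrt(det E)/p^I
        lhs = (p[c] / sp.sqrt(detE)) * sp.diff(sp.sqrt(detE) / p[I], x[c])
        # right side: the bracket structure of (spin_class)
        rhs = sp.Rational(1, 2) * (p[c] / p[I]) * (
            sum(sp.diff(p[L], x[c]) / p[L] for L in range(3))
            - 2 * sp.diff(p[I], x[c]) / p[I])
        residuals.append(sp.simplify(lhs - rhs))
```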
We thus mimic a quantization to the extent that expectation values of classical expressions containing inverse powers of $p$ acquire a correction factor \begin{equation} \frac{1}{p^I(v)} \rightarrow \frac{\beta_I(v)}{p^I(v)},\end{equation} where the correction functions $\beta_I$ are kept different from the function $\alpha$ used before because the object to be quantized is different. There will also be corrections from the discretization $\Delta_I$ of partial derivatives $\ell_0\partial_I$, but we ignore them in what follows for the same reason which allowed us to ignore such effects from the loop holonomy quantizing $F_{ab}^i$. The effective analog of (\ref{spin_class}) is then of the form \begin{equation}\label{spin_eff}% (\Gamma_I^i)_{\rm eff}=\frac{1}{2}\epsilon^{ij}_I\beta^I\frac{p^j}{p^I}\left(\sum_J\beta^J\frac{\partial_j p^J}{p^J}-2\beta^I \frac{\partial_j p^I}{p^I}\right).% \end{equation}% At this stage the triad components, corresponding to different orientations, can be put equal to each other in effective equations, $p^I=p^J=p^K=p$. This implies an analogous relation between the correction functions $\beta_I=\beta_J=\beta_K=\beta_0$. Comparing (\ref{spin_eff}) with the ansatz $\Gamma_I^i=\frac{1}{2}\epsilon^{ij}_I \beta \frac{\partial_j p}{p}$, we conclude that also the spin connection receives a correction function $\beta=\beta_0^2$. For a precise quantization we observe that we need terms of the form $\ell_0^2\Gamma_a^i\Gamma_b^j$ and $\ell_0^2\partial_a\Gamma_b^i$ in the constraint since one factor $\ell_0$ of the Riemann measure will be absorbed in the commutator $\hat{B}_{v,I}$. To quantize $\ell_0\Gamma_a^i$, we combine $\ell_0$ with the partial derivative $\partial_I$ in (\ref{Spinscalar}) to approximate a lattice difference operator $\Delta_I$ defined by $(\Delta_If)_v= f_{v+I}-f_v$ for any lattice function $f$. A well-defined lattice operator thus results once a prescription for quantizing the inverse triad has been chosen. 
One can again make use of Poisson identities for the classical inverse which, however, allows more freedom than for the combination of triad components we saw in the Hamiltonian constraint. Such a freedom, corresponding to quantization ambiguities, will also be encountered when we consider matter Hamiltonians. For any choice we obtain a well-defined operator which would not be available without the perturbative treatment since the full spin connection is not a tensorial object. An explicit example can most easily be derived by writing the spin connection integrated along a link $e_{v,I}$ as it might appear in a holonomy, \[ \int_{e_{v,I}} \dot{e}_I^a\Gamma^i_a\approx \ell_0\Gamma^i_I =\epsilon^{ic}_I e^{(c)}\ell_0 \partial_c e_{(I)}\approx \epsilon^{iK}_I \frac{p^{(K)}}{\sqrt{|\det E|}} \Delta_K e_{(I)} \] using the lattice difference operator $\Delta_I\approx\ell_0\partial_I$. We then have to deal with the inverse powers explicit in the fraction and implicit in the co-triad $e_I$. The latter is standard, replacing $e_I$ by $\ell_0^{-1}h_I\{h_I^{-1},V_v\}$ based on (\ref{cotriad}). The inverse determinant in the fraction cannot be absorbed in the resulting Poisson bracket because (i) it does not commute with the derivative and (ii) absorbing a single inverse in a single co-triad would lead to a logarithm of $V_v$ in the Poisson bracket which would not be well-defined. It can, however, be absorbed in the flux $\ell_0^2 p^K$ if we do not use the basic flux operator $\hat{F}_{v,K}$ but the classically equivalent expression \begin{eqnarray} F_{v,K} &\approx& \ell_0^2 p^K= \frac{1}{2}\ell_0^2 \delta^k_{(K)} \epsilon_{kij}\epsilon^{KIJ} e_I^ie_J^j\nonumber\\ &=&-\frac{1}{4}(4\pi\gamma G)^{-2}\sum_{IJ}\sum_{\sigma_I\in\{\pm1\}} \sigma_I\sigma_J \epsilon^{IJK}\mathop{\mathrm{tr}}\nolimits(\tau_{(K)} h_I\{h_I^{-1},V_v\} h_J\{h_J^{-1},V_v\}) \end{eqnarray} which is analogous to expressions used in \cite{Flux}. 
Since there are two Poisson brackets, we can split the inverse $V_v$ evenly among them, giving rise to square roots of $V_v$ in the brackets: \begin{eqnarray} \frac{p^K}{\sqrt{|\det E|}} &\approx& \ell_0\frac{F_{v,K}}{V_v}\nonumber\\ &=& -\frac{\ell_0}{16\pi^2\gamma^2 G^2} \sum_{IJ}\sum_{\sigma_I\in\{\pm1\}}\sigma_I\sigma_J \epsilon^{IJK}\mathop{\mathrm{tr}}\nolimits(\tau_{(K)} h_I\{h_I^{-1},\sqrt{V_v}\} h_J\{h_J^{-1},\sqrt{V_v}\})\,. \end{eqnarray} The remaining factor of $\ell_0$ is absorbed in $e_I$ inside the derivative which is quantized following the standard procedure. A well-defined quantization of spin connection components thus follows, which is not local in a vertex since the difference operator connects to the next vertex. Similarly, the derivative of the spin connection needed in the Hamiltonian constraint leads to further connections to next-to-next neighbors. Explicitly, one can thus write an integrated spin connection operator quantizing $\Gamma_{v,I}^i:=\int_{e_{v,I}}{\mathrm{d}} t\dot{e}_I^a\Gamma_a^i$ as \begin{eqnarray} \hat{\Gamma}_{v,I}^i &=& \epsilon_I{}^{iK} \left(\frac{1}{16\pi^2\gamma^2\ell_{\rm P}^2} \sum_{J,L,\sigma_J,\sigma_L} \sigma_J\sigma_L\epsilon^{JLK} \mathop{\mathrm{tr}}\nolimits\left(\tau_{(K)} h_J[h_J^{-1},\hat{V}_v^{1/2}] h_L[h_L^{-1},\hat{V}_v^{1/2}]\right) \right. \nonumber\\ &&\times\left.\Delta_K\left(\frac{i}{2\pi\gamma\ell_{\rm P}^2} \mathop{\mathrm{tr}}\nolimits(\tau^{(I)} h_I[h_I^{-1},\hat{V}_v])\right)\right)\,. 
\end{eqnarray} Replacing the commutators by classical expressions times correction functions $\alpha$ defined as before and $\alpha_{1/2}$ defined similarly for a commutator containing the square root of the volume operator leads to an expression \begin{eqnarray*} (\Gamma_I^i)_{\rm eff} &=& \alpha_{1/2}(p^i)\alpha_{1/2}(p^I)\epsilon_I{}^{ic} e^{(c)}\partial_c(\alpha(p^I)e_I) \\ &=&\alpha_{1/2}(p^i)\alpha_{1/2}(p^I)\alpha(p^I) \Gamma_I^i+ \alpha_{1/2}(p^i)\alpha_{1/2}(p^I) \alpha'(p^I)e_I\epsilon_I{}^{ic}e^{(c)}\partial_cp^{(I)} \end{eqnarray*} where the prime denotes a derivative by $p^I$. Using the relation $p^J=e_Ie_K$ whenever $\epsilon_{JIK}=1$ between densitized triad and co-triad components allows us to write \begin{eqnarray} (\Gamma_I^i)_{\rm eff} &=& \alpha_{1/2}(p^i)\alpha_{1/2}(p^I)\alpha(p^I) \Gamma_I^i+ \alpha_{1/2}(p^i)\alpha_{1/2}(p^I) \alpha'(p^I)e_I\epsilon_I{}^{ic}e^{(c)}\partial_c(e_Je_K)|_{\epsilon_{IJK}=1} \nonumber\\ &=& \alpha_{1/2}(p^i)\alpha_{1/2}(p^I) \left(\alpha(p^I)\Gamma^i_I+\sum_{K\not=I} \alpha'(p^I) p^K\Gamma^i_K\right) \end{eqnarray} for the effective spin connection components. For scalar modes, using that all $p^I$ at a given point are equal, this can be written with a single correction function \begin{equation} \beta[p(v)]=\alpha_{1/2}[p(v)]^2(\alpha[p(v)]+2p\alpha'[p(v)]) \end{equation} for $\Gamma_I^i$, where $\alpha'={\mathrm{d}}\alpha/{\mathrm{d}} p$. \subsection{Matter Hamiltonian} Matter fields are quantized by similar means in a loop quantization, using lattice states, and then coupled dynamically to geometry by adding the matter Hamiltonian to the constraint. For a scalar field $\phi$, the momentum $\pi=\sqrt{|\det E|}\dot{\phi}/N$ is a density of weight one. 
In the $\phi$-representation, states will simply be of the form already used for the gravitational field, except that each vertex now also carries a label $\nu_v\in{\mathbb R}$ describing the dependence on the scalar field $\phi(v)$ through $\exp(i\nu_v\phi(v))$ \cite{ScalarBohr}. Well-defined lattice operators are then given by $\widehat{\exp(i\nu_0\phi(v))}$, for any $\nu_0\in{\mathbb R}$, which shift the label $\nu_v$ by $\nu_0$. The momentum, with its density weight, has to be integrated before it can meaningfully be quantized. We introduce \[ P_v := \int_{R_v}{\mathrm{d}}^3x \pi\approx \ell_0^3\pi(v) \] where $R_v$ is a cubic region around the vertex $v$ of the size of a single lattice site. Since we have $\{\phi(v),P_w\}= \chi_{R_w}(v)$ in terms of the characteristic function $\chi_R(v)=1$ if $v\in R$ and zero otherwise, a momentum operator $P_v$ must have eigenvalue $\hbar\nu_v$ in a state of the form introduced above. \subsubsection{Inverse triad corrections} For the matter Hamiltonian of a scalar field $\phi$ with momentum $\pi$ and potential $U(\phi)$ we have the classical expression \[ H_{\phi}[N]=\int {\mathrm{d}}^3x N(x) \left[\frac{1}{2\sqrt{\det(q)}}\pi(x)^2 +\frac{E_i^aE_i^b}{2\sqrt{\det q}}\partial_a\phi(x)\partial_b \phi(x)+\sqrt{\det q}U(\phi)\right] \] containing inverse powers of the metric, too. It can be quantized by loop techniques \cite{QSDV,QFTonCSTI} making use of identities similar to (\ref{cotriad}). One first generalizes the identity to arbitrary positive powers of the volume in a Poisson bracket, \begin{equation} \{A_a^i,V_v^r\}=4\pi\gamma G\:rV_v^{r-1}e_a^i \end{equation} and then combines such factors with suitable exponents $r$ to produce a given product of triad and co-triad components. Since such identities would be used only when inverse components of densitized triads are involved and a positive power of volume must result in the Poisson bracket, the allowed range for $r$ is $0<r<2$.
Any such Poisson bracket will be quantized to \[ \dot{e}^a_K\{A_a^i,V_v^r\}\mapsto \frac{-2}{i\hbar \ell_0}\mathop{\mathrm{tr}}\nolimits(\tau^ih_{v,K}[h_{v,K}^{-1},\hat{V}_v^r]) \] using holonomies $h_{v,K}$ in direction $K$ with tangent vector $\dot{e}_K^a$. Since holonomies in our lattice states have internal directions $\tau_K$ for direction $K$, we can compute the trace and obtain \begin{equation} \widehat{V_v^{r-1}{e}_K^i}= \frac{-2}{8\pi i r\gamma \ell_{\mathrm P}^2 \ell_0} \sum_{\sigma \in \{\pm 1\}}\sigma\mathop{\mathrm{tr}}\nolimits(\tau^ih_{v,\sigma K}[h_{v,\sigma K}^{-1},\hat{V}_v^r]) =\frac{1}{2\ell_0} (\hat{B}_{v,K}^{(r)} - \hat{B}_{v,-K}^{(r)}) \delta^i_{(K)} \end{equation} where, for symmetry, we use both edges touching the vertex $v$ along direction $K$ and $\hat B_{v,K}^{(r)}$ is the generalized version of (\ref{AB_def}): \begin{equation}\label{B_def}% \hat B_{v,K}^{(r)} := \frac{1}{4 \pi i\gamma G \hbar r}\left(s_{v,K} \hat V_v^r c_{v,K} - c_{v,K}\hat V_v^r s_{v,K}\right)\end{equation}% The exponent used for the gravitational part was $r=1$, and $r=1/2$ already occurred in the spin connection, while the scalar Hamiltonians introduced in \cite{QSDV,QFTonCSTI}, which we closely follow in the construction of the matter Hamiltonian here, use $r=1/2$ for the kinetic term and $r=3/4$ for the gradient term.
With \[ \epsilon^{abc}\epsilon_{ijk} \{A_a^i,V_v^{1/2}\} \{A_b^j,V_v^{1/2}\} \{A_c^k,V_v^{1/2}\}= (2\pi\gamma G)^3\epsilon^{abc}\epsilon_{ijk} \frac{e_a^ie_b^je_c^k}{V_v^{3/2}}= \frac{6(2\pi\gamma G)^3}{\ell_0^3V_v^{1/2}} \] for a lattice site volume $V_v\approx \ell_0^3 |\det(e_a^i)|$ and \[ \epsilon^{abc}\epsilon_{ijk} \{A_b^j,V_v^{3/4}\} \{A_c^k,V_v^{3/4}\}= (3\pi\gamma G)^2\epsilon^{abc}\epsilon_{ijk} \frac{e_b^je_c^k}{V_v^{1/2}}= 6(3\pi\gamma G)^2\frac{E^a_i}{V_v^{1/2}} \] one can replace the inverse powers in the scalar Hamiltonian as follows: For the kinetic term, we discretize \[ \int{\mathrm{d}}^3x \frac{\pi^2}{\sqrt{\det q}}\approx \sum_v \ell_0^3\frac{\pi(v)^2}{\sqrt{\det q(v)}}\approx \sum_v \frac{{P_v}^2}{V_v}\,. \] Then, the classically singular \begin{equation} \label{invV} \frac{1}{V_v}=\left(\frac{\ell_0^3}{6} \epsilon^{abc}\epsilon_{ijk} \frac{e_a^ie_b^je_c^k}{V_v^{3/2}}\right)^2= \left(\frac{\ell_0^3}{6(2\pi\gamma G)^3} \epsilon^{abc}\epsilon_{ijk} \{A_a^i,V_v^{1/2}\} \{A_b^j,V_v^{1/2}\} \{A_c^k,V_v^{1/2}\}\right)^2 \end{equation} will be quantized to \[ \left(\frac{1}{48} \epsilon^{IJK}\epsilon_{ijk} (\hat{B}_{v,I}^{(1/2)}- \hat{B}_{v,-I}^{(1/2)})\delta^i_{(I)} (\hat{B}_{v,J}^{(1/2)}- \hat{B}_{v,-J}^{(1/2)})\delta^j_{(J)} (\hat{B}_{v,K}^{(1/2)}- \hat{B}_{v,-K}^{(1/2)})\delta^k_{(K)} \right)^2\,. \] Similarly, we discretize the gradient term by \[ \int{\mathrm{d}}^3x \frac{E^a_iE^b_i}{\sqrt{\det q}}\partial_a\phi\partial_b\phi\approx \sum_v\ell_0^3\frac{E^a_i(v)E^b_i(v)}{\sqrt{\det q(v)}}(\partial_a\phi)(v)(\partial_b\phi)(v)\approx \sum_v \frac{p^I(v)p^J(v)}{V_v} \Delta_I\phi_v\Delta_J\phi_v \] where we replace spatial derivatives $\partial_a$ by lattice differences $\Delta_I$. 
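The algebraic identity underlying (\ref{invV}), $\epsilon^{abc}\epsilon_{ijk}e_a^ie_b^je_c^k=6\det(e_a^i)$, and the resulting non-singular rewriting of $1/V_v$ can be checked numerically for any non-degenerate co-triad. The following minimal sketch (our own check, not part of the construction, with an arbitrarily chosen co-triad and lattice spacing) verifies both:

```python
def levi_civita(i, j, k):
    """Totally antisymmetric symbol on indices 0, 1, 2."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def det3(e):
    """Determinant of a 3x3 co-triad given as nested lists."""
    return (e[0][0] * (e[1][1] * e[2][2] - e[1][2] * e[2][1])
            - e[0][1] * (e[1][0] * e[2][2] - e[1][2] * e[2][0])
            + e[0][2] * (e[1][0] * e[2][1] - e[1][1] * e[2][0]))

# An arbitrary co-triad with positive determinant and an arbitrary lattice spacing:
e = [[1.0, 0.2, 0.1], [0.0, 1.3, 0.3], [0.2, 0.1, 0.9]]
l0 = 0.5

contraction = sum(levi_civita(a, b, c) * levi_civita(i, j, k)
                  * e[a][i] * e[b][j] * e[c][k]
                  for a in range(3) for b in range(3) for c in range(3)
                  for i in range(3) for j in range(3) for k in range(3))
assert abs(contraction - 6.0 * det3(e)) < 1e-10   # eps eps e e e = 6 det e

# With V_v = l0^3 |det e|, the rewriting (l0^3/6 * contraction / V_v^{3/2})^2 recovers 1/V_v:
V = l0 ** 3 * abs(det3(e))
inverse_volume = (l0 ** 3 / 6.0 * contraction / V ** 1.5) ** 2
assert abs(inverse_volume - 1.0 / V) < 1e-9 / V
print("1/V_v =", 1.0 / V, "reconstructed:", inverse_volume)
```

The point of the rewriting is that the right-hand side contains only positive powers of the volume, so its quantization through commutators remains well defined at zero volume.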
Now, using \[ \delta^i_{(I)}\frac{p^I(v)}{V_v^{1/2}}=\ell_0^2\frac{E_i^I(v)}{V_v^{1/2}}= \frac{\ell_0^2}{6} \frac{\epsilon^{Ibc}\epsilon_{ijk} e^j_be^k_c}{V_v^{1/2}}= \frac{\ell_0^2}{6(3\pi\gamma G)^2} \epsilon^{Ibc}\epsilon_{ijk} \{A_b^j,V_v^{3/4}\}\{A_c^k,V_v^{3/4}\} \] we can quantize the metric contributions to the gradient term by \begin{eqnarray} \frac{1}{24^2} &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\epsilon^{IKL}\epsilon_{ijk} (\hat{B}_{v,K}^{(3/4)}- \hat{B}_{v,-K}^{(3/4)})\delta^j_{(K)} (\hat{B}_{v,L}^{(3/4)}- \hat{B}_{v,-L}^{(3/4)})\delta^k_{(L)} \label{gradient}\\ &\times\epsilon^{JMN}\epsilon_{imn} (\hat{B}_{v,M}^{(3/4)}- \hat{B}_{v,-M}^{(3/4)})\delta^m_{(M)} (\hat{B}_{v,N}^{(3/4)}- \hat{B}_{v,-N}^{(3/4)})\delta^n_{(N)} \,.\nonumber \end{eqnarray} In addition to the fact that we are using different values for $r$ in each term in the gravitational and matter parts, giving rise to different correction functions, the matter terms are less unique than the gravitational term and can be written with different parameters $r$. This corresponds to quantization ambiguities which will appear also in effective equations and which could have phenomenological implications. Some choices are preferred since they give rise to simpler expressions, but this does not suffice to determine a unique quantization. Instead of using $r=1/2$ in the kinetic term, for instance, we can use the class of relations \begin{eqnarray*} \frac{1}{\sqrt{|\det E|}}=\frac{(\det e)^k}{(\det E)^{(k+1)/2}}= \left(\frac{\epsilon^{abc}\epsilon_{ijk}}{6(4\pi G\gamma r_k)^3}\right.\\ \times \{A_a^i,V^{(2k-1)/(3k)}\} \{A_b^j,V^{(2k-1)/(3k)}\} \{A_c^k,V^{(2k-1)/(3k)}\}\Bigr)^k \end{eqnarray*} for any positive integer $k$ to write the inverse determinant through Poisson brackets not involving the inverse volume (see also the appendix of \cite{Robust}). This determines an integer family of quantizations with $r_k=(2k-1)/(3k)$, satisfying $\frac{1}{3}\leq r_k<\frac{2}{3}$. For $k=2$ we obtain the previous expression, but other choices are possible.
Moreover, using the same $r$ in all terms arising in gravitational and matter Hamiltonians can only be done in highly contrived ways, if at all. There is thus no clearly distinguished value. From now on we will work with the choices specified above. On regular lattice states, all ingredients are composed to a Hamiltonian operator \begin{eqnarray} \label{MatterHam} \hat{H}_{\phi}[N]&=&\sum_{v\in\gamma} N_v\left[\frac{1}{2}\hat{P}_v^2\left(\frac{1}{48}\sum_{IJK,\sigma_I\in\{\pm1\}} \sigma_1\sigma_2\sigma_3\epsilon^{IJK} \hat{B}_{v,\sigma_1I}^{(1/2)} \hat{B}_{v,\sigma_2J}^{(1/2)} \hat{B}_{v,\sigma_3K}^{(1/2)} \right)^2\right.\\ &+&\left. \frac{1}{2} \left(\frac{1}{48}\sum_{IJK,\sigma_I\in\{\pm1\}} \sigma_1\sigma_2\sigma_3\epsilon^{IJK} (\sigma_1\Delta_{\sigma_1I}\phi)_v \hat{B}_{v,\sigma_2J}^{(3/4)} \hat{B}_{v,\sigma_3K}^{(3/4)} \right)^2 +\hat{V}_v U(\phi_v) \right]\,. \nonumber \end{eqnarray} \subsubsection{Matter correction functions} As before, we compute eigenvalues of the operators \begin{equation} \hat B_{v,K}^{(r)}:=\left(2\pi\gamma \ell_{\mathrm P}^2\right)^{-1}\frac{\hat {V}^r|_{\mu_{v,K}+1}-\hat {V}^r|_{\mu_{v,K}-1}}{r} \end{equation}% where the subscript of the volume operator indicates that its eigenvalue in a lattice state is computed according to (\ref{V_action}) with a shifted label of link $e_{v,K}$. Thus, the eigenvalues are \begin{equation}\label{Br_def}% B_{v,K}^{(r)}:=\frac{1}{r}\left(2\pi\gamma \ell_{\mathrm P}^2\right)^{\frac{3r}{2}-1} |\mu_{v,I}+\mu_{v,-I}|^{r/2} |\mu_{v,J}+\mu_{v,-J}|^{r/2} \left(|\mu_{v,K}+\mu_{v,-K}+1|^{r/2}- |\mu_{v,K}+\mu_{v,-K}-1|^{r/2}\right)\,. \end{equation}% They are to be compared with the classical expectation \[ (2\pi\gamma\ell_{\rm P}^2)^{\frac{3r}{2}-1} |\mu_{v,I}+\mu_{v,-I}|^{r/2} |\mu_{v,J}+\mu_{v,-J}|^{r/2} |\mu_{v,K}+\mu_{v,-K}|^{r/2-1} \] for $V^{r-1}e_K$.
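The ratio of these eigenvalues to the classical expectation depends only on the label sum $\mu_{v,K}+\mu_{v,-K}$ and is precisely the correction function derived next. As a numerical illustration (our own sketch, not from the construction itself; units with $2\pi\gamma\ell_{\rm P}^2=1$), the following code evaluates this ratio for the exponents used above, the kinetic- and gradient-term combinations built from six factors of $r=1/2$ and four factors of $r=3/4$, and the spin-connection combination $\beta=\alpha_{1/2}^2(\alpha+2p\alpha')$ introduced earlier, confirming the classical limit on large scales and the suppression near zero volume:

```python
def alpha_r(r, mu):
    """Ratio of the B^{(r)} eigenvalue to the classical value of V^{r-1}e_K,
    as a function of mu = mu_{v,K} + mu_{v,-K} (units 2*pi*gamma*l_P^2 = 1)."""
    return (1.0 / r) * abs(mu) ** (1.0 - r / 2.0) * (
        abs(mu + 1.0) ** (r / 2.0) - abs(mu - 1.0) ** (r / 2.0))

def D(mu):      # kinetic-term correction: six factors with r = 1/2
    return alpha_r(0.5, mu) ** 6

def sigma(mu):  # gradient-term correction: four factors with r = 3/4
    return alpha_r(0.75, mu) ** 4

def beta(mu, h=1e-4):
    """Spin-connection combination alpha_{1/2}^2 (alpha + 2 p alpha'),
    with alpha' approximated by a central difference."""
    ap = (alpha_r(1.0, mu + h) - alpha_r(1.0, mu - h)) / (2.0 * h)
    return alpha_r(0.5, mu) ** 2 * (alpha_r(1.0, mu) + 2.0 * mu * ap)

grid = [0.05 * n for n in range(1, 201)]   # mu in (0, 10]
for f in (D, sigma, beta):
    assert abs(f(1000.0) - 1.0) < 1e-3     # classical limit on large scales
    assert abs(f(0.05)) < 1e-3             # strong suppression near zero volume
assert max(D(mu) for mu in grid) > 1.0     # peak above the classical value
assert max(sigma(mu) for mu in grid) > 1.0
print(D(1.0), sigma(1.0), beta(1.0))
```

All three combinations approach one from above at large labels and vanish at small labels, which is the qualitative behavior exploited in the effective equations below.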
For any $r$, correction functions \begin{eqnarray} \alpha_{v,K}^{(r)} &=& \frac{1}{r} |\mu_{v,K}+\mu_{v,-K}|^{1-r/2} \left(|\mu_{v,K}+\mu_{v,-K}+1|^{r/2}- |\mu_{v,K}+\mu_{v,-K}-1|^{r/2}\right) \end{eqnarray} result. The behavior for the main choices of $r$ is shown in Fig.~\ref{Fig_alpha}. \begin{figure} \centerline{\includegraphics[width=10cm,keepaspectratio]{alpha05r_to2}} \caption{Behavior of the correction function $\alpha$. It approaches one from above for large arguments. For small arguments, the function is increasing from zero and reaches a peak value larger than one. Also shown is the limiting case $r=2$ which does not show a peak but a constant correction function for $\mu>1$. \label{Fig_alpha}} \end{figure} Imposing again that $\mu_{v,I}+\mu_{v,-I}=\mu_{v,J}+\mu_{v,-J}$ for arbitrary $I$ and $J$ and introducing $p(v)=2\pi\gamma\ell_{\rm P}^2 (\mu_{v,I}+\mu_{v,-I})$, we obtain the effective correction functions \begin{eqnarray} \alpha^{(r)}[p(v)] &=& \frac{1}{2\pi r\gamma\ell_{\rm P}^2} |p(v)|^{1-r/2} \left(|p(v)+2\pi\gamma\ell_{\rm P}^2|^{r/2}- |p(v)-2\pi\gamma\ell_{\rm P}^2|^{r/2}\right)\,.\nonumber \end{eqnarray} This can be used to write the effective matter Hamiltonian on a conformally flat space $q_{ab}=|p(x)|\delta_{ab}$ as \[ H_{\phi}=\int_{\Sigma} {\mathrm{d}}^3x N(x)\left(\frac{D[p(x)]}{2|\tilde{p}(x)|^{3/2}} \pi(x)^2 +\frac{\sigma[p(x)]|\tilde{p}(x)|^{\frac{1}{2}}\delta^{ab}}{2} \partial_a\phi\partial_b\phi +|\tilde{p}(x)|^{\frac{3}{2}}U(\phi)\right), \] where comparison with (\ref{MatterHam}) shows that \begin{equation} D[p(v)]=\alpha^{(1/2)}[p(v)]^6\quad\mbox{ and }\quad\sigma[p(v)]=\alpha^{(3/4)}[p(v)]^4\,. \end{equation} \subsection{Properties of correction functions from inverse powers} \label{CorrProp} We have derived several different correction functions, making use of different parameters $r$. In most cases one could make different choices of such parameters and still write the classically intended expression in an equivalent way.
This gives rise to quantization ambiguities since the eigenvalues of $\hat{B}_{v,K}^{(r)}$ depend on the value of $r$, and so will the correction functions. In addition to the ambiguities in the exponents $r$, one could use different representations for holonomies before taking the trace rather than only the fundamental representation understood above \cite{Gaul,Ambig}. In this case, we have more generally \[ \hat{B}_{v,K}^{(r,j)} =\frac{3}{irj(j+1)(2j+1)}\left(2\pi\gamma \ell_{\mathrm P}^2\right)^{-1} \mathop{\mathrm{tr}}\nolimits_j\left( \tau^Kh_{v,K}\hat{V}_v^rh_{v,K}^{-1}\right)\,. \] Eigenvalues of such operators can be expressed as \begin{equation} \label{BrjEigen} B_{v,K}^{(r,j)}=\frac{3}{rj(j+1)(2j+1)}\left(2\pi\gamma \ell_{\mathrm P}^2\right)^{\frac{3r}{2}-1} |\mu_{v,I}+\mu_{v,-I}|^{r/2} |\mu_{v,J}+\mu_{v,-J}|^{r/2} \sum_{m=-j}^jm|\mu_{v,K}+\mu_{v,-K}+2m|^{r/2} \end{equation} which leads to the general class of correction functions \begin{equation} \label{alpharj} \alpha_{v,K}^{(r,j)}= \frac{3}{r j (j+1)(2j+1)}|\mu_{v,K}+\mu_{v,-K}|^{1-\frac{r}{2}} \sum_{m=-j}^j{m\left|\mu_{v,K}+\mu_{v,-K}+2m\right|^{r/2}} \,. \end{equation} After imposing isotropy the last expression becomes \begin{equation} \alpha^{(r,j)}= \frac{6}{r j (j+1)(2j+1)}|\mu|^{1-\frac{r}{2}}\sum_{m=-j}^j{m\left|\mu+m\right|^{r/2}}\nonumber \end{equation} which is shown for a few cases in Fig.~\ref{Fig_alpha10}. \begin{figure} \centerline{\includegraphics[width=10cm,keepaspectratio]{alpha10}} \caption{Behavior of the correction function $\alpha$ for larger $j$. The general trend is similar to the case for $j=1/2$, but there are $[j]+1$ spikes at $\mu=1,\ldots,j$ for integer $j$ and $\mu=1/2,\ldots,j$ otherwise. Above the peak, the function is smooth. \label{Fig_alpha10}} \end{figure} For large $j$, the sum in $\alpha^{(r,j)}$ can be approximated using calculations as in \cite{Ambig}. The idea is to consider two cases: i) $\mu>j$ and ii) $\mu<j$ separately.
In the former, the absolute values can be omitted as all the expressions under the sum are positive. Then the summation is to be replaced by integration to yield \begin{eqnarray}% \alpha^{(r,j)}= \frac{12 j^3 \tilde\mu^{1-\frac{r}{2}}}{r j (j+1)(2j+1)}&&\hspace{-1.5em}\left[\frac{1}{r+4}\left((\tilde\mu+1)^{\frac{r}{2}+2}-(\tilde\mu-1)^{\frac{r}{2}+2}\right)\right.\nonumber\\ &&\hspace{-1.5em}\left.-\frac{\tilde\mu}{r+2}\left((\tilde\mu+1)^{\frac{r}{2}+1}-(\tilde\mu-1)^{\frac{r}{2}+1}\right)\right], \quad \tilde\mu>1 \nonumber \end{eqnarray}% where $\tilde\mu:=\mu/j$. In the second case, the terms in the sum corresponding to $m<\mu$ and $m>\mu$ should again be considered separately. The end result, however, is very similar to the previous one \begin{eqnarray}% \alpha^{(r,j)}= \frac{12 j^3 \tilde\mu^{1-\frac{r}{2}}}{r j (j+1)(2j+1)}&&\hspace{-1.5em} \left[\frac{1}{r+4}\left((\tilde\mu+1)^{\frac{r}{2}+2}-(1-\tilde\mu)^{\frac{r}{2}+2}\right)\right.\nonumber\\ &&\hspace{-1.5em}\left.-\frac{\tilde\mu}{r+2}\left((\tilde\mu+1)^{\frac{r}{2}+1}+(1-\tilde\mu)^{\frac{r}{2}+1}\right)\right], \quad \tilde\mu<1 \nonumber \end{eqnarray}% After some rearrangements and using that $j\gg 1$ these two expressions can be combined into a single one as \begin{eqnarray} \label{alpharjapprox} \alpha^{(r,j)}= \frac{6\tilde\mu^{1-\frac{r}{2}}}{r (r+2)(r+4)}\left[(\tilde\mu+1)^{\frac{r}{2}+1}(r+2-2\tilde\mu)+\mathop{\mathrm{sgn}}(\tilde\mu-1) |\tilde\mu-1|^{\frac{r}{2}+1}(r+2+2\tilde\mu)\right]\,. \end{eqnarray}% The approximation is compared to the exact expression of the correction function obtained through eigenvalues in Fig.~\ref{Fig_alphacomp}. As one can see, the spikes are smeared out by the approximation (except for the point $\tilde{\mu}=1$ where the approximation remains non-differentiable at second order which is not visible from the plot). The general trend, however, is reproduced well even below the peak. 
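Both the reduction of (\ref{alpharj}) to the earlier two-term expression at $j=1/2$ and the quality of the combined approximation can be probed numerically. The following sketch (our own check, not part of the derivation; labels written as $\mu=\mu_{v,K}+\mu_{v,-K}$, $\tilde\mu=\mu/j$) verifies the $j=1/2$ limit exactly and confirms that, for $j=10$ and $r=1/2$, the exact isotropic sum and the approximation agree closely above the peak:

```python
def alpha_rj(r, j, mu):
    """Correction function alpha^{(r,j)} in the variable mu = mu_{v,K} + mu_{v,-K};
    m runs from -j to j in integer steps."""
    ms = [-j + k for k in range(int(round(2 * j)) + 1)]
    s = sum(m * abs(mu + 2.0 * m) ** (r / 2.0) for m in ms)
    return 3.0 * abs(mu) ** (1.0 - r / 2.0) * s / (r * j * (j + 1.0) * (2.0 * j + 1.0))

def alpha_iso(r, j, mu):
    """Isotropic version with shifts mu + m and an overall factor 6."""
    ms = [-j + k for k in range(int(round(2 * j)) + 1)]
    s = sum(m * abs(mu + m) ** (r / 2.0) for m in ms)
    return 6.0 * abs(mu) ** (1.0 - r / 2.0) * s / (r * j * (j + 1.0) * (2.0 * j + 1.0))

def alpha_approx(r, tmu):
    """Large-j approximation as a function of tilde-mu = mu/j."""
    sgn = 1.0 if tmu > 1.0 else -1.0
    bracket = ((tmu + 1.0) ** (r / 2.0 + 1.0) * (r + 2.0 - 2.0 * tmu)
               + sgn * abs(tmu - 1.0) ** (r / 2.0 + 1.0) * (r + 2.0 + 2.0 * tmu))
    return 6.0 * tmu ** (1.0 - r / 2.0) * bracket / (r * (r + 2.0) * (r + 4.0))

# j = 1/2 reduces the general sum to the earlier two-term expression:
for r in (0.5, 0.75, 1.0):
    for mu in (0.3, 2.0, 7.3):
        two_term = (1.0 / r) * abs(mu) ** (1.0 - r / 2.0) * (
            abs(mu + 1.0) ** (r / 2.0) - abs(mu - 1.0) ** (r / 2.0))
        assert abs(alpha_rj(r, 0.5, mu) - two_term) < 1e-12

# For j = 10 and r = 1/2, exact sum and approximation agree above the peak:
r, j = 0.5, 10.0
assert abs(alpha_approx(r, 50.0) - 1.0) < 1e-2
for tmu in (5.0, 10.0, 20.0):
    assert abs(alpha_iso(r, j, j * tmu) - alpha_approx(r, tmu)) < 5e-3
print(alpha_iso(r, j, 2.0 * j), alpha_approx(r, 2.0))
```

Below the peak the two expressions still follow the same trend, but the spikes of the exact eigenvalue function are absent from the approximation, as described in the text.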
For applications in effective equations we note that the approximation might be considered more realistic than the exact eigenvalue expression because those equations would be based on semiclassical states. Since such states cannot be eigenstates of the triad but must only be peaked on a certain expectation value, they will automatically give rise to a smearing-out of the spikes in the eigenvalues as discussed in more detail in Sec.~\ref{ExpectVal}. \begin{figure} \centerline{\includegraphics[width=10cm,keepaspectratio]{alpha10comp}} \caption{Comparison between the correction function (\ref{alpharj}) and its approximation (\ref{alpharjapprox}). The spikes are smeared out by the approximation. \label{Fig_alphacomp}} \end{figure} \subsubsection{Asymptotic behavior} This class of correction functions parameterized by two ambiguity parameters $r$ and $j$ captures the most important general properties of such functions, including the position of their maxima at $\tilde\mu\approx1$ (or $\mu\approx j$) and the initial power law increase for small $\mu$ (determined by $r$) \cite{Ambig,ICGC}. It is indeed easy to see that all correction functions have the correct classical limit on large scales, such as \begin{equation}\alpha^{(r,j)}(\tilde\mu)\approx 1+\frac{1}{32 \tilde\mu^2}\frac{(r-2)(r-4)}{3}\frac{4(3j^2+3j-1)}{5}+O(\tilde\mu^{-4}) \rightarrow 1\end{equation} for (\ref{alpha_iso}). Moreover, for small $\mu$ the correction function goes to zero as \begin{equation} \alpha^{(r,j)}(\tilde\mu)\approx (2\tilde\mu)^{2-\frac{r}{2}}, \end{equation} which ensures boundedness of the quantized co-triad $e_{(K)}\propto \alpha \sqrt{\tilde\mu} \propto \tilde\mu^2$ (using $r=1$ for this case as in (\ref{cotriad})), when $\tilde\mu\rightarrow 0$. The same is true for higher $j$ since evaluated at $\mu=0$ the sum of odd terms gives zero. 
This function is not smooth but has a cusp at its maximum at $\mu=1/2$, or more generally a cusp at integer or half-integer values between $0$ and $j$. The second derivative $\alpha''$ is always positive while $\alpha'$ changes sign between any two cusps. To the right of the cusp at the largest $\mu$, the derivatives satisfy \begin{equation}\label{alpha_asymp}% \alpha^\prime < 0, \quad \alpha^{\prime\prime} > 0 \,. \end{equation} Note that the approximation used for larger $j$ smears out the cusps and does not everywhere have positive second derivative. The behavior above the peak and the general increase below are, however, reproduced well by the approximation. The definite sign of $\alpha''$ has far-reaching implications in the quantum corrected equations of motion \cite{InhomEvolve}. Small corrections then add up during long cosmic evolution times, which would not be realized if, e.g., $\alpha$ oscillated around the classical value, which is also conceivable a priori. \subsubsection{Small-scale behavior and ambiguities} Here, and in cosmological applications of the corrected perturbation equations of \cite{HamPerturb}, we will mostly use the behavior for larger values of $\mu$ above the peak. On very small scales, the approach to zero at $\mu=0$ is special to operators with U(1)-holonomies as they appear in the perturbative treatment here. In particular, as we have seen explicitly, the volume operator $\hat{V}$ and gauge covariant combinations of commutators such as $\mathop{\mathrm{tr}}\nolimits(\tau^i h[h^{-1},\hat{V}])$ commute. It is thus meaningful to speak of the (eigen-)value of inverse volume on zero volume eigenstates. For non-Abelian holonomies such as those for SU(2) in the full theory, the operators become non-commuting \cite{DegFull}. The inverse volume at zero volume eigenstates thus becomes unsharp and one can at most make statements about expectation values rather than eigenvalues, which again requires more information on semiclassical states.
Then, the expectation values are not expected to become sharply zero at zero volume, as calculations indeed show \cite{BoundFull}. In addition, quantization ambiguities matter here, too: We can write volume itself, and not just inverse volume, through Poisson brackets such as \cite{DegFull} \begin{eqnarray*} V &=& \int{\mathrm{d}}^3x \left(\frac{\epsilon^{abc}\epsilon_{ijk}}{6(10\pi \gamma G/3)^{3}} \int{\mathrm{d}}^3y_1\{A_a^i(x),|\det e(y_1)|^{5/6}\}\right.\\ &&\qquad\qquad\times\left.\int{\mathrm{d}}^3y_2\{A_b^j(x),|\det e(y_2)|^{5/6}\} \int{\mathrm{d}}^3y_3\{A_c^k(x),|\det e(y_3)|^{5/6}\}\right)^2\,. \end{eqnarray*} After a lattice regularization as before, using $V_v\approx\ell_0^3|\det(e_a^i)|$, we obtain \begin{eqnarray*} V_v&=&\ell_0^6\left(\frac{|\det e|}{\sqrt{V_v}}\right)^2= \ell_0^6\left(\frac{1}{6}\epsilon^{abc}\epsilon_{ijk} \frac{e_a^i}{V_v^{1/6}} \frac{e_b^j}{V_v^{1/6}} \frac{e_c^k}{V_v^{1/6}}\right)^2\\ &=& \left(\frac{\epsilon^{abc}\epsilon_{ijk}}{6(10\pi \gamma G/3)^{3}}\ell_0^3 \{A_a^i,V_v^{5/6}\} \{A_b^j,V_v^{5/6}\} \{A_c^k,V_v^{5/6}\}\right)^2 \end{eqnarray*} whose quantization, making use of commutators, differs from the original volume operator (\ref{V_action}). If non-Abelian holonomies are used, the resulting operator does not commute with the full volume operator of \cite{AreaVol} or \cite{Vol2}. This clearly shows that the usual quantization ambiguity also applies to what is considered the relevant geometrical volume. (Related ambiguities for flux operators have been discussed in \cite{Flux}.) It is not necessarily the original volume operator constructed directly from fluxes, but could be any operator having volume as the classical limit. For finding zero volume states to be related to classical singularities, for instance, dynamics indicates that volume constructed in the more complicated way through commutators with the original volume operator is more relevant than the volume operator constructed directly from fluxes \cite{DegFull}.
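The classical identity behind this alternative rewriting of the volume can again be checked numerically: with $\epsilon^{abc}\epsilon_{ijk}e_a^ie_b^je_c^k=6\det(e_a^i)$ one has $\ell_0^6\bigl(\tfrac{1}{6}\epsilon^{abc}\epsilon_{ijk}e_a^ie_b^je_c^k/V_v^{1/2}\bigr)^2=V_v$. A minimal sketch (our own check with an arbitrarily chosen non-degenerate co-triad):

```python
def levi_civita(i, j, k):
    """Totally antisymmetric symbol on indices 0, 1, 2."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def det3(e):
    """Determinant of a 3x3 co-triad given as nested lists."""
    return (e[0][0] * (e[1][1] * e[2][2] - e[1][2] * e[2][1])
            - e[0][1] * (e[1][0] * e[2][2] - e[1][2] * e[2][0])
            + e[0][2] * (e[1][0] * e[2][1] - e[1][1] * e[2][0]))

e = [[1.1, 0.3, 0.0], [0.2, 0.9, 0.1], [0.0, 0.2, 1.4]]   # arbitrary co-triad, det > 0
l0 = 0.7

contraction = sum(levi_civita(a, b, c) * levi_civita(i, j, k)
                  * e[a][i] * e[b][j] * e[c][k]
                  for a in range(3) for b in range(3) for c in range(3)
                  for i in range(3) for j in range(3) for k in range(3))
V = l0 ** 3 * abs(det3(e))

# V_v = l0^6 * ((1/6) eps eps e e e / V_v^{1/2})^2:
reconstructed = l0 ** 6 * (contraction / 6.0 / V ** 0.5) ** 2
assert abs(reconstructed - V) < 1e-9 * V
print(V, reconstructed)
```

Classically the rewriting is an identity, as the check confirms; the ambiguity arises only upon quantization, where the commutator version and the direct flux construction define inequivalent operators.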
Thus, specific volume eigenstates have to be used with great care in applications with non-Abelian holonomies. Also, the behavior of correction functions below the peak value, especially whether or not they approach zero at zero volume, is less clear in a general context. In any case, below the peak position, scales are usually so small, unless one uses larger $j$, that perturbation theory breaks down. The behavior above the peak, by contrast, is robust and gives characteristic modifications to the cosmological evolution of structure. \section{Effective Hamiltonian} \label{EffHamDiscuss} Calculations of distinct terms in the constraint presented in the preceding section can now be used to derive effective Hamiltonian constraints. \subsection{Expectation values in semiclassical states and quantum variables} \label{ExpectVal} The derivation of an effective Hamiltonian constraint proceeds by computing expectation values of the constraint operator in semiclassical states which are superpositions of our lattice states peaked on perturbative metric and extrinsic curvature components. Such states are easily constructible although, for the order we are working at here, we do not need to construct them explicitly. The peak values of perturbative fields are thus in particular diagonal which means that expectation values can easily be computed via Abelian calculations.\footnote{Although initially SU(2)-holonomies appear in the constraint, they only refer to lattice directions such that those holonomies are of the form $\exp (A\tau_I)$. While these matrices do not commute among themselves for different $I$, one can easily re-arrange the order; see, e.g., \cite{CosmoI,SphSymm} for a discussion of the analogous effect in symmetric models.
The special form of SU(2)-matrices occurring in our context is also the reason why we can take the trace in the Hamiltonian constraint explicitly and reduce it to a product of sines and cosines of curvature components.} The only complication arises from the fact that we are necessarily dealing with operators as products of holonomies and fluxes which are not simultaneously diagonalizable. It is most convenient to use the triad eigenbasis $|\ldots,\mu_{v,I},\ldots\rangle$ for triad or inverse triad operators, and a holonomy eigenbasis for products of holonomies. This was implicitly assumed previously in the curvature expansion and when using inverse triad eigenvalues for correction functions. However, for expectation values of the complete constraint operator as a product of holonomy and co-triad terms we need to transform between the two eigenbases, which, as usual, is possible by inserting sums over complete sets of states: $\langle\psi|\hat{H}|\psi\rangle= \sum_I\langle\psi|\hat{H}_1|I\rangle \langle I|\hat{H}_2|\psi\rangle$ if $\{|I\rangle\}$ is the complete set of states and $\hat{H}=\hat{H}_1\hat{H}_2$ is factorized into the two parts mentioned above. For a complete treatment we thus need to compute matrix elements of $\hat{H}_1$ and $\hat{H}_2$, not just eigenvalues. Nevertheless, the calculations presented before already provide the main terms under the following approximation: We assume, without loss of generality, that the complete set of states $\{|I\rangle\}$ contains a state $|\psi\rangle$ we are interested in. More crucially, we assume that the spread $\sigma$ of $\psi$ in basic variables is small. Under this assumption, $\langle I|\hat{H}_{i}|\psi\rangle$, $i=1,2$, are dominated by $\langle\psi|\hat{H}_{i}|\psi\rangle$ since (i) there is not much overlap with most other states in the complete set and (ii) the states $|\psi\rangle$, having small spreads, are as close as possible to eigenstates of $\hat{H}_1$ and $\hat{H}_2$, respectively.
With $\hat{H}_1$ being a product of holonomies and $\hat{H}_2$ depending on fluxes, the spreads required in this construction cannot be arbitrarily small because they are restricted by uncertainty relations. This implies that additional corrections not computed before arise due to the unavoidable spread of semiclassical states. As a direct consequence of spreading, such terms depend on parameters such as $\sigma$ which are nothing but the quantum variables (\ref{QuantVars}) mentioned before. These variables necessarily feature in a complete effective Hamiltonian, describing how spreading and deformations of the state back-react on the peak position \cite{Karpacz}. Apart from these quantum variable terms, the main effective Hamiltonian then is of the form $\langle\psi|\hat{H}_1|\psi\rangle \langle\psi|\hat{H}_2|\psi\rangle$ where $|\psi\rangle$ is a semiclassical state peaked on a given classical geometry. Each of the two factors is of the form \begin{equation}\label{convol} \sum_\mu O_{\mu}|\psi_{\mu}^{(\mu_0)}|^2\sim \sum_{\mu} O_{\mu}f(\mu-\mu_0) \sim \int{\mathrm{d}}\mu O(\mu)f(\mu-\mu_0)=(O\star f)(\mu_0) \end{equation} where $O_{\mu}$ are eigenvalues of an operator $\hat{O}$ and $\mu_0$ is the peak value of the state $|\psi\rangle$ in the basic variable $\mu$ whose eigenbasis is used. On the right hand side, we see that the effect of computing an expectation value in a semiclassical state is mainly, to the given order, that eigenvalues appear in a form convoluted with the shape of the semiclassical state. In such a convolution, sharp features in eigenvalue functions such as the spikes in Fig.~\ref{Fig_alpha10} will be smeared out. But otherwise the general behavior is already displayed well by explicit eigenvalues, and, similarly, higher order curvature corrections are close to what we computed before. For general features we can thus avoid dealing with details of states and their convolution with eigenvalues. 
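The smearing effect of the convolution (\ref{convol}) is easy to illustrate numerically: convolving a spiky eigenvalue function, such as an inverse-triad correction with its cusp at $\mu=1$, with a Gaussian profile of a peaked state flattens the spike while leaving the large-$\mu$ behavior essentially untouched. A minimal sketch (our own illustration with an arbitrarily chosen spread):

```python
import math

def alpha_eig(mu):
    """Spiky eigenvalue function: inverse-triad correction for r = 1/2, cusp at mu = 1."""
    r = 0.5
    return (1.0 / r) * mu ** (1.0 - r / 2.0) * (
        abs(mu + 1.0) ** (r / 2.0) - abs(mu - 1.0) ** (r / 2.0))

grid = [0.01 * n for n in range(1, 1001)]          # mu in (0, 10]
vals = [alpha_eig(mu) for mu in grid]

def smeared(mu0, width):
    """Convolution (O * f)(mu0) with a normalized Gaussian profile of the given width."""
    weights = [math.exp(-0.5 * ((mu - mu0) / width) ** 2) for mu in grid]
    norm = sum(weights)
    return sum(w * v for w, v in zip(weights, vals)) / norm

raw_max = max(vals)                                # sharp cusp value at mu = 1
smeared_max = max(smeared(mu0, 0.3) for mu0 in grid[::10])
assert raw_max > 2.0                               # the spike ...
assert smeared_max < raw_max                       # ... is flattened by the state profile
assert abs(smeared(8.0, 0.3) - alpha_eig(8.0)) < 1e-2   # negligible effect above the peak
print(raw_max, smeared_max)
```

This is the quantitative version of the statement that sharp features of eigenvalue functions, such as the spikes in Fig.~\ref{Fig_alpha10}, are washed out in expectation values while the general behavior survives.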
The form of effective Hamiltonians we arrive at is thus \begin{eqnarray} \label{EffHam} H_{\rm eff}[N] &=& \frac{1}{8\pi G} \int_{\Sigma} \mathrm{d}^3x N(x)\alpha[p]\left(-3\tilde{k}^2+ \Delta_K\right.\\ &&\qquad\qquad \left. +\beta[p](-\tilde{p}^{-1}\partial^I\partial_I\tilde{p}+ \frac{1}{2}\tilde{p}^{-2}(\partial^I\tilde{p})(\partial_I\tilde{p}))+ \Delta^{(1)}_{\partial}\right)\sqrt{|\tilde{p}|} \nonumber\\ &&+ \int_{\Sigma}{\mathrm{d}}^3x N(x)\left(\frac{D[p]}{2|\tilde{p}|^{3/2}}\pi^2 +\frac{\sigma[p]|\tilde{p}|^{\frac{1}{2}}}{2} (\partial^I\phi)(\partial_I\phi) +\Delta^{(2)}_{\partial} +|\tilde{p}|^{\frac{3}{2}}U(\phi)\right)\nonumber\\ && +\int_{\Sigma}{\mathrm{d}}^3x N(x)\Delta_Q\nonumber \end{eqnarray} for metrics including scalar perturbations in longitudinal gauge. Note that the correction functions depend functionally on the field $p(x)$, not $\tilde{p}$, which shows that their scale is uniquely determined by the state irrespective of choices of coordinates. Unspecified correction terms are higher order curvature corrections $\Delta_K$ (see Eq.~(\ref{K_ave})), discretization corrections $\Delta^{(1/2)}_{\partial}$ from different spatial derivative terms in the constraint, and terms containing quantum variables $\Delta_Q$ which arise from metric as well as matter fields. This form of effective constraints also demonstrates potential effects of using SU(2) representations different from the fundamental one. Notice that we did not compute this for the higher curvature expansion since the required traces of different Pauli matrices are more involved. But it is clear that this can only change the coefficients in the expansion $\Delta_K$ since it always remains at a perturbative level. Generally, larger values of $j$ mean that curvature corrections will become important at smaller curvatures compared to $j=1/2$, and thus coefficients in an expansion will increase with $j$. 
The effect in inverse triad corrections $\alpha[p]$, which we did compute explicitly here, is more pronounced since $j$ determines the scale at which one enters the non-perturbative regime of such inverse triad corrections. The main difference between larger values of $j$ and the minimal one is that in the former case peaked states exist whose spread is smaller than the peak position of eigenvalues of an inverse triad operator. When this is realized, the non-perturbative branch of increasing behavior between $\mu=0$ and $\mu=j$ is not completely washed out in the convolution but remains visible in an effective Hamiltonian. In an effective Hamiltonian this consequence is obvious, but it was not always clear from the underlying difference equation in isotropic models. There, the discrete stepsize, determined by the SU(2)-representation of holonomies in the constraint, equals the peak position of inverse triad eigenvalues (see, e.g., \cite{AmbigConstr}). Thus, the discrete step in the difference equation always jumps from zero directly to the peak when the same representation is used for all holonomies occurring in the gravitational as well as matter parts of the constraint. One could thus argue that dynamics will be insensitive to the value of the representation. Effective equations, if they are applicable in this small-scale regime, show that this is not necessarily so. The representation enters higher curvature terms differently from inverse triad corrections, thus allowing effects of the non-perturbative part to remain. \subsection{Technical issues} We now illustrate some of the more important choices we made in constructing the constraint operators used here. \subsubsection{Quantization procedures} \label{QuantProc} Our construction is suitable for a treatment of cosmological perturbations within loop quantum gravity, but it does circumvent some of its general aspects. 
First, we do not take into account full non-Abelian features; they can be included perturbatively but are not required for our selection of mode and gauge. Second, we do not allow irregular lattices or valence higher than six. This, too, can be included by summing operator contributions over triples of edges (as they are constructed in the full setting). Detailed coefficients in effective expressions may then change, but not qualitative effects. Moreover, as already mentioned, the labels coming with additional edges or higher valent vertices are redundant for cosmological perturbations. We have presented higher power corrections using ``holonomies'' based on extrinsic curvature rather than connection components since this simplifies the calculations considerably. Using the background, it is mathematically possible to define such objects, although in a full background independent setting only holonomies of a connection would be well-defined. We use this mainly as a first possibility to demonstrate which types of corrections one expects and will now discuss how general the resulting expressions can be considered to be. This refers to corrections to terms of the Lorentzian constraint which, schematically, can be written as \begin{equation}\label{FK} F+K^2={\mathrm{d}} A+A^2+(A-\Gamma)^2= {\mathrm{d}}(\Gamma+K)+(\Gamma+K)^2+K^2 \end{equation} to be multiplied by triad components dealt with by Poisson brackets. Using extrinsic curvature as the basic object, one obtains trigonometric functions of its components which when expanded give higher power corrections to ${\mathrm{d}} K+K^2$. But since the spin connection has been split off from the basic object, one has to quantize it individually and add suitable combinations for ${\mathrm{d}}\Gamma+\Gamma^2$ to the constraint. Here, we assume that the cross-term $\Gamma K$ does not contribute, which is indeed the case for diagonal triads (implying anti-symmetric spin connection components) and extrinsic curvature.
This is not much of a restriction: $K$ is required to be diagonal for $K$-holonomies to simplify the calculations. Moreover, the perturbative non-diagonal part of $\Gamma$ must be antisymmetric because it perturbs an SO(3)-matrix. If there is a diagonal contribution, e.g.\ from a spatially curved background, it can be combined with $K$. As for corrections, we have higher power corrections in the quantization of ${\mathrm{d}} K+K^2$ and inverse triad corrections in ${\mathrm{d}}\Gamma+\Gamma^2$ since the spin connection contains inverse triad components. Using $A$-holonomies gives, at first sight, a different picture. Now, $F={\mathrm{d}} A+A^2$ receives higher power corrections, but the spin connection is not quantized directly, so there are no immediate inverse triad corrections. One rather has to proceed as in the full theory \cite{QSDI} where the term $(A-\Gamma)^2=K^2$ in the constraint is re-written using (\ref{Kcomm}). The double-Poisson bracket (\ref{Kcomm}) used to quantize extrinsic curvature now leads to additional corrections. In particular, since inverse triad quantizations have been used in $H^{(1)}$ in (\ref{Hone}), corresponding corrections do arise which are qualitatively similar to those in a direct quantization of the spin connection. One thus expects similar types of corrections, as with $K$-holonomies and $\Gamma$-quantizations, although in different combinations. In our construction, $K$-holonomies arose from $A$-holonomies through a perturbative expansion in non-diagonal components. When using $A$-holonomies on a spatially curved background such as a closed Friedmann--Robertson--Walker model, it is furthermore necessary to take lattice effects of an inhomogeneous model \cite{InhomLattice} or related effects \cite{APS} into account.
A fine enough lattice (${\cal N}\gg1$) is required for a semiclassical expansion of holonomies since \[ \exp(iA)\sim\exp(i(\Gamma+K))\sim \exp(i(K+{\cal N}^{-1/3}\bar{\Gamma}+\delta\Gamma)) \] with the background spin connection possibly of the order $\bar{\Gamma}=V_0^{1/3}\tilde{\bar{\Gamma}}=O(1)$ can be expanded in all terms only if ${\cal N}$ is large. (For a closed isotropic model, for instance, $\bar{\Gamma}=1/2$ \cite{Closed}.) The number of vertices ${\cal N}$ enters through $\ell_0\tilde{\bar{\Gamma}}={\cal N}^{-1/3}\bar{\Gamma}$ in holonomies. The spin connection perturbation $\delta\Gamma$ will be small in perturbative regimes such that we can always write \[ \exp(i(K+{\cal N}^{-1/3}\bar{\Gamma}+\delta\Gamma))= \exp(i(K+{\cal N}^{-1/3}\bar{\Gamma}))(1+i\delta\Gamma+\cdots)\,. \] But the remaining exponential also has to reduce to the leading terms of an expansion in semiclassical regimes. While $K$ is then automatically small, this may not be the case for $\bar{\Gamma}$. Without the reduction by ${\cal N}^{-1/3}$ for a fine lattice, one could not expand the exponentials to reproduce the polynomials in $K$ and $\Gamma$ classically occurring in the constraints. We have focused here on the first type of quantization which is simpler to compute explicitly but may not be as close to the full theory. Using $A$-holonomies, no diagonality can be used, but perturbative treatments of non-diagonal components are possible. It is thus possible, though more involved, to compute correction terms obtained through different quantization schemes and to compare their consequences, in particular those at the phenomenological level. A first result in this direction follows from the perturbation equations derived in \cite{HamPerturb} which show that effects of inverse triad corrections in $\Gamma$ are less significant than those in the commutator \cite{InhomEvolve}. 
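The role of the vertex number in this expansion can be made concrete with a small numerical sketch (our illustration: $\bar{\Gamma}=1/2$ is the closed-model value quoted above, the value of $K$ is an arbitrary small number, and $\delta\Gamma$ is dropped):

```python
import numpy as np

# Linearization error of a holonomy exp(i(K + N^(-1/3) * Gamma_bar)):
# with Gamma_bar = O(1), the expansion of the exponential is justified
# only when the vertex number N is large (fine lattice), as the text
# argues.  This sketch compares a coarse and a fine lattice.
def expansion_error(K, Gamma_bar, N):
    x = K + N ** (-1.0 / 3.0) * Gamma_bar   # holonomy argument
    return abs(np.exp(1j * x) - (1.0 + 1j * x))

K, Gamma_bar = 0.01, 0.5
coarse = expansion_error(K, Gamma_bar, N=10)       # few vertices
fine = expansion_error(K, Gamma_bar, N=10 ** 6)    # many vertices
```

Refining the lattice by five orders of magnitude in ${\cal N}$ reduces the linearization error by over two orders of magnitude here, consistent with the error scaling as the square of the holonomy argument.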
Thus, one can hope that the precise quantization procedure of curvature is not very important for physical aspects accessible so far. A detailed investigation of all consequences can nevertheless provide important guidance as to which procedure should be pursued in the full theory. \subsubsection{Different types of correction functions} In fact, we have included in the computation of perturbative effects four different correction functions $\alpha$, $\beta$, $D$ and $\sigma$. All of them come from inverse triad corrections. With all functions coming from the same type of modification, one may wonder why they should not all be identical. It is clear from the procedure that these functions arise from different classical functionals of densitized triad components. For instance, $\alpha$ comes from the antisymmetric part of $E^a_iE^b_i/\sqrt{|\det E|}$ while $\sigma$ comes from the symmetric part. They could be related to the same correction, but the quantization requires quite different rewritings (\ref{AB_def}) and (\ref{gradient}) of the corresponding terms in Hamiltonians such that correction functions will differ. In particular, they come with different parameters such as $r$. On top of that, each correction function is subject to quantization ambiguities. As we have seen, however, the typical behavior is robust under changes of the parameters. In particular, all correction functions have the same qualitative properties and differ only quantitatively in a way parameterized by a few parameters. \subsubsection{Implications for gauge issues} The assumptions on states used to derive effective constraints have a bearing on the gauge issue. By specifying the peak value of a spatial geometry and its extrinsic curvature in a semiclassical state we are fixing the spatial diffeomorphism constraint rather than solving it by averaging as done in the full theory \cite{ALMMT}. Choosing the form of peak values partially implements a chosen gauge, but still allows some freedom. 
We also note that even though spatial diffeomorphisms are fixed, one still has to impose the constraint. This will give rise to one of the cosmological perturbation equations as is clear from \cite{HamPerturb}. Fixing the diffeomorphism constraint also implies a different viewpoint for the Hamiltonian constraint operator of the loop quantization. In the full construction \cite{QSDI}, one makes use of diffeomorphisms in order to make the operator more independent of the choice of edges used to quantize curvature. When diffeomorphisms are fixed, this is no longer possible and effective constraints would depend on precisely how such edges are chosen. We have fixed this freedom here by laying the edges entirely on the lattice resulting in a graph preserving operator. Thus, holonomy corrections in the constraint depend on the lattice spacing provided through a state implementing the background geometry. While this simplifies the calculations without leading to significant quantitative changes in coefficients, we are as a consequence disregarding the creation of new vertices by the constraint operator. Thus, ${\cal N}$ is constant for the construction, but may effectively be assumed to be slowly dependent on, e.g., the total volume (see \cite{InhomLattice} for more details). \section{General implications for effective theory} \label{Effective} Our calculations, following the scheme to derive effective equations sketched before, have led to corrections (\ref{EffHam}) which arise as leading order terms in an effective theory of perturbative loop quantum gravity. No complete expression has been derived, but characteristic terms are clear and lead to interesting phenomena \cite{InhomEvolve}. Rather than studying one model in detail we have provided here an illustration of the general scheme: The characteristic feature of loop quantizations is the use of holonomies, which give rise to typical correction terms. 
They can be split into higher power corrections, which are always perturbative, and corrections to inverse powers of triad components which become non-perturbative at small scales. All these corrections are in addition to discretization and genuine quantum effects such as higher time derivatives. In this section we highlight conceptual conclusions that can be drawn from such a scheme. \subsection{Basic variables in quantum gravity corrections} Holonomy corrections arise through expectation values and thus depend on the basic variables used in the quantization. Using commutators to quantize triad components, for instance, modifies the classical expressions in a way which can be computed through the explicitly available eigenvalues such as (\ref{BrjEigen}) of these operators. The occurrence of trigonometric functions instead of direct curvature or connection components leads to higher power terms when expanded in an effective constraint. Such corrections depend, by construction, on $p(x)/\ell_{\mathrm P}^2=\ell_0^2\tilde{p}(x)/\ell_{\mathrm P}^2$ and $k(x)=\ell_0\tilde{k}(x)$, respectively, both of which are independent under rescaling the coordinates. They do depend, however, on the lattice size which determines the scales on which a state probes the field. 
If we split the fields into background parts \begin{equation} \tilde{\bar{p}}:=\frac{1}{V_0} \int{\mathrm{d}}^3x \tilde{p}(x)\quad\mbox{and}\quad \tilde{\bar{k}}:=\frac{1}{V_0}\int{\mathrm{d}}^3x\tilde{k}(x) \end{equation} by integrating over a cube (sufficiently large to contain, say, the Hubble volume) of coordinate volume $V_0=\int{\mathrm{d}}^3x$ and perturbations \begin{equation} \delta \tilde{p}(x)= \tilde{p}(x)-\tilde{\bar{p}}\quad\mbox{and}\quad \delta\tilde{k}(x)= \tilde{k}(x)-\tilde{\bar{k}} \end{equation} to set up cosmological perturbation theory \cite{HamPerturb},\footnote{In \cite{HamPerturb}, only the variables written here with a tilde have been used, but the tilde was dropped for notational convenience.} we can see from preceding constructions that it is not these fields directly which occur in correction functions. In isotropic loop quantum cosmology, the quantization is based on variables \begin{equation} \bar{p}=V_0^{2/3}\tilde{\bar{p}}\approx\frac{1}{V_0^{1/3}}\sum_v \ell_0^3\tilde{p}(v)=\frac{1}{{\cal N}^{1/3}}\sum_vp(v) \end{equation} and \begin{equation} \bar{k}=V_0^{1/3}\tilde{\bar{k}}\approx\frac{1}{V_0^{2/3}}\sum_v \ell_0^3\tilde{k}(v)=\frac{1}{{\cal N}^{2/3}}\sum_vk(v) \end{equation} which now appear as lattice averages in an inhomogeneous setting and provide the background for cosmological perturbation theory. As such, they do not depend on the auxiliary coordinate volume $V_0$ as they would in homogeneous models \cite{Bohr} but on the number ${\cal N}$ of lattice vertices. These two quantities are related by $V_0={\cal N}\ell_0^3$ through the lattice size $\ell_0$, but ${\cal N}$ has significance as a parameter specifying the states rather than just being auxiliary as $V_0$. 
Similarly, basic variables of the inhomogeneous theory are functions\footnote{Note that $\ell_0$ is used to rescale the inhomogeneous $\tilde{p}(x)$ while $V_0$ is used to rescale $\tilde{\bar{p}}$ as it is done in isotropic models \cite{Bohr}.} \begin{equation} \label{psplit} p(x)=\ell_0^2\tilde{p}(x) =\frac{\ell_0^2}{V_0^{2/3}}\bar{p}+\delta p(x)= \frac{1}{{\cal N}}\sum_vp(v)+\delta p(x) \end{equation} which directly occur in correction functions through fluxes, and \begin{equation} k(x)=\ell_0\tilde{k}(x) =\frac{\ell_0}{V_0^{1/3}}\bar{k}+\delta k(x)= \frac{1}{{\cal N}}\sum_vk(v)+\delta k(x) \end{equation} occurring in higher power corrections through holonomies. This shows that the resulting equations are rescaling invariant when $\tilde{p}$, $\tilde{k}$ and $\ell_0$ change simultaneously, a fact which was not always obvious in isotropic models based on the scale factor. As expected, the equations are also dependent on specifics (mainly ${\cal N}$) of the state whose dynamics is described effectively. This shows which states are suitable for perturbation theory and when perturbations break down. A perturbation scheme works only if $\delta \tilde{p}\ll\tilde{\bar{p}}$ which from (\ref{psplit}) implies that differences between local edge labels of the state (corresponding to $\delta p(x)=\ell_0^2\delta\tilde{p}(x)$) must be small compared to the average lattice label ${\cal N}^{-1}\sum_vp(v)$ (corresponding to the perturbative background value of $p(x)$). Since the labels are discrete, differences between them have a positive lower bound unless they are equal. Thus, the average label must be large compared to the discrete gap in the spectrum of labels. In our U(1)-theory, labels are integer valued which means that the average label must be larger than one, and local edge labels must not stray too much from the average. There is no such restriction from the curvature perturbations because curvature does not have a discrete spectrum. 
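The perturbative condition just described can be illustrated on a toy lattice (the vertex number and label statistics below are our own choices, with labels far above the unit gap):

```python
import numpy as np

# Split lattice labels p(v) into a background average and perturbations,
# mimicking p(x) = N^{-1} sum_v p(v) + delta p(x) from the text, and
# check the perturbative condition |delta p| << p_bar for labels well
# above the discrete gap (average label ~ 50 in units of the gap).
rng = np.random.default_rng(0)
N = 1000                                # number of lattice vertices
p_v = 50.0 + rng.normal(0.0, 0.5, N)    # local labels, nearly homogeneous

p_bar = p_v.sum() / N                   # background: average lattice label
delta_p = p_v - p_bar                   # perturbations around the average

# perturbative regime: fluctuations small relative to the average label
ratio = np.max(np.abs(delta_p)) / p_bar
```

If the average label were of order one, as near the discreteness scale, fluctuations of the same absolute size would violate $\delta\tilde{p}\ll\tilde{\bar{p}}$ and the perturbation scheme would break down, as discussed above.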
\subsection{Quantum variables and classical limit} Starting from the Hamiltonian (constraint) operator in any quantum theory, the quantum Hamiltonian is defined as a function on the projective Hilbert space determined by taking expectation values. This can be seen as the Hamiltonian function of a dynamical system whose phase space is obtained from the Hilbert space \cite{GeomQuantMech,ClassQuantMech,Schilling}. The system thus appears of classical form at least as far as dynamics is concerned, but each of its classical degrees of freedom is accompanied by infinitely many quantum variables (\ref{QuantVars}). An effective description requires a further step, truncating the infinitely many quantum variables to a finite set \cite{EffAc}. If this is done consistently, one obtains effective equations which amend the classical ones by quantum corrections. One often performs such a truncation by using a certain class of semiclassical states to compute expectation values of the Hamiltonian operator. The regime under consideration determines what a suitable set of semiclassical states is. Based on the assumed semiclassicality of states peaked at values $p^I$ and $k_J$, the expressions we derived give the main part of the effective Hamiltonian constraint computed as an expectation value in such states. Note that we did not explicitly compute expectation values in states but read off corrections from operators by expanding trigonometric functions arising from holonomies or eigenvalues of inverse triad operators. Each of these corrections requires, strictly speaking, eigenstates of holonomies for higher order corrections in $\tilde{k}$ or flux eigenstates for corrections as functions of $\tilde{p}$. But even if we were to compute expectation values in peaked states, the main corrections would be of the form read off from different eigenstates as seen by analogous calculations in the isotropic case \cite{Bohr,Josh}. 
In general, one has to use semiclassical states which are neither eigenstates of holonomies nor of flux operators. This gives rise to additional contributions depending, e.g., on the spread of the state. From the spread and other detailed properties of states one obtains contributions depending on additional independent quantum variables corresponding to fluctuations and correlations. For non-quadratic Hamiltonians or constraints, these quantum variables couple to classical variables and influence their motion. To some degree, the appearance of additional independent quantum variables corresponds to higher derivative terms in effective actions \cite{Karpacz}. Thus, we obtain modified coefficients (from correction functions such as $\alpha$), higher powers in momenta (from $\sin k_I$ and $\cos k_I$) and higher derivative terms from quantum variables (interpreted as higher time derivatives) and from the discretization (higher spatial derivatives), which comprise all effects known and expected from effective actions. The first two arise as typical corrections by using holonomies. \subsubsection{Basic variables vs.\ coarse graining} \label{Coarse} In our treatment here we assumed that lattice scales are small compared to other scales of the relevant physical fields such as matter or classical metric modes to be obtained in a semiclassical limit. In such a context, it is sufficient to use the basic variables as they come as labels of a quantum state directly in effective correction functions. This is not possible in regimes where basic variables of the states are themselves strongly inhomogeneous as it necessarily happens when the discrete flux labels $\mu_{v,I}$ approach the lowest non-zero value one. Then, the perturbative condition $\delta\mu_{v,I}\ll\mu_{v,I}$ where $\delta\mu_{v,I}$ refers to the difference in nearby labels cannot be satisfied unless $\delta\mu_{v,I}=0$, i.e.\ the labels are exactly homogeneous. 
Most likely, this happens in strong curvature regimes where perturbation theory would be expected to break down even classically. But since the discreteness of the labels $\mu_{v,I}$ plays a role in this simple argument, there can be regimes where classical perturbation theory would be applicable but the underlying lattice formulation would not seem to be in a perturbative regime. In such cases, one would have to coarse grain the basic variable, i.e.\ replace the basic lattice site variables by averages over larger patches of an intermediate scale. Then, the averaged labels would increase, relieving the contradiction between $\delta\mu_{v,I}\ll\mu_{v,I}$ and quantum discreteness. \subsubsection{Orders of magnitude of corrections} \label{magnitude} With several different correction terms, it is helpful to know whether in certain regimes some of them can be ignored. This can be difficult to determine in homogeneous models unless one makes special choices of ambiguity parameters such as large values of $j$ \cite{Inflation}. In inhomogeneous situations it is often simpler to determine which corrections are expected to be dominant because they depend differently on the basic scale contained in $p_{v,I}$ \cite{InhomLattice}. These variables are parameters determining the state and thus the physical regime being probed. When $p_{v,I}$ is small, i.e.\ close to its minimum $\ell_{\rm P}^2$, inverse triad corrections are large. They decrease when $p_{v,I}$ becomes larger, but this also implies larger and fewer lattice sites such that discretization effects become important. Moreover, in nearly isotropic configurations extrinsic curvature is given by \begin{equation} \label{kest} k_{v,I}=\sqrt{8\pi Gp_{v,I}\rho/3} \end{equation} as it follows from the Friedmann equation. The energy density scale $\rho$ thus determines when curvature corrections are relevant. 
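The competition between the two correction types can be sketched numerically in Planck units ($O(1)$ factors dropped; only the scalings are taken from the text, the normalizations below are our assumptions): inverse triad corrections grow when $p_{v,I}$ approaches its minimum $\ell_{\rm P}^2$, while by (\ref{kest}) curvature corrections scale like $p_{v,I}\rho$ at fixed energy density.

```python
import numpy as np

# Orders-of-magnitude sketch (Planck units, O(1) factors dropped):
# inverse triad corrections behave like l_P^2 / p, large only near the
# discreteness scale, while curvature corrections behave like p * rho
# as follows from k^2 ~ G p rho in the Friedmann relation.
l_P2 = 1.0    # Planck length squared (unit)
rho = 1e-12   # energy density far below the Planck scale

def inverse_triad_correction(p):
    return l_P2 / p          # relative size, large only near p ~ l_P^2

def curvature_correction(p):
    return p * rho           # ~ k^2 from the Friedmann relation

# within a semiclassical window l_P^2 < p << 1/rho both stay small
p_mid = 1.0e3 * l_P2
```

For this low energy density, both corrections are small throughout the window, and near the lower end the inverse triad contribution dominates, in line with the discussion above.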
Since there is also a factor of $p_{v,I}$, curvature corrections increase with larger $p_{v,I}$ just as discretization corrections. For a semiclassical regime we must have $p_{v,I}>\ell_{\rm P}^2$ in order to reproduce closely the correct inverse powers of triad components. We must also have a discreteness scale $p_{v,I}$ which is sufficiently small in order to avoid discretization effects already in, say, particle physics. This requires $p_{v,I}$ to be much smaller than the typical physical scale squared, such as a wave length $\lambda$ of field modes or even the Hubble length $a/\dot{a}$. We thus have a range $\ell_{\rm P}^2<p_{v,I}\ll \lambda^2$ or $\ell_{\rm P}^2<p_{v,I}\ll (8\pi G\rho)^{-1}$ if we express the Hubble length in terms of energy density. At the upper bound we ensure that discretization effects do not disrupt other physics used essentially in a given scenario. As a consequence of (\ref{kest}), this {\em implies} that higher order corrections in curvature are small, too. The dominant contributions are then given by inverse triad corrections which we have focused on in the preceding derivations. Note that the semiclassical range for $p_{v,I}$ is large in late time cosmology, implying that corrections can be arbitrarily small, for instance to the propagation of signals from gamma ray bursts. In the early universe, however, and in particular during inflation the energy scale is much higher, restricting the range more narrowly \cite{InhomEvolve}. The best tests of quantum gravity effects are thus expected from early universe cosmology. \subsubsection{Classical limit} We have ignored in our calculations so far any detailed specifics of states and terms containing quantum variables. Implicitly, we are thus assuming that such terms are subdominant, just as one assumes analogous terms to be subdominant in a derivative expansion of low energy effective actions. 
Under this assumption we reproduce classical expressions in the suitable limit, which proves that loop quantum gravity has the correct classical limit in this perturbative regime in the same sense as in usual effective theories. This statement certainly includes inhomogeneities in the perturbative sector considered here. For instance, the Newton potential and corrections on smaller scales can be obtained from perturbation equations derived from the effective constraints \cite{InhomEvolve}. In effective theory, verifying the correct classical limit does not require one to construct explicit dynamical coherent states, not even approximately. This would certainly be of interest, but would be highly complicated and is rarely done in interacting field theories where one can nevertheless be certain about the correct classical limit. We emphasize that, in any case, a discussion of the semiclassical limit based on coherent states does require such states to be {\em dynamical} coherent states, or at least must involve statements on dynamical changes of state parameters to specify suitable regimes. This means that states must stay approximately coherent under evolution or, in a fully constrained theory such as gravity, solve the Hamiltonian constraint. If this is not realized, quantum variables and the back-reaction of spread and deformations on the classical variables are not under sufficient control to ensure the correct classical limit. There are two viable procedures to verify the correct classical limit of a quantum theory, be it a constrained or unconstrained system. First, one may use kinematical coherent states to compute expectation values of the dynamical operators (a Hamiltonian or constraint operators) and then analyze the dependence of quantum variables in resulting equations of motion; if their effect on expectation values is small in suitable regimes, the correct classical limit results. 
Secondly, dynamical coherent states can be used if they can be constructed at least approximately, which directly illustrates whether the dynamics of expectation values is close to the classical one. The second procedure is much more complicated for interacting theories since the full quantum dynamics would have to be solved at least approximately at the quantum level. The first procedure, by contrast, allows one to derive effective dynamical equations first and then approximate solutions to understand the behavior of expectation values. Thus, usually kinematical semiclassical states are used in explicit effective descriptions, followed by an analysis such as one in a derivative expansion in quantum field theory. Such a further analysis is always required when kinematical semiclassical states are used, and it can only be done in a regime dependent way to bring in conditions for when semiclassicality should be satisfied. We have done this implicitly in our discussion by assuming slowly varying fields as in usual derivative expansions, both in space by doing a continuum limit of the lattice states and in time by assuming quantum variables to be negligible. We emphasize again that even if one can demonstrate an ``instant'' classical limit by using kinematical coherent states, a dynamical statement would still require one to assume (or to show) that such back-reaction effects of quantum variables on expectation values are not strong. This picks the correct regime of states in which one has semiclassical behavior. Without such an additional analysis, kinematical coherent states would neglect the back-reaction of spreading and deformations of states on expectation values which are essential for dynamical effective equations \cite{EffAc}. An additional aspect arises for generally covariant situations where not all variables can be peaked in a semiclassical state as it would be the case in a common kinematical coherent state. 
Some of the phase space variables will have to play the role of internal ``clocks'' in which evolution of expectation values as well as quantum variables is measured. Thus, when constructing kinematical coherent states to check the classical limit, they must not be peaked on all phase space variables; a choice of clock has to be made before the calculation. Then, quantum variables also back-react on the change of the clock. Often, investigations of classical limits based on kinematical coherent states are motivated by well-known constructions of the harmonic oscillator or free quantum field theories. The behavior of quantum variables or of dynamical coherent states in general can, however, be very different from the well-studied aspects of the free systems. Such systems or small deviations from them with anharmonicity can well be studied by coherent state techniques. But gravity is very different and not expanded around a set of harmonic oscillators. In fact, gravity with its unbounded Hamiltonian even lacks a ground state or vacuum to expand around. The bounce model solved in \cite{BouncePert}, for instance, shows that the spreads change exponentially rather than being constant or at least periodic as it happens for the harmonic oscillator. The resulting semiclassical picture is very different from that provided by harmonic oscillator coherent states. This must be taken into account in semiclassical analyses; effective theory provides suitable means to study such situations in sufficiently general terms as initiated in this paper. \subsection{Collective graviton} The constructions indicate a picture of the classical limit of quantum gravity where linear metric modes appearing in the evolution equations are not basic excitations of a quantum field. They arise, rather, as collective excitations out of the underlying discrete quantum theory. 
At a basic level, degrees of freedom are encoded in quantum numbers $\mu_{v,I}$ while the scalar mode, for instance, is obtained through the difference between such a local label and the average value on the whole lattice. The classical modes thus arise as non-local, effective excitations out of the underlying quantum state \cite{InhomLattice}. This shows in a well-defined sense how classical degrees of freedom are obtained as collective excitations, analogously to phonons in a crystal. That the correct classical dynamics results for these collective modes is demonstrated, for instance, by the derivation of Newton's potential for perturbations on a flat isotropic background in \cite{InhomEvolve}. \section{Summary} Together with \cite{EffAc,InhomLattice,HamPerturb} we have shown in this paper that techniques are now available to derive effective equations of cosmological perturbation theory. The geometrical background on which cosmological perturbations are defined is introduced through a class of states, rather than being used to set up the quantization. Background independent quantum properties thus remain, but one can make use of perturbation expansions for explicit calculations. The role of quantum labels and basic variables is clear from this procedure, which determines the type as well as order of magnitudes of correction terms which remained obscure previously. The inhomogeneous treatment including all relevant modes allows us to see all possible correction terms. Note, for instance, that since the isotropic expression for the co-triad is finite on small scales even classically, it would not contribute a correction function to the gravitational part of the Hamiltonian constraint in a purely isotropic setting. When isotropic expressions are quantized directly, a cancellation of the inverse isotropic triad component hides possible quantum effects of the full constraint. This does not arise in our context starting from an inhomogeneous lattice quantization. 
Thus, complete corrections are obtained in reliable form. Many different regimes are still to be explored to obtain a full overview of all effects. Moreover, gauge issues have to be investigated which is of relevance for the full quantum theory, too. While general effective equations including the relevant inhomogeneities are now available and orders of correction terms can be estimated, one still has to use them with care since they are not yet formulated for gauge invariant perturbations. In this context one should notice that not only evolution equations but also gauge transformations are determined by the constraints and thus modified by quantum corrections. It is thus not possible to use classical expressions for gauge invariant quantities since they will receive additional corrections. These issues are currently being studied to complete the derivation of equations with quantum corrections. The strategy for computing those corrections from quantum operators has been provided in this paper. When speaking of quantum corrections to classical equations it is clear that to zeroth order the correct classical limit has to be satisfied. In fact, what we have shown here implies that loop quantum gravity has the correct classical limit for scalar modes in longitudinal gauge propagating on a spatially flat background. In the process, we have demonstrated which steps must be involved in such a detailed calculation, most importantly a continuum limit but also a slowly varying field approximation. Extensions to other modes and gauges, and different backgrounds, can be done by the same techniques but are technically more involved to do explicitly. Nevertheless, it is clear from the construction that the correct classical limits will also be reproduced in those cases. 
More precisely, we have shown in Sec.~\ref{magnitude} that there are always ranges of the basic lattice variables such that quantum corrections are small in nearly classical situations of low energy density and small curvature. In more energetic cosmological situations, those ranges can shrink to narrow intervals such that significant quantum corrections can be expected \cite{InhomEvolve}. This demonstration of the correct classical limit crucially rests on a new understanding of effective theory \cite{EffAc}. Although it has not yet been formulated fully for field theories (but see \cite{EffectiveEOM}), this scheme is applicable here due to the ultraviolet cut-off of quantum gravity. On any lattice state we have only finitely many degrees of freedom in any compact spatial volume to which the quantum mechanical techniques of \cite{EffAc} directly apply. While loop quantum gravity does not possess a sharp cut-off but is rather based on arbitrary graphs in space any of which can occur in a general superposition \cite{ALMMT}, effective equations are always defined with respect to a single class of states. The physical state thus determines the cut-off dynamically. We have certainly not used explicit physical solutions of the Hamiltonian constraint but rather computed effective equations from general states. If a physical solution were available, the labels $p_v$ would be determined explicitly and fix the order of correction terms completely. Moreover, a full solution would determine how the lattice itself changes by the creation of new vertices in terms of an internal clock such as the total volume. This graph-changing nature seems to be one of the most important effects to be understood especially for late-time evolution in cosmology, or any dynamical issue relevant for large spatial slices. 
Although such a full solution seems currently out of reach, models and effective analyses already provide quite detailed information on the dynamics of background independent quantum gravity in cosmologically relevant regimes. \section*{Acknowledgements} MB was supported by NSF grant PHY-0554771, HH by the fellowship A/04/21572 of Deutscher Akademischer Austauschdienst (DAAD) and MK by the Center for Gravitational Wave Physics under NSF grant PHY-01-14375. We thank Parampreet Singh for joining us in initial calculations of this project.
\section{\label{intro}Introduction} In the class of quantum state discrimination problems, minimum error discrimination (MED) is one of the oldest. The problem arises because nonorthogonal states are not perfectly distinguishable: any measurement aimed at distinguishing among such states cannot hope to do so without some error. Different measurement strategies differ in performance (measured in terms of the average probability of success). Given that the states cannot be distinguished perfectly, there must be some measurement strategy which gives the maximum probability of success; finding this strategy is the problem of MED. The setting in MED is as follows: Alice has a fixed ensemble of states $\{ \rho_1, \rho_2, \cdots, \rho_n\}$, where the $\rho_i$ are positive semi-definite operators of trace $1$ acting on some Hilbert space $\mathcal{H}$ of dimension $n$. She selects one of these states ($\rho_i$, say) with probability $p_i \in \{p_1, p_2, \cdots, p_n\}$ ($p_i > 0$, $\sum_{i=1}^n p_i = 1$; the $p_i$ are referred to as a priori probabilities) and gives it to Bob. Bob knows that Alice has selected the state from the set $\{\rho_i\}_{i=1}^{n}$ with a priori probabilities $p_i$, and his job is to figure out which state he has been given using an $n$-element POVM. In MED, Bob's measurement strategy is constrained in the following way: there is a one-to-one correspondence between elements in Alice's ensemble $\{ p_i, \rho_i \}_{i=1}^{n}$ and Bob's POVM elements $\{ E_i\}_{i=1}^{n}$ (where $E_i \ge 0$ and $\sum_{i=1}^{n} E_i = \mathbb{1}_n$, the identity operator acting on $\mathcal{H}$), so that when the $i$-th measurement outcome clicks, Bob infers that Alice gave him the $i$-th state from her ensemble. Since $\rho_1$, $\rho_2$, $\cdots$, $\rho_n$ do not generally lie on orthogonal supports, errors are likely to occur. 
Bob's job is to find the optimal POVM for minimizing the average probability of this error, or equivalently, maximizing the average probability of success. There are other variants of the quantum state discrimination problem \cite{Per}, \cite{Croke}, \cite{Walgate}. The most popular among them is called unambiguous state discrimination, in which, if one has to perform state discrimination over an ensemble of $n$ states $\{ p_i, \rho_i \}_{i=1}^{n}$, the measurement strategy used has $n+1$ outcomes, where, just as in the MED case, there is a one-to-one correspondence between ensemble elements $\rho_i$ and the $i$-th POVM element $E_i$. Furthermore, the POVM must be constrained so that when Alice sends Bob the $i$-th state, the $j$-th POVM element won't click when $ j \neq i, n+1$. The trade-off is that Bob can say nothing about the state which Alice gave him when the $(n+1)$-th POVM element clicks. Heuristically one expects that one cannot discriminate unambiguously among a set of linearly dependent states; this was proven true later \cite{Chef}. Coming back to MED, necessary and sufficient conditions for the optimal POVM for any ensemble were given by Holevo \cite{Hol} and Yuen et al. \cite{Yuen} independently. Yuen et al. cast MED into a convex optimization problem for which numerical solutions can be obtained in polynomial time\footnote{That is, polynomial in $\dim \mathcal{H}$.}. While there are quite a number of numerical techniques to obtain the optimal POVM \cite{Eldar, Jezek, Hel2, Tyson1}, for very few ensembles has the MED problem been solved analytically. Some of these include an ensemble of two states \cite{Hel}, ensembles whose density matrix is the maximally mixed state \cite{Yuen}, equiprobable ensembles that lie on the orbit of a unitary \cite{Ban,Chou,BS}, and mixed qubit states \cite{Bae, Ha}. In \cite{Kwon}, many interesting properties of the MED problem have been elucidated using the geometry of $N$ qudit states.
An upper bound for the optimal success probability was given in \cite{Bae1}. Comparing the existing results for MED with those for the unambiguous state discrimination problem, it is seen that the latter has been solved for more kinds of ensembles than the former \cite{Per, Chef, Pang, Bergou, Ray, Herzog, Janos, Som}. \textbf{Linearly Independent Pure State Ensembles:} For an ensemble of $n$ linearly independent pure states ($n$-LIP), given by $\widetilde{P} \equiv \{p_i, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$ (where $\ket{\psi_i}$'s span $\mathcal{H}$), certain properties which the optimal POVM should satisfy have already been given in the literature on MED: \begin{itemize} \item [(i)] The optimal POVM is a unique rank-one projective measurement \cite{Ken,Hel,Carlos}. \item [(ii)] The optimal POVM for MED of $\widetilde{P}$ is the pretty good measurement (PGM) of another ensemble, $\widetilde{Q}\equiv \left\{ q_i>0, \ketbra{\psi_i}{\psi_i} \right\}_{i=1}^{n}$\footnote{While (i) is subsumed by (ii) (as the PGM of the ensemble $\widetilde{Q}$ is a rank-one projective measurement), it is beneficial to emphasize it separately.} \cite{Bela,Mas,Carlos}. Note that the $i$-th state in $\widetilde{P}$ and $\widetilde{Q}$ is the same for all $1 \le i \le n$, whereas the probabilities are generally not. Additionally, in \cite{Carlos}, it is explicitly shown that the ensembles $ \widetilde{P}, \widetilde{Q}$ are related through an invertible map. \end{itemize} To formalize this invertible relation between $\widetilde{P}$ and $\widetilde{Q}$ we will now introduce a few definitions. \begin{mydef} \label{DefEns} $\mathcal{E}$ is the set of all ensembles comprising $n$ LI pure states. Hence, any ensemble in $\mathcal{E}$ is of the form $\widetilde{P} = \{ p_i>0, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$ where $\ket{\psi_1},\ket{\psi_2},\cdots,\ket{\psi_n}$ are LI.\end{mydef} $\mathcal{E}$ is a $(2n^2-n-1)$ real parameter set.
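The footnote's claim, that the PGM of a LI pure-state ensemble is a rank-one projective measurement, can be checked numerically. The sketch below (pure Python, with illustrative two-state real qubit values that are not from the paper) builds the PGM elements $E_i = \rho^{-1/2} q_i \ketbra{\psi_i}{\psi_i} \rho^{-1/2}$, using the closed-form square root of a $2\times 2$ positive definite matrix, and confirms that they resolve the identity and are idempotent:

```python
import math

# Illustrative two-state real qubit ensemble (values not from the paper).
q = [0.7, 0.3]
t = 0.4
psi = [(1.0, 0.0), (math.cos(t), math.sin(t))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def outer(v):  # |v><v| for a real vector v
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

# Density matrix rho = sum_i q_i |psi_i><psi_i|.
rho = [[sum(q[k] * outer(psi[k])[i][j] for k in range(2)) for j in range(2)]
       for i in range(2)]

# Closed-form positive square root of a 2x2 positive definite matrix A:
# sqrt(A) = (A + sqrt(det A) I) / sqrt(tr A + 2 sqrt(det A)).
s = math.sqrt(rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0])
tau = math.sqrt(rho[0][0] + rho[1][1] + 2 * s)
root = [[(rho[i][j] + (s if i == j else 0.0)) / tau for j in range(2)]
        for i in range(2)]

# rho^(-1/2) is the inverse of the 2x2 square root.
det = root[0][0] * root[1][1] - root[0][1] * root[1][0]
inv_root = [[root[1][1] / det, -root[0][1] / det],
            [-root[1][0] / det, root[0][0] / det]]

# PGM elements E_i = rho^(-1/2) q_i |psi_i><psi_i| rho^(-1/2).
E = [mat_mul(mat_mul(inv_root,
                     [[q[k] * x for x in row] for row in outer(psi[k])]),
             inv_root) for k in range(2)]

# Check: the elements sum to the identity and each is idempotent,
# i.e., the PGM here is a rank-one projective measurement.
comp_err = max(abs(E[0][i][j] + E[1][i][j] - (1.0 if i == j else 0.0))
               for i in range(2) for j in range(2))
idem_err = max(abs(mat_mul(E[k], E[k])[i][j] - E[k][i][j])
               for k in range(2) for i in range(2) for j in range(2))
print(comp_err, idem_err)
```

Both residuals vanish to machine precision; completeness holds for any ensemble by construction, while idempotency is special to linearly independent pure states.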
\begin{mydef} \label{DefPro} $\mathcal{P}$ is the set of all rank-one projective measurements on the states of $\mathcal{H}$; an element in $\mathcal{P}$ is of the form $\{ \ketbra{v_i}{v_i} \}_{i=1}^{n}$ where $\braket{v_i}{v_j}=\delta_{ij},\; \forall \; 1 \leq i,j \leq n$. \end{mydef} $\mathcal{P}$ is an $n(n-1)$ real parameter set. From point (i) above we see that the optimal POVM for $\widetilde{P} \in \mathcal{E}$ is a unique element in $\mathcal{P}$. Thus, one can define the \emph{optimal POVM map}, $\mathscr{P}$, in the following way: \begin{mydef} \label{mathscrP} $\mathscr{P}: \mathcal{E} \longrightarrow \mathcal{P}$ is such that $ \mathscr{P}\left( \widetilde{P} \right)$ is \emph{the} optimal POVM for MED of $\widetilde{P}\in \mathcal{E}$. \end{mydef} Let $PGM$ denote the PGM map, i.e., $PGM:\mathcal{E} \longrightarrow \mathcal{P}$ is such that $PGM\left( \widetilde{Q} \right)$ is the PGM of $\widetilde{Q} \in \mathcal{E}$, i.e. (refer to \cite{Carlos}), $PGM\left(\widetilde{Q}\right)=\left\{\rho_q^{-\frac{1}{2}}q_i \ketbra{\psi_i}{\psi_i}\rho_q^{-\frac{1}{2}} \right\}_{i=1}^{n}$, where $\rho_q = \sum_{i=1}^{n} q_i \ketbra{\psi_i}{\psi_i}$. Then (ii) above says that there exists an invertible map, which we label $\mathscr{R}$, defined in the following way: \begin{mydef} \label{mathRdef} $\mathscr{R}: \mathcal{E} \longrightarrow \mathcal{E}$ is a bijection such that \begin{equation} \label{mathRR} \mathscr{P}\left( \widetilde{P} \right) =PGM \left( \mathscr{R} \left( \widetilde{P} \right) \right), \; \forall \; \widetilde{P} \in \mathcal{E}.\end{equation} \end{mydef} Knowing $\mathscr{R}$ would solve the problem of MED for LI pure state ensembles. While the existence of the invertible function $\mathscr{R}$ has been proven \cite{Bela,Carlos}, it is unfortunately not known, either analytically or computationally, for an arbitrary ensemble $\widetilde{P}$.
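In particular, $\mathscr{R}$ is not the identity map: the PGM of $\widetilde{P}$ itself is generally suboptimal when the priors are unequal. A pure-Python sketch with illustrative numbers (not from the paper): one can check that for the PGM of $\widetilde{P}$ the success probability reduces to $\sum_i \left(G^{1/2}\right)_{ii}^2$, where $G$ is the gram matrix of the weighted states $\{\sqrt{p_i}\ket{\psi_i}\}$, and for two pure states this falls strictly short of the known optimum $\frac{1}{2}\left(1+\sqrt{1-4|G_{12}|^2}\right)$ \cite{Hel}:

```python
import math

# Illustrative priors and state overlap (not from the paper).
p1, p2 = 0.6, 0.4
c = 0.40824829                      # c = <psi_1|psi_2>

# Gram matrix G of the weighted states sqrt(p_i)|psi_i>.
off = math.sqrt(p1 * p2) * c
G = [[p1, off],
     [off, p2]]

# Closed-form positive square root of a 2x2 positive definite matrix:
# sqrt(G) = (G + sqrt(det G) I) / sqrt(tr G + 2 sqrt(det G)).
s = math.sqrt(G[0][0] * G[1][1] - G[0][1] * G[1][0])
tau = math.sqrt(G[0][0] + G[1][1] + 2 * s)

# Success probability of the PGM of P itself: sum_i (G^(1/2))_ii ^ 2.
pgm_ps = ((G[0][0] + s) / tau) ** 2 + ((G[1][1] + s) / tau) ** 2

# Two-state optimum (Helstrom): strictly larger for unequal priors.
helstrom = 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * G[0][1] ** 2))
print(pgm_ps, helstrom)
```

The gap is small here (of order $10^{-4}$) but nonzero, which is exactly why the reweighted ensemble $\widetilde{Q} = \mathscr{R}(\widetilde{P})$, rather than $\widetilde{P}$ itself, must be fed into the PGM.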
Fortunately $\mathscr{R}^{-1}$ is known \cite{Mas, Bela, Carlos}, i.e., having fixed the states $\{ \ket{\psi_i} \}_{i=1}^{n}$ one can give $p_i$ in terms of the $q_i$: let $G_q >0$ represent the gram matrix of the ensemble $\widetilde{Q}$, i.e., $\left( {G_q} \right)_{ij}=\sqrt{q_i q_j} \braket{\psi_i}{\psi_j}$, and let ${G_q}^{\frac{1}{2}}$ represent the positive square root of $G_q$; let $G$ denote the gram matrix of $\widetilde{P}$, i.e., $G_{ij} = \sqrt{p_ip_j}\braket{\psi_i}{\psi_j}, \; \forall \; 1 \leq i, j \leq n$; then the diagonal elements of $G$ can be written as functions of the $q_i$ and the matrix elements of ${G_q}^\frac{1}{2}$: \begin{equation} \label{piqi} G_{ii} = p_i = C \frac{q_i}{ \left( {G_q}^{\frac{1}{2}} \right)_{ii} } , \; \forall \; 1 \leq i \leq n, \end{equation} where $C$ is the normalization constant\footnote{$q_i >0, \; \forall \; 1 \leq i \leq n$. This comes from the definition of $\mathcal{E}$ and that $\{ q_i, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n} \in \mathcal{E}$. Also, $\left( {G_q}^\frac{1}{2} \right)_{ii} > 0, \; \forall \; 1 \leq i \leq n$. This is because ${G_q}^\frac{1}{2}$, being the positive square root of $G_q$ (the gram matrix of the linearly independent vectors $\{ \sqrt{q_i} \ket{\psi_i} \}_{i=1}^{n}$), is positive definite, and the diagonal elements of a positive definite matrix have to be greater than zero.}, $$C = \left(\sum_{j=1}^{n} \dfrac{q_j}{\left( {G_q}^{\frac{1}{2}} \right)_{jj}} \right)^{-1}.$$ This tells us what $\mathscr{R}^{-1}$ is: $$\mathscr{R}^{-1}\left( \left\{ q_i, \ketbra{\psi_i}{\psi_i} \right\}_{i=1}^{n} \right) =\left\{ p_i, \ketbra{\psi_i}{\psi_i} \right\}_{i=1}^{n},$$ where $p_i$ and $q_i$ are related by equation \eqref{piqi}. It is more convenient to define $\mathscr{R}^{-1}$ and $\mathscr{R}$ on the set of gram matrices, which we will denote by $\mathcal{G}$. \begin{mydef} \label{defG} $\mathcal{G}$ is the set of all positive definite $n \times n$ matrices of trace one.
\end{mydef} Note\footnote{Associating each $G \in \mathcal{G}$ with an $n \times n$ density matrix of rank $n$, we see that $\mathcal{G}$ is the same as the interior of the generalized Bloch sphere for $n$ dimensional systems. Hence $\mathcal{G} \subset \mathbb{R}^{n^2-1}$.} that $\mathcal{G}$ is convex and is also open in $\mathbb{R}^{n^2-1}$. Define $\mathscr{R}_\mathcal{G}^{-1}:\mathcal{G} \longrightarrow \mathcal{G}$ by $\mathscr{R}_\mathcal{G}^{-1}(G_q) = G$, using relation \eqref{piqi}. We know that $\mathscr{R}^{-1}$ is invertible on $\mathcal{E}$ (from \cite{Carlos}); this implies that $\mathscr{R}_\mathcal{G}^{-1}$ is invertible on $\mathcal{G}$, i.e., $\mathscr{R}_\mathcal{G}$ exists. Equation \eqref{piqi} tells us that $\mathscr{R}_\mathcal{G}^{-1}$ is continuous in $\mathcal{G}$. Since $\mathcal{G} \subset \mathbb{R}^{n^2-1}$ is open\footnote{The topology of $\mathcal{G}$ is that which is induced on it by the Hilbert-Schmidt norm. Note that this is equivalent to the Euclidean metric of $\mathbb{R}^{n^2-1}$.}, the invariance of domain theorem \cite{Spivak1} tells us that $\mathscr{R}_\mathcal{G}^{-1}$ is a homeomorphism on $\mathcal{G}$. This means that $\mathscr{R}_\mathcal{G}$ is also continuous on $\mathcal{G}$. To be able to express what $\mathscr{R}$ is, one needs to be able to solve the $n$ equations \eqref{piqi} for $q_i$ in terms of the $p_j$'s and $\ket{\psi_j}$'s. These equations are too complicated for one to hope to solve: to begin with, one doesn't even have an explicit closed form expression for $G^{\frac{1}{2}}$ in terms of the matrix elements of $G$ for arbitrary $n$. Even for the cases $n = 3,4$, where one \emph{can} obtain such a closed form expression for $G^{\frac{1}{2}}$, the nature of the equations is too complicated to solve analytically. This tells us that it is hopeless to obtain $q_i$ as a closed form expression in terms of $\{ p_i, \ket{\psi_i} \}_{i=1}^{n}$. A similar sentiment was expressed earlier \cite{Tyson}.
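While the forward map $\mathscr{R}_\mathcal{G}$ is intractable in closed form, the known inverse direction, equation \eqref{piqi}, is straightforward to evaluate numerically. A sketch for $n = 2$ (illustrative values, not from the paper):

```python
import math

# Evaluate the inverse map of equation (piqi): starting from the ensemble
# Q = {q_i, |psi_i>}, recover the priors p_i of the ensemble P whose
# optimal POVM is the PGM of Q.  Illustrative n = 2 values.
q = [0.7, 0.3]
c = math.cos(0.4)                   # c = <psi_1|psi_2> for two real states

# Gram matrix G_q of the weighted states sqrt(q_i)|psi_i>.
off = math.sqrt(q[0] * q[1]) * c
Gq = [[q[0], off],
      [off, q[1]]]

# Diagonal of the positive square root of a 2x2 positive definite matrix:
# sqrt(A) = (A + sqrt(det A) I) / sqrt(tr A + 2 sqrt(det A)).
s = math.sqrt(Gq[0][0] * Gq[1][1] - Gq[0][1] * Gq[1][0])
tau = math.sqrt(Gq[0][0] + Gq[1][1] + 2 * s)
root_diag = [(Gq[0][0] + s) / tau, (Gq[1][1] + s) / tau]

# p_i = C q_i / (G_q^(1/2))_ii with C fixed by normalization.
unnorm = [q[i] / root_diag[i] for i in range(2)]
C = 1.0 / sum(unnorm)
p = [C * u for u in unnorm]
print(p)
```

The resulting $p_i$ are automatically positive and normalized; the hard problem the paper addresses is inverting this computation, i.e., recovering the $q_i$ from given $p_i$.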
While a closed form expression for the solution seems too difficult to obtain (and, even if obtained, too cumbersome to appreciate), giving an \emph{efficient technique} to compute $q_i$ from $\{ p_i, \ket{\psi_i} \}_{i=1}^{n}$ establishes that the relation in equation \eqref{mathRR}, together with this technique, provides a solution for MED of an ensemble of $n$-LIPs. To achieve such a technique we recast the MED problem for any ensemble $\widetilde{P}$ in terms of a matrix equation and a matrix inequality using the gram matrix $G$ of $\widetilde{P}$. The matrix equation and the inequality are equivalent to the optimality conditions that the optimal POVM has to satisfy, i.e., the conditions given by Yuen et al.\ \cite{Yuen}. Recasting the problem in this form gives us three distinct benefits. \begin{itemize} \item[(1)] It helps us to \emph{explicitly} establish that the optimal POVM for $\widetilde{P}$ is given by the PGM of another ensemble of the form $\widetilde{Q}$ (i.e., the relation in equation \eqref{mathRR} is made explicit in the matrix equality and matrix inequality conditions). \item[(2)] MED is a rotationally invariant problem, i.e., the optimal POVM, $\{ E_i \}_{i=1}^{n}$, varies covariantly under a unitary transformation, $U$, of the states: $$\ketbra{\psi_i}{\psi_i} \rightarrow U\ketbra{\psi_i}{\psi_i}U^\dag \Longrightarrow E_i\rightarrow U E_i U^\dag. $$ This makes it desirable to subtract out the rotationally covariant aspect of the solution and, so, cast the problem in a rotationally invariant form. This is achieved through the aforesaid matrix equality and inequality. \item[(3)] It gives us a technique to compute $q_i$. \end{itemize} For (3) we need to compute $\mathscr{R}_\mathcal{G}(G)$ for a given $G \in \mathcal{G}$. This is done by using the analytic implicit function theorem, which tells us that $\mathscr{R}_\mathcal{G}$ is an analytic function on $\mathcal{G}$.
Specifically, we will vary $G$ from one point in $\mathcal{G}$, at which we know what the action of $\mathscr{R}_\mathcal{G}$ is, to another point in $\mathcal{G}$, at which we want to know what the action of $\mathscr{R}_\mathcal{G}$ is. Furthermore, since our technique rests on the theory of the MED problem for $n$-LIP ensembles, the algorithm it yields is expected to be computationally at least as efficient as existing techniques. We show that this is indeed the case, particularly by directly employing Newton's method to solve the matrix inequality. This adds to the utility of our technique. The paper is divided into the following sections. In Section \ref{MEDP} we go into detail about what MED is and elaborate on the optimality conditions, and specify what they look like for $n$-LIPs. In Section \ref{SSGMM} we recast the MED problem for LI pure states in a rotationally invariant form. In Section \ref{empIFT} IFT is employed to solve the rotationally invariant conditions, which were developed in the previous section; in Subsection \ref{algcompl} of Section \ref{empIFT} the computational complexity of our algorithm is compared with a standard SDP technique. We conclude the paper in Section \ref{conclusion}. \section{The MED Problem: The Conditions of Optimality} \label{MEDP} When the states $\ket{\psi_i}$ are pairwise orthogonal, i.e., $\braket{\psi_i}{\psi_j} =0, \, \forall \, i \neq j$, one can perfectly distinguish among them by performing the projective measurement $\{\ketbra{\psi_i}{\psi_i}\}_{i=1}^{n}$. In general the states $\{\ket{\psi_i} \}_{i=1}^n$ aren't pairwise orthogonal and in such a case, it may happen that despite being given $\ket{\psi_i}$, one's measurement output is $j \neq i$, leading to an error.
The average probability of such errors, $P_e$, is then given by \begin{subequations} \begin{equation} \label{Pe} P_e = \sum_{\substack{i,j=1 \\ i\neq j}}^n p_i \bra{\psi_i} E_j \ket{\psi_i}, \end{equation} and the average probability of success $P_s$ is given by \begin{equation} \label{Ps} P_s = \sum_{i=1}^n p_i \bra{\psi_i} E_i \ket{\psi_i}, \end{equation} \end{subequations} where $\{ E_j \}_{j=1}^{n}$ represents an $n$-element POVM ($n$-POVM). Note that $ P_s + P_e =1$. Our task is to maximize $P_s$ by choosing an appropriate POVM in the set of $n$-element POVMs. We refer to the maximum value of $P_s$ as $P_s^{max}$; it is given by \begin{equation} \label{Pmax} \begin{split} P_{s}^{max}& = \text{Max} \; \{P_s | \; \{ E_j \}_{j=1}^{n} \text{ is an $n$-POVM}\}. \end{split} \end{equation} The set of $n$-POVMs is convex, i.e., if $\{E_i\}_{i=1}^{n}$ and $\{E'_i\}_{i=1}^{n}$ are $n$-POVMs, then so is $\{ p E_i + (1-p) E'_i \}_{i=1}^{n}$, $\forall \; 0 \leq p \leq 1$. Hence MED is a constrained convex optimization problem. To every such constrained convex optimization problem (called the primal problem), there is a corresponding dual problem which provides a lower bound (if the primal problem is a constrained minimization) or an upper bound (if the primal problem is a constrained maximization) to the quantity being optimized (called the objective function). Under certain conditions these bounds are tight, implying that one can obtain the solution for the primal problem from its dual. We then say that there is no duality gap between the two problems. For MED, there is no duality gap between the primal and dual problems; thus the dual problem can be solved to obtain the optimal POVM \cite{Yuen}. The dual problem is given as follows \cite{Yuen}: \begin{equation} \label{dual} \text{Min} \: \text{Tr}(Z), \; \text{subject to: } Z \geq p_i \ketbra{\psi_i}{\psi_i}\, , \; \forall \; 1 \leq i \leq n.
\end{equation} Then $P_s^{max}$ is given by $P_s^{max} = \text{ Min } Tr(Z)$. Also the optimal $n$-POVM, $\{E_i\}_{i=1}^{n}$ will satisfy the complementary slackness condition: \begin{equation} \label{cslack} (Z- p_i \ketbra{\psi_i}{\psi_i}) E_i =0, \, \forall \, 1\leq i \leq n. \end{equation} Now summing over $i$ in equation \eqref{cslack} and using the fact that $ \sum_{i=1}^{n} E_i = \mathbb{1}_n$ we get the following. \begin{equation} \label{Z} Z = \sum_{i=1}^{n} p_i \ketbra{\psi_i}{\psi_i} E_i = \sum_{i=1}^{n} E_i p_i \ketbra{\psi_i}{\psi_i}. \end{equation} From equation \eqref{cslack} we get \begin{equation} \label{St} E_j \left( p_j \ketbra{\psi_j}{\psi_j} - p_i \ketbra{\psi_i}{\psi_i} \right) E_i =0, \; \forall \; 1 \leq i,j \leq n. \end{equation} Conditions \eqref{cslack} and \eqref{St} are equivalent to each other. $Z$, given by equation \eqref{Z}, has to satisfy another condition \begin{equation} \label{Glb} Z \geq p_i \ketbra{\psi_i}{\psi_i}, \; \forall \; 1 \leq i \leq n. \end{equation} Thus the \emph{necessary and sufficient} conditions for the $n$-element POVM $\{ E_i \}_{i=1}^{n}$ to maximize $P_s$ are given by conditions \eqref{St} (or \eqref{cslack}) and \eqref{Glb} together. \section{Rotationally Invariant Necessary and Sufficient Conditions for MED} \label{SSGMM} We wish to obtain the optimal POVM (which is a rank-one projective measurement) for MED of an ensemble $\widetilde{P} = \{p_i,\ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$, where $\{ \ket{\psi_i} \}_{i=1}^{n}$ is a LI set. Let $ \tket{\psi}{i} \equiv \sqrt{p_i} \ket{\psi_i} , \; \forall \; 1\leq i \leq n$. 
Since $\{ \tket{\psi}{i} \}_{i=1}^{n}$ is a LI set, corresponding to this set there exists a \emph{unique} set of vectors $\{ \tket{u}{i} \}_{i=1}^{n} \subset \mathcal{H}$ such that\footnote{Given a set of $n$ LI vectors $\left\{ \tket{\psi}{i} \right\}_{i=1}^{n}$ one can obtain the corresponding set of vectors $\{ \tket{u}{i} \}_{i=1}^{n}$ in the following way: fix a basis to work in, arrange $\brat{\psi}{i}$ as rows in a matrix which we call $V$. $V$ is invertible since its rows are LI. The columns of $V^{-1}$ correspond to the vectors $\tket{u}{i}$ in our chosen basis.}: \begin{equation} \label{orth} \tbraket{\psi}{i}{u}{j}= \delta_{ij}, \; \forall \; 1 \leq i,j \leq n. \end{equation} Let $G$ denote the gram matrix of $\left\{ \tket{\psi}{i} \right\}_{i=1}^{n}$. The matrix elements of $G$ are hence given by \begin{equation} \label{Gram} G_{ij} = \tbraket{\psi}{i}{\psi}{j}, \; \forall \; 1 \leq i,j \leq n. \end{equation} $Tr(G)=1$. Since $\left\{ \tket{\psi}{i} \right\}_{i=1}^{n}$ is a LI set, $G > 0$. The gram matrix corresponding to the set $\{ \tket{u}{i} \}_{i=1}^{n}$ is $G^{-1}$. \emph{Any} orthonormal basis $\{ \ket{v_i} \}_{i=1}^{n}$ of $\mathcal{H}$ can be represented as: \begin{equation} \label{v} \ket{v_i} = \sum_{\substack{j=1}}^{n} \left(G^{\frac{1}{2}} U\right)_{ji} \tket{u}{j}, \end{equation} where $G^{\frac{1}{2}}$ is the positive square root of $G$ and $U$ is an $n\times n$ unitary matrix. $U$ captures the unitary degree of freedom of the orthonormal basis $\{ \ket{v_i}\}_{i=1}^{n}$. Any such orthonormal basis corresponds to a rank-one projective measurement: \begin{equation} \label{btp} \{ \ket{v_i} \}_{i=1}^{n} \longrightarrow \{ \ketbra{v_{i}}{v_{i}} \}_{i=1}^{n}. 
\end{equation} Using this rank-one projective measurement for MED, the average probability of success is given by: \begin{equation} \label{PSU} P_s = \sum_{\substack{i=1}}^{n} | \braket{ \widetilde{ \psi_i}}{v_i}|^{2} = \sum_{\substack{i=1}}^{n} \left| \left(G^\frac{1}{2} U\right)_{ii}\right|^2. \end{equation} Let $\{\ketbra{w_i}{w_i} \}_{i=1}^{n}$ be a rank-one projective measurement, which is also a solution for the $n$-POVM $\{ E_i \}_{i=1}^{n}$ in condition \eqref{St}. Here $\braket{w_i}{w_j}=\delta_{ij}$ for $ i,j = 1, 2, \cdots, n$. Let an $n \times n$ unitary matrix $W$ be related to the rank-one projective measurement $\{\ketbra{w_i}{w_i} \}_{i=1}^{n}$ in the following way. \begin{equation} \label{w} \ket{w_i} = \sum_{\substack{j=1}}^{n} \left(G^{\frac{1}{2}} W\right)_{ji} \tket{u}{j}. \end{equation} The unitary matrix $W$ is fixed up to right-multiplication with a diagonal unitary matrix, which changes the phases of $\ket{w_i}$. We will soon fix the phases of $\ket{w_i}$, which will ensure that $W$ is \emph{unique}. Thus equation \eqref{St} can be rewritten as: \begin{equation} \label{St1} \bra{w_j} \left( \ketbrat{\psi}{j}{\psi}{j} - \ketbrat{\psi}{i}{\psi}{i} \right) \ket{w_i} = 0, \; \forall \; 1 \leq i,j \leq n. \end{equation} Using equation \eqref{w} in equation \eqref{St1}: \begin{equation} \label{StG} \left(G^{\frac{1}{2}} W\right)_{jj}^{*} \left(G^{\frac{1}{2}} W\right)_{ji} = \left(G^{\frac{1}{2}} W\right)_{ij}^{*} \left(G^{\frac{1}{2}} W\right)_{ii}, \; \forall \; 1 \leq i,j \leq n. \end{equation} The diagonal elements of the matrix $G^\frac{1}{2}W$ can be made non-negative by appropriately fixing the phases of the $\ket{w_i}$ vectors in the following way: right-multiply $W$ with a diagonal unitary $W'$, whose diagonal elements will be phases. From equation \eqref{w} it is seen that right-multiplying $W$ with $W'$ merely changes the phases of the ONB vectors $\ket{w_i}$, and that they will still satisfy equation \eqref{St1}.
We can vary the phases in $W'$ so that the diagonals of $G^\frac{1}{2} W W'$ are non-negative. We absorb $W'$ into $W$. After this absorption, the $n \times n$ unitary $W$ associated with the rank-one projective measurement $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ is unique. Continuing, we see that equations \eqref{StG} now take the following form. \begin{equation} \label{StG1} \left(G^{\frac{1}{2}} W\right)_{jj} \left(G^{\frac{1}{2}} W\right)_{ji} = \left(G^{\frac{1}{2}} W\right)_{ii} \left(G^{\frac{1}{2}} W\right)_{ij}^{*}, \; \forall \; 1 \leq i,j \leq n. \end{equation} Let $D= Diag(d_{11},d_{22}, \cdots, d_{nn})$ be the real diagonal matrix of $G^\frac{1}{2} W$, i.e., \begin{equation} \label{D} d_{ii} = \left(G^{\frac{1}{2}} W \right)_{ii}, \; \forall \; 1 \leq i \leq n. \end{equation} From equation \eqref{StG1} and the fact that the diagonals of $G^\frac{1}{2}W$ are all real, we infer that the matrix $DG^{\frac{1}{2}}W$ is Hermitian. \begin{equation} \label{DGWhermit} D G^{\frac{1}{2}} W - W^\dag G^{\frac{1}{2}} D = 0. \end{equation} Left-multiplying equation \eqref{DGWhermit} by $D G^{\frac{1}{2}} W$ gives \begin{align} \label{EOY} & \left(D G^{\frac{1}{2}} W \right)^2 - D G D = 0 \notag \\ \Longrightarrow \; \; & X^2 - DGD = 0, \end{align} where $X \equiv DG^\frac{1}{2}W$ satisfies $X^\dag = X$, and note that $D^2$ is the diagonal of $X$. In the MED problem, we are given the gram matrix $G$ of the ensemble $\widetilde{P}$. To solve condition \eqref{St} for MED of $\widetilde{P}$ we need to solve for $X$ in equation \eqref{EOY}. Knowing $X$ tells us what $G^\frac{1}{2}W$ is, which can be used in equation \eqref{w} to obtain $\{\ketbra{w_i}{w_i}\}_{i=1}^{n}$. Equation \eqref{EOY} came from assuming that $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ represented some $n$-POVM which satisfied condition \eqref{St}.
For $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ to be the optimal POVM it needs to satisfy condition \eqref{Glb} too; this will impose another condition on the solution for $X$ in equation \eqref{EOY}. \begin{theorem} \label{thmret} The optimal POVM for MED of $\widetilde{P}$ corresponds to a positive definite solution $\mathcal{X}$ for $X$ in equation \eqref{EOY}. Also, $\mathscr{R}_\mathcal{G}(G) = \dfrac{1}{Tr(D^2G)} DGD$, where $D$ is the square root of the diagonal of $\mathcal{X}$. \end{theorem} \begin{proof} We relate $d_{ii}$, defined in equation \eqref{D}, to the probability $q_i$ mentioned in equation \eqref{piqi}. In Section \ref{intro} it was mentioned that the optimal POVM for MED of $\widetilde{P}$ is the PGM of an ensemble $\mathscr{R}\left(\widetilde{P}\right)=\widetilde{Q} = \{q_i, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$ (see Definition \ref{mathRdef}). This means that \cite{Bela} \begin{equation} \label{pretty} \ket{w_i} = \left( \sum_{j=1}^{n} \ketbrat{\psi'}{j}{\psi'}{j} \right)^{-1/2} \tket{\psi'}{i}, \; \forall \; 1 \leq i \leq n, \end{equation} where $\tket{\psi'}{i}\equiv \sqrt{q_i}\ket{\psi_i}$ and $\left( \sum_{j=1}^{n} \ketbrat{\psi'}{j}{\psi'}{j} \right)^{-1/2} >0$. Define $\tket{u'}{i}$ to be such that $\tbraket{\psi'}{i}{u'}{j}= \delta_{ij}, \; \forall \; 1 \leq i,j \leq n$. $G_q$ is the gram matrix corresponding to the ensemble $\widetilde{Q}$. It can be verified that $G_q^{-1}$ is the gram matrix of the vectors $\{ \tket{u'}{i} \}_{i=1}^{n}$, i.e., $\left(G_q^{-1}\right)_{ij} = \tbraket{u'}{i}{u'}{j}, \; \forall \; 1 \leq i,j \leq n$. This implies that \begin{equation} \label{rho'} \left( \sum_{j=1}^{n} \ketbrat{\psi'}{j}{\psi'}{j} \right)^{-1/2} = \left( \sum_{j=1}^{n} \ketbrat{u'}{j}{u'}{j} \right)^{1/2} = \sum_{i,j=1}^{n} \left( G_q^\frac{1}{2} \right)_{ij} \ketbrat{u'}{i}{u'}{j}.
\end{equation} Note that since the LHS in equation \eqref{rho'} is positive definite, the RHS in equation \eqref{rho'} should also be positive definite and that can only be true if $G_q^\frac{1}{2} > 0$. One can verify the above equation by squaring on both sides \footnote{That $\tket{\psi'}{i}$ and $\tket{u'}{j}$ are related by $\tbraket{\psi'}{i}{u'}{j}= \delta_{ij}$ implies that $\sum_{j=1}^{n} \ketbrat{u'}{j}{\psi'}{j} = \mathbb{1}_n$. This can be seen from the fact that if $\ket{\eta} = \sum_{j=1}^{n} \alpha_j \tket{u'}{j}$ is any vector in $\mathcal{H}$, then $\left(\sum_{j=1}^{n} \ketbrat{u'}{j}{\psi'}{j}\right) \ket{\eta} = \ket{\eta}$. That $\sum_{j=1}^{n} \ketbrat{u'}{j}{\psi'}{j} = \mathbb{1}_n$ is true implies that $\left(\sum_{j=1}^{n} \ketbrat{u'}{j}{u'}{j}\right)\left(\sum_{k=1}^{n} \ketbrat{\psi'}{k}{\psi'}{k}\right) = \mathbb{1}_n$. Hence $\sum_{j=1}^{n} \ketbrat{u'}{j}{u'}{j}$ is the inverse of $\sum_{k=1}^{n} \ketbrat{\psi'}{k}{\psi'}{k}$. Also, since $G_q$ is the gram matrix of $\left\{ \tket{\psi'}{j} \right\}_{j=1}^n$ and $G_q^{-1}$ is the gram matrix of $\left\{ \tket{u'}{j} \right\}_{j=1}^{n}$ we get that $\left( \sum_{i,j=1}^{n} \left( G_q^\frac{1}{2} \right)_{ij} \ketbrat{u'}{i}{u'}{j} \right)^2 = \sum_{j=1}^{n} \ketbrat{u'}{j}{u'}{j}$.}. Substituting the above in equation \eqref{pretty} gives \begin{eqnarray} \label{wu'1} & \ket{w_i} & = \sum_{j=1}^{n} \left(G_q^\frac{1}{2} \right)_{ji} \tket{u'}{j} \notag \\ & ~ & = \sum_{j=1}^{n} \frac{\sqrt{p_j}}{\sqrt{q_j}} \left( G_q^\frac{1}{2} \right)_{ji} \tket{u}{j}, \; \forall \; 1 \leq i \leq n, \end{eqnarray} where $ \tket{u'}{i} = \frac{\sqrt{p_i}}{\sqrt{q_i}} \tket{u}{i}$, $\forall \; 1 \leq i \leq n$ (since $ \tket{\psi'}{i} = \frac{\sqrt{q_i}}{\sqrt{p_i}} \tket{\psi}{i} $). 
Since $\{ \tket{u}{i} \}_{i=1}^{n}$ is a basis for $\mathcal{H}$, on comparing equations \eqref{wu'1} and \eqref{w} we obtain \begin{subequations} \begin{equation} \label{atd} \left( G^{\frac{1}{2}} W \right)_{ji} = \dfrac{\sqrt{p_j}}{\sqrt{q_j}} \left( G_q^\frac{1}{2} \right)_{ji}, \; \forall \; 1 \leq i,j \leq n, \end{equation} \begin{equation} \label{aj} \Longrightarrow \; \; d_{jj} = \dfrac{\sqrt{p_j}}{\sqrt{q_j}} \left( G_q^\frac{1}{2} \right)_{jj}, \; \forall \; 1 \leq j \leq n. \end{equation} \end{subequations} Using equation \eqref{piqi} we get that \begin{equation} \label{aipiqi} d_{jj}\dfrac{\sqrt{p_j}}{\sqrt{q_j}} = \frac{p_j}{q_j} \left( G_q^\frac{1}{2} \right)_{jj} = C, \; \forall \; 1 \leq j \leq n, \end{equation} where $C$ is the positive constant that appears in equation \eqref{piqi}. Since $d_{jj} \frac{\sqrt{p_j}}{\sqrt{q_j}} = C, \; \forall \; 1 \leq j \leq n$, using equations \eqref{atd} and \eqref{aipiqi} we get that $$(\mathcal{X})_{ji} = \left(DG^\frac{1}{2}W\right)_{ji} = d_{jj}\left(G^\frac{1}{2}W \right)_{ji}= C \times \left(G_q^\frac{1}{2} \right)_{ji}, \; \forall \; 1 \leq i,j \leq n,$$ that is, $\mathcal{X}$ is equal to the product of a positive constant $C$ and $G_q^\frac{1}{2}$, which implies that $\mathcal{X} > 0$. Also from equation \eqref{EOY} it follows that $DGD =C^2 \times G_q$, i.e., the gram matrix of $\mathscr{R}\left(\widetilde{P}\right)=\widetilde{Q}$ is given by $\frac{DGD}{Tr(D^2G)}$, i.e., \begin{equation} \label{GXR} \mathscr{R}_\mathcal{G}(G) = \frac{\mathcal{X}^2}{Tr\left(\mathcal{X}^2\right)} = \frac{DGD}{Tr(D^2G)}. \end{equation} \end{proof} The converse of theorem \ref{thmret} is proved in the following. \begin{theorem} \label{thm2} If $\mathcal{X}$ is a solution for $X$ in equation \eqref{EOY} and $\mathcal{X}$ is positive definite, then $\mathcal{X}$ corresponds to the optimal POVM for MED of the ensemble $\widetilde{P}$.
Also, $\mathcal{X}$ is unique, i.e., there is no other positive definite solution $\mathcal{X}'$ for $X$ in equation \eqref{EOY}. \end{theorem} \begin{proof} Let $\mathcal{X}$ be a solution for $X$ in equation \eqref{EOY} and let $\mathcal{X}$ be positive definite. Equating $D^{-1} \mathcal{X}$ with $G^\frac{1}{2}W$ (see below equation \eqref{EOY}) and employing it in equation \eqref{w}, we obtain $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ to be the rank-one projective measurement corresponding to the solution $\mathcal{X}$. We want to prove that $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ is the optimal POVM. For this purpose we borrow a result from Mochon \cite{Carlos}: equation (33) of that paper tells us that the positive operator $Z$, defined in equation \eqref{Z}, is a scalar times the positive square root of the density matrix of the ensemble $\mathscr{R}\left(\widetilde{P}\right)$, i.e., \begin{equation} \label{ZQ} Z = C \left( \sum_{i=1}^{n} q_i \ketbra{\psi_i}{\psi_i} \right)^\frac{1}{2}. \end{equation} We will compute $\sum_{i=1}^{n} p_i |w_i\rangle\langle w_i | \psi_i \rangle\langle \psi_i | $ and show that it is equal to $C ( \sum_{i=1}^{n} q_i \ketbra{\psi_i}{\psi_i} )^\frac{1}{2}$, thereby proving that $\sum_{i=1}^{n} p_i |w_i\rangle\langle w_i | \psi_i \rangle\langle \psi_i | $ is equal to $Z$. This will then imply that $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ is the optimal POVM. Since $\ket{w_i} = \sum_{k=1}^{n} (D^{-1}\mathcal{X})_{ki} \tket{u}{k}$ and $\tket{u}{k} = \sum_{j=1}^{n} (G^{-1})_{jk} \tket{\psi}{j}$, using equation \eqref{EOY} it's easily verified that $\ket{w_i} = \sum_{j=1}^{n} \left( D\mathcal{X}^{-1} \right)_{ji} \tket{\psi}{j}$. Then \begin{equation} \label{Zw} \sum_{i=1}^{n} |w_i\rangle\langle w_i | \widetilde{\psi}_i \rangle\langle \widetilde{\psi}_i | = \sum_{j,i =1}^{n} \left( D \mathcal{X}^{-1} D \right)_{ji} \ketbrat{\psi}{j}{\psi}{i} > 0.
\end{equation} Squaring the RHS in equation \eqref{Zw} and employing equation \eqref{EOY}, we get that \begin{equation} \label{Z2} \left(\sum_{i,j =1}^{n} \left( D \mathcal{X}^{-1} D \right)_{ij} \ketbrat{\psi}{i}{\psi}{j}\right)^2 = \sum_{i =1}^{n} d_{ii}^2 \ketbrat{\psi}{i}{\psi}{i}. \end{equation} Consider the probabilities $ k_i \equiv \dfrac{d_{ii}^2p_i}{\sum_{j=1}^{n}d_{jj}^2p_j}, \; \forall \; 1 \leq i \leq n$. Thus $\sum_{i=1}^n k_i \ketbra{\psi_i}{\psi_i}$ is the average state of the ensemble $\widetilde{K} = \{ k_i, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$. The matrix elements of the gram matrix $G_k$ of $\widetilde{K}$ are then given by $$(G_k)_{ij} = \sqrt{k_i k_j} \braket{\psi_i}{\psi_j} = \dfrac{1}{\sum_{l=1}^{n}d_{ll}^2p_l}d_{ii} \tbraket{\psi}{i}{\psi}{j} d_{jj}.$$ This tells us that $G_k=\frac{1}{Tr(D^2G)}DGD$; using equation \eqref{EOY} we get that the positive square root of $G_k$ is $G_k^\frac{1}{2}=\frac{1}{\sqrt{Tr(D^2G)}}\mathcal{X}$ and, hence, $d_{ii}^2 =\mathcal{X}_{ii}= \sqrt{Tr(D^2G)}\left(G_k^\frac{1}{2}\right)_{ii}$ (see equation \eqref{D}). Thus $k_i$ and $p_i$ are related by the equation \begin{equation} \label{aiki} p_i = C' \dfrac{k_i}{ \left( G_k^\frac{1}{2} \right)_{ii}}, \; \forall \; 1 \leq i \leq n, \end{equation} where $C'$ is the normalization constant given by $C'= \sqrt{Tr(D^2G)}$. We see that the $p_i$ are related to the $k_i$ in the exact same way that the $p_i$ are related to the $q_i$ in equation \eqref{piqi}. Below equation \eqref{piqi}, it was mentioned that if $\widetilde{P}$ and $\widetilde{K}$ are two ensembles with the same states and with a priori probabilities $p_i$ and $k_i$ related by equation \eqref{aiki}, then $\mathscr{R}^{-1}\left(\widetilde{K}\right) = \widetilde{P}$.
Since $\mathscr{R}^{-1}$ is a bijection, this implies that $\widetilde{K} = \mathscr{R}\left(\widetilde{P}\right)= \widetilde{Q}$ and $k_i = q_i, \; \forall \; 1 \leq i \leq n$, where $q_i$ is the a priori probability of states in $\widetilde{Q}$ as given in equation \eqref{piqi}. This also implies that $C' = C$. Combining equations \eqref{Z2} and \eqref{ZQ} we get $$ \left( \sum_{i=1}^{n} |w_i\rangle\langle w_i | \widetilde{\psi}_i \rangle\langle \widetilde{\psi}_i |\right)^2 = \sum_{i=1}^n d_{ii}^2 \ketbrat{\psi}{i}{\psi}{i} = C^2\left( \sum_{i=1}^{n} q_i\ketbra{\psi_i}{\psi_i} \right) = Z^2.$$ Then the fact that $\sum_{i=1}^{n} |w_i\rangle\langle w_i | \widetilde{\psi}_i \rangle\langle \widetilde{\psi}_i | $ is positive definite tells us that \begin{equation} \label{Zw1} \sum_{i=1}^{n} |w_i\rangle\langle w_i | \widetilde{\psi}_i \rangle\langle \widetilde{\psi}_i | = C \left( \sum_{i=1}^{n} q_i \ketbra{\psi_i}{\psi_i} \right)^\frac{1}{2} = Z . \end{equation} Note that the ONB $\{ \ket{w_i}\}_{i=1}^{n}$ was constructed from $\mathcal{X}$, which solves for $X$ in equation \eqref{EOY} and which is positive definite. That $\sum_{i=1}^{n} |w_i\rangle\langle w_i | \widetilde{\psi}_i \rangle\langle \widetilde{\psi}_i |$ is equal to $C \left( \sum_{i=1}^{n} q_i \ketbra{\psi_i}{\psi_i} \right)^\frac{1}{2} $, which \emph{we already know} is equal to $Z$ \cite{Carlos}, implies that $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ is the optimal POVM. Since $\{ \ketbra{w_i}{w_i} \}_{i=1}^{n}$ is the \emph{unique} optimal POVM for MED of $\widetilde{P}$, the $n$-tuple $(d_{11},d_{22},\cdots,d_{nn})$ is unique to the MED of $\widetilde{P}$ as well\footnote{Note that $d_{ii} = \braket{w_i}{\widetilde{\psi}_i}$; thus if $\{\ketbra{w_i}{w_i}\}_{i=1}^{n}$ is unique, so must be the $n$-tuple $(d_{11},d_{22},\cdots,d_{nn})$.}.
This implies that $D=Diag(d_{11},d_{22},\cdots,d_{nn})$ is unique, which implies that $DGD$ is unique and, since the positive square root of $DGD$ is also unique, that $\mathcal{X}$ is unique too. \end{proof} Theorem \ref{thm2} tells us that for the MED of any $\widetilde{P} \in \mathcal{E}$, $\mathcal{X}$ is unique. But note that if the $\ket{\psi_i}$ undergo a rotation by a unitary $U$, then it can be inferred from equation \eqref{EOY} that the solution for $\mathcal{X}$ won't change, since $G$ doesn't change. This implies that $\mathcal{X}$ is a function of $G$ in $\mathcal{G}$. Let the matrix elements of $\mathcal{X}$ be given by the following equation \begin{equation} \label{mathcalW} \mathcal{X} = \begin{pmatrix} {d_{11}}^2 & d_{12} + i d_{21} & d_{13} + i d_{31} & \cdots & d_{1n} + i d_{n1} \\ d_{12} - i d_{21} & {d_{22}}^{2} & d_{23} + i d_{32} & \cdots & d_{2n} + i d_{n2}\\ d_{13} - i d_{31} & d_{23} - i d_{32} & {d_{33}}^{2} & \cdots & d_{3n} + i d_{n3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ d_{1n} - i d_{n1} & d_{2n} - id_{n2} & d_{3n} - i d_{n3} & \cdots & {d_{nn}}^2 \end{pmatrix}, \end{equation} where $d_{kl}$ are the real and imaginary parts of the matrix elements of $\mathcal{X}$. Since $\mathcal{X}$ is a function on $\mathcal{G}$, the $d_{kl}$ are also functions on $\mathcal{G}$. \begin{mydef} \label{defQ} Let $\mathcal{Q}$ denote the set of all positive definite $n \times n$ matrices. \end{mydef} Thus $\mathcal{G} \subset \mathcal{Q}$. Using $\mathcal{G}$ and $\mathcal{Q}$ we formalize $\mathcal{X}$ as a function on $\mathcal{G}$. \begin{mydef} \label{funmathD} \label{alphai} \label{funW} \label{defrhokl} \begin{subequations} $\mathcal{X}: \mathcal{G} \longrightarrow \mathcal{Q}$ is such that $\mathcal{X}(G)$ solves equation \eqref{EOY} \begin{equation} \label{calEOY} \left(\mathcal{X}(G)\right)^2 - D(G) \, G \, D(G) = 0.
\end{equation} Let $d_{kl}: \mathcal{G} \longrightarrow \mathbb{R}$ denote the real and imaginary parts of the matrix elements of $\mathcal{X}(G)$, i.e., \begin{align} \label{rhodef} & ~~~~~~~~~~~~~~~~~~~~~~~~ d_{kl}\left(G\right) \equiv Re \left( \left( \mathcal{X}\left(G\right) \right)_{kl} \right), \; \forall \; 1 \leq k < l \leq n, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~ d_{ii}\left(G\right) \equiv \sqrt{\left( \mathcal{X}\left(G\right) \right)_{ii}}, \; \forall \; 1 \leq i \leq n, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~ d_{kl}\left(G\right) \equiv Im \left( \left( \mathcal{X}\left(G\right) \right)_{lk} \right), \; \forall \; 1 \leq l < k \leq n, \end{align} and $D(G) \equiv Diag(d_{11}(G),d_{22}(G), \cdots, d_{nn}(G))$. \end{subequations} \end{mydef} Note that if one knows the real $n$-tuple $(d_{11}(G),d_{22}(G), \cdots, d_{nn}(G))$, then using equation \eqref{EOY} one can compute $\mathcal{X}(G)$. Thus we have reformulated the MED problem for linearly independent pure states in a rotationally invariant way: \bigskip \textbf{Rotationally Invariant Necessary and Sufficient Conditions:} \emph{ Let $G$ be the gram matrix corresponding to an $n$-LIP: $\{ p_i, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$. To solve the MED for this $n$-LIP, one needs to find a real and positive $n$-tuple }$(d_{11}(G),d_{22}(G), \cdots, d_{nn}(G))$\emph{ such that, when arranged in the diagonal matrix }$ D(G) = Diag(d_{11}(G),d_{22}(G), \cdots, d_{nn}(G))$, \emph{the diagonal of the positive square root of }$ D(G) G D(G)$\emph{ is }$\left( D(G)\right)^2$. \section{\label{empIFT} Solution for the MED of LI Pure State Ensembles} $\mathcal{X}$ is a function on $\mathcal{G}$ such that $\mathcal{X}(G)$ is a solution for $X$ in equation \eqref{EOY}, and is positive definite. We need to compute $\mathcal{X}(G)$ for a given $G \in \mathcal{G}$. We employ the Implicit Function Theorem (IFT) for this.
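As a quick numerical sanity check of this condition, the following sketch (our own code; the function names and the trivial example are not part of the derivation) verifies that for the gram matrix $G = \mathbb{1}_n/n$ of $n$ equiprobable orthonormal states the tuple $d_{ii} = 1/\sqrt{n}$ satisfies the condition, while a generic positive tuple does not.

```python
import numpy as np

def sqrtm_psd(M):
    """Positive square root of a positive definite hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.conj().T

def condition_residual(G, d):
    """|| diag( (D G D)^{1/2} ) - D^2 ||  for D = Diag(d_11, ..., d_nn)."""
    D = np.diag(d)
    root = sqrtm_psd(D @ G @ D)
    return np.linalg.norm(np.diag(root) - d ** 2)

n = 4
G = np.eye(n) / n                 # gram matrix of n equiprobable orthonormal states
d = np.full(n, 1 / np.sqrt(n))    # candidate n-tuple (d_11, ..., d_nn)
res = condition_residual(G, d)    # vanishes iff the condition holds
```

The same `condition_residual` can be used to test any candidate $n$-tuple against any gram matrix in $\mathcal{G}$.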
\subsection{Functions and Variables for IFT} \label{defFV} In this subsection, we will introduce the functions and variables which are part of the IFT technique. Let the unknown hermitian matrix $X$ in equation \eqref{EOY} be represented by \begin{equation} \label{DGWlooks1} X = \begin{pmatrix} x_{11}^2 & x_{12} + i x_{21} & x_{13} + i x_{31} & \cdots & x_{1n} + i x_{n1} \\ x_{12} - i x_{21} & x_{22}^2 & x_{23} + i x_{32} & \cdots & x_{2n} + i x_{n2} \\ x_{13} - i x_{31} & x_{23} - i x_{32} & x_{33}^2 & \cdots & x_{3n} + i x_{n3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{1n} - i x_{n1} & x_{2n} - ix_{n2} & x_{3n} - i x_{n3} & \cdots & x_{nn}^2 \end{pmatrix}, \end{equation} where $x_{kl} \in \mathbb{R}, \; \forall \; 1 \leq k, l \leq n $. Define $F$ on $\mathcal{G} \times \mathcal{H}_n$, where $\mathcal{H}_n$ is the real vector space of all $n \times n$ hermitian matrices. \begin{mydef} \begin{equation} \label{defF} F(G,X) \equiv X^2 - D(X) G D(X), \end{equation} where $X$ is of the form given in equation \eqref{DGWlooks1} and $D(X) \equiv Diag \left(x_{11},x_{22},\cdots,x_{nn}\right)$. \end{mydef} We define the matrix elements of $F$ as functions of $G$ and $x_{ij},$ $\forall\; 1 \le i,j \le n$. \begin{equation} \label{f} F = \begin{pmatrix} f_{11} & f_{12} + i f_{21} & f_{13} + i f_{31} & \cdots & f_{1n} + i f_{n1} \\ f_{12} - i f_{21} & f_{22} & f_{23} + i f_{32} & \cdots & f_{2n} + i f_{n2} \\ f_{13} - i f_{31} & f_{23} - i f_{32} & f_{33} & \cdots & f_{3n} + i f_{n3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ f_{1n} - i f_{n1} & f_{2n} - if_{n2} & f_{3n} - i f_{n3} & \cdots & f_{nn} \end{pmatrix}, \end{equation} where, for $i<j$, $f_{ij}$ and $f_{ji}$ represent the real and imaginary parts of $F_{ij}$ respectively, and $f_{ii}$ represents the diagonal element $F_{ii}$, and for $j < i$, $f_{ji}$ and $-f_{ij}$ represent the real and imaginary parts of $F_{ij}$ respectively. 
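These definitions are straightforward to realize numerically. The sketch below (our own helper names; a toy check, not taken from the text) builds $X$ from the real $n^2$-tuple, evaluates $F(G,X) = X^2 - D(X)\,G\,D(X)$, and approximates the $n^2 \times n^2$ Jacobian of the $f_{ij}$ with respect to the $x_{kl}$ by central differences; at the gram matrix $G=\mathbb{1}_3/3$ of three equiprobable orthonormal states, the tuple $x_{ij} = \delta_{ij}/\sqrt{3}$ makes $F$ vanish and the Jacobian is nonsingular.

```python
import numpy as np

def X_from_x(x):
    """Hermitian X built from the real n^2-tuple: X_ii = x_ii^2 and
    X_ij = x_ij + i x_ji for i < j."""
    n = x.shape[0]
    X = np.zeros((n, n), dtype=complex)
    for i in range(n):
        X[i, i] = x[i, i] ** 2
        for j in range(i + 1, n):
            X[i, j] = x[i, j] + 1j * x[j, i]
            X[j, i] = x[i, j] - 1j * x[j, i]
    return X

def f_entries(G, x):
    """The n^2 real functions f_ij packed into an n x n real array:
    f_ij = Re F_ij and f_ji = Im F_ij for i < j, and f_ii = F_ii."""
    X = X_from_x(x)
    D = np.diag(np.diag(x))            # D(X) = Diag(x_11, ..., x_nn)
    F = X @ X - D @ G @ D
    f = np.zeros_like(x)
    n = x.shape[0]
    for i in range(n):
        f[i, i] = F[i, i].real
        for j in range(i + 1, n):
            f[i, j] = F[i, j].real
            f[j, i] = F[i, j].imag
    return f

def jacobian_fd(G, x, eps=1e-7):
    """Central-difference estimate of the n^2 x n^2 Jacobian d f_ij / d x_kl."""
    n = x.shape[0]
    J = np.zeros((n * n, n * n))
    for k in range(n):
        for l in range(n):
            xp, xm = x.copy(), x.copy()
            xp[k, l] += eps
            xm[k, l] -= eps
            J[:, k * n + l] = (f_entries(G, xp) - f_entries(G, xm)).ravel() / (2 * eps)
    return J

n = 3
G = np.eye(n) / n                        # three equiprobable orthonormal states
x0 = np.diag(np.full(n, n ** -0.5))      # candidate solution x_ij = delta_ij / sqrt(n)
res_F = np.linalg.norm(f_entries(G, x0))
det_J = np.linalg.det(jacobian_fd(G, x0))
```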
Then using definition \eqref{defF} and equation \eqref{DGWlooks1} we get for $ i< j$ \begin{subequations} \label{feq} \begin{align} \label{fij} f_{ij}\left(G, \vv{x}\right) = \; & \sum_{k=1}^{i-1} \left(x_{ki}x_{kj} + x_{ik}x_{jk} \right) + \sum_{k=i+1}^{j-1} \left(x_{ik}x_{kj} - x_{ki}x_{jk} \right) \notag + \sum_{k=j+1}^{n} \left(x_{ik}x_{jk} + x_{ki}x_{kj} \right) \\ + & x_{ij}\left(x_{ii}^2 + x_{jj}^2\right) - x_{ii}x_{jj} Re\left(G_{ij}\right), \end{align} \begin{align} \label{fji} f_{ji}\left(G, \vv{x}\right) = \; & \sum_{k=1}^{i-1} \left(x_{ki}x_{jk} - x_{ik}x_{kj} \right) + \sum_{k=i+1}^{j-1} \left(x_{ik}x_{jk} + x_{ki}x_{kj} \right) \notag + \sum_{k=j+1}^{n} \left(-x_{ik}x_{kj} + x_{ki}x_{jk} \right) \\ + & x_{ji}\left(x_{ii}^2 + x_{jj}^2\right) - x_{ii}x_{jj} Im\left(G_{ij}\right), \end{align} and for the diagonal elements we get \begin{align} \label{fii} f_{ii}\left(G, \vv{x}\right) = \; & \sum_{k=1}^{i-1} \left(x_{ki}^2 + x_{ik}^2 \right) + \sum_{k=i+1}^{n} \left(x_{ik}^2 + x_{ki}^2 \right) + x_{ii}^4 - x_{ii}^2G_{ii}, \end{align} where $\vv{x} \equiv (x_{11},x_{12},\cdots, x_{nn})$ (i.e., $\vv{x}$ is the real $n^2$-tuple of the $x_{ij}$-variables). \end{subequations} Finally, we define the Jacobian of the functions $f_{ij}$ with respect to the variables $x_{ij}$; this Jacobian matrix has the matrix elements \begin{equation} \label{Jacobian} \left(J \left( G, \vv{x} \right)\right)_{ij,kl} \equiv \dfrac{\partial{f_{ij} \left(G,\vv{x} \right) }}{\partial{x_{kl}}}, \; \forall \; 1 \le i,j,k,l \le n. \end{equation} Note that since the $f_{ij}$ functions and the $x_{ij}$ variables are both $n^2$ in number, this Jacobian matrix is an $n^2 \times n^2$ square matrix. \subsection{Implementing IFT} Let $G_0 \in \mathcal{G}$ be a gram matrix for whose MED we know the solution, that is, we know the values of $x_{ij} = d_{ij}(G_0)$, $\forall$ $1 \le i,j \le n$ (see definition \eqref{defrhokl}).
Substituting $x_{ij} = d_{ij}(G_0)$, $\forall$ $1 \le i,j \le n$ in equation \eqref{DGWlooks1} gives us that $X = \mathcal{X}(G_0)$ (see equation \eqref{mathcalW} and definition \eqref{funmathD}), and substituting $X = \mathcal{X}(G_0)$ into equation \eqref{defF} gives (see theorem \ref{thm2}), \begin{enumerate} \item[(i.)] $ F \Big(G_0, \; X = \mathcal{X}(G_0) \; \Big) = 0$. This equation tells us that $X = \mathcal{X}(G_0)$ is a solution for $X$ in equation \eqref{EOY} when $G = G_0$. \item[(ii.)] $X = \mathcal{X}(G_0) > 0$. \end{enumerate} The IFT, which is a well-known result in functional analysis \cite{imp}, then tells us the following. \paragraph{Implicit Function Theorem:} Consider the following inequality: \begin{equation}\label{Detjac} Det \left( J(G_0,\vv{d}(G_0)) \right) \neq 0, \; \text{where} \; \vv{d}(G_0) = \left(d_{11}(G_0),d_{12}(G_0),\cdots,d_{1n}(G_0),d_{21}(G_0),\cdots,d_{nn}(G_0)\right).\end{equation} If the inequality \eqref{Detjac} is true, then the IFT tells us that there exists an open neighbourhood $I_{G_0}$ in $\mathcal{G}$ containing $G_0$, such that for each $i,j$, where $1 \le i,j \le n$, there exists an open interval $I_{ij}$ in $\mathbb{R}$ containing the real number $d_{ij}(G_0)$, and a function $\phi_{ij}: I_{G_0} \longrightarrow I_{ij}$, such that \begin{enumerate} \item the $\phi_{ij}$'s are continuously differentiable in $I_{G_0}$, \item $\phi_{ij}(G_0) = d_{ij}(G_0)$, $\forall \; 1 \le i,j \le n$, and \item the following equation holds true $\forall \; 1 \le i,j \le n$ and $\forall \; G \in I_{G_0}$: \begin{equation} \label{impff1} f_{ij} \left( G, \vv{\phi}(G) \right) = 0, \text{ where } \vv{\phi}(G) = ( \phi_{11}(G),\phi_{12}(G),\cdots, \phi_{nn}(G)). \end{equation} \end{enumerate} Thus, to use the IFT for our purpose, we need to prove the following. \begin{theorem} \label{Jacthm} $Det \left( J\left( G,\vv{d}(G)\right)\right) \neq 0, \; \forall \; G \in \mathcal{G}$.
\end{theorem} \begin{proof} This proof is divided into two parts: \begin{enumerate} \item[(a.)] To show that $J(G,\vv{d}(G))$ is a linear transformation on the real space $\mathcal{H}_n$ of $n \times n$ complex hermitian matrices: \medbreak Proof of (a.): \begin{subequations}First note that $J(G,\vv{d}(G))$ is the Jacobian of the function $F$ with respect to the variable $X$ (equation \eqref{defF}). Let $x_{ij}$ be assigned the value $d_{ij}(G)$ for all $1 \le i, j \le n$. Now let $x_{ij}=d_{ij}(G) \longrightarrow x_{ij}=d_{ij}(G) + \epsilon \delta x_{ij}$ be an arbitrary perturbation, where $\epsilon$ is an infinitesimal positive real number and $\delta x_{ij}$ are real, $\forall$ $1 \le i,j \le n$. As a result of this perturbation we have the following \begin{enumerate} \item[(i)] $\left(x_{ii}(G)\right)^2 = $ $\left(d_{ii}(G)\right)^2 \longrightarrow \left(d_{ii}(G)\right)^2 + 2 \epsilon d_{ii}(G) \delta x_{ii} + \mathcal{O}(\epsilon^2)$, and \item[(ii)] $X = \mathcal{X}(G) \longrightarrow X = \mathcal{X}(G) + \epsilon \delta X + \mathcal{O}(\epsilon^2)$ where \begin{equation} \label{DeltaX} \delta X = \begin{pmatrix} 2 d_{11}(G)\delta x_{11} & \delta x_{12} + i \delta x_{21} & \delta x_{13} + i \delta x_{31} & \cdots & \delta x_{1n} + i \delta x_{n1} \\ \delta x_{12} - i \delta x_{21} & 2 d_{22}(G) \delta x_{22} & \delta x_{23} + i \delta x_{32} & \cdots & \delta x_{2n} + i \delta x_{n2} \\ \delta x_{13} - i \delta x_{31} & \delta x_{23} - i \delta x_{32} & 2 d_{33}(G) \delta x_{33} & \cdots & \delta x_{3n} + i \delta x_{n3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \delta x_{1n} - i \delta x_{n1} & \delta x_{2n} - i\delta x_{n2} & \delta x_{3n} - i \delta x_{n3} & \cdots & 2 d_{nn}(G) \delta x_{nn} \end{pmatrix}. \end{equation} For the sake of brevity, for the rest of this proof, we will denote $D(G)$ by $D$, $\mathcal{X}(G)$ by $\mathcal{X}$, and $J(G,\vv{d}(G))$ by $J_G$. 
Define: \begin{equation} \label{Ddelta} D_\delta \equiv Diag(\delta x_{11},\delta x_{22},\cdots,\delta x_{nn}). \end{equation} \end{enumerate} Thus we get the following. \begin{align} \label{Fpert} & F\left(G, \mathcal{X} + \epsilon \delta X\right) \notag \\ = \; & F\left(G,\mathcal{X}\right) + \epsilon \left( \delta X \mathcal{X} + \mathcal{X} \delta X - D_{\delta } G D - D G D_{\delta } \right) + \mathcal{O}(\epsilon^2) \notag \\ = \; & \epsilon \left( \delta X \mathcal{X} - D_{\delta }D^{-1}\mathcal{X}^2 + \mathcal{X} \delta X - \mathcal{X}^2 D^{-1} D_{\delta } \right) + \mathcal{O}(\epsilon^2) \notag \\ = \; & \epsilon J_G \left(\delta X \right)+ \mathcal{O}(\epsilon^2), \end{align} where equation \eqref{EOY} was employed in the second step above, and \begin{equation} \begin{split} J_G\left( \delta X \right) & = \; \delta X \mathcal{X} - D_{\delta }D^{-1}\mathcal{X}^2 + \mathcal{X} \delta X - \mathcal{X}^2 D^{-1} D_{\delta }\\ \label{J} & = \; \left(\delta X \mathcal{X} - D_{\delta }D^{-1}\mathcal{X}^2 \right) + \left(\delta X \mathcal{X} - D_{\delta }D^{-1}\mathcal{X}^2 \right)^\dag. \end{split} \end{equation} Thus it is seen that $J_G$ is a linear transformation on the space of $n \times n$ complex hermitian matrices $\mathcal{H}_n$. \end{subequations} \item[(b.)] If the action of $J(G,\vv{d}(G))$ on some $n \times n$ complex hermitian matrix $\delta X$ is $0$ then $\delta X$ itself must be $0$.\medbreak Proof of (b.): From equation \eqref{J} it is clear that $J_G\left( \delta X \right) =0$ if and only if $\delta X \mathcal{X} - D_{\delta }D^{-1}\mathcal{X}^2$ is anti-hermitian. \begin{subequations} Let's assume that $\delta X \mathcal{X} - D_{\delta }D^{-1}\mathcal{X}^2$ is anti-hermitian. 
That is, \begin{align} \label{ij1} & \delta X \mathcal{X} - D_{\delta }D^{-1} \mathcal{X}^2 = - \mathcal{X} \delta X + \mathcal{X}^2 D^{-1} D_{\delta } \notag \\ \Longrightarrow \; & \mathcal{X}^{-1} \delta X - \mathcal{X}^{-1} D_{\delta }D^{-1} \mathcal{X} = - \delta X \mathcal{X}^{-1} + \mathcal{X} D^{-1} D_{\delta } \mathcal{X}^{-1}. \end{align} Let $\mathcal{X} = \sum_{i=1}^{n} g_i \ketbra{g_i}{g_i}$ be the spectral decomposition of $\mathcal{X}$. Then the $ij$-th matrix element of equation \eqref{ij1}, in the $\{ \ket{g_i}\}_{i=1}^{n}$ basis, is given by \begin{align*} & \dfrac{1}{g_i} \pk{g_i}{\delta X}{g_j} - \dfrac{g_j}{g_i} \pk{g_i}{D_{\delta } D^{-1}}{g_j} \notag \\ = \; - & \dfrac{1}{g_j} \pk{g_i}{\delta X}{g_j} + \dfrac{g_i}{g_j} \pk{g_i}{D_{\delta } D^{-1}}{g_j} \end{align*} \begin{equation} \label{ij2} \Longrightarrow \pk{g_i}{\delta X}{g_j} = \left( \dfrac{g_i^2 + g_j^2}{g_i + g_j} \right)\pk{g_i}{D_{\delta } D^{-1}}{g_j}. \end{equation} Multiplying the above number by $\ketbra{g_i}{g_j}$ and summing over $i,j$ from $1$ to $n$ gives \begin{equation} \label{ij3} \delta X = \sum_{i,j=1}^{n} \pk{g_i}{D_{\delta } D^{-1}}{g_j} \dfrac{g_i^2 + g_j^2}{g_i + g_j} \ketbra{g_i}{g_j}. \end{equation} Let $\{\ket{k}\}_{k=1}^{n}$ represent the standard basis; then $\braket{k}{g_j}$ is the complex number occurring in the $k$-th entry of $\ket{g_j}$. Using equations \eqref{Ddelta} and \eqref{DeltaX} we get $\pk{g_i}{D_{\delta } D^{-1}}{g_j} =\frac{1}{2} \sum_{l=1}^{n} \ir{g_i}{l}{g_j} \dfrac{(\delta X)_{ll}}{ d_{ll}(G)^2} $.
The diagonal elements of $\delta X$ are then given by \begin{equation} \begin{split} \label{ij4} (\delta X)_{kk} = \; & \sum_{l=1}^{n} \left( \dfrac{1}{2} \sum_{i,j=1}^{n} \is{k}{g_i}{g_j}{k} \dfrac{g_i^2 + g_j^2}{g_i + g_j} \ir{g_i}{l}{g_j} \right) \dfrac{(\delta X)_{ll}}{ \left(d_{ll}(G)\right)^2} \\ = \; & \sum_{l=1}^{n} \dfrac{1}{2} \left( O \Lambda O^\dag \right)_{kl}\dfrac{(\delta X)_{ll}}{ \left(d_{ll}(G)\right)^2}, \end{split} \end{equation} where $O$ is an $n \times n^2$ matrix with matrix elements given by $O_{k,ij} = \is{k}{g_i}{g_j}{k}$, and $\Lambda$ is an $n^2 \times n^2$ diagonal matrix with matrix elements $\Lambda_{ij,kl} = \delta_{ik}\delta_{jl} \dfrac{g_i^2 + g_j^2}{g_i + g_j}$. It is easy to check that the rows of $O$ are orthogonal. Since $\Lambda > 0$ and $O$ is of rank $n$, the matrix $\dfrac{1}{2} O \Lambda O^\dag$ is positive definite. \medbreak Consider \begin{equation} \label{Ddelta1} \ket{D_{\delta X }} \equiv \begin{pmatrix} (\delta X)_{11}\\ (\delta X)_{22}\\ \vdots \\ (\delta X)_{nn}\\ \end{pmatrix}. \end{equation} Then equation \eqref{ij4} can be rewritten as \begin{align} \label{ij5} & \left(\mathbb{1} - \dfrac{1}{2} O \Lambda O^\dag D^{-2} \right) \ket{D_{\delta X }} = 0 \notag \\ \Longrightarrow \; & \left(D^2 - \dfrac{1}{2} O \Lambda O^\dag \right) D^{-2} \ket{D_{\delta X }} = 0. \end{align} Let $\Lambda'$ be an $n^2 \times n^2$ diagonal matrix whose matrix elements are given by $\Lambda'_{ij,kl} = \delta_{ik}\delta_{jl} \dfrac{2 g_i g_j}{g_i + g_j}$. Since $\Lambda' > 0$, $\dfrac{1}{2} O \Lambda' O^\dag$ is positive definite. After some amount of tedious algebra we find that the following equation holds true. \begin{align} \label{ij6} D^2 = \dfrac{1}{2} O \left( \Lambda + \Lambda' \right) O^\dag. \end{align} Hence $D^2 - \dfrac{1}{2}O\Lambda O^\dag$ = $\dfrac{1}{2} O \Lambda' O^\dag >0$. This implies that for equation \eqref{ij5} to be true we must have $\ket{D_{\delta X }} = 0$.
This implies (see equation \eqref{Ddelta1}) $(\delta X)_{ii}= 0$, which implies that $ 2 d_{ii}(G)\delta x_{ii} = 0$, which implies that $\delta x_{ii}=0$, i.e., $D_\delta = 0$. Substituting $D_{\delta } = 0$ in equation \eqref{ij3} gives $\delta X = 0$. Hence, demanding $J_G\left( \delta X \right)=0 $ leads to the conclusion that $\delta X =0$. \end{subequations} \end{enumerate} This means that $J_G$ is non-singular, which then implies that $ Det\left(J_{G}\right) \neq 0 $. This proves the theorem. \end{proof} Theorem \ref{Jacthm} implies that the IFT holds true for all $G_0 \in \mathcal{G}$, i.e., for all $G_0 \in \mathcal{G}$ one can define these $\phi_{ij}$ functions so that the points 1., 2. and 3. mentioned in the IFT are satisfied. The third point in the IFT, i.e., equation \eqref{impff1}, tells us that for any $G \in I_{G_0}$, $F(G, X) = 0$, when $x_{ij} = \phi_{ij}(G)$, $\forall \; 1 \leq i ,j \le n$. This is equivalent to stating that if one obtains the $\phi_{ij}$ functions, defined in some open neighbourhood $I_{G_0}$ of $G_0$, then $x_{ij}=\phi_{ij}(G), \; \forall \; 1 \le i,j \le n$, gives us \emph{some} solution for $X$ in equation \eqref{EOY} for any $G \in I_{G_0}$. If it is true that assigning $x_{ij} = \phi_{ij}(G), \; \forall \; 1 \le i,j \le n$, implies that $X > 0$, then obtaining the $\phi_{ij}$ functions in some open neighbourhood $I_{G_0}$ of $G_0$ gives us \emph{the} solution for the MED of all $G \in I_{G_0}$. \begin{theorem} \label{Xge0} When $G \in I_{G_0}$ and $x_{ij} = \phi_{ij}(G)$, $\forall \; 1 \leq i ,j \le n$, then $X>0$. \end{theorem} \begin{proof} Suppose not. Let there be some $G_1 \in I_{G_0}$ such that when $x_{ij}= \phi_{ij}(G_1)$, $\forall$ $1 \le i,j \le n$, then $X$ has some non-positive eigenvalues. Let $G(t) \equiv (1-t)G_0 + tG_1$ be a linear trajectory in $\mathcal{G}$. $G(t)$ starts from $G_{0}$ when $t=0$ and ends at $G_1$ when $t=1$.
Note that eigenvalues of $X$ are continuous functions of $x_{ij}$, and when restricting $x_{ij}$ to be such that $x_{ij} = \phi_{ij}(G)$, $\forall \; 1 \le i,j \le n$, then $x_{ij}$ are continuous functions over $I_{G_0}$. Thus the eigenvalues of $X$ are continuous over $I_{G_0}$, whenever $x_{ij} = \phi_{ij}(G)$. This implies the following. \begin{enumerate} \item[(i.)] When $x_{ij} = \phi_{ij}(G(0))$ $\forall \; 1 \leq i, j \le n$, all eigenvalues of $X$ are positive. \item[(ii.)] When $x_{ij} = \phi_{ij}(G(1))$ $\forall \; 1 \leq i, j \le n$, some eigenvalues of $X$ are non-positive. \end{enumerate} The intermediate value theorem tells us that since $\phi_{ij}$'s are continuous over $I_{G_0}$, (i.) and (ii.) imply that there must be some $t' \in (0,1]$, such that \begin{enumerate} \item[(i.)] $X>0$, when $x_{ij}=\phi_{ij}(G(t))$, for all $t \in [0, t')$, \item[(ii.)] for all $t \in ( t',1]$, $X$ is not necessarily positive definite, when $x_{ij}=\phi_{ij}(G(t))$, and finally \item[(iii.)] $X$ has some $0$ eigenvalue(s) when $x_{ij}=\phi_{ij}(G(t'))$, i.e., when $t=t'$. \end{enumerate} When $t<t'$ then $X>0$, which also implies that $X = \mathcal{X}(G(t))$ holds true for the interval $t \in [0,t')$. Since $ \frac{\left(\mathcal{X}(G)\right)^2}{Tr\left( \left(\mathcal{X}(G)\right)^2\right)} = \mathscr{R}_\mathcal{G}(G)$ \footnote{See equation \eqref{GXR} in the proof of theorem \ref{thmret} in section \ref{SSGMM}.} we get that $\frac{X^2}{Tr\left(X^2\right)} = \mathscr{R}_\mathcal{G}\left(G(t)\right)$, when $t<t'$. Since $\mathscr{R}_\mathcal{G}$ is continuous on $\mathcal{G}$ \footnote{See description below definition \ref{defG} in section \ref{intro}.} and eigenvalues of $X$ are continuous in $I_{G_0}$, it follows that when $t=t'$, $\frac{ X^2}{Tr\left(X^2\right)} = \mathscr{R}_\mathcal{G}\left(G(t')\right)$. From (iii.) 
above it is seen that when $t=t'$, $X$ is singular; this implies that $\mathscr{R}_\mathcal{G}\left(G(t')\right)$ is singular as well, which is a contradiction since we know that $\mathscr{R}_\mathcal{G}$ is a function from $\mathcal{G}$ to $\mathcal{G}$ and all gram matrices in $\mathcal{G}$ are positive definite. This contradiction arose from the assumption that when $x_{ij} = \phi_{ij}(G_1)$, $X$ is not positive definite. This proves the theorem. \end{proof} Theorem \ref{Xge0} tells us that for any starting point $G_0 \in \mathcal{G}$, if we take any point $G \in I_{G_0}$, the $\phi_{ij}$'s obey the equality: $\phi_{ij}(G) = d_{ij}(G)$, $\forall \; 1 \leq i, j \le n$. Given this fact, from here onwards, we will represent the implicit functional dependence $\phi_{ij}$ by $d_{ij}$ itself. We can make a stronger statement about the behaviour of the functions $d_{ij}$ on $\mathcal{G}$. It is easier to do so if we define trajectories in $\mathcal{G}$, like the one defined in the proof of theorem \ref{Xge0}, and prove results about the behaviour of the $d_{ij}$'s with respect to the independent variable $t$. For that purpose, let $G_0, G_1 \in \mathcal{G}$ be distinct; define a linear trajectory in $\mathcal{G}$ from $G_0$ to $G_1$, $G:[0,1]\longrightarrow \mathcal{G}$, as \begin{equation} \label{Gt} G(t) = (1-t) G_0 + t G_1.\end{equation} We now apply the implicit function theorem to $F(G(t), X)$, where $X$ represents the variables whose implicit dependence we seek and $t$ is the independent variable. The analytic implicit function theorem \cite{imp} tells us that if $f_{ij}\left(G(t),\vv{x}\right)$ are analytic functions of the variables $t$ and $x_{kl}$, then $\phi_{kl}(G(t))$ (which are equal to $ d_{kl}(G(t))$) should also be analytic functions of the variable $t \in [0,1]$.
Equations \eqref{fij}, \eqref{fji} and \eqref{fii} tell us that $f_{ij}(G(t),\vv{x})$ are multivariate polynomials in the variables $t$ and $x_{kl}$, which implies that the $f_{ij}$'s are analytic functions of $t$ and $x_{kl}$. Thus $d_{kl}(G(t))$ are analytic functions of the variable $t$. This implies that, more generally, the $d_{kl}$ are analytic functions over $\mathcal{G}$. \subsection{Taylor Series and Analytic Continuation} \label{TayAna} The fact that the $d_{kl}$ are analytic functions on $\mathcal{G}$ allows us to Taylor expand the $d_{kl}$ from some point in $\mathcal{G}$ to another point. Let us assume that we want to find the solution for the MED of some gram matrix $G_1 \in \mathcal{G}$, and that we know the solution for the MED of $G_0 \in \mathcal{G}$. Then we define a trajectory as was done in equation \eqref{Gt}. We will now show that using equation \eqref{calEOY} we can obtain the derivatives of $d_{kl}(G(t))$, up to any order, with respect to $t$; this allows us to Taylor expand the $d_{kl}(G(t))$ functions about the point $t=0$. Analytically continuing from $t=0$ to $t=1$ allows us to obtain the values of $d_{kl}(G(t))$ at $t=1$. First we show how to obtain the first order derivatives of $d_{kl}(G(.))$ with respect to $t$. We will abbreviate $D(G(t)) = Diag(d_{11}(G(t)),d_{22}(G(t)),\cdots,d_{nn}(G(t)))$ as $D(t)$ for convenience. Similarly $\mathcal{X}(G(t))$ will be abbreviated as $\mathcal{X}(t)$. It will be useful to denote separately the matrix of off-diagonal elements of $\mathcal{X}(t)$; thus define $N(t) \equiv \mathcal{X}(t) - (D(t))^2$. Equation \eqref{calEOY} can be re-written as $\left(D(t)^2+N(t)\right)^2 =D(t)G(t)D(t)$. Let $\Delta \equiv \frac{d G(t)}{dt} = G_1 - G_0$.
Taking the total first order derivative on both sides of equation \eqref{calEOY} gives \begin{equation} \label{1order} \begin{array}{c} \left( D(t)^2 + N(t) \right) \left( 2 D(t) \dfrac{d D(t)}{dt} + \dfrac{d N(t)}{dt} \right)+ \left( 2 D(t) \dfrac{d D(t)}{dt} + \dfrac{d N(t)}{dt} \right) \left( D(t)^2 + N(t) \right) \\ - \; (D(t)G(t))\dfrac{d D(t) }{dt} - \dfrac{d D(t) }{dt}G(t)D(t) \end{array} = D(t) \Delta D(t), \end{equation} where \begin{enumerate} \item[] $\dfrac{d D(t)}{dt} = Diag\left(\dfrac{d \; \left( d_{11}(t) \right)}{dt},\dfrac{d \; \left( d_{22}(t) \right)}{dt},\cdots,\dfrac{d \; \left( d_{nn}(t) \right)}{dt}\right)$, \item[] $\left(\dfrac{dN(t)}{dt}\right)_{kl}= \dfrac{d}{dt} \left( d_{kl}(t) + i d_{lk}(t)\right)$ (when $k<l$), \item[] $\left(\dfrac{dN(t)}{dt}\right)_{ii}= 0$, and \item[] $\left(\dfrac{dN(t)}{dt}\right)_{kl}= \dfrac{d}{dt}\left(d_{lk}(t) - i d_{kl}(t)\right)$ (when $k>l$). \end{enumerate} Thus we get $n^2$ coupled ordinary differential equations. By substituting the values of $d_{kl}(0)$ in equation \eqref{1order} one can solve for $\frac{d \; d_{kl}(t)}{dt}{|}_{t=0}$. \smallbreak The second order derivatives can be obtained in a similar fashion: taking the total derivative of the LHS and RHS of equation \eqref{1order} with respect to $t$ (i.e., the second order derivative of the LHS of equation \eqref{calEOY}), we get a set of $n^2$ coupled second order differential equations. Setting $t=0$ and using the values of $d_{kl}(0)$ and $\frac{d \; d_{kl}(t)}{dt}{|}_{t=0}$, one can solve the resulting (linear) equations to obtain the values of the unknowns $\frac{d^2 \; d_{kl}(t)}{dt^2}{|}_{t=0}$. \smallbreak Continuing in this manner one can obtain the values of the derivatives of $d_{kl}(t)$ up to any order, at the point $t=0$. In the following equation we give the $k$-th order derivative of equation \eqref{calEOY} for this purpose.
\begin{equation} \begin{split} \label{1norder} & \Big( D(t)^2 + N(t) \Big)\Big( 2 D(t) \dfrac{d^k D(t)}{dt^k} + \dfrac{d^k N(t)}{dt^k} \Big)+\Big( 2 D(t) \dfrac{d^k D(t)}{dt^k} + \dfrac{d^k N(t)}{dt^k} \Big) \Big( D(t)^2 + N(t) \Big) \\ & - (D(t)G(t))\dfrac{d^k D(t) }{dt^k} - \dfrac{d^k D(t) }{dt^k}G(t)D(t)\\ = & -\Bigg( \Big(D(t)^2+N(t)\Big)\Big( \sum_{l_1=1}^{k-1} \binom{k}{l_1} \big( (\frac{d}{dt} )^{l_1} D(t) \big) \big( (\frac{d}{dt} )^{k-l_1} D(t) \big) \Big ) + \text{h.c.} \Bigg) \\ & - \sum_{l_2=1}^{k-1} \binom{k}{l_2} \Big( (\frac{d}{dt})^{l_2} (D(t)^2 + N(t)) \Big)\Big( (\frac{d}{dt})^{k-l_2} (D(t)^2 + N(t)) \Big) \\ & + \sum_{m_1=1}^{k-1} \binom{k}{m_1} \Big( (\frac{d}{dt})^{m_1} D(t) \Big)G(t)\Big( (\frac{d}{dt})^{k-m_1} D(t) \Big) \\ & + k \sum_{m_2=0}^{k-1} \binom{k-1}{m_2} \Big( (\frac{d}{dt})^{m_2} D(t) \Big) \Delta \Big( (\frac{d}{dt})^{k-1-m_2} D(t) \Big). \end{split} \end{equation} By substituting the values of all derivatives at $t=0$, one can expand the $d_{kl}$ functions about the point $t=0$. Analytic continuation is straightforward: by using the aforementioned Taylor expansion about $t=0$, one obtains the value of $d_{kl}$ at some $t = \delta t > 0$; one can then use the aforementioned method to obtain the values of derivatives of $d_{kl}$ at $t=\delta t$ and Taylor expand the $d_{kl}$ functions from $\delta t$ to $t > \delta t$. In this manner one can Taylor expand and analytically continue the $d_{kl}$'s from $t=0$ to the point $t=1$. The need for analytic continuation raises the following question: what is the radius of convergence for the Taylor series about some point $t$ in the interval $[0,1]$?
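Before addressing that question, it is instructive to work out the first-order step in a special case (a calculation of ours, not spelled out in the text). At the starting point $G_0 = \frac{\mathbb{1}}{n}$ one has $\mathcal{X}(0) = \frac{\mathbb{1}}{n}$, $D(0) = \frac{\mathbb{1}}{\sqrt{n}}$ and $N(0) = 0$, so at $t=0$ equation \eqref{1order} collapses to

```latex
$$ \frac{2}{n\sqrt{n}}\,\frac{d D(t)}{dt}\bigg|_{t=0} \;+\; \frac{2}{n}\,\frac{d N(t)}{dt}\bigg|_{t=0} \;=\; \frac{\Delta}{n}, $$
```

so the diagonal and off-diagonal parts decouple: $\frac{d \, (d_{ii}(t))}{dt}\big|_{t=0} = \frac{\sqrt{n}}{2}\,\Delta_{ii}$ and $\big(\frac{dN(t)}{dt}\big)_{kl}\big|_{t=0} = \frac{1}{2}\,\Delta_{kl}$ for $k \neq l$. For a general starting point the $n^2$ equations are genuinely coupled and must be solved as a linear system.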
The LHS of equations \eqref{1order} and \eqref{1norder} give us a hint: $\frac{d^k D(t)}{dt^k}$ and $\frac{d^k N(t)}{dt^k}$ scale proportionally to the $k$-th power of $\Delta$, i.e., \begin{equation} \begin{split} \label{scaling} \Delta \longrightarrow \nu \Delta \Longrightarrow \left( \dfrac{d^k D(t)}{dt^k},\dfrac{d^k N(t)}{dt^k}\right) \rightarrow \left( \nu^k \dfrac{d^k D(t)}{dt^k}, \nu^k \dfrac{d^k N(t)}{dt^k}\right). \end{split} \end{equation} This tells us that we need to keep $|| \Delta ||_2$ small to ensure that either $G_1$ falls within the radius of convergence of the $d_{kl}$ functions when expanded about the point $G_0$, or that the number of times one is required to analytically continue from $t=0$ to $t=1$ is low. It is very difficult to obtain the exact radius of convergence for every point in $\mathcal{G}$, since the value of the radius of convergence differs for different points in $\mathcal{G}$\footnote{Particularly as one gets closer to points near the boundary of $\mathcal{G}$ (which lies outside $\mathcal{G}$), the radius of convergence becomes smaller.}. For a given $G_1$ for which we wish to find the solution, it is desirable to find a $G_0$ such that $G_1$ falls within the radius of convergence of the $d_{kl}$ functions when expanded about $G_0$. In the following we give a method to find such a $G_0$ for a given $G_1$. \subsubsection{Starting point which generally doesn't require analytic continuation} \label{sub1} Let $G_0 \in \mathcal{G}$ be some gram matrix with the property that the diagonal of its positive definite square root $G_0^\frac{1}{2}$ satisfies $\left(G_0^\frac{1}{2}\right)_{11}$ $=$ $\left(G_0^\frac{1}{2}\right)_{22}$ $=$ $\cdots$ $=$ $\left(G_0^\frac{1}{2}\right)_{nn}$.
Substituting $G^\frac{1}{2} = G_0^\frac{1}{2}$, along with $W = \mathbb{1}_n$ and $D = \left(G_0^\frac{1}{2}\right)_{11} \mathbb{1}_n$, in the LHS of equation \eqref{DGWhermit} gives us the RHS of equation \eqref{DGWhermit}, i.e., they satisfy equation \eqref{DGWhermit}. It is also seen that when $D = \left(G_0^\frac{1}{2}\right)_{11} \mathbb{1}_n$, then $X = D G_0^\frac{1}{2}$ is a solution for equation \eqref{EOY} for $G = G_0$, and since $D$ is a multiple of $\mathbb{1}_n$, $X >0$. Thus, when the diagonal of $G_0^\frac{1}{2}$ is a multiple of $\mathbb{1}_n$, the solution for the MED of the corresponding gram matrix $G_0$ is known\footnote{ This result is well known. It corresponds to those cases when $\mathscr{R}\left(\widetilde{P}\right) = \widetilde{P}$.}. Thus, for a given $G_1$, we want to find a starting point $G_0$ such that the diagonal elements of $G_0^{\frac{1}{2}}$ are all equal. For this purpose expand the positive square root of $G_1$, i.e., $G_1^{\frac{1}{2}}$, in an ONB of $\mathcal{H}_n$ which comprises $\frac{\mathbb{1}}{\sqrt{n}}$ and the generalized Gell-Mann matrices $\frac{\sigma_{lk}}{\sqrt{2}}$, where $ 1 \leq l,k \leq n $ \cite{bloch}. Here the $ \sigma_{lk}$ matrices are defined as \begin{equation} \label{gell} \sigma_{lk} = \left\{ \begin{array}{lr} \ketbra{l}{k} + \ketbra{k}{l}, \; \text{ when } l<k, \\ \\ i\ketbra{l}{k} - i \ketbra{k}{l}, \; \text{ when } l>k, \\ \\ \sqrt{ \frac{2}{l (l+1) } } \left( \sum_{j=1}^{l} \ketbra{j}{j} - l \ketbra{l+1}{l+1} \right)\delta_{lk}, \text{ when } 1 \leq l \leq n-1. \end{array} \right. \end{equation} All generalized Gell-Mann matrices in equation \eqref{gell} have Hilbert-Schmidt norm $\sqrt{2}$. Let $G_1^\frac{1}{2}$ have the following expansion in these Gell-Mann matrices.
\begin{equation} \label{G1exp} G_1^\frac{1}{2} = \gamma \dfrac{\mathbb{1}}{\sqrt{n}} + \sum_{j=1}^{n-1} \beta_j \dfrac{\sigma_{jj}}{\sqrt{2}} + \sum_{\underset{l \neq k}{ l , k=1}}^{n} \zeta_{lk} \dfrac{\sigma_{lk}}{\sqrt{2}}, \end{equation} where $\zeta_{lk}, \beta_j, \gamma$ are real numbers. Note that $\gamma>0$, since $G_1^\frac{1}{2} > 0$. Based upon this, define $G_0^{\frac{1}{2}}$ as \begin{equation} \label{G0exp} G_0^\frac{1}{2} = \kappa \dfrac{\mathbb{1}}{\sqrt{n}} + \sum_{l \neq k} \zeta_{lk} \dfrac{\sigma_{lk}}{\sqrt{2}}, \end{equation} where $\kappa = \sqrt{ \gamma^2 + \sum_{j} \beta_j^2}$. It is easily verified that $Tr(G_0) =1$. One needs to check if $G_0^\frac{1}{2} >0$ or not. Generally, it is true that $G_0^\frac{1}{2} > 0$. But if some eigenvalues of $G_1$ are close to $0$, this \emph{may} not hold. Suppose it holds (as is generally the case); then, since $\mathcal{X}(G_0) = \frac{\kappa}{\sqrt{n}} G_0^\frac{1}{2}$, we have $d_{ii}(0) = \dfrac{\kappa}{\sqrt{n}}$, $d_{kl}(0)=\dfrac{\kappa}{\sqrt{n}} Re \left( \left(G_0^\frac{1}{2} \right)_{kl}\right)$ when $k<l$, and $d_{kl}(0)=\dfrac{\kappa}{\sqrt{n}} Im\left( \left(G_0^\frac{1}{2} \right)_{lk} \right)$ when $k>l$. $G_1$ generally falls within the radius of convergence of all the $d_{kl}$ functions about the starting point $G_0$. In such circumstances one doesn't need analytic continuation; one can straightforwardly calculate $d_{kl}(1)$ from the Taylor series about $t=0$. If $G_0^\frac{1}{2}$, obtained this way, isn't positive definite, then this method fails and one needs another starting point. \subsubsection{Starting points which generally require analytic continuation} \label{sub2} Another possible starting point is an ensemble of equiprobable orthogonal states; this ensemble's gram matrix is $G_0 = \frac{\mathbb{1}}{n}$, for which $d_{ij}(0)=\delta_{ij}\frac{1}{\sqrt{n}}$. To \emph{drag} the solution from $G_0$ to $G_1$ one needs to divide the $[0,1]$ interval into subintervals and analytically continue the $d_{kl}$'s from the starting point of each subinterval to its corresponding ending point.
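As a sketch of this \emph{dragging} (our own code, not the paper's implementation: it uses only the first-order derivatives, i.e., Euler steps, instead of the higher-order Taylor expansions, a step count chosen by us, and a made-up $2\times 2$ target $G_1$), the following continues the solution from $G_0 = \frac{\mathbb{1}}{n}$ to $G_1$ and then evaluates the residual of equation \eqref{EOY} as an error estimate.

```python
import numpy as np

def herm_basis(n):
    """A real basis of the space of n x n hermitian matrices."""
    basis = []
    for i in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[i, i] = 1.0
        basis.append(E)
    for i in range(n):
        for j in range(i + 1, n):
            S = np.zeros((n, n), dtype=complex)
            S[i, j] = S[j, i] = 1.0
            basis.append(S)
            A = np.zeros((n, n), dtype=complex)
            A[i, j] = 1j
            A[j, i] = -1j
            basis.append(A)
    return basis

def vec(H):
    """Pack a hermitian matrix into a real vector (diagonal, then Re/Im of upper part)."""
    n = H.shape[0]
    out = list(np.real(np.diag(H)))
    for i in range(n):
        for j in range(i + 1, n):
            out += [H[i, j].real, H[i, j].imag]
    return np.array(out)

def solve_dX(X, G, Delta):
    """Solve the n^2 coupled first-order equations for dX/dt at the current point."""
    d = np.sqrt(np.real(np.diag(X)))                     # d_ii(t) = sqrt(X_ii)
    D = np.diag(d)
    def J(dX):                                           # LHS of the first-order equation
        Dd = np.diag(np.real(np.diag(dX)) / (2 * d))     # dD/dt induced by dX
        return dX @ X + X @ dX - Dd @ G @ D - D @ G @ Dd
    basis = herm_basis(X.shape[0])
    M = np.column_stack([vec(J(B)) for B in basis])
    c = np.linalg.solve(M, vec(D @ Delta @ D))
    return sum(ci * B for ci, B in zip(c, basis))

# Euler continuation from G0 = 1/n (where d_ij(0) = delta_ij/sqrt(n), i.e. X(0) = 1/n).
n, steps = 2, 1000
G0 = np.eye(n) / n
G1 = np.array([[0.5, 0.3], [0.3, 0.5]])                  # made-up target gram matrix
Delta = G1 - G0
X = np.eye(n, dtype=complex) / n
for k in range(steps):
    t = k / steps
    X = X + (1.0 / steps) * solve_dX(X, (1 - t) * G0 + t * G1, Delta)

D1 = np.diag(np.sqrt(np.real(np.diag(X))))
residual = np.linalg.norm(X @ X - D1 @ G1 @ D1)          # error estimate; close to 0
```

Shrinking the subintervals (increasing `steps`) lowers the residual, in line with the remarks on error below.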
Here it needs to be ensured that one doesn't overshoot beyond the radius of convergence of any of the $d_{kl}$ functions at the starting point of each subinterval. For this purpose it was found that it generally suffices to divide $[0,1]$ into $\left\lceil n^2 || \Delta ||_2 \right\rceil$ subintervals. Generally, the smaller the subintervals, the lower the error. \subsubsection{Error Estimation} \label{ex1} There is a simple method to estimate the degree of error in the process; it is based on the fact that when the solution, i.e., the $d_{kl}(1)$'s, is substituted in the LHS of equation \eqref{EOY} one should obtain the zero matrix, which isn't what we get due to errors. Thus the Hilbert-Schmidt norm of the quantity on the LHS, i.e., the value of $|| \left(D(1)^2 + N(1)\right)^2 - D(1)G(1)D(1)||_2$, gives us an estimate of the degree of error which has accumulated in the solution. The closer $|| \left(D(1)^2 + N(1)\right)^2 - D(1)G(1)D(1)||_2$ is to $0$, the lower the error. Note that \emph{one cannot decrease the error significantly by increasing the order up to which the Taylor series is expanded beyond a certain order of expansion. On the other hand, the error can be substantially reduced by decreasing the size of the subintervals}. Thus, having solved for $d_{kl}(1)$ with a high degree of accuracy, one can now obtain the optimal POVM. In the following we present an example for $n=5$. Note that while the precision of the starting point is up to $20$ significant digits, only the first $6$ significant digits have been displayed. For lack of space some quantities have been displayed with only $4$ significant digits.
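Before turning to the worked example, this error estimate is easy to render numerically. The sketch below (our own illustration, assuming numpy and scipy are available; the matrices are randomly generated stand-ins, not the paper's $5 \times 5$ example) builds a consistent pair $(D, N)$ from the exact positive root $X = (DGD)^{1/2}$ of $X^2 = DGD$, so the residual norm should come out at machine-precision level.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 5

# Illustrative positive-definite "gram matrix" with unit trace (not the paper's example).
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
G = A.conj().T @ A
G /= np.trace(G).real

# For any diagonal D > 0, X = (D G D)^{1/2} solves X^2 = D G D exactly,
# so N = X - D^2 should drive the error estimate down to numerical noise.
D = np.diag(rng.uniform(0.5, 1.0, size=n))
X = sqrtm(D @ G @ D)
N = X - D @ D

# The error estimate: Hilbert-Schmidt (Frobenius) norm of (D^2 + N)^2 - D G D.
err = np.linalg.norm((D @ D + N) @ (D @ D + N) - D @ G @ D)
print(err)  # tiny (machine-precision level)
```

In an actual run of the continuation method, $D(1)$ and $N(1)$ come from the Taylor-stepped solution, and this same norm is the accumulated-error diagnostic.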
\medbreak \begin{tabular}{l l} $\tket{\psi}{1} = \left[\begin{array}{c} 0.320457 \\ 0.123687 + i0.0117558 \\ 0.117838 - i0.027942 \\ 0.109674 + i0.0167151 \\ 0.0860555 + i0.00780123 \end{array}\right]$ & $\tket{\psi}{2} = \left[\begin{array}{c} 0.123687 - i0.0117558 \\ 0.397851 \\ 0.169692 - i0.0506685 \\ 0.125198 - i0.0244774 \\ 0.124106 - i0.0261114 \end{array}\right]$ \\ ~ & ~ \\ $\tket{\psi}{3} = \left[\begin{array}{c} 0.117838 + i0.027942 \\ 0.169692 + i0.0506685 \\ 0.404725 \\ 0.13847 + i0.0177653 \\ 0.122277 - i0.0249506 \end{array}\right]$ & $\tket{\psi}{4} = \left[\begin{array}{c} 0.109674 - i0.0167151 \\ 0.125198 + i0.0244774 \\ 0.13847 - i0.0177653 \\ 0.373791 \\ 0.110387 - i0.013984 \end{array}\right]$ \\ ~ & ~ \\ $\tket{\psi}{5} = \left[\begin{array}{c} 0.0860555 - i0.00780123 \\ 0.124106 + i0.0261114 \\ 0.122277 + i0.0249506 \\ 0.110387 + i0.013984 \\ 0.33677 \end{array}\right] $. \end{tabular} \smallskip The corresponding $\tket{u}{i}$ states are given by: \smallskip \begin{tabular}{l l} $\tket{u}{1} = \left[\begin{array}{c} 3.93887 \\ -0.668108 + i0.0313699 \\ -0.553671 + i0.331697 \\ -0.611991 - i0.234777 \\ -0.375925 - i0.264517 \end{array}\right]$ & $\tket{u}{2} = \left[\begin{array}{c} -0.668108 - i0.0313699 \\ 3.52494 \\ -0.939093 + i0.353801 \\ -0.418308 + i0.204928 \\ -0.685643 + i0.0142212 \end{array}\right]$ \\ ~ & ~ \\ $\tket{u}{3} = \left[\begin{array}{c} -0.553671 - i0.331697 \\ -0.939093 - i0.353801 \\ 3.50731 \\ -0.634577 - i0.0903887 \\ -0.554402 + i0.418281 \end{array}\right]$ & $\tket{u}{4} = \left[\begin{array}{c} -0.611991 + i0.234777 \\ -0.418308 - i0.204928 \\ -0.634577 + i0.0903887 \\ 3.42828 \\ -0.568152 + i0.0597921 \end{array}\right]$ \\ ~ & ~ \\ $\tket{u}{5} = \left[\begin{array}{c} -0.375925 + i0.264517 \\ -0.685643 - i0.0142212 \\ -0.554402 - i0.418281 \\ -0.568152 - i0.0597921 \\ 3.74634 \end{array}\right] $.\end{tabular} \medbreak The gram matrix for the ensemble $\{ \tket{\psi}{i} \}_{i=1}^{5}$, i.e., $G_1$, is given by: \footnotesize
\medbreak \begin{align*} &G_1 = \\ &\left[ \begin{array}{lllll} 0.15257 & 0.13405 -i0.017665 & 0.13285+ i0.021068 & 0.11811-i0.010337 & 0.098267+ i0.0026888 \\ 0.13405 + i0.017665 & 0.23744 & 0.18316+ i0.051216 & 0.14883+ i0.02325 & 0.13487+ i0.034111 \\ 0.13285 -i0.021068 & 0.18316 -i0.051216 & 0.24489 & 0.15659-i0.020010 & 0.13850+ i0.013294 \\ 0.11811 + i0.010337 & 0.14883 -i0.023258 & 0.15659+ i0.020010 & 0.20017 & 0.12067+ i0.016377 \\ 0.098267 -i0.0026888 & 0.13487 -i0.034111 & 0.13850-i0.013294 & 0.12067-i0.016377 & 0.16492 \end{array}\right]. \end{align*} \normalsize \medbreak Then using equation \eqref{G0exp}, we have \footnotesize \begin{align*} &G_0^\frac{1}{2} = \\ &\left[ \begin{array}{lllll} 0.36821 & 0.12368 -i0.011755 & 0.11783+ i0.02794 & 0.10967-i0.016715 & 0.086055-i0.0078012 \\ 0.12368 + i0.011755 & 0.36821 & 0.16969+ i0.050668 & 0.12519+ i0.024477 & 0.12410+ i0.026111 \\ 0.11783 -i0.02794 & 0.16969-i0.050668 & 0.36821 & 0.13847-i0.017765 & 0.12227+ i0.024950 \\ 0.10967 + i0.016715 & 0.12519 -i0.024477 & 0.13847+ i0.017765 & 0.36821 & 0.11038+ i0.013984 \\ 0.086055 + i0.0078012 & 0.12410 -i0.026111 & 0.12227-i0.024950 & 0.11038-i0.01398 & 0.36821 \end{array}\right]. \end{align*} \normalsize \medbreak We see that the diagonal elements of $G_0^\frac{1}{2}$ are all equal. Also $G_0^\frac{1}{2}>0$. Thus $d_{ii}(0)$ is equal to the (common) diagonal element of $G_0^\frac{1}{2}$, and the $d_{kl}(0)$ (for $k \neq l$) are assigned the values of the off-diagonal elements of ${G_0}^\frac{1}{2}$. Here $|| \Delta ||_2 = ||G_1 - G_0||_2 = 0.058777 \sim 1/5^{2} = 0.04$. This indicates that $t=1$ lies within the radius of convergence of the implicitly defined functions $d_{kl}$ about the point $t=0$, so that no analytic continuation is required at any intermediate point.
Upon employing the Taylor series expansion, expanded up to the $10$-th order, we obtain the solution for $\mathcal{X}(1)= {D(1)}^2+N(1)$: \footnotesize \begin{align*} &\mathcal{X}(1)= {D(1)}^2+N(1) = \\ &\left[ \begin{array}{lllll} 0.09627 & 0.04197 -i0.00407 & 0.04054+ i0.009487 & 0.03528-i0.005896 & 0.02484-i0.003121 \\ 0.04197 + i0.00407 & 0.1635 & 0.07237+ i0.02128 & 0.04981+ i0.009339 & 0.04439+ i0.008852 \\ 0.04054 -i0.009487 & 0.07237 -i0.02128 & 0.1710 & 0.05580 -i0.00729 & 0.04424+ i0.008926 \\ 0.03528 + i0.005896 & 0.04981 -i0.009339 & 0.05580+ i0.00729 & 0.1399 & 0.03732+ i0.004563 \\ 0.02484 + i0.003121 & 0.04439 - i0.008852 & 0.04424-i0.008926 & 0.03732-i0.004563 & 0.1083 \end{array}\right]. \end{align*} \normalsize \medbreak $\mathcal{X}(1)>0$ holds true. $d_{11}(1) = 0.310278, \; d_{22}(1) = 0.404377, \; d_{33}(1)=0.413591, \; d_{44}(1)=0.374064, \; d_{55}(1)=0.329225$. The maximum success probability is $P_s^{max}= \sum_{i=1}^{n} (d_{ii}(1))^2 = 0.679164$, and $|| (\mathcal{X}(1))^2 - D(1)G(1)D(1)||_2 = 2.92337 \times 10^{-9}$. For lack of space the projectors $\ketbra{w_i}{w_i}$ aren't given here.
Instead we give the ONB $\{ \ket{w_i} \}_{i=1}^{n}$: \medbreak \begin{tabular}{l l} $\ket{w_1} = \left[ \begin{array}{c} 0.998902 -i 0.000902941 \\ -0.0294294 -i0.00140465 \\ -0.0281464+ i0.0114238 \\ -0.0185558-i 0.00595048 \\ -0.00450716-i0.00157192 \end{array} \right]$ & $\ket{w_2} = \left[ \begin{array}{c} 0.0295208 -i0.00161874 \\ 0.999231 -i0.000890303 \\ -0.00479151+ i0.00107801 \\ 0.00760396-i0.00239694 \\ 0.0239121-i0.00195944 \end{array} \right]$ \\ ~ & ~ \\ $\ket{w_3} = \left[ \begin{array}{c} 0.0285947 + i0.0113073 \\ 0.00328547 + i0.000850588 \\ 0.999104+ i0.000230941 \\ 0.0125773+ i0.00210581 \\ 0.0237566-i0.0103508 \end{array} \right]$ & $\ket{w_4} = \left[ \begin{array}{c} 0.0179661 -i0.00612077 \\ -0.00850615 -i0.00226113 \\ -0.0132936+ i0.00235778 \\ 0.999616+ i0.00060647 \\ 0.0121594-i0.000373086 \end{array} \right]$ \\ ~ & ~ \\ $\ket{w_5} = \left[ \begin{array}{c} 0.00301285 -i0.00208885 \\ -0.0240121 -i 0.00194482 \\ -0.0235693-i0.0103215 \\ -0.0127196-i0.000528417 \\ 0.99929+ i0.000965318 \end{array} \right]$.\end{tabular} \medbreak Despite having satisfied the rotationally invariant conditions (refer to theorem \ref{thm2}), we would like to see if both the conditions \eqref{St} and \eqref{Glb} are satisfied. Instead of checking condition \eqref{St} we check whether $Z$, from equation \eqref{Z}, is hermitian or not. We first use $\{ \ket{w_i} \}_{i=1}^{n}$ to compute the operator $Z$. We measure the non-hermiticity of $Z$ as $\frac{1}{2}|| Z - Z^\dag ||_2$, which takes the value $2.22059\times 10^{-10}$ for our example. That $Z$ is hermitian (within error) and satisfies equation \eqref{Z} implies that equations \eqref{cslack}, or equivalently equations \eqref{St}, are satisfied. Additionally we find that $\forall \; 1 \leq i \leq n$, all except one eigenvalue of $Z - p_i \ketbra{\psi_i}{\psi_i}$ are positive.
For each $i=1,2,\cdots,n$ the non-positive eigenvalue of $Z-p_i \ketbra{\psi_i}{\psi_i}$ is either $0$ or of the order $10^{-10}$, showing that condition \eqref{Glb} is also satisfied. Thus we have demonstrated an example of obtaining the optimal POVM for the MED of an ensemble of $5$ LI states. \subsection{Algorithms: Computational Complexity} \label{algcompl} In the following we outline the algorithm for the Taylor series expansion method, which gives us the solution for the MED of a given $n$-LIP ensemble. The method has already been elucidated in detail in subsubsection \ref{sub2}. After giving the algorithm, we give its time complexity\footnote{The time complexity of any algorithm is given by the order of the total number of elementary steps involved in completing said algorithm. In this paper, each of the following is regarded as an elementary step: a basic arithmetic operation (addition, subtraction, multiplication, division) on floating point variables, assigning a value to a floating-point variable, checking a condition, and retrieving the value of a variable stored in memory.} and space complexity\footnote{The space complexity is the count of the total number of variables and constants used in an algorithm. These variables and constants can be of floating point type, integer type, binary, etc.; in this paper we treat them all alike when adding up the number of variables to give us the final count. As for the time complexity, the space complexity is also given in terms of the \emph{order} of the count, rather than the exact number.}. The acceptable error tolerance assumed here is of the order $10^{-9}$, and the time and space complexities are computed corresponding to this acceptable error margin. \bigskip \textbf{Algorithm 1: Taylor Series} The algorithm of the Taylor series method (subsubsection \ref{sub2}) is given in the following steps. \begin{enumerate} \item[(1)] Construct the gram matrix $G_1$ from the given ensemble $\widetilde{P}$.
Choose an appropriate starting point $G(0)=G_0$ (for which the solution $d_{ij}(0)$, for all $1 \le i,j \le n$, is known) and define the function $G(t)= (1-t)G_0 + t G_1$. If $|| \Delta ||_2 n^2 \sim 1$ then there is no need to divide the interval $[0,1]$ into subintervals; otherwise divide $[0,1]$ into $L\equiv \left\lceil|| \Delta ||_2 n^{2}\right\rceil$ subintervals. \item[(2)] For $l=0,1,2,\cdots,L-1$, set $t_l= \dfrac{l}{L}$ and iterate over each subinterval in the following manner: \begin{enumerate} \item [(2.1)] For $k = 1$ through $k = K$ iterate the following: solve equation \eqref{1norder} for $\dfrac{d^k d_{ij}}{dt^k}|_{t=t_l}$, for all $ 1 \le i, j \le n$, by using the values of lower order derivatives as explained in subsection \ref{TayAna}. \item[(2.2)] Having obtained the values of the derivatives $\dfrac{d^{k} d_{ij}}{dt^k}|_{t=t_l}$ up to $K$-th order for all $1 \le i,j \le n$, substitute these derivatives into the Taylor series expansion of the $d_{ij}$ functions about the point $t=t_l$, expanded to $K$-th order. The resulting expressions are $K$-th degree polynomials in the variable $t$, one for each $d_{ij}$. Obtain the value of $d_{ij}(t_{l+1})$ by evaluating these polynomials at $t=t_{l+1}$. Then increment $t$ to $t_{l+1}$, go to (2) and iterate. Stop when $l=L$. \end{enumerate} \end{enumerate} In the following table we give the time and space complexity of the various steps in the aforementioned algorithm.\medbreak \small \begin{tabular}{ | p{1em} |p{25em}|p{7em}| p{7em} |} \hline \multicolumn{4}{|c|}{Computational Complexity for Taylor Series Method} \\ \hline & Step in the algorithm & \small{Time Complexity} & \small{Space Complexity} \\ \hline 1. & Computing $G_1$ from $\widetilde{P}$ & $\mathcal{O}(n^3)$ & $\mathcal{O}(n^2)$\\ 2. & Computing $G_0$ from $G_1$, as in subsubsection \ref{sub1} & $\mathcal{O}(n^3)$ & $\mathcal{O}(n^3)$\\ 3.
& Solving for $\frac{d^k \; d_{ij}}{dt^k}|_{t=t_l}$ for $k = 1,2,\cdots,K$ & $\mathcal{O}(K n^6)$ & $\mathcal{O}(K n^4)$\\ 4. & Computing the Taylor series expansion of $d_{ij}$ up to $K$-th order at $t=t_{l+1}$ & $\mathcal{O}(Kn^2)$ & $\mathcal{O}(n^2)$ \\ 5. & Repeating steps 3. and 4. $L \simeq n^2 || \Delta ||_2$ times & $\mathcal{O}(K^2 n^8)$ & $\mathcal{O}( n^6)$ \\ \hline \end{tabular} \medbreak \normalsize Note that the algorithm is polynomial in $n$. It is expected that to maintain the acceptable error margin (i.e., $||\left(D(1)^2+N(1)\right)^2$ $-$ $D(1)G(1)D(1) ||_2$ $\lesssim$ $10^{-9}$) as $n$ increases, one would have to increase the value of $K$ as well. While the numerical examples we checked support this hypothesis, the required increment of $K$ to compensate for the increase in the value of $n$ is seen to be significant only over large variations of $n$ (when $n$ varies over a range of $20$). Indeed, it remains almost constant from $n = 3$ to $n=10$ for the error to remain within a margin of the order of $10^{-9}$. As in the example given in subsubsection \ref{ex1}, choosing $K=10$ suffices to maintain the error within said margin. If $|| \Delta ||_2 n^2 \simeq 1$, analytic continuation isn't required, and then the total time complexity of the algorithm is $\mathcal{O}(n^6)$ and the total space complexity is $\mathcal{O}(n^4)$. In case $|| \Delta ||_2 n^2 > 1$, since the maximum value of $|| \Delta ||_2$ is $\le 2$, analytic continuation is required, and in that case the worst case time and space complexities\footnote{That is, worst-case with respect to the value of $|| \Delta ||_2$.} are given by $\mathcal{O}( n^8)$ and $\mathcal{O}( n^6)$ respectively. While the Taylor series method is polynomial in time with a relatively low computational complexity, it is seen that directly employing Newton's method is simpler and more computationally efficient. We will now explain how to employ Newton's method.
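Before moving on, the core mechanics of Algorithm 1 (solve for each Taylor coefficient from the lower-order ones, then step through subintervals) can be illustrated on a toy scalar analogue. This is our own example, not the paper's matrix system: continue $d(t)$, defined implicitly by $d(t)^2 = g(t)$ with $g(t) = (1-t)g_0 + t g_1$, from the known $d(0)=\sqrt{g_0}$ to $d(1)$.

```python
import numpy as np

def taylor_step(d0, slope, h, K=12):
    """Advance d by step h, where d(t)^2 = g(t) and g is linear with the given slope.
    Matching Taylor coefficients of d^2 = g about the current point gives, for k >= 1,
        2 c0 ck + sum_{j=1}^{k-1} cj c_{k-j} = g_k   (g_1 = slope, g_k = 0 for k >= 2),
    so each ck is solved from lower-order coefficients, as in step (2.1) of Algorithm 1."""
    c = [d0]
    for k in range(1, K + 1):
        g_k = slope if k == 1 else 0.0
        inner = sum(c[j] * c[k - j] for j in range(1, k))
        c.append((g_k - inner) / (2.0 * c[0]))
    # Step (2.2): evaluate the K-th degree Taylor polynomial at the next point.
    return sum(ck * h ** k for k, ck in enumerate(c))

g0, g1 = 1.0, 4.0
L = 16                       # number of subintervals of [0, 1]
d = np.sqrt(g0)              # known starting value d(0)
for l in range(L):
    d = taylor_step(d, g1 - g0, 1.0 / L)

print(abs(d - np.sqrt(g1)))  # small: the continuation reaches d(1) = sqrt(g1) = 2
```

Each subinterval step stays well inside the radius of convergence (set here by the zero of $g$ at $t=-1/3$), mirroring the overshoot caution discussed above.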
\textbf{Algorithm 2: Newton's Method} This is a well known numerical technique for solving non-linear equations. We use it here to solve the equations $f_{ij}(G, \vv{x}) =0$, $\forall$ $1 \le i,j \le n$ (see \eqref{fij}, \eqref{fji} and \eqref{fii} for $f_{ij}$), where $G$ is the gram matrix of the ensemble $\widetilde{P}$ whose MED we want to solve for, and $\vv{x}$ are the variables which, we demand, will converge to the solution $\vv{d}(G)$. This convergence is achieved over a few iterations which are part of the algorithm. The technique is based on a very simple principle which we will now elaborate. The Taylor expansion of the $f_{ij}(G,.)$ functions, when expanded about the point $\vv{d}(G)$, can be approximated by the first order terms for small perturbations $\vv{d}(G)$ $\longrightarrow$ $\vv{d}(G)+\delta \vv{x}$, as seen in the equation below. \begin{equation} \label{tay1} \begin{split} f_{ij}(G, \vv{d}(G) + \delta \vv{x}) & \approx \; f_{ij}(G, \vv{d}(G)) + \sum_{k,l=1}^{n} \left( \dfrac{\partial f_{ij}(G,\vv{x}) }{\partial x_{kl} } |_{\vv{x}=\vv{d}(G)}\right) \delta x_{kl} \\ & = \; \sum_{k,l=1}^{n} \left(J_{G}\right)_{ij,kl} \delta x_{kl}, \end{split} \end{equation} where we have used $f_{ij}(G, \vv{d}(G)) =0, \; \forall \; 1 \le i,j \le n$, and where we denote $\left(J_{G}\right)_{ij,kl} \equiv \dfrac{\partial f_{ij}(G,\vv{x}) }{\partial x_{kl} } |_{\vv{x}=\vv{d}(G)}$. We want to obtain the value of the solution $\vv{d}(G)$. We assume that our starting point is $\vv{d}(G)+\delta \vv{x}$, which is close to $\vv{d}(G)$, so that $f_{ij}(G,\vv{d}(G)+\delta \vv{x})$ can be approximated by the RHS of equation \eqref{tay1}. Denote the inverse of the Jacobian $J\left(G,\vv{d}(G)\right)$ by $\left(J_{G}\right)^{-1}$\footnote{In theorem \ref{Jacthm} we proved that the Jacobian is non-singular, so we know that the inverse \emph{will} exist.}.
Then we get \begin{equation} \label{tay2} \sum_{k,l=1}^{n} \left( \left(J_{G}\right)^{-1} \right)_{ij,kl} f_{kl}(G, \vv{d}(G) + \delta \vv{x}) \approx \delta x_{ij}, \; \forall \; 1 \le i,j \le n. \end{equation} Subtracting $\delta \vv{x}$ from $\vv{d}(G)+\delta \vv{x}$ gives us $\vv{d}(G)$, which is the required solution. The catch here is that, since we do not know the solution $\vv{d}(G)$ to start with, we cannot compute the Jacobian $J(G,\vv{d}(G))$. But since $\vv{d}(G) + \delta \vv{x}$ is close to $\vv{d}(G)$, we can approximate $J(G,\vv{d}(G))$ by $J(G,\vv{d}(G)+ \delta \vv{x})$, which we can compute. So we use $J(G,\vv{d}(G)+ \delta \vv{x})$ in place of $J(G,\vv{d}(G))$ in the algorithm; in particular, instead of using $\left( J_{G} \right)^{-1}$ computed at $\vv{d}(G)$ in equation \eqref{tay2}, we use it computed at the point $\vv{d}(G) + \delta \vv{x}$. The description of the principle behind Newton's method clarifies the algorithm, whose steps we list below. Starting with $x_{ij}^{(0)}=\dfrac{1}{\sqrt{n}} \delta_{ij}$ (for all $1 \le i,j \le n$), $k=1$ and assuming $\epsilon = 10^{-9}$, iterate \begin{enumerate} \item[(1)] Substitute $x_{ij}^{(k-1)}$ into the functions $f_{ij}(G,.)$ defined in equations \eqref{fij}, \eqref{fji} and \eqref{fii}. Arrange all the $f_{ij}$'s in a single column, which will have $n^2$ rows; we will denote this $n^2$-row long column by $\gamma^{(k-1)}$. \item[(2)] Stop when $|| \gamma^{(k-1)} ||_2 < \epsilon$; otherwise perform the following: \begin{enumerate} \item[(2.1)] Compute the Jacobian, $J_G^{(k-1)}$, where $\left(J_G^{(k-1)}\right)_{ij,st} = \dfrac{\partial f_{ij}(G,\vv{x}) }{\partial x_{st} } $ at the point $\vv{x} = \vv{x}^{(k-1)}$. \item[(2.2)] Compute the inverse of $J_G^{(k-1)}$, i.e., $\left( J_G^{(k-1)} \right)^{-1}$. \item[(2.3)] Set $x_{ij}^{(k)} =x_{ij}^{(k-1)} - \left( \left( J_G^{(k-1)} \right)^{-1} \gamma^{(k-1)} \right)_{ij}$, increment $k$ by $1$, and return to step (1).
\end{enumerate} \end{enumerate} For each $n=3$ to $n=20$, we tested approximately twenty-thousand different examples, for each of which we obtained the required solution within the error margin. Moreover, the maximum number of iterations required to maintain the error tolerance was constant over this range of $n$; more specifically, each of these examples required no more than ten iterations. Since the number of iterations required doesn't increase with $n$ (or increases very slowly), the computational complexity (time and space) of this algorithm is determined by the cost of the steps within an iteration. Keeping this in mind, we give the computational complexity (time and space) of this algorithm in the following table. \medbreak \small \begin{tabular}{ | p{1em} |p{25em}|p{7em}| p{7em} |} \hline \multicolumn{4}{|c|}{Computational Complexity for Newton's Method} \\ \hline & Step in the algorithm & \small{Time Complexity} & \small{Space Complexity} \\ \hline 1. & Computing values of $f_{ij}(G,\vv{x}^{(k-1)})$, by substituting $\vv{x}^{(k-1)}$ into equations \eqref{fij}, \eqref{fji} and \eqref{fii}, for all $ 1\le i,j \le n$ & $\mathcal{O}(n^2)$ & $\mathcal{O}(n^2)$\\ 2. & Computing the Jacobian, $J_G^{(k-1)}$, at the point $\vv{x}^{(k-1)}$ & $\mathcal{O}(n^4)$ & $\mathcal{O}(n^4)$\\ 3. & Computing the inverse of the Jacobian $\left(J_G^{(k-1)}\right)^{-1}$ from the Jacobian, at the point $\vv{x}^{(k-1)}$ & $\mathcal{O}(n^6)$ & $\mathcal{O}(n^4)$\\ 4. & Computing $\vv{x}^{(k)}$ using $\left(J_G^{(k-1)}\right)^{-1}$ and $\vv{x}^{(k-1)}$ (point 2.3 in the list of steps of this algorithm above) & $\mathcal{O}(n^4)$ & $\mathcal{O}(n^2)$ \\ \hline \end{tabular}\normalsize \medbreak Thus we see that the time complexity of Newton's method is $\mathcal{O}(n^6)$ and the space complexity is $\mathcal{O}(n^4)$.
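As a generic illustration of the iteration $\vv{x}^{(k)} = \vv{x}^{(k-1)} - (J_G^{(k-1)})^{-1}\gamma^{(k-1)}$, the sketch below is our own stand-in: it replaces the analytic Jacobian of the $f_{ij}$ by a forward finite-difference approximation, and the toy system is not the MED system of equations.

```python
import numpy as np

def newton(f, x0, tol=1e-9, max_iter=50, eps=1e-7):
    """Solve f(x) = 0 by x_k = x_{k-1} - J^{-1} f(x_{k-1});
    J is approximated here by forward finite differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:       # analogue of the ||gamma^{(k-1)}||_2 < epsilon test
            break
        J = np.empty((fx.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (f(x + dx) - fx) / eps
        x = x - np.linalg.solve(J, fx)     # the update of step (2.3)
    return x

# Toy system: intersection of the unit circle with the line x = y.
def f(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])

sol = newton(f, [1.0, 0.5])
print(sol)  # both components close to 1/sqrt(2)
```

In the paper's setting one would of course use the analytic Jacobian $J_G^{(k-1)}$ of the $f_{ij}$, which is what the complexity counts in the table refer to; the finite-difference version is only for illustration.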
The number of steps involved is smaller than in the Taylor series method, making this algorithm simpler; moreover, the computational complexity (both time and space) of Newton's method is lower than that of the Taylor series method when one cannot find a starting point $G_0$ close enough to the given point $G_1$ in the latter method. Let us compare the efficiency of these methods to that of an SDP algorithm. We will employ an SDP algorithm known as the Barrier-type Interior Point Method (IPM) \cite{Boyd}. \textbf{Algorithm 3: Barrier-type IPM (SDP)} The SDP problem corresponding to MED is given by \eqref{dual}. The objective of this problem is to minimize the value of $Tr(Z)$ subject to the constraints $Z \ge p_i \ketbra{\psi_i}{\psi_i}$, $\forall$ $i=1,2,\cdots,n$. In this method we obtain the $Z$ which solves \eqref{dual} over a series of iterations, known as outer iterations. One starts the $k$-th such iteration with an input $Z^{(k-1)}$ (a candidate for $Z$) and ends with an output $Z^{(k)}$, which will serve as the input for the next iteration. The $Z^{(k)}$, which are successive approximations for $Z$, take values within the feasible region, i.e., the region given by the set $\{ Z \text{ is $n \times n$, positive definite} \; | \;Z \ge p_i \ketbra{\psi_i}{\psi_i}$, $\forall$ $1 \le i \le n\}$. If $Z$ lies in the interior of this feasible region then $Z > p_i \ketbra{\psi_i}{\psi_i}$, $\forall$ $1 \le i \le n$, whereas if $Z$ is a boundary point of the feasible region then there is some $i = 1,2,\cdots, n$ such that $Z - p_i \ketbra{\psi_i}{\psi_i}$ has at least one zero eigenvalue. In the first iteration, one starts with some \emph{strictly} feasible $Z=Z^{(0)}$, i.e., some $Z^{(0)}$ which lies in the interior of the feasible region.
To ensure that $Z^{(k)}$ remains within the feasible region one perturbs the objective function which is being minimized: instead of performing an unconstrained minimization of $Tr(Z)$, one performs an unconstrained minimization of $Tr(Z) - \dfrac{1}{w}\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$, where $\dfrac{1}{w}$ is a weight factor. The reason behind subtracting the expression $\dfrac{1}{w}$ $\sum_{i=1}^{n} Log(Det(Z-p_i \ketbra{\psi_i}{\psi_i}))$ from $Tr(Z)$ for the unconstrained minimization is that the expression $Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$ diverges as $Z$ approaches the boundary of the feasible region. Thus performing the unconstrained minimization of $Tr(Z)-\dfrac{1}{w }\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$ will ensure that while the candidates for $Z$, viz.\ $Z^{(k)}$, may inch closer to the boundary of the feasible region, they will never cross it. The unconstrained minimization of $Tr(Z) - \dfrac{1}{w}\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$ is performed using Newton's method. The iterations within Newton's method are known as inner iterations. Newton's method is performed as follows: using the generalized Gell-Mann basis, expand $Z = \sum_{i,j=1}^{n} y_{ij} \frac{\sigma_{ij}}{\sqrt{2}}$, where $\sigma_{nn} = \text{\small$\sqrt{\frac{2}{n}}$} \mathbb{1}_n$. Obtain the equations \begin{equation} \begin{split} \label{hij} h_{kl}(\vv{y}) & \equiv \; \dfrac{\partial \; \left( Tr(Z) - \dfrac{1}{w}\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i})) \right)}{\partial \; y_{kl}} \\ = & \; \sqrt{n} \delta_{k,n}\delta_{l,n} - \dfrac{1}{w} \sum_{i=1}^{n} Tr \left( \left( Z - p_i \ketbra{\psi_i}{\psi_i} \right)^{-1} \frac{\sigma_{kl}}{\sqrt{2}} \right). \end{split} \end{equation} We want to solve the equations $h_{ij}=0,$ $\forall$ $1 \le i,j \le n$, using Newton's method. The algorithm is the same as the one described above.
These equations give the stationary points of the function $Tr(Z) - \dfrac{1}{w}\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$. The matrix elements of the Jacobian of the $h_{ij}$ functions with respect to the $y_{kl}$ variables take the following form\footnote{This isn't difficult to derive; alternatively, the Barrier-type IPM algorithm for MED is also given in section 11.8.3 in \cite{Boyd} (p. 618), wherein the expression for the matrix elements of the Jacobian has been explicitly given.}. \begin{equation} \begin{split} \label{Jacboyd} H_{kl,st} & \equiv \; \dfrac{\partial \; h_{kl}(\vv{y}) }{\partial \; y_{st}} = \; \dfrac{\partial^2 \; \left( Tr(Z) - \dfrac{1}{w}\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i})) \right)}{\partial \; y_{kl} \partial \; y_{st}} \\ = \; & \dfrac{1}{w} \sum_{i=1}^{n} Tr\left( \left(Z-p_i \ketbra{\psi_i}{\psi_i} \right)^{-1} \frac{\sigma_{kl}}{\sqrt{2}}\left(Z-p_i \ketbra{\psi_i}{\psi_i} \right)^{-1}\frac{\sigma_{st}}{\sqrt{2}} \right), \end{split} \end{equation} where $H_{kl,st}$ are the matrix elements of the Jacobian, as can be seen from equation \eqref{Jacboyd}. Let $\vv{\alpha} \in \mathbb{C}^{n^2}$ be some non-zero complex $n^2$-tuple, and let $A \equiv \sum_{i,j=1}^{n} \alpha_{ij} \frac{\sigma_{ij}}{\sqrt{2}}$. Then we have \begin{align} \label{Hve} & \sum_{k,l,s,t=1}^n \alpha_{kl}^* H_{kl,st} \alpha_{st} \\ = & \frac{1}{w} \sum_{i=1}^{n} Tr\left( \left( (Z - p_i \ketbra{\psi_i}{\psi_i})^{-\frac{1}{2}} A^\dag (Z - p_i \ketbra{\psi_i}{\psi_i})^{-\frac{1}{2}} \right)\left( (Z - p_i \ketbra{\psi_i}{\psi_i})^{-\frac{1}{2}} A (Z - p_i \ketbra{\psi_i}{\psi_i})^{-\frac{1}{2}} \right) \right)> 0.\notag \end{align} This inequality is true for all non-zero $\vv{\alpha} \in \mathbb{C}^{n^2}$. Thus the Jacobian $H$, whose matrix elements are given in equation \eqref{Jacboyd}, is positive definite throughout the feasible region. Thus any stationary point in the feasible region can only be a local minimum.
But since $H > 0 $ throughout the feasible region, there can only be \emph{one} local minimum in said region, i.e., the stationary point gives \emph{the} minimum which we are searching for\footnote{There is another way to appreciate this: since the function $Tr(Z) - \frac{1}{w} \sum_{i=1}^{n} Log \left( Det \left(Z-p_i \ketbra{\psi_i}{\psi_i} \right) \right)$ is a convex function over the feasible region, there can only be one minimum in said region, which corresponds to the point we want. The convexity of the Log-Determinant function $-Log \left( Det(X) \right)$ over the space $\{\text{all $n \times n$ matrices } X | X \ge 0\}$ is established in section 3.1 on p. 73 in \cite{Boyd}.}. Thus the inner iterations give us the minimum point $Z^{(k)} = \sum_{i,j=1}^{n} y_{ij}^{(k)} \frac{\sigma_{ij}}{\sqrt{2}}$ corresponding to some weight factor $\frac{1}{w^{(k-1)}}$. After having found the minimum point $Z^{(k)}$ in the $k$-th iteration, the $(k+1)$-th iteration is commenced by changing the weight of the barrier function, i.e., $w^{(k-1)}$ $\longrightarrow$ $w^{(k)}> w^{(k-1)}$, and performing an unconstrained minimization of $Tr(Z) - \dfrac{1}{w^{(k)} }\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$, starting from the point $Z^{(k)}$. These iterations are continued until the weight of the barrier function decreases to an insignificantly small number (set by the error tolerance). The final approximation $Z^{(k_{f})}$ is then declared to be the solution. We briefly outline the steps of the algorithm below. \bigskip Let $\epsilon$ be the error tolerance for the algorithm. To start the algorithm choose the following: a value of $\mu$ between $\sim 3$ and $100$, the weight $w^{(0)}$ $\sim$ $10$, and the initial starting point $Z^{(0)}=\mathbb{1}_n$; then set $k=1$ and iterate the following.
\begin{itemize} \item[(1)] Perform an unconstrained minimization of the function $Tr(Z) - \dfrac{1}{w^{(k-1)} }\sum_{i=1}^{n} Log(Det(Z-p_i\ketbra{\psi_i}{\psi_i}))$ with starting point $Z = Z^{(k-1)}$ (using Newton's method). \item[(2)] Store the solution as $Z^{(k)}$. Update $w^{(k)}=\mu w^{(k-1)}$. \item[(3)] Stop when $w^{(k)} \geq \dfrac{n}{\epsilon}$. \end{itemize} The number of outer iterations for a given error tolerance is constant over $n$, but can vary with the factor $\mu$ by which the weights $w^{(k-1)}$ vary over the steps\footnote{See section 11.5.3 of \cite{Boyd} for an upper bound on the number of outer iterations; particularly note figure 11.14. Also see the second example of section 11.6.3; figure 11.16 reveals the variation of the number of outer iterations with $\mu$.}. Thus the computational complexity of the algorithm is decided by the computational complexity of Newton's method within the inner iterations. In the following table we list the different steps of Newton's algorithm and give the computational complexity (time and space) for each step. \medbreak \small \begin{tabular}{ | p{1em} |p{25em}|p{7em}| p{7em} |} \hline \multicolumn{4}{|c|}{Computational Complexity for Newton's Method (inner iterations of the IPM)} \\ \hline & Step in the algorithm & \small{Time Complexity} & \small{Space Complexity} \\ \hline 1. & Computing values of $h_{ij}(\vv{y}^{(k-1)})$, by substituting $\vv{y}^{(k-1)}$ into equations \eqref{hij}, for all $ 1\le i,j \le n$ & $\mathcal{O}(n^2)$ & $\mathcal{O}(n^2)$\\ 2. & Computing the Jacobian $H$, at the point $\vv{y}^{(k-1)}$ & $\mathcal{O}(n^5)$ & $\mathcal{O}(n^4)$\\ 3. & Computing the inverse of $H$ at the point $\vv{y}^{(k-1)}$ & $\mathcal{O}(n^6)$ & $\mathcal{O}(n^4)$\\ 4.
& Computing $y_{ij}^{(k)} = y_{ij}^{(k-1)} -\sum_{s,t=1}^{n} \left(H^{-1}\right)_{ij,st}h_{st}^{(k-1)}$, $\forall \; i,j \le n$ & $\mathcal{O}(n^4)$ & $\mathcal{O}(n^2)$ \\ \hline \end{tabular}\normalsize \medbreak \textbf{Comparing Different Methods:} The table above shows that the Barrier-type IPM is as costly as the direct application of Newton's method. In fact, a closer analysis shows that directly applying Newton's method is less costly than the SDP method, along with the advantage of being simpler to implement. Also, the Taylor series method is as costly as both Newton's method and the SDP method when one can find a gram matrix $G_0$ in the close vicinity of the given gram matrix $G_1$. If one is interested in a one-time calculation for an ensemble of LI pure states, Newton's method is the most desirable method to implement among the three examined here. \section{Remarks and Conclusion} \label{conclusion} We showed how the mathematical structure of the MED problem for LI pure state ensembles could be used to obtain the solution for said problem. This was done by casting the necessary and sufficient conditions \eqref{St} and \eqref{Glb} into a rotationally invariant form, which was employed to obtain the solution by using the implicit function theorem. We also showed that this technique is simpler to implement than standard SDP techniques. As mentioned in section \ref{intro}, for fixed states $\ket{\psi_1},\ket{\psi_2},\cdots,\ket{\psi_n}$, $\mathscr{R}$ induces an invertible map on the space of probabilities, $\{p_i\}_{i=1}^n \longrightarrow \{q_i\}_{i=1}^{n}$. This naturally raises the question of whether there is a relation between the two probability distributions; for example, does one majorize the other? Or, more generally, is the entropy of $\{q_i\}_{i=1}^{n}$ always larger than the entropy of $\{p_i\}_{i=1}^{n}$, or vice versa?
The answer to this question is that there doesn't seem to be any simple property relating these two distributions: it is neither the case that one always majorizes the other, nor that the pair $(\{p_i\}_{i=1}^{n},\{q_i\}_{i=1}^{n})$ always satisfies $H(\{p_i\}) \geq H(\{q_i\})$ or always satisfies $H(\{p_i\}) < H(\{q_i\})$; examples of both cases can be found. In this paper we studied only the case of $n$-LIP ensembles. Naturally there is the question of whether a similar theory holds for more general ensembles. For the case of $m$-linearly-dependent pure state ensembles (where $m> \dim \mathcal{H}=n$), it is explicitly shown that, while a map like $\mathscr{R}^{-1}$ exists on the space of $m$ linearly dependent pure (LDP) state ensembles, $\mathscr{R}^{-1}$ isn't one-to-one \cite{Carlos}\footnote{In that sense it defeats the purpose of denoting such a map by $\mathscr{R}^{-1}$, because $\mathscr{R}^{-1}$ doesn't have an \emph{inverse}.}. From the analysis in our paper, it is clear that the one-to-one nature of the map $\mathscr{R}^{-1}$ (for $n$-LIPs) plays a crucial role in formulating the rotationally invariant necessary and sufficient conditions for the MED of said ensembles; thus it also plays a crucial role in the application of these conditions to obtain the solution for the MED of said ensembles. The non-invertibility of $\mathscr{R}^{-1}$ also shows that the optimal POVM won't necessarily vary smoothly as one varies the ensemble from one $m$-LDP to another. C. Mochon gave algebraic arguments for this in his paper \cite{Carlos}, and Ha et al.\ showed the same using geometric arguments for ensembles of three qubit states as an example \cite{Ha}. This has been shown for general qudits as well \cite{Kwon}. Besides this, there is also the fact that there are some LDPs for which the optimal POVM isn't even unique, i.e., two or more distinct POVMs give the maximum success probability for MED.
This means that as the ensemble is varied in the neighbourhood of said ensemble, the optimal POVM can undergo discontinuous \emph{jumps}. Hence, we conclude that the technique which was used in section \ref{empIFT} for $n$-LIPs can't be generalized to $m$-LDPs. Work is currently in progress to see if such a technique can be generalized to mixed state ensembles of linearly independent states \cite{SGMix14}. \paragraph{Acknowledgements} The authors wish to thank Dr.\ R.~Simon and Jon Tyson for meaningful discussions and insightful remarks. \bibliographystyle{hplain}
\section{Introduction} \label{S:Intro} QCD problems with a single heavy quark $Q$ having momentum $P = M_Q v + p$ (where $M_Q$ is the on-shell mass and $v$ is some vector with $v^2 = 1$) can be described by heavy quark effective theory (HQET) if the characteristic heavy-quark residual momentum $p$, as well as the characteristic gluon and light-quark momenta $k_i$, are $\ll M_Q$ (see, e.g., \cite{Manohar:2000dt,Grozin:2004yc,Grozin:2013bva}). The heavy quark is described by the field \begin{equation} h_{v0} = Z_h^{1/2}(\alpha_s(\mu),a(\mu)) h_v(\mu)\,, \label{Intro:h} \end{equation} where we use the $\overline{\text{MS}}$ scheme, and $Z_h$ is a minimal renormalization constant. We use the covariant gauge: $-(\partial_\mu A_0^{\mu a})^2/(2 a_0)$ is added to the Lagrangian, the gauge-fixing parameter is renormalized by the same minimal constant as the gluon field: $a_0 = Z_A(\alpha_s(\mu),a(\mu)) a(\mu)$. The HQET heavy-quark field anomalous dimension is defined as $\gamma_h = d\log Z_h/d\log\mu$. The $h_{v0}$ coordinate-space propagator in the $v$ rest frame has the form \begin{equation} S_h(x) = \delta^{(d-1)}(\vec{x}) \theta(x^0) W(x^0)\,, \label{Intro:Sh} \end{equation} where $W(t)$ is the straight Wilson line along $v$ of length $t$. The heavy-quark fields in QCD and HQET are related by the matching coefficient $z$~\cite{Grozin:2010wa}: \begin{equation} Q_0 = z_0^{1/2} h_{v0} + \mathcal{O}(1/M_Q)\,,\quad Q(\mu) = z^{1/2}(\mu) h_v(\mu) + \mathcal{O}(1/M_Q)\,. \label{Intro:z} \end{equation} The HQET field anomalous dimension $\gamma_h$ is known up to three loops~\cite{Melnikov:2000zc,Chetyrkin:2003vi}. In the first of these papers, it was obtained as a by-product of the three-loop calculation of the heavy-quark field renormalization constant in the on-shell scheme $Z_Q^{\text{os}}$, from the requirement that the renormalized matching coefficient $z(\mu)$~(\ref{Intro:z}) must be finite; in the second paper, it was confirmed by a direct HQET calculation.
Several color structures of the 4-loop result are also known: $C_F (T_F n_l)^3$~\cite{Broadhurst:1994se} ($n_l$ is the number of light flavors), $C_F^2 (T_F n_l)^2$~\cite{Grozin:2015kna,Grozin:2016ydd}, $C_F C_A (T_F n_l)^2$~\cite{Marquard:2018rwx} (from the analytical $Z_Q^{\text{os}}$ result~\cite{Lee:2013sx} using the finiteness of $z(\mu)$), $d_{FF} n_l$~\cite{Grozin:2017css}. Here $C_R$ ($R=F$, $A$) are the standard quadratic Casimirs: $t_R^a t_R^a = C_R \mathbf{1}_R$ ($t_R^a$ are the generators in the representation $R$, $\mathbf{1}_R$ is the corresponding unit matrix); $\Tr t_F^a t_F^b = T_F \delta^{ab}$; $d_{FF} = d_F^{abcd} d_F^{abcd} / N_c$, where $d_R^{abcd} = \Tr t_R^{(a} t_R^b t_R^c t_R^{d)}$ (the brackets mean symmetrization), and $N_c = \Tr\mathbf{1}_F$. The remaining terms are known numerically~\cite{Marquard:2018rwx}, from the numerical 4-loop $Z_Q^{\text{os}}$ using the finiteness of $z(\mu)$~(\ref{Intro:z}). Here I calculate the $C_F^{L-1} T_F n_l \alpha_s^L$ terms up to $L=5$ analytically (sect.~\ref{S:gammah}); the $L=4$ term agrees with the numerical result~\cite{Marquard:2018rwx}. If the heavy-quark velocity is substantially changed (e.\,g., a weak decay into another heavy quark), we have HQET with 2 unrelated fields $h_v$, $h_{v'}$. At the effective-theory level this is described by the current \begin{equation} J_0 = h^+_{v' 0} h_{v 0} = Z_J(\alpha_s(\mu)) J(\mu)\,. \label{Intro:J} \end{equation} The minimal renormalization constant $Z_J$ is gauge invariant (unlike $Z_h$) because the current $J_0$ is color singlet. The anomalous dimension of this current, also known as the cusp anomalous dimension, is defined as $\Gamma(\varphi) = d\log Z_J/d\log\mu$, where $\cosh\varphi = v\cdot v'$. The QCD cusp anomalous dimension $\Gamma(\varphi)$ is known up to three loops~\cite{Grozin:2014hna,Grozin:2015kna}. At $\varphi\ll1$ it is a regular series in $\varphi^2$. 
At $\varphi\gg1$ it is $\Gamma_l \varphi + \mathcal{O}(\varphi^0)$~\cite{Korchemsky:1987wg}, where $\Gamma_l$ is the light-like cusp anomalous dimension. Several color structures of the 4-loop $\Gamma(\varphi)$ are also known: $C_F (T_F n_l)^3$~\cite{Beneke:1995pq}, $C_F^2 (T_F n_l)^2$~\cite{Grozin:2015kna,Grozin:2016ydd}. The $d_{FF} n_l$ term is known at $\varphi\ll1$ up to $\varphi^4$~\cite{Grozin:2017css}% \footnote{Such expansion was first used in~\cite{Bagan:1993js} at 2 loops.}. For the $\varphi\gg1$ asymptotics (i.\,e. $\Gamma_l$), both $n_l^2$ terms are known from combining the $C_F^2 (T_F n_l)^2$ result~\cite{Grozin:2015kna,Grozin:2016ydd} and the large-$N_c$ $N_c^2 n_l^2$ result~\cite{Henn:2016men}. Large-$N_c$ results for $\Gamma_l$ at $n_l^1$~\cite{Henn:2016men,Moch:2017uml} and $n_l^0$~\cite{Lee:2016ixa,Moch:2017uml} are also known analytically. Contributions of individual color structures of $\Gamma_l$ at $n_l^{1,0}$ are only known numerically~\cite{Moch:2017uml}. Here I calculate the $C_F^{L-1} T_F n_l \alpha_s^L$ terms up to $L=5$ in $\Gamma(\varphi)$ analytically, as exact functions of $\varphi$ (sect.~\ref{S:Gamma}). In particular, I find their $\varphi\gg1$ asymptotics; the analytical $L=4$ result agrees with the numerical one~\cite{Moch:2017uml}. In QED without light lepton flavors ($n_l=0$), as explained below, both $\gamma_h$ and $\Gamma(\varphi)$ are exactly given by the one-loop formulas. When $n_l\ne0$, higher corrections appear. Combining the 4-loop $\gamma_h$ results for $C_F (T_F n_l)^3$~\cite{Broadhurst:1994se}, $C_F^2 (T_F n_l)^2$~\cite{Grozin:2015kna,Grozin:2016ydd}, $C_F^3 T_F n_l$ (sect.~\ref{S:gammah}) and $d_{FF} n_l$~\cite{Grozin:2017css} structures, I obtain the complete analytical 4-loop result for the Bloch--Nordsieck field anomalous dimension $\gamma_h$ in QED (sect.~\ref{S:QED}). 
Combining the 4-loop $\Gamma(\varphi)$ full results for $C_F (T_F n_l)^3$~\cite{Beneke:1995pq}, $C_F^2 (T_F n_l)^2$~\cite{Grozin:2015kna,Grozin:2016ydd}, $C_F^3 T_F n_l$ (sect.~\ref{S:Gamma}) structures with the $d_{FF} n_l$ term~\cite{Grozin:2017css} (expansion up to $\varphi^4$), I obtain the expansion of the 4-loop QED $\Gamma(\varphi)$ up to $\varphi^4$ (sect.~\ref{S:QED}). \section{HQET field anomalous dimension: the $C_F^{L-1} T_F n_l \alpha_s^L$ terms} \label{S:gammah} This is a QED problem. Due to exponentiation~\cite{Gatheral:1983cz,Frenkel:1984pz}, the coordinate-space propagator of the Bloch--Nordsieck field (i.\,e.\ the straight Wilson line $W$) is \begin{equation} W = \exp \left( \sum w_i \right)\,, \label{gammah:exp} \end{equation} where $w_i$ are single-web diagrams. Due to $C$ parity conservation in QED, webs have even numbers of legs (fig.~\ref{F:webs}). In QED with $n_l=0$ there is only 1 single-web diagram: fig.~\ref{F:webs}a with the free photon propagators. Therefore, $\log W$ is exactly 1-loop; the $\beta$ function is 0, and hence $\gamma_h$ is also exactly 1-loop. At $n_l>0$ corrections to the photon propagator in fig.~\ref{F:webs}a appear. Webs with 4 legs (fig.~\ref{F:webs}b) first appear at 4 loops; they have been calculated in~\cite{Grozin:2017css}. All contributions to $\log W$~(\ref{gammah:exp}) are gauge invariant except the 1-loop one, because proper vertex functions with any numbers of photon legs are gauge invariant and transverse with respect to each photon leg due to the QED Ward identities. 
\begin{figure}[ht] \begin{center} \begin{picture}(94,20) \put(21,9.75){\makebox(0,0){\includegraphics{w2.pdf}}} \put(73,12.5){\makebox(0,0){\includegraphics{w4.pdf}}} \put(21,0){\makebox(0,0)[b]{a}} \put(73,0){\makebox(0,0)[b]{b}} \end{picture} \end{center} \caption{Webs: (a) 2-leg (the thick line is the full photon propagator); (b) 4-leg (the blob is the sum of connected diagrams).} \label{F:webs} \end{figure} The full momentum-space photon propagator in the covariant gauge is \begin{equation} D^{\mu\nu}(k) = - \frac{i}{k^2} \left(g^{\mu\nu} - \frac{k^\mu k^\nu}{k^2}\right) \frac{1}{1 - \Pi(k^2)} - i a_0 \frac{k_\mu k_\nu}{(k^2)^2}\,, \label{gammah:D} \end{equation} where $\Pi(k^2)$ is the photon self-energy: \begin{equation} \Pi = \sum_{L=1}^\infty \Pi_L A_0^L (-k^2)^{-L\varepsilon}\,,\quad A_0 = \frac{e_0^2}{(4\pi)^{d/2}} e^{-\gamma\varepsilon} \label{gammah:Pi} \end{equation} ($e_0^2$ has dimensionality $m^{2\varepsilon}$, so that the power of $-k^2$ is obvious; $\gamma$ is the Euler constant). Only the 0-loop term in~(\ref{gammah:D}) is gauge dependent. Writing $\Pi_L$ as $\tilde{\Pi}_L n_l + (n_l^{>1} \text{ terms})$, we obtain in the Landau gauge $a_0=0$ \begin{align} &D^{\mu\nu}(k) = \tilde{D}_0^{\mu\nu}(k) + n_l \sum_{L=1}^\infty \tilde{\Pi}_L \tilde{D}_L^{\mu\nu}(k) A_0^L + (n_l^{>1} \text{ terms})\,, \nonumber\\ &\tilde{D}_L^{\mu\nu}(k) = \frac{i}{(-k^2)^{1+L\varepsilon}} \left(g^{\mu\nu} + \frac{k^\mu k^\nu}{-k^2}\right)\,. 
\label{gammah:Dk} \end{align} The $\overline{\text{MS}}$ charge renormalization is \begin{align} &A_0 = \mu^{2\varepsilon} \frac{\alpha(\mu)}{4\pi} Z_\alpha(\alpha(\mu))\,, \label{gammah:ren}\\ &\frac{d\log\alpha(\mu)}{d\log\mu} = - 2 \varepsilon - 2 \beta(\alpha(\mu))\,,\quad \beta(\alpha) = \frac{1}{2} \frac{d\log Z_\alpha}{d\log\mu} = \sum_{L=1}^\infty \beta_L \left(\frac{\alpha}{4\pi}\right)^L \nonumber \end{align} (note that here we call the $L$-loop $\beta$ function coefficient $\beta_L$, not $\beta_{L-1}$ as usually done; this makes subsequent formulas more logical). In QED $\log\left(1 - \Pi(k^2)\right) = \log Z_\alpha + (\text{finite})$; writing $\beta_L = \bar{\beta}_L n_l + (n_l^{>1} \text{ terms})$, we see that $1/\varepsilon$ terms in $\tilde{\Pi}_L$ are related to $\bar{\beta}_L$: \begin{equation} \tilde{\Pi}_L = \frac{\bar{\beta}_L}{L} \frac{1}{\varepsilon} + \bar{\Pi}_L + \mathcal{O}(\varepsilon)\,. \label{gammah:Pieps} \end{equation} Here the $\beta$ function coefficients are~\cite{Gorishnii:1990kd} \begin{equation} \bar{\beta}_1 = - \frac{4}{3}\,,\quad \bar{\beta}_2 = - 4\,,\quad \bar{\beta}_3 = 2\,,\quad \bar{\beta}_4 = 46\,, \label{gammah:Pi2} \end{equation} and~\cite{Ruijl:2017eht} \begin{align} &\bar{\Pi}_1 = - \frac{20}{9}\,,\quad \bar{\Pi}_2 = 16 \zeta_3 - \frac{55}{3}\,,\quad \bar{\Pi}_3 = - 2 \left(80 \zeta_5 - \frac{148}{3} \zeta_3 - \frac{143}{9}\right)\,, \nonumber\\ &\Bar{\Pi}_4 = 2240 \zeta_7 - 1960 \zeta_5 -104 \zeta_3 + \frac{31}{3}\,. 
\label{gammah:Pires} \end{align} The coordinate-space full photon propagator is the Fourier transform of~(\ref{gammah:Dk}): \begin{align} &D^{\mu\nu}(x) = \frac{1}{(4\pi)^{d/2}} \left[ \bar{D}_0^{\mu\nu}(x) + n_l \sum_{L=1}^\infty \tilde{\Pi}_L \bar{D}_L^{\mu\nu}(x) A_0^L \right] + (n_l^{>1} \text{ terms})\,, \label{gammah:Dx}\\ &\bar{D}_L^{\mu\nu}(x) = \frac{\Gamma\bigl(1-(L+1)\varepsilon\bigr)}{\Gamma\bigl(1+L\varepsilon\bigr)} \left(\frac{4}{-x^2}\right)^{1-(L+1)\varepsilon} \nonumber\\ &\qquad{}\times\left[- g^{\mu\nu} + \frac{g^{\mu\nu} + 2 \bigl(1 - (L+1) \varepsilon\bigr) x^\mu x^\nu / (-x^2)}{2 (1+L\varepsilon)}\right]\,. \nonumber \end{align} The sum of single-web diagrams (fig.~\ref{F:webs}) in the Landau gauge, analytically continued to Euclidean $t=-i\tau$, is \begin{equation} \log W = S_1 A + n_l \sum_{L=2}^\infty S_L \tilde{\Pi}_{L-1} A^L + (n_l^{>1} \text{ terms}) + (w_{>2 \text{ legs}} \text{ terms})\,,\quad A = A_0 \left(\frac{\tau}{2}\right)^{2\varepsilon} e^{2\gamma\varepsilon}\,, \label{gammah:wn} \end{equation} where the 1-loop HQET integral \begin{equation} S_L = \frac{3 - 2\varepsilon}{L\varepsilon (1 - 2 L\varepsilon) (1 + (L-1)\varepsilon)} \frac{\Gamma(1 - L\varepsilon)}{\Gamma(1 + (L-1)\varepsilon)} e^{- (2L-1) \gamma \varepsilon} = \frac{3}{L\varepsilon} + 3 + \frac{1}{L} + \mathcal{O}(\varepsilon) \label{gammah:S} \end{equation} can be calculated in coordinate space~(\ref{gammah:Dx}), or as a Fourier transform of the momentum-space HQET propagator. 
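The $\varepsilon$-expansion quoted in~(\ref{gammah:S}) can be spot-checked numerically. The following Python sketch (an independent sanity check, not part of the derivation) evaluates the closed form of $S_L$, with the Euler constant entering through the factor $e^{-(2L-1)\gamma\varepsilon}$, and compares it with $3/(L\varepsilon) + 3 + 1/L$ at small $\varepsilon$:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant, enters via exp(-(2L-1)*gamma*eps)

def S(L, eps):
    # closed form of the 1-loop HQET integral S_L as a function of eps
    pref = (3 - 2 * eps) / (L * eps * (1 - 2 * L * eps) * (1 + (L - 1) * eps))
    return (pref * math.gamma(1 - L * eps) / math.gamma(1 + (L - 1) * eps)
            * math.exp(-(2 * L - 1) * EULER_GAMMA * eps))

def S_expanded(L, eps):
    # the stated expansion S_L = 3/(L eps) + 3 + 1/L + O(eps)
    return 3 / (L * eps) + 3 + 1 / L
```

For $\varepsilon=10^{-5}$ and $L=1,2,3$ the two expressions agree up to the neglected $\mathcal{O}(\varepsilon)$ remainder.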
Now we re-express $\log W$ via the renormalized $\alpha(\mu)$ at $\mu_0 = 2 e^{-\gamma} / \tau$ (it is sufficient to do this in the 1-loop term) and obtain \begin{align} &\log W = S_1 \frac{\alpha}{4\pi} + n_l \sum_{L=2}^\infty \left[S_L \left(\frac{\bar{\beta}_{L-1}}{L-1} \frac{1}{\varepsilon} + \bar{\Pi}_{L-1}\right) - S_1 \frac{\bar{\beta}_{L-1}}{L-1} \frac{1}{\varepsilon}\right] \left(\frac{\alpha}{4\pi}\right)^L \nonumber\\ &{} + (n_l^{>1} \text{ terms}) + (w_{>2 \text{ legs}} \text{ terms}) = \log Z_h + (\text{finite})\,. \label{gammah:logW} \end{align} Extracting $\log Z_h$ and differentiating it in $\log\mu$, we obtain $\gamma_h$. Restoring the color factors and adding the gauge dependent term% \footnote{In the arbitrary covariant gauge the extra term to be added to $w_1$~(\ref{gammah:wn}) is $\Gamma(-\varepsilon) e^{-\gamma\varepsilon} a_0 A = \Gamma(-\varepsilon) e^{-\gamma\varepsilon} a(\mu_0) \alpha(\mu_0)/(4\pi)$, because in QED $Z_\alpha = Z_A^{-1}$. Hence the extra term to be added to $\log Z_h$ is purely 1-loop: $- (a/\varepsilon) \alpha/(4\pi)$. 
In QED $d\log\left(a(\mu) \alpha(\mu)\right)/d\log\mu = -2\varepsilon$ exactly, and hence the extra term in $\gamma_h$~(\ref{gammah:gamma}) is also purely 1-loop: $2 a \alpha/(4\pi)$.}, we obtain \begin{align} \gamma_h ={}& \frac{\alpha_s}{4\pi} \biggl[ 2 (a-3) C_F + T_F n_l \sum_{L=1}^\infty \bigl(- 6 \bar{\Pi}_L + 2 \bar{\beta}_L\bigr) \left(C_F \frac{\alpha_s}{4\pi}\right)^L \biggr] + (\text{other color structures}) \nonumber\displaybreak\\ {}={}& 2 (a-3) C_F \frac{\alpha_s}{4\pi} + T_F n_l C_F \left(\frac{\alpha_s}{4\pi}\right)^2 \biggl[ \frac{32}{3} - 6 \left(16 \zeta_3 - 17\right) C_F \frac{\alpha_s}{4\pi} \nonumber\\ &{} + \frac{16}{3} \left(180 \zeta_5 - 111 \zeta_3 - 35\right) \left(C_F \frac{\alpha_s}{4\pi}\right)^2\nonumber\\ &{} - 6 (2240 \zeta_7 - 1960 \zeta_5 - 104 \zeta_3 - 5) \left(C_F \frac{\alpha_s}{4\pi}\right)^3 + \mathcal{O}(\alpha_s^4) \biggr] \nonumber\\ &{} + (\text{other color structures})\,. \label{gammah:gamma} \end{align} We have reproduced the $C_F^2 T_F n_l$ term in the 3-loop anomalous dimension~\cite{Melnikov:2000zc,Chetyrkin:2003vi} by a simpler method. The coefficient of $C_F^3 T_F n_l (\alpha_s/\pi)^4$ in $\frac{1}{2} \gamma_h$ is \begin{equation*} \frac{180 \zeta_5 - 111 \zeta_3 - 35}{96} \approx 0.189778\,, \end{equation*} in perfect agreement with the numerical result $0.1894\pm0.0030$ (Table~III in~\cite{Marquard:2018rwx}). \section{QCD cusp anomalous dimension: the $C_F^{L-1} T_F n_l \alpha_s^L$ terms} \label{S:Gamma} Now we consider the Green function ${<}T\{h_v^+(x)J(0)h_{v'}(x')\}{>}$. Up to obvious $\delta$ functions similar to~(\ref{Intro:Sh}), it is the broken Wilson line $W(\varphi)$ from $x=-vt$ to $0$ and then to $x'=v't'$. Renormalization constants cannot depend on kinematics of Green functions we choose to calculate, and so we choose $t' = t$ to have a single-scale problem. 
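As a quick standalone numerical cross-check of this coefficient (not part of the calculation itself), the zeta values can be evaluated by truncated Dirichlet series:

```python
def zeta(s, N=200000):
    # truncated Dirichlet series for zeta(s); for s >= 3 the neglected tail
    # is below N**(1 - s)/(s - 1), i.e. negligible at this precision
    return sum(k**-s for k in range(1, N + 1))

z3, z5 = zeta(3), zeta(5)
# coefficient of C_F^3 T_F n_l (alpha_s/pi)^4 in gamma_h/2
coeff = (180 * z5 - 111 * z3 - 35) / 96
```

Evaluating `coeff` reproduces $0.189778$ to the quoted digits.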
We have \begin{equation} \log\frac{W(\varphi)}{W(0)} = \sum(w_i(\varphi)-w_i(0))\,, \label{Gamma:exp} \end{equation} where the sum runs over all single-web diagrams. Diagrams in which all photon vertices are to the left (or to the right) of the $J$ vertex cancel in $w_i(\varphi)-w_i(0)$. The remaining 2-leg webs are shown in fig.~\ref{F:cusp}. At 4 loops 4-leg webs appear; they have been calculated (at $\varphi\ll1$) in~\cite{Grozin:2017css}. \begin{figure}[ht] \begin{center} \begin{picture}(52,26) \put(26,12){\makebox(0,0){\includegraphics{cusp.pdf}}} \put(26,24){\makebox(0,0){$0$}} \put(1.5,2){\makebox(0,0){$-vt$}} \put(6.5,9){\makebox(0,0){$-vt_1$}} \put(49,2){\makebox(0,0){$v't$}} \put(41,12){\makebox(0,0){$v't_2$}} \end{picture} \end{center} \caption{Cusp: the 2-leg webs.} \label{F:cusp} \end{figure} The $L$-loop $n_l^1$ contribution is ($L\ge2$) \begin{equation} w_L(\varphi) = - \tilde{\Pi}_{L-1} n_l A_0^L e^{L\gamma\varepsilon} \int_0^t dt_1 \int_0^t dt_2\,v_\mu v'_\nu \bar{D}_{L-1}^{\mu\nu}(vt_1+v't_2)\,, \label{Gamma:wL} \end{equation} where $\bar{D}_L^{\mu\nu}(x)$ is given by~(\ref{gammah:Dx}). We can write it, together with the 1-loop Landau-gauge contribution, in the form \begin{equation} w_1(\varphi) = V_1(\varphi) A\,,\quad w_L(\varphi) = V_L(\varphi) \tilde{\Pi}_{L-1} n_l A^L\,, \label{Gamma:w} \end{equation} where \begin{equation} V_L(\varphi) = 4 \frac{\Gamma(1-u)}{\Gamma(1+u-\varepsilon)} e^{-(2L-1)\gamma\varepsilon} \left[ - I_1(\varphi) \cosh\varphi + \frac{u I_1(\varphi) \cosh\varphi - (1-u) I_2(\varphi)}{2(1+u-\varepsilon)} \right]\,, \label{Gamma:VL} \end{equation} $u=L\varepsilon$.
The integrals $I_{1,2}(\varphi)$ are \begin{align} I_1(\varphi) ={}& \int_0^1 dt_1 \int_0^1 dt_2 (e^{\varphi/2} t_1 + e^{-\varphi/2} t_2)^{-1+u} (e^{-\varphi/2} t_1 + e^{\varphi/2} t_2)^{-1+u} \nonumber\\ {}={}& \frac{e^{-2 u \varphi}}{4 u^2 \sinh\varphi} \bigl(g_1(\varphi) - g_2(\varphi)\bigr)\,, \label{Gamma:I1}\\ I_2(\varphi) ={}& \int_0^1 dt_1 \int_0^1 dt_2 (e^{\varphi/2} t_1 + e^{-\varphi/2} t_2)^u (e^{-\varphi/2} t_1 + e^{\varphi/2} t_2)^{-2+u} \nonumber\\ {}={}& \frac{1}{2 u (1-u)} \left[1 + \frac{e^{-2 u \varphi}}{2 \sinh\varphi} \bigl(e^{-\varphi} g_1(\varphi) - e^\varphi g_2(\varphi)\bigr)\right]\,, \label{Gamma:I2}\\ I_1(0) ={}& I_2(0) = \frac{2 - 2^{2u}}{2 u (1-2u)}\,, \label{Gamma:I0}\\ g_1(\varphi) ={}& (e^\varphi+1)^{2u} f_1(1-e^\varphi) - f_1(1-e^{2\varphi})\,,\quad g_2(\varphi) = (e^\varphi+1)^{2u} f_2(1-e^\varphi) - f_2(1-e^{2\varphi})\,, \nonumber\\ f_1(x) ={}& \F{2}{1}{-2u,-u\\1-2u}{x} = 1 + 2 \Li2(x) u^2 + \mathcal{O}(u^3)\,, \label{Gamma:f1}\\ f_2(x) ={}& \F{2}{1}{-2u,1-u\\1-2u}{x} = 1 + 2 \log(1-x) u + \left(\log^2(1-x) - 2 \Li2(x)\right) u^2 + \mathcal{O}(u^3)\,. \label{Gamma:f2} \end{align} We obtain \begin{align} &V_L(\varphi) - V_L(0) = - 2 \frac{\varphi \coth\varphi - 1}{L \varepsilon} + \bar{V}(\varphi) + \mathcal{O}(\varepsilon)\,, \label{Gamma:VLres}\\ &\bar{V}(\varphi) = \coth\varphi \left[4 \Li2(1-e^{2\varphi}) - 4 \Li2(1-e^\varphi) + \varphi \left(4 \log(e^\varphi+1) + \varphi\right)\right] \nonumber\\ &\quad{} + 2 \log(e^\varphi+1) - \varphi - 6 \log 2 + 4 = \bar{V}(-\varphi)\,. \nonumber \end{align} Similarly to~(\ref{gammah:logW}), we substitute~(\ref{Gamma:VL}) into \begin{equation*} \log\frac{W(\varphi)}{W(0)} = \log Z_J + (\text{finite}) \end{equation*} and re-express it via $\alpha(\mu_0)$ (it is sufficient to do this in the 1-loop term). 
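The stated symmetry $\bar{V}(\varphi)=\bar{V}(-\varphi)$ can be checked numerically. The sketch below (an illustrative check, not part of the derivation) implements the real dilogarithm from its defining power series, extended to arguments below $-1$ by the standard inversion formula $\mathrm{Li}_2(x) = -\mathrm{Li}_2(1/x) - \pi^2/6 - \tfrac12\ln^2(-x)$, valid for $x<0$:

```python
import math

def li2(x, N=4000):
    # real dilogarithm Li_2(x) for real x <= 1: power series for |x| <= 1,
    # inversion formula Li_2(x) = -Li_2(1/x) - pi^2/6 - ln^2(-x)/2 for x < -1
    if x < -1.0:
        return -li2(1.0 / x, N) - math.pi**2 / 6 - 0.5 * math.log(-x)**2
    return sum(x**k / k**2 for k in range(1, N + 1))

def Vbar(phi):
    # the function bar-V(phi) from the epsilon-expansion of V_L(phi) - V_L(0)
    coth = math.cosh(phi) / math.sinh(phi)
    bracket = (4 * li2(1 - math.exp(2 * phi)) - 4 * li2(1 - math.exp(phi))
               + phi * (4 * math.log(math.exp(phi) + 1) + phi))
    return (coth * bracket + 2 * math.log(math.exp(phi) + 1)
            - phi - 6 * math.log(2) + 4)
```

Evaluating `Vbar` at a few values of $\pm\varphi$ confirms the evenness to machine-level accuracy.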
Note that $\bar{V}(\varphi)$ does not depend on $L$; as a result, terms $\bar{\beta}_{L-1} \bar{V}(\varphi)$ cancel in $\log Z_J$ (in contrast to the first line of~(\ref{gammah:gamma}) where they contributed because of the $1/L$ term in~(\ref{gammah:S})). Differentiating $\log Z_J$ we obtain \begin{align} &\Gamma(\varphi) = 4 (\varphi \coth\varphi - 1) \frac{\alpha_s}{4\pi} \biggl[C_F + T_F n_l \sum_{L=1}^\infty \bar{\Pi}_L \left(C_F \frac{\alpha_s}{4\pi}\right)^L \biggr] + (\text{other color structures}) \nonumber\\ &{} = 4 (\varphi \coth\varphi - 1) C_F \frac{\alpha_s}{4\pi} \biggl\{1 + T_F n_l \frac{\alpha_s}{4\pi} \biggl[ - \frac{20}{9} + \left(16 \zeta_3 - \frac{55}{3}\right) C_F \frac{\alpha_s}{4\pi} \nonumber\displaybreak\\ &\quad{} - 2 \left(80 \zeta_5 - \frac{148}{3} \zeta_3 - \frac{143}{9}\right) \left(C_F \frac{\alpha_s}{4\pi}\right)^2 \nonumber\\ &\quad{} + \left(2240 \zeta_7 - 1960 \zeta_5 - 104 \zeta_3 + \frac{31}{3}\right) \left(C_F \frac{\alpha_s}{4\pi}\right)^3 + \mathcal{O}(\alpha_s^4) \biggr] \biggr\} \nonumber\\ &\quad{} + (\text{other color structures})\,. \label{Gamma:Gamma} \end{align} Thus we have reproduced the 3-loop $C_F^2 T_F n_l$ term in~\cite{Grozin:2014hna,Grozin:2015kna}. The coefficient of $2 T_F n_l C_F^3 (\alpha_s/(4\pi))^4$ in the light-like cusp anomalous dimension $\Gamma_l$ is \begin{equation*} - 4 \left(80 \zeta_5 - \frac{148}{3} \zeta_3 - \frac{143}{9}\right) \approx - 31.055431\,, \end{equation*} in perfect agreement with the numerical result $-31.00\pm0.4$ (Table~2 in~\cite{Moch:2017uml}). 
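As with the $\gamma_h$ coefficient, this number admits a quick standalone numerical cross-check (not part of the calculation itself) using truncated zeta sums:

```python
def zeta(s, N=200000):
    # truncated Dirichlet series for zeta(s); the tail beyond N terms is
    # below N**(1 - s)/(s - 1), negligible at this precision for s >= 3
    return sum(k**-s for k in range(1, N + 1))

# coefficient of 2 T_F n_l C_F^3 (alpha_s/(4 pi))^4 in the light-like
# cusp anomalous dimension Gamma_l
coeff = -4 * (80 * zeta(5) - 148 / 3 * zeta(3) - 143 / 9)
```

Evaluating `coeff` reproduces $-31.055431$ to the quoted digits.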
The $C_F^{L-1} T_F n_l$ terms in the quark--antiquark potential in Coulomb gauge are given by a single Coulomb-gluon propagator: \begin{align} &V(\vec{q}^{\,}) = - \frac{4 \pi \alpha_s}{\vec{q}^{\,2}} \biggl[ C_F + T_F n_l \sum_{L=1}^\infty \bar{\Pi}_L \left(C_F \frac{\alpha_s}{4\pi}\right)^L \biggr] + (\text{other color structures}) \nonumber\\ &{} = - \frac{4 \pi \alpha_s}{\vec{q}^{\,2}} C_F \biggl\{ 1 + T_F n_l \frac{\alpha_s}{4\pi} \biggl[ - \frac{20}{9} + \biggl(16 \zeta_3 - \frac{55}{3}\biggr) C_F \frac{\alpha_s}{4\pi} \nonumber\\ &\quad{} - 2 \biggl(80 \zeta_5 - \frac{148}{3} \zeta_3 - \frac{143}{9}\biggr) \biggl(C_F \frac{\alpha_s}{4\pi}\biggr)^2 \nonumber\\ &\quad{} + \biggl(2240 \zeta_7 - 1960 \zeta_5 - 104 \zeta_3 + \frac{31}{3} \biggr) \biggl(C_F \frac{\alpha_s}{4\pi}\biggr)^3 + \mathcal{O}(\alpha_s^4) \biggr] \biggr\} \nonumber\\ &{} + (\text{other color structures})\,, \label{Gamma:V} \end{align} where $\alpha_s$ is taken at $\mu^2 = \vec{q}^{\,2}$. The terms up to $\alpha_s^4$ agree with~\cite{Smirnov:2008pn}. The cusp anomalous dimension at Euclidean angle $\varphi_E = \pi - \delta$, $\delta\to0$, is related to the quark--antiquark potential~\cite{Kilian:1993nk} \begin{equation} \delta\,\Gamma(\pi-\delta)\Big|_{\delta\to0} = \frac{\vec{q}^{\,2} V(\vec{q}^{\,})}{4\pi}\,; \label{Gamma:delta} \end{equation} this relation follows from conformal invariance, and in QCD it is broken by extra terms proportional to coefficients of the $\beta$ function~\cite{Grozin:2014hna,Grozin:2015kna}. Comparing~(\ref{Gamma:Gamma}) with~(\ref{Gamma:V}), we see that the relation~(\ref{Gamma:delta}) for the $C_F^{L-1} T_F n_l$ color structures is valid to all orders in $\alpha_s$. \section{QED results} \label{S:QED} The 4-loop anomalous dimension of the QED Bloch--Nordsieck field is now known completely analytically.
Adding terms with higher powers of $n_l$ from~\cite{Grozin:2015kna,Grozin:2016ydd} and the 4-loop contribution of the webs with 4 legs~\cite{Grozin:2017css}, we obtain \begin{align} \gamma_h ={}& 2 (a-3) \frac{\alpha}{4\pi} + \frac{32}{3} n_l \left(\frac{\alpha}{4\pi}\right)^2 + \left[ - 6 \left(16 \zeta_3 - 17\right) + \frac{160}{27} n_l \right] n_l \left(\frac{\alpha}{4\pi}\right)^3 \nonumber\\ &{} + \biggl[ 16 \left(40 \zeta_5 + \frac{32}{3} \pi^2 \zeta_3 - 21 \zeta_3 - \frac{32}{3} \pi^2 - \frac{35}{3}\right) \nonumber\\ &\hphantom{{}+\biggl[\biggr.} - 32 \left(\frac{\pi^4}{15} - 12 \zeta_3 + \frac{103}{27}\right) n_l - \frac{256}{9} \left(\zeta_3 - \frac{1}{3}\right) n_l^2 \biggr] n_l \left(\frac{\alpha}{4\pi}\right)^4\,. \label{QED:gammah} \end{align} Adding terms with higher powers of $n_l$~\cite{Grozin:2015kna,Grozin:2016ydd} and the 4-legs webs contribution~\cite{Grozin:2017css,Bruser:2018aud} (known only up to $\varphi^6$) to~(\ref{Gamma:Gamma}), we obtain the QED cusp anomalous dimension up to 4 loops \begin{align} \Gamma(\varphi) ={}& 4 (\varphi \coth\varphi - 1) \frac{\alpha}{4\pi} \biggl\{ 1 + n_l \frac{\alpha}{4\pi} \biggl[ - \frac{20}{9} + \left(16 \zeta_3 - \frac{55}{3} - \frac{16}{27} n_l\right) \frac{\alpha}{4\pi} \nonumber\\ &\quad{} - \frac{8}{9} \biggl(\frac{2}{5} \pi^4 - 80 \zeta_3 + \frac{299}{9} - \frac{8}{3} \biggl(2 \zeta_3 - \frac{1}{3}\biggr) n_l\biggr) n_l \left(\frac{\alpha}{4\pi}\right)^2 \biggr] \biggr\} \nonumber\\ &{} - \frac{8}{3} \varphi^2 \biggl[80 \zeta_5 + \frac{128}{3} \pi^2 \zeta_3 - \frac{40}{9} \pi^4 - \frac{148}{3} \zeta_3 - \frac{80}{9} \pi^2 - \frac{143}{9} \nonumber\\ &\quad{} + \frac{1}{3} \biggl(112 \zeta_5 + \frac{512}{75} \pi^2 \zeta_3 - \frac{392}{225} \pi^4 - \frac{6076}{75} \zeta_3 + \frac{1256}{225} \pi^2 + \frac{2371}{225}\biggr) \varphi^2 \nonumber\\ &\quad{} - \frac{2}{3} \biggl( \frac{304}{49} \zeta_5 + \frac{512}{1225} \pi^2 \zeta_3 - \frac{42004}{11025} \zeta_3 - \frac{3368}{33075} \pi^4 + 
\frac{10664}{33075} \pi^2 - \frac{9341}{33075} \biggr) \varphi^4 \nonumber\\ &\quad{} + \mathcal{O}(\varphi^6) \biggr] n_l \left(\frac{\alpha}{4\pi}\right)^4\,. \label{QED:Gamma} \end{align} The $n_l \alpha^4$ term is known only up to $\varphi^6$; \begin{equation*} \varphi \coth\varphi - 1 = \frac{\varphi^2}{3} \left(1 - \frac{\varphi^2}{15} + \frac{2}{315} \varphi^4 + \mathcal{O}(\varphi^6)\right)\,. \end{equation*} \acknowledgments I am grateful to J.\,M.~Henn and M.~Stahlhofen for hospitality in Mainz and useful discussions; to M.~Steinhauser and P.~Marquard for discussing~\cite{Marquard:2018rwx}; to A.~Vogt for discussing~\cite{Moch:2017uml}. The work has been supported by the PRISMA cluster of excellence, JGU Mainz, and partially by the Russian Ministry of Education and Science.
\section{Introduction} \paragraph{Background} Markov chain Monte Carlo (MCMC) is a powerful technique for designing randomized approximation algorithms for {\#P}-hard problems. Jerrum et al.~\cite{JVV86} showed the polynomial-time equivalence between {\em almost} uniform generation and randomized approximate counting for self-reducible problems. A number of fully polynomial-time randomized approximation schemes (FPRAS) based on their technique have been developed for {\#}P-hard problems, such as the volume of a convex body~\cite{DFK1991,LV06,CV15}, the integral of a log-concave function~\cite{LV06}, the partition function of the Ising model~\cite{JS93}, and counting bipartite matchings~\cite{JS96}. When designing an FPRAS based on the technique, it is important that the {\em total variation distance} of the approximate distribution from the target distribution is sufficiently small, and hence analyses of the mixing times of Markov chains are central issues in the series of works on MCMC-based FPRAS, in order to guarantee that the total variation distance is small. See also Section~\ref{subsec:RWMC} for the terminology of Markov chains. In contrast, not many results are known about {\em deterministic} approximation algorithms for {\#}P-hard problems. A remarkable advance is the correlation decay technique, independently devised by Weitz~\cite{Weitz2006} and Bandyopadhyay and Gamarnik~\cite{BG08}, and there are several recent developments on the technique. For counting $0$-$1$ knapsack solutions, Gopalan et al.~\cite{GKM10} and Stefankovic et al.~\cite{SVV2012} gave deterministic approximation algorithms (see also~\cite{GKMSVV2011}). Ando and Kijima~\cite{AK14} gave an FPTAS based on approximate convolutions for computing the volume of a $0$-$1$ knapsack polytope. A direct derandomization of MCMC algorithms is not known yet, but it holds potential as a general scheme for designing deterministic approximation algorithms for {\#}P-hard problems.
{\em Deterministic random walks}~\cite{CS06,CDST07,DF09,CDFS10,KKM12,KKM13,SYKY13} may be used as a substitute for Markov chains for this purpose. \paragraph{Deterministic random walk} A deterministic random walk is a deterministic process analogous to a (multiple) random walk\footnote{``multiple random walk'' means independent random walks of many tokens.}. A configuration $\chi^{(t)} \in \mathbb{Z}_{\geq 0}^V$ of $M$ tokens distributed over a (finite) vertex set $V$ is deterministically updated from time $t$ to $t+1$ by routers equipped at the vertices. The router on a vertex $u \in V$ deterministically serves tokens on $u$ to each neighboring vertex $v$ with a ratio (about) $P_{uv} \in [0,1]$ such that $\sum_{v \in V} P_{uv}=1$, i.e., $P=(P_{uv}) \in \mathbb{R}^{V \times V}$ is a transition matrix (when $V$ is finite). See Section~\ref{subsec:detRW} for the detailed description of the model with which this paper is concerned. Note that the expected configuration $\mu^{(t)} \in \mathbb{R}_{\geq 0}^V$ of $M$ tokens in a multiple random walk at time $t$ is given by $\mu^{(t)}=\chi^{(0)}P^t$ on the assumption that $\chi^{(0)}=\mu^{(0)}$. Cooper and Spencer~\cite{CS06} investigated the rotor-router model, which is a deterministic random walk corresponding to a simple random walk, and showed for the $d$-dimensional (infinite) integer lattice that the maximum vertex-wise discrepancy $\|\chi^{(t)}-\mu^{(t)}\|_{\infty}$ is upper bounded by a constant $c_d$, which depends only on~$d$ but is independent of the total number of tokens. Later, it was shown that $c_1\simeq 2.29$ \cite{CDST07} and $c_2$ is about $7.29$ or $7.83$ depending on the routers~\cite{DF09}. On the other hand, Cooper et al.~\cite{CDFS10} gave an example of a rotor-router on the infinite $k$-regular tree whose vertex-wise discrepancy grows as ${\rm \Omega}(\sqrt{kt})$ for an arbitrarily fixed~$t$.
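To make the model concrete, the following Python sketch simulates a rotor-router-style deterministic random walk for the lazy simple random walk on an $n$-cycle (stay with probability $1/2$, move to each neighbour with probability $1/4$) and compares the token configuration $\chi^{(t)}$ with the expected configuration $\mu^{(t)}=\chi^{(0)}P^t$. The $n$-cycle, the lazy walk, and the rotor order are illustrative choices, not the SRT-router of~\cite{SYKY13}: with $M=4^6$ tokens started on a single vertex, every count being served stays divisible by $4$ for the first six steps, so the rotors split exactly and the $L_1$ discrepancy remains zero; afterwards rounding effects appear.

```python
# rotor-router-style deterministic walk vs. its expected (random-walk) profile
def lazy_cycle_P(n):
    # transition matrix of the lazy simple random walk on an n-cycle
    P = [[0.0] * n for _ in range(n)]
    for v in range(n):
        P[v][v] = 0.5
        P[v][(v - 1) % n] += 0.25
        P[v][(v + 1) % n] += 0.25
    return P

def step_expected(mu, P):
    # one step of the expected configuration: mu <- mu P (row vector)
    n = len(mu)
    return [sum(mu[u] * P[u][v] for u in range(n)) for v in range(n)]

def step_rotor(chi, pointers, n):
    # each vertex serves its tokens in the fixed rotor order
    # [stay, left, stay, right], carrying the pointer over to the next step
    new = [0] * n
    for v in range(n):
        targets = [v, (v - 1) % n, v, (v + 1) % n]
        for _ in range(chi[v]):
            new[targets[pointers[v]]] += 1
            pointers[v] = (pointers[v] + 1) % 4
    return new

n, M = 8, 4**6
chi = [0] * n
chi[0] = M                      # all tokens start on one vertex
mu = [float(c) for c in chi]    # expected configuration mu^(t) = chi^(0) P^t
pointers = [0] * n
P = lazy_cycle_P(n)
for t in range(6):
    chi = step_rotor(chi, pointers, n)
    mu = step_expected(mu, P)
disc = sum(abs(chi[v] - mu[v]) for v in range(n))  # L1 discrepancy at t = 6
```

Running this, the total number of tokens is conserved and `disc` is zero after the six exact steps.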
Motivated by general transition matrices, Kijima et al.~\cite{KKM12} investigated a rotor-router model on finite multidigraphs, and gave a bound $\Order(n|{\cal A}|)$ on the vertex-wise discrepancy when $P$ is rational, ergodic and reversible, where $n=|V|$ and ${\cal A}$ denotes the set of multiple edges. For an arbitrary rational transition matrix $P$, Kajino et al.~\cite{KKM13} gave an upper bound using the second largest eigenvalue $\lambda^*$ of $P$ and some other parameters of~$P$. To deal with irrational transition probabilities, Shiraga et al.~\cite{SYKY13} presented a generalized notion of the rotor-router model, which they call the {\em functional router model}. They gave a bound $\Order((\pi_{\max}/\pi_{\min})t^* \Delta)$ on the vertex-wise discrepancy for a specific functional router model (namely, the {\em SRT-router model}) when $P$ is ergodic and reversible, where $t^*$ denotes the mixing rate of $P$ and $\pi_{\max}$ (resp. $\pi_{\min}$) is the maximum (resp. minimum) element of the stationary distribution vector $\pi$ of $P$. Using~\cite{SYKY13}, Shiraga et al.~\cite{SYKY14} discussed the time complexity of a simulation, in which they were concerned with an {\em oblivious} version, meaning that the states of the routers are reset in each step, whereas the deterministic random walks mentioned above carry the router states over to the next step. Similar, or essentially the same, concepts have been independently developed in several lines of research, such as load balancing, information spreading and self-organization. Rabani et al.~\cite{RSW98} investigated the diffusive model for load balancing, which is an oblivious version of the deterministic random walk, and showed for the model that the vertex-wise discrepancy is $\Order\left(\Delta\log(n)/(1-\lambda^*)\right)$ when $P$ is symmetric and ergodic, where $\Delta$ is the maximum degree of the transition diagram of $P$.
Friedrich et al.~\cite{FGS12} proposed the BED algorithm for load balancing, which uses some extra information from the previous step, and gave $\Order(\Dim^{1.5})$ for the hypercube and $\Order(1)$ for constant-dimensional tori. Akbari et al.~\cite{AB13} discussed the relation between the BED algorithm and the rotor-router model, and gave the same bounds for a rotor-router model. Berenbrink et al.~\cite{BKKM15} investigated cumulatively fair balancer algorithms, which include the rotor-router model, and gave an upper bound $\Order(d\min(\sqrt{\log(n)/(1-\lambda^*)}, \sqrt{n}))$ for a lazy version of simple random walks on $d$-regular graphs. As a closely related topic, the behavior of the rotor-router model with a single token has also been investigated. Holroyd and Propp~\cite{HP10} investigated the frequency $\nu^{(t)} \in \mathbb{Z}_{\geq 0}^V$ of visits of the token in $t$ steps, and showed that $\|\nu^{(t)}/t-\pi\|_{\infty}$ is $\Order(mn/t)$. Preceding~\cite{HP10}, Yanovski et al.~\cite{YWB03} showed that the rotor-router model with a single token always stabilizes to a traversal of an Eulerian cycle after at most $2mD$ steps, where $D$ denotes the diameter of the graph. This result implies that the (edge) cover time of the rotor-router model with a single token is $\Order(mD)$ for any graph. Bampas et al.~\cite{BGHI09} gave examples for which the stabilization time is ${\rm \Omega}(mD)$. Similar analyses for the rotor-router model with many tokens have been developed recently. Dereniowski et al.~\cite{DKPU14} investigated the cover time of the rotor-router model with $M$ tokens, and gave an upper bound of $\Order(mD/\log M)$ and an example of ${\rm \Omega}(mD/M)$ as a lower bound. Chalopin et al.~\cite{CDGK15} gave an upper bound of $\Order(m^4D^2+mD\log M)$ on its stabilization time, while they also showed that the period of the cyclic stabilized states can be as large as $2^{\Omega(\sqrt{n})}$.
\begin{table}[t] \begin{center} \begin{tabular}{l | ll | ll} \bhline{1.3pt} Conditions on $P$ &$L_\infty$-discrepancy & &$L_1$-discrepancy & \\ \bhline{1pt} E. R. & \multirow{2}{*}{$\Order\left(\frac{\Delta\log (n)}{1-\lambda^*}\right)$} & \multirow{2}{*}{\cite{RSW98}} & \multirow{2}{*}{$\Order\left(\frac{\Delta n\log (n)}{1-\lambda^*}\right)$} & \multirow{2}{*}{} \\ symmetric & & & & \\ \hline E.~R.~L. & \multirow{2}{*}{$\Order(n|{\cal A}|)$} & \multirow{2}{*}{\cite{KKM12}} & \multirow{2}{*}{$\Order(n^2|{\cal A}|)$} & \\ rational & & & & \\ \hline any rational & $\Order\left(\frac{\alpha^*n|{\cal A}|}{(1-\lambda^*)^\beta}\right)$ & \cite{KKM13} & $\Order\left(\frac{\alpha^*n^2|{\cal A}|}{(1-\lambda^*)^\beta}\right)$ & \\ \hline E.~R. & $\Order\left( \frac{\pi_{\max}}{\pi_{\min}}t^* \Delta\right)$ & \cite{SYKY13} & $\Order\left( \frac{\pi_{\max}}{\pi_{\min}}t^* \Delta n\right)$ & \\ \hline E.~R.~L. & \multirow{3}{*}{$\Order\left(d\min\left(\sqrt{\frac{\log(n)}{1-\lambda^*}},\sqrt{n}\right)\right)$} & \multirow{3}{*}{\cite{BKKM15}} & \multirow{3}{*}{$\Order\left(m\min\left(\sqrt{\frac{\log(n)}{1-\lambda^*}},\sqrt{n}\right)\right)$} & \\ simple r.w. & & & & \\ $d$-regular & & & & \\ \bhline{1pt} E. & & & $\Order(mt^*)$ & Thm.~\ref{thm:upper-ergodic-Ob} \\ \hline E.~L. & & & $\Order(m\sqrt{t^*\log t^*})$ & Thm.~\ref{thm:upper-lazyTV-SRT} \\ \hline E.~R.~L. & \multirow{2}{*}{$\Order(\Delta\sqrt{t^*\log t^*})$} & \multirow{2}{*}{Thm.~\ref{thm:upper-lazyLisrt}} & \multirow{2}{*}{} & \\ symmetric & & & & \\ \bhline{1.3pt} \multicolumn{5}{l}{E.: ergodic, R.: reversible, L.: lazy} \end{tabular} \caption{Summary of known results on $\|\chi^{(t)}-\mu^{(t)}\|_{\infty}$ for finite graphs, and this work. } \label{table:upper} \end{center} \end{table} \paragraph{Our results.} As we stated before, the total variation distance between the target distribution and approximate samples is significant in the analysis of MCMC algorithms. 
While there are several works on deterministic random walks concerning the vertex-wise discrepancy $\|\chi^{(t)}-\mu^{(t)}\|_{\infty}$ such as~\cite{RSW98,KKM12,KKM13,SYKY13,BKKM15}, little is known about the total variation discrepancy $\|\chi^{(t)}-\mu^{(t)}\|_1$. This paper investigates the total variation discrepancy to develop a new analysis technique aiming at derandomizing MCMC. To begin with, we give a simple but nontrivial upper bound for any ergodic finite Markov chain; precisely, we show $\|\chi^{(t)}-\mu^{(t)}\|_1 = \Order(mt^*)$ where $t^*$ is the mixing rate of $P$ and $m$ is the number of edges of the transition diagram of~$P$. In fact, the analyses are almost the same for both the non-oblivious model, including the rotor-router model~\cite{CS06,KKM12,KKM13,BKKM15}, and the oblivious model like~\cite{RSW98, SYKY14} in which the states of routers are reset in each step; we deal with the oblivious model in Section~\ref{sec:result_oblivious}. We also give a lower bound for the oblivious model by presenting an example such that $\|\chi^{(t)}-\mu^{(t)}\|_1=\Omega(t^*)$, which suggests that the factor $t^*$ cannot be removed from the $L_1$ discrepancy bound for the oblivious model. Then, in Section~\ref{sec:result_nonoblivious}, we give a better upper bound for the non-oblivious deterministic random walk; precisely, we show $\|\chi^{(t)}-\mu^{(t)}\|_1 =\Order(m\sqrt{t^*\log t^*})$ when $P$ is ergodic and lazy. Notice that this upper bound does not require reversibility. The analysis technique is a modification of Berenbrink et al.~\cite{BKKM15}, in which they investigated a lazy version of simple random walks on $d$-regular graphs. In fact, we also remark that the analysis technique of~\cite{BKKM15} for the vertex-wise discrepancy extends to general graphs; precisely, we show that $\|\chi^{(t)}-\mu^{(t)}\|_{\infty} = \Order(\Delta \sqrt{t^*\log t^*})$ when $P$ is ergodic, lazy and symmetric. We also present some lower bounds on the $L_1$ discrepancy for non-oblivious models.
Table~\ref{table:upper} shows a summary of known results~\cite{RSW98,KKM12,KKM13,SYKY13,BKKM15} on $\|\chi^{(t)}-\mu^{(t)}\|_{\infty}$, and the results of this work. The column ``$L_1$-discrepancy'' shows the upper bounds on $\|\chi^{(t)}-\mu^{(t)}\|_1$ implied by the previous results~\cite{RSW98,KKM12,KKM13,SYKY13,BKKM15}, in comparison with the upper bounds obtained in this paper. \section{Preliminaries}\label{sec:prel} \subsection{Random walk / Markov chain}\label{subsec:RWMC} As a preliminary step, we introduce some terminology of Markov chains (cf.~\cite{LPW08}). Let $V=\{1, \ldots, n\}$ be a finite set, and let $P \in \mathbb{R}_{\geq 0}^{n \times n}$ be a transition matrix on $V$, which satisfies $\sum_{v\in V}P_{u,v}=1$ for any $u\in V$, where $P_{u,v}$ denotes the $(u,v)$ entry of $P$ ($P^t_{u,v}$ denotes the $(u,v)$ entry of $P^t$, as well). Let $\MDG=(V, \ME)$ be the transition diagram of $P$, meaning that $\ME = \{(u,v) \in V\times V \mid P_{u,v}>0\}$. Let $\No(v)$ and $\Ni(v)$ respectively denote the out-neighborhood and the in-neighborhood of $v \in V$ on $\MDG$~\footnote{$\No(v) = \{ u \in V \mid P_{v,u}>0 \}$ and $\Ni(v) = \{ u \in V \mid P_{u,v}>0 \}$.}. For convenience, let $m=|\ME|$, $\deltao(v)=|\No(v)|$ and $\deltai(v)=|\Ni(v)|$. A finite Markov chain is called {\em ergodic} if $P$ is {\em irreducible}\footnote{$P$ is irreducible if $\forall u, v\in V, \exists t>0, P^t_{u, v} > 0$. Then, the transition diagram of $P$ is connected.} and {\em aperiodic}\footnote{$P$ is aperiodic if $\forall v\in V, \GCD\{ t \in \mathbb{Z}_{>0} \mid P^t_{v, v} > 0\} = 1$.}. It is well known that any ergodic $P$ has a unique {\em stationary distribution} $\pi \in \mathbb{R}_{\geq 0}^{n}$ (i.e., $\pi P = \pi$), and the limit distribution is $\pi$ (i.e., $\lim_{t\to \infty}\xi P^{t} = \pi$ for any probability distribution $\xi\in \mathbb{R}_{\geq 0}^{n}$ on $V$).
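As a small numerical illustration of these notions (our sketch, not part of the paper; the $3\times 3$ matrix below is an arbitrary ergodic example), power iteration recovers the unique stationary distribution $\pi$ satisfying $\pi P = \pi$ from any starting distribution:

```python
# An arbitrary small ergodic transition matrix on V = {1, 2, 3}
# (every row sums to 1; the chain is irreducible and aperiodic).
P = [[0.5, 0.25, 0.25],
     [0.3, 0.4,  0.3 ],
     [0.2, 0.3,  0.5 ]]

def step(xi, P):
    """One application of the transition matrix: (xi P)_u = sum_v xi_v P_{v,u}."""
    n = len(P)
    return [sum(xi[v] * P[v][u] for v in range(n)) for u in range(n)]

# For ergodic P, xi P^t converges to the unique stationary pi
# for ANY starting distribution xi.
xi = [1.0, 0.0, 0.0]
for _ in range(200):
    xi = step(xi, P)

pi = xi
assert all(abs(a - b) < 1e-9 for a, b in zip(step(pi, P), pi))  # pi P = pi
assert abs(sum(pi) - 1.0) < 1e-9                                # pi is a distribution
```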
Let $\xi$ and $\zeta$ be probability distributions on $V$; then the {\em total variation distance} $\dtv$ between $\xi$ and $\zeta$ is defined by \begin{eqnarray} \label{def:TV} \dtv(\xi, \zeta) \defeq \max_{A\subset V} \left| \sum_{v\in A}(\xi_v-\zeta_v )\right| =\frac{1}{2} \left\|\xi-\zeta\right\|_1. \label{def:dtv} \end{eqnarray} The {\em mixing time} of $P$ is defined by \begin{eqnarray} \tau(\varepsilon) \defeq \max_{v \in V} \min \left\{ t \in \mathbb{Z}_{\geq 0} \mid \dtv(P^t_{v, \cdot}, \pi) \leq \varepsilon \right\} \label{def:mix} \end{eqnarray} for any $\varepsilon > 0$~\footnote{$P^t_{v, \cdot}$ denotes the $v$-th row vector of $P^t$. }. Let $t^* \defeq \tau(1/4)$, called the {\em mixing rate}, which is often used as a characterization of $P$. Let $\mu^{(0)}=(\mu^{(0)}_1,\ldots, \mu^{(0)}_n) \in \mathbb{Z}_{\geq 0}^{n}$ denote an initial configuration of $M$ tokens over $V$. Suppose that each token randomly and independently moves according to $P$. Let $\mu^{(t)}$ denote the {\em expected} configuration of tokens at time $t\in \mathbb{Z}_{\geq 0}$ in a Markov chain; then $\mu^{(t)}=\mu^{(0)}P^t$ holds. By the definition of mixing time, $\|\mu^{(t)}/M-\pi\|_1\leq 2\varepsilon $ holds for any $t\geq \tau(\varepsilon )$ if $P$ is ergodic. \subsection{Deterministic random walk: framework}\label{subsec:detRW} A {\em deterministic random walk} is a deterministic process imitating $\mu^{(t)}$. Let $\chi^{(0)}=\mu^{(0)}$ and let $\chi^{(t)}\in \mathbb{Z}_{\geq 0}^{n}$ denote the configuration of tokens at time $t\in \mathbb{Z}_{\geq 0}$ in a deterministic random walk. An update in a deterministic random walk is defined by $Z_{v,u}^{(t)}$ denoting the number of tokens moving from $v$ to $u$ at time $t$, where $Z_{v,u}^{(t)}$ must satisfy the condition that \begin{eqnarray} \sum_{u \in \No(v)} Z_{v,u}^{(t)}=\chi^{(t)}_v \label{eq:Zvut-usumsa} \end{eqnarray} for any $v\in V$.
Then, $\chi^{(t+1)}$ is defined by \begin{eqnarray} \chi_u^{(t+1)} \defeq \sum_{v \in \Ni(u)} Z_{v,u}^{(t)} \label{eq:Zvut-vsumsa} \end{eqnarray} for any $u\in V$. We will explain some specific deterministic random walks in Sections~\ref{subsec:model_oblivious} and \ref{subsec:model_srt} by giving precise definitions of $Z_{v,u}^{(t)}$. We are interested in the question of whether $\chi^{(t)}$ approximates $\mu^{(t)}$ well in terms of the total variation discrepancy, i.e., in how large $\max_{A\subseteq V}|\chi^{(t)}_A-\mu^{(t)}_A|=(1/2)\|\chi^{(t)}-\mu^{(t)}\|_1$ gets. At the end of this section, we introduce two pieces of notation which we will use throughout the paper. For any $\xi \in \mathbb{R}^V$ and $A\subseteq V$, let $\xi_A$ denote $\sum_{v\in A}\xi_v$. For example, $\mu^{(t)}_A=\sum_{v\in A}\mu^{(t)}_v$ and $P_{u,A}=\sum_{v\in A}P_{u,v}$. For any $\xi\in \mathbb{R}^n$, $P\in \mathbb{R}^{n \times n}$ and $u\in V$, let $(\xi P)_u$ denote the $u$-th element of the vector $\xi P$, i.e., $(\xi P)_u=\sum_{v\in V}\xi_vP_{v,u}$. \section{Upper and lower bounds for oblivious model}\label{sec:result_oblivious} This section is concerned with an oblivious version of the deterministic random walk, which is closely related to the models in~\cite{RSW98,SYKY14}. \subsection{Oblivious model}\label{subsec:model_oblivious} Given a transition matrix $P$ and a configuration $\chi^{(t)}$ of tokens, we define $Z_{v,u}^{(t)}$ as follows. Assume that an arbitrary ordering $u_1,\ldots,u_{\deltao(v)}$ on $\No(v)$ is prescribed for each $v \in V$. Then, let \begin{eqnarray} Z_{v,u_i}^{(t)} = \left\{ \begin{array}{ll} \left \lfloor \chi^{(t)}_v P_{v,u_i} \right \rfloor + 1 & (i \leq i^*) \\[2ex] \left \lfloor \chi^{(t)}_v P_{v,u_i} \right \rfloor & (\mbox{otherwise}) \end{array}\right. \label{def:update} \end{eqnarray} where $i^* \defeq \chi^{(t)}_v - \sum_{i=1}^{\deltao(v)} \lfloor \chi^{(t)}_v P_{v,u_i} \rfloor$ denotes the number of ``surplus'' tokens.
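The update rule \eqref{def:update} can be sketched as follows (our illustration, not from the paper; the prescribed ordering on $\No(v)$ is simply taken to be increasing vertex index, and the small $\chi$ and $P$ are arbitrary examples):

```python
import math

def oblivious_step(chi, P):
    """One step chi^(t) -> chi^(t+1) of the oblivious model:
    v first sends floor(chi_v P_{v,u}) tokens to each out-neighbour u,
    then routes the i* surplus tokens one by one along the prescribed
    ordering of N+(v) (here: increasing vertex index)."""
    n = len(P)
    new = [0] * n
    for v in range(n):
        order = [u for u in range(n) if P[v][u] > 0]       # N+(v)
        Z = [math.floor(chi[v] * P[v][u]) for u in range(n)]
        surplus = chi[v] - sum(Z)                          # the surplus i*
        for i in range(surplus):                           # one extra token each
            Z[order[i]] += 1                               # to the first i* targets
        for u in range(n):
            new[u] += Z[u]                                 # gather incoming tokens
    return new

chi = [7, 0, 3]
P = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]
nxt = oblivious_step(chi, P)
assert sum(nxt) == sum(chi)   # the number of tokens M is conserved
```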
It is easy to check that the condition \eqref{eq:Zvut-usumsa} holds for any $v,u\in V$ and $t\in \mathbb{Z}_{\geq 0}$. Then, the configuration $\chi^{(t+1)}$ is updated according to~\eqref{eq:Zvut-vsumsa}, recursively. The following observation follows easily from the definition \eqref{def:update} of $Z_{v,u}^{(t)}$. \begin{observation}\label{obs:det-sampling} For any oblivious model, $|Z_{v,u}^{(t)} - \chi^{(t)}_v P_{v,u}| \leq 1$ holds for any $u,v\in V$ and $t\in \mathbb{Z}_{\geq 0}$. \end{observation} \subsection{Upper bound}\label{subsec:upper_oblivious} In this section, we give an upper bound on the total variation discrepancy. \begin{theorem} \label{thm:upper-ergodic-Ob} Suppose $P\in \mathbb{R}_{\geq 0}^{n\times n}$ is ergodic. Then, for any oblivious model, \begin{eqnarray*} \left| \chi^{(T)}_A-\mu^{(T)}_A \right| &\leq &\frac{3}{2}mt^* =\Order(mt^*) \end{eqnarray*} holds for any $A \subseteq V$ and for any $T\in \mathbb{Z}_{\geq 0}$. \end{theorem} Remark that Theorem~\ref{thm:upper-ergodic-Ob} only assumes that $P$ is ergodic. \begin{proof}[Proof of Theorem~\ref{thm:upper-ergodic-Ob}] Let $\phi^{(t)}=\chi^{(t)}-\chi^{(t-1)}P$, for convenience. By \eqref{eq:Zvut-vsumsa} and Observation~\ref{obs:det-sampling}, \begin{eqnarray} |\phi^{(t)}_u| =\left |\left( \chi^{(t)}-\chi^{(t-1)}P\right)_u\right| =\left| \sum_{v\in \Ni (u)} (Z_{v,u}^{(t-1)}-\chi_v^{(t-1)}P_{v,u}) \right| \leq \sum_{v\in \Ni (u)} \left| Z_{v,u}^{(t-1)}-\chi_v^{(t-1)}P_{v,u}\right| \leq \deltai(u) \label{eq:phibound} \end{eqnarray} holds for any $u\in V$ and $t\in\mathbb{Z}_{> 0}$. Now, we see that \begin{eqnarray} \label{eq:maindisc1} \sum_{t=0}^{T-1}\phi^{(T-t)}P^{t} &=&\sum_{t=0}^{T-1}\left( \chi^{(T-t)} P^{t} - \chi^{(T-t-1)} P^{t+1} \right) =\chi^{(T)}P^0-\chi^{(0)}P^T =\chi^{(T)} -\mu^{(T)} \end{eqnarray} holds, since $\mu^{(T)}=\chi^{(0)}P^T$ holds by the assumption $\chi^{(0)}=\mu^{(0)}$.
By \eqref{eq:maindisc1}, \begin{eqnarray} \label{eq:discmix2} \chi^{(T)}_A-\mu^{(T)}_A &=&\left( \sum_{t=0}^{T-1}\phi^{(T-t)} P^{t} \right)_A =\sum_{t=0}^{T-1}\sum_{u\in V}\phi_u^{(T-t)} P^{t}_{u,A} \nonumber\\ &=&\sum_{t=0}^{\ats-1}\sum_{u\in V}\phi_u^{(T-t)} P^{t}_{u,A} +\sum_{t=\ats}^{T-1}\sum_{u\in V}\phi_u^{(T-t)} \Bigl( P^{t}_{u,A}-\pi_A \Bigr) \end{eqnarray} holds for any positive integer $\alpha$ such that $\ats \leq T$, where the last equality follows from the fact that \begin{eqnarray*} \sum_{u\in V}\phi^{(t)}_u= \sum_{u\in V}\left( \chi^{(t)}-\chi^{(t-1)}P\right)_u =\sum_{u\in V}\chi^{(t)}_u-\sum_{u\in V}\sum_{v\in V}\chi^{(t-1)}_vP_{v, u} =M-M =0 \end{eqnarray*} holds for any $t\in \mathbb{Z}_{> 0}$. By \eqref{eq:discmix2}, we obtain that \begin{eqnarray} \Bigl|\chi^{(T)}_A-\mu^{(T)}_A \Bigr| \leq \left|\sum_{t=0}^{\ats-1}\sum_{u\in V}\phi_u^{(T-t)} P^{t}_{u,A} \right| +\left| \sum_{t=\ats}^{T-1}\sum_{u\in V}\phi_u^{(T-t)} \Bigl( P^{t}_{u,A}-\pi_A \Bigr) \right|. \label{eq:maindisc2} \end{eqnarray} Now, we give upper bounds on each term of \eqref{eq:maindisc2}. For the first term of \eqref{eq:maindisc2}, it is easy to see that \begin{eqnarray} \left|\sum_{t=0}^{\ats-1}\sum_{u\in V}\phi_u^{(T-t)} P^{t}_{u,A} \right| \leq \sum_{t=0}^{\ats-1}\sum_{u\in V}|\phi_u^{(T-t)}| \left| P^{t}_{u,A}\right| \leq \sum_{t=0}^{\ats-1}\sum_{u\in V}\deltai(u)=m\alpha t^* \label{eq:firstbound} \end{eqnarray} holds by \eqref{eq:phibound} and $P^{t}_{u,A}\leq 1$. To bound the second term of \eqref{eq:maindisc2}, we use the following lemma (see Appendix~\ref{app:markov} for the proof). \begin{lemma} \label{lemm:dtsum}\cite{SYKY13} Suppose $P \in \mathbb{R}_{\geq 0}^{n\times n}$ is {\em ergodic}. Then, \begin{eqnarray*} \sum_{t=\alpha t^*}^{\infty} \dtv\left( P^t_{u, \cdot}, \pi \right) \leq \frac{t^*}{2^\alpha} \end{eqnarray*} holds for any $u\in V$ and for any $\alpha \in \mathbb{Z}_{>0}$.
\shortqed \end{lemma} By Lemma~\ref{lemm:dtsum}, we obtain that \begin{eqnarray} \left| \sum_{t=\ats}^{T-1}\sum_{u\in V}\phi_u^{(T-t)} \Bigl( P^{t}_{u,A}-\pi_A \Bigr) \right| \leq \sum_{t=\ats}^{T-1}\sum_{u\in V}|\phi_u^{(T-t)}| \Bigl| P^{t}_{u,A}-\pi_A \Bigr| \leq \frac{t^*}{2^\alpha} \sum_{u\in V}\max_{0\leq t \leq T}|\phi_u^{(T-t)}| \leq \frac{mt^*}{2^\alpha} \label{eq:secondbound} \end{eqnarray} holds, where the last inequality follows from \eqref{eq:phibound}. Now, we obtain the claim from \eqref{eq:maindisc2}, \eqref{eq:firstbound} and \eqref{eq:secondbound} by letting $\alpha=1$. \end{proof} \subsection{Lower bound}\label{subsec:lower_oblivious} We give the following lower bound for an oblivious model. This proposition implies that we cannot improve the term $t^*$ for oblivious models in general. \begin{proposition}\label{prop:mixlower} There exists an oblivious model such that \begin{eqnarray*} \max_{S\subseteq V}\left|\chi^{(T)}_S-\mu^{(T)}_S\right|={\rm \Omega}(nt^*) \end{eqnarray*} holds for any time $T$ after mixing. \end{proposition} \begin{proof} Let $V=\{0, \ldots, n-1\}$, and let a transition matrix $P$ be defined by $P_{u, u}=(k-1)/k$ for any $u\in V$, and $P_{u, v}=1/k(n-1)$ for any $u, v\in V$ such that $u\neq v$, i.e., $P$ denotes a simple random walk on $K_n$ with a self-loop probability $(k-1)/k$ at every vertex. For this $P$, it is not difficult to check that $t^*=\Order(k)$ (see Appendix~\ref{app:markov}). Then, we give a corresponding oblivious deterministic random walk. Let us assume that the prescribed ordering for each $v\in V$ starts with $v$ itself (remember the definition of an oblivious deterministic random walk in Section~\ref{subsec:model_oblivious}). Let \begin{eqnarray*} \chi^{(0)}_u= \begin{cases} k & (u\in A)\\ 0 & (u\in B), \end{cases} \end{eqnarray*} where $A=\{0, \ldots, n/2-1\}$ and $B=\{n/2, \ldots, n-1\}$.
Then, the initial configuration is stable, i.e., $\chi^{(t)}=\chi^{(0)}$, since each $v\in A$ serves $\lfloor k\cdot \frac{k-1}{k}\rfloor +1=k$ tokens to itself (notice that the ``surplus'' token stays at $v$ according to the prescribed ordering). Now it is easy to see that \begin{eqnarray*} \max_{S\subseteq V}|\chi^{(t)}_S-\mu^{(t)}_S| &\geq &|\chi^{(t)}_A-\mu^{(t)}_A| \geq \frac{kn}{2}-\frac{kn}{4}-\varepsilon M = \frac{kn}{4}-\varepsilon M={\rm \Omega}(nt^*) \end{eqnarray*} holds for any $t\geq \tau(\varepsilon)$, where $M=kn/2$ is the total number of tokens; taking, e.g., $\varepsilon = 1/8$ gives the claim. \if0 Now, we prove $t^*=\Order(k)$. Let $X_t$ be a Markov chain according to $P$, and let $Y_t$ be defined by \begin{eqnarray*} Y_{t+1}= \begin{cases} Y_{t} & ({\rm if }\ X_{t+1}=X_{t})\\ X_{t} & ({\rm if }\ X_{t+1}=Y_{t})\\ X_{t+1} & ({\rm otherwise }). \end{cases} \end{eqnarray*} Then, for any $X_0$ and $Y_0$, \begin{eqnarray*} {\rm Pr}[X_t\neq Y_t]\leq \left(\frac{k-1}{k}+\frac{1}{k(n-1)}\right)^t \end{eqnarray*} holds for any $t\in \mathbb{Z}_{\geq 0}$, thus $\dtv(P^t_{v, \cdot}, \pi)\leq \left(\frac{k-1}{k}+\frac{1}{k(n-1)}\right)^t$ by coupling lemma~\cite{LPW08}, and we obtain \begin{eqnarray*} \tau(\varepsilon )&\leq &\frac{\log \varepsilon^{-1}}{\log \left(\frac{k-1}{k}+\frac{1}{k(n-1)}\right)^{-1}} =\frac{\log \varepsilon^{-1}}{\log \left(1-\frac{n-2}{k(n-1)}\right)^{-1}} \leq \frac{\log \varepsilon^{-1}}{\frac{n-2}{k(n-1)}} =\frac{n-1}{n-2}k\log \varepsilon^{-1}. \end{eqnarray*} Note that $\log (1-x)^{-1}\geq x$ for any $0<x<1$, and we obtain the claim. \fi \end{proof} \section{Upper and lower bounds for non-oblivious model}\label{sec:result_nonoblivious} Observation~\ref{obs:det-sampling} for the oblivious model only guarantees that $|Z_{v,u}^{(t)}-\chi^{(t)}_vP_{v,u}|\leq 1$ holds for any $t\in \mathbb{Z}_{\geq 0}$.
In this section, we introduce the {\em SRT-router model} (cf.~\cite{SYKY13}), which satisfies $|\sum_{s=0}^{t}(Z_{v,u}^{(s)}-\chi^{(s)}_vP_{v,u})|\leq 1$ for any $t\in \mathbb{Z}_{\geq 0}$, and we obtain an improved bound when the Markov chain is {\em lazy}\footnote{$P$ is lazy if $P_{u,u}\geq 1/2$ holds for any $u\in V$.}. \subsection{Model}\label{subsec:model_srt} The SRT-router model, based on the {\em shortest remaining time} (SRT) rule~\cite{AJJ10,T80,SYKY13}, is a generalized version of the rotor-router model. In the model, we define an {\em SRT-router} $\sigma_v \colon \mathbb{Z}_{\geq 0} \to \No(v)$ on each $v \in V$ for a given $P$. Roughly speaking, $\sigma_v(i)$ denotes the destination of the $i$-th launched token at $v$. Given $\sigma_{v}(0), \ldots,\sigma_v(i-1)$, $\sigma_v(i)$ is inductively defined as follows. First, let \begin{eqnarray*} T_i(v)=\{u\in \No(v)\mid |\{ j\in[0,i)\mid \sigma_v(j) =u\}|-(i+1)P_{v, u}<0\}, \end{eqnarray*} where $[z,z')\defeq \{z,z+1,\ldots, z'-1\}$ (remark $[z,z)=\emptyset$). Then, let $\sigma_v(i)$ be the $u^*\in T_i(v)$ minimizing the value \begin{eqnarray*} \frac{|\{ j\in[0,i)\mid \sigma_v(j) =u\}|+1}{P_{v, u}} \end{eqnarray*} among all $u\in T_i(v)$. If there are two or more such $u\in T_i(v)$, then let $u^*$ be the minimum among them with respect to an arbitrary prescribed order. The ordering $\sigma_v(0), \sigma_v(1), \ldots$ is known as the {\em shortest remaining time} (SRT) rule (see e.g.,~\cite{AJJ10,T80,SYKY13}). In an SRT-router model, there are $\chi^{(t)}_v$ tokens on a vertex $v$ at time $t$, and each vertex $v$ serves the tokens on~$v$ to the neighboring vertices one by one according to $\sigma_v(i)$, like a rotor-router. For example, if there are $a$ tokens on $v$ at time $t=0$, then $|\{j\in [0,a)\mid \sigma_v(j)=u\}|$ tokens move to each $u\in \No(v)$; if there are $b$ tokens on $v$ at $t=1$, then $|\{j\in [a,a+b)\mid \sigma_v(j)=u\}|$ tokens move to each $u\in \No(v)$, and so on.
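The inductive definition of $\sigma_v$ can be sketched as follows (our illustration, not from the paper; the out-probabilities are an arbitrary example, and ties are broken by the natural order on the labels). The final loop checks that every prefix of $\sigma_v$ has discrepancy below one, the property stated in Proposition~\ref{prop:fdiscSRT} below:

```python
def srt_router(p, num):
    """sigma_v(0), ..., sigma_v(num - 1) for out-probabilities p[u] > 0:
    the i-th token goes to the u in T_i(v) minimising (count[u] + 1) / p[u],
    where count[u] = |{j < i : sigma_v(j) = u}|."""
    counts = {u: 0 for u in p}
    sigma = []
    for i in range(num):
        # T_i(v): targets still strictly below their quota (i + 1) * p[u]
        T = [u for u in p if counts[u] - (i + 1) * p[u] < 0]
        # SRT rule, with ties broken by the natural order on labels
        u_star = min(T, key=lambda u: ((counts[u] + 1) / p[u], u))
        sigma.append(u_star)
        counts[u_star] += 1
    return sigma

p = {'a': 0.5, 'b': 0.3, 'c': 0.2}
sigma = srt_router(p, 50)
# Prefix discrepancy: | #{j < z : sigma_v(j) = u} - z * p_u | < 1 for all z.
for u in p:
    for z in range(1, 51):
        assert abs(sigma[:z].count(u) - z * p[u]) < 1 + 1e-9
```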
Formally, it is defined by \begin{eqnarray} \label{def:Zvufunc} Z_{v,u}^{(t)} &=&\left| \left\{ j\in \left[ \textstyle\sum_{s=0}^{t-1} \chi_v^{(s)}, \sum_{s=0}^{t} \chi_v^{(s)} \right) \mid \sigma_v(j)=u \right\} \right|. \end{eqnarray} It is clear that the definition~\eqref{def:Zvufunc} satisfies \eqref{eq:Zvut-usumsa}. Then, the configuration of tokens is recursively defined by \eqref{eq:Zvut-vsumsa}. The following proposition is due to Angel et al.~\cite{AJJ10} and Tijdeman~\cite{T80}. \begin{proposition} {\rm \cite{T80, AJJ10}} \label{prop:fdiscSRT} For any SRT-router model, \begin{eqnarray*} \Bigl| |\{ j \in [0,z) \mid \sigma_v(j) =u\}| - z\cdotp P_{v,u}\Bigr| < 1 \end{eqnarray*} holds for any $v, u\in V$ and for any $z>0$. \end{proposition} Proposition~\ref{prop:fdiscSRT} suggests that $|Z_{v,u}^{(t)}-\chi^{(t)}_vP_{v,u}|$ is small enough. In fact, Proposition~\ref{prop:fdiscSRT} and \eqref{def:Zvufunc} suggest a stronger fact that \begin{eqnarray} \left| \sum_{t=a}^{b}\left(Z_{v,u}^{(t)}-\chi_v^{(t)}P_{v,u}\right) \right| &=& \left| \sum_{t=a}^b\left| \left\{ j\in \left[ \textstyle\sum_{s=0}^{t-1} \chi_v^{(s)}, \sum_{s=0}^{t} \chi_v^{(s)} \right) \mid \sigma_v(j)=u \right\} \right| - \sum_{t=a}^b\chi_v^{(t)}P_{v,u} \right| \nonumber \\ &=& \left| \left| \left\{ j\in \left[ \textstyle \sum_{s=0}^{a-1} \chi_v^{(s)}, \sum_{s=0}^{b} \chi_v^{(s)} \right) \mid \sigma_v(j)=u \right\} \right| - \sum_{t=a}^b\chi_v^{(t)}P_{v,u} \right| \nonumber \\ &\leq & \max_{\substack{z,z'\in \mathbb{Z}_{\geq 0}\\ {\rm s.t.}\ z'>z}}\Bigl| |\{ j \in [z,z') \mid \sigma_v(j) =u\}|-(z'-z)P_{v,u}\Bigr| <2 \label{eq:discsrtsum} \end{eqnarray} holds for any $a,b\in \mathbb{Z}_{\geq 0}$ s.t. $a\leq b$. We will use \eqref{eq:discsrtsum} in our analysis, in Section~\ref{subsec:upper_srt}. \subsection{Better upper bound for the SRT-router model}\label{subsec:upper_srt} Now, we show for ergodic and lazy $P$ the following theorem, modifying the technique~\cite{BKKM15}. 
\begin{theorem} \label{thm:upper-lazyTV-SRT} Suppose $P\in \mathbb{R}_{\geq 0}^{n\times n}$ is ergodic and lazy. Then, for any SRT-router model, \begin{eqnarray*} \left| \chi^{(T)}_A-\mu^{(T)}_A \right| =\Order\left(m\sqrt{t^*\log t^*}\right) \end{eqnarray*} holds for any $A \subseteq V$ and for any $T\in \mathbb{Z}_{\geq 0}$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:upper-lazyTV-SRT}] The major difference between an oblivious model and an SRT-router model is that \begin{eqnarray} \left| \sum_{t=a}^{b}\phi^{(t)}_u \right| = \left| \sum_{t=a}^{b}\sum_{v\in \Ni (u)} (Z_{v,u}^{(t-1)}-\chi_v^{(t-1)}P_{v,u}) \right| \leq \sum_{v\in \Ni (u)} \left| \sum_{t=a}^{b} (Z_{v,u}^{(t-1)}-\chi_v^{(t-1)}P_{v,u}) \right| \leq 2\deltai(u) \label{eq:phibound2} \end{eqnarray} holds for any $u\in V$ and $b\geq a\geq 1$ in an SRT-router model, since \eqref{eq:discsrtsum} holds. It is easy to check that $|\chi^{(T)}_A-\mu^{(T)}_A |\leq 3mt^*$ holds for any SRT-router model by the same argument as in the proof of Theorem~\ref{thm:upper-ergodic-Ob}, using \eqref{eq:phibound2} instead of \eqref{eq:phibound}. Thus we obtain $|\chi^{(T)}_A-\mu^{(T)}_A |\leq 6m$ if $t^*=1,2$. In the rest of the proof, we assume that $t^*\geq 3$, which implies $t^*\lceil \lg t^*\rceil \geq 3$. We introduce the following proposition and lemma to give a better upper bound on the first term of \eqref{eq:maindisc2}. See Appendix~\ref{app:markov} for the proofs. \begin{proposition} \label{prop:summation} Let $F_t=\sum_{i=0}^{t}f_i$. Then, \begin{eqnarray*} \sum_{t=0}^{T}f_t g_t=F_T g_T +\sum_{t=0}^{T-1}F_t (g_{t}-g_{t+1}) \end{eqnarray*} holds for any $T\in \mathbb{Z}_{\geq 0}$ and for any $f_i, g_i$ $(0\leq i \leq T)$. \shortqed \end{proposition} \begin{lemma} \label{lemm:lazysum} Suppose that $P \in \mathbb{R}_{\geq 0}^{n\times n}$ is {\em ergodic} and {\em lazy}.
Then, \begin{eqnarray*} \sum_{t=0}^{T} \dtv\left( P^t_{u, \cdot}, P^{t+1}_{u, \cdot} \right) \leq 24\sqrt{T}-11 \end{eqnarray*} holds for any $u\in V$ and for any $T\in \mathbb{Z}_{> 0}$. \shortqed \end{lemma} Using Proposition~\ref{prop:summation}, \eqref{eq:phibound2} and Lemma~\ref{lemm:lazysum}, we obtain \begin{eqnarray} \left| \sum_{t=0}^{\ats -1}\phi^{(T-t)}_uP^t_{u,A} \right| &=& \left| \left( \sum_{i=0}^{\ats -1}\phi^{(T-i)}_u \right) P^{\ats-1}_{u,A} +\sum_{t=0}^{\ats-2} \left( \sum_{i=0}^{t}\phi^{(T-i)}_u \right) \Bigl( P^t_{u,A}-P^{t+1}_{u,A} \Bigr) \right|\nonumber \\ &\leq & \left| \sum_{i=0}^{\ats -1}\phi^{(T-i)}_u \right| |P^{\ats-1}_{u,A}| +\sum_{t=0}^{\ats-2} \left| \sum_{i=0}^{t}\phi^{(T-i)}_u \right|\Bigl| P^t_{u,A}-P^{t+1}_{u,A} \Bigr| \label{eq:rev} \\ &\leq & 2\deltai(u) +2\deltai(u)\cdot \Bigl( 24\sqrt{\ats-2} -11\Bigr) =2\deltai(u) \Bigl( 24\sqrt{\ats-2} -10\Bigr) \label{eq:firstbetter} \end{eqnarray} for any $u\in V$, where $\alpha$ is an arbitrary positive integer satisfying $\alpha t^*\geq 3$. Finally, \eqref{eq:maindisc2}, \eqref{eq:firstbetter}, \eqref{eq:secondbound} and \eqref{eq:phibound2} imply that \begin{eqnarray*} \left| \chi^{(T)}_A-\mu^{(T)}_A \right| &\leq & \sum_{u\in V}\left|\sum_{t=0}^{\ats-1}\phi_u^{(T-t)} P^{t}_{u,A} \right| +\frac{t^*}{2^\alpha} \sum_{u\in V}\max_{0\leq t \leq T}|\phi_u^{(T-t)}| \nonumber \\ &\leq &2m\Bigl( 24\sqrt{\ats-2} -10\Bigr) +2m\cdotp \frac{t^*}{2^\alpha} \leq 2m\Bigl( 24\sqrt{t^*\lg t^*-2} -9\Bigr) \end{eqnarray*} where the last inequality is obtained by letting $\alpha=\lceil \lg t^* \rceil$. We obtain the claim. \end{proof} \subsection{Lower bounds}\label{subsec:lower_srt} This section discusses lower bounds on the total variation discrepancy. First, we observe the following proposition, which stems from the integrality gap between $\chi^{(T)}\in \mathbb{Z}^V$ and $\mu^{(T)}\in \mathbb{R}^V$.
\begin{proposition} \label{prop1} Suppose that $P$ is ergodic and its stationary distribution is uniform. Then, for any $\chi^{(T)}\in \mathbb{Z}_{\geq 0}^n$ with an appropriate number of tokens $M$, \begin{eqnarray*} \max_{S\subseteq V}\left|\chi^{(T)}_S-\mu^{(T)}_S\right|=\mathrm{\Omega}(n) \end{eqnarray*} holds for any time $T$ after mixing. \end{proposition} We also give a better lower bound for an SRT-router model. \begin{proposition} \label{prop2} There exists an SRT-router model such that \begin{eqnarray*} \max_{S\subseteq V}\left|\chi^{(T)}_S-\mu^{(T)}_S\right|\geq \frac{n^2}{8}=\mathrm{\Omega}(m) \end{eqnarray*} holds for any $T>0$. \end{proposition} See Appendix~\ref{app:markov} for the proofs. \subsection{Vertex-wise discrepancy} This section presents an upper bound on the {\em single vertex discrepancy} $\|\chi^{(T)}-\mu^{(T)}\|_\infty$, which extends the result of~\cite{BKKM15} to general ergodic, reversible and lazy Markov chains. \begin{theorem} \label{thm:upper-lazyLisrt} Suppose $P\in \mathbb{R}_{\geq 0}^{n\times n}$ is ergodic, reversible\footnote{ $P$ is reversible if the {\em detailed balance equation} $\pi_vP_{v,u}=\pi_uP_{u,v}$ holds for any $u,v\in V$. Notice that a reversible ergodic $P$ is symmetric if its stationary distribution is uniform, and vice versa.}, and lazy. Then, for any SRT-router model, \begin{eqnarray*} \left| \chi^{(T)}_w-\mu^{(T)}_w \right| =\Order\left( \frac{\pi_{\max}}{\pi_{\min}} \Delta \sqrt{t^*\log t^*}\right) \end{eqnarray*} holds for any $w \in V$ and for any $T\in \mathbb{Z}_{\geq 0}$, where $\Delta=\max_{u\in V}|\No(u)| (=\max_{u\in V}|\Ni(u)|)$, $\pi_{\max}=\max_{u\in V}\pi_u$ and $\pi_{\min}=\min_{u\in V}\pi_u$. \end{theorem} \begin{proof} If $t^*=1,2$, then $| \chi^{(T)}_w-\mu^{(T)}_w |\leq \frac{12\pi_{\max}}{\pi_{\min}} \Delta$ holds, since $| \chi^{(T)}_w-\mu^{(T)}_w |\leq \frac{6\pi_{\max}}{\pi_{\min}} \Delta t^*$ holds due to~\cite{SYKY13}.
Now, we assume $t^*\geq 3$, which implies $t^*\lceil \lg t^* \rceil\geq 3$. By a combination of \eqref{eq:maindisc2}, \eqref{eq:rev}, \eqref{eq:secondbound} and \eqref{eq:phibound2}, we obtain that \begin{eqnarray} \Bigl|\chi^{(T)}_w-\mu^{(T)}_w\Bigr| \leq 2\Delta \sum_{u\in V}|P^{\ats-1}_{u,w}| +2\Delta\sum_{t=0}^{\ats-2} \sum_{u\in V}\Bigl|P^t_{u,w}-P^{t+1}_{u,w} \Bigr| +2\Delta \sum_{t=\ats}^{T-1} \sum_{u\in V}\Bigl|P^t_{u,w}-\pi_w\Bigr| \label{eq:sngledisc} \end{eqnarray} holds, where $\alpha$ is an arbitrary positive integer satisfying $\alpha t^*\geq 3$. The condition that $P$ is reversible, i.e., $\pi_uP^t_{u,w}=\pi_{w}P^t_{w,u}$ holds for any $u,w\in V$, implies that \begin{eqnarray} \sum_{u\in V}P^t_{u,w} =\sum_{u\in V}\frac{\pi_w}{\pi_{u}}P^t_{w,u} \leq \frac{\pi_w}{\pi_{\min}}\sum_{u\in V}P^t_{w,u} = \frac{\pi_w}{\pi_{\min}} \label{eq:r1} \end{eqnarray} holds. Lemma~\ref{lemm:lazysum} implies that \begin{eqnarray} \sum_{t=0}^{\ats-2}\sum_{u\in V}\Bigl|P^t_{u,w}-P^{t+1}_{u,w} \Bigr| &=&\sum_{t=0}^{\ats-2}\sum_{u\in V}\Bigl| \frac{\pi_w}{\pi_{u}}\Bigl( P^{t}_{w,u} -P^{t+1}_{w,u} \Bigr)\Bigr| \leq \frac{\pi_w}{\pi_{\min}} \sum_{t=0}^{\ats-2}\sum_{u\in V}\Bigl| P^{t}_{w,u} -P^{t+1}_{w,u} \Bigr| \nonumber \\ &=& \frac{\pi_w}{\pi_{\min}} \sum_{t=0}^{\ats-2} \| P^{t}_{w,\cdot} -P^{t+1}_{w,\cdot} \|_1 = \frac{2\pi_w}{\pi_{\min}} \sum_{t=0}^{\ats-2}\dtv \Bigl( P^t_{w,\cdot}, P^{t+1}_{w,\cdot}\Bigr) \nonumber \\ &\leq & \frac{2\pi_w}{\pi_{\min}} \Bigl(24\sqrt{\ats-2}-11 \Bigr) \label{eq:r2} \end{eqnarray} holds, and Lemma~\ref{lemm:dtsum} implies that \begin{eqnarray} \sum_{t=\ats}^{T-1}\sum_{u\in V}\Bigl|P^t_{u,w}-\pi_w\Bigr| &=&\sum_{t=\ats}^{T-1}\sum_{u\in V}\Bigl| \frac{\pi_w}{\pi_{u}}\Bigl( P^t_{w,u} -\pi_u \Bigr) \Bigr| \leq \frac{\pi_w}{\pi_{\min}} \sum_{t=\ats}^{T-1}\sum_{u\in V}\Bigl| P^t_{w,u} -\pi_u \Bigr| \nonumber \\ &=& \frac{2\pi_w}{\pi_{\min}} \sum_{t=\ats}^{T-1}\dtv \Bigl( P^t_{w,\cdot},\pi\Bigr) \leq \frac{2\pi_w}{\pi_{\min}} \frac{t^*}{2^\alpha} \label{eq:r3} \end{eqnarray} holds. Thus, a combination of \eqref{eq:sngledisc}, \eqref{eq:r1}, \eqref{eq:r2} and \eqref{eq:r3} implies that \begin{eqnarray*} \Bigl|\chi^{(T)}_w-\mu^{(T)}_w\Bigr| &\leq &2\Delta \frac{\pi_w}{\pi_{\min}} +2\Delta \frac{2\pi_w}{\pi_{\min}} \Bigl(24\sqrt{\ats-2}-11 \Bigr) +2\Delta \frac{2\pi_w}{\pi_{\min}} \frac{t^*}{2^\alpha} \\ &\leq & \frac{2\pi_w}{\pi_{\min}}\Delta \Bigl(48 \sqrt{t^*\lceil \lg t^* \rceil-2}-19 \Bigr) \end{eqnarray*} holds, where the last inequality follows by letting $\alpha=\lceil \lg t^* \rceil$. We obtain the claim. \end{proof} \section{Concluding Remarks} In this paper, we gave two upper bounds on the {\em total variation discrepancy}: $\|\chi^{(t)}-\mu^{(t)}\|_1 =\Order(mt^*)$ for any ergodic Markov chain, and $\|\chi^{(t)}-\mu^{(t)}\|_1=\Order(m\sqrt{t^*\log t^*})$ for any lazy and ergodic Markov chain. We also showed some lower bounds. Closing the gap between the upper and lower bounds is left for future work. The development of a deterministic approximation algorithm for {\#}P-hard problems based on deterministic random walks is a challenging goal. \bibliographystyle{abbrv}
\section{Introduction} Photoionization of atoms in a bichromatic laser field has recently received considerable attention both in theory (see, for instance, Baranova {\it et al}\/ 1990, 1992, 1993, Baranova and Zel'dovich 1991, Sz\"{o}ke {\it et al}\/ 1991, Anderson {\it et al}\/ 1992, Schafer and Kulander 1992, Potvliege and Smith 1991, 1992a,b, 1994, Pazdzersky and Yurovsky 1994, Protopapas {\it et al}\/ 1994, V\'{e}niard {\it et al}\/ 1995, Pazdzersky and Usachenko 1997, Fifirig {\it et al}\/ 1997) and experiment (Muller {\it et al}\/ 1990, Ce Chen and Elliott 1990, Baranova {\it et al}\/ 1991, 1992, Yin {\it et al}\/ 1992). One of the principal points of interest is the dependence of the observables on the difference of field phases $\varphi$, i.e.\ the problem of {\it phase control}. Another important aspect is the angular (polar) asymmetry of the photoionization rate. These effects are interrelated, since the polar asymmetry is $\varphi$-dependent and vanishes for some particular value of the phase (see more details below). Calculations have been carried out previously for ionization of the hydrogen atom in two laser fields with a frequency ratio 1:2 (Schafer and Kulander 1992), 1:3 (Potvliege and Smith 1991) and 2:3 (Potvliege and Smith 1994). Potvliege and Smith (1992) presented results for various frequency ratios and initial states. Different schemes were employed, but all of them implied numerically intensive work. For the multiphoton electron detachment from negative ions some analytical treatment exists (Baranova {\it et al}\/ 1993, Pazdzersky and Yurovsky 1994, Pazdzersky and Usachenko 1997), being limited mostly to the case when one or both fields are weak. The presence of a large number of parameters in the problem sometimes makes the results of analytical studies not directly transparent.
The case of fields with comparable (and large) intensities is also of interest, bearing in mind both possible experimental realizations and the theoretical description of the transition between the multiphoton and tunneling regimes. The multiphoton electron detachment from negative ions presents a unique situation when quantitatively reliable results can be obtained by analytical methods in a broad range of parameters characteristic of the problem. Indeed, it has been demonstrated recently (Gribakin and Kuchiev 1997a,b) that a judicious modification of the Keldysh (1964) model\footnote{Subsequent development of this model was due to Perelomov {\it et al}\/ (1966), Faisal (1973) and Reiss (1980); Perelomov and Popov (1967) were the first to consider the multicolour process within this framework in terms of the influence of higher harmonics on ionization probabilities.} ensures a very good quantitative agreement with the results of numerically intensive developments, while being much more simple. In many cases it also provides good agreement with available experimental data. It works unexpectedly well {\it even for a small number $n$ of photons absorbed}. In addition to the numerous examples considered previously (Gribakin and Kuchiev 1997a,b), here we briefly comment on the most recent experiments by Zhao {\it et al}\/ (1997) on non-resonant excess photon detachment of negative hydrogen ions. After absorption of {\it two photons}, the electron is ejected in a superposition of $S$ and $D$ waves due to selection rules. The experiment demonstrates the prevalence of the $D$-wave contribution (90\% $\pm$ 10\%). Our calculations give for the $D$- and $S$-wave populations 86.2\% and 13.6\% respectively\footnote{The model also shows some population of $G$ and higher partial waves. However, this unphysical effect proves to be less than 0.2\%, thus sustaining the model applicability.} for the experimental conditions (light frequency $\omega = 0.0428$, field intensity $I=10^{10} {\rm W}/{\rm cm}^2$).
The elaborate numerical calculations by Telnov and Chu (1996) and by Nikolopoulos and Lambropoulos (1997) give $D$-wave populations of 91\% and 89\% respectively. The difference between these values is almost the same as the difference between our result and that of Nikolopoulos and Lambropoulos (1997), all three theoretical predictions lying within the experimental error bars. This, together with the cases considered earlier, allows us to suggest that even for $n=2$ the approach of Gribakin and Kuchiev (1997a,b) provides an accuracy comparable with that of the most elaborate numerical developments. The present paper extends the approach of Gribakin and Kuchiev (1997a,b) to the case of a bichromatic field. Its objective is to provide the scheme and some representative quantitative results for two-colour electron detachment from negative ions. In particular, the phase effects and the polar asymmetry are studied. The number of parameters in the problem is quite large and at the moment they cannot be fixed experimentally. Nevertheless it seems worthwhile to carry out some pilot calculations in order to gain insight into the possible magnitude of the effects specific to negative ions in bichromatic fields. We consider angular differential detachment rates, heights of ATD (Above Threshold Detachment) peaks and total detachment rates. \section{Scheme of calculations} The previously developed scheme (Gribakin and Kuchiev 1997a,b) needs some modifications to incorporate the bichromatic problem, in which the electric field in the light wave \begin{eqnarray} \label{F} \vec{F}(t) = \vec{F}_1 \, \cos \omega_1 t + \vec{F}_2 \, \cos ( \omega_2 t + \varphi ) \end{eqnarray} is a superposition of two harmonic components with the frequencies $\omega_1$, $\omega_2$ and the amplitude vectors $\vec{F}_1$, $\vec{F}_2$ respectively; $\varphi$ is the difference of field phases. Atomic units are used throughout the paper unless stated otherwise.
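For orientation, the field (\ref{F}) is easy to examine numerically. The short Python sketch below (our own illustration, with unit amplitudes and frequencies rather than the intensities used in the calculations) samples the 1:2 superposition studied later and shows that the field has zero mean over the common period yet reaches larger peak values in one direction, which is the polar asymmetry discussed in this paper.

```python
import numpy as np

# Sample the bichromatic field of Eq. (F) for a 1:2 frequency ratio.
# Amplitudes and the basic frequency are arbitrary illustrative values,
# not the parameters used in the paper's calculations.
F1, F2 = 1.0, 1.0
omega1, omega2 = 1.0, 2.0          # commensurable: M1 = 1, M2 = 2
phi = 0.0                          # maximal polar asymmetry of the field
T = 2 * np.pi / omega1             # common period

t = np.linspace(0.0, T, 200000, endpoint=False)
F = F1 * np.cos(omega1 * t) + F2 * np.cos(omega2 * t + phi)

print(F.mean())                    # ~0: the field has zero mean value
# Polar asymmetry: the field peaks at +2 along +z, but only reaches
# about -1.125 along -z (the minimum of cos x + cos 2x).
print(F.max(), F.min())
```

The same sampling with other values of `phi` shows the asymmetry shrinking and vanishing as the phase difference approaches $\frac{1}{2}\pi$.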
We consider the case of commensurable field frequencies\footnote{The general treatment of the incommensurable-frequencies case was considered by Ho {\it et al}\/ (1983), Delone {\it et al}\/ (1984), Ho and Chu (1984), Manakov {\it et al}\/ (1986), Potvliege and Smith (1992).} which implies that a common period $T$ exists such that \begin{eqnarray} T = \frac{ 2 \pi}{\omega_1} \, M_1 = \frac{ 2 \pi}{\omega_2} \, M_2 \end{eqnarray} for some mutually prime integers $M_1$ and $M_2$. The exact expression for the differential transition rate $d w_\lambda$ is derived following Appendix A of the paper by Gribakin and Kuchiev (1997a) with the result \begin{eqnarray} \label{wd} d w_\lambda = 2 \pi \sum_{\epsilon_f} \left| A_{ \lambda \epsilon_f} \right|^2 \delta(E_\lambda - E_0 - \epsilon_f) \, \rho_\lambda , \\ \label{Ad} A_{ \lambda \epsilon_f} = \frac{1}{T} \int_0^T \, \langle \psi_\lambda (t) | V(t) | \psi_0(t) \rangle \, dt . \end{eqnarray} Here $\psi_0(t) = \exp(-i E_0 t) \phi_0$ describes the initial state for the time-independent Hamiltonian $H_0$, and $\psi_\lambda(t)$ is a quasienergy state for the total Hamiltonian $H = H_0 + V(t)$, which includes the interaction with the periodic field $V(t) = - e \vec{r}\cdot \vec{F}(t)$, $V(t) = V(t+T)$: \begin{eqnarray} i \frac{ \partial \psi_\lambda}{\partial t} = \left[ H_0 + V(t) \right] \, \psi_\lambda , \\ \psi_\lambda(t) = \exp( - i E_\lambda t) \, \phi_\lambda , \quad \, \quad \quad \phi_\lambda(t) = \phi_\lambda(t+T), \end{eqnarray} $E_\lambda$ is the quasienergy, $\rho_\lambda$ is the density of states, and $\vec{r}$ is the active electron coordinate. The energy $\epsilon_f$ absorbed from the electromagnetic field can be represented as $\epsilon_f = n_1 \, \omega_1 + n_2 \, \omega_2$ with some integers $n_1$ and $n_2$. However, this representation (i.e.
the choice of $n_1$ and $n_2$) is generally non-unique, which reflects the existence of different coherent interfering paths (with different numbers of absorbed and emitted photons of each frequency) leading to the same final state. If the interaction of the light wave with the electron is described in the dipole-length form, as presumed above, then the long-range contribution to the matrix elements is emphasized. Therefore it is sufficient to employ the asymptotic form of the initial-state wave function (Gribakin and Kuchiev 1997a): \begin{eqnarray} \phi_0(\vec{r}) \approx A r^{\nu-1} \, \exp(- \kappa r) \, Y_{lm}(\hat{\vec{r}}) \quad \quad \quad ( r \gg 1), \end{eqnarray} where $E_0 = - \frac{1}{2} \kappa^2$, $\nu = Z/\kappa$, $Z$ is the charge of the atomic residual core ($\nu=Z=0$ for a negative ion), and $\hat{\vec{r}}$ is the unit vector. The coefficient $A$ is tabulated for many negative ions (Radzig and Smirnov 1985). The amplitude $A_{ \lambda \epsilon_f}$ (\ref{Ad}) is evaluated neglecting the influence of the atomic potential on the photoelectron in the final state. Further, the time integral in (\ref{Ad}) is calculated within the stationary-phase approximation, which presumes a large magnitude of the classical action \begin{eqnarray} \label{S} S(t) = \frac{1}{2} \int^t \left( \vec{p} + \vec{k}_{t^\prime} \right)^2 dt^\prime - E_0 t , \end{eqnarray} where $\vec{k}_t$ is the classical electron momentum due to the field \begin{eqnarray} \label{k} \vec{k}_{t} = e \int^t \vec{F}(t^\prime) \, d t^\prime . \end{eqnarray} The photoelectron translational momentum $\vec{p}$ plays the role of the quantum number $\lambda$ above; in particular, the quasienergy $E_\lambda = E_{\vec{p}}$, \begin{eqnarray} E_{\vec{p}} = \frac{1}{2} \vec{p}^{\: 2} + \frac{e^2}{4 \omega_1^2}F_1^2 + \frac{e^2}{4 \omega_2^2}F_2^2 \end{eqnarray} includes the contribution from the electron quiver energy due to the field.
The result of the calculation of the amplitude (\ref{Ad}) can be written as a modification of formula (25) in the paper by Gribakin and Kuchiev (1997a): \begin{eqnarray} \label{A} A_{ \vec{p} \epsilon_f} = - \frac{(2 \pi)^2}{T} \, A \, \Gamma(1+\nu/2) \, 2^{\nu/2} \, \kappa^\nu \, \sum_\mu Y_{lm}(\hat{\vec{p}}_\mu) \, \frac{\exp \left( i S_\mu \right)} {\sqrt{2 \pi (- i S^{\prime \prime}_\mu)^{\nu+1}}} . \end{eqnarray} The corresponding expression for the detachment rate in the negative-ion case ($\nu = 0$) reads: \begin{eqnarray} \label{w} \frac{d w_{e_f}}{d \Omega_{\vec{p}}} = \frac{1}{(2 \pi)^2} \, p \left| A_{ \vec{p} \epsilon_f} \right|^2 = \frac{(2 \pi)^2}{T^2} \, p \, A^2 \left| \sum_\mu Y_{lm}(\hat{\vec{p}}_\mu) \, \frac{\exp \left( i S_\mu \right)} {\sqrt{2 \pi S^{\prime \prime}_\mu}} \right|^2 . \end{eqnarray} Here the subscript $\mu$ indicates that the function is calculated at the saddle point $t= t_\mu$, which satisfies the equation \begin{eqnarray} \label{sp} S^\prime(t_\mu) = 0 . \end{eqnarray} In the complex time plane the saddle points $t_\mu$ lie symmetrically with respect to the real axis. The summation in (\ref{A}) includes the points lying in the upper half-plane (${\rm Im} \, t_\mu > 0$) with $ 0 \leq {\rm Re} \, t_\mu \leq T$; $\hat{\vec{p}}_\mu$ is a unit vector in the direction of $\vec{p} + \vec{k}_\mu$. The magnitude of the final electron translational momentum $p$ is governed by energy conservation, \begin{eqnarray} \frac{1}{2} \kappa^2 + E_{\vec{p}} = \epsilon_f , \end{eqnarray} which ensures that the momentum is real for open ATD channels.
According to (\ref{S}), (\ref{k}), (\ref{F}) we have \begin{eqnarray} S^\prime(t) = \frac{1}{2} \, (\vec{p} + \vec{k}_t)^2 + \frac{1}{2} \, \kappa^2 = \nonumber \\ = \frac{1}{2} \, \vec{p}^{\: 2} + \frac{e^2}{2 \omega_1^2}F_1^2 \, \sin^2 \omega_1 t + \frac{e^2}{2 \omega_2^2}F_2^2 \, \sin^2(\omega_2 t + \varphi) + \nonumber \\ + \vec{p} \cdot \vec{F}_1 \, \frac{e}{\omega_1} \, \sin \omega_1 t + \vec{p} \cdot \vec{F}_2 \, \frac{e}{\omega_2} \, \sin (\omega_2 t + \varphi) + \nonumber \\ + \vec{F}_1 \cdot \vec{F}_2 \, \frac{e^2}{\omega_1 \omega_2} \, \sin \omega_1 t \, \sin (\omega_2 t + \varphi) + \frac{1}{2} \, \kappa^2 . \end{eqnarray} The frequencies $\omega_1$ and $\omega_2$ are integer multiples of the basic frequency $\omega = 2 \pi / T$, \begin{eqnarray} \omega_1 = M_1 \, \omega, \quad \quad \quad \omega_2 = M_2 \, \omega. \end{eqnarray} Assuming for definiteness that $M_2 > M_1$ and introducing $\zeta = \exp ( i \omega t )$, we notice that the function \begin{eqnarray} {\cal P}(\zeta) = \zeta^{2 M_2} S^\prime(\zeta) \end{eqnarray} is a polynomial of degree $4 M_2$ in $\zeta$. This observation is of practical importance, since equation (\ref{sp}) defining the saddle points is equivalent to \begin{eqnarray} \label{pz} {\cal P}(\zeta) = 0 . \end{eqnarray} Efficient numerical procedures for finding the roots of polynomials are available and, in particular, one can be confident that {\it all}\/ the roots are found. The practical calculations are conveniently carried out using the {\it Mathematica}\/ program (Wolfram 1991). Starting from the expression for $S^\prime(t)$ one derives $S(t)$ and $S^{\prime \prime}(t)$ by analytical integration and differentiation respectively. The saddle points are found using Eq.(\ref{pz}), and the roots $t_\mu$ lying in the upper half-plane are selected. The finite summation over $\mu$ in (\ref{A}) or (\ref{w}) completes the calculation.
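The polynomial construction above can be reproduced outside {\it Mathematica}. The Python sketch below (our own illustration, not the authors' code) assembles ${\cal P}(\zeta)$ for the 1:2 case treated in the Results section, using the parameters of the first open ATD channel and sample values of $\varphi$ and $\theta$, finds all $4 M_2$ roots at once, and verifies that the roots with $|\zeta| < 1$ (equivalently ${\rm Im}\, t_\mu > 0$) indeed satisfy $S^\prime(t_\mu) = 0$.

```python
import numpy as np

# Parameters in atomic units (e = 1). The field amplitude 1.688e-4 a.u.
# corresponds to I = 1e9 W/cm^2; kappa and omega follow the H^- example.
kappa, omega = 0.2354, 0.0043
M1, M2 = 1, 2                      # frequency ratio 1:2
w1, w2 = M1 * omega, M2 * omega
F1 = F2 = 1.688e-4
phi, theta = np.pi / 4, 0.3        # sample phase difference and emission angle

# Momentum for the first open ATD channel (n = 7 photons of frequency omega)
Up = F1**2 / (4 * w1**2) + F2**2 / (4 * w2**2)          # quiver energy
p = np.sqrt(2 * (7 * omega - kappa**2 / 2 - Up))

def Sprime(t):
    """S'(t) = (p + k_t)^2 / 2 + kappa^2 / 2 for (possibly complex) time t."""
    kt = (F1 / w1) * np.sin(w1 * t) + (F2 / w2) * np.sin(w2 * t + phi)
    return 0.5 * p**2 + p * np.cos(theta) * kt + 0.5 * kt**2 + 0.5 * kappa**2

# Assemble P(zeta) = zeta^(2*M2) * S'(zeta), a polynomial of degree 4*M2;
# c[k] is the coefficient of zeta^k.
c = np.zeros(4 * M2 + 1, dtype=complex)
def add(k, a): c[k] += a
e1, e2 = np.exp(1j * phi), np.exp(-1j * phi)
c1, c2 = F1**2 / (2 * w1**2), F2**2 / (2 * w2**2)
b1, b2 = p * np.cos(theta) * F1 / w1, p * np.cos(theta) * F2 / w2
c12 = F1 * F2 / (w1 * w2)

add(2 * M2, 0.5 * p**2 + 0.5 * kappa**2)                            # constant part
add(2 * M2 + 2 * M1, -c1 / 4); add(2 * M2, c1 / 2); add(2 * M2 - 2 * M1, -c1 / 4)
add(4 * M2, -c2 * e1**2 / 4); add(2 * M2, c2 / 2); add(0, -c2 * e2**2 / 4)
add(2 * M2 + M1, b1 / 2j); add(2 * M2 - M1, -b1 / 2j)               # sin(w1 t)
add(3 * M2, b2 * e1 / 2j); add(M2, -b2 * e2 / 2j)                   # sin(w2 t + phi)
for k, a in [(M1 + M2, e1), (M1 - M2, -e2), (M2 - M1, -e1), (-M1 - M2, e2)]:
    add(2 * M2 + k, -c12 * a / 4)                                   # cross term

zeros = np.roots(c[::-1])           # np.roots expects highest power first
inner = zeros[np.abs(zeros) < 1]    # |zeta| < 1  <=>  Im(t) > 0
t_mu = np.log(inner) / (1j * omega) # complex saddle times (defined modulo T)

print(len(inner))                   # 2*M2 = 4 saddle points per period
print(max(abs(Sprime(t_mu))))       # ~0: the selected roots satisfy S'(t) = 0
```

Since $S^\prime(t) \geq \frac{1}{2}\kappa^2 > 0$ for real $t$, no root lies on the unit circle, and the roots split evenly between the inside and outside of it, in agreement with the conjugate-pair symmetry of the saddle points.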
The roots $t_\mu$, and hence the photoionization amplitude (\ref{A}) and rate (\ref{w}), depend on the orientation of the electron translational momentum $\vec{p}$ with respect to the field amplitudes $\vec{F}_1$ and $\vec{F}_2$. It is not difficult to consider fields of various relative orientations and polarizations, but for simplicity we limit our further calculations to linearly polarized fields with $\vec{F}_1 \parallel \vec{F}_2$. Then the differential photoionization rate depends only on the single angle $\theta$ between the vectors $\vec{p}$ and $\vec{F}_1 \parallel \vec{F}_2$. \section{Results} Our calculations for H$^-$ detachment are carried out for the parameters $\kappa = 0.2354$, $A=0.75$ (Radzig and Smirnov 1985). The frequency ratio $\omega_1 / \omega_2 = 1 : 2$ is considered. In this case the field (\ref{F}) has zero mean value but possesses a polar asymmetry (i.e. asymmetry under inversion of the $z$ axis directed along $\vec{F}_1 \parallel \vec{F}_2$), which can be conveniently characterized by the time-average value (Baranova {\it et al}\/ 1990) \begin{eqnarray} \label{F3} \langle F^3 \rangle = \frac{3}{4} F_1^2 \, F_2 \, \cos \varphi . \end{eqnarray} Presuming that $F_1, \: F_2 > 0$, from this expression one infers, for instance, that for the phase $\varphi \in [0, \frac{1}{2} \pi]$ the electric field effectively attains larger values in the positive-$z$ direction than in the negative-$z$ one. This is illustrated, for example, by Fig.1 in the paper by Schafer and Kulander (1992), or by Fig.2 in the paper by Baranova {\it et al}\/ (1993). Note that our definition of the phase $\varphi$ is the same as in the papers by Baranova {\it et al}\/ (1990), Muller {\it et al}\/ (1990), Pazdzersky and Yurovsky (1995), but differs from that chosen by Schafer and Kulander (1992), who describe the electric field as $\vec{F}(t) = \vec{F}_1 \, \sin \omega_1 t + \vec{F}_2 \, \sin ( \omega_2 t + \varphi_{{\rm KS}})$.
Namely, the phases are related as $\varphi_{{\rm KS}}= \varphi - \frac{1}{2}\pi$. Although formula (\ref{F3}) shows that the field has polar symmetry for $\varphi = \frac{1}{2} \pi$ and the maximal polar asymmetry for $\varphi = 0$, quite paradoxically the differential detachment rate (\ref{w}) possesses polar symmetry for $\varphi = 0$ (i.e. is invariant under the transformation $\theta \Rightarrow \pi - \theta$), and is asymmetrical for other values of the phase (see the discussion by Baranova {\it et al}\/ (1990), Schafer and Kulander (1992), Pazdzersky and Yurovsky (1995)). The other features of the phase effects are as follows. \begin{enumerate} \item The system Hamiltonian is a 2$\pi$-periodic function of the parameter $\varphi$. \item The system Hamiltonian is not changed by the simultaneous transformation $\varphi \Rightarrow - \varphi$, $\theta \Rightarrow \pi - \theta$. The same applies to the differential detachment rate (\ref{w}). \item The transformation $\varphi \Rightarrow \pi - \varphi$ leaves the Hamiltonian invariant only if $t$ is replaced by $-t$. \end{enumerate} As stressed by Baranova {\it et al}\/ (1990), Baranova and Zel'dovich (1991), Anderson {\it et al}\/ (1992), Baranova {\it et al}\/ (1993), the problem is invariant under the time-inversion operation provided the final-state electron interaction with the atomic core is neglected. This is the case in the present model. Therefore our differential ionization rates are the same for $\varphi$ and $(\pi - \varphi)$; hence it is sufficient for us to consider phases only from the interval $\varphi \in [0, \: \frac{1}{2} \pi]$. The calculations by Baranova {\it et al}\/ (1990) within perturbation theory and by Schafer and Kulander (1992) within the wave-propagation technique took into account the final-state electron-core interaction. Therefore they found some deviations from the symmetry under the $\varphi \Rightarrow (\pi - \varphi)$ transformation.
However, for negative-ion photodetachment this effect can be anticipated to have only a minor influence. First we consider two fields with the frequencies $\omega = 0.0043$ and $2\omega$ and equal intensities $I_1 = I_2 = 10^{9} {\rm W}/{\rm cm}^2$. It is well known that the regime of the detachment process is governed by the Keldysh parameter $\gamma = \omega \kappa / F$ ($\gamma \gg 1$ for multiphoton detachment in the perturbative regime; $\gamma \ll 1$ for the strong-field, or tunneling, regime). In the present case for the first field we have $\gamma_1 = \omega_1 \kappa / F_1 = 6$, and for the second field $\gamma_2 = 2 \gamma_1$, which corresponds to the multiphoton regime. Fig.1 (as well as Figs.2-4 below) shows the differential detachment rate as a function of the angle $\theta$. The ordinates of the plots give the magnitude of \begin{eqnarray} \label{dd} \frac{1}{2} \, \frac{1}{2 \pi} \frac{d w_{e_f}}{d \cos \theta} = \frac{1}{(2 \pi)^2} \, p \left| A_{ \vec{p} \epsilon_f} \right|^2 , \end{eqnarray} where the right-hand side is calculated using the right-hand side of formula (\ref{w}). The left-hand side of Eq.(\ref{dd}) contains the factor $\frac{1}{2}$: the true detachment rate for the H$^-$ ion is twice that given by Eq.(\ref{w}). By this we account for the two possible spin states of the residual H atom (i.e. for the presence of two equivalent electrons in H$^-$). In Fig.1 we show the differential detachment rate for the first and second ATD peaks, which correspond to absorption of $n=7$ and $n=8$ photons of frequency $\omega$ respectively. In Fig.2 the same results are shown, but for a doubled value of the field amplitude $F_2$. In Fig.3 the amplitude $F_2$ is the same as in Fig.1, but the amplitude $F_1$ is doubled. In all cases the angular distributions exhibit a strong dependence on the field phase difference $\varphi$.
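The quoted numbers are easy to reproduce. The small Python check below (our own illustration; it assumes the standard atomic unit of intensity, $3.50945 \times 10^{16}\,{\rm W}/{\rm cm}^2$) recovers $\gamma_1 \approx 6$ and the threshold photon number $n = 7$ for $I = 10^{9}\,{\rm W}/{\rm cm}^2$, as well as $\gamma_1 \approx 0.6$ and $n = 18$ for the strong-field case considered below.

```python
import numpy as np

# Atomic-unit conversion: a field amplitude F = sqrt(I / 3.50945e16) a.u.
# corresponds to an intensity I in W/cm^2.
I_AU = 3.50945e16                  # W/cm^2 per atomic unit of intensity

kappa, omega = 0.2354, 0.0043      # H^- parameter and basic frequency

def gamma(I_wcm2, w):
    """Keldysh parameter gamma = w * kappa / F for a field of intensity I."""
    F = np.sqrt(I_wcm2 / I_AU)
    return w * kappa / F

def n_min(I1, I2):
    """Photons of frequency omega needed to open the first ATD channel."""
    F1, F2 = np.sqrt(I1 / I_AU), np.sqrt(I2 / I_AU)
    Up = F1**2 / (4 * omega**2) + F2**2 / (4 * (2 * omega)**2)  # quiver energy
    return int(np.ceil((kappa**2 / 2 + Up) / omega))

print(gamma(1e9, omega))           # ~6  : multiphoton (perturbative) regime
print(n_min(1e9, 1e9))             # 7 photons, first ATD peak of Fig.1
print(gamma(1e11, omega))          # ~0.6: onset of the strong-field regime
print(n_min(1e11, 1e11))           # 18 photons, as in Fig.4
```

The shift of the detachment threshold from $n = 7$ to $n = 18$ at the higher intensity is almost entirely due to the quiver energy entering $E_{\vec{p}}$.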
This is expected, since the angular patterns are formed by interference of contributions coming from different complex-valued moments of time $t_\mu$. For $\varphi = 0$ the distribution is rather flat; as $\varphi$ increases it becomes more oscillatory. An interesting and not obvious feature is that for $\varphi = \frac{1}{2} \pi$ the rate turns to zero at the angles $\theta$ where it has minima. In Fig.4 we present the results for the same frequencies as before and equal field intensities $I_1 = I_2 = 10^{11} {\rm W}/{\rm cm}^2$. Here the Keldysh parameter for the first field is $\gamma_1 = \omega_1 \kappa / F_1 = 0.6$. Bearing in mind the presence of the second field, one can suppose that the situation corresponds to the onset of the strong-field domain. The first open ATD channel corresponds to absorption of $n=18$ photons of frequency $\omega$. The patterns in the differential rates become more oscillatory than in the weak-field case. The partial detachment rate for each ATD channel is obtained by integration of (\ref{w}) over angles \begin{eqnarray} \label{wt} w_{e_f} = \int \frac{d w_{e_f}}{d \Omega_{\vec{p}}} \, d \Omega_{\vec{p}} = \int_{\theta=\pi}^{\theta=0} \frac{d w_{e_f}}{d \cos \theta} \, d \cos \theta \: . \end{eqnarray} We present separately the result $w_{e_f}^{(u)}$ of integration over the upper half-space of electron ejection $\left( 0 < \theta < \frac{1}{2} \pi \right)$ and its counterpart $w_{e_f}^{(l)}$ for the lower half-space $\left( \frac{1}{2} \pi < \theta < \pi \right)$. These quantities give a bulk characterization of the polar asymmetry of the partial rates. As discussed above, the polar asymmetry disappears (i.e. $w_{N}^{(u)} = w_{N}^{(l)}$) for $\varphi = 0$.
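As a side check, the time average (\ref{F3}) that quantifies the polar asymmetry of the field itself is easily verified by direct quadrature. The sketch below (our own illustration, with arbitrary amplitudes) averages $F^3(t)$ over the common period for several phases and compares the result with $\frac{3}{4} F_1^2 F_2 \cos \varphi$.

```python
import numpy as np

# Numerical check of <F^3> = (3/4) F1^2 F2 cos(phi) for the 1:2 field.
# Amplitudes and frequency are arbitrary illustrative values.
F1, F2, omega = 1.3, 0.7, 1.0
T = 2 * np.pi / omega                   # common period for the 1:2 ratio

t = np.linspace(0.0, T, 400000, endpoint=False)
for phi in (0.0, np.pi / 8, np.pi / 4, np.pi / 2):
    F = F1 * np.cos(omega * t) + F2 * np.cos(2 * omega * t + phi)
    avg = (F**3).mean()                 # time average over one period
    print(avg, 0.75 * F1**2 * F2 * np.cos(phi))   # the two values agree
```

All other cubic combinations of the two harmonics average to zero, which is why only the $F_1^2 F_2 \cos \varphi$ term survives.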
In the perturbative regime (Fig.5), for the same conditions as in Fig.1, we see that the bulk polar asymmetry parameter $w_{N}^{(u)}/w_{N}^{(l)}$ exceeds $10^3$ even for the lowest ATD channel ($N=1$) provided the phase $\varphi$ is not too small (the open ATD channels are labeled by the number $N = 1, 2, \ldots$ in order of increasing emitted-electron energy). For higher ATD peaks the bulk asymmetry is even larger. The partial detachment rates integrated over all ejection angles, $w_N = w_{N}^{(u)} + w_{N}^{(l)}$, are shown in Fig.6 for three representative values of $\varphi$. Even for $N=1$ the variation of the phase $\varphi$ leads to a substantial change in the detachment rate, by a factor of 4. In the tunneling regime the bulk polar asymmetry (Fig.7) is not as prominent as in the perturbative regime. Nevertheless it is quite substantial. Note that both in Fig.5 and Fig.7 the electron emission in the upper half-space $\left(0 < \theta < \frac{1}{2} \pi \right)$ is more probable for all $N$, i.e. $w_{N}^{(u)} > w_{N}^{(l)}$ in the interval of phases considered ($\varphi \in \left[0, \, \frac{1}{2} \pi \right]$). As discussed above (see property (ii)), the situation is reversed for $\varphi \in \left[- \frac{1}{2} \pi, \, 0 \right]$. For the small value of the phase $\varphi = \frac{1}{8} \pi$ in Fig.7 there is a clear tendency to swap the relation between $w_{N}^{(u)}$ and $w_{N}^{(l)}$ at higher values of $N$, which is prevented by a kind of `pseudocrossing'. For larger $\varphi$ the partial rates $w_{N}^{(l)}$ are strongly suppressed as $N$ increases. The phase effects in the partial rates $w_N$ are less significant in the tunneling regime (Fig.8). The total detachment rates are obtained by summation over all open ATD channels: \begin{eqnarray} w = \sum_{N} w_{N} . \end{eqnarray} The results for the total rates, as well as $w^{(l, \: u)} = \sum_{N} w_{N}^{(l, \: u)}$, are presented in table 1.
In the perturbative regime we again see a strong bulk asymmetry (three orders of magnitude and more) if the phase difference $\varphi$ is not close to zero, and a substantial variation of $w$ with $\varphi$. Actually this result reiterates that for the $N=1$ partial rate, since the latter gives the dominant contribution to the total rate in the perturbative regime. In the strong-field regime the bulk polar asymmetry $w^{(u)}/w^{(l)}$ remains well manifested in the rate summed over all ATD channels. However, the total rate $w$ is practically insensitive to the phase variation. The partial rates $w_N$ in Fig.8 exhibit some oscillatory structure as functions of $N$, with the positions of the extrema depending on $\varphi$. This $\varphi$-dependence almost completely disappears after summation over $N$, as table 1 shows. \section{Conclusion} In summary, the approach of Gribakin and Kuchiev (1997a,b) provides a convenient tool for investigating two-colour photodetachment of negative ions. The bichromatic electron detachment for the H$^-$ ion in fields with a 1:2 frequency ratio is examined in the perturbative and tunneling regimes. The polar asymmetry is found to be tremendously strong ($ \sim 10^3$) in the perturbative regime. Note that the asymmetry remains strong and keeps the same sign for all ATD channels over a wide range of phases $0 < \varphi < \pi$. This property makes the effect convenient for experimental observation, because it manifests itself very strongly even for a relatively poor phase resolution $\delta \varphi \sim \pi/2$. It should be noted that, via the recoil mechanism, the predicted effect also leads to acceleration of the core, thus creating an anisotropic flux of neutral H atoms. \section*{Acknowledgments} We appreciate fruitful discussions with G.F.Gribakin. One of us (M.Yu.K.) is thankful to the Australian Research Council for support. This work was supported by the Australian Bilateral Science and Technology Collaboration Program. V.N.O.
acknowledges the hospitality of the staff of the School of Physics of UNSW, where this work was carried out. \section*{References} \begin{harvard} \item[] Anderson D Z, Baranova N B, Green K and Zel'dovich B Ya 1992 {\it Zh. Eksp. Teor. Fiz.}\/ {\bf 102} 397-405 [1992 {\it Sov. Phys.-JETP}\/ {\bf 75} 210-4] \item[] Baranova N B, Beterov I M, Zel'dovich B Ya, Ryabtsev I I, Chudinov A N and Shul'ginov A A 1992 {\it Pis'ma Zh. Eksp. Teor. Fiz.}\/ {\bf 55} 431-5 [1992 {\it JETP Letters}\/ {\bf 55} 439-44] \item[] Baranova N B, Zel'dovich B Ya, Chudinov A N and Shul'ginov A A 1990 {\it Zh. Eksp. Teor. Fiz.}\/ {\bf 98} 1857-68 [1990 {\it Sov. Phys.-JETP}\/ {\bf 71} 1043-9] \item[] Baranova N B and Zel'dovich B Ya 1991 {\it J. Opt. Soc. Am. B}\/ {\bf 8} 27-32 \item[] Baranova N B, Reiss H R and Zel'dovich B Ya 1993 {\it Phys. Rev. A}\/ {\bf 48} 1497-505 \item[] Ce Chen and Elliott D S 1990 {\it Phys. Rev. Lett.}\/ {\bf 65} 1737-40 \item[] Delone N B, Manakov N L and Fainshtein A G 1984 {\it Zh. Eksp. Teor. Fiz.}\/ {\bf 86} 906-14 [1984 {\it Sov. Phys.-JETP}\/ {\bf 59} 529-33] \item[] Gribakin G F and Kuchiev M Yu 1997a {\it Phys. Rev. A}\/ {\bf 55} 3760-71 \item[] \dash 1997b {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 30} L657-64 \item[] Faisal F H M 1973 {\it J. Phys. B: At. Mol. Phys.}\/ {\bf 6} L89-92 \item[] Fifirig M, Cionga A and Florescu V 1997 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 30} 2599-608 \item[] Ho T S, Chu S I and Tietz J V 1983 {\it Chem. Phys. Lett.}\/ {\bf 96} 464-71 \item[] Ho T S and Chu S I 1984 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 17} 2101-28 \item[] Keldysh L V 1964 {\it Zh. Eksp. Teor. Fiz.}\/ {\bf 47} 1945-57 [1965 {\it Sov. Phys.-JETP}\/ {\bf 20} 1307-14] \item[] Nikolopoulos L A A and Lambropoulos P 1997 {\it Phys. Rev. A}\/ {\bf 56} 3106-15 \item[] Manakov N L, Ovsiannikov V D and Rapoport L P 1986 {\it Phys. Rep.}\/ {\bf 141} 319-433 \item[] Muller H G, Bucksbaum P H, Schumacher D W and Zavriev A 1990 {\it J. Phys. B: At. Mol. Opt.
Phys.}\/ {\bf 23} 2761-9 \item[] Pazdzersky V A and Yurovsky V A 1991 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 24} 733-40 \item[] \dash 1994 {\it Phys. Rev. A}\/ {\bf 51} 632-40 \item[] Pazdzersky V A and Usachenko V I 1997 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 30} 3387-402 \item[] Perelomov A M, Popov V S and Terent'ev M V 1966 {\it Zh. Eksp. Teor. Fiz.}\/ {\bf 50} 1393-409 [1966 {\it Sov. Phys.-JETP}\/ {\bf 23} 924-34] \item[] Perelomov A M and Popov V S 1967 {\it Zh. Eksp. Teor. Fiz.}\/ {\bf 52} 514-26 [1967 {\it Sov. Phys.-JETP}\/ {\bf 25} 336-43] \item[] Potvliege R M and Smith P H G 1991 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 24} L641-6 \item[] \dash 1992 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 25} 2501-16 \item[] \dash 1994 {\it Phys. Rev. A}\/ {\bf 49} 3110-3 \item[] Protopapas M, Knight P L and Burnett K 1994 {\it Phys. Rev. A}\/ {\bf 49} 1945-9 \item[] Radzig A A and Smirnov B M 1985 {\it Reference Data on Atoms, Molecules and Ions} (Berlin: Springer) \item[] Reiss H R 1980 {\it Phys. Rev. A}\/ {\bf 22} 1786-813 \item[] Schafer K J and Kulander K C 1992 {\it Phys. Rev. A}\/ {\bf 45} 8026-33 \item[] Sz\"{o}ke A, Kulander K C and Bardsley J N 1991 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 24} 3165-71 \item[] Telnov D A and Chu S I 1996 {\it J. Phys. B: At. Mol. Opt. Phys.}\/ {\bf 29} 4401-10 \item[] V\'{e}niard V, Taleb R and Maquet A 1995 {\it Phys. Rev. Lett.}\/ {\bf 74} 4161-4 \item[] Wolfram S 1991 {\it Mathematica: A System for Doing Mathematics by Computer}, 2nd ed. (Addison-Wesley Publishing Co., Palo Alto) \item[] Yin Y-Y, Ce Chen and Elliott D S 1992 {\it Phys. Rev. Lett.}\/ {\bf 69} 2353-6 \item[] Zhao X M, Gulley M S, Bryant H C, Strauss C E M, Funk D J, Stintz A, Rislove D C, Kyrala G A, Ingalls W B and Miller W A 1997 {\it Phys. Rev. 
Lett.}\/ {\bf 78} 1656-9 \end{harvard} \newpage \begin{table} \caption{ Total rates $w$ (summed over all ATD channels) for detachment of H$^-$ ion in the bichromatic field with the frequencies $\omega = 0.0043$ and $2\omega$, equal intensities $I_1 = I_2$ and some representative values of the phase difference $\varphi$. The detachment rates $w^{(u)}$ and $w^{(l)}$ for electron ejection into the upper and lower half-spaces respectively are also shown.} \vspace{5mm} \begin{tabular}{| c || l | l | l || l | r | l |} \hline & & & & & & \\ $\varphi$ & $w$ & $w^{(u)}$ & $w^{(l)}$ & $w$ & $w^{(u)}$ & $w^{(l)}$ \\ & & & & & & \\ \hline & \multicolumn{3}{c ||}{} & \multicolumn{3}{c |}{} \\ & \multicolumn{3}{c ||}{in units 10$^{-21}\:$a.u.} & \multicolumn{3}{c |} {in units 10$^{-6}\:$a.u.} \\ & \multicolumn{3}{c ||}{} & \multicolumn{3}{c |}{} \\ \hline \hline & \multicolumn{3}{c ||}{} & \multicolumn{3}{c |}{} \\ & \multicolumn{3}{c ||}{$I_1 = I_2 = 10^9$W/cm$^2$} & \multicolumn{3}{c |} {$I_1 = I_2 = 10^{11}$W/cm$^2$} \\ & \multicolumn{3}{c ||}{} & \multicolumn{3}{c |}{} \\ \hline & & & & & & \\ 0 & $1.55$ & 0.773 & 0.773 & 162.1 & 81.1 & 81.1 \\ & & & & & & \\ \hline & & & & & & \\ $\frac{1}{8} \pi$ & 2.12 & 1.88 & 0.240 & 164.6 & 99.8 & 64.8 \\ & & & & & & \\ \hline & & & & & & \\ $\frac{1}{4} \pi$ & 3.56 & 3.50 & 0.00566 & 164.8 & 113.0 & 51.8 \\ & & & & & & \\ \hline & & & & & & \\ $\frac{3}{8} \pi$ & 5.07 & 5.06 & 0.00112 & 165.1 & 123.8 & 41.4 \\ & & & & & & \\ \hline & & & & & & \\ $\frac{1}{2} \pi$ & 5.71 & 5.71 & 0.00040 & 164.9 & 128.0 & 37.0 \\ & & & & & & \\ \hline \end{tabular} \end{table} \Figures \begin{figure} \caption{ Detachment of H$^-$ ion in bichromatic field with the frequencies $\omega = 0.0043$ and $2\omega$ and equal intensities $I_1 = I_2 = 10^{9} {\rm W}/{\rm cm}^2$. Differential detachment rate (see Eq.(\protect\ref{dd})) (in units $10^{-12}$a.u.) 
as a function of the angle $\theta$ is shown for the first ({\it a}\/, absorption of $n=7$ photons of frequency $\omega$) and second ({\it b}\/, $n=8$) ATD peaks and various values of the field phase difference: solid curve - $\varphi = 0$; short-dashed curve - $\varphi = \frac{1}{8} \pi$; dot-dashed curve - $\varphi = \frac{1}{4} \pi$; dotted curve - $\varphi = \frac{3}{8} \pi$; long-dashed curve - $\varphi = \frac{1}{2} \pi$.} \end{figure} \begin{figure} \caption{ Same as in Fig.1, but for unequal field intensities $I_1 = 10^{9} {\rm W}/{\rm cm}^2$, $I_2 = 4 \times 10^{9} {\rm W}/{\rm cm}^2$. The differential detachment rate is shown in units $10^{-10}$a.u.} \end{figure} \begin{figure} \caption{ Same as in Fig.1, but for unequal field intensities $I_1 = 4 \times 10^{9} {\rm W}/{\rm cm}^2$, $I_2 = 10^{9} {\rm W}/{\rm cm}^2$. The differential detachment rate is shown in units $10^{-10}$a.u.} \end{figure} \begin{figure} \caption{ Same as in Fig.1, but in the strong-field regime: $I_1 = I_2 = 10^{11} {\rm W}/{\rm cm}^2$. The detachment rate for the first ({\it a}\/, absorption of $n=18$ photons of frequency $\omega$) and second ({\it b}\/, $n=19$) ATD peaks is shown in units $10^{-6}$a.u.} \end{figure} \begin{figure} \caption{ Partial detachment rates for various ATD channels for the H$^-$ ion in a bichromatic field with the same parameters as in Fig.1 (perturbative regime) and various values of the field phase difference $\varphi$. $N$ labels the ATD peaks, with the lowest $N=1$ peak corresponding to absorption of 7 photons of frequency $\omega = 0.0043$. The rate $w_{N}^{(u)}$ of electron emission in the upper half-space is shown by circles, its counterpart $w_{N}^{(l)}$ for the lower half-space is depicted by triangles. The plot for $\varphi = \frac{1}{8} \pi$ additionally includes the rate for $\varphi = 0$ (crosses) when emission is polar-symmetrical ($w_{N}^{(u)} = w_{N}^{(l)}$).
The symbols are joined by lines to help the eye.} \end{figure} \begin{figure} \caption{ Same as in Fig.5, but for the detachment rate integrated over all angles $w_N = w_{N}^{(u)} + w_{N}^{(l)}$. The results for three values of the phase $\varphi$ are shown: circles -- $\varphi = 0$; blocks -- $\varphi = \frac{1}{4} \pi$, triangles -- $\varphi = \frac{1}{2} \pi$.} \end{figure} \begin{figure} \caption{ Same as in Fig.5, but for the field parameters chosen as in Fig.4 (strong field regime). The lowest $N=1$ peak corresponds to absorption of 18 photons of frequency $\omega = 0.0043$. The detachment rate is shown in units $10^{-6}$ a.u.} \end{figure} \begin{figure} \caption{ Same as in Fig.6, but for the field parameters chosen as in Fig.7 (strong field regime). The detachment rate is shown in units $10^{-6}$ a.u.} \end{figure} \end{document}
\section{Introduction} \label{sec:intro} With the rapid development of deep learning techniques, the performance of image denoising is improved significantly in recent years \cite{chen2022simple, zamir2022restormer, chen2021hinet, wang2022uformer}. However, deploying a state-of-the-art (SOTA) denoising model on resource constrained devices, such as mobile devices, remains challenging. On the one hand, although NPUs specifically optimized for deep neural networks are ubiquitous on mobile devices, most SOTA network architectures do not consider NPUs' compatibility. Hence they cannot fully utilize the powerful NPUs. On the other hand, the requirement of high resolution processing (720p/1080p or even higher) for real applications exponentially increases the computational and memory access cost, which are key efficiency bottlenecks on mobile devices. There have been some attempts to design lightweight models to promote mobile deployments \cite{xu2021efficient, zamir2022restormer, chen2022simple}. These lightweight models reduce either the FLOPs or the number of parameters of the model. However, recent works show that reducing the FLOPs or the number of parameters does not necessarily lead to a low latency on mobile devices \cite{vasu2022improved, zhang2021edge}. For example, skip connections and multi-branch structures are commonly used design choices for low-level vision tasks \cite{MPRnet, wang2022uformer, chen2021hinet, zamir2022restormer}. However, these operations can incur a high memory access cost, hindering fast inference on mobile devices. In addition to the model architectures, whether the operations are well optimized by NPUs is also essential when improving runtime performance \cite{zhang2021edge}. Benefiting from the powerful parallel computing capability and specialized optimization for common operations, NPUs show great advantages over other processors when processing neural networks. However, operations well optimized by the NPUs are quite limited. 
Architectures containing NPU-unsupported operations will be partially processed by CPUs or GPUs. This introduces the additional data transfer cost between processors and increases the synchronization cost, leading to a severe overhead. For example, ESA \cite{ESA}, which is adopted by the winner in the runtime track of the NTIRE 2022 efficient super-resolution challenge \cite{ESA-ntire}, contains NPUs-unsupported operations like max-pooling with a kernel size of 7 and a stride of 3 and bilinear interpolation with a scale larger than 2, which will be processed by GPUs. In this case, the feature maps have to be moved from NPUs to GPUs until finishing the process, which significantly slows down the inference speed of the model, as shown in Table \ref{attention performance}. \vspace{-0.05cm} In this paper, by conducting extensive experiments on an iPhone 11, we identify the preference of the Apple Neural Engine (ANE), which is a typical type of NPUs, for different network architectures and operations. Based on this, a mobile-friendly denoising network is proposed to significantly improve the model's runtime performance on mobile devices while achieving enhanced denoising performance than other lightweight denoising networks on the real-world denoising benchmarks SIDD \cite{sidd} and DND \cite{dnd}. The experiments show that our model only takes around 20ms when processing a 720p image on an iPhone 11, which offers the possibility to process 720p images or videos in real-time on mobile devices. \vspace{-0.3cm} \section{Network Architecture}\vspace{-0.15cm} In this section, we build a mobile-friendly denoising network from scratch. To ensure that our model can have efficient runtime performance on mobile devices, we only use operations compatible with the ANE. The results of models with different capacities are shown in the experiment section. 
\vspace{-0.4cm} \subsection{Baseline Model} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/plain_model_downsampling.drawio.pdf} \caption{The architecture of the baseline model.} \label{plain_model_with_downsampling} \end{figure}\vspace{-0.2cm} \begin{table}[t]\scriptsize \caption{Quantitative performance of different downsampling factors.}\vspace{0.2cm} \centering \label{performance with different scale} \setlength{\tabcolsep}{1.2mm}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & Setting & Factor & MACs/G & Memory/M & PSNR/dB & Latency/ms \\ \hline \multirow{3}{*}{Baseline} & C16\_N8 & 1 & $12.61$ & $2271$ & $36.74$ & $19.87$ \\ \cline{2-7} & C32\_N12 & $\downarrow$ 2 & $24.66$ & $1448$ & $\bf{38.52}$ & $18.96$ \\ \cline{2-7} & C48\_N16 & $\downarrow$ 4 & $20.30$ & $822$ & $\bf{38.52}$ & $\bf{16.87}$ \\ \hline \end{tabular}} \end{table} As mentioned before, to ensure real-time on-device runtime performance, the memory access cost has to be strictly controlled. Therefore, we start from a DnCNN-like \cite{zhang2017beyond} plain topology, as shown in Fig.\ref{plain_model_with_downsampling} left, which contains only the well-optimized $3 \times 3$ convolutions and ReLU activation functions. To reduce the memory cost, only one residual connection is adopted. We choose this starting point because the DnCNN-like architecture has demonstrated its effectiveness in the image denoising task \cite{zhang2017beyond}, and we remove batch normalization to reduce potential artifacts \cite{wang2018esrgan, lim2017enhanced}. Based on this plain topology, we adopt multiple $3\times3$ convolutions with a stride of 2 to downsample the input, followed by $N$ Conv-3$\times$3+ReLU blocks with a local skip connection. At the end of the network is a PixelShuffle \cite{shi2016real} for reconstruction. The modified model is shown in Fig. \ref{plain_model_with_downsampling} right. Downsampling at the beginning brings two benefits.
First, most of the operations in the model are executed at a low resolution, which significantly reduces the total memory access cost, as shown in Table \ref{performance with different scale}. Second, the increased overall receptive field enables the network to capture more contextual information and improves the denoising performance. We compare the denoising performance of the models with different downsampling factors on the SIDD validation dataset and the runtime performance on an iPhone 11. Both the runtime performance and the total memory access cost (read and write) are evaluated with a 720p input. The results are summarized in Table \ref{performance with different scale}, where $C$ and $N$ represent the number of channels and the number of Conv-3$\times$3+ReLU blocks. Results show that the pre-downsampling enables a wider and deeper model under a similar latency by significantly reducing the memory access cost, thus leading to enhanced denoising performance. We therefore choose the 4$\times$ downsampling model as our final baseline model. \subsection{Attention} \begin{figure}[t] \centering \includegraphics[scale=0.5]{images/attention.drawio.pdf} \caption{The architecture of MFA.} \label{attention} \end{figure} Attention mechanisms have been extensively studied in low-level vision tasks \cite{ESA, hfab, chen2022simple, zamir2022restormer, wang2022uformer}. In previous works, attention modules often adopt complex topologies or use operations that are not optimized by the ANE \cite{ESA}, which severely affects the runtime performance, as shown in Table \ref{attention performance}. In this paper, we propose a simple yet effective mobile-friendly attention module, MFA. Its architecture is shown in Fig.\ref{attention}. The advantages of this architecture are mainly reflected in the following aspects.
First, we discard the complex topology and retain only the residual connection needed to multiply the learned attention maps with the input feature maps, acting as a spatial attention mechanism. Second, we further downsample the feature maps to reduce the latency of the attention module. Finally, we only use operations well optimized by the ANE, such as the common $3\times3$ convolution, the ReLU activation function, and bilinear interpolation with a scale of 2. \begin{table}[t]\scriptsize \caption{Performance of different attention mechanisms.}\vspace{0.2cm} \centering \label{attention performance} \setlength{\tabcolsep}{1.8mm}{ \begin{tabular}{|c|c|c|c|c|c|} \hline Model & Attention & MACs/G & Memory/M & PSNR/dB & Latency/ms \\ \hline \multirow{4}{*}{Baseline} & ESA \cite{ESA} & $12.78$ & $562$ & $38.59$ & $43.23^*$ \\ \cline{2-6} & SCA \cite{chen2022simple} & $12.52$ & $400$ & $38.41$ & $\bf{15.20}$ \\ \cline{2-6} & HFAB \cite{hfab} & $14.60$ & $574$ & $38.51$ & $18.94$ \\ \cline{2-6} & MFA & $12.85$ & $448$ & $\bf{38.60}$ & $15.61$ \\ \hline \end{tabular}} \end{table} Building on MFA, we propose a mobile-friendly denoising block, MFDB, which contains $K$ Conv-3$\times$3+ReLU blocks followed by one MFA. The width of the MFA is set to $\frac{1}{4}$ of the width of the model. In Table \ref{attention performance}, we compare MFA with the attention modules commonly used in low-level vision tasks, including ESA \cite{ESA}, SCA \cite{chen2022simple}, and HFAB \cite{hfab}. It can be seen from the table that MFA achieves the best denoising performance. Meanwhile, MFA is also close to SCA in terms of latency, with a difference of only 0.4ms.
Note that although ESA has a memory access cost comparable to the other methods, its on-device latency is still significantly higher due to the unsupported operations mentioned in section 1.\vspace{-0.2cm} \subsection{Activation}\vspace{-0.02cm} Although the Rectified Linear Unit (ReLU) has been extensively used in low-level vision tasks, many SOTA methods tend to replace ReLU with other activation functions, such as GELU, LReLU, and PReLU \cite{chen2022simple, hfab}, for better performance. In order to test the ANE's support for different activation functions and the potential performance gain, we replace ReLU with several different activation functions, as shown in Table \ref{activation}. The results in the table show that these activation functions are all well optimized by the ANE. Replacing ReLU with LReLU results in a performance gain of 0.13dB on the validation dataset of SIDD, while the inference time on an iPhone 11 remains nearly unchanged. We therefore replace all ReLU activations in our baseline model with LReLU. \begin{table}[t]\scriptsize \caption{Quantitative performance of different activation functions.} \vspace{0.2cm} \centering \label{activation} \setlength{\tabcolsep}{4.5mm}{ \begin{tabular}{|c|c|c|c|} \hline Model & Activation & PSNR/dB & Latency/ms \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Baseline+\\ MFA\end{tabular}} & ReLU & $38.60$ & $15.61$ \\ \cline{2-4} & GELU & $38.71$ & $16.11$ \\ \cline{2-4} & PReLU & $38.71$ & $15.84$ \\ \cline{2-4} & LReLU & $38.73$ & $15.74$ \\ \hline \end{tabular}} \end{table}\vspace{-0.2cm} \subsection{Lightweight Feature Extraction} To further improve the feature extraction and representation capabilities of the downsampling module, we replace the stride-2 convolution with the Haar transform, which can be efficiently implemented as a convolution with a kernel size of 2.
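The kernel-size-2 convolution form of the Haar transform can be sketched in pure Python (an illustration on a single-channel patch, not the actual network code): each 2$\times$2 block of the input is mapped to four subband coefficients by the orthonormal Haar filters, and the mapping has an exact inverse.

```python
# 2x2 orthonormal Haar transform applied as a stride-2 "convolution":
# every 2x2 block of a single-channel image yields one coefficient in
# each of the four subbands (low-pass + three detail bands).
def haar_forward(img):
    h, w = len(img), len(img[0])  # h, w assumed even
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, h, 2):
        r_ll, r_lh, r_hl, r_hh = [], [], [], []
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            r_ll.append((a + b + c + d) / 2)  # low-pass (average)
            r_lh.append((a + b - c - d) / 2)  # vertical detail
            r_hl.append((a - b + c - d) / 2)  # horizontal detail
            r_hh.append((a - b - c + d) / 2)  # diagonal detail
        ll.append(r_ll); lh.append(r_lh); hl.append(r_hl); hh.append(r_hh)
    return ll, lh, hl, hh

def haar_inverse(ll, lh, hl, hh):
    # exact inverse: the filters are orthonormal, so the inverse uses the
    # same +/- pattern transposed.
    h, w = len(ll), len(ll[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            s, v, u, x = ll[i][j], lh[i][j], hl[i][j], hh[i][j]
            img[2 * i][2 * j] = (s + v + u + x) / 2
            img[2 * i][2 * j + 1] = (s + v - u - x) / 2
            img[2 * i + 1][2 * j] = (s - v + u - x) / 2
            img[2 * i + 1][2 * j + 1] = (s - v - u + x) / 2
    return img

patch = [[1.0, 2.0, 3.0, 4.0],
         [5.0, 6.0, 7.0, 8.0],
         [9.0, 10.0, 11.0, 12.0],
         [13.0, 14.0, 15.0, 16.0]]
subbands = haar_forward(patch)
reconstructed = haar_inverse(*subbands)
```

Because the four filters are orthonormal, the transform preserves the signal energy and reconstruction is exact, which is what makes it attractive as a lossless downsampling step.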
Compared to a stride-2 convolution, the Haar transform is invertible, which ensures that the frequency information of the input is captured losslessly. This is very helpful for restoring noisy image details \cite{liu2018multi}. In addition, the Haar transform yields a more compact feature representation, so fewer hidden channels between the downsampling blocks suffice for good denoising performance, which also reduces the latency. Table \ref{ablation} shows that using the Haar transform to extract features significantly improves the model's denoising and on-device runtime performance: it brings a 0.22dB performance gain on SIDD and a 1.32ms latency reduction on an iPhone 11.\vspace{-1em} \begin{figure}[b] \centering \includegraphics[scale=0.56]{images/reparameterization.drawio.pdf} \caption{The structure of RepConv.} \label{repconv} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/overall.pdf} \caption{The overall architecture of MFDNet.} \label{overall} \end{figure} \subsection{Reparameterization} The idea of reparameterization was first proposed by RepVGG \cite{ding2021repvgg}. The core idea is to parameterize a plain topology with parameters transformed from a more complex topology (e.g., a multi-branch topology). Model reparameterization is very beneficial in the design of mobile-friendly models. Complex topologies, such as multi-branch structures, can significantly increase the memory access cost and slow down inference. However, a plain topology lags behind in feature extraction capability, resulting in compromised model performance. Model reparameterization can be used to address this issue. In the training phase, the model takes advantage of the complex topology to enrich the feature representation and bring performance gains.
In the testing phase, the model reparameterization method is used to simplify the topology and improve the inference speed of the model without a performance drop. In this paper, we use an expand-and-squeeze topology for training, named RepConv, since the wider features result in a better feature representation. As shown in Fig.\ref{repconv}, the topology consists of two $1\times 1$ convolutions, a $3\times 3$ convolution, and a skip connection in the training phase. In the testing phase, we merge the three convolutions and the skip connection into a single $3 \times 3$ convolution by reparameterization, thus eliminating the cascaded and multi-branch structures. In Table \ref{rep}, we compare RepConv with other reparameterization methods commonly used in low-level vision tasks, including ECB \cite{zhang2021edge} and RRRB \cite{hfab}. Results show that RepConv achieves the best denoising performance with an improvement of 0.04dB on SIDD.\vspace{-0.1cm} \begin{table}[t]\scriptsize \caption{Comparisons between different reparameterization methods.} \vspace{0.2cm} \centering \label{rep} \setlength{\tabcolsep}{4.5mm}{ \begin{tabular}{|c|c|c|} \hline Model & Reparameterization Method & PSNR/dB \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}MFDNet w/o\\ reparameterization\end{tabular}} & None & 38.95 \\ \cline{2-3} & ECB \cite{zhang2021edge} & 38.97 \\ \cline{2-3} & RRRB \cite{hfab} & 38.95 \\ \cline{2-3} & RepConv & 38.99 \\ \hline \end{tabular}} \end{table}\vspace{-1.0em} \subsection{Summary} We have now built our mobile-friendly denoising network, MFDNet, step by step from the baseline model. The architecture of MFDNet is shown in Fig.\ref{overall}. For MFDNet, we set both the number of MFDBs ($M$) and the number of RepConv+LReLU blocks in each MFDB ($K$) to 3.\vspace{-0.5cm} \section{Experiment} In this section, we first analyze the role of the different model design choices discussed in the previous sections in terms of both denoising and runtime performance.
We then apply our model to the real-world denoising benchmarks SIDD and DND. To test the models' on-device runtime performance, we execute them 300 times on an iPhone 11 and report the average elapsed time. Note that in all experiments, latency marked with $*$ indicates that the model contains ANE-unsupported operations and is partially processed by CPUs/GPUs. The computational complexity, latency, and total memory access cost in all experiments are evaluated with a 720p input.\vspace{-0.2cm} \begin{table}[t]\scriptsize \caption{Ablation study of different components in MFDNet.}\vspace{0.2cm} \centering \label{ablation} \setlength{\tabcolsep}{0.43mm}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline & MFA & \begin{tabular}[c]{@{}c@{}}ReLU \\ $\rightarrow$ LReLU\end{tabular} & \begin{tabular}[c]{@{}c@{}}Stride-2 Conv \\ $\rightarrow$ Haar Transform\end{tabular} & RepConv & PSNR/dB & Latency/ms\\ \hline \multirow{5}{*}{Baseline} & & & & & $38.54$ & $12.82$ \\ \cline{2-7} & $\checkmark$ & & & & $38.60$ & $15.61$ \\ \cline{2-7} & $\checkmark$ & $\checkmark$ & & & $38.73$ & $15.74$ \\ \cline{2-7} & $\checkmark$ & $\checkmark$ & $\checkmark$ & & $38.95$ & $14.42$ \\ \cline{2-7} & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $38.99$ & $14.42$ \\ \hline \end{tabular}} \end{table} \subsection{Ablation Study} We conduct our ablation study on the validation dataset of SIDD and measure the models' latency on an iPhone 11. We train our model using the Adam optimizer with the learning rate initialized to 4e-4 and halved every 100k iterations. We train the model for a total of 1M iterations with a batch size of 32 and use 256$\times$256 patches cropped from SIDD-Medium as the training dataset. We start from the baseline model with a downsampling factor of 4, as mentioned in section 2.1, and build MFDNet step by step. We set the depth of the baseline model ($N$) to 9 and the width to 48. Table \ref{ablation} shows the effectiveness of the different components.
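The branch folding that reparameterization relies on can be illustrated in a toy single-channel setting (a sketch of the general principle, not the actual expand-and-squeeze merge): a skip connection around a 3$\times$3 convolution is absorbed by adding 1 to the center tap of the kernel, so the two-branch output equals a single convolution.

```python
def conv2d_same(img, k):
    # naive single-channel 3x3 cross-correlation with zero padding
    # ("same" output size); purely illustrative, not optimized.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        s += img[ii][jj] * k[di + 1][dj + 1]
            out[i][j] = s
    return out

def add_maps(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fold_skip(k):
    # absorb the identity branch: W_rep = W + I, i.e. center tap + 1
    folded = [row[:] for row in k]
    folded[1][1] += 1.0
    return folded

x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
w = [[0.0, 1.0, 0.0], [1.0, -4.0, 1.0], [0.0, 1.0, 0.0]]
two_branch = add_maps(conv2d_same(x, w), x)   # conv(x, W) + x
merged = conv2d_same(x, fold_skip(w))         # conv(x, W + I)
```

The same algebra extends to folding cascaded $1\times1$ and $3\times3$ convolutions into one kernel, which is why the inference-time model keeps a plain topology with no extra memory traffic for the branches.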
\vspace{-1em} \subsection{Application} We apply our model to the denoising task on two real-world denoising benchmarks, SIDD and DND. All training settings are identical to those used in the ablation experiments. The results are summarized in Table \ref{denoising and runtime performance}. For MFDNet, we also design two models with different scales: MFDNet-S and MFDNet-L. For MFDNet-S, we reduce $M$ and $K$ to 1. For MFDNet-L, we keep $K=3$ unchanged and increase $M$ to 6. We compare our model with the models proposed in \cite{zhang2017beyond, zhuo2019ridnet, chen2022simple, chen2021hinet, wang2020practical}. Note that as these models are not designed specifically for mobile devices, we prune them in depth and width to ensure that they can run on mobile devices. For DnCNN \cite{zhang2017beyond}, we set both the width and depth to 12, and for DnCNN-S, we set the width to 16 and the depth to 3. For RIDNet \cite{zhuo2019ridnet}, we trim the number of channels to 8 and set the channel reduction to 2. For NAFNet \cite{chen2022simple}, the number of channels is also trimmed to 8 and the number of blocks is reduced to 7. For HINet \cite{chen2021hinet}, we adjust the number of channels and the depth of the model to 8 and 2, respectively. We can see from Table \ref{denoising and runtime performance} that MFDNet and MFDNet-L achieve the best denoising performance with the lowest latency compared to models with comparable computational complexity. For HINet, the ANE-unsupported operation, instance normalization, slows down inference. Note that the latency would drop to 47.26ms if we removed the instance normalization from HINet, which indicates the significant impact of unsupported operations. For models with computational complexity under 5 GMACs, although NAFNet achieves the best denoising performance, its unsupported operation, layer normalization, processed by CPUs, severely affects the runtime performance.
Even if we remove the layer normalization, the latency still remains 71.32ms due to the high memory access cost of the model itself. In contrast, MFDNet-S can process a single 720p image within 10ms and the denoising performance far exceeds that of DnCNN with similar latency.\vspace{-0.3cm} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/0259+0323.drawio.pdf} \caption{Qualitative comparisons on SIDD.} \label{fig:my_label} \end{figure} \begin{table}[]\scriptsize \caption{Quantitative performance of different methods.}\vspace{0.2cm} \label{denoising and runtime performance} \centering \setlength{\tabcolsep}{0.7mm}{ \begin{tabular}{|cccccccc|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Model}} & \multicolumn{1}{c|}{\multirow{2}{*}{MACs/G}} & \multicolumn{1}{c|}{\multirow{2}{*}{Memory/M}} & \multicolumn{1}{c|}{\multirow{2}{*}{Latency/ms}} & \multicolumn{2}{c|}{SIDD} & \multicolumn{2}{c|}{DND} \\ \cline{5-8} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & SSIM \\ \hline \multicolumn{1}{|c|}{DnCNN-S \cite{zhang2017beyond}} & \multicolumn{1}{c|}{$2.77$} & \multicolumn{1}{c|}{$583$} & \multicolumn{1}{c|}{$11.65$} & \multicolumn{1}{c|}{$33.84$} & \multicolumn{1}{c|}{$0.877$} & \multicolumn{1}{c|}{$36.41$} &$0.910$ \\ \hline \multicolumn{1}{|c|}{NAFNet \cite{chen2022simple}} & \multicolumn{1}{c|}{$3.81$} & \multicolumn{1}{c|}{$2974$} & \multicolumn{1}{c|}{$1384.47^*$} & \multicolumn{1}{c|}{$\bf{38.66}$} & \multicolumn{1}{c|}{$\bf{0.951}$} & \multicolumn{1}{c|}{$\bf{38.74}$} & $\bf{0.945}$ \\ \hline \multicolumn{1}{|c|}{MFDNet-S} & \multicolumn{1}{c|}{$2.34$} & \multicolumn{1}{c|}{$142$} & \multicolumn{1}{c|}{$\bf{8.27}$} & \multicolumn{1}{c|}{$36.93$} & \multicolumn{1}{c|}{$0.932$} & \multicolumn{1}{c|}{$38.09$} &$0.939$ \\ \hline \multicolumn{1}{|l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} \\ \hline \multicolumn{1}{|c|}{DnCNN\cite{zhang2017beyond}} & \multicolumn{1}{c|}{$12.88$} & \multicolumn{1}{c|}{$2720$} & \multicolumn{1}{c|}{$23.37$} & \multicolumn{1}{c|}{$36.24$} & \multicolumn{1}{c|}{$0.921$} & \multicolumn{1}{c|}{$38.13$} & $0.936$ \\ \hline \multicolumn{1}{|c|}{HINet \cite{chen2021hinet}} & \multicolumn{1}{c|}{$11.68$} & \multicolumn{1}{c|}{$2178$} & \multicolumn{1}{c|}{$84.74^*$} & \multicolumn{1}{c|}{$37.83$} & \multicolumn{1}{c|}{$0.943$} & \multicolumn{1}{c|}{$36.82$} & $0.931$ \\ \hline \multicolumn{1}{|c|}{MFDNet} & \multicolumn{1}{c|}{$11.46$} & \multicolumn{1}{c|}{$384$} & \multicolumn{1}{c|}{$\bf{14.42}$} & \multicolumn{1}{c|}{$\bf{38.90}$} & \multicolumn{1}{c|}{$\bf{0.952}$} & \multicolumn{1}{c|}{$\bf{39.06}$} & $\bf{0.947}$ \\ \hline \multicolumn{1}{|l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} \\ \hline \multicolumn{1}{|c|}{RIDNet \cite{zhuo2019ridnet}} & \multicolumn{1}{c|}{$20.39$} & \multicolumn{1}{c|}{$4224$} & \multicolumn{1}{c|}{$75.89$} & \multicolumn{1}{c|}{$38.01$} & \multicolumn{1}{c|}{$0.945$} & \multicolumn{1}{c|}{$38.88$} & $0.947$ \\ \hline \multicolumn{1}{|c|}{PMRID \cite{wang2020practical}} & \multicolumn{1}{c|}{$15.07$} & \multicolumn{1}{c|}{$4448$} & \multicolumn{1}{c|}{$80.79$} & \multicolumn{1}{c|}{$38.96$} & \multicolumn{1}{c|}{$0.953$} & \multicolumn{1}{c|}{$38.82$} & $0.948$ \\ \hline \multicolumn{1}{|c|}{MFDNet-L} & \multicolumn{1}{c|}{$21.81$} & \multicolumn{1}{c|}{$684$} & \multicolumn{1}{c|}{$\bf{24.08}$} & \multicolumn{1}{c|}{$\bf{39.10}$} & \multicolumn{1}{c|}{$\bf{0.954}$} & \multicolumn{1}{c|}{$\bf{39.15}$} & \multicolumn{1}{c|}{$\bf{0.948}$} \\ \hline \end{tabular}} \end{table} \section{Conclusion} In this paper, we identify the network architectures and operations 
that run on NPUs with low latency while maintaining excellent denoising performance, through extensive analysis and experiments. Based on these findings, we build a mobile-friendly denoising network from scratch. Experiments show the advantages of our method in terms of both denoising and runtime performance. We hope this work will promote the application of CNN-based denoising models on mobile devices.
\section{Introduction} Biomembranes are non-equilibrium structures due to the non-thermal energy contributions resulting from the activity of a wide variety of vicinal proteins. While the phase behavior and morphology of lipid bilayer membranes have been the subject of an extensive number of studies~\cite{mouritsen05}, most of these studies have focused on the equilibrium properties of the membranes. This has changed during the last decade or so with investigations of the effects that active protein pumps have on the undulations of lipid membranes, their morphology, and the renormalization of their mechanical constants~\cite{manneville-99,ramaswamy-00,gautam-02,gov-04,gov-05,lomholt-06,giahi-07,schlomovitz-07}. Of particular importance is another class of membrane-bound proteins that actively translocate phospholipids from one leaflet of a biomembrane to the other~\cite{daleke-03,devaux-06,pomorski-06}. The effects of these phospholipid translocators on the mechanics and morphology of lipid membranes have not received much attention. In this article, we present a study of self-assembled lipid bilayers in the presence of active lipid translocation, using dissipative particle dynamics simulations. Lipid synthesis in eukaryotic cells takes place almost exclusively on the cytosolic leaflet of the endoplasmic reticulum (ER), which leads to an asymmetry in the lipid composition across the bilayer. In order to maintain a symmetric lipid density across the ER bilayer, nearly half of the newly synthesized lipids are rapidly translocated to the other leaflet~\cite{pomorski-06}. In contrast, the plasma membrane is marked by an acute asymmetry in the lipid composition. Indeed, while phosphatidylcholine and sphingomyelin are predominantly present in the exoplasmic leaflet, phosphatidylserine and phosphatidylethanolamine are mainly found in the cytosolic leaflet~\cite{alberts-cell}.
Maintenance of the symmetric lipid distribution in the ER or the asymmetric lipid distribution in the plasma membrane {\em cannot} be mediated by thermally induced lipid movements (also termed passive flip-flops) alone. Indeed, passive flip-flops of phospholipids are energetically unfavorable due to the large energy barrier, 20-50 kcal/mol, associated with the translocation of the polar head group through the low dielectric permittivity hydrocarbon core of the bilayer. Consequently, the rate of passive flip-flops of phospholipids is extremely small, of the order of $10^{-5} {\rm s}^{-1}$~\cite{abreu-04,liu-05}, i.e., on average, a single lipid experiences a thermally induced flip-flop roughly once a day. The lipid distribution across the bilayer is therefore actively maintained by a class of membrane-bound proteins known as phospholipid translocators~\cite{alberts-cell}. These include the adenosine triphosphate-dependent flippases and floppases, and the energy-independent scramblases~\cite{daleke-03,pomorski-06,devaux-06}. Given the difficulties in purifying membrane-bound proteins, most lipid translocators have not been identified. Moreover, the mechanism(s) of active flip-flop, mediated by phospholipid translocators, remain elusive. A few models have been proposed as mechanisms for active phospholipid translocation~\cite{langley-79,kol-04}. In particular, Pomorski and Menon~\cite{pomorski-06} recently proposed a mechanism similar to that of swiping a magnetic card through a card reader. In this model, the hydrophilic head group of the flipped/flopped lipid (the magnetic strip of the card) is shielded from the hydrophobic environment of the bilayer, thereby facilitating its translocation across the bilayer. Recently, Sens~\cite{sens-04} theoretically investigated the conformational response of an infinitely large membrane to a localized disturbance in the form of a localized transbilayer asymmetry in the lipid density.
There, he found that this asymmetry may transiently lead to the formation of a bud-like invagination followed by its relaxation. In this paper, we present a model for lipid translocation that is reminiscent of the magnetic swipe card model~\cite{pomorski-06}. The model is then investigated via large-scale dissipative particle dynamics simulations~\cite{sunil-mohamed-04,sunil-mohamed-05,laradji-06,yamamoto-02,shillcock-02}. To our knowledge, the presented work is the first simulation study of the effect of active flip-flop on the mechanical and morphological properties of lipid membranes. \section{Bilayer Model} In the dissipative particle dynamics (DPD) model used here for a self-assembled lipid bilayer in an explicit solvent, a lipid molecule is modeled as a flexible amphiphilic chain of beads consisting of one ``head'' ($H$) bead attached to three ``tail'' ($T$) beads via Hookean spring bonds. The solvent is modeled as single beads ($W$). All particles have the same mass $m$. In this model, interactions between any two non-bonded particles, within a range $r_0$, are soft and repulsive. The forces acting on the particles are grouped into three categories: (i) conservative forces, (ii) dissipative forces, and (iii) random forces. The conservative force between any two particles is \begin{equation} {\bf F}_{ij}^{\left(C_1\right)}= \alpha_{ij}\omega(r_{ij})\hat{{\bf r}}_{ij}, \label{fconservative} \end{equation} where $\alpha_{ij}$ is the interaction strength between particles $i$ and $j$, at respective positions ${\bf r}_i$ and ${\bf r}_j$, ${\bf r}_{ij}={\bf r}_i - {\bf r}_j$, and $\hat{{\bf r}}_{ij}={\bf r}_{ij}/|{\bf r}_{ij}|$. Bonded particles belonging to a lipid also experience a conservative Hookean force given by \begin{equation} {\bf F}_{i,i+1}^{\left(C_2\right)}= -k\left(1-r_{i,i+1}/b\right)\hat{{\bf r}}_{i,i+1}, \label{fhookian} \end{equation} where $k$ is the spring constant and $b$ is the preferred bond length.
The dissipative force between particles $i$ and $j$ is given by \begin{equation} {\bf F}_{ij}^{\left(D\right)}= -\Gamma_{ij}\omega^2(r_{ij})(\hat{{\bf r}}_{ij}\cdot{\bf v}_{ij})\hat{{\bf r}}_{ij}, \label{fdissip} \end{equation} where $\Gamma_{ij}$ is the dissipative strength for the pair $(i,j)$, and ${\bf v}_{ij}={\bf v}_{i}-{\bf v}_{j}$ is their relative velocity. The random force between $i$ and $j$ is given by \begin{equation} {\bf F}_{ij}^{\left(R\right)}= \sigma_{ij}(\Delta t)^{-1/2}\omega(r_{ij})\zeta_{ij}\hat{{\bf r}}_{ij}, \label{frand} \end{equation} where $\sigma_{ij}$ is the amplitude of the random noise for the pair $(i,j)$, and $\zeta_{ij}$ is a random variable with zero mean and unit variance which is uncorrelated for different pairs of particles and different time steps. Together, the dissipative and random forces act as a thermostat provided the fluctuation-dissipation theorem is satisfied. This yields the following relation between $\Gamma_{ij}$ and $\sigma_{ij}$, \begin{equation} \sigma_{ij}^2=2\Gamma_{ij} k_{\rm B}T, \end{equation} where $k_{\rm B}$ is Boltzmann's constant and $T$ is the thermostat temperature. In Eqs.~(\ref{fconservative}), (\ref{fdissip}) and (\ref{frand}), the weight factor, $\omega(r)$, is chosen as \begin{equation} \omega(r)= \begin{cases} 1-r/r_0 & \text{for $r$ $\leq$ $r_0$}\\ 0& \text{for $r$ $>$ $r_0$,} \end{cases} \end{equation} where $r_0$ is the interaction cutoff. The particle trajectories are obtained by integrating the equations of motion using the velocity-Verlet algorithm~\cite{ilpo-00}. In the simulations, $r_0$ and $m$ set the scales for length and mass, respectively, and $k_{\rm B}T$ sets the energy scale. The time scale is given by $\tau=\left(mr_0^2/k_{\rm B}T\right)^{1/2}$. The amplitude of the random force is taken to be the same for all pairs and is given by $\sigma_{ij}=\sigma=3.0\left(k_{\rm B}^3T^3m/r_0^2\right)^{1/4}$, and the fluid density is $\rho=3.0r_0^{-3}$.
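As a quick numerical sanity check of these definitions (a pure-Python sketch in simulation units, $k_{\rm B}T=1$ and $r_0=1$; the repulsion amplitude $\alpha=25$ used below is illustrative), the weight function and the fluctuation-dissipation relation $\sigma^2=2\Gamma k_{\rm B}T$ give:

```python
def omega(r, r0=1.0):
    # linear DPD weight function: 1 - r/r0 inside the cutoff, 0 outside
    return 1.0 - r / r0 if r <= r0 else 0.0

def gamma_from_sigma(sigma, kBT=1.0):
    # fluctuation-dissipation theorem: sigma^2 = 2 * Gamma * kBT
    return sigma ** 2 / (2.0 * kBT)

def conservative_force_mag(alpha, r, r0=1.0):
    # magnitude of the soft pairwise repulsion: alpha * omega(r)
    return alpha * omega(r, r0)
```

With the value $\sigma=3.0$ quoted above and $k_{\rm B}T=1$, the dissipative strength implied by the thermostat is $\Gamma=4.5$ in simulation units.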
The amplitudes of the conservative force are chosen to be $\alpha_{HH}=\alpha_{TT}=\alpha_{WW}=\alpha_{WH}=25k_{\rm B}T/r_0$ and $\alpha_{WT}=\alpha_{HT}=200k_{\rm B}T/r_0$. In Eq.~(\ref{fhookian}), the spring constant $k=100k_{\rm B}T$ and $b=0.45 r_0$. The time step is chosen to be $\Delta t=0.01\tau$. The flat bilayer is initially constructed parallel to the $xy$-plane and placed in the middle of the simulation box. It is then allowed to equilibrate until its normal fluctuations attain saturation. The total number of lipids used is 16,000 in a simulation box of size $L\times L \times L_z=(86 \times 86 \times 40)r_0^3$ for the case of symmetric flip-flops and $(80\times 80 \times 46)r_0^3$ for the case of asymmetric flip-flops. The system was subjected to periodic boundary conditions in all three directions. \section{Flip-Flop Scheme} We use the following two-step scheme for ``flippase'' action: (i) formation of a complex and (ii) translocation of a lipid from one leaflet to the other. A lipid to be flipped is randomly selected from one of the two leaflets (see Figure~\ref{fig:algo} for a schematic representation of the lipid to be flipped and the surrounding lipid molecules). Around this selected lipid, a fictitious cylinder is drawn which spans both leaflets of the bilayer. A flippase complex is then defined as the set of all lipids inside this fictitious cylinder. The next step involves the action of a time-dependent flipping force, ${\bf F}^{a}(t)=F_z^{a}(t)\hat{z}$, on the head bead of the selected lipid, so as to translocate it to the opposite leaflet. This force, $F_z^{a}(t)$, acts in the direction normal to the plane of the bilayer, and its magnitude is given by $F_z^{a}(t)=G\Delta z(t)$, where $\Delta z(t)$ is the distance between the head bead of the lipid being translocated and the average $z$-position of all the lipids, in the complex, in the opposite leaflet.
This means that during the translocation process, the amplitude of the flippase force decreases continuously with time. In order to conserve momentum within the fictitious cylinder, a force $-{\bf F}^{a}(t)/(N_c-1)$, where $N_c$ is the number of lipid molecules in the fictitious complex, is concurrently applied to the head beads of all other lipids within the flippase complex. During the translocation of the selected lipid, the head-tail repulsion between the selected lipid and the other surrounding lipids in the membrane is ``screened'' by temporarily setting its amplitude to $\alpha_{HT}=\alpha_{TT}$. This algorithm is therefore in line with the recent swipe card model~\cite{pomorski-06}. \begin{figure}[!ht] \includegraphics[scale=0.5,angle=-90]{ff-schematic.eps} \caption{Flippase complex corresponding to the fictitious cylinder containing the lipid to be flipped. The lipid to be flipped is shown explicitly. However, only the head beads of the other lipids in the flippase complex are shown.} \label{fig:algo} \end{figure} In Figure~\ref{fig:fvst}, the magnitude of the applied force, $F_z^a$, normalized by $G$, is shown as a function of time for the selected lipid as it translocates through the bilayer, mimicking the action of a flippase. The figure shows that the translocation time scale decreases as the amplitude, $G$, of the driving force is increased. In the remainder of the article, all results discussed are based on the case of $G=10k_{\rm B}T/r_0^2$, for which the typical time taken to flip a lipid from one leaflet to the other is about $\tau$. During each time step, a number of lipids are selected at random. Flips are attempted with a probability, $P_{\rm flip}$, if the selected lipid is not already part of an active flipping complex. The success {\it flip rate} is measured by counting the number of lipids that reach the opposite leaflet in every time step.
The flip and flop probabilities are respectively given by \begin{equation} P_{\rm flip} = \frac{1}{1 + \exp\left[-A\left(N_u-N_d-C\right)\right]}, \label{eq:probability} \end{equation} and \begin{equation} P_{\rm flop} = \frac{1}{1 + \exp\left[-A\left(N_d-N_u+C\right)\right]}, \label{eq:probflop} \end{equation} where $N_u$ and $N_d$ are the numbers of lipids in the upper and lower leaflets, respectively. In Eq.~(\ref{eq:probability}), $C$ controls the steady-state number difference, $\Delta N=N_u-N_d$, and $A$ is a constant that fixes the width of the distribution of $\Delta N$. Having developed an active translocation algorithm, we now focus on the effect that active flip-flop has on flat membranes. We consider both cases of symmetric flip-flop, $C=0$, and asymmetric flip-flop, $C\ne 0$. \begin{figure}[!ht] \includegraphics[scale=0.4]{fvst.eps} \caption{Normalized amplitude of the translocation force, ${F_z^{a}}(t)/{G}$, vs. time for four values of the force magnitude, $G$. The data has been averaged over all translocated lipids and over time during steady state.} \label{fig:fvst} \end{figure} \section{Symmetric Flip-Flop} Active flip-flop is labeled symmetric if, on average, the number of up-down translocations (flips) is equal to the number of down-up translocations (flops). At the beginning of the simulations, we start with a flat bilayer with exactly the same number of lipids in the two leaflets. We note that the rate of thermally induced flip-flops in this model is practically zero, in accord with experiments. Using the Irving-Kirkwood formalism~\cite{irving-50}, we calculated the lateral and normal components of the pressure tensor along the $z$-axis, averaged over the $xy$-plane~\cite{schofield-82,goetz-98}. The tension of the membrane and its bending modulus are extracted from the structure factor of the out-of-plane fluctuations of the membrane height~\cite{sunil-mohamed-05}.
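A useful property of the logistic attempt probabilities defined in the flip-flop scheme is that they are exactly complementary, $P_{\rm flip}+P_{\rm flop}=1$, and both equal $1/2$ at the target asymmetry $\Delta N=C$; in the symmetric case the probabilities balance at $\Delta N=0$. A quick numerical check (with illustrative parameter values, not those of the simulations):

```python
import math

def p_flip(n_up, n_down, A, C):
    # logistic probability of attempting an up -> down translocation
    return 1.0 / (1.0 + math.exp(-A * (n_up - n_down - C)))

def p_flop(n_up, n_down, A, C):
    # logistic probability of attempting a down -> up translocation
    return 1.0 / (1.0 + math.exp(-A * (n_down - n_up + C)))
```

Because $P_{\rm flop}$ is obtained from $P_{\rm flip}$ by negating the argument of the exponential, the pair of rules drives the leaflet number difference $\Delta N$ toward $C$ and keeps its distribution width controlled by $A$.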
\subsection{Membrane Tension and Bending Rigidity} The average orientation of the layer normal is taken to be along the $z$-axis. The steady-state profiles of the normal pressure $P_N(z)$ and the lateral pressure $P_L(z)$ along the bilayer normal are calculated from the pressure tensor using the Irving-Kirkwood formalism~\cite{irving-50}. In Figure~\ref{fig:pnpl}, the normal and lateral pressure vs. $z$ of equilibrium membranes are compared with those of membranes with active symmetric flip-flop. The rate of flipping is $10$ flips/$\Delta t$ over a membrane with projected area $86r_0\times 86r_0$. Although the details of the lateral pressure profile are dependent on the model used for the lipids, one observes that the flip-flop activity increases the lateral pressure in the two leaflets while the normal pressure is only weakly affected. As a result, active flip-flop reduces the effective tension on the membrane. \begin{figure}[!ht] \includegraphics[scale=0.4]{pnlat.eps} \caption{Normal pressure $P_N$ and lateral pressure $P_L$ as a function of $z$ (both equilibrium and with an attempted flip rate of 10 flips per $\Delta t$) for a system with $L=86r_0$.} \label{fig:pnpl} \end{figure} By defining a height field $h(x,y)$, which represents the position along the $z$-axis of the bilayer mid-plane at a point $(x,y)$, and its Fourier transform $\tilde{h}({\bf q})$ where ${\bf q}=(q_x,q_y)$, we calculate the circularly averaged structure factor $S(q) = \langle | \tilde{h}({\bf q}) |^2 \rangle/{L^2}$, where $q=\left(q_x^2+q_y^2\right)^{1/2}$. The long wavelength deformations of a lipid membrane from its mean planar conformation are well described by the Helfrich Hamiltonian~\cite{helfrich-73}, ${\cal H} [h(x,y)] = \int dx dy \left[ \frac{\gamma}{2} (\nabla h)^2 + \frac{\kappa}{2} (\nabla^2 h)^2 \right]$, where $\gamma$ is the membrane surface tension and $\kappa$ is its bending modulus. 
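Given the Helfrich form, equipartition yields $S(q)=k_{\rm B}T/(\gamma q^2+\kappa q^4)$, so $\gamma$ and $\kappa$ follow from a linear fit of $k_{\rm B}T/q^2S(q)$ against $q^2$. A minimal numerical sketch, with synthetic data standing in for measured fluctuation spectra:

```python
import numpy as np

def extract_gamma_kappa(q, S, kBT=1.0):
    # Fit kBT/(q^2 S(q)) = gamma + kappa * q^2: the intercept gives the
    # tension gamma, the slope the bending modulus kappa (small-q regime).
    y = kBT / (q**2 * S)
    kappa, gamma = np.polyfit(q**2, y, 1)
    return gamma, kappa
```

In practice the fit should be restricted to small wavevectors, where the Helfrich description holds.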
The equipartition theorem of this model yields a structure factor, \begin{equation} S(q)=\frac{k_{\rm B} T}{\gamma q^2 + \kappa q^4}. \end{equation} Hence, by plotting $k_{\rm B}T/q^2 S(q)$ as a function of $q^2$, one extracts the tension on the membrane, $\gamma$, from the intercept with the vertical axis and the bending modulus, $\kappa$, from the slope at small wavevectors. This is shown in Figure~\ref{sigma-kappa} for the cases of equilibrium and steady state flip-flop with varying flip-flop rates. Figure~\ref{sigma-kappa} shows that as the rate of flip-flop is increased, the intercept of $k_{\rm B}T/q^2 S({\bf q})$ is shifted to lower values, implying a reduction in the tension of the membrane. This is in line with the results from the lateral and normal pressure shown in Figure~\ref{fig:pnpl}. However, Figure~\ref{sigma-kappa} shows that the slope of $k_{\rm B}T/q^2 S(q)$ is independent of the flip-flop rate, implying that for the flip rates considered in our simulation, there does not seem to be any significant effect on the membrane bending modulus due to active flip-flop. We thus conclude that symmetric active flip-flop leads to an increase in the fluctuations of the membrane, which manifests itself in a decrease of the surface tension of the membrane. \begin{figure}[!ht] \includegraphics[scale=0.4]{sigma-kappa-ff.eps} \caption{$k_{\rm B}T/q^2 S(q)$ vs. $q^2$ for different values of attempted flip rates. Different symbols correspond to the cases of equilibrium ($\square$), 1 flip$/\Delta t$ ($\blacksquare$), 2 flips$/\Delta t$ ($\circ$), 5 flips$/\Delta t$ ($\bullet$), and 10 flips$/\Delta t$ ($\triangle$). The simulations are performed on a system containing 16000 lipids with $L=86 r_0$ and $k_{\rm B}T=1.0$.} \label{sigma-kappa} \end{figure} \section{Asymmetric Flip-Flop} We now consider the effect of asymmetric flip-flop. In particular, we focus on the case where only lipids from the bottom leaflet are actively flipped to the upper leaflet. 
This is implemented by assigning a non-zero value to the constant $C$ in the expression for the probability $P_{\rm flip}$ in Eq.~(\ref{eq:probability}). The value of $C$ is kept large, equal to $10^4$, such that the rate of accepted flips can be taken as a constant during simulation time. Flipping is restricted to a small square area ($l \times l$), termed the active region, in the central region of the membrane. This mimics the effect of flips due to localized flippases in a small region of a biomembrane. The effect of asymmetric flipping is then investigated as a function of $l$ and the total number of flips per unit time ($\nu$). $l$ is varied from $10r_0$ to $50r_0$ and the flip rate is varied from $\nu=0.9 \,\tau^{-1}$ to $\nu=20 \, \tau^{-1}$. Asymmetric flipping induces a finite difference in the lipid number densities, $\Delta s =s_u-s_d$, where $s_u$ and $s_d$ are the lipid number densities per unit area of the upper and lower leaflets, respectively. Furthermore, $\Delta s$ increases with time as active flipping proceeds. Therefore, asymmetric flipping is characterized by an absence of steady state and, if maintained, leads to an instability of the membrane, in contrast to the case of symmetric active flip-flop. We found that asymmetric active flipping leads to the formation of two major transient morphologies corresponding to either buds or blisters, depending on the size of the flip area and the flip rate. When the flip rate is low, $\nu <2\,\tau^{-1}$, we found that buds form. This is shown in Figure~\ref{bud_formation}. In this case, a full bud is formed after about $800\tau$. Once formed, the bud remains stable even after the flipping is stopped, and will relax back only on the time scale set by passive flips, which is much larger than the simulation time. The stability of the buds here has to do with the finite size of the membrane~\cite{svetina-89,miao-94}. 
For an infinite membrane, the buds have a finite lifetime set by the relaxation of the in-plane density~\cite{sens-04}. When lipids are flipped at a relatively high rate, we observed the formation of blister structures in the active region. In the case with flipping rate $\nu=2.5 \tau^{-1}$, it takes about $300\tau$ for a blister to form. In Figure~\ref{blister_formation}, we depict snapshots showing the main stages of blister formation at this flip rate. The relatively high active flip rate and slow diffusion of the lipids result in a high excess of lipids in the upper leaflet. This leads to the detachment of the upper leaflet from the lower leaflet, resulting in a protrusion that is reminiscent of a cylindrical micelle connected to the membrane. This is depicted in Figure~\ref{blister_formation}(a). The cylindrical micelle then grows into a sheet-like structure still connected to the membrane, as shown in Figure~\ref{blister_formation}(b). If the process of active flipping is further continued, we found that the membrane becomes unstable. To avoid a destabilization of the membrane, we consider only the cases where active flipping is stopped once the sheet-like structure (Figure~\ref{blister_formation}(b)) is formed. The relaxation of the blister depends on the stage at which active flipping is halted. If active flipping is stopped during the early stage of blister formation, shown in Figure~\ref{blister_formation}(a), the blister relaxes back into the membrane. At late stages, when the sheet is well formed (see Figure~\ref{blister_formation}(b)), in order to reduce the high edge energy, the blister curves, forming a hook-like structure shown in Figure~\ref{blister_formation}(c). Eventually, the sheet closes onto the membrane, forming a hemifusion state, reminiscent of that observed during the intermediate stages of vesicle fusion~\cite{shillcock-05}. This is elucidated in the series of snapshots shown in Figure~\ref{blister_formation}(b-e). 
Eventually, the diaphragm in the hemifusion state ruptures into a pore as shown in Figure~\ref{blister_formation}(f). If active flipping is stopped at an intermediate stage between those shown in Figure~\ref{blister_formation}(a) and (b), the blister relaxation is arrested at the hemifusion state shown in Figure~\ref{blister_stalk}(b). \begin{figure}[!ht] \includegraphics[scale=0.6]{bud_formation.eps} \caption{Bud formation (only a small section of the membrane, with a cut through the budding region, is shown). a) Initial bending of both leaflets and b) neck formation which eventually leads to a bud. The flip rate and size of the active region are $\nu=0.9\tau^{-1}$ and $l=10r_0$, respectively.} \label{bud_formation} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.6]{blister_formation.eps} \caption{Blister formation (only a small section of the membrane, with a cut through the blister, is shown). a) Initial protrusion b) sheet formation c-e) fusion of the sheet with the membrane and f) pore formation. Note that the edge of the blister, clearly visible in (b), moves towards the cut plane in (b) to (e). The flip rate and size of the active region are $\nu=2.5\tau^{-1}$ and $l=30r_0$, respectively. } \label{blister_formation} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.6]{blister_stalk.eps} \caption{In its early stage, a blister can also relax by folding onto itself, forming a bulb-like structure.} \label{blister_stalk} \end{figure} \section{Conclusion} The thickness of the bilayer obtained in the simulations is $4 r_0$. Comparing this with the thickness of lipid bilayers, $5\ {\rm nm}$, we obtain $r_0\approx 1.25 ~{\rm nm}$. In phospholipid bilayers in the fluid phase, the diffusion coefficient of a lipid is typically $D\sim 10^{-12} {\rm m^2/s}$~\cite{alberts-cell}. Comparing this with the diffusion coefficient for lipids obtained in the current simulations, one estimates the DPD time unit, $\tau\approx 0.2 {\rm \mu s}$. 
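The unit mapping quoted above amounts to the following arithmetic (a sketch; the simulated diffusion coefficient $D_{\rm sim}$ is an assumed value, chosen here only so that the arithmetic reproduces the quoted $\tau\approx 0.2\,{\rm \mu s}$, since the text gives only the physical $D$):

```python
# Map DPD units to physical units. D_sim is an assumed simulated lipid
# diffusion coefficient (in r0^2/tau), used for illustration only.
bilayer_thickness_sim = 4.0       # bilayer thickness in units of r0
bilayer_thickness_real = 5.0e-9   # typical lipid bilayer thickness, m
r0 = bilayer_thickness_real / bilayer_thickness_sim   # -> ~1.25 nm

D_real = 1.0e-12                  # m^2/s, fluid-phase lipid diffusion
D_sim = 0.128                     # r0^2/tau (assumption)
tau = D_sim * r0**2 / D_real      # -> ~0.2 microseconds
```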
For the case of $G=5 k_{\rm B}T/r_0^2$, we then find that the average duration of a single flip is about $2 {\rm \mu s}$. Considering that the time scale for a typical protein conformation change ranges from milliseconds to seconds, to make contact with real systems, one has to use smaller values of $G$. However, for practical reasons we use $G=10 k_{\rm B}T/r_0^2$. The typical simulation run in the present work is about $200 {\rm \mu s}$, corresponding to a diffusion length of approximately $15 {\rm nm}$. Although the lipid density equilibrates very fast, the budding event is a result of the global asymmetry in the lipid number. In the case of an infinite membrane, the formed buds will disappear on time scales larger than the diffusion time. However, in finite systems the resulting area difference of the two leaflets leads to buds that are stable within the time scale set by passive flip-flop rates~\cite{svetina-89,miao-94}. The initial tension on the membrane also plays a role in determining the critical flip rate for bud and blister formation. In the case of membranes with fixed projected area, asymmetric flipping from the bottom leaflet to the upper leaflet leads to an asymmetry in the area per lipid in the two leaflets of the membrane. This in turn leads to an increase in the lateral tension of the bottom leaflet while that of the upper leaflet decreases. In order to equalize the area per lipid in the two leaflets, the membrane buckles, which will further increase the tension in both leaflets. Beyond a critical tension, the bottom leaflet will either rupture or decouple from the upper leaflet, resulting in a blister. This also means that the flip rate at which blisters start to form should decrease with increasing tension of the initial equilibrated membrane. We confirm this in our simulations. In conclusion, we have presented a DPD model to study active flipping and its effects on a fluid bilayer membrane. 
We find that symmetric flip-flop results in a reduction in the tension of the membrane without much effect on its bending modulus. Asymmetric flip-flop results in non-equilibrium structures depending on the lipid flip rate. Slow flip rates lead to membrane curvature and bud formation, whereas fast flip rates induce blister formation. \section*{Acknowledgements} PBSK acknowledges DST India for financial support. ML acknowledges the financial support through a grant from the Research Corporation (Award No. CC6689). MEMPHYS is supported by the Danish National Research Foundation.
\section{Introduction}{\label{Sec:Introduction}} The high-spin structures of the rare-earth nuclei have drawn much attention owing to the existence of various exotic excitation modes, such as backbending~\cite{Johnson1971_PLB34-605, Lee1977_PRL38-1454}, signature inversion~\cite{Bengtsson1984_NPA415-189}, band termination~\cite{Bengtsson1983_PST5-165, Afanasjev1999_PR322-1}, superdeformation~\cite{Twin1986_PRL57-811}, the wobbling mode~\cite{Odegard2001_PRL86-5866}, etc. Compared to even-even and odd-$A$ nuclei, doubly-odd nuclei often show a broader variety of nuclear structure phenomena, and they are more challenging to study due to the complexity associated with couplings from both the valence quasiproton and quasineutron. Consequently, only limited information is available for odd-odd nuclei throughout the whole nuclear chart. Usually, it is challenging to make spin-parity and configuration assignments for the states in odd-odd nuclei, since there always exists a high density of low-lying states, which leads to significant level mixing. Recently, a considerable number of two- and four-quasiparticle (4-qp) rotational bands in the doubly-odd nuclei $^{166, 168, 170, 172}$Re~\cite{Li2015_PRC92-014310, Hartley2016_PRC94-054329, Hartley2013_PRC87-024315, Hartley2014_PRC90-017301} have been observed experimentally. These neutron-deficient Re ($Z=75$) isotopes are characterized by fairly small quadrupole deformations of about $\beta_2 \sim 0.2$. Their proton and neutron Fermi surfaces lie close to the $\pi h_{11/2}$ and $\nu i_{13/2}$ sub-shells, respectively, which provides a good opportunity to study the dependence of level crossings and angular momentum alignments on the occupation of specific single-particle orbitals. 
Moreover, these data provide an excellent testing ground for various nuclear cranking models, e.g., the cranked Nilsson-Strutinsky method~\cite{Andersson1976_NPA268-205}, the cranking Hartree-Fock-Bogoliubov (HFB) model with Nilsson~\cite{Bengtsson1979_NPA327-139} and Woods-Saxon potentials~\cite{Nazarewicz1985_NPA435-397, Cwiok1987_CPC46-379}, the projected shell model~\cite{Hara1995_IJMPE4-637}, the tilted axis cranking model~\cite{Frauendorf2001_RMP73-463}, and the cranked non-relativistic~\cite{Terasaki1995_NPA593-1, Egido1993_PRL70-2876, Afanasjev2000_PRC62-054306} and relativistic mean-field models~\cite{Afanasjev1996_NPA608-107, Afanasjev2000_PRC62-054306}, etc. In Ref.~\cite{Li2015_PRC92-014310}, two rotational bands with the configurations assigned as $\pi h_{11/2} \otimes \nu i_{13/2}$ and $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ have been observed in the lightest odd-odd rhenium nucleus $^{166}$Re. Their bandhead spins are assigned as $8^-$ and $6^+$, respectively. Later on, two rotational bands with the same configurations were identified in $^{168}$Re by D. J. Hartley et al.~\cite{Hartley2016_PRC94-054329}. According to the energy level systematics and the additivity of alignment, they suggested that the bandhead spins of the two bands observed in $^{166}$Re should be $10^-$ and $7^+$, respectively. Similar band structures have also been observed in $^{170}$Re~\cite{Hartley2013_PRC87-024315}. In $^{172}$Re, in addition to $\pi h_{11/2} \otimes \nu i_{13/2}$, several multi-qp configurations involving the proton $h_{9/2}$ orbital have been observed in Ref.~\cite{Hartley2014_PRC90-017301}. It is well known that the $h_{9/2}$ orbital has very strong deformation driving effects, which makes the level crossings and alignments in $^{172}$Re extremely complicated. It should be noted that several configuration assignments in these odd-odd Re isotopes are proposed based on alignment properties and observed band crossings. 
Therefore, it is important to determine their bandhead spins and configurations, as well as the level crossing mechanism. In this paper, the cranked shell model (CSM) with pairing correlations treated by a particle-number conserving (PNC) method~\cite{Zeng1983_NPA405-1, Zeng1994_PRC50-1388} will be used to investigate the recently observed 2- and 4-qp high-spin rotational bands in $^{166, 168, 170, 172}$Re. In the PNC method, the pairing Hamiltonian is diagonalized directly in a properly truncated Fock space~\cite{Wu1989_PRC39-666, Molique1997_PRC56-1795}. Therefore, the particle number is strictly conserved and the Pauli blocking effects are taken into account exactly, which makes it especially suitable for the investigation of the high-spin states in doubly-odd nuclei~\cite{He2005_NPA760-263, Li2013_ChinPhysC37-014101, Zhang2016_SciChinaPMA59-672012, Zhang2019_NPA981-107, Liu2019_PRC100-064307}. Note that the PNC scheme has also been implemented in the total-Routhian-surface method~\cite{Fu2013_PRC87-044319}, the cranking Skyrme-Hartree-Fock method~\cite{Liang2015_PRC92-064325}, and the cranking covariant density functional theory~\cite{Shi2018_PRC97-034317, Xiong2020_PRC101-054305} for investigating the high-spin states and the high-$K$ isomers. The PNC-CSM with octupole deformation has also been developed~\cite{He2020_PRC102-064328}. Recently, a detailed comparison of different mean fields and treatments of pairing correlations in the description of rotational excitations has been performed~\cite{Zhang2020_PRC101-054303}. Similar approaches with exactly conserved particle number when treating the pairing correlations can be found in Refs.~\cite{Richardson1964_NP52-221, Pan1998_PLB422-1, Volya2001_PLB509-37, Pillet2002_NPA697-141, Jia2013_PRC88-044303, Jia2013_PRC88-064321, Chen2014_PRC89-014321}. This paper is organized as follows. In Sec.~\ref{Sec:PNC-CSM}, a brief introduction to the PNC-CSM is presented. 
Sec.~\ref{Sec:Num} gives the numerical details of the calculations for the Re isotopes. The comparison of the experimental and calculated moments of inertia (MOIs) and alignments for the rotational bands in $^{166,168,170,172}$Re is shown in Sec.~\ref{Sec:Results}. The level crossing mechanism in these bands is also analyzed. A brief summary is given in Sec.~\ref{Sec:Summary}. \section{Theoretical framework}{\label{Sec:PNC-CSM}} The CSM Hamiltonian with pairing correlations is written as \begin{eqnarray} H_\mathrm{CSM} & = & H_0 + H_\mathrm{P} = H_{\rm Nil}-\omega J_x + H_\mathrm{P} \ , \label{eq:H_CSM} \end{eqnarray} where $H_{\rm Nil}$ is the Nilsson Hamiltonian~\cite{Nilsson1969_NPA131-1}, and $-\omega J_x$ is the Coriolis interaction with the rotational frequency $\omega$ about the $x$ axis. $H_{\rm P}$ is the monopole pairing interaction, \begin{eqnarray} H_\mathrm{P} & = & -G\sum_{\xi\eta}a^{\dagger}_{\xi}a^{\dagger}_{\bar{\xi}}a_{\bar{\eta}}a_{\eta} \ , \label{eq:H_p} \end{eqnarray} where $\bar{\xi}$ $(\bar{\eta})$ denotes the time-reversal state of the Nilsson state $\xi$ ($\eta$), and $G$ is the effective monopole pairing strength. Instead of the single-particle level truncation used in traditional shell-model calculations, a cranked many-particle configuration (CMPC, the eigenstates of the one-body operator $H_0$) truncation (Fock space truncation) is adopted, which is crucial to make the PNC-CSM calculations both workable and sufficiently accurate~\cite{Molique1997_PRC56-1795, Wu1989_PRC39-666}. Usually a CMPC space with a dimension of about 1000 is sufficient for the investigation of the yrast and low-lying excited states in rare-earth nuclei. In contrast to the conventional Bardeen-Cooper-Schrieffer or Hartree-Fock-Bogoliubov approaches, the pairing Hamiltonian is diagonalized directly in the PNC-CSM without introducing a quasiparticle transformation. Therefore, the particle number is conserved from the beginning to the end and the Pauli blocking effects are treated exactly. 
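As a minimal illustration of diagonalizing a monopole pairing Hamiltonian directly in a particle-number-conserving space, consider a toy model with one pair of particles distributed over doubly degenerate levels (this is our own didactic example, not the CMPC space actually used in the calculations):

```python
import numpy as np

def pairing_ground_energy(eps, G):
    # One pair over doubly degenerate levels eps[k]: the basis states
    # |k kbar> all carry exactly two particles, so the particle number
    # is conserved by construction.
    # Matrix elements: H[k,k] = 2*eps[k] - G,  H[k,l] = -G for k != l.
    n = len(eps)
    H = -G * np.ones((n, n)) + np.diag(2.0 * np.asarray(eps, dtype=float))
    return np.linalg.eigvalsh(H)[0]   # lowest eigenvalue
```

For two levels at $\varepsilon=0$ and $\varepsilon=1$ with $G=0.5$, the ground-state energy is $(1-\sqrt{5})/2$, below the unpaired value $0$, showing the pairing gain without ever breaking particle number.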
In particular, odd-$A$ and odd-odd nuclei are treated on the same footing as even-even ones in the PNC-CSM, which makes the calculations much easier and more reliable. By diagonalizing the many-body Hamiltonian in a sufficiently large CMPC space, the eigenstates of $H_{\rm CSM}$ can be obtained as \begin{equation} |\Psi\rangle = \sum_{i} C_i \left| i \right\rangle \qquad (C_i \; \textrm{real}) \ , \end{equation} where $| i \rangle$ is a CMPC and $C_i$ is the corresponding expansion coefficient. The angular momentum alignment for the state $| \Psi \rangle$ is \begin{equation} \langle \Psi | J_x | \Psi \rangle = \sum_i C_i^2 \langle i | J_x | i \rangle + 2\sum_{i<j}C_i C_j \langle i | J_x | j \rangle \ , \end{equation} and the kinematic MOI is \begin{equation} J^{(1)}=\frac{1}{\omega} \langle\Psi | J_x | \Psi \rangle \ . \end{equation} Because $J_x$ is a one-body operator, the matrix element $\langle i | J_x | j \rangle$ ($i\neq j$) may not vanish only when the two CMPCs $|i\rangle$ and $|j\rangle$ differ by one particle occupation~\cite{Zeng1994_PRC50-1388}. After a certain permutation of creation operators, $|i\rangle$ and $|j\rangle$ can be recast into \begin{equation} |i\rangle=(-1)^{M_{i\mu}}|\mu\cdots \rangle \ , \qquad |j\rangle=(-1)^{M_{j\nu}}|\nu\cdots \rangle \ , \end{equation} where $\mu$ and $\nu$ denote two different single-particle states, and $(-1)^{M_{i\mu}}=\pm1$, $(-1)^{M_{j\nu}}=\pm1$ according to whether the permutation is even or odd. 
Therefore, the angular momentum alignment of $|\Psi\rangle$ can be written as \begin{equation} \langle \Psi | J_x | \Psi \rangle = \sum_{\mu} j_x(\mu) + \sum_{\mu<\nu} j_x(\mu\nu) \ , \label{eq:jx} \end{equation} where the diagonal contribution $j_x(\mu)$ and the off-diagonal contribution $j_x(\mu\nu)$ read \begin{eqnarray} j_x(\mu)&=&\langle\mu|j_{x}|\mu\rangle n_{\mu} \ , \nonumber \\ j_x(\mu\nu)&=&2\langle\mu|j_{x}|\nu\rangle\sum_{i<j}(-1)^{M_{i\mu}+M_{j\nu}}C_{i}C_{j} \quad (\mu\neq\nu) \ , \label{eq:jxorb} \end{eqnarray} and \begin{equation} n_{\mu}=\sum_{i}|C_{i}|^{2}P_{i\mu} \ , \end{equation} is the occupation probability of the cranked Nilsson orbital $|\mu\rangle$; $P_{i\mu}=1$ if $|\mu\rangle$ is occupied in the CMPC $|i\rangle$, and $P_{i\mu}=0$ otherwise. The experimental kinematic MOIs are extracted separately for each signature sequence within a rotational band ($\alpha = I$ mod 2) using \begin{eqnarray}\label{eq:exp-moi} \frac{J^{(1)}(I)}{\hbar^2}&=&\frac{2I+1}{E_\gamma(I+1\rightarrow I-1)} \ , \nonumber \\ \hbar\omega(I)&=&\frac{E_{\gamma}(I+1\rightarrow I-1)}{I_x(I+1)-I_x(I-1)} \ , \end{eqnarray} where $I_x(I)=\sqrt{(I+1/2)^2-K^2}$ and $K$ is the projection of the total angular momentum onto the symmetry axis. 
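Equation~(\ref{eq:exp-moi}) can be applied transition by transition; a sketch (the gamma-ray energy in the example is illustrative, not taken from the data sets):

```python
import math

def kinematic_moi(I, E_gamma, K):
    # J^(1) (in hbar^2/MeV) and hbar*omega (in MeV) extracted from the
    # E2 transition I+1 -> I-1 of energy E_gamma (MeV), with
    # I_x(I) = sqrt((I + 1/2)^2 - K^2).
    Ix = lambda J: math.sqrt((J + 0.5)**2 - K**2)
    J1 = (2 * I + 1) / E_gamma
    omega = E_gamma / (Ix(I + 1) - Ix(I - 1))
    return J1, omega
```

This dependence on $I$ in the numerator of $J^{(1)}$ is what makes the extracted MOIs so sensitive to the assumed bandhead spin.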
\section{Numerical details}{\label{Sec:Num}} \begin{table}[h] \centering \caption{\label{tab:def} Deformation parameters ($\varepsilon_2$, $\varepsilon_4$) for Re isotopes in the present PNC-CSM calculation, which are taken from Ref.~\cite{Bengtsson1986_ADNDT35-15} as an average of the adjacent even-even nuclei.} \begin{tabular*}{0.7\columnwidth}{c@{\extracolsep{\fill}}ccccc} \hline \hline ~ & $^{166}$Re & $^{168}$Re & $^{170}$Re & $^{172}$Re \\ \hline $\varepsilon_2$ & 0.125 & 0.168 & 0.196 & 0.213 \\ $\varepsilon_4$ & -0.001 & 0.000 & 0.002 & 0.008 \\ \hline \hline \end{tabular*} \end{table} \begin{table}[h] \centering \caption{\label{tab:ku} The Nilsson parameters ($\kappa$, $\mu$) for Re isotopes, which are taken from Ref.~\cite{Bengtsson1985_NPA436-14} with a slight modification of neutron $\mu_6$ from 0.34 to 0.28.} \begin{tabular*}{0.7\columnwidth}{c@{\extracolsep{\fill}}ccccc} \hline \hline ~ & $N$ & 4 & 5 & 6 \\ \hline \multirow{2}*{Protons} & $\kappa$ & 0.065 & 0.060 & ~ \\ & $\mu$ & 0.57 & 0.65 & ~ \\ \hline \multirow{2}*{Neutrons} & $\kappa$ & ~ & 0.062 & 0.062 \\ & $\mu$ & ~ & 0.43 & 0.28 \\ \hline \hline \end{tabular*} \end{table} In the present work, the deformation parameters ($\varepsilon_2$, $\varepsilon_4$) of $^{166, 168, 170, 172}$Re are taken from Ref.~\cite{Bengtsson1986_ADNDT35-15} (cf., Table~\ref{tab:def}), which are taken as an average of the adjacent even-even nuclei. It can be seen that the deformation parameters increase gradually with neutron number. The Nilsson parameters ($\kappa$, $\mu$) are taken as the traditional values~\cite{Bengtsson1985_NPA436-14} with a slight change of neutron $\mu_6$ (modified from 0.34 to 0.28, cf., Table~\ref{tab:ku}) for a better description of the level crossing behavior in Re isotopes. 
In addition, the proton orbital $\pi 1/2^-[541]$ is shifted upward by about 0.7~MeV to avoid the defect caused by the velocity dependent $l^2$ term in the Nilsson potential in the very high-spin region~\cite{Andersson1976_NPA268-205}. Note that this parameter set is the same as that adopted in our previous work on $^{166}$Ta~\cite{Zhang2016_SciChinaPMA59-672012}, in which the 2-qp rotational bands are described quite well. The proton $N=4, 5$ and neutron $N=5,6$ major shells are adopted to construct the CMPC space. For both protons and neutrons, the dimensions of the CMPC space are about 1000. The effective monopole pairing strengths for $^{166, 168, 170, 172}$Re are determined by fitting the nuclear odd-even mass differences. In principle, they should be different for each nucleus. In the present work, the monopole pairing strengths for $^{166, 168, 170, 172}$Re are chosen to be the same value to get a global fit. They are taken as $G_{\rm p}=0.28$~MeV for protons and $G_{\rm n}=0.48$~MeV for neutrons, respectively. \section{Results and discussion}{\label{Sec:Results}} \begin{figure}[h] \includegraphics[width=0.8\columnwidth]{fig1_nil} \centering \caption{\label{fig1:nil} The cranked single-particle levels near the Fermi surface of $^{170}$Re for (a) protons and (b) neutrons. The positive-parity (negative-parity) levels are displayed by blue (red) lines. The signature $\alpha=+1/2$ ($\alpha=-1/2$) levels are displayed by solid (dotted) lines. } \end{figure} Figure~\ref{fig1:nil} shows the proton and neutron cranked Nilsson levels near the Fermi surface of $^{170}$Re. The single-particle structures of $^{166, 168, 170, 172}$Re are similar to each other, so only $^{170}$Re is shown as an example. It can be seen in Fig.~\ref{fig1:nil} that at low rotational frequencies, there exist a proton shell gap at $Z=76$ and a neutron shell gap at $N=88$. The proton and neutron Fermi surfaces of these Re nuclei lie close to the $\pi h_{11/2}$ and $\nu i_{13/2}$ sub-shells. 
$\pi 9/2^-[514]$ ($h_{11/2}$) is occupied as the lowest proton configuration. This is consistent with the data, which show that $\pi 9/2^-[514]$ forms the yrast bands of the odd-$A$ Re isotopes in this mass region. With increasing neutron number, the orbitals in the neutron $i_{13/2}$ sub-shell close to the Fermi surface move from $\nu 1/2^+[660]$ to $\nu 5/2^+[642]$. Therefore, these odd-odd Re isotopes provide a good opportunity to study the dependence of level crossings on the occupation of specific single-particle orbitals. \subsection{2-qp rotational bands in $^{166,168,170}$Re} \begin{figure}[h] \includegraphics[width=0.8\columnwidth]{fig2_exp} \centering \caption{\label{fig2:exp} The experimental kinematic MOIs for the two configurations (a) $\pi h_{11/2} \otimes \nu i_{13/2}$ and (b) $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ in $^{166,168,170, 172}$Re~\cite{Li2015_PRC92-014310, Hartley2016_PRC94-054329, Hartley2013_PRC87-024315, Hartley2014_PRC90-017301}. Signature $\alpha=0$ and $\alpha=1$ branches are denoted by solid and open symbols, respectively. } \end{figure} In Ref.~\cite{Li2015_PRC92-014310}, two rotational bands with the configurations assigned as $\pi h_{11/2} \otimes \nu i_{13/2}$ and $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ were observed in $^{166}$Re. Their bandhead spins are assigned as $8^-$ and $6^+$, respectively. Similar band structures have also been observed in the heavier odd-odd Re isotopes~\cite{Hartley2016_PRC94-054329, Hartley2013_PRC87-024315, Hartley2014_PRC90-017301}. Fig.~\ref{fig2:exp} shows the experimental kinematic MOIs for these two configurations in $^{166, 168, 170, 172}$Re. It can be seen in Fig.~\ref{fig2:exp}(a) that the experimental MOIs for $\pi h_{11/2} \otimes \nu i_{13/2}$ in $^{168, 170, 172}$Re are quite similar to each other, except for a larger backbending frequency in $^{168}$Re. However, the MOIs for this configuration in $^{166}$Re are much smaller. 
The same situation occurs for $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ [see Fig.~\ref{fig2:exp}(b)]. In principle, the level structures as well as the MOIs should be similar for the same configuration in adjacent nuclei. It can also be seen from Eq.~(\ref{eq:exp-moi}) that, for one rotational band, the extracted kinematic MOIs as a function of rotational frequency are very sensitive to the bandhead spin. Therefore, the bandhead spin assignments for these two bands in $^{166}$Re may be questionable. It should be noted that, according to the energy level systematics and the additivity of alignment, Hartley et al. suggested that the bandhead spins of these two bands observed in $^{166}$Re should be $10^-$ and $7^+$, respectively~\cite{Hartley2016_PRC94-054329}. \begin{figure}[h] \includegraphics[width=0.8\columnwidth]{fig3_exp2} \centering \caption{\label{fig3:exp2} The same as Fig.~\ref{fig2:exp}, but with different bandhead spin assignments for the two configurations in $^{166}$Re. Only the branches with signature $\alpha=0$ are shown in this figure. } \end{figure} Figure~\ref{fig3:exp2} shows the extracted MOIs with different bandhead spin assignments for (a) $\pi h_{11/2} \otimes \nu i_{13/2}$ and (b) $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ in $^{166}$Re. The MOIs for the same configurations in $^{168, 170, 172}$Re are also shown for comparison. Since there is no signature splitting for these two bands in any of these odd-odd Re nuclei, only the branches with signature $\alpha=0$ are shown in Fig.~\ref{fig3:exp2}. It can be seen clearly in Figs.~\ref{fig3:exp2}(a) and (b) that the bandhead spins of these two rotational bands in $^{166}$Re should both be increased by $2\hbar$ to be consistent with the systematics of the experimental MOIs for the same configurations in $^{168, 170, 172}$Re. Therefore, it is reasonable to assign $10^-$ for $\pi h_{11/2} \otimes \nu i_{13/2}$ and $8^+$ for $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ as the bandhead spins. 
Note that the present bandhead spin assignment for $\pi h_{11/2} \otimes \nu i_{13/2}$ is the same as that suggested by Hartley in Ref.~\cite{Hartley2016_PRC94-054329}, while the bandhead spin for $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ is increased by 1$\hbar$ compared with Ref.~\cite{Hartley2016_PRC94-054329}. In the following calculations, the experimental MOIs and alignments for these two bands in $^{166}$Re will be extracted with bandhead spins $10^-$ and $8^+$. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig4_band1} \centering \caption{\label{fig4:band1} Comparison between the experimental and calculated kinematic MOIs $J^{(1)}$ (upper panels) and alignments (lower panels) of the bands with the configuration $\pi h_{11/2} \otimes \nu i_{13/2}$ (labeled as band 1) in $^{166,168,170}$Re. The experimental MOIs and alignments are displayed by black solid ($\alpha = 0$) and red open ($\alpha= 1$) squares, and the calculated values are displayed by black solid ($\alpha = 0$) and red dotted ($\alpha= 1$) lines. The alignment $i$ is defined as $i= \langle J_x \rangle-\omega J_0 -\omega^3 J_1$. The Harris parameters are $J_{0}=13\ \hbar^{2}\rm{MeV^{-1}}$ and $J_{1}=64\ \hbar^{4}\rm{MeV^{-3}}$ for $^{166}$Re~\cite{Li2015_PRC92-014310}, $J_{0}=17\ \hbar^{2}\rm{MeV^{-1}}$ and $J_{1}=50\ \hbar^{4}\rm{MeV^{-3}}$ for $^{168}$Re~\cite{Hartley2016_PRC94-054329} and $^{170}$Re~\cite{Hartley2013_PRC87-024315}. } \end{figure} Figure~\ref{fig4:band1} shows the comparison between the experimental and calculated kinematic MOIs (upper panels) and alignments (lower panels) for the bands with the configuration $\pi h_{11/2} \otimes \nu i_{13/2}$ (labeled as band 1) in $^{166, 168, 170}$Re. For an odd-odd nucleus, the total signature ($\alpha = 0, 1$) is coupled from the odd neutron ($\alpha = \pm 1/2$) and odd proton ($\alpha = \pm 1/2$). 
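The alignment $i= \langle J_x \rangle-\omega J_0 -\omega^3 J_1$ defined in the caption of Fig.~\ref{fig4:band1} is a simple Harris-reference subtraction (a one-line sketch; the numerical values in any check are illustrative only):

```python
def alignment(Jx, omega, J0, J1):
    # Harris-reference subtraction: i = <Jx> - omega*J0 - omega^3*J1,
    # with Jx in hbar, omega in MeV, J0 in hbar^2/MeV, J1 in hbar^4/MeV^3
    return Jx - omega * J0 - omega**3 * J1
```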
It can be seen from the cranked single-particle levels in Fig.~\ref{fig1:nil}(b) that all the $\nu i_{13/2}$ orbitals close to the Fermi surface ($\nu 1/2^+ [660]$ and $\nu 3/2^+ [651]$) have very large signature splitting, while the experimental data of these bands show nearly no signature splitting. In addition, Fig.~\ref{fig1:nil}(a) shows that the signature splitting in $\pi 9/2^-[514]$ $(h_{11/2})$ is very small. This indicates that band 1 in these three nuclei should be coupled from $\alpha = 1/2$ (the favored signature) of the odd neutron with $\alpha =\pm 1/2$ of the odd proton to form the total signature $\alpha = 0, 1$. Note that although the spherical configurations are the same for these three bands, their Nilsson configurations are different. It can be seen in Fig.~\ref{fig1:nil}(b) that the lowest neutron $\nu i_{13/2}$ orbital is $\nu 1/2^+[660]$ for $N=91$ ($^{166}$Re) and $N=93$ ($^{168}$Re), but $\nu 3/2^+[651]$ for $N=95$ ($^{170}$Re). Therefore, the configurations should be $\pi 9/2^-[514](\alpha=\pm 1/2) \otimes \nu 1/2^+[660](\alpha=1/2)$ for $^{166, 168}$Re, and $\pi 9/2^-[514](\alpha=\pm 1/2) \otimes \nu 3/2^+[651](\alpha=1/2)$ for $^{170}$Re. Note that the bandhead spin of this band in $^{166}$Re is assigned as $I_{0}=10\hbar$ based on Fig.~\ref{fig3:exp2}(a), which is the same as that suggested by Hartley in Ref.~\cite{Hartley2016_PRC94-054329}. The experimental data can be reproduced quite well by the PNC-CSM calculations with the above configurations, which confirms the configuration and bandhead spin assignments for these bands. The PNC-CSM calculations predict two sharp upbendings at $\hbar\omega\approx 0.25$~MeV and 0.45~MeV in $^{166}$Re for this band, which are not observed experimentally. In addition, the upbending in $^{170}$Re calculated by the PNC-CSM is weaker than the observed one, which may be caused by the change of the mean field after the level crossing~\cite{Zhang2020_PRC101-054303}.
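The alignments shown in the lower panels of Fig.~\ref{fig4:band1} are obtained by subtracting a Harris reference from the total aligned angular momentum. A minimal sketch of this subtraction is given below (illustrative only; the $^{166}$Re Harris parameters are the values quoted in the figure caption, while the $\langle J_x \rangle$ value is a made-up number, not a data point):

```python
# Illustrative sketch of the alignment extraction
#   i = <Jx> - omega*J0 - omega**3 * J1 ,
# with the 166Re Harris parameters J0 = 13 hbar^2 MeV^-1, J1 = 64 hbar^4 MeV^-3.

def alignment(jx, omega, j0=13.0, j1=64.0):
    """Aligned angular momentum i (in hbar) after Harris-reference subtraction.

    jx    -- total aligned angular momentum <Jx> (hbar)
    omega -- rotational frequency hbar*omega (MeV)
    """
    return jx - omega * j0 - omega**3 * j1

# e.g. a (hypothetical) point with <Jx> = 12 hbar at hbar*omega = 0.2 MeV:
# the reference is 0.2*13 + 0.008*64 = 3.112, leaving i = 8.888 hbar
i = alignment(jx=12.0, omega=0.2)
```

A sharp rise of $i(\omega)$ above this smooth reference is precisely what is identified as an upbending or backbending in the discussion of the figures.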
\begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig5_band2} \centering \caption{\label{fig5:band2} The same as Fig.~\ref{fig4:band1}, but for the bands with the configuration $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ (labeled as band 2) in $^{166, 168, 170}$Re. } \end{figure} Close to the neutron Fermi surface, there exist several negative parity single-particle levels in these Re isotopes, e.g., $\nu 3/2^-[521]$ ($h_{9/2}$), $\nu 5/2^-[523]$ ($f_{7/2}$) and $\nu 5/2^-[512]$ ($h_{9/2}$). Together with the odd-proton $\pi 9/2^-[514]$ ($h_{11/2}$), several positive parity 2-qp states can be established. Fig.~\ref{fig5:band2} is the same as Fig.~\ref{fig4:band1}, but for the bands with the configuration $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ (labeled as band 2) in $^{166, 168, 170}$Re. Similar to band 1, there is also nearly no signature splitting in band 2 of these three nuclei. Fig.~\ref{fig1:nil}(b) shows that signature splitting exists in all these $\nu [f_{7/2} / h_{9/2}]$ orbitals. This indicates that band 2 in these three nuclei should be coupled from the favored signature of the odd neutron with $\alpha=\pm 1/2$ of the odd proton to form the total signature. Therefore, the Nilsson configurations $\pi 9/2^-[514](\alpha=\pm 1/2) \otimes \nu 3/2^-[521](\alpha=-1/2)$ for $^{166}$Re, $\pi 9/2^-[514](\alpha=\pm 1/2) \otimes \nu 5/2^-[523](\alpha=-1/2)$ for $^{168}$Re, and $\pi 9/2^-[514](\alpha=\pm 1/2) \otimes \nu 5/2^-[512](\alpha=1/2)$ for $^{170}$Re are assigned. Note that since $f_{7/2}$ and $h_{9/2}$ are pseudo-spin partners, strong mixing exists between these two orbitals (see the occupation probabilities in Fig.~\ref{fig7:occu}). The bandhead spin of this band in $^{166}$Re is assigned as $I_0=8\hbar$ based on Fig.~\ref{fig3:exp2}(b), which is increased by $1\hbar$ compared with Ref.~\cite{Hartley2016_PRC94-054329}.
It can be seen in Fig.~\ref{fig5:band2} that the experimental kinematic MOIs and alignments can be well reproduced by the PNC-CSM with the above configurations, which confirms the configuration and bandhead spin assignments for these bands. Note that the second upbending observed in $^{170}$Re is not reproduced by the PNC-CSM results. The calculated rotational frequency of this upbending is about $\hbar\omega\approx 0.52$~MeV, which is much higher than the experimental one. After comparing the experimental alignments with those of the adjacent odd-$A$ nuclei $^{167, 169}$Re and $^{167}$W, the first backbendings in bands 1 and 2 of $^{166, 168, 170}$Re were attributed to the $BC$ (the alignment of the second and third $i_{13/2}$ quasineutrons) and $AB$ (the alignment of the lowest $i_{13/2}$ quasineutrons) crossings in Refs.~\cite{Hartley2016_PRC94-054329,Hartley2013_PRC87-024315}. In the following, we will analyze the level crossing mechanism of these bands in detail using the PNC-CSM. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig6_jx} \centering \caption{\label{fig6:jx} The experimental (solid squares) and calculated (black solid lines) angular momentum alignments $\langle J_x \rangle$ with signature $\alpha=0$ for band 1 ($\pi h_{11/2} \otimes \nu i_{13/2}$, upper panels) and band 2 ($\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$, lower panels) in $^{166, 168, 170}$Re. Contributions from protons and neutrons are displayed by red and blue dotted lines, respectively. } \end{figure} The experimental and calculated angular momentum alignments $\langle J_x \rangle$ for band 1 ($\pi h_{11/2} \otimes \nu i_{13/2}$, upper panels) and band 2 ($\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$, lower panels) in $^{166, 168, 170}$Re with signature $\alpha=0$ are shown in Fig.~\ref{fig6:jx}. Since there is nearly no signature splitting in all these bands, we only take the $\alpha=0$ branch as an example.
Note that the smoothly increasing part of the alignment represented by the Harris formula ($\omega J_0+\omega^3 J_1$) is not subtracted in this figure. It can be seen clearly in Fig.~\ref{fig6:jx} that all the contributions to the backbendings/upbendings observed in these bands of $^{166, 168, 170}$Re come from the neutrons. The protons only provide a gradual increase of the angular momentum alignment. Therefore, in the following, only the neutron part will be discussed. \begin{figure}[!] \includegraphics[width=1.0\textwidth]{fig7_occu} \centering \caption{\label{fig7:occu} The occupation probabilities $n_\mu$ of orbital $\mu$ (including both signatures $\alpha=\pm 1/2$) close to the neutron Fermi surface for band 1 ($\pi h_{11/2} \otimes \nu i_{13/2}$, upper panels) and band 2 ($\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$, lower panels) in $^{166, 168, 170}$Re. The positive (negative) parity levels are shown by blue solid (red dotted) lines. } \end{figure} Figure~\ref{fig7:occu} shows the occupation probabilities $n_\mu$ of each orbital $\mu$ close to the neutron Fermi surface for band 1 ($\pi h_{11/2} \otimes \nu i_{13/2}$, upper panels) and band 2 ($\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$, lower panels) in $^{166,168,170}$Re. In the PNC-CSM calculations, the particle number is exactly conserved, whereas the occupation probabilities of the single-particle orbitals change with the rotational frequency. By analyzing the variation of the occupation probabilities with rotational frequency, deeper insight into the level crossing mechanism can be gained. It can be seen in Fig.~\ref{fig7:occu}(a) that the $n_\mu$ of $\nu 3/2^-[521]$ increases from 1.1 to 1.6 with the rotational frequency increasing from about 0.20~MeV to 0.30~MeV, while the $n_\mu$ of $\nu 5/2^-[523]$ decreases from 0.9 to 0.3. At the same time, the $n_\mu$ of some other orbitals in the $N=5$ shell, e.g., $\nu5/2^-[512]$, slightly increase or decrease.
This can be easily understood from the cranked single-particle levels in Fig.~\ref{fig1:nil}(b). The $\nu 5/2^-[523]$ is slightly above the neutron Fermi surface, and is partly occupied due to the pairing correlations. With increasing rotational frequency, this orbital moves further away from the Fermi surface. Therefore, the occupation probability of this orbital becomes smaller after the level crossing frequency. Meanwhile, the occupation probability of $\nu 3/2^-[521]$, which approaches the Fermi surface, becomes larger with increasing rotational frequency. So the predicted sharp upbending at $\hbar\omega\approx 0.25$~MeV in band 1 of $^{166}$Re may come from the level crossing of these two pseudo-spin partners. Around $\hbar\omega\approx 0.45$~MeV, the $n_\mu$ of $\nu 1/2^+[660]$ and $\nu 3/2^+[651]$ increases sharply from about 1.0 to 2.0 and 0.0 to 1.0, respectively, while the $n_\mu$ of $\nu 3/2^-[521]$ decreases from 1.7 to 0.1. This indicates that the second sharp upbending in this band may be mainly due to $\nu 1/2^+[660]$ and $\nu 3/2^+[651]$. It can be seen in Fig.~\ref{fig7:occu}(b) that the $n_\mu$ of $\nu 1/2^+[660]$ and $\nu 3/2^+[651]$ increases sharply from about 1.2 to 2.0 and 0.2 to 1.0, respectively, while the $n_\mu$ of $\nu3/2^-[521]$ decreases from 1.4 to 0.1 around $\hbar\omega\approx 0.35$~MeV. This indicates that the sharp upbending in band 1 of $^{168}$Re may be due to $\nu 1/2^+[660]$ and $\nu 3/2^+[651]$. With increasing neutron number, the $\nu 3/2^+[651]$ becomes the lowest $\nu i_{13/2}$ orbital in $^{170}$Re. It can be seen in Fig.~\ref{fig7:occu}(c) that the $\nu 3/2^+[651]$ has a strong Coriolis mixing with $\nu 1/2^+[660]$ in band 1 of $^{170}$Re, and they have a gradual change in the upbending region together with some orbitals in the $N=5$ shell, e.g., $\nu 5/2^-[512]$ and $\nu 5/2^-[523]$. This indicates that the gradual upbending in band 1 of $^{170}$Re may be mainly due to these $i_{13/2}$ neutrons.
As for band 2 in $^{166}$Re, the pseudo-spin partners $\nu 3/2^-[521]$ ($h_{9/2}$) and $\nu 5/2^-[523]$ ($f_{7/2}$) have a strong mixing [see Fig.~\ref{fig7:occu}(d)]. The $n_\mu$ of $\nu 1/2^+[660]$ increases sharply from nearly zero to 2.0 at $\hbar\omega\approx 0.25$~MeV, while the $n_\mu$ of $\nu 3/2^-[521]$, $\nu5/2^-[523]$ and $\nu5/2^{-} [512]$ have a sharp decrease. This indicates that the sharp upbending in this band may be mainly due to $\nu 1/2^+[660]$. As the neutron number increases by 2, the $\nu 3/2^+[651]$ and $\nu 5/2^+[642]$ get closer to the Fermi surface and are partly occupied in band 2 of $^{168}$Re. Around $\hbar\omega\approx 0.2$~MeV, the $n_\mu$ of $\nu 1/2^+[660]$ increases sharply from 0.6 to 2.0, while the $n_\mu$ of $\nu 3/2^+[651]$, $\nu 5/2^+[642]$ and some orbitals in the $N=5$ shell decrease. This indicates that the sharp upbending in this band may be mainly due to these three $i_{13/2}$ neutrons, i.e., $\nu 1/2^+[660]$, $\nu3/2^+[651]$ and $\nu5/2^+[642]$. Similar to band 2 in $^{166}$Re, the pseudo-spin partners $\nu 5/2^-[512]$ ($h_{9/2}$) and $\nu 5/2^-[523]$ ($f_{7/2}$) also have a strong mixing in this band of $^{170}$Re. It can be seen in this figure that the sharp upbending in this band may also be caused by these three $i_{13/2}$ neutrons. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig8_jxshell} \centering \caption{\label{fig8:jxshell} The contributions of the neutron $N = 5, 6$ major shells to the angular momentum alignment $\langle J_{x} \rangle$ of band 1 ($\pi h_{11/2} \otimes \nu i_{13/2}$, upper panels) and band 2 ($\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$, lower panels) in $^{166,168,170}$Re. Red and blue solid lines are used for the $N = 5$ and $N = 6$ shells, respectively. The contributions of the diagonal [$\sum_{\mu} j_x(\mu)$] and off-diagonal [$\sum_{\mu<\nu} j_x(\mu\nu)$] parts in Eq.~(\ref{eq:jx}) are shown by dotted lines.
} \end{figure} To have a clearer understanding of the level crossing mechanism in these rotational bands, the contributions of the neutron $N = 5, 6$ major shells to the angular momentum alignment $\langle J_x \rangle$ of band 1 ($\pi h_{11/2} \otimes \nu i_{13/2}$, upper panels) and band 2 ($\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$, lower panels) in $^{166,168,170}$Re are shown in Fig.~\ref{fig8:jxshell}. It can be seen in Fig.~\ref{fig8:jxshell}(a) that the first upbending in band 1 of $^{166}$Re is caused by the neutron $N=5$ shell, and the off-diagonal part contributes to this upbending. This is due to the rearrangement of the neutron occupations in these pseudo-spin partners. The second upbending is mainly due to the diagonal part of the $N=6$ major shell. Fig.~\ref{fig8:jxshell}(b) shows that both the $N=5$ and 6 major shells contribute to the upbending in band 1 of $^{168}$Re. The main contribution comes from the off-diagonal part of the $N=5$ and the diagonal part of the $N=6$ major shells. For the upbending in band 1 of $^{170}$Re, the main contribution comes from the off-diagonal part of the $N=6$ major shell. It can be seen that the contribution from the $N=5$ major shell decreases with increasing neutron number. This is because the occupation of the low-$\Omega$ orbital $\nu 3/2^-[521]$ ($h_{9/2}$), which contributes a remarkable amount of angular momentum, becomes larger and larger with increasing neutron number. This is different from the traditional point of view that only the $\nu i_{13/2}$ orbitals contribute to the first backbending/upbending in the rare-earth nuclei. Similar to band 1, the contribution from the $N=5$ major shell also decreases with increasing neutron number in band 2 of $^{166,168,170}$Re [see Figs.~\ref{fig8:jxshell}(d), (e), (f)]. The neutron $i_{13/2}$ orbital is not blocked in this configuration, so the level crossing mechanism should be different from that in band 1.
It can be seen in Figs.~\ref{fig8:jxshell}(d), (e), (f) that, except for $^{166}$Re, both the diagonal and the off-diagonal parts of the $N=6$ shell contribute to the upbending in band 2 of $^{168,170}$Re. It can also be seen that with increasing neutron number, the off-diagonal contribution from the $N=6$ shell becomes larger and larger, while the diagonal contribution becomes smaller and smaller. This is because the neutron Fermi surface approaches the middle of the $\nu i_{13/2}$ sub-shell with increasing neutron number. \begin{figure}[!] \includegraphics[width=1.0\textwidth]{fig9_jxorb} \centering \caption{\label{fig9:jxborb} Contribution of each neutron orbital in the $N = 5$ and $N=6$ major shells for band 1 (upper panels) and band 2 (lower panels) to the angular momentum alignments $\langle J_x \rangle$ in $^{166, 168, 170}$Re. The diagonal [$j_x(\mu)$] and off-diagonal [$j_x(\mu\nu)$] parts in Eq.~(\ref{eq:jxorb}) are denoted by solid and dotted lines, respectively. } \end{figure} By analyzing the contributions of the diagonal [$j_x(\mu)$] and off-diagonal [$j_x(\mu\nu)$] parts in Eq.~(\ref{eq:jxorb}) for bands 1 and 2 of $^{166, 168, 170}$Re, the level crossing mechanism can be understood thoroughly. Fig.~\ref{fig9:jxborb} shows the contribution of each neutron orbital in the $N = 5$ and $N=6$ major shells for band 1 (upper panels) and band 2 (lower panels) to the angular momentum alignments $\langle J_x \rangle$ in $^{166, 168, 170}$Re. It can be seen in Fig.~\ref{fig9:jxborb}(a) that for band 1 in $^{166}$Re, the contribution from the $N=5$ shell to the alignment gain after the level crossing comes from the off-diagonal parts $j_x(3/2^-[521]5/2^-[512])$, $j_x(3/2^-[521]3/2^-[532])$, $j_x(5/2^-[523]7/2^-[514])$, and the diagonal part $j_x(5/2^-[523])$. They all originate from the pseudo-spin partners $h_{9/2}$ and $f_{7/2}$. The diagonal parts $j_x(1/2^+[660])$ and $j_x(3/2^+[651])$ in the $N=6$ shell contribute to the second upbending.
For band 1 in $^{168}$Re [Fig.~\ref{fig9:jxborb}(b)], the contribution from the $N=5$ shell to the alignment gain after the level crossing comes from the off-diagonal parts $j_x(3/2^-[521]5/2^-[512])$, $j_x(5/2^-[523]7/2^-[514])$, and $j_x(1/2^-[521]5/2^-[523])$. They all come from the pseudo-spin partners $h_{9/2}$ and $f_{7/2}$, except $1/2^-[521]$ ($p_{3/2}$). The main contribution from the $N=6$ shell comes from the diagonal parts $j_x(1/2^+[660])$ and $j_x(3/2^+[651])$. The off-diagonal part $j_x(3/2^+[651]5/2^+[642])$ contributes a little. As for band 1 in $^{170}$Re [Fig.~\ref{fig9:jxborb}(c)], only the off-diagonal parts $j_x(1/2^+[660]3/2^+[651])$ and $j_x(3/2^+[651]5/2^+[642])$ contribute to the upbending. It can be seen in Fig.~\ref{fig9:jxborb}(d) that the alignment gain after the level crossing in band 2 of $^{166}$Re mainly comes from the diagonal part $j_x(1/2^+[660])$. The interference terms between several orbitals from the pseudo-spin partners $h_{9/2}$ and $f_{7/2}$ contribute a lot to the upbending, although each of them only contributes a little [see the inset of Fig.~\ref{fig9:jxborb}(d)]. The diagonal part $j_x(5/2^-[523])$ also contributes a little. For band 2 in $^{168}$Re [Fig.~\ref{fig9:jxborb}(e)], the diagonal part $j_x(1/2^+[660])$ contributes a lot to the upbending. The off-diagonal parts $j_x(1/2^+[660]3/2^+[651])$ and $j_x(3/2^+[651]5/2^+[642])$ also have remarkable contributions. Band 2 in $^{170}$Re is similar to that in $^{168}$Re. The off-diagonal parts $j_x(1/2^+[660]3/2^+[651])$, $j_x(3/2^+[651]5/2^+[642])$ and the diagonal part $j_x(1/2^+[660])$ contribute to the upbending. Therefore, the level crossing mechanism in these bands is understood clearly.
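The bookkeeping behind Figs.~\ref{fig8:jxshell} and \ref{fig9:jxborb} can be summarized schematically: the total neutron alignment is a sum of diagonal terms $j_x(\mu)$ and pairwise interference terms $j_x(\mu\nu)$. The sketch below is a schematic illustration only; in the actual PNC-CSM these terms are cranked Nilsson matrix elements weighted by the particle-number-conserving many-body wave function, not simple numerical inputs, and all orbital labels and values here are placeholders.

```python
# Schematic tally of diagonal [j_x(mu)] and off-diagonal [j_x(mu,nu)]
# contributions to <Jx>, mirroring the decomposition discussed in the text.
# All numbers are invented placeholders, not calculated PNC-CSM values.

def sum_alignment(diagonal, off_diagonal):
    """Return (diagonal sum, off-diagonal sum, total <Jx>).

    diagonal     -- dict {orbital: j_x(mu)}
    off_diagonal -- dict {(mu, nu): j_x(mu, nu)} with mu < nu
    """
    d = sum(diagonal.values())
    od = sum(off_diagonal.values())
    return d, od, d + od

diag = {"1/2+[660]": 3.0, "3/2+[651]": 1.5}      # placeholder values
offd = {("1/2+[660]", "3/2+[651]"): 2.0}         # interference term
d, od, total = sum_alignment(diag, offd)         # total = 6.5
```

Tallying the two groups separately is exactly what allows the figures to attribute an upbending to a rearranged occupation (diagonal) or to Coriolis interference between near-lying orbitals (off-diagonal).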
\subsection{2- and 4-qp rotational bands in $^{172}$Re} \begin{table}[h] \centering \caption{\label{tab:172config} The configurations of the five rotational bands observed in $^{172}$Re proposed in Ref.~\cite{Hartley2014_PRC90-017301}.} \begin{tabular*}{0.8\columnwidth}{c@{\extracolsep{\fill}}l} \hline \hline Band & Configuration\\ \hline band 1 & $\pi h_{9/2} (\alpha=1/2) \otimes \nu i_{13/2} (\alpha=\pm 1/2)$ \\ band 2 & $\pi h_{11/2}(\alpha=\pm1/2) \otimes \nu i_{13/2} (\alpha=1/2)$ \\ band 3 & $\pi h_{9/2} (\alpha=1/2) \otimes \nu [f_{7/2}/h_{9/2}](\alpha=\pm 1/2)$ \\ band 4 & $\pi h_{9/2} \otimes \nu^3 (p_{3/2}AB)$ (favored signature) \\ band 5 & $\pi h_{9/2} \otimes \nu^3 (p_{3/2}AB)$ (unfavored signature) \\ \hline \hline \end{tabular*} \end{table} With increasing deformation, the $\pi 1/2^{-} [541]$ ($h_{9/2}$) orbital gets closer to the proton Fermi surface in $^{172}$Re, and several 2- and 4-qp bands based on this quasi-proton have been observed experimentally~\cite{Hartley2014_PRC90-017301}. According to the alignment properties and the observed band crossings, proper spherical configurations were proposed for these multi-qp rotational bands by Hartley et al.~\cite{Hartley2014_PRC90-017301}, which are listed in Table~\ref{tab:172config}. Bands 1, 2 and 3 are 2-qp structures, and bands 4 and 5 are 4-qp structures. Note that band 4 is observed following the $AB$ crossing, and the configuration $\pi h_{9/2} \otimes \nu^3 (p_{3/2}AB)$ is tentatively assigned due to insufficient spectroscopic information. In addition, the unfavored signature of $\pi h_{9/2} \otimes \nu^3 (p_{3/2}AB)$ would have a higher energy than the favored one, and both $\pi h_{9/2}$ and $\nu p_{3/2}$ have significant signature splitting. Therefore, Ref.~\cite{Hartley2014_PRC90-017301} did not firmly assign this configuration to band 5. In the following, we will check whether the configurations assigned to these bands are reasonable by comparing their experimental and calculated MOIs and alignments.
\begin{figure}[h] \includegraphics[width=0.6\columnwidth]{fig10_172exp} \centering \caption{\label{fig10:172exp} Experimental MOIs of band 1 ($\pi h_{9/2} \otimes \nu i_{13/2}$) and band 2 ($\pi h_{11/2} \otimes \nu i_{13/2}$). Solid and open symbols denote the signature $\alpha=0$ and $\alpha=1$ branches, respectively. } \end{figure} Figure~\ref{fig10:172exp} shows the experimental MOIs of band 1 ($\pi h_{9/2} \otimes \nu i_{13/2}$) and band 2 ($\pi h_{11/2} \otimes \nu i_{13/2}$) in $^{172}$Re. It is well known that the proton $h_{11/2}$ level crossing appears in the very high-spin region of rare-earth nuclei; moreover, this orbital is blocked in band 2 of $^{172}$Re, so the level crossings in these two bands should be caused by neutrons. Since the neutron configuration is the same for band 1 with $\alpha=1$ (open squares in Fig.~\ref{fig10:172exp}) and band 2, their level crossings should appear at similar frequencies. However, it can be seen in Fig.~\ref{fig10:172exp} that the level crossing frequency of the signature $\alpha=1$ branch of band 1 in $^{172}$Re is about 0.34~MeV, which is larger than that in band 2 (about 0.30~MeV). A similar delayed crossing appears in the signature $\alpha=0$ branch of band 1 (solid squares in Fig.~\ref{fig10:172exp}), which is observed at an even higher frequency of 0.38~MeV. This is also slightly higher than the level crossing frequency of the $\nu i_{13/2}$ band with unfavored signature in the adjacent $^{171}$W~\cite{Espino1994_NPA567-377}. It is well known that the $h_{9/2}$ proton has very strong prolate deformation driving effects and drives the nucleus to a slightly larger deformation~\cite{Nazarewicz1990_NPA512-61, Jensen1991_ZPA340-351, Warburton1995_NPA591-323, Jensen2001_NPA695-3}, which in turn results in this delay of the crossing frequency. The deformation driving effects of the $h_{9/2}$ proton make the level crossings extremely complicated, especially in odd-odd nuclei.
Therefore, in the following calculations, the deformation parameter is chosen as 0.234 when the $\pi 1/2^-[541]$ ($h_{9/2}$) orbital is involved (bands 1, 3-5), which is increased by 10\% compared with the Lund systematics~\cite{Bengtsson1986_ADNDT35-15}. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig11_172moi} \centering \caption{\label{fig11:172moi} The comparison between the experimental~\cite{Hartley2014_PRC90-017301} and calculated kinematic MOIs $J^{(1)}$ (upper panels) and alignments (lower panels) of the five rotational bands observed in $^{172}$Re. The experimental MOIs and alignments are displayed by black solid and red open squares for $\alpha=0$ and $\alpha=1$, respectively. Corresponding calculated values are displayed by black solid and red dotted lines. The alignments $i=\langle J_x \rangle -\omega J_0 -\omega^3 J_1$ and the Harris parameters are $J_{0} =23~\hbar^2{\rm MeV^{-1}}$ and $J_1=65~\hbar^4{\rm MeV^{-3}}$~\cite{Hartley2014_PRC90-017301}. When the $\pi 1/2^-[541]$ is involved (bands 1, 3-5), the deformation parameter is chosen as 0.234, which is increased by 10\% compared with the Lund systematics~\cite{Bengtsson1986_ADNDT35-15}. } \end{figure} Figure~\ref{fig11:172moi} shows the comparison between the experimental and calculated kinematic MOIs $J^{(1)}$ (upper panels) and alignments (lower panels) of the five rotational bands observed in $^{172}$Re. The corresponding Nilsson configurations are also assigned for these bands. It can be seen that the experimental MOIs and alignments, as well as the level crossings, can be well reproduced by the PNC-CSM, which in turn confirms their configuration assignments. The results also indicate that a 10\% increase of the deformation is appropriate to describe the bands in which $\pi 1/2^-[541]$ is involved. For band 1, the experimental MOIs exhibit an obvious signature splitting, especially in the high-spin region.
Note that both the proton $\pi 1/2^-[541]$ and the neutron $\nu 5/2^+[642]$ orbitals have significant signature splitting (see Fig.~\ref{fig1:nil}). However, according to the alignment behavior of $\pi 1/2^-[541]$ and the level crossing behavior of these two signature branches, it is easy to assign this band as the favored signature of $\pi 1/2^-[541] (\alpha=1/2)$ coupled with the $\alpha=\pm 1/2$ of $\nu 5/2^+[642]$. It can be seen that for the $\alpha=0$ sequence, both upbendings are reproduced quite well by the PNC-CSM, while the second upbending in the $\alpha=1$ sequence appears at a much higher frequency in the PNC-CSM calculations compared with the experimental data. For band 2, the experimental MOIs exhibit nearly no signature splitting. The proton $\pi 9/2^-[514]$ orbital only shows a very small splitting in the high-spin region [see Fig.~\ref{fig1:nil}(a)]. So this band is assigned as the favored signature of $\nu5/2^+[642] (\alpha=1/2)$ coupled with the $\alpha=\pm 1/2$ of $\pi 9/2^-[514]$. It can be seen that the PNC-CSM predicts a second upbending in the $\alpha=1$ sequence of this band, which is not observed experimentally. This upbending is caused by the $h_{9/2}$ proton, i.e., the level crossing between the $\alpha=1/2$ branches of $\pi 9/2^-[514]$ and $\pi 1/2^-[541]$. The experimental MOIs of band 3 only exhibit a signature splitting after the first level crossing. Similar to band 1, the configuration of this band is assigned as the favored signature of $\pi 1/2^-[541] (\alpha=1/2)$ coupled with the $\alpha=\pm 1/2$ of $\nu5/2^-[512]$. Note that $\nu5/2^-[512]$ has very small signature splitting [see Fig.~\ref{fig1:nil}(b)], so both signature sequences in the PNC-CSM calculations are nearly degenerate. Therefore, the second upbending in the $\alpha=1$ sequence of this band is not reproduced by the PNC-CSM. This needs further investigation. Experimentally, only one signature sequence is observed in both band 4 ($\alpha=0$) and band 5 ($\alpha=1$).
Hartley et al. suggested that these two bands may be based on the favored (band 4) and unfavored (band 5) signature sequences of $\pi h_{9/2} \otimes \nu^3 (p_{3/2}AB)$~\cite{Hartley2014_PRC90-017301}. Since both the $\pi 1/2^-[541]$ and $\nu 1/2^-[521]$ $(p_{3/2})$ orbitals have signature splitting, four rotational bands with different MOIs can be established. It can be seen that close to the bandhead region, the experimental MOIs of band 4 and band 5 have totally different behavior, one decreasing and the other increasing with rotational frequency. In Fig.~\ref{fig11:172moi}(d) the PNC-CSM calculations with the configuration $\pi 1/2^-[541](\alpha=1/2) \otimes \nu 1/2^-[521](\alpha=\pm1/2)$ are compared with the data. It can be seen that after the level crossing, the configuration $\pi 1/2^-[541](\alpha=1/2)\otimes \nu 1/2^-[521](\alpha=1/2)$ can reproduce band 4 quite well, while the calculated results with the configuration $\pi 1/2^-[541](\alpha=1/2)\otimes \nu 1/2^-[521](\alpha=-1/2)$ disagree with band 5. This indicates that the configuration of band 4 may be $\pi 1/2^-[541](\alpha=1/2)\otimes \nu^3 1/2^-[521]AB(\alpha=1/2)$. Furthermore, the PNC-CSM calculations with the configuration $\pi 1/2^-[541](\alpha=-1/2) \otimes \nu 1/2^-[521](\alpha=\pm1/2)$ are compared with the data in Fig.~\ref{fig11:172moi}(e). It can be seen that after the level crossing, the configuration $\pi 1/2^-[541](\alpha=-1/2)\otimes \nu 1/2^-[521](\alpha=1/2)$ can reproduce band 5 quite well. This indicates that the configuration of band 5 may be $\pi 1/2^-[541](\alpha=-1/2)\otimes \nu^3 1/2^-[521]AB(\alpha=1/2)$. It seems that band 4 can also be reproduced by the configuration $\pi 1/2^-[541](\alpha=-1/2)\otimes \nu 1/2^-[521](\alpha=-1/2)$.
However, since both $\pi 1/2^-[541](\alpha=-1/2)$ and $\nu 1/2^-[521](\alpha=-1/2)$ are unfavored signature branches, the energy of this coupling mode should be much higher than that of the coupling mode $\pi 1/2^-[541](\alpha=1/2) \otimes \nu 1/2^-[521](\alpha=1/2)$, in which both signatures are favored. Therefore, we tentatively assign the configuration of band 4 as $\pi 1/2^-[541](\alpha=1/2)\otimes \nu^3 1/2^-[521]AB(\alpha=1/2)$. Note that the two signature branches of $\pi 1/2^-[541]$ have quite different behavior in the low-spin region. Therefore, the coupling mode of band 4 could be determined firmly if more spectroscopic information on the low-spin region of this band were available. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig12_172jx} \centering \caption{\label{fig12:172jx} The experimental (solid squares) and calculated (black solid line) angular momentum alignments $\langle J_x \rangle$ for bands 1-5 with $\alpha=0$ in $^{172}$Re. Proton and neutron contributions to $\langle J_x \rangle$ are shown by red and blue dotted lines, respectively. } \end{figure} Figure~\ref{fig12:172jx} shows the experimental (solid squares) and calculated (black solid line) angular momentum alignments $\langle J_x \rangle$ for bands 1-5 in $^{172}$Re. Here we take the $\alpha=0$ branch of each band as an example. It can be seen clearly in Fig.~\ref{fig12:172jx} that all the contributions to the backbendings/upbendings in these five bands come from the neutrons. The protons only provide a gradual increase of the angular momentum alignment. Therefore, in the following only the neutron part will be discussed. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig13_172jxshell} \centering \caption{\label{fig13:172jxshell} The contributions of the neutron $N = 5, 6$ major shells to the angular momentum alignment $\langle J_x \rangle$ for bands 1-5 with $\alpha=0$ in $^{172}$Re. Red and blue solid lines are used for the $N = 5$ and $N = 6$ shells, respectively.
The contributions of the diagonal [$\sum_{\mu} j_x(\mu)$] and off-diagonal [$\sum_{\mu<\nu} j_x(\mu\nu)$] parts are shown by dotted lines. } \end{figure} The contributions of the neutron $N = 5, 6$ major shells to the angular momentum alignment $\langle J_x \rangle$ for bands 1-5 in $^{172}$Re are shown in Fig.~\ref{fig13:172jxshell}. It can be seen that the $N=5$ shell has no contribution to the backbendings/upbendings in these bands except for the second upbending in band 1 [see Fig.~\ref{fig13:172jxshell}(a)], to which both the diagonal and off-diagonal parts have similar contributions. Note that the diagonal part of the $N=6$ shell also contributes a little to the second upbending in band 1. It can also be seen that, regardless of whether the $\nu 5/2^+[642]$ ($i_{13/2}$) orbital is blocked or not, the diagonal part of the $N=6$ shell only contributes a little to the first backbendings/upbendings. The main contribution comes from the off-diagonal part of the $N=6$ shell. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{fig14_172jxorb} \centering \caption{\label{fig14:172jxorb} Contribution of each neutron orbital in the $N = 5$ and $N=6$ major shells for bands 1-5 with $\alpha=0$ to the angular momentum alignments $\langle J_x \rangle$ in $^{172}$Re. The diagonal [$j_x(\mu)$] and off-diagonal [$j_x(\mu\nu)$] parts in Eq.~(\ref{eq:jxorb}) are denoted by solid and dotted lines, respectively. } \end{figure} Figure~\ref{fig14:172jxorb} shows the contribution of each neutron orbital in the $N = 5$ and $N=6$ major shells for bands 1-5 with $\alpha=0$ to the angular momentum alignments $\langle J_x \rangle$ in $^{172}$Re. It can be seen in Fig.~\ref{fig14:172jxorb}(a) that the off-diagonal parts $j_x(3/2^+[651]5/2^+[642])$ and $j_x(5/2^+[642]7/2^+[633])$ contribute to the first upbending in band 1. The main contribution to the second upbending comes from the $N=5$ major shell.
The diagonal part $j_x(1/2^-[521])$ and the off-diagonal parts $j_x(3/2^-[521]5/2^-[512])$ and $j_x(5/2^-[523]7/2^-[514])$ contribute to the second upbending. The diagonal part $j_x(5/2^+[642])$ in the $N=6$ major shell also contributes. For band 2, the off-diagonal parts $j_x(1/2^+[660]3/2^+[651])$, $j_x(3/2^+[651]5/2^+[642])$, and $j_x(5/2^+[642]7/2^+[633])$ contribute to the upbending. In bands 3-5, the neutron orbital $5/2^+[642]$ ($\nu i_{13/2}$) is not blocked. It can be seen in Figs.~\ref{fig14:172jxorb}(c), (d) and (e) that the contributions of each neutron orbital to $\langle J_x \rangle$ in these three bands are quite similar to each other. The main contribution comes from the off-diagonal part $j_x(3/2^+[651]5/2^+[642])$. Moreover, the off-diagonal parts $j_x(1/2^+[660]3/2^+[651])$ and $j_x(5/2^+[642]7/2^+[633])$, as well as the diagonal part $j_x(5/2^+[642])$, also have remarkable contributions. \section{Summary}\label{Sec:Summary} In summary, the recently observed two- and four-quasiparticle high-spin rotational bands in the odd-odd nuclei $^{166,168,170,172}$Re are investigated using the cranked shell model with pairing correlations treated by a particle-number conserving method, in which the particle number is strictly conserved and the Pauli blocking effects are taken into account exactly. The Nilsson configurations for these multi-quasiparticle bands have been assigned. The experimental moments of inertia and alignments of these bands can be well reproduced by the present calculations if appropriate bandhead spins and configurations are assigned, which in turn confirms the spin and configuration assignments.
It is found that the bandhead spins of the two rotational bands observed in $^{166}$Re with the configurations assigned as $\pi h_{11/2} \otimes \nu i_{13/2}$ and $\pi h_{11/2} \otimes \nu [f_{7/2} / h_{9/2}]$ should be assigned as $10^-$ and $8^+$, respectively (both increased by $2\hbar$ compared with those given in Ref.~\cite{Li2015_PRC92-014310}), to be consistent with the systematics of the experimental and calculated moments of inertia for the same configurations in $^{168,170,172}$Re. The variation of the backbendings/upbendings with increasing neutron number in these nuclei is also investigated. The level crossing mechanism is well understood by analysing the variations with rotational frequency of the occupation probabilities of the single-particle states close to the Fermi surface and of their contributions to the angular momentum alignment. In addition, the influence of the deformation-driving effect of the proton $\pi 1/2^-[541]$ ($h_{9/2}$) orbital on the level crossing in $^{172}$Re is also discussed. \section*{Acknowledgement} This work is supported by the National Natural Science Foundation of China (Nos. 11875027, 11775112, 11775026, 11775099, and 11975096) and the Fundamental Research Funds for the Central Universities (2021MS046).
\section{Introduction} Feedback is an essential aspect of galaxy formation models. It is invoked to suppress the formation of large numbers of small galaxies \citep{Rees_1977, WhiteRees_1978, White_1991}. While photo-heating can suppress star formation in the smallest halos, it cannot explain the low efficiency of star formation in halos more massive than $10^9\,\rm M_\odot$ \citep{Efstathiou_1992, Okamoto_2008}. Feedback is also invoked to explain why such a small fraction of the baryons are in stars today \citep{Fukugita_1998, Balogh_2001}. An efficient feedback implementation also appears essential for simulations to produce realistic-looking disk galaxies \citep{Scannapieco_2011, McCarthy_2012}. Observations of galactic winds at low redshift \citep{Heckman_1990, Heckman_2000} and at high redshift, $z\sim 3$, \citep{Pettini_2001} do show gas with a range of temperatures and densities moving with large velocities of hundreds of km~s$^{-1}$ with respect to the galaxy's stars, although the interpretation in terms of mass loss is complicated by the multi-phase nature of the wind (see e.g.\ \citealp{Veilleux_2005} for a recent review). Complementary evidence for outflows comes from the high metal abundance detected in the IGM \citep{Cowie_1995}, even at low densities \citep{Schaye_2003, Aguirre_2004}. Numerical simulations make it plausible that galactic winds are responsible for this metal enrichment \citep{Cen_1999, Aguirre_2001, Theuns_2002, Aguirre_2005, Oppenheimer_2006, Tescari_2011}, with low-mass galaxies dominating the enrichment of the bulk of the IGM \citep{Booth_2012}. The sheer amount of energy released by supernovae (SNe) makes the injection of energy into the interstellar medium (ISM) by SN explosions a prime candidate for driving galactic winds \citep{Dekel_1986}.
However, it is challenging to understand in detail how SNe regulate the transfer of mass and energy between the different phases of the ISM, as envisaged in the model of \citet{McKee_Ostricker_1977}, and how and when this leads to the emergence of a galactic wind. \citet{Efstathiou_2000} and \citet{Silk_2001} extend the \citet{McKee_Ostricker_1977} model to examine how such interactions lead to self-regulation of star formation. They show that the properties of the galactic wind can be broadly understood once a temperature and density for the hot phase are found. This requires a model for the evaporation of cold and warm clouds, yet without a more detailed understanding of the geometry and turbulence, we can go little further than the steady, spherically symmetric conduction models that go back to \citet{Cowie_McKee_1977}. Even if feedback is indeed due to SNe, it is not yet clear whether this is a consequence of their injection of hot gas, of turbulence \citep{MacLowKlessen_2004, Scannapieco_2010}, of cosmic rays \citep{Jubelgas_2008}, of the combined effects of magnetic fields, cosmic rays, and the galaxy's differential rotation \citep{Kulpa_2011}, or all of the above. Full hydrodynamical modelling of the interplay between the various components of the ISM in a Milky Way-like galaxy in a proper cosmological context is not currently possible due to the large range of scales involved, with densities ranging from $4\times 10^{-31}$~g~cm$^{-3}$ outside of halos to $\sim 10^{-20}$~g~cm$^{-3}$ in cold clouds, temperatures from a few Kelvin inside star-forming regions to $\sim 10^8 \rm K$ inside SN remnants, and time scales from a few thousand years for the propagation of a SN blast wave inside the ISM to $\sim 10^{10}$~years for the age of the Galaxy. Excitingly, such full hydrodynamical modelling is beginning to become possible for higher-redshift dwarfs (e.g.\ \citealp{Wise2008}), but for the moment models of larger galaxies at $z\sim 0$ are limited to simulating a patch of a galactic disk.
We would also like to identify and understand the physics that is important in driving material from the galactic disk, and so it is desirable to have a series of numerical experiments. This is the approach we follow in this paper. We begin by discussing constraints on galactic winds derived from current theoretical models of galaxy formation, and place our work in the context of comparable approaches. In Section~\ref{sect:simulations} we introduce the set-up of our own simulations. Briefly, we use a very simple model of the ISM which neglects the cold phase, and which is stirred by hot gas injected by SN explosions. Next we demonstrate that our sub-pc simulations have sufficient resolution to resolve individual explosions, and illustrate the behaviour of both the ISM and of the wind for a reference model with properties chosen to be similar to those of the solar neighbourhood. In Section~\ref{sect:statistics} we vary the properties of the simulated ISM (total and gas surface densities, star formation rate, cooling rate), and investigate if and when a wind is launched, and how its properties depend on those of the ISM. We obtain scaling relations between the wind and the ISM and apply them in Section~\ref{sect:evolution} to predict wind properties for a full galactic disk, and investigate how the wind properties depend on the galaxy properties. We summarise in Section~\ref{sect:conclusions}. \section{Constraints on galactic winds}\label{sec:WindConstraints} \subsection{Model requirements and observations} \label{sect:mass_loading} We will assume that the baryon fraction in Milky Way-sized halos, and halos of lower mass, falls significantly below the cosmological value $f_b = \Omega_b / \Omega_M$ due to the action of a galactic wind. Let the gaseous mass outflow rate from this wind be $\dot{M}_{\rm wind}$, and the star formation rate $\dot{M}_\star$.
A simple way to parameterise the efficiency of the SN-driven wind in removing baryons from the halo is its {\em mass loading}, i.e.\ the ratio \begin{equation}\label{eq:beta} \hat \beta \equiv \frac{\dot{M}_{\rm wind}}{\dot{M}_\star}\,, \end{equation} where our $\hat \beta$ is equivalent to the $\beta$ of \cite{Stringer_2011}. We introduce the hat in order to distinguish the average mass loading for a galaxy, $\hat \beta$, from a local mass loading $\beta$ at some point on the disk. If a galaxy exhausts its gas supply in star formation (and does not recycle wind material) then we will be left with a gas-poor galaxy with a baryon fraction reduced by a factor $1 / (1+\hat \beta)$. In order to infer the fraction of baryons ejected from galaxies we can use the statistics of galaxies and dark matter halos. The number density of halos as a function of their mass can be approximated, for masses below the exponential cut-off scale, as a power law \citep{PressSchechter_1974, Reed_2007}, \begin{equation} \frac{ {\rm d} n}{{\rm d} \log M} \propto M^{-0.9} \,. \end{equation} Contrast this with the slope of the galaxy stellar mass function at low masses, \begin{equation} \frac{ {\rm d} n }{{\rm d} \log M_\star} \propto M_\star^{1+\alpha} \, , \end{equation} where observationally $\alpha$ is found to be in the range $\left[-1.5,-1\right]$ (see e.g.\ \citealp{Blanton_2003, Blanton_2005B, Baldry_2005, Baldry_2012, Li_White_2009}). Naively identifying each dark matter halo with a galaxy of a given stellar mass (e.g.\ \citealp{Guo_2010}) yields a stellar mass to halo mass relation of $M_\star \propto M^{-0.9/(\alpha+1)}$.
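The exponent arithmetic in this abundance-matching argument can be checked directly; the short sketch below (our illustration, not part of the paper's method) matches the two power-law slopes quoted above:

```python
# Slopes from the text: dn/dlogM ∝ M^-0.9 for halos and
# dn/dlogM* ∝ M*^(1+alpha) for galaxies.  Matching abundances for a
# power-law relation M* ∝ M^gamma requires gamma * (1 + alpha) = -0.9.

def stellar_halo_exponent(alpha, halo_slope=-0.9):
    """Exponent gamma in M_star ∝ M^gamma implied by abundance matching."""
    return halo_slope / (1.0 + alpha)

# For the faint-end slope alpha = -1.5 used in the text:
print(stellar_halo_exponent(-1.5))  # 1.8: stellar mass falls steeply with halo mass
```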
Identifying the stars as the main baryonic component implies a mass loading that scales relatively steeply with halo mass as (see also \citealp{Stringer_2011}) \begin{equation} 1+\hat \beta = f_{\rm b}\,\frac{M}{M_\star} \propto M^{(1.9+\alpha)/(1+\alpha)} \propto M^{-0.8} \, , \label{eq:betascale} \end{equation} where we substituted a faint-end slope of $\alpha=-1.5$ to derive the last exponent. Notably, this exponent $\to \infty$ as $\alpha \to -1$ and falls to zero as $\alpha \to -1.9$; as such it is rather poorly constrained even by a well-measured slope of the galaxy stellar mass function at low masses. One can infer not only that at low masses the mass loading $\hat \beta \gg 1$, but also that it is strongly increasing towards lower-mass galaxies. Assume star formation results in the explosion of $\varepsilon_{100}$ supernovae per $100 \, \rm M_\odot$ of stars formed, each with energy $E_{\rm SN}$, and that a fraction $\eta_T$ gets converted into kinetic energy of an outflow. Neglecting other sources of energy then implies that \begin{equation} \hat \beta \,v^2_{\rm wind} = 2 (710\,{\rm km}\,{\rm s}^{-1})^2\,\eta_T\,\varepsilon_{100}\,{E_{\rm SN}\over 10^{51}\,{\rm erg}} \,, \label{eq:eta_T} \end{equation} where $v_{\rm wind}$ is the wind speed. If $\varepsilon_{100}$, the thermalisation factor $\eta_T$ and $E_{\rm SN}$ are all constants, then the product $\hat \beta v^2_{\rm wind}$ is also a constant. In this case large values of $\hat \beta$ imply lower wind speeds, and vice versa. If the mass loading $\hat \beta$ indeed increases with decreasing halo mass, then eventually $\hat \beta$ may become so large that the wind can no longer escape from the galaxy's potential well. Such small halos may be subject to other destructive mechanisms, such as evaporation by re-ionization or obliteration by the explosions of the first stars. For massive halos, escape of the wind requires high wind speeds, implying low mass loading.
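Both the exponent in Eq.~(\ref{eq:betascale}) and the numerical constant in Eq.~(\ref{eq:eta_T}) can be verified numerically; in the sketch below the 300~km~s$^{-1}$ wind speed is an arbitrary illustration of the $\hat\beta$--$v_{\rm wind}$ trade-off, not a value from the text:

```python
import math

M_SUN = 1.989e33   # g
E_SN  = 1.0e51     # erg, canonical SN energy (from the text)

def mass_loading_exponent(alpha):
    """Exponent of M in 1 + beta_hat ∝ M^((1.9+alpha)/(1+alpha))."""
    return (1.9 + alpha) / (1.0 + alpha)

print(mass_loading_exponent(-1.5))   # -0.8, as quoted in the text

# The constant in the energy budget is sqrt(E_SN / 100 M_sun):
v0 = math.sqrt(E_SN / (100.0 * M_SUN)) / 1.0e5   # km/s
print(v0)  # ~709 km/s, i.e. the "710 km/s" of the relation

def mass_loading(v_wind_kms, eta_T=1.0, eps100=1.0):
    """beta_hat from beta_hat * v_wind^2 = 2 (710 km/s)^2 eta_T eps100."""
    return 2.0 * eta_T * eps100 * v0 ** 2 / v_wind_kms ** 2

# e.g. a 300 km/s wind with full thermalisation and one SN per 100 M_sun:
print(mass_loading(300.0))  # ~11: high mass loading goes with slow winds
```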
The semi-analytical model of galaxy formation presented recently by \cite{Bower_2011} requires galactic winds with properties similar to our naive expectations in order to fit the faint end of the galaxy mass function: galactic winds need mass loadings of $\hat \beta\sim 1$ for Milky Way-like galaxies, with an indication that $\hat \beta$ increases even further towards lower masses. The best-fitting models have $\hat \beta \sim 10$, giving $v_{\rm wind} \sim 300 \; \rm km\, s^{-1}$. Numerical simulations of galaxy formation also try to implement galactic winds with similar properties. Cosmological simulations such as \citet{Oppenheimer_Dave_2008} essentially implement the mass loading by hand, by de-coupling the winds from the surrounding gas. More advanced techniques impose some constraints during the early stages of a burst of star formation, when it is beneath the simulation resolution, but later allow the gas distribution to evolve normally and the mass loading to emerge. Progress in this area has been made by simulations such as \citet{Dubois_2008} and \citet{Shen_2010}. Generally these include efficient feedback in an effort to produce a reasonable galaxy population, although they struggle to produce winds significant enough to remove sufficient baryons from Milky Way-sized galaxies. The OWLS simulations \citep{Schaye_2010} examined a variety of feedback prescriptions, and models with efficient feedback in the form of strong galactic winds fit a variety of properties of the galaxy population, including the Tully-Fisher relation \citep{McCarthy_2012}. However, in such models the properties of the winds are still part of the sub-grid modelling, i.e.\ the wind's properties are not computed but rather simply imposed. This is required since the mass of gas entrained by a single supernova is a tiny fraction of a mass resolution element of the simulation \citep{Creasey_2011}.
Directly simulating the \emph{generation} of galactic winds requires a much higher resolution than can be reached in current cosmological simulations, as the sites of energy injection must be resolved (discussed further in Section~\ref{sect:simulations}). In order to relax these constraints, many simulators have either moved to high redshift (where the volumes are smaller) or modified the SNe in some way (such as aggregating the energy injection). Examples of the former include \citet{MacLow_1999, Fujita_2004, Wise2008} and \citet{Powell_2011}, all of which struggle to produce mass loadings above unity, except \cite{Wise2008}, who had massive Population III progenitors for their SNe. Examples of the latter include \cite{Dubois_2008} and \cite{Hopkins_2011}, with similar results, although \cite{Hopkins_2011} saw significant improvement by including the winds from massive stars. Despite having a different focus, there are also a number of studies of a high-resolution SN-driven ISM with a set-up similar to the current work, although they do not investigate the properties of their winds: \citet{Joung_MacLow_2006, Joung_MacLow_2009, Hill_2012} and, whilst this paper was being prepared, \citet{Gent_2012} have all modelled a SN-driven ISM in a column through a galactic disk, driving a vertical wind. These studies investigate the structure of the ISM; their wind properties appear qualitatively similar to ours. On larger scales, \cite{Stevens_2000} and \cite{Cooper_2008} have extended these to an approximation of the galaxy M82, although again the resolution restrictions severely limit the simulation time and the SN energy injection prescription. There are compelling theoretical reasons to expect a high mass loading in galaxy winds, but are such winds seen in practice?
The observational evidence for galactic outflows, at least in {\em starburst galaxies}, is extremely strong \citep{Heckman_1990, Heckman_2000, Pettini_2001, Martin_2005, Martin_2002, Strickland_2009}. Unfortunately, it is notoriously difficult to constrain the wind properties from the data directly, partly because of the uncertain metallicity and ionisation corrections needed to translate between the observed ion outflow and the inferred total wind values, and partly because observing the wind in the spectrum of its galaxy does not provide spatial information on where the absorbing gas is located (\citealp{Bouche_2011}, but see \citealp{Wilman_2005, Swinbank_2009} for a few cases of resolved studies of winds). The outflowing gas is likely multi-phase in nature, further complicating the interpretation of the data. The picture for non-starburst galaxies is even more complex, with \citet{Strickland_2009} noting the lack of evidence for superwinds in such galaxies. As \citet{Chen_2010} point out, however, the evidence for the high-velocity outflows comes from blueshifted absorbers such as Na D that trace cooler material constituting only a fraction of the wind (or MgII, for example \citealp{Weiner_2009} in the Deep2 galaxies). As far as it can be measured, the velocity of the outflow seems to depend only weakly on the SFR \citep{Rupke_2005b}. Probing the circum-galactic medium around galaxies with a sight line to a background quasar allowed \cite{Bouche_2011} to infer values of $\hat \beta=2-3$ and wind speeds $v_{\rm wind}=150-300$~km~s$^{-1}$ for a set of L$_\star$ galaxies at redshift $z\sim 0.1$. They claim these wind speeds are in fact below the escape speed, and hence that we may be observing a galactic fountain rather than a proper outflow. The picture of SNe as the driver of galactic winds also has consequences for the metallicity of the galaxy.
As SNe inject both metals and energy, we expect and find a corresponding metallicity deficit for low-mass galaxies \citep{Tremonti_2004}, interest in which goes back to \citet{Larson_1974}. Both simple models \citep{Peeples_2011, Dayal_2012} and simulations \citep{Finlator_2008} show that galactic winds are an essential ingredient to obtain the observed mass-metallicity relation in galaxies. Summarising, observations provide strong evidence for the presence of galactic winds in star-forming galaxies, but the parameters of such winds are currently not tightly constrained. Models that make recourse to such winds to quench star formation require relatively high values of the wind's mass loading, $\hat \beta\sim 1$ for MW-like galaxies, with $\hat \beta$ increasing for lower-mass galaxies. But do SN-driven winds indeed meet these requirements, and if they do, why? \subsection{Resolving SNe in the ISM}\label{sec:SelfConsist} Ideally one would wish to probe the efficiency with which star formation can drive winds using simulations that self-consistently include all the relevant physics, i.e.\ a full galaxy containing a star-forming ISM, the stars subsequently redistributing their energy as type II SN explosions, including outflows and cosmological infall. Unfortunately, the range of scales involved in this problem makes such an approach currently computationally infeasible. To progress we must either truncate our resolution at some scale before we have fully resolved the physics, or truncate our physics such that the available resolution becomes sufficient. The former route is one where we assume that we understand the physical processes to a certain degree and make our best effort at the calculation, forcing us to go deeply into convergence studies.
The latter is the route of the numerical experiment, where it is assumed that a certain amount of numerical calculation is possible and we make our best effort to include the relevant processes, requiring us to make a full comparison with the real Universe to test these assumptions (many simulations are, of course, a mixture of these approaches). Our focus will be on the latter case, that of the numerical experiment. We will also restrict ourselves to looking at the \emph{launch} region of the galactic wind, where gas is expelled from the galaxy but not necessarily from the halo. This is consistent with what is needed to improve subgrid models in semi-analytic models and hydrodynamical simulations. The motivation for our choice of scale relates to the need to resolve individual SN blast waves as they sweep through the ISM (as for example described by \citealp{Cox_72}). Briefly, such explosions involve three distinct stages (e.g.\ \citealp{Truelove_1999}), beginning with the very early stage during which SN ejecta expand almost freely into the ISM. As the ejecta sweep up ISM, preceded by a shock, eventually a reverse shock runs back into the ejecta, heating them to very high temperatures and signalling the start of the Sedov-Taylor stage \citep{Sedov_59, Taylor_1950}. In both stages radiative cooling is negligible, and consequently they can be described by similarity solutions, but the transition between them cannot. Finally, at late times, the hot interior of shocked ejecta cools radiatively, and the swept-up shell of ISM and ejecta continues to \lq snow-plough\rq\ further into the ISM, conserving momentum. \cite{Thornton_1998} examine these last two stages using a set of one-dimensional simulations of the evolution of explosions in a uniform ISM, examining in detail the transition from the ST phase to the snow-plough phase. They claim that radiative cooling is efficient enough that typically only 10 per cent of the initial blast energy is transferred to the ISM.
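For reference, the Sedov-Taylor stage obeys the similarity solution $R(t) = \xi\,(E t^2/\rho)^{1/5}$, with $\xi \approx 1.15$ for $\gamma = 5/3$. The sketch below evaluates it for an illustrative $10^{51}$~erg explosion in a $1\,{\rm cm^{-3}}$ medium (values we assume for illustration, not taken from the text):

```python
# Sedov-Taylor similarity solution R(t) = xi * (E t^2 / rho)^(1/5),
# with xi ~ 1.15 for an adiabatic index gamma = 5/3.
M_P = 1.6726e-24   # g, proton mass
PC  = 3.086e18     # cm
YR  = 3.156e7      # s

def sedov_radius_pc(t_yr, E_erg=1.0e51, n_cm3=1.0, xi=1.15):
    """Blast-wave radius in pc for an explosion in a uniform medium."""
    rho = n_cm3 * M_P
    return xi * (E_erg * (t_yr * YR) ** 2 / rho) ** 0.2 / PC

print(sedov_radius_pc(1.0e4))  # ~13 pc after 10^4 yr: a few-pc grid resolves this
```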
Notably, the amount of gas heated by these explosions is not a linear function of the SN energy; indeed, it is sub-linear, and thus we may expect that aggregating the energy injection of many SNe into a single event will underestimate the amount of gas heated. We would in principle like to resolve the earliest phase of the explosions, when ejecta dominate, but in this paper we restrict ourselves to initiating our SNe in the Sedov-Taylor phase. The transition between the ejecta-dominated and ST phases occurs approximately when the shock has swept up an amount of ISM mass comparable to that originally ejected. In low-density regions the size of the bubble where the transition happens may then be relatively large, and it would be worthwhile investigating whether this matters; we intend to do so in future work. Given this limitation, and for the typical ISM densities near the centre of the disk in our simulations, it then suffices to resolve scales of order a few parsecs to fully capture the cooling of the swept-up shell of ISM (e.g.\ \citealp{Cox_72}), and such a simulation will be able to resolve both the cooling and some part of the adiabatic phase of the remnant. As such, the dependence of our question upon sub-parsec phenomena arises only in two key areas, raising the following questions: \begin{enumerate} \item Star formation occurs on these scales, and thus controls the distribution (in time and space) of supernovae. Does this affect the properties of the galactic wind, for example because supernovae explode in high-density environments and/or near to other supernovae? \item The medium that the SNe drive into contains structures on sub-parsec scales, for example the cores of molecular clouds. Does this departure from a classical fluid affect the large-scale wind? \end{enumerate} We will argue that the answers to both of the above questions may indeed be negative, motivating a set of simulations of a highly simplified ISM.
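The transition criterion above (swept-up mass comparable to the ejecta mass) translates into a transition radius; the sketch below assumes an illustrative ejecta mass of $5 \, \rm M_\odot$ (our assumption, not a value from the text) and shows why the transition radius grows in low-density regions:

```python
import math

M_P   = 1.6726e-24  # g, proton mass
M_SUN = 1.989e33    # g
PC    = 3.086e18    # cm

def transition_radius_pc(M_ej_msun, n_cm3):
    """Radius at which the swept-up ISM mass equals the ejecta mass,
    marking the approximate onset of the Sedov-Taylor phase."""
    rho = n_cm3 * M_P
    volume = 3.0 * M_ej_msun * M_SUN / (4.0 * math.pi * rho)
    return volume ** (1.0 / 3.0) / PC

print(transition_radius_pc(5.0, 1.0))    # ~3.6 pc at a typical disk density
print(transition_radius_pc(5.0, 0.01))   # ~17 pc in a low-density region
```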
Such a simulation also improves our physical understanding of the role of the individual processes. On the first question we note that the progenitors of type II core-collapse SNe are massive stars \citep{Smartt_2009} with lifetimes of $\sim 1-30$ Myr \citep{Portinari_1998}; therefore the majority of the SN energy associated with an instantaneous burst of stars with, for example, a \cite{Chabrier_2003} initial mass function will be released within $\sim 30$~Myr. It is thought that the birth cloud of such stars is likely destroyed beforehand by the combined effects of stellar winds, proto-stellar jets and radiation (e.g.\ \citealp{Matzner_2002}), and there is observational evidence for this (e.g.\ \citealp{Lopez_2011}). Some clouds may form by turbulent compression when overrun by a spiral arm, and may disperse on a short time-scale through the same flows that created them in the first place (\citealp{Dobbs_2008}, see also \citealp{Tasker_2009}). In any case, when a SN explodes it will in general not do so inside its natal cloud. For this reason we assume that the SNe explode in typical environments in the disk plane of galaxies. Note, however, that the SNe may still be clustered rather than Poissonian, a complication that we neglect. Typical giant molecular clouds have a velocity dispersion of $\sim 4$ km~s$^{-1}$ \citep{Scoville_1987, McCray_1987}, which over 10 Myr results in a dispersion of around $40$ pc, a significant fraction of our box size and of the typical distance between molecular clouds. The second question is delicate, and worthy of significant discussion. We first note that we follow the nomenclature of \citet{Wolfire_1995}, in which the $T\sim 100$K phase of the ISM is called the cold neutral medium (hereafter CNM), the $T\sim 10^4$K phase the warm neutral medium (hereafter WNM) and the $T \gtrsim 10^6$K phase the hot ionised medium (HIM).
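The quoted drift of SN progenitors away from their birth clouds is simple arithmetic on the numbers in the text:

```python
PC  = 3.086e18   # cm
MYR = 3.156e13   # s

# Drift of a massive star from its birth site before it explodes:
sigma_v  = 4.0e5        # cm/s: GMC velocity dispersion of ~4 km/s (from the text)
t_life   = 10.0 * MYR   # s: a representative SN progenitor lifetime
drift_pc = sigma_v * t_life / PC
print(drift_pc)  # ~41 pc, comparable to the typical cloud separation
```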
The CNM exists in the form of dense clouds, occupying a very small fraction of the total volume but containing a significant fraction of the total mass. These clouds are believed to be in pressure equilibrium \citep{Spitzer_1956} with the WNM and HIM, which makes their energy budget (pressure $\times$ volume) also a small fraction of the ISM thermal energy. Their pressure support is probably composed of a combination of magnetic, thermal and cosmic ray components. The proportions of energy in thermal, bulk and turbulent motions of the HIM and WNM are still not entirely known, though there is consensus that much of the turbulence is supersonic \citep{Elmegreen_2004}. The supersonic nature of turbulence in the ISM implies that in the WNM and HIM the energy budget is dominated by the inertial terms of the turbulent motions over the thermal and magnetic terms. Despite its small fraction of the energy budget, however, the cold phase can perform the role of a heat sink. Thermal energy from the warm and hot phases can be transported into the cold phase via thermal conduction, and dissipated via the molecular transitions of this cold gas (particularly CO and ${\rm H}_2$), metal lines (in particular CI*), and dust. The excited states of the molecules, however, are rather long-lived, and whilst they are certainly important for star formation they may not significantly cool the WNM phase of the ISM due to its sparse nature \citep{deJong_1980,Martin_1996}. The molecules also play an important role as absorbers of photo-ionizing radiation; however, we ignore radiative driving here. The simulations described in this paper simply neglect the cold phase by truncating the cooling function below a value of $T_0 = 10^4$~K. If we were to include cooling below $T_0$ we would have to include significantly more physics (magnetic fields, heat conduction, diffuse heating): here we want to investigate and understand the simpler, yet still complex, case of a two-phase medium.
We have also intentionally left out the physics of cosmic rays (see, e.g.\ \citealp{Pfrommer_2007}) and magnetic fields (e.g.\ \citealp{Breitschwerdt_2007}), which may be important in providing support against collapse, particularly in the CNM. Our goal is to understand the resultant ISM without these complications, before discussing the implications of their addition. We would also like to stress that although we have included gravity, we have not included \emph{self}-gravity (i.e.\ our gravity is time-independent and only self-consistent for the initial set-up), which would be a poor approximation had we included the dense, cold material of the CNM. Without the CNM, gravity does not influence material on scales below the Jeans length of the WNM, which is comparable to the scale height of the warm disk. \section{Simulation set-up} \label{sect:simulations} In the following section we describe the simulations we have performed of supernova-driven outflows from an idealised ISM. Our simulations model the ISM and halo of a disk galaxy in a tall column, with the long ($z$) axis perpendicular to the galactic disk, co-rotating with the disk material. We use outflow conditions at the top and bottom of the column, and periodic boundary conditions in $x$ and $y$. We describe the initial conditions of the gas and the physical processes (gravity, cooling and supernova feedback) we have included, and detail their numerical implementation. Finally, we describe some tests we have performed on the code and the parameters we chose to explore in our simulation set. The simulations use a modified version of the {\sc FLASH}\ 3 code \citep{Fryxell00}. {\sc FLASH}\ 3 is a parallel, block-structured, uniform time-step, adaptive mesh refinement (AMR) code. Its second-order (in space and time) scheme uses a piecewise-parabolic reconstruction in cells. Due to the extremely turbulent nature of the ISM in our simulations, we find that {\sc FLASH}\ attempts to refine (i.e.
to use the highest resolution allowed) almost everywhere within our simulation volume. Therefore we disable the AMR capability of {\sc FLASH}\ and run it at a constant refinement level (albeit varied for our resolution studies). To mitigate the overhead of the guard-cell calculations we increase our block size to $32^3$ cells per block. For the gas physics we have assumed a monatomic ideal gas equation of state, \begin{equation} p = (\gamma - 1) \rho u\,, \end{equation} where $u$ is the specific thermal energy and $\gamma=5/3$ is the adiabatic index. This deviates slightly from the physical equation of state, which should include the change in mean particle mass that occurs as the atomic hydrogen becomes ionised, but the impact of this simplification is small compared to the other uncertainties in this kind of simulation. \subsection{Physical processes}\label{sec:PhysProc} The simplified ISM discussed in Section~\ref{sec:SelfConsist} is shaped by three fundamental processes: gravity, cooling and energy injection from supernovae, which dominate when only the WNM and HIM are considered. We stress that our aim is to simplify the problem as much as possible so that we can extract the physical principles. In future work we will experiment with making some assumptions more realistic. Below we discuss the effects and implementation of each of these processes. \subsubsection{Gravity} The gas in our simulations is initially in (vertical) hydrostatic equilibrium. In a disk galaxy the gravitational acceleration is induced by the gas and stars in the disk, baryons in the bulge and dark matter (in the halo and possibly the disk, see e.g.\ \citealt{Read_2008}). Despite these complications, when one moves to the (non-inertial) frame moving locally with the disk, the dominant effective potential gradient lies in the vertical direction, with a scale height of a few hundred parsecs.
Since the shape of this profile is approximately in accordance with the gaseous one, we model the total gravity of all components (gas, stars, dark matter) as proportional to the gaseous component, with a multiplier of the inverse of the gas fraction, $\frac{1}{f_{\rm g}}$, to account for the stellar and dark matter components, i.e.\ the gravitational potential depends on the gas density through Poisson's equation as \begin{equation} f_{\rm g} \nabla^2 \phi = 4 \pi G \rho \label{eq:poisson} \; . \end{equation} We also make a second assumption, namely that the gravitational profile of the disk is fixed in time, $\phi = \phi [ \rho_0 ]$. We assume this because the minimum temperature of our cooling function (discussed in Section~\ref{sec:cooling} below) sets the Jeans length on the order of the disk scale height, so we do not expect smaller self-gravitating clouds to appear in our simulations. In contrast, in the ISM of the Milky Way small self-gravitating clouds can form, because the ISM does cool to lower temperatures; however, the physics of star formation is not the process we wish to address in this paper. Other terms we have neglected include those introduced by the Coriolis force across our simulation volume, due to our non-inertial choice of frame, \begin{equation} \dot{\bf v}_{\rm cor} = -2{\bf \Omega} \wedge {\bf v}\, , \end{equation} where ${\bf \Omega}$ is the angular velocity of the galaxy. Our simulations, however, will typically cover such short time scales and small volumes that the Rossby number (the ratio of inertial to Coriolis terms) is large. Nevertheless, more complete simulations would include this, along with the time-dependent gravitational changes introduced by spiral density waves. Note that our simulations also neglect the velocity shear that is present in a differentially rotating disc.
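As a consistency check of the gas-traced gravity of Eq.~(\ref{eq:poisson}), one can integrate vertical hydrostatic equilibrium for an isothermal layer (an assumption made only for this check; the simulated ISM is not isothermal). The expected solution is the classic self-gravitating sheet profile $\rho(z) = \rho_0\,{\rm sech}^2(z/z_0)$, with the scale height rescaled by $\sqrt{f_{\rm g}}$: $z_0 = c_s\sqrt{f_{\rm g}/(2\pi G\rho_0)}$. The sketch below (illustrative parameter values of our choosing) verifies this numerically:

```python
import math

G = 6.674e-8  # cgs gravitational constant

def sech2_check(f_g=0.1, cs=1.0e6, rho0=1.67e-24, n=4000):
    """Integrate dP/dz = -rho dphi/dz with P = cs^2 rho (isothermal) and
    the gas-traced Poisson equation f_g phi'' = 4 pi G rho.
    Returns the maximum fractional deviation from the analytic profile
    rho(z) = rho0 * sech^2(z/z0), z0 = cs * sqrt(f_g / (2 pi G rho0))."""
    z0 = cs * math.sqrt(f_g / (2.0 * math.pi * G * rho0))
    dz = 4.0 * z0 / n            # integrate out to four scale heights
    rho, g = rho0, 0.0           # g = dphi/dz vanishes at the midplane
    worst = 0.0
    for i in range(n):
        g += 4.0 * math.pi * G * rho / f_g * dz    # Poisson equation
        rho *= math.exp(-g * dz / cs ** 2)         # hydrostatic balance
        z = (i + 1) * dz
        analytic = rho0 / math.cosh(z / z0) ** 2
        worst = max(worst, abs(rho - analytic) / analytic)
    return worst

print(sech2_check())  # small: the numerical profile tracks the sech^2 solution
```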
\subsubsection{Radiative cooling}\label{sec:cooling} The cooling function $\Lambda(T)$ of $T\sim 10^4-10^7$K gas with solar abundances is primarily due to bound-bound and bound-free transitions of ions, whereas above $T=10^7$K it is largely dominated by bremsstrahlung \citep{Sutherland_1993}. Below $T\sim 8000$K there is a sharp decrease of several orders of magnitude, causing a build-up of gas in the WNM. Cooling below $8000$K is due to dust, metal transition lines such as CI*, and, at very low temperatures, molecules. Whilst the imprint of small features in the cooling function should be observable in the ISM, it is really the cut-off at $T\sim 8000$~K that controls the WNM, and as such we approximate the cooling function with a Heaviside function with a step at $T_0 = 10^4 \; \rm K$, \begin{equation} \rho \dot{u} = \left\{ \begin{array}{cc} -\Lambda n^2 ,& T \geq T_0 \\ 0 , & T< T_0\,, \end{array} \right. \label{eq:cf} \end{equation} where in addition we assume pure hydrogen gas, so that the number density is $n = \rho / m_p$, and $\Lambda=10^{-22} \, \rm erg \, cm^3 \, s^{-1}$ (although $\Lambda$ is varied in a few of the simulations). We implement this very simple functional form so that we can explicitly check the effect of the normalisation of the cooling function, and to make sure that any characteristic temperature of the gas is not due to features in $\Lambda$. The cooling function of Eq.~(\ref{eq:cf}) results in a cooling time for gas with $T \geq T_0$ of \begin{eqnarray} t_{\rm cool} &\equiv & \frac{m_p u}{ \Lambda n} \nonumber\\ &\approx& 660 \, {\rm yr} \left( \frac{T}{T_0} \right) \left( \frac{n}{1 \; {\rm cm}^{-3}} \right)^{-1} \times \nonumber\\ & & \left( \frac{\Lambda}{10^{-22} \, \rm erg \, cm^3 \, s^{-1}} \right)^{-1} \; . \end{eqnarray} Since we have chosen a discontinuous cooling function, we implement a scheme in our code which prevents cooling below $T_0$ (although the hydrodynamic forces can still achieve lower temperatures adiabatically).
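The 660 yr normalisation of the cooling time can be reproduced directly from Eq.~(\ref{eq:cf}), using $u = \tfrac{3}{2}kT/m_p$ for pure monatomic hydrogen:

```python
K_B = 1.381e-16    # erg/K
M_P = 1.6726e-24   # g
YR  = 3.156e7      # s

def t_cool_yr(T, n_cm3=1.0, Lam=1.0e-22, T0=1.0e4):
    """Cooling time m_p u / (Lambda n) for the Heaviside cooling function,
    with u = (3/2) k T / m_p for pure monatomic hydrogen."""
    if T < T0:
        return float('inf')   # no cooling below the cut-off
    u = 1.5 * K_B * T / M_P   # erg/g
    return M_P * u / (Lam * n_cm3) / YR

print(t_cool_yr(1.0e4))  # ~660 yr at T = T0 and n = 1 cm^-3, as in the text
```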
This largely prevents the overshoot errors resulting from an explicit solver in this kind of problem. To test the importance of the choice of cooling function, we also implemented the cooling function appropriate for cosmic gas with solar abundance pattern from \citet{Sutherland_1993}, \begin{equation}\label{eq:SD_cooling} \Lambda_{\rm SD} (T) = 5.3\times 10^{-24} \left( T_8^{1/2} + 0.5 f_m T_8^{-1/2} \right) \, \rm erg \, cm^3 \, s^{-1} \, , \end{equation} where $T_8\equiv T/10^8~{\rm K}$, with $f_m=0.03$ for low metallicity gas, and $\Lambda =0$ for $T<10^4\, \rm K$. All runs where this cooling function is used are marked `SD' (see Table \ref{tab:parameters}). The minimum of this cooling function is at $5\times 10^7 f_m\; \rm K$, where the cooling rate is \begin{equation}\label{eq:SD_min} \Lambda_{\rm SD,min} = 1.30 \times 10^{-24} \, \rm erg \, cm^3 \, s^{-1} \, , \end{equation} (ignoring the cut-off below $10^4\; \rm K$). We show in Appendix \ref{sec:convergence} that the behaviour of the ISM in our simulations is surprisingly independent of the exact shape of the cooling function $\Lambda(T)$, although it depends on the minimum value at high temperatures $\gg 10^4$~K. \subsubsection{Energy injection by supernovae} The Kennicutt-Schmidt (KS) relation connects the observed surface density of star formation in a disk galaxy, $\dot{\Sigma}_\star$, to its gas surface density $\Sigma_{\rm g}$, \begin{equation} \dot{\Sigma}_\star \approx 2.5\times 10^{-4} \Sigma_{\rm g1}^{1.4} \, \rm M_\odot \, yr^{-1} \, kpc^{-2} \,, \label{eq:KS} \end{equation} \citep{Kennicutt_1998}, where $\Sigma_{\rm g1} \equiv\Sigma_{\rm g} / 1 \, \rm M_\odot pc^{-2}$. We also perform some simulations with an alternative formulation using a higher star formation rate, more commonly used in cosmological simulations, discussed in Appendix \ref{sec:convergence}. Notably this introduces an additional dependence on the gas fraction of the disk, $f_{\rm g}$, that is absent from the KS relation.
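Stepping back to the SD cooling function above: its quoted minimum can be verified numerically with a few lines (a sketch in cgs units, with $f_m=0.03$ as in the text; the function name is ours):

```python
import math

def lam_sd(T, f_m=0.03):
    """SD-style cooling rate [erg cm^3 s^-1], with the cut-off below 1e4 K."""
    if T < 1e4:
        return 0.0
    T8 = T / 1e8
    return 5.3e-24 * (math.sqrt(T8) + 0.5 * f_m / math.sqrt(T8))

T_min = 5e7 * 0.03  # analytic location of the minimum: 1.5e6 K for f_m = 0.03
print(f"{lam_sd(T_min):.2e}")  # 1.30e-24, matching the quoted minimum
```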
Our idealised model of a supernova explosion is the injection of $10^{51}$ ergs \citep{Cox_72} of thermal energy in a small volume, implicitly assuming instantaneous thermalisation of the SN ejecta. The distribution in time of these is taken to be a Poisson process (the Poisson process has the Markov property and so our SNe are independent) with a time independent rate computed from the initial parameters of the disk. For the local spatial distribution of SNe we assume the star formation rate to be proportional to the initial density, i.e. \begin{equation} \mathbb{E} [ \dot{\rho}_\star {\rm \, dV \, dt} ] = \dot{\Sigma}_\star \frac{\rho(t=0) }{\Sigma_{\rm g}} {\rm \, dV \, dt}\, . \end{equation} A consequence of this choice is that if the scale height of the gas profile evolves significantly the distribution of SNe will become inconsistent with the instantaneous mass profile. We discuss this further later. Given the star formation rate, the associated core-collapse SN rate is computed assuming the stellar initial mass function yields $\varepsilon_{100}$ SNe per $100 \, \rm M_\odot$ of star formation. For reference, for a \cite{Chabrier_2003} initial mass function with stars with masses $\in[0.1,100]\; \rm M_\odot$, of which those with mass in the range $[6,100]\; \rm M_\odot$ undergo core collapse, $\varepsilon_{100}=1.8$. The final element of the SN prescription is the distribution of the injected energy over the computational grid. The choice of volume over which to spread the thermal energy of the supernovae is influenced by two considerations. If the volume is too large the remnant will evade the adiabatic expansion phase and immediately proceed to the radiative phase \citep{Cox_72, Creasey_2011}. 
If the volume is very small the code will require many extra time steps evaluating the initial stages of a Sedov-Taylor blast wave and will perform unnecessary computation\footnote{To get some idea of the computational requirement of this, we recall that the velocity of a 3 dimensional Sedov blast wave evolves as $v \sim t^{-3/5}$. Substituting this into the Courant-Friedrichs-Lewy (CFL) condition we see that the number of time steps required to reach a given radius is proportional to that radius.}. Following \citet{Cox_72}, the radius at which the blast wave cools and forms a dense shell is \begin{eqnarray} R_s &=& 15.6 \left( \frac{E_{\rm SN}}{10^{51} \, \rm erg} \right)^{3/11} \left( \frac{\Lambda}{10^{-22} \, \rm erg \, cm^{3} \, s^{-1}} \right)^{-2/11} \times \nonumber \\ && \left( \frac{n}{1 \, \rm cm^{-3}} \right)^{-5/11} \; \rm pc \, , \label{eq:r_shell} \end{eqnarray} however to account for higher densities and the numerical spreading of shocks it is wise to resolve a fraction of this \citep{Creasey_2011}. Taking the above into consideration, for our simulations we spread the thermal energy of each SN over several cells given by the multivariate (3D) normal distribution of standard deviation 2 pc, consistent with being smaller than the cooling radius of Eq.~(\ref{eq:r_shell}) for densities $n < 77 \, \rm cm^{-3}$ ($\rho< 1.3\times 10^{-22}$~g~cm$^{-3}$). \subsubsection{Time-stepping} In addition to the numerical considerations described above, we also needed to make some adjustments to the time step calculation in {\sc FLASH}. The default time-stepping scheme in {\sc FLASH}\ uses a Strang-split method (\citealp{Strang_1968}, an operator splitting method where the hydrodynamic update occurs in two half steps, with the order in which the Riemann-solver operates reversed from $xyz$ to $zyx$ between the first and the second half step). 
Source terms, such as the injection of SN energy, are evaluated at the end of each half step, after the Riemann solver has been applied. This makes the implementation of the supernova energy injection problematic, as the thermal energy in a cell can increase by many orders of magnitude followed by a hydrodynamic step before a new time step is calculated. The latter hydrodynamic step then almost inevitably violates the CFL condition and the Riemann solver fails to converge. We avoid this by making the timestep limiter for the supernova source terms {\it predictive}, i.e. we utilise the foreknowledge of the pre-computed SNe times to recognise when a supernova will occur before the end of the timestep given by the CFL condition, and return either a timestep reaching up to just before the supernova, or the predicted CFL timestep after the supernova has occurred, whichever is smaller. It is worth contrasting this with some other simulations of the ISM. In a series of simulations \citet{Avillez_2004, Avillez_2005a, Avillez_2005b} use a set-up similar to ours, with imposed gravity, cooling, SNe, turbulence and magnetic fields in columns through disks of $1\times 1 \times 10$ kpc, although the focus is not on the mass loading. More recently the ERIS simulations \citep{Powell_2011} simulated the ISM in a single high redshift dwarf galaxy. \citet{Cooper_2008} perform a simulation of the central region of an M82-like starburst galaxy with gravity, cooling and energy injection due to supernovae (although this energy injection is continuous within a central volume, rather than stochastic as in our simulations). \subsubsection{Code tests} A set-up as complex as this requires some testing to confirm that the physical processes have been correctly implemented. As such we ran a number of simpler problems as well as the convergence tests in Appendix \ref{sec:convergence}. In order to test our hydrostatic set-up we simulated the disk without supernovae for several dynamical times.
Some sub-percent evolution in the gas occurred, almost certainly because our evaluation of the analytic solution for the gravitational potential and density at the centres of cells produces some discretisation error. The implementation of the cooling function was tested largely in \citet{Creasey_2011}. We follow a similar approach, in which we made the cooling rate for each cell an output of our code and compared it with the instantaneous rate predicted from the temperature and density of each cell (again there were small differences due to the comparison of an instantaneous rate with the average from an implicit scheme). The implementation of the individual SN in our set-up is largely similar to that of the Sedov-Taylor blast wave solution implemented in {\sc FLASH}\ as a standard test, and we compared it to the similarity solution. We calculate the locations and times of SN explosions ahead of the simulation, and verify that the code indeed injects them correctly. We initially also performed these calculations using the {\sc GADGET}\ simulation code \citep{Springel_05} that has been successfully applied to many cosmological simulations. Unfortunately the adaptive time-stepping algorithm proved problematic for correctly following the blast waves, and we noticed problems similar to those recently highlighted by \cite{Durier_Dalla_Vecchia_12}: particles may be on long time-steps in the cold ISM, and largely fail to properly account for being shocked by the blast wave from a nearby particle. \cite{Durier_Dalla_Vecchia_12} addressed this problem with a time step propagation algorithm; however, we did not have this nor the algorithm of \cite{Saitoh_2009} available, and the alternative of a global timestep would have been far too computationally expensive due to the large dynamic range in time steps required in the evolution of the blasts. As such we used the global adaptive time stepping algorithm of {\sc FLASH}.
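The predictive time-step limiter described above can be sketched schematically (a minimal illustration of the logic, not the actual {\sc FLASH}\ implementation; the function name and epsilon guard are ours):

```python
def limited_timestep(t, dt_cfl, sn_times, eps=1e-10):
    """Predictive SN limiter: since SN times are pre-computed, we can check
    whether a supernova falls inside the next CFL step. If so, shorten the
    step so it ends just before the SN; otherwise take the full CFL step."""
    upcoming = [t_sn for t_sn in sn_times if t_sn > t]
    if upcoming and min(upcoming) < t + dt_cfl:
        return max(min(upcoming) - t - eps, eps)
    return dt_cfl

# A step that would cross the SN scheduled at t = 0.5 is truncated...
print(limited_timestep(0.0, 1.0, [0.5]))  # just under 0.5
# ...while after the SN the CFL step is used unchanged.
print(limited_timestep(0.6, 1.0, [0.5]))  # 1.0
```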
\subsection{Initial conditions} Our initial setup is a tall box poking vertically through an idealised disk profile. We choose the long axis in the $z$-direction in order to capture a multiple of the gravitational scale height of the disk. The profile is a 1-dimensional gravitationally bound isothermal one with gas surface density $\Sigma_{\rm g}$. As discussed in Section \ref{sec:PhysProc} we have excluded the effects of shear (due to the Coriolis force in the disk) and large scale motions which may drive some turbulence down to the small scales. The gas density is \begin{equation} \rho(z) = \frac{\Sigma_{\rm g}}{2b} {\rm sech}^2 \left(\frac{z}{b}\right)\,, \end{equation} and the corresponding gravitational acceleration follows from Eq.~(\ref{eq:poisson}), \begin{equation} \nabla \Phi = 2 \pi G \Sigma_{\rm g} f_{\rm g}^{-1} {\rm tanh} \left(\frac{z}{b}\right) \; . \end{equation} Setting the gas temperature to $T_0$ (which is also the base of the imposed cooling function) and assuming the gas to be initially in hydrostatic equilibrium, the scale height is \begin{eqnarray} b &=& \frac{ f_{\rm g} k_{\rm B} T_0}{{\rm m_p}\pi G \Sigma_{\rm g}} \\ &\approx& 61 \left( \frac{ f_{\rm g}}{0.1} \right) \left( \frac{ \Sigma_{\rm g}}{10 \, \rm M_\odot \, pc^{-2}} \right)^{-1} \; {\rm pc}\,, \label{eq:b} \end{eqnarray} where numerically \begin{equation}\label{eq:density} \rho(z) \approx 3.4 \, \left( \frac{ \Sigma_{\rm g}}{10 \, \rm M_\odot \, pc^{-2}} \right)^2 \left( \frac{ f_{\rm g}}{0.1} \right)^{-1} {\rm sech}^2 \left(\frac{z}{b}\right) \, {\rm m_p \, cm^{-3}}\,.
\end{equation} The (vertical) dynamical time of the disk is \begin{eqnarray} t_{\rm dyn} &=& \sqrt{\frac{b f_{\rm g}}{G \Sigma_{\rm g}}}\nonumber \\ & \approx & 12 \times 10^6 \left( \frac{ f_{\rm g}}{0.1} \right) \left( \frac{ \Sigma_{\rm g}}{10 \, \rm M_\odot \, pc^{-2}} \right)^{-1} \, {\rm yr}\,, \label{eq:tdyn} \end{eqnarray} and the ratio of the dynamical time to the cooling time \begin{eqnarray} \zeta &\equiv& \frac{t_{\rm dyn}}{t_{\rm cool}} \nonumber \\ &\approx& 1.7 \times 10^5 \left( \frac{ \Lambda}{10^{-22} \, \rm erg \, cm^3 \, s^{-1}} \right) \left( \frac{ \Sigma_{\rm g}}{10 \, \rm M_\odot \, pc^{-2}} \right) \, . \end{eqnarray} The exact gravitational potential is given by \begin{equation}\label{eq:gravpot} \Phi(z) = 2 \pi G b \Sigma_{\rm g} f_{\rm g}^{-1} \log {\rm cosh} \left(\frac{z}{b}\right) \, , \end{equation} and the pressure in hydrostatic equilibrium is \begin{eqnarray} p &=& \pi G \Sigma_{\rm g} f_{\rm g}^{-1} b \rho(z) \\ &\approx & 3.3 \times 10^4 \left( \frac{ \Sigma_{\rm g}}{10 \, \rm M_\odot \, pc^{-2}} \right)^{2} \times \\ && \left( \frac{ f_{\rm g}}{0.1} \right)^{-1} {\rm sech}^2 \left( \frac{z}{b} \right) \, \rm K \, cm^{-3} \, . \end{eqnarray} Finally, the hydrostatic temperature for all our disks is chosen to be \begin{equation} T_0 = 10^4 \, \rm K \, . \end{equation} \subsection{Numerical parameters and boundary conditions} \begin{table} \begin{center} \begin{tabular}{ | r | c | c |} & & Fiducial \\ & Range of values explored & value \\ \hline $\Sigma_{\rm g}\, ({\rm M_\odot \, pc^{-2}})$ \vline & 2.5, 3.23, 4.17, 5.39, 6.96, & 11.61 \\ \vline & 8.99, 11.61, 15, 30, 50, 150, 500 & \\ $f_{\rm g}$ \vline & 0.01, 0.015, 0.022, 0.033, & \\ \vline & 0.050, 0.1, 0.2, 0.5, 1.0 & 0.1 \\ $\dot{\Sigma}_\star$ \vline & Eq. (\ref{eq:KS}), (\ref{eq:KStdyn}) & Eq. 
(\ref{eq:KS}) \\ $\Lambda\, ({\rm erg \, cm^3 \, s^{-1}})$ \vline & 1, 2, 4, 8, 16$\times10^{-22},\, \rm SD$ & $10^{-22}$ \\ Resolution (pc) \vline & 0.78, 1.56, 3.12, 6.25 & - \\ \end{tabular} \caption{Parameter variations in our simulation. Each simulation is initialised with an isothermal profile with a surface density of $\Sigma_{\rm g}$ in cold gas and gas fraction of $f_{\rm g}$ (i.e. a total mass density of $\Sigma=\Sigma_{\rm g}/f_{\rm g}$). Star formation proceeds either via a pure Kennicutt-Schmidt prescription, or via the dynamical time variation in Eq.~(\ref{eq:KStdyn}). Cooling above $10^4\; \rm K$ proceeds at a rate $\Lambda$ and we study the simulations at several resolutions to test for convergence.} \label{tab:parameters} \end{center} \end{table} To produce simulations of a realistic ISM we make the following choices of parameters. In terms of resolution we must have cell sizes fine enough to capture the cooling of supernova remnants (Eq.~\ref{eq:r_shell}), yet the simulation volume needs to be large enough to capture several scale heights of the star forming disk. In terms of gas fractions and gas surface densities we choose values approximating those in the solar neighbourhood and some variations. In practice we chose fiducial values for the disk parameters ($\Sigma_{\rm g} = 11.61 \, \rm M_\odot\, pc^{-2}$, $f_{\rm g}=0.1$) and examine this reference model in detail. For reference, the gas surface density of the solar neighbourhood of the Milky Way has been estimated at $\Sigma_{\rm g} = 13.2 \rm \; M_\odot \, pc^{-2}$, with a dynamical surface density of $\Sigma_\star = 74 \; \rm M_\odot \, pc^{-2}$ \citep{Flynn_2006}. In order to test the dependence of winds on the disk properties we perform a slice of the parameter space varying $\Sigma_{\rm g}$ and $f_{\rm g}$ (see Table \ref{tab:parameters}).
Not all parameter combinations are explored, as we cut out the simulations with very small scale heights (due to resolution constraints) and large scale heights (due to the finite box size). The dependence of the results on cooling, resolution, box size, star formation rate and run time can be seen in the Appendix. All our simulations were conducted in box sizes of $200\times 200 \times 1000 \; \rm pc$ with constant cell sizes. All cells were cubic, and in the vertical direction the number of cells for our default resolution is 640, with a corresponding cell size of 1.56~pc. We vary the numerical resolution using 160, 320, 640 and 1280 cells in $z$, with corresponding cell sizes ranging from $6.25-0.78$ pc. These simulations are denoted L2, L3, L4, L5, respectively. We also test the effect of adjusting our box size with simulations of $2 \times$ and $4 \times$ the width (see Appendix \ref{sec:convergence}). The gas surface density $\Sigma_{\rm g}$ is varied from $2.5\, \rm M_\odot \, pc^{-2}$ to $15\, \rm M_\odot \, pc^{-2}$ in $8$ logarithmically spaced steps followed by four additional steps of 30, 50, 150 and 500 $\rm M_\odot\, pc^{-2}$. Notably some of these are below the minimum surface density threshold for star formation of \citet{Schaye_2004} of $3-10\; \rm M_\odot \, pc^{-2}$ (although there is evidence that star formation proceeds below this level, e.g. \citealp{Bigiel_2008}). The gas fraction $f_{\rm g}$ was varied from $0.01$-$0.05$ in 5 logarithmic steps followed by additional steps of $0.1$, $0.2$, $0.5$ and $1.0$. The cooling function $\Lambda$ was varied from $3.9\times 10^{-25}$ to $1.6\times 10^{-21}\; \rm erg \, cm^3\, s^{-1}$, and we ran additional models with the \citet{Sutherland_1993} cooling function as parameterised in Eq. (\ref{eq:SD_cooling}). Each of our experiments is evolved over 20~Myr (typically thousands of cooling times) in order to simulate many SNe.
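Several of the quoted numbers in this and the preceding subsections can be cross-checked with a short script (a sketch in cgs units; the expected SN count for the fiducial box over 20~Myr is our own estimate from the KS relation with $\varepsilon_{100}=1.8$, not a number quoted in the text):

```python
import math

# cgs constants
G, K_B, M_P = 6.674e-8, 1.380649e-16, 1.6726e-24
M_SUN, PC = 1.989e33, 3.0857e18

def scale_height_pc(sigma_g_msun_pc2, f_g, T0=1e4):
    """Scale height b = f_g k_B T0 / (m_p pi G Sigma_g), returned in pc."""
    sigma = sigma_g_msun_pc2 * M_SUN / PC**2          # g cm^-2
    return f_g * K_B * T0 / (M_P * math.pi * G * sigma) / PC

b = scale_height_pc(10.0, 0.1)
print(round(b))                                # ~61 pc, as quoted

# mid-plane number density n(0) = Sigma_g / (2 b m_p)
sigma = 10.0 * M_SUN / PC**2
n0 = sigma / (2 * b * PC * M_P)
print(round(n0, 1))                            # ~3.3 cm^-3 (the text quotes 3.4)

# expected SN count in the 200 x 200 pc box over 20 Myr, fiducial disk
sfr = 2.5e-4 * 11.61**1.4                      # KS relation [Msun/yr/kpc^2]
n_sn = sfr * (1.8 / 100.0) * 0.2 * 0.2 * 2e7   # epsilon_100 = 1.8
print(round(n_sn))                             # ~110 SNe, i.e. 'many SNe'
```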
\section{Results} \label{sect:results} In this section we discuss the results of the simulations described in the previous section. We begin with a discussion of a single snapshot, allowing us to investigate the instantaneous properties of the idealised ISM and outflow. We then move on to the evolution of a simulation and the statistics we can measure, before finally investigating the effects of all the parameters discussed in the previous section. \subsection{Fiducial run} The impact of SNe depends strongly on whether they explode in the dense gas or in the more rarefied HIM. The supernovae in the disk blast bubbles in the ISM and compress the warm gas into thin sheets and clouds. We note that between the different simulations the volume of the warm medium can vary from a series of disconnected, nearly spherical regions to a highly porous stratus that approximately covers the base of the disk potential. We will use the term `clouds' to apply to both. When supernovae explode in the rarefied regions, either at the edge of the disk or inside previously evacuated bubbles, the heated gas pushes out of the central region and then rapidly escapes from the simulation volume in a zone of acceleration above and below the disk. This is the ISM portion of the galactic wind (i.e. the gas whose thermal energy far exceeds the potential barrier to escaping the disk). Some warm clouds are dragged along with this wind. A movie of this simulation is available online along with time dependent versions of some of the other figures\footnote{See \url{http://astro.dur.ac.uk/~rmdq85}}. \begin{figure*} \centering \includegraphics[width=2\columnwidth]{temp_dens_vel.pdf} \caption{Left to right, temperature, density, vertical velocity and pressure plots through a slice of the simulation, at time 5 Myr.
Temperature is coloured between $10^4-10^8 \, \rm K$, density between $10^{-27}-10^{-23} \; \rm g\,cm^{-3}$, $v_z$ from $-250$ to $250\, \rm km \, s^{-1}$ and pressure from $10-10^5\;\rm K\,cm^{-3}$. On the far right is the profile of density, temperature and pressure along a vertical line through the centre of the slice. In \emph{dotted blue} and \emph{red} we show the hydrostatic density and pressure profiles at $t=0$. Around $z=0$ we can see the disrupted disk in the temperature and density plots, with the warm gas squeezed into sheets and globules, and a significant fraction of the volume now consumed by a hot ($\sim 10^{6.5} \, \rm K$) sparse phase. In the velocity plot we can see a bulk vertical outflow from the disk. The outflow is inhomogeneous, entraining significant turbulence as well as some warm gas, swept away from the disk. } \label{fig:temp_dens_vel} \end{figure*} In Figure \ref{fig:temp_dens_vel} we show an $x-z$ slice of the fiducial run, at a time of 12 Myr. We can see that the combined action of multiple SNe has disrupted the disk considerably, with the warm gas squeezed into dense sheets and globules entrained in outflowing gas, and around half the volume now occupied by a hot tenuous phase. The gas appears to be in well-defined phases, an HIM (greens and yellows) and a WNM (dark blue), with little gas at intermediate temperatures (see also Fig. \ref{fig:volfrac}). Notably there is more temperature variation in the hot phase (a few orders of magnitude) than in the WNM (which is all close to $10^4 \; \rm K$). The density plot also appears to show two distinct phases, a high and low density medium, where the high densities show up in the temperature plots as WNM. In the velocity plot we can see a bulk vertical outflow from the disk, with velocity correlating with height. The pressure plot shows a dramatically lower dynamic range than either the temperature or density plots, but has some distinctive shells due to individual SN remnants.
The impression of a volume in quasi pressure equilibrium is reinforced by the profile plot where the temperature and density fluctuations appear to anti-correlate, resulting in comparatively small pressure variations. Above the plane of the disk the outflow is also very inhomogeneous, containing significant turbulence as well as some warm clouds or globules with cometary shapes. The corresponding locations in the density and pressure panels reveal that these clouds are also overdense and slightly under-pressured. In velocity the clouds appear to be receding from the disk at a lower velocity than the HIM, that is rushing past them at around 100~km~s$^{-1}$. The hot wind is stripping the edges of these warm clouds, as evidenced by their tails (see also the movie online). After only 12 Myr the original disk has undergone considerable disruption but is still observed as a connected feature in this slice (and the majority of the mass of the simulation remains in the central region). The disk has also been disrupted asymmetrically, with more mass pushed into the lower half space by the stochastic locations of the SNe. The externally imposed gravity will ultimately return this mass to the base of the potential, yet the combined action of the supernovae has been enough to displace it. Whilst we have run these simulations at different resolutions, it is important to note that the turbulent and chaotic nature of these simulations results in specific features such as individual clouds being at different locations or indeed absent between the different runs. Global properties, however, such as the outflow mass and temperature will be less stochastic, and we devote Appendix \ref{sec:convergence} to the convergence study of these properties. In general these simulations are numerically well converged. In the following figures we also include a few convergence comparisons where space allows. 
The values of the ISM pressures in our simulations are around $10^3 \, \rm K \, cm^{-3}$, comparable to the pressure in simulations such as \citet{Joung_MacLow_2006} and \cite{Joung_MacLow_2009}. Estimates of the pressure of a star forming ISM vary: \citet{Bowyer_1995} find a pressure of around $2\times 10^4\, \rm K \, cm^{-3}$ in the local bubble, although in the centre of the highly star forming region of 30 Doradus, \citet{Lopez_2011} estimate a pressure of $\sim 7 \times 10^6 \, \rm K \, cm^{-3}$ from IR dust measurements. \begin{figure} \centering \includegraphics[width=\columnwidth]{sne_volfrac.pdf} \caption{Density and temperature probability distributions for the fiducial run at 10 Myr, as shown in Fig.~\ref{fig:temp_dens_vel}; \emph{solid, dashed} and \emph{dotted} lines denote the L4, L3, L2 resolution runs, respectively. \emph{Upper panels} show the mass fractions in temperature and density, \emph{lower panels} show the corresponding volume fractions. We see a clear bimodality between the WNM (at low temperature and high density) and the HIM (at high temperature and low density). Almost all of the mass is in the WNM phase, but a significant fraction of the volume is in the HIM.} \label{fig:volfrac} \end{figure} Figure \ref{fig:temp_dens_vel} suggests that the hot and warm phases are quite distinct, and we test this by inspecting the volume fractions in Fig.~\ref{fig:volfrac}. The warm phase is very tightly distributed below $10^4$~K, as we might expect since the only mechanism for cooling here is by adiabatic expansion. The lack of intermediate temperatures suggests that gas at these temperatures has very short cooling times, which is consistent with a pressure equilibrium view. The hot tail of the distribution suggests the hottest gas either mixes with cooler gas or escapes from the simulation volume. \begin{figure} \centering \includegraphics[width=\columnwidth]{phasespace.png} \caption{Density-temperature histogram for the fiducial model at L3 resolution.
Each pixel is coloured by the fraction of cells at given $\rho-T$. \emph{Dashed black lines} indicate lines of constant pressure, $p/k_{\rm B}=10^2,10^3,10^4 \; \rm K \, cm^{-3}$ as indicated in the panel. We see the simulation volume is in an order-of-magnitude pressure equilibrium, with a bimodality in the gas phases into an HIM and WNM that we have segregated approximately with the \emph{dotted black line}, a temperature cut at $15,000 \, \rm K$. Above $10^4$~K and $\rho > 10^{-24} \rm g\, cm^{-3}$ the cooling time of the gas is very short and the gas quickly cools to $10^4$~K. Some gas reaches lower than this temperature due to adiabatic expansion.} \label{fig:phase} \end{figure} Figure \ref{fig:phase} is the density-temperature phase diagram for the fiducial model at L3 resolution (3~pc cells), which is broadly described by two regions. In the lower right, lying horizontally at a nearly constant temperature of order $T_0=10^4$~K (the base of the cooling curve) is the WNM, which contains most of the mass. The HIM is in the upper left. On examination of the time-dependent movie of this simulation we see the structure in the HIM is due to multiple supernovae: each supernova blast forms a `finger' roughly along an isobar, and as these shocked regions evolve and expand these lines descend to lower temperatures, forming the mixture in the lower right region. As one looks to lower temperatures the fingers start to merge and become indistinct. We see that instantaneously we have variations in pressure within approximately one order of magnitude, and that a significant fraction of the volume is in the HIM. \subsubsection{The characteristic temperature of the HIM}\label{sec:CharacTemp} It is interesting to consider where the characteristic temperature of the hot phase may arise from.
We recall that the cooling function used in these simulations was intentionally chosen to be independent of temperature for $T\ge T_0=10^4$~K, and as such cannot by itself introduce a characteristic temperature scale, yet in Fig.~\ref{fig:volfrac} the hot gas quite clearly has a well defined peak temperature $\sim 10^{6}\; \rm K$. This is much higher than the escape temperature for the simulation volume ($\sim 10^5 \; \rm K$, derived from Eq. \ref{eq:gravpot}), and as our SNe are injected just as thermal energy, there is no characteristic temperature for this gas. Since all of the hot gas in our simulations has been produced by the action of SNe it is reasonable to suppose that the temperature of this phase may be determined by the transition from the adiabatic to the momentum driven phases, as described by \citet{Cox_72, Chevalier_1974} and \citet{Larson_1974}. In this explanation, the supernovae would rapidly expand in the adiabatic phase until the action of cooling relative to expansion causes the growth of the remnant to decelerate, and the edge to form a cold dense shell. This shell still expands, but at a considerably reduced rate, driven primarily by the momentum of the shell. We expect the adiabatic phase to remain approximately spherical due to the short sound crossing time within the hot volume, however when the blast enters the momentum driven phase, the cooling shell is unstable and the remnant can become quite asymmetric. If the edge of the remnant reaches other sparse material the hot interior of the remnant can leak out (i.e. a `chimney' such as those seen in \citealp{Ceverino_2009}), otherwise the hot material will gradually be consumed into the dense shell as it radiates away its pressure support. 
The post shock temperature, $T_s$, of the hot remnant at which the \lq sag\rq\ occurs (when cooling dominates over adiabatic expansion) was calculated in \citet{Cox_72} as \begin{eqnarray} T_s &\approx& 2.0 \times 10^6 \, \left(\frac{n}{1 \; \rm cm^{-3}} \right)^{4/11} \left(\frac{E_{\rm SN}}{10^{51}\; \rm erg } \right)^{2/11} \times \nonumber \\ && \left(\frac{\Lambda}{10^{-22} \rm erg \, cm^3 \, s^{-1} }\right)^{6/11} \, \rm K \, \label{eq:Cox_temp} . \end{eqnarray} The obstacle which radiates away the energy of the SN is the warm disk gas of Fig.~\ref{fig:temp_dens_vel}. Taking a mean density of these from Fig.~\ref{fig:volfrac} \begin{equation} n = 3 \; \rm cm^{-3} \, \label{eq:Cox_dens}, \end{equation} ($\rho=5\times 10^{-24}$~g~cm$^{-3}$) we expect a characteristic temperature of the remnants to be $T_{\rm hot} \approx 3 \times 10^6$~K, very close to our HIM temperature of $\sim 10^{6}\; \rm K$. Another interesting application of Eq. (\ref{eq:Cox_temp}) is to estimate the mass heated by a single supernova before it ends the adiabatic phase. By finding the amount of mass required to absorb the thermal energy of a supernova we derive \begin{eqnarray} M_{\rm hot} &=& \frac{2}{3} \frac{m_{\rm p} E_{\rm SN}}{k_{\rm B} T_s} \nonumber \\ &=& 1350 \,M_\odot\,\left({T_s\over 3 \times 10^6{\rm K}}\right)^{-1}{E_{\rm SN}\over 10^{51}{\rm erg}}\,, \label{eq:mhot} \end{eqnarray} where we have neglected the initial thermal energy of the heated gas, the SN ejecta themselves (see also \citealp{Kahn_1975}), and assumed that none of the SN energy has yet been lost radiatively. For comparison, in the model of \cite{Efstathiou_2000}, a supernova evaporates a similar mass $M_{\rm ev}\sim 540\,M_\odot$ of cold clouds. 
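These two estimates are straightforward to reproduce numerically; a sketch in cgs units implementing the two scalings above (function names are ours):

```python
# Post-shock temperature at shell formation (Cox 1972 scaling quoted above)
# and the corresponding mass heated per SN, in cgs units.
K_B, M_P, M_SUN = 1.380649e-16, 1.6726e-24, 1.989e33

def t_shell(n, e_sn=1e51, lam=1e-22):
    """T_s [K] for ambient number density n [cm^-3]."""
    return 2.0e6 * n**(4/11) * (e_sn / 1e51)**(2/11) * (lam / 1e-22)**(6/11)

def m_hot(n, e_sn=1e51, lam=1e-22):
    """Mass [Msun] needed to absorb E_SN as thermal energy at T_s."""
    return (2/3) * M_P * e_sn / (K_B * t_shell(n, e_sn, lam)) / M_SUN

print(f"{t_shell(3.0):.1e}")  # ~3e6 K for the warm-cloud density n = 3 cm^-3
print(round(m_hot(3.0)))      # ~1360 Msun, close to the 1350 Msun quoted above
```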
If all this hot gas were to escape from the simulation without entraining any other material we would derive a mass loading of \begin{eqnarray} \beta &=& \frac{M_{\rm hot} \varepsilon_{100} }{100 \; \rm M_\odot} \nonumber\\ &\approx& 13 \varepsilon_{100} \left(\frac{n}{3 \; \rm cm^{-3}} \right)^{-4/11} \left(\frac{E_{\rm SN}}{10^{51}\; \rm erg } \right)^{9/11} \times \nonumber \\ &&\left(\frac{\Lambda}{10^{-22} \rm erg \, cm^3 \, s^{-1} }\right)^{-6/11} \label{eq:betamax} \\ &\approx & 13 \varepsilon_{100} \left( \frac{ \Sigma_{\rm g}}{10 \, \rm M_\odot \, pc^{-2}} \right)^{-8/11} \left( \frac{ f_{\rm g}}{0.1} \right)^{4/11} \times \nonumber \\ && \left(\frac{E_{\rm SN}}{10^{51}\; \rm erg } \right)^{9/11} \left(\frac{\Lambda}{10^{-22} \rm erg \, cm^3 \, s^{-1} }\right)^{-6/11} \label{eq:beta_surf}\, , \end{eqnarray} where in Eq.~(\ref{eq:betamax}) we have used the warm cloud density $n=3$~cm$^{-3}$ from Eq.~(\ref{eq:Cox_dens}), and in Eq.~(\ref{eq:beta_surf}) we have used the hydrostatic mid-plane density from Eq.~(\ref{eq:density}). The mass loading is higher at lower surface densities (and also volume densities), at higher gas fractions, and for gas that cools more slowly, and increases with the SN energy injected. If all the gas escapes at $T=T_s$ then this is an upper bound for the mass loss, since some energy will be converted to other forms such as radiation and turbulent motion, and for this simulation we do find the measured $\beta$ is significantly below this (see section \ref{sec:fitparams}). Notably many versions of semi-analytic models such as {\sc GALFORM}\ assume $\beta$ close to this maximum. In this section we have described a snapshot of a simulation of a patch of the ISM with similar parameters to that of the solar neighbourhood. We have reproduced a warm and hot phase in order-of-magnitude pressure equilibrium, with a value similar to that estimated for the local volume. 
We have related the temperature of the hot phase to the density of the warm phase via the energy of each SN and the cooling time of the gas. \subsection{Time dependence} We now turn our attention to the time dependence within our simulation. We have seen in Fig.~\ref{fig:temp_dens_vel} that our idealised disk is disrupted by the energy injection from supernovae, and we are interested in the evolution that results from this. The injected energy can be converted into a number of forms: heating of the warm phase, the thermal energy of the hot phase, the mechanical energy of turbulence and the wind, the gravitational potential of the gas as it is lifted out of the disk, and the photons lost through radiative cooling. It is worth recalling that cooling is one of two ways in which energy can leave the simulation volume, the second being the advection of mass across the vertical boundaries of the simulation, taking with it the thermal, mechanical and gravitational potential energy of the gas. \begin{figure} \centering \includegraphics[width=\columnwidth]{sne_heights_time.png} \caption[Volume weighted mean temperature as a function of height and time, for the fiducial disk parameters]{Volume weighted mean temperature as a function of height and time, for the fiducial disk parameters but in a wider box of $800\times 800 \times 1000$~pc. At each height we have taken the average over a horizontal slice. Superimposed are \emph{red dots} indicating the locations and times of the SN events. As the simulation progresses the activity of many SNe shock-heats gas and drives a vertical wind from the disk at around $300 \, \rm km\, s^{-1}$. \emph{Dotted, dot-dashed, solid} and \emph{dashed} magenta lines denote outflows of 33, 100, 300 and 1000 $\rm km\, s^{-1}$ respectively. Around and subsequent to each supernova, a pulse (in orange) can be seen in the temperature.
After a short ($t < 1$ Myr) flurry of supernova activity within the disk ($z=\pm 53 \rm pc$), the shocked regions begin to combine and rise out of the disk and the simulation volume. Occasionally, individual supernovae high above the disk (where the gas density is low) make a significant contribution to the wind. The 1000 $\rm km\, s^{-1}$ line has been offset to start at 6 Myr to be compared with the propagation of one such temperature pulse.} \label{fig:height_time} \end{figure} Fig.~\ref{fig:height_time} is a `space-time' plot of the onset of the outflow: time is along the horizontal axis, and the projected mean temperature, $\bar T$, as a function of height is colour coded and shown on the vertical axis; red dots correspond to the times and locations of individual SN injection events. In order to reduce the effects of stochastic outflows we performed this simulation in a larger box, of width $800$ pc. The initially hydrostatic gas at temperature $T=T_0$ seen at the far left of the figure is quickly replaced by gas at a range of temperatures. The dark blue coloured band, corresponding to $\bar T\approx T_0$, episodically widens as a function of time, as the disk puffs up. Gas with a mean temperature $\bar T\sim 10^6$~K is seen to stream out of the disk at a range of velocities. From Fig.~\ref{fig:volfrac} we recall that there is actually very little gas by mass at $10^6$~K; by volume, however, the mean temperature will be close to this. Around each supernova a plume of hot gas can be seen (cyan against the colder dark blue gas). At late times these plumes combine and drive the galactic wind. Comparing with the velocity lines we can see the evolution of the outflow velocity with time, with many structures with velocities in the range of 30-300$\;\rm km\, s^{-1}$. Superposed, however, are some extremely steep (w.r.t. time, i.e. high velocity) discontinuities where much of the simulation volume rapidly experiences an increase in temperature.
These appear to propagate from individual SNe, and race away from the disk with velocities in excess of $1000\; \rm km\,s^{-1}$, consistent with a sudden pressurisation of the hot phase of the ISM\footnote{For reference, the temperature that corresponds to a given sound speed $c$ is $T=7.3\times 10^7\,{\rm K}\,(c/1000~{\rm km}~{\rm s}^{-1})^2$.}. This increased pressure causes stripping from the warm material as shocks drive into the warmer region of the cloudy medium, adding to the mass of the hot phase. To analyse our simulations we reduced our data set to the following parameters. These are chosen to give us a broad overview of the evolution of the star forming disk, rather than information on the individual cells and clouds. For these parameters there is some freedom of definition, e.g. when one attempts to measure the pressure one could take the mid-plane pressure, the pressure within the star forming scale height $b$, the mean pressure within the simulation volume, or the mean pressure within a volume adjusted by some measure of the current disk scale height. In all cases we have attempted to choose a definition which strikes a balance between reducing stochasticity (some candidate measures show considerably more noise than others) and ease of physical intuition. \begin{enumerate} \item{Mass ejection}, $\Sigma_{\rm ej}(t)$, is the amount of gas ejected from the disk per unit area. This is calculated from the mass advected through the boundary at $z = \pm 500 \rm pc$, divided by the surface area of the simulated column. This quantity is used in the calculation of the cosmologically important quantity $\beta = \dot{\Sigma}_{\rm ej} / \dot{\Sigma}_\star$ where we have identified the mass ejected from the idealised disk with the mass ejected from the galaxy. To achieve the nearest correspondence we try to maximise the volume we are measuring the loss from, i.e. the entire simulation volume.
The corresponding normalised quantity is the fraction of gas remaining in the disk, $f_\Sigma \equiv 1 - \Sigma_{\rm ej} / \Sigma_{\rm g}$. \item{Cold gas/Hot gas surface density} are the remaining cold/hot gas surface densities in the simulation volume, and in combination with the mass ejected, sum to the initial gas surface density $\Sigma_{\rm g}$. \item{Cold volume fraction}, $f_{\rm cold}$, is the volume fraction of cold gas, sometimes quoted in terms of the porosity \begin{equation}\label{eq:porosity} P = - \log f_{\rm cold}\,, \end{equation} \citep{Silk_2001}. We distinguish between cold and hot phases at a cut-off of $2T_0$ (i.e. twice the lower limit of our cooling function). Though the choice of $2T_0$ may seem arbitrary, it is apparent from Fig.~\ref{fig:volfrac} that the bi-modality of the warm and hot phases is quite strong, so the dependence of our results on the choice of temperature cut-off is rather low. Since the effectiveness of SNe in driving feedback is highly suppressed in dense (and cold) regions, the volume filling factor largely determines the probability that an individual supernova will explode in the hot phase. The volume we study is $z \in [-250,250] \; \rm pc$, as we are not interested in the hot gas far from the plane of the disk (where SNe do not occur). \item{Pressure}, $p$, is the mean pressure in the entire simulation volume. Hot material from the disk is ejected by a mean pressure gradient to the edge of the simulation volume; however, the stochastic nature of supernova events creates a significant variation over small time scales and large spatial scales\footnote{The pressure equilibrium predicted by \cite{Spitzer_1956} holds over smaller spatial scales where the supersonic turbulence decays over the sound crossing time.} and thus it is desirable to smooth the pressure estimate over as large a volume as possible.
\item{Half-mass height}, $\lambda_{1/2}$, is defined as the height where $z \in [-\lambda_{1/2}, \lambda_{1/2}]$ contains half the original gas mass of the disk, \begin{equation} \label{eq:scale_height} \lambda_{1/2} = \min \left\{ z' : \int_{-z'}^{z'} \left< \rho \right>_z {\rm dz} > \frac{1}{2} \Sigma_{\rm g} \right\}\,. \end{equation} At the start of the simulation this is related to the scale height by our choice of isothermal density profile, at $\lambda_{1/2} = \frac{1}{2} b \log 3$. Large outflows will `puff-up' the disk to greater scale heights; at late times this would become inconsistent with our star formation profile. \item{Effective cooling rate}, $\eta_{\rm eff}$, is the total radiative cooling rate in the simulation volume divided by the mean SNe energy injection rate, \begin{equation} \eta_{\rm eff} = \frac{\int_V \Lambda n^2 {\rm dV}}{\int_{\rm area} E_{\rm SN} \varepsilon_{100} (\dot{\Sigma}_\star / 100 {\rm M_\odot}){\rm dA}} \, . \end{equation} Conservation of energy implies that all of the energy not released as radiation must end up either in the wind or as gravitational potential energy. Due to the discrete nature of time sampling with snapshots (i.e. for many of the quantities, such as cooling, we have instantaneous measurements of their time derivatives and not measurements of the integrated quantities themselves) there is some error on our estimate of the integrated quantities. Most susceptible is the estimate of the cooling rate: the tail of high-density gas seen in the density probability distribution function of Fig.~\ref{fig:volfrac} cools very rapidly, and our time sampling means its contribution to cooling is under-estimated. We will inevitably miss some cooling that would have occurred outside the simulation volume (although much of this gas is tenuous and will have a long cooling time, little gas remains dense in the outflowing material).
Nevertheless our high snapshot frequency run gives us energy conservation to $\sim 1 \%$ and confidence that we can accurately measure the outflowing components from the low frequency runs (energy conservation in the simulation itself is of course much better than this). \end{enumerate} \begin{figure} \centering \includegraphics[width=\columnwidth]{fiducial_evol.pdf} \caption[Time evolution of statistics for the run in Fig. \ref{fig:height_time}]{Generation of an outflow in the run in Fig. \ref{fig:height_time} as characterised by the evolution of normalised quantities described in (i)-(vi) in the text. After a transient initial stage of $\sim 5$~Myr, gas starts to be ejected at a nearly constant rate of $\sim 0.01\,M_\odot\,{\rm Myr}^{-1}\,{\rm pc}^{-2}$. The \emph{dark blue line} is the cumulative mass ejected per unit area, in units of $0.2 \rm M_\odot \, \rm pc^{-2}$. The porosity $P=-\log(f_{\rm cold})$ of hot gas builds very quickly; the \emph{green line} is $0.5+0.2P$, implying a filling factor of the HIM of approximately 50\%. The \emph{red line} is the mean pressure, $\log_{10} \left( p/ 10^{3} {\rm K \, cm^{-3}} \right)$, disturbed from its initial value of $0.7\times 10^3$~K~cm$^{-3}$ in the base of the disk by the action of the SNe. The \emph{black line} is the fraction of gas remaining in the simulation. The \emph{magenta line} is the evolution of the scale height, Eq.~(\ref{eq:scale_height}), in terms of $0.1 \lambda_{1/2}(t) / \lambda_{1/2}(0)$. The highly stochastic \emph{cyan line} is $\eta_{\rm eff}$, the instantaneous cooling rate as a fraction of the mean SNe energy injection rate. During the first $\sim 2$ Myr the porosity in the simulation rapidly increases, after which the material begins to be ejected from the simulation in a relatively linear fashion. There are periods where the cooling rate increases dramatically by a factor $\sim 10$, which are closely related to SN energy injection events.
Energy injection has not significantly puffed up the disk.} \label{fig:single_evolution} \end{figure} In Figure \ref{fig:single_evolution} we inspect these parameters for the simulation in Fig. \ref{fig:height_time}. For the first $\sim 2$ Myr, the most notable feature is the rapid increase of porosity as the supernova blasts evacuate bubbles in the disk. The height of the disk remains approximately constant. As the simulation evolves, the remaining gas fraction declines (black curve) as gas leaves the simulation volume (blue curve). The mass lost from the simulation appears to be a nearly linear function of time at this stage, suggesting a constant outflow rate, which we investigate further in section \ref{sec:fitparams}. \subsection{Comparison to a rarefaction zone}\label{sec:rarefaction} A characteristic feature of both simulated and observed outflows \citep{Steidel_2010} is that the wind speed {\em increases} with height $z$ above the disk, and it has been suggested that radiation driving is the cause of this \citep{Murray_2005}. Radiation driving is not included in our modelling, yet the outflow does accelerate, so we suggest the following physical model. The combined effects of several supernova explosions cause the ISM pressure to increase substantially above the hydrostatic equilibrium value. If gravity is not dominant, this will lead to the higher pressure ISM expanding into the lower pressure regions above the disk. In the launch region of such an outflow, 1D (plane-parallel) symmetry is a reasonable description of the geometry.
A useful comparison is the behaviour of a rarefaction wave, where a homogeneous static gas is released into a sparse, pressure free zone, and for which the similarity solution is \begin{eqnarray} v(\eta) &=& \frac{2}{\gamma + 1} c_0\,\left( 1 + \eta \right) \nonumber\\ \rho(\eta) &=& \rho_0 \left( {2\over \gamma+1}- {\gamma-1\over\gamma+1}\,\eta \right)^{2/(\gamma-1)} \nonumber\\ \eta &\equiv& {z\over c_0 t}\,, \end{eqnarray} valid for \begin{equation} \eta\in \left[ -1, \frac{2}{\gamma - 1}\right]\,. \end{equation} In such a flow, speed increases with height $z$ and density decreases. This is distinct from the flow due to a single blast wave, since in the Sedov-Taylor phase density {\em increases} with distance from the blast, which is not the case for the disk outflow (Fig.~1). Notably this does not describe a \emph{steady} wind, which would be the result of continuous energy injection. In a rarefaction wave, the acceleration is due to the pressure gradient in the outflow, and results in thermal energy being converted to kinetic energy; the asymptotic flow speed is $v_{\rm max}=3c_0$ for $\gamma=5/3$. The outflowing gas above the disk is mainly warm ISM gas that is entrained by the hot SN bubbles that power the rarefaction wave. Figure~\ref{fig:rarefaction} shows the behaviour of the simulation to be consistent with this model: velocity increases with height $z$, but decreases with time at a given height in the way predicted by the similarity solution. Notably the rarefaction is not a steady-state solution, and thus is not a good description of the time-averaged behaviour of the gas. Such behaviour should mimic the result of continuous energy injection, where multiple overlapping SNe in the form of rarefactions or Sedov-Taylor blast waves (see e.g. \citealp{Castor_1975, Weaver_1977, McCray_1987}) drive a large-scale wind. Our simulations are sufficiently stochastic, however, that we shall leave this for future work.
There will also be departures from a steady state solution as the disk consumes its gas, or in a real galaxy, has some gas inflow. \begin{figure} \centering \includegraphics[width=\columnwidth]{rarefaction.pdf} \caption{\emph{Solid line} shows the mean vertical velocity as a function of height for two times in the $\Sigma_{\rm g} = 2.5\; \rm M_\odot\, pc^{-2}$, $f_{\rm g}=0.01$ simulation showing only the hot gas (where we have defined hot gas to be that above $2\times 10^4 \, \rm K$). \emph{Red dotted line} is a linear fit ($\alpha=2.6$) to the earlier snapshot ($t=2.5 \, \rm Myr$) which is then extrapolated to the later snapshot (\emph{blue dotted line}). This shows the profile is evolving in an approximately self-similar fashion with the hot material accelerating away from the disk primarily due to its thermal energy being converted to kinetic energy.} \label{fig:rarefaction} \end{figure} \begin{figure*} \centering \includegraphics[width=2\columnwidth]{steidel_tempcuts.pdf} \caption{Normalised column density as a function of velocity, for gas with different temperatures (\emph{coloured lines}). For low temperature absorbers ($\lesssim 10^6 \; \rm K$) we see a single-peaked profile centred around the rest frame velocity of the disk. For higher temperature absorbers, we see absorption at higher velocities relative to the disk, with velocity increasing with temperature. Only the $\gtrsim 10^{7} \; \rm K$ distribution appears to show any significant asymmetry.} \label{fig:outflow} \end{figure*} \subsection{Absorption features of galactic winds}\label{sec:MockAbsorb} \citet{Steidel_2010} proposes that the C{\sc II} absorption line data is also well fit with velocities increasing with distance from the disk (in particular the lower panel of Fig. 24 of \citealp{Steidel_2010}). The explanation above provides a physical mechanism for those measured features.
This is without the radiation and dust driven mechanisms invoked by \citet{Murray_2005, Martin_2005, Sharma_2011}. We pointed out in Fig.~\ref{fig:temp_dens_vel} the multi-phase nature of the outflow, as well as the fact that outflow speed depends on temperature. This is made more vivid in Fig.~\ref{fig:outflow} in which we show mock \lq absorption lines\rq\ of gas selected in narrow temperature bins. These mock line profiles are simply the fraction of gas in a given temperature range that is moving with a given velocity, as a function of velocity, $v$. For the temperatures $T< 10^7$~K, the lines have their highest optical depths at $v\sim 0$~km~s$^{-1}$, and shapes which vary little with temperature, $T$, and are almost symmetric in velocity. The line shapes broaden as the temperature increases, and for the hottest gas at $T>10^7$~K the line becomes asymmetric and the absorption centre is now $\sim -100$~km~s$^{-1}$. It is tempting to compare these to absorption line studies in outflows such as \cite{Martin_2005} in NaI and \cite{Weiner_2009} in MgII, however more work would be required to calculate corrections for the geometry and ionisation. Fig.~\ref{fig:temp_dens_vel} also shows colder clouds entrained inside the much hotter flow, with cometary-like tails where the cloudy medium is being ablated by the hot gas rushing past. Absorption lines might arise from mass loading this hot flow either through conductive evaporation (see for example \citealp{Boehringer_1987,Gnat_2010}) and/or through ablation (e.g. \citealp{Hartquist_1986}). \cite{Fujita_2009} investigated the warm clouds in axisymmetric 2-dimensional simulations, where the clouds appear as Rayleigh-Taylor unstable cool shells and fragments that can explain the high velocity Na I absorption lines. We note that the metallicity of the gas phases is likely to be quite distinct, as the supernovae are both the origin of the heating and of the metals, and we intend to explore this in a subsequent paper.
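The construction of the mock line profiles of Fig.~\ref{fig:outflow} amounts to a mass-weighted velocity histogram in narrow temperature bins. The sketch below illustrates the procedure on synthetic stand-in data (the toy temperature-velocity relation and all numerical values are assumptions for illustration, not simulation output):

```python
# Illustrative sketch of the mock 'absorption line' construction: select gas
# in a narrow temperature bin, histogram its line-of-sight velocities weighted
# by mass, and normalise. The arrays below are synthetic stand-ins for cells.
import numpy as np

rng = np.random.default_rng(0)
T = 10 ** rng.uniform(4, 8, 10000)                          # temperatures [K]
v = -20.0 * np.log10(T / 1e4) + rng.normal(0, 30, T.size)   # toy: hotter -> faster outflow
m = rng.uniform(0.5, 1.5, T.size)                           # masses (arbitrary units)

def mock_profile(T_lo, T_hi, v_edges):
    """Normalised mass-weighted velocity distribution for T in [T_lo, T_hi)."""
    sel = (T >= T_lo) & (T < T_hi)
    hist, _ = np.histogram(v[sel], bins=v_edges, weights=m[sel])
    return hist / hist.sum()

v_edges = np.linspace(-300, 300, 61)
centers = 0.5 * (v_edges[:-1] + v_edges[1:])
cold = mock_profile(1e4, 1e5, v_edges)
hot = mock_profile(1e7, 1e8, v_edges)
cold_centroid = (cold * centers).sum()
hot_centroid = (hot * centers).sum()
# The hot-phase profile is shifted to negative (outflowing) velocities,
# mirroring the qualitative behaviour seen in the figure.
```

Geometry and ionisation corrections, as noted in the text, would be needed before comparing such profiles with observed absorption lines.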
\section{The dependence of outflows on disk properties}\label{sec:fitparams} \label{sect:statistics} In the previous section we have discussed in detail the features of a simulation of a supernova-driven wind using a set of fiducial parameters for the disk and supernova rate, the processes which drive it and the statistics that can be used to examine it. In this section we explore how the outflow properties vary and scale with the parameters. We will use such scalings in the next section to integrate over a full galactic disk. \begin{figure} \centering \includegraphics[width=\columnwidth]{velz_Sig_fgas_slices.png} \caption[Matrix of upper half plane velocities for different simulations]{Matrix view of simulations varying gas surface density ($\Sigma_{\rm g}$) and gas fraction ($f_{\rm g}$), each panel showing a time-averaged vertical velocity for the upper half plane of each simulation (i.e. the disk is at the base of each panel). Gas surface density increases from left to right, gas fraction increases from bottom to top. There appears to be a strong trend in wind velocity towards the lower right panels, i.e. a disk with low gas fraction but high gas surface density tends to generate a faster wind.} \label{fig:matrix_pics} \end{figure} In Fig.~\ref{fig:matrix_pics} we plot average velocities above the simulation disk, for the simulations varying $\Sigma_{\rm g}$ and $f_{\rm g}$. There appears to be a strong trend in wind velocity, with wind speed increasing with increasing gas surface density, but decreasing gas fraction. There are no simulations in the upper left as these would have a scale height larger than half the box size, or in the lower right as these would have a scale height less than $3$ pc. \subsection{Mass outflow} Inspecting the ratio of mass outflow rate to star formation rate gives us an analogous property to that of Eq. (\ref{eq:beta}), i.e.
for a specific area on the disk \begin{equation} \beta = \frac{\dot{\Sigma}_{\rm ej}}{\dot{\Sigma}_\star} \, , \end{equation} which we use in our subsequent analysis. In theory every snapshot from our simulations contains an estimate of this $\beta$, as the mass outflow rate at a specific height; however, this is rather stochastic, so as an alternative we calculate $\beta$ as a fit to several measurements of the integrated outflow \begin{equation} y_i = \frac{\int_0^{t_i} \dot{\Sigma}_{\rm ej} {\rm d}t }{\int_0^{t_i} \dot{\Sigma}_\star {\rm d}t} \, , \end{equation} which are easily obtained from each simulation snapshot. We fit the data samples $\left\{(t_i, y_i)\right\}_{i=1}^n$ with the ramp function, \begin{equation} \label{eq:betafit} f(t) = \left\{ \begin{array}{cc} 0 , & t< t_0 \\ \beta t ,& t \geq t_0\,, \end{array} \right. \end{equation} where the parameters $t_0$ and $\beta$ are free variables. The motivation for choosing such a fit is that, whilst the ejection rate is nearly linear in most cases, there is a time ($t_0$) required for the system to reach a quasi steady state. This will not be a true steady state, in that the wind will eventually exhaust the supply of cold gas; however, this occurs over a sufficiently long time-scale that the fit is a reasonable description for our simulations. The square error of this function can be minimised analytically by finding linear regressions for the subsets $s_k$ of $\left\{ (t_i,y_i) \right\}_{i=1}^n$ defined by $\left\{ (t_i,y_i) \right\}_{i=k}^n$ and choosing the minimum $k$ such that the linear regression $t$-intercept $<t_k$. If we define $g(s_k)$ as the $t$-intercept of the linear regression for $s_k$, then \begin{equation}\label{eq:t0} t_0 = \min \left\{ t_k : g\left(s_k\right)<t_k, s_k \equiv \left\{ (t_i,y_i) \right\}_{i=k}^n \right\} \,, \end{equation} and $\beta$ is the slope of this linear regression. Plots of the gas fraction remaining in the simulation volumes can be seen in Fig.
\ref{fig:single_evolution} for the fiducial model, and for the set of simulations of varying $\Sigma_{\rm g}$ and $f_{\rm g}$ in Fig. \ref{fig:sliceA} in the Appendix where we also show the fits given by Eq. (\ref{eq:betafit}). \begin{figure} \centering \includegraphics[width=\columnwidth]{2011_09_21_beta_fit.pdf} \caption{The mass loading $\beta$ (mass ejection rate vs. rate of star formation) as a function of gas surface density $\Sigma_{\rm g}$. Each point represents a fit of $\beta$ (section \ref{sec:fitparams}) to star formation simulations of varying $\Sigma_{\rm g}$ and $f_{\rm g}$. \emph{Red line} denotes a power law fit with jack-knife errors, \emph{coloured symbols} (red-blue) correspond to the simulations with gas fraction $f_{\rm g} = $ 0.01 - 1.0 respectively. \emph{Vertical grey dashed line} indicates the $3\; \rm M_\odot \, pc^{-2}$ threshold for star formation from \citet{Schaye_2004}. We see a significant negative dependency of $\beta \sim \left( \Sigma_{\rm g}/ 1\; {\rm M_\odot \, pc^{-2}} \right)^{-1.04\pm 0.07}$ on the gas surface density, which may be due to the larger gravitational potential or the higher rate of cooling (incurred by higher gas densities) or some combination of both. We also note that the scatter seems partially a function of $f_{\rm g}$, with higher gas fractions showing larger $\beta$'s than the lower (e.g. \emph{blue} vs. \emph{green}).} \label{fig:betafits} \end{figure} In Fig. \ref{fig:betafits} we plot the mass loading $\beta$ as a function of gas surface density $\Sigma_{\rm g}$. Each point represents a fit of $\beta(\Sigma_{\rm g})$ for the simulations varying $\Sigma_{\rm g}$ and $f_{\rm g}$. The first point to note is that our $\beta$ values all lie below $4$, and for a large range of our parameters $\beta \ll 1$, i.e. our domain of parameter space switches from effective feedback (more gas ejected than stars formed) to ineffective, where the amount of gas released is much smaller than that converted into stars.
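The ramp-function fit of Eqs.~(\ref{eq:betafit}) and (\ref{eq:t0}) used to extract these $\beta$ values can be sketched as follows; the data here are synthetic, and the loop realises the suffix-regression rule described in the text:

```python
# Sketch of the ramp-fit procedure: fit a straight line to each trailing
# subset {(t_i, y_i)}_{i=k..n}, and take the first k whose regression
# t-intercept falls below t_k; the slope of that regression is beta.
import numpy as np

def fit_ramp(t, y):
    """Return (t0, beta) for the ramp model y = beta*t for t >= t0, else 0."""
    for k in range(len(t) - 1):
        slope, intercept = np.polyfit(t[k:], y[k:], 1)
        if slope > 0:
            t_intercept = -intercept / slope  # where the line crosses y = 0
            if t_intercept < t[k]:
                return t[k], slope
    raise ValueError("no valid ramp fit found")

# Synthetic example: quasi-steady ejection beginning at t0 = 5 Myr.
t = np.linspace(0.0, 50.0, 101)
y = np.where(t < 5.0, 0.0, 0.02 * (t - 5.0))
t0, beta = fit_ramp(t, y)
```

On this toy data the recovered slope is the input ejection rate, and $t_0$ lands at the onset of the quasi-steady regime.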
Based on jack-knife errors, our power law fit shows a significant negative dependency, $\beta\approx 6 \left( \Sigma_{\rm g}/ 1\; {\rm M_\odot \, pc^{-2}} \right)^{-1.04\pm 0.07}$, implying that at high gas surface densities the feedback is less efficient. This could be due to a number of effects. Firstly, a higher gas surface density will correspond to a deeper potential well, so the escape velocity of the gas is higher. Secondly, the higher gaseous surface densities correspond to higher gas volume densities (Eq.~(\ref{eq:density})), resulting in shorter cooling times. \begin{figure} \centering \includegraphics[width=\columnwidth]{2011_09_21_beta_fgas_fit.pdf} \caption{Joint dependence of the mass loading $\beta$ on gas surface density, $\Sigma_{\rm g}$, and gas fraction $f_{\rm g}$. Differently coloured curves correspond to simulations with different values of $\Sigma_{\rm g1}\equiv \Sigma_{\rm g} /{\rm M}_\odot\,{\rm pc}^{-2}$, the \emph{thick red line} is our best fit of the simulation points. We see a dependence of $\beta \Sigma_{\rm g}^{1.15}$ on gas fraction, with a power law dependency of $0.16 \pm 0.15$. Higher gas fractions for a given gas surface density imply a shallower potential well, explaining why the outflow efficiency increases with $f_{\rm g}$.} \label{fig:betafgasfits} \end{figure} Another notable dependency is that on the gas fraction. Some of the scatter seen in Fig. \ref{fig:betafits} actually depends systematically on the gas fraction, $f_{\rm g}$, with higher gas fractions showing consistently larger $\beta$'s than the lower values.
We explore this in Figure \ref{fig:betafgasfits}, where we have performed a simultaneous fit of $\beta$ to both the gas surface density and the gas fraction, \begin{equation}\label{eq:log_linear_fit} \beta = \beta_0 \Sigma_{\rm g1}^{-\mu} f_{\rm g}^\nu \, , \end{equation} where we find the values \begin{eqnarray} \beta_0 &=& 13 \pm 10 \label{eq:beta_fit}\\ \mu &=& 1.15 \pm 0.12 \label{eq:mu_fit} \\ \nu &=& 0.16 \pm 0.14\,. \label{eq:nu_fit} \end{eqnarray} By construction the joint fit now no longer shows a systematic dependence on either $\Sigma_{\rm g}$ or $f_{\rm g}$. Accounting for this shows a positive dependency of $f_{\rm g}^{0.16 \pm 0.14}$, i.e. holding the gas surface density constant while increasing the gas fraction (which reduces the gravitational strength, thus increasing the dynamical time and reducing the star formation rate) increases the mass loading. As with the dependence on gas surface density, we are effectively seeing a sub-linear dependence on star formation rate: as we decrease the star formation (increase the gas fraction), we see a less-than-proportionate drop in the outflow rate. Again, the mechanism causing this should be a combination of the processes derived above for the $\Sigma_{\rm g}$ dependence. In Fig.~\ref{fig:betafgasfits} there is considerable scatter, especially at high gas fraction where a number of simulations have mass ejection rates considerably above the trend. This is most likely due to heavy disruption of the disk out of the plane where the wind from subsequent supernovae can eject it from the simulation volume. With such stochasticity the description of all the simulations with a simple power law becomes inadequate.
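The joint fit of Eq.~(\ref{eq:log_linear_fit}) is linear in log space, as the following sketch shows; the $(\beta, \Sigma_{\rm g}, f_{\rm g})$ samples are synthetic stand-ins (generated with the fitted exponents plus scatter), not the simulation measurements themselves:

```python
# Sketch of the joint power-law fit of Eq. (log_linear_fit): taking logs turns
# beta = beta0 * Sigma^(-mu) * f_g^nu into a linear model solvable by least
# squares. Synthetic samples with 0.05 dex scatter stand in for the data.
import numpy as np

rng = np.random.default_rng(1)
Sigma = 10 ** rng.uniform(0, 2, 40)    # gas surface density [Msun/pc^2]
f_g = 10 ** rng.uniform(-2, 0, 40)     # gas fraction
beta = 13.0 * Sigma ** -1.15 * f_g ** 0.16 * 10 ** rng.normal(0, 0.05, 40)

# Design matrix for log10(beta) = log10(beta0) - mu*log10(Sigma) + nu*log10(f_g)
A = np.column_stack([np.ones_like(Sigma), -np.log10(Sigma), np.log10(f_g)])
coef, *_ = np.linalg.lstsq(A, np.log10(beta), rcond=None)
beta0, mu, nu = 10 ** coef[0], coef[1], coef[2]
# beta0, mu, nu recover the input values to within the injected scatter.
```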
Our measured value for the exponents $\mu=1.15$ and $\nu=0.16$ that relate mass loading to gas surface density and gas fraction, $\beta\propto \Sigma_{\rm g}^{-\mu}\,f_{\rm g}^\nu$, can be compared with the values from the model described in Section \ref{sec:CharacTemp}, which predicts scalings of $\mu=8/11=0.72$ and $\nu=4/11=0.37$. That model does not include gravity, and we suggest this is why the measured and predicted values differ. To verify this we have performed a series of simulations with significantly higher star formation rate, described in the Appendix. This uses a slightly different parameterisation that is more commonly used in cosmological simulations which introduces an extra dependence on the gas fraction, but the primary effect is an increase in star formation for the parameter range we study. In these runs, the energy injection rate is much higher, the volume filling factor of the hot phase much larger, and the outflow rates are correspondingly larger as well. Consequently the effect of gravity of the disk is much reduced. Fitting $\beta\propto \Sigma_{\rm g}^{-\mu}\,f_{\rm g}^\nu$ to these runs yields $\mu=0.82$ and $\nu=0.48$, in much better agreement with the predictions of the simple model. It would be interesting to extend the model to account for the gravity of the disk, along the lines followed by \cite{Stringer_2011}. Assume that the $\beta$ of the hot gas in Eq.(\ref{eq:beta_surf}) is modified by an escape fraction $f_{\rm esc}$, which is equal to the fraction of material that has a temperature above the escape temperature of the simulation volume. Assuming the outflow has a range of temperatures, characterised by a Maxwell-Boltzmann distribution, and that only gas with $T>T_{\rm esc}$ escapes, the fraction is \begin{eqnarray} f_{\rm esc} &=& \int_{T_{\rm esc}} f(T) {\rm d}T \\ & \approx & 1 - \frac{4}{3 \sqrt{\pi}} \left( \frac{T_{\rm esc}}{T_{\rm s}} \right)^{3/2} \, . 
\end{eqnarray} We have assumed that $T_{\rm esc} \ll T_{\rm s}$, i.e. the low energy tail of the distribution fails to escape. The net outflow will thus drop faster at high $\Sigma_{\rm g}\propto T_{\rm esc}$, making the dependence of the mass-loading on $\Sigma_{\rm g}$ stronger, which is consistent with the higher $\mu \approx 1.15$ we see in the lower SFR simulations. \subsection{Radiative efficiency and energy partition in the ISM} Whilst the mass loading of the galactic wind is one of the most cosmologically significant parameters to study, we would also like to evaluate the energy budgets and structure of the winds in our simulations. The energy injected by the SNe is absorbed into the gravitational binding energy, distributed into thermal and mechanical energy (both in the bulk motion of the wind and in turbulence throughout the simulation volume) and released as radiation (via cooling). The energy partition also enables us to evaluate a wind velocity for the galaxy, which is commonly used to characterise feedback models for galaxy formation (e.g. \citealp{Bower_2011}). The fraction of the energy that is incorporated into the wind, in combination with the mass loading, determines the overall wind speed for a galaxy. This is an important parameter in determining whether the wind can leave the galaxy and hence provide efficient quenching of star formation. By examining our simulations we can determine the fractions of energy that has been converted in to the different modes. In our fiducial simulation, we discover that a fraction of 87\% was radiated, 4.5\% was advected out of the computational volume as thermal energy, 5\% as mechanical energy (with over half of this in the form of turbulent energy), 1\% went into heating the simulation volume\footnote{Note that in a true steady state this fraction should be compensated by cooling.}, 1\% went into turbulence in the simulation volume and a rather low $0.5\%$ went into puffing-up the disk. 
The parameters here are averaged in a similar manner to the mass ejection rate, by taking the mean over snapshots after $t_0$ (Eq. \ref{eq:t0}), i.e. in the quasi-steady regime. Summation of these quantities allows us to estimate $\eta_T$ (Eq. \ref{eq:eta_T}), the fraction of power that is thermalised into the outflow \begin{equation} \eta_T = \eta_{\rm therm} + \eta_{\rm mech} \, , \end{equation} i.e. the sum of the thermal and mechanical (bulk and turbulent) contributions (the remainder going almost entirely into cooling). This allows us to calculate an effective velocity $v_{\rm eff}$ for the wind, \begin{equation}\label{eq:vwind} v_{\rm eff} = \sqrt{\frac{2\eta_T}{\beta} \left(\frac{E_{\rm SN} \varepsilon_{100} }{100 \rm M_\odot}\right)} \, , \end{equation} where we have combined the equation for mass loading, $\beta\equiv \dot M_{\rm wind}/\dot M_\star$, and the thermalisation of supernova energy into the kinetic energy of the wind ($\eta_T$), to find the specific energy in the wind (i.e. an inversion of Eq.~(\ref{eq:eta_T})). Notably this will be significantly higher than the wind velocities we see at the edge of our simulation volume because it includes the energy of the thermal and turbulent components. At larger distances from the galaxy, however, we expect this to be a more realistic estimate, as the thermal energy accelerates the wind and is converted into the mechanical energy of the bulk flow. This is a consequence of our simulations focusing on the launch region of the galactic wind, and hence the wind has not yet reached its terminal velocity. Note that ram pressure from infalling gas may be important in slowing down, or even preventing, the outflowing gas from escaping (e.g. \citealp{Theuns_2002}).
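Eq.~(\ref{eq:vwind}) is straightforward to evaluate in physical units; the sketch below does so in cgs, with illustrative parameter choices (the $\eta_T$ and $\beta$ values are examples, not the paper's fitted numbers):

```python
# Sketch: evaluate the effective wind speed of Eq. (vwind),
# v_eff = sqrt((2*eta_T/beta) * E_SN * eps100 / (100 Msun)), in km/s.
M_SUN_G = 1.989e33  # solar mass [g]

def v_eff_kms(eta_T, beta, E_SN=1e51, eps100=1.0):
    """Effective wind speed [km/s] for thermalisation eta_T and mass loading beta."""
    specific_energy = 2.0 * eta_T * E_SN * eps100 / (beta * 100.0 * M_SUN_G)  # [erg/g]
    return specific_energy ** 0.5 / 1e5  # cm/s -> km/s

# e.g. eta_T = 0.1 and beta = 1 give v_eff of roughly 320 km/s
```

This makes explicit why efficient feedback (large $\beta$) implies slower winds at fixed thermalisation, and vice versa.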
\begin{figure} \centering \includegraphics[width=\columnwidth]{vwind_fits.pdf} \caption{Effective wind speed (\emph{upper panel}), outflow efficiency (\emph{middle panel}) and mass loading (\emph{lower panel}) as a function of total surface density $\Sigma=\Sigma_{\rm g}/f_{\rm g}$. Coloured lines with symbols are the simulations from figures (\ref{fig:betafits}-\ref{fig:betafgasfits}), with values of the gas fraction $f_{\rm g}$ as indicated. \emph{Dotted lines} in the lower panel are the scalings from equations (\ref{eq:beta_fit}-\ref{eq:nu_fit}), plotted for $f_{\rm g} = 0.01,0.015,0.2,1.0$ in the corresponding colours. Lines of constant efficiency, $\eta_T=0.1$ and 0.4 are shown in the middle panel (\emph{black dotted} and \emph{dashed}, respectively). Curves for the corresponding scaling of the effective wind speed for $f_{\rm g}=0.1$ are shown in the upper panel. The outflow efficiency increases with surface density, as does the effective wind speed. } \label{fig:vwindfits} \end{figure} In Figure \ref{fig:vwindfits} we explore the dependence of the mass loading $\beta$, the fraction of power in the outflow, $\eta_T$, and the effective wind velocity, $v_{\rm eff}$, as a function of the total surface density of the disk, $\Sigma=\Sigma_{\rm g}/f_{\rm g}$. In terms of the mass loading we see a negative dependence on surface density; for comparison we have also included the power law fit from Eqs.~(\ref{eq:beta_fit}-\ref{eq:nu_fit}). The fraction of power released into the wind, $\eta_T$, appears to be correlated almost entirely with gas fraction $f_{\rm g}$; at high gas fractions much of the energy of star formation is simply radiated away, which is intuitive since the higher gas fractions will have shorter cooling times.
For comparison we also show values of $\eta_T = 0.1$ and $0.4$, the former being equivalent to the widely quoted $10\%$ efficiency of \citet{Larson_1974}: we find star formation in disks to lie close to this value, except at very low gas fractions. The fall in outflow power in Fig. \ref{fig:vwindfits} at low surface densities can also be seen as a fall in the effective wind velocity. Here we have converted our sample values of $\eta_T = 0.1, \; 0.4$ into effective wind velocities using the power-law fit for $\beta$ in Eqs.~(\ref{eq:beta_fit}-\ref{eq:nu_fit}). Each gas fraction appears to follow a line of approximately constant $\eta_T$, although there is some suggestion of a change in slope below $\Sigma = 10^2 \; \rm M_\odot \, pc^{-2}$. \section{Impact of outflows on galaxy evolution} \label{sect:evolution} In this section we apply our results from the previous section to the mass outflow from disk galaxies of different masses. We will assume a surface density profile for a galaxy and then use our fits for the outflow efficiency as a function of surface density to deduce an overall feedback efficiency. \subsection{Dependence on circular velocity from theoretical arguments}\label{sec:beta_MMW} In this section we take our measured dependencies of the mass loading parameter (which are derived for a patch of the ISM) and apply them to an entire disk galaxy by integrating over the surface of the disk. This will allow us to compare with the feedback schemes considered by e.g. \cite{Cole_2000, Bower_2006}, which introduce a relation between circular velocity, mass loading, and effective wind speed. Our first step is to assume a model for a disk galaxy inside a dark matter halo, where we follow \citet{Mo_Mao_and_White_98}. 
The circular velocity of a spherical isothermal dark halo of mass $M_{200}$ is given by \begin{equation} V_{200}^3 = 10 G M_{200} H(z) \label{eq:iso_halo}\,, \end{equation} \citep{Mo_Mao_and_White_98} where $H(z)$ is the Hubble parameter as a function of redshift, $z$. Since the baryonic component can release energy via cooling, it can collapse further to become a rotationally supported disk. Observed bulge-less disks have a near-exponential profile in luminous mass of the form \begin{equation} \Sigma(r) = \Sigma_0 \exp \left( -r/R_{\rm d} \right)\, , \end{equation} with normalisation $\Sigma_0$ and scale length $R_{\rm d}$. The mass of the disk is thus given by \begin{equation} M_{\rm d} = \int_0^\infty 2 \pi \Sigma(r) r {\rm d} r = 2 \pi \Sigma_0 R_{\rm d}^2 \, . \label{eq:md} \end{equation} The scale length $R_{\rm d}$ is controlled by the specific angular momentum of the material forming the disk (e.g. \citealp{Fall_Efstathiou_1980}). An exponential disk with constant rotation velocity $V_{\rm d}$ has angular momentum \begin{equation} J_{\rm d} = 4 \pi \Sigma_0 V_{\rm d} R_{\rm d}^3 \, , \end{equation} and if we parameterise in terms of the disk mass as a fraction of the halo mass, $m_{\rm d} \equiv M_{\rm d} / M_{200}$, the circular velocity of the disk as a fraction of the halo's, $v_{\rm d} = V_{\rm d} / V_{200}$, and the specific angular momentum fraction of the disk, $j_{\rm d} / m_{\rm d}$, we can infer the surface density normalisation to be \begin{eqnarray} \Sigma_0 &=& \frac{2}{\pi} \frac{M_{\rm d}^3 V_{\rm d}^2}{J_{\rm d}^2} \nonumber \\ &=& \frac{10 H(z)}{\pi G} \lambda^{-2} \left( \frac{j_{\rm d}}{m_{\rm d}} \right)^{-2} m_{\rm d} v_{\rm d}^2 V_{200} \label{eq:MMW_sigma} \, , \end{eqnarray} where $\lambda$ is the spin parameter of the isothermal halo in Eq.~(\ref{eq:iso_halo}). 
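As an illustrative numerical sketch (our own, not part of the paper's analysis), Eq.~(\ref{eq:MMW_sigma}) can be evaluated in cgs units for a Milky-Way-like system; the parameter values below are assumptions matching the fiducial numbers used later in the text.

```python
import math

# Evaluate the central surface density of an exponential disk in an
# isothermal halo, Eq. (MMW_sigma). All parameter choices (lambda = 0.05,
# m_d = 0.03, V_200 = 155 km/s, etc.) are assumptions taken from the
# fiducial values discussed in the surrounding text.

G    = 6.674e-8                  # cm^3 g^-1 s^-2
H0   = 71 * 1e5 / 3.086e24       # 71 km/s/Mpc in s^-1
MSUN = 1.989e33                  # g
PC   = 3.086e18                  # cm

def sigma0(V200_kms, lam=0.05, jd_over_md=1.0, md=0.03, vd=1.0, H=H0):
    """Central surface density Sigma_0 in Msun/pc^2, Eq. (MMW_sigma)."""
    V200 = V200_kms * 1e5        # cm/s
    sig = (10 * H / (math.pi * G)) * lam**-2 * jd_over_md**-2 * md * vd**2 * V200
    return sig / (MSUN / PC**2)  # convert g/cm^2 -> Msun/pc^2

# Fiducial halo (v_d = 1, j_d/m_d = 1) versus the adjusted MW-like disk
# (v_d = 1.29, j_d/m_d = 0.42) introduced in the following subsection:
print(round(sigma0(155.0)))                               # ~98 Msun/pc^2
print(round(sigma0(155.0, jd_over_md=0.42, vd=1.29)))
```

The adjusted disk parameters raise $\Sigma_0$ by roughly an order of magnitude, consistent with the text's remark that the fiducial choice yields too low a surface density.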
Notably, if we set $v_{\rm d}=1$ we recover the \cite{Mo_Mao_and_White_98} surface density equation, yet for real disks $v_{\rm d} > 1$, as the contribution of baryons to the rotation velocities is not insignificant. We can now compute a mean mass loading $\hat{\beta}$ for such a galaxy, by evaluating \begin{equation} \hat{\beta} \equiv \frac{\dot{M}_{\rm wind}}{\dot{M}_\star} = \frac{\int 2 \pi \beta \dot{\Sigma}_\star r {\rm d}r}{\int 2 \pi \dot{\Sigma}_\star r {\rm d}r} \, , \label{eq:betahat_integral} \end{equation} where we will assume the surface density in star formation, $\dot\Sigma_\star$, follows the Kennicutt-Schmidt relation, \begin{equation} \dot{\Sigma}_\star = A \Sigma_{\rm g}^n \, . \end{equation} \begin{figure} \centering \includegraphics[width=\columnwidth]{disk_wind.pdf} \caption{Fraction of the wind launched at each radius in the disk (Eq.~\ref{eq:frac_wind}), for a Kennicutt-Schmidt relation $\dot\Sigma_\star \propto \Sigma_{\rm g}^n $, with $n=1.4$, and assuming mass loading scales with gas surface density as $\beta\propto \Sigma_{\rm g}^{-\mu}$, with $\mu=1.15$ (Eq.~\ref{eq:mu_fit}). \emph{Dotted line} indicates the characteristic wind radius $R_{\rm w}/R_{\rm d}$ for the galaxy, where the local mass loading equals the net mass loading for the galaxy as a whole, $\hat\beta=\dot M_{\rm wind}/\dot M_\star$.} \label{fig:disc_wind} \end{figure} Taking the dependence of mass loading on surface density found from our fits to the simulations, Eq.~(\ref{eq:log_linear_fit}), Eq.~(\ref{eq:betahat_integral}) can be integrated analytically. We re-write Eq.~(\ref{eq:log_linear_fit}) in terms of the total surface density, $\Sigma$, and the gas fraction, $f_{\rm g}$, and obtain \begin{equation} \beta (\Sigma , f_{\rm g}) = \beta_0 \left(\frac{\Sigma}{1 \, M_\odot \, {\rm pc}^{-2}} \right)^{-\mu} f_{\rm g}^{\nu - \mu}\,, \end{equation} giving a dependence on the gas fraction of $\propto f_{\rm g}^{\nu-\mu} = f_{\rm g}^{-0.99}$. 
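A quick sanity check (illustrative; the exponents $\mu=1.15$, $\nu=0.16$ and normalisation $\beta_0=13$ are the fiducial fit values quoted in the text) confirms the near-inverse gas-fraction dependence of the rewritten mass loading:

```python
# Rewriting the mass loading in terms of total surface density
# Sigma = Sigma_g / f_g gives beta ~ Sigma^-mu * f_g^(nu - mu), with
# nu - mu ~ -0.99, i.e. nearly inverse in the gas fraction.

MU, NU, BETA0 = 1.15, 0.16, 13.0   # fiducial fit values from the text

def beta(sigma_total, f_gas):
    """Mass loading for total surface density (Msun/pc^2) and gas fraction."""
    return BETA0 * sigma_total**-MU * f_gas**(NU - MU)

# Halving the gas fraction at fixed total surface density roughly doubles beta:
ratio = beta(100.0, 0.1) / beta(100.0, 0.2)
print(round(ratio, 2))   # 2**0.99 ~ 1.99
```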
The fraction of the wind launched as a function of radius is given by, \begin{eqnarray} \frac{{\rm d} f_{\rm w}}{{\rm d} (r/R_{\rm d})} &=& \frac{2 \pi R_{\rm d} \beta(r) \dot{\Sigma}_\star(r)}{\dot{M}_{\rm wind}} \nonumber \\ &=& (n-\mu)^2 \left( \frac{r}{R_{\rm d}} \right) \exp\left[ -(n-\mu) r/R_{\rm d} \right] \, , \label{eq:frac_wind} \end{eqnarray} which gives the differential rate of production of the star-formation driven wind, normalised by the total wind. This function is plotted in Fig.~\ref{fig:disc_wind}. At large radii the star formation is most effective at driving a wind, but the net contribution to the galaxy outflow is limited by the low rate of star formation there. Conversely, at small radii the wind is limited by the small area of the disk, and so it is at intermediate radii where the local mass loading equals that of the galaxy as a whole. We can characterise this further by defining a wind radius $R_{\rm w}$ by \begin{equation} \hat{\beta} = \beta \left( \Sigma(R_{\rm w}), f_{\rm g} \right)\,, \end{equation} that is, $R_{\rm w}$ is that radius in the galaxy where the local mass loading, $\beta=\dot\Sigma_{\rm ej}/\dot\Sigma_\star$, equals the total mass loading of the entire galaxy, $\hat\beta=\dot M_{\rm wind}/\dot M_\star$. The wind radius for the galaxy is then given by, \begin{eqnarray} \label{eq:r_wind} R_{\rm w} &=& \frac{2}{\mu}\log \left(\frac{n}{n-\mu}\right) R_{\rm d} \\ &\approx& 3.0 R_{\rm d} \, , \end{eqnarray} where we have substituted in $n=1.4$ for the exponent in the KS relation, and used the values for $\mu$ from Eq.~(\ref{eq:mu_fit}). For the Milky Way, a disk scale length of $R_{\rm d} = 2.5 \, \rm kpc$ gives a wind radius of $R_{\rm w} = 7.5 \, \rm kpc$, inside the solar radius but outside the galactic bulge. 
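These two results can be verified numerically; the sketch below (our own check, with $n$ and $\mu$ taken from the fiducial values in the text) confirms that Eq.~(\ref{eq:r_wind}) gives $R_{\rm w}\approx 3.0\,R_{\rm d}$ and that the launch profile of Eq.~(\ref{eq:frac_wind}) integrates to unity.

```python
import math

# Check the wind radius, Eq. (r_wind), and the normalisation of the
# launch profile, Eq. (frac_wind), using n = 1.4 (Kennicutt-Schmidt)
# and mu = 1.15 (fitted surface-density exponent).

n, mu = 1.4, 1.15

def dfw_dx(x):
    """Fraction of wind launched per unit x = r/R_d, Eq. (frac_wind)."""
    return (n - mu)**2 * x * math.exp(-(n - mu) * x)

# R_w / R_d = (2/mu) * ln( n / (n - mu) ) ~ 3.0
r_wind = (2.0 / mu) * math.log(n / (n - mu))
print(round(r_wind, 1))   # 3.0

# Trapezoidal integral of the launch profile over 0 < r/R_d < 100 -> ~1
xs = [i * 0.01 for i in range(10001)]
total = sum(0.01 * 0.5 * (dfw_dx(a) + dfw_dx(b)) for a, b in zip(xs, xs[1:]))
print(round(total, 3))    # 1.0
```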
We have neglected the fact that there will not be any star formation far out in the disk if the gas surface density drops too low, as well as the presence of a bulge, where there may be little gas and hence also little star formation. This will lead us to overestimate the wind in the tails of Fig.~\ref{fig:disc_wind}. To parameterise feedback in terms of the circular velocity, $V_{200}$, we apply Eq.~(\ref{eq:MMW_sigma}) and use our fiducial values of $\beta_0, \mu, \nu$ and $f_{\rm g}$, to find \begin{eqnarray} \hat{\beta} &=& \beta_0 \left( \frac{n}{n-\mu} \right)^2 \left(\frac{\Sigma_0}{1 \, M_\odot \, {\rm pc}^{-2}} \right)^{-\mu} f_{\rm g}^{\nu - \mu} \label{eq:beta_Sig0} \\ &\approx& 10 \left( \frac{\beta_0}{13} \right) \left({f_{\rm g}\over 0.2}\right)^{\nu - \mu} \left( \frac{j_{\rm d}}{m_{\rm d} v_{\rm d}} \right)^{2\mu} \times \nonumber \\ && \left[ \left( \frac{\lambda}{0.05} \right)^{-2} \left(\frac{V_{200}}{155\, \rm km \, s^{-1}}\right) \left(\frac{m_{\rm d}}{0.03}\right) \frac{H(z)}{H_0} \right]^{-\mu} \, , \label{eq:hat} \end{eqnarray} where we also assumed $H_0 = 71 \rm \; km \, s^{-1} \, Mpc^{-1}$ \citep{Freedman_2001}. To convert to disk properties we can eliminate the spin parameter with \begin{equation}\label{eq:Rd_angmom} R_{\rm d} = \frac{\lambda V_{200} }{\sqrt{200} H(z)} \left( \frac{j_{\rm d}}{m_{\rm d} v_{\rm d}} \right) \, . \end{equation} Setting $j_{\rm d}/m_{\rm d}$ and $v_{\rm d}$ to unity, however, yields a MW with a rather low circular velocity ($155$~km s$^{-1}$) and a considerably higher scale length. The formation of the baryonic disk can increase the rotation velocities from $V_{200}$ both directly and indirectly. The baryons make their own contribution to the gravitational potential, and can also induce changes in the profile of the dark matter, for example due to adiabatic contraction (e.g. \citealp{Mo_2010}). 
Even without baryons, there will be some adjustment to $v_{\rm d}$ due to the non-isothermal nature of halos \citep{NFW_1997}, i.e. a dependence on the concentration parameter. Here we will take $v_{\rm d} = 1.29$ to give a circular speed of $V_{\rm d} = v_{\rm d} V_{200} = 200 \; \rm km \, s^{-1}$, similar to the value for the MW (\citealp{Dehnen_1998, Flynn_2006}, but see also \citealp{Reid_2009}, which places the speed closer to $250 \; \rm km \, s^{-1}$). Having set the circular speed, the disk scale length is implied by the specific angular momentum fraction in Eq.~(\ref{eq:Rd_angmom}). For the $2.5$ kpc disk of \cite{Flynn_2006} we set $j_{\rm d}/m_{\rm d}=0.42$, i.e. the disk is preferentially formed of the low angular momentum baryons. A possible reason for the lower specific angular momentum is the delayed collapse of baryons in the disk due to photo-heating, since disks grow in an inside-out manner, with the low angular momentum material accreted first \citep{Navarro_1997}. Finally, we should mention that the spin parameter of the MW may differ from $0.05$, and indeed recent simulations that remove transient objects from halos suggest halos have a smaller $\lambda$ (e.g. \citealp{Bett_2007}); however, we have not accounted for this, as it is outside the scope of this model. With these new parameters, the MW disk has a more realistic, higher surface density, and Eq.~(\ref{eq:hat}) becomes \begin{eqnarray} \hat{\beta} &\approx& 0.31 \left( \frac{\beta_0}{13} \right) \left({f_{\rm g}\over 0.2}\right)^{\nu - \mu} \times \nonumber \\ && \left[ \left(\frac{V_{\rm d}}{200\, \rm km \, s^{-1}}\right)^3 \left( \frac{R_{\rm d}}{2.5 \; \rm kpc} \right)^{-2} \left(\frac{m_{\rm d}}{0.03}\right) \frac{H_0}{H(z)} \right]^{-\mu}\,. \label{eq:beta_vcirc} \end{eqnarray} The normalisation and scaling with $V_{\rm d}$ we find are somewhat below our expectations for supernova feedback. 
For a Milky-Way-like halo, the star formation would remove less than one solar mass of gas for every solar mass of stars formed ($\hat\beta\sim 0.31$). Nevertheless, halos with smaller circular velocities but with the same disk radius and disk mass fraction show increasingly effective feedback, $\hat\beta\propto V_{\rm d}^{-3.4}$, a similar scaling to energy-conserving winds (e.g. \citealp{Stringer_2011}). Note that the power-law dependence on $V_{\rm d}$ is somewhat stronger than the value of $-1$ found by \citet{Hopkins_2011}. Those authors also found an exponent of $-0.5$ for the dependence of mass loading on surface density, which is weaker than our exponent in Eq.~(\ref{eq:beta_vcirc}) of $\hat\beta\propto \Sigma^{-1.15}$. Whilst the agreement between these simulations is not particularly good, this is perhaps not surprising given that they are performed with somewhat different physics, at different resolutions and using different hydrodynamical schemes. Despite the appeal of the above framework in supplying us with predictions for the mass loading in terms of redshift and the disk properties, there is a caveat here in our adjustment of $j_{\rm d} / m_{\rm d}$ and $v_{\rm d}$ to match the observed MW. Although we can derive this from observations for the MW, and the mechanism for this appears to be understood, it would be erroneous to suggest we have a consistent model for this, and current numerical simulations such as those of \cite{Scannapieco_2011} have yet to converge on the properties of a disk for a single halo. Most concerning is that these quantities almost certainly have some implicit dependence on halo mass, and thus there should be a corresponding adjustment to the scaling relation in Eq.~(\ref{eq:beta_vcirc}). \subsection{Dependence from observed data}\label{sec:beta_TF} Given the approximate ingredients required to construct the formalism of the previous section, it is interesting to ask whether we can parameterise our fit to the mass-loading, Eq. 
(\ref{eq:beta_Sig0}), with purely observational estimates, i.e. to compute the disk surface density from observed disk properties, side-stepping the models of \citet{Mo_Mao_and_White_98}. One particularly attractive method is to invert Eq.~(\ref{eq:md}) to write the surface density in terms of the disk radius $R_{\rm d}$ and mass $M_{\rm d}$, where the latter can be estimated from the circular velocity of the disk with the Tully-Fisher relation \citep{Tully_Fisher_1977}. A recent calibration of the baryonic Tully-Fisher relation gives $M_{\rm d} = 8 \times 10^{10}\, M_\odot (V_{\rm max}/200~{\rm km}~{\rm s}^{-1})^4$ \citep{Trachternach_2009}, application of which gives \begin{eqnarray} \hat{\beta}_{\rm TF} &=& 0.31 \left( \frac{\beta_0}{13} \right) \left({f_{\rm g}\over 0.2}\right)^{\nu - \mu} \times \nonumber \\ && \left[ \left(\frac{V_{\rm d}}{200\, \rm km \, s^{-1}}\right)^4 \left(\frac{R_{\rm d}}{2.5 \; \rm kpc}\right)^{-2} \right]^{-\mu} \, , \label{eq:beta_TF} \end{eqnarray} which is very close to the relation in Eq.~(\ref{eq:beta_vcirc}), including the normalisation and the $R_{\rm d}$ scaling. The difference is in the exponent of $V_{\rm d}$, and the dependence of Eq.~(\ref{eq:beta_vcirc}) on $m_{\rm d}$, which implicitly depends upon $V_{\rm d}$ as well. In principle it is possible to calculate the mass fraction in the disk from the stellar mass to halo mass function using an abundance matching approach, which would relate $m_{\rm d}$ to $V_{200}$. A single power law, $m_{\rm d} \propto M \propto V_{200}^{3}$, is a good fit, although from Eq.~(\ref{eq:betascale}) we see there is a dependence on the faint-end slope of the stellar mass function (and at higher masses a broken power law may be more appropriate, e.g. \citealp{Yang_2003, Moster_2010, Guo_2010}). 
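The observationally parameterised relation, Eq.~(\ref{eq:beta_TF}), is simple enough to evaluate directly; the sketch below (our own illustration, with $\mu=1.15$ and $\nu=0.16$ assumed from the fiducial fits) recovers the MW normalisation and shows how quickly the mass loading grows for smaller disks.

```python
# Evaluate the Tully-Fisher-based net mass loading, Eq. (beta_TF),
# parameterised purely by the disk circular velocity and scale length.
# Exponents and normalisation are the fiducial values from the text.

MU, NU = 1.15, 0.16

def beta_hat_TF(V_kms, Rd_kpc, f_gas=0.2, beta0=13.0):
    """Net mass loading from Eq. (beta_TF)."""
    return (0.31 * (beta0 / 13.0) * (f_gas / 0.2)**(NU - MU)
            * ((V_kms / 200.0)**4 * (Rd_kpc / 2.5)**-2)**-MU)

# An MW-like disk recovers the quoted normalisation of 0.31, while a
# dwarf-like disk (V_d = 100 km/s, R_d = 1.5 kpc, values chosen purely
# for illustration) gives a mass loading several times larger:
print(round(beta_hat_TF(200.0, 2.5), 2))   # 0.31
print(round(beta_hat_TF(100.0, 1.5), 1))
```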
Substituting this relation for $m_{\rm d}$ in Eq.~(\ref{eq:beta_TF}) then yields $\hat\beta\propto V_{200}^{-6.9} R_{\rm d}^{2.3}$ versus $\hat{\beta}_{\rm TF} \propto V_{\rm d}^{-4.6} R_{\rm d}^{2.3}$ from Eq.~(\ref{eq:beta_TF}) (taking $\mu=1.15$ for both). Finally, we can try to eliminate the dependence on $R_{\rm d}$, assuming $R_{\rm d} \propto M_{\rm d}^{0.15}$, as inferred by \cite{Shen_2003}. This yields a scaling of $\hat\beta\propto V_{\rm d}^{-4.8}$ versus $\hat{\beta}_{\rm TF} \propto V_{\rm d}^{-2.5}$. The difference between these scalings is due to the discrepancies between the modelled and observed slope for the Tully-Fisher relation and the uncertainty in modelling the disk mass fraction. Although both our scalings are strongly dependent on $V_{\rm d}$, our $\beta$ values were all in the range $0.01$--$4$, so the change in feedback acts more like a switch. At low disk circular velocities, $V_{\rm d} \hbox{$\; \buildrel < \over \sim \;$} 140\; \rm km \, s^{-1}$, the feedback is high ($1<\beta < 4$), and at higher disk velocities the feedback shuts off, all over a relatively small range in $V_{\rm d}$. To summarise, we have developed two approaches to analyse the mass loading for a galaxy based upon our estimates for the mass loading in our ISM patches. In Section \ref{sec:beta_MMW} we take an analytic approximation to the properties of disks in their host halos, which allows us to trace the feedback with redshift. This does, however, require us to make assumptions about the scaling of the gravitational contribution of the baryonic disks and the preferential accretion of low angular momentum baryons, neither of which are fully understood. Section \ref{sec:beta_TF} bypasses these model concerns by parameterising the galaxies using the observed disk mass--velocity relation to directly apply the mass loadings. One price for this is the loss of the dependence on redshift and the cosmological evolution. 
Although these two approaches lead to different scalings, they do give a consistent normalisation for the feedback in the MW at redshift zero. In principle, one way to test this formalism is to apply it in phenomenological models such as {\sc GALFORM}, where such parameters as $j_{\rm d}, v_{\rm d}$ and $m_{\rm d}$ are followed. We discuss this comparison further in the next section. \subsection{Comparison to cosmological models} We are now in a position to compare the outflow rate we measured in our high resolution simulations with values assumed in semi-analytic models such as {\sc GALFORM}\ \citep{Cole_2000}. The feedback prescription for the original {\sc GALFORM}\ was \begin{equation} \beta = \left({V_{\rm d}\over V_{\rm hot}}\right)^{-\alpha_{\rm hot}} \, , \end{equation} with values in the reference model of $V_{\rm hot} = 200\; \rm km \, s^{-1}$ and $\alpha_{\rm hot}=2.0$. These models give a slope to the faint end of the galaxy luminosity function of $\alpha\approx -1.5$. More recent models such as that of \citet{Bower_2006} have used $\alpha_{\rm hot}=3.2$ to obtain a good match to the $b_{\rm J}$- and K-band galaxy luminosity functions. These can be compared with our exponents from the previous section, $\alpha_{\rm hot}=4.8$ and $\alpha_{\rm hot,TF}=2.5$, which bracket the value used by \cite{Bower_2006}. For the normalisation, the \cite{Cole_2000} parameters yield $\beta_{200}=1.0$ ($\beta$ for a disk of $V_{\rm d}=200$~km~s$^{-1}$), whilst the \cite{Bower_2006} parameters give $\beta_{200} \approx 17$ (although this drops to 12 using updated cosmological parameters, see \citealp{Bower_2011}), as compared with our value of $\hat\beta_{200}=0.31$. The net mass loading for MW-like galaxies obtained from our simulations is thus less than that assumed by \cite{Cole_2000} by about a factor of 3, and considerably less than that assumed by \cite{Bower_2006}. 
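For reference, the {\sc GALFORM}-style prescription above is easily evaluated; the sketch below (our own, using only the Cole et al. 2000 parameter values quoted in the text, since the Bower et al. 2006 $V_{\rm hot}$ is not given here) reproduces $\beta_{200}=1$ and illustrates the steep rise of the assumed mass loading towards low circular velocities.

```python
# Semi-analytic feedback prescription beta = (V_d / V_hot)^(-alpha_hot),
# with the Cole et al. (2000) reference parameters V_hot = 200 km/s and
# alpha_hot = 2.0 as quoted in the text.

def beta_galform(V_kms, V_hot=200.0, alpha_hot=2.0):
    """Assumed mass loading for a disk of circular velocity V_kms (km/s)."""
    return (V_kms / V_hot)**-alpha_hot

# beta_200, the mass loading for a 200 km/s disk, is unity by construction
# for these parameters (versus ~0.31 from our simulations), and a 100 km/s
# disk is assumed to eject four times its star formation rate:
print(beta_galform(200.0))   # 1.0
print(beta_galform(100.0))   # 4.0
```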
It is also interesting to consider whether the values of $\beta$ should rise in starburst galaxies, where the star formation rate may be significantly above the normalisation of the Kennicutt-Schmidt relation. Although our higher star formation rate simulations do show higher values of $\beta$ (see Appendix A), this is only by a factor of 2, with $\beta$ still falling at high gas surface densities. This suggests that the mechanism for galaxies to stay at high mass loadings is to remain in a state with relatively low surface densities (e.g. \citealp{Read_2006}). An alternative formulation of feedback in semi-analytic models, suggested by \citet{Bower_2011}, is to attempt to match only the observable portion of the stellar mass function rather than trying to match a slope that extends to arbitrarily faint galaxies. For example, a model with a constant wind speed (from the disk) ultimately produces a faint-end slope that is identical to that of the halo mass function. In an intermediate mass range, however, the effects of the gravitational potential cause material to be recycled back into the galaxy, producing a characteristic flat portion of the galaxy stellar mass function. By tuning the value of the wind speed, a nearly flat stellar mass function can be achieved over a restricted range. Although this mechanism cannot be extended to arbitrarily faint galaxies (which may be suppressed by other mechanisms, for example re-ionization), it does provide a good fit to the observations with a constant $\beta \approx 8$ over this portion of the mass function. In contrast to some of the predictions of semi-analytic models, hydrodynamic simulations find smaller estimates for the normalisation of the mass loading. \cite{Oppenheimer_2010} use $\beta = 2$ and $v_{\rm wind}=680 \; \rm km \, s^{-1}$ to recreate the $z=0$ mass function. 
These simulations are at low resolution, with the wind particles partially decoupled from the surrounding gas, making them more comparable to semi-analytic models. Fully hydrodynamical simulations where the wind is coupled to the surrounding ISM are much harder to interpret. Resolution of these issues is beyond the scope of this paper, but a better understanding of the differences between semi-analytic models and hydro simulations is clearly required. In terms of the observed MW, \cite{Wakker_2008} estimates the mass accretion rate from infalling high velocity clouds to be $0.4\; \rm M_\odot \, yr^{-1}$. If this is to be combined in a steady-state model of a MW with non-negligible star formation, then $\beta_{200} \lesssim 0.4$, so there is some tension between the observed star formation of the MW and the semi-analytic models that would reduce its baryon fraction, and our simulations lie nearer the low observed estimates. One option is that the semi-analytic models consistently over-estimate the $\beta_{200}$ required. In particular, there are significant degeneracies between $\beta_{200}$ and the exponent $\alpha_{\rm hot}$. Moreover, many models assume that the wind scaling has a fixed energy efficiency ($\eta_T$) and do not correctly account for the recapture of gas ejected from low mass galaxies (see \citealp{Bower_2011} for further discussion). It is entirely plausible that a careful search of parameter space may reveal strongly mass-dependent solutions much closer to those found here. On the hydrodynamical side, there are a number of physical processes that we neglected that may nevertheless be important. In terms of the gas phases we have included, the inhomogeneous metallicity will make an adjustment to the cooling, and larger scale effects such as a full 3-dimensional galactic potential, along with shear and features such as bars and spiral arms, will also play a role in shaping the ISM. 
However, it is not apparent why either of these effects would change the overall mass leaving the disk. In terms of the stellar populations, we could explore the star formation distribution in terms of the correlation with molecular clouds and also the clustering of stars, which may allow the explosions to strip more material, but this is unlikely because SNe are sufficiently delayed to diffuse out of their parent clouds. The large scale radiation field may provide an additional mechanism to accelerate the wind \citep{Murray_2005, Hopkins_2011}; however, in our simulations the thermal energy of the hot material in the disk already provides sufficient velocities to escape the disk. Potentially the largest discrepancy we have identified is the inconsistency of the distribution of SNe with the gas evolution, i.e. matching the scale height of star formation to the new scale height of the disk. It may even be possible to make the simulations completely self-consistent by matching the star formation rate to the turbulent structure of the ISM, in a manner such as that envisaged by \cite{Krumholz_2005}. Future simulations could also include the cold phase of the ISM by including radiative cooling below $10^4\; \rm K$. On its own this would tend to reduce $\beta$: since a cold phase removes material from the warm phase, it would not directly increase the mass loading. However, the physics of this brings in other processes such as self-gravity, magnetic fields, and cosmic rays (which may be dominant at these scales). Magnetic fields in particular seem a candidate for entraining more material into the wind, although simulations such as those of \cite{Hill_2012} do not find them to play a significant role. Overall, whilst we will include the above physical processes in future work, we suspect that these processes will not radically alter the mass loading or significantly change the scalings we have found. 
\section{Conclusions} \label{sect:conclusions} In this paper we have constructed numerically well-converged simulations of a simplified two-phase interstellar medium model, in which an initially isothermal and hydrostatic disk is disrupted and heated by individual supernovae. By not simulating the cold phase of the ISM we avoided the need to introduce significantly more physical ingredients, which require heavy algorithmic approximations and/or fragile recipes. By restricting our simulation volume to only a small section of a disk, we achieve sub-parsec resolution, and are able to investigate the dependence of the outflow on the parameters of the disk. We have included fixed gravity corresponding to our hydrostatic initial conditions, star formation that follows the Kennicutt-Schmidt relation, hydrodynamics and a cosmological cooling function. On scales outside the volume, the host disk galaxy for this toy model is reduced to the parameters of gas surface density, gas fraction and star formation efficiency normalised by the Kennicutt-Schmidt relation. Our simulations demonstrate the ability of supernovae to launch a galactic wind vertically from a disk, although we do not follow the subsequent evolution of the material in the halo. The supernovae create a turbulent ISM with very distinct hot and warm phases, due to the strong transition of the cooling function at $10^4\; \rm K$. These phases exist in order-of-magnitude pressure equilibrium, with the warm material squeezed into dense lumps, and the excess thermal energy of the hot material causing it to accelerate away from the disk. In Section \ref{sec:rarefaction} we compare this to a rarefaction process, with the hot ISM escaping to an IGM which is comparatively sparse and pressure-free. Such a model naturally leads to an outflow whose speed increases with height above the disk while its density decreases. The hot outflow entrains colder ISM gas from the disk, which may have relatively high metallicity. 
The hot gas rushes past this cloudy medium producing characteristic tails. Such interfaces may be the sites where lower ionisation lines are produced. In Section \ref{sec:MockAbsorb} we explore this further by calculating the normalised cross section of different temperature phases in our simulations, where we see that the velocity distribution of the cooler gas lies significantly below that of the escaping material. In a given snapshot the precise features of our simulations vary greatly due to turbulence and the stochastic nature of supernovae; we therefore examine several global properties which are less sensitive, such as the disk pressure, the cooling rate as a fraction of the mean energy injection rate, the disk scale height and the mass ejection. These reveal a disk that rapidly evolves to higher porosity before reaching a state with an approximately constant mass ejection rate. This evolution of porosity is broadly reminiscent of the model by \citet{Silk_2001}. We perform a range of simulations to investigate the dependence of the mass loading on gas surface density, gas fraction, and star formation efficiency, and fit the resulting trends with power laws. Our mass loadings lie in the range $0.01$--$4$, suggesting a switch between high- and low-feedback regimes at $V_{\rm d} \approx 140 \; \rm km \, s^{-1}$. We find little dependence on the normalisation of the star formation relation but a significant dependence on the gas fraction and surface density. The latter two can be combined to explain the bulk of the trends as depending on the total surface density of the disk. At high surface densities we find low mass loading and a high effective wind speed. At low surface densities the reverse is true, and there is an additional contribution due to an increase of the fraction of energy radiated by cooling gas. 
In Section \ref{sec:CharacTemp} we present a simple model in which SN blasts stall as they run into clouds swept up by previous explosions, clouds so dense that they cool very efficiently. This model predicts that the mass loading depends on gas surface density and gas fraction as $\beta=\dot\Sigma_{\rm wind}/\dot\Sigma_\star \propto \Sigma_{\rm g}^{-8/11}\,f_{\rm g}^{4/11}$. These scalings are very close to those we find from simulations with high star formation rate, $\beta\propto \Sigma_{\rm g}^{-0.82}\,f_{\rm g}^{0.48}$, and weaker (in terms of surface density) than that for the pure Kennicutt relation, $\beta\propto \Sigma_{\rm g}^{-1.15}\,f_{\rm g}^{0.16}$. Our prediction for the mass loading in the solar neighbourhood is that each supernova results in an ejection of around $50 \; \rm M_\odot$ of gas, or $\beta \sim 0.5$, slightly above $0.3$, our average for the MW as a whole. The wind velocity and thermalisation efficiency exhibit a more complex dependence on the disk properties than the mass loading does. The thermalisation efficiency appears to depend on both the surface density and the gas fraction, and correspondingly the wind velocity does not follow the straightforward power law implied by a constant-efficiency model. For high surface densities and low gas fractions, approximately $40\%$ of the injected energy is converted into the outflow's thermal, turbulent and kinetic energy components, although we will underestimate the cooling outside our simulation volume. We employ the scaling relation obtained from the simulations to calculate the net mass loading, $\hat\beta=\dot M_{\rm wind}/\dot M_\star$, of an exponential disk galaxy with constant gas fraction. Using the \cite{Mo_Mao_and_White_98} scaling relation between disk and halo, we obtain a scaling with circular velocity of $\hat\beta\propto V_{\rm d}^{-4.8}$, stronger than either energy- or momentum-driven winds. 
Using the observed Tully-Fisher relation we find a weaker dependence, $\hat\beta\propto V_{\rm d}^{-2.5}$. This compares well with recent semi-analytic models, which assume $\alpha_{\rm hot} \in [2.0, 3.2]$. The normalisation of our net mass loading at redshift $z=0$ for a Milky-Way-like galaxy is significantly lower than assumed in recent phenomenological models, although these models appear to have some degeneracy between the exponent and the normalisation, which we will exploit in future work. Notably, the mass loading only increases weakly with star formation rate but decreases strongly with surface density, so for starburst galaxies the feedback may be less efficient. Interestingly, our estimated normalisation is comparable with values of the outflow inferred for the MW from the observed accretion and star formation rates. If indeed there is a higher mass loading, it will require supernovae to heat a larger mass of material to a lower temperature, or the hot outflow to entrain a larger fraction of the warm ISM gas. The scaling we find sets the investigation of galaxy winds on a new footing, providing a physically motivated sub-grid description of winds that can be implemented in cosmological simulations and semi-analytic models. \section*{Acknowledgements} Peter Creasey would like to acknowledge the support of an STFC studentship. The authors would like to thank Martin Stringer, Tom Abel, Crystal Martin, Claudia Lagos and Andrew Pontzen for helpful discussions. We would also like to thank the anonymous referee for comments which substantially improved this paper. The calculations for this paper were performed on the ICC Cosmology Machine, which is part of the DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS, and Durham University. The {\sc FLASH}\ software used in this work was in part developed by the DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. 
This research was supported in part by the National Science Foundation under Grant NSF PHY11-25915. \bibliographystyle{mn2e}
\section{Introduction} Spoken dialogue systems that can help users solve complex tasks such as booking a movie ticket have become an emerging research topic in artificial intelligence and natural language processing areas. With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. The recent advance of deep learning has inspired many applications of neural dialogue systems~\cite{wen2017network,bordes2017learning,dhingra2017towards,li2017end}. A typical dialogue system pipeline can be divided into several parts: 1) a speech recognizer that transcribes a user's speech input into texts, 2) a natural language understanding module (NLU) that classifies the domain and associated intents and fills slots to form a semantic frame ~\cite{chi2017speaker,chen2017dynamic,zhang2018addressee, su2018time, su2019dynamically}, 3) a dialogue state tracker (DST) that predicts the current dialogue state in the multi-turn conversations, 4) a dialogue policy that determines the system action for the next step given the current state~\cite{peng2018deep, su2018discriminative}, and 5) a natural language generator (NLG) that outputs a response given the action semantic frame~\cite{wen2015semantically, su2018natural, su2018investigating}. \begin{comment} As the front-end component of a dialogue system is a natural language understanding (NLU) module---parsing user utterances into semantic frames that capture the core meaning~\cite{tur2011spoken}. A typical SLU first determines the domain given input utterances, predicts the intent, and then fill the associated slots~\cite{hakkani2016multi,chen2016knowledge,chen2016syntax,wang2016learning}. However, the above work focused on single-turn interactions, where each utterance is treated independently. 
To overcome the error propagation and further improve understanding performance, contextual information has been leveraged and shown useful~\cite{bhargava2013easy,xu2014contextual,chen2015leveraging,sun2016an}. Prior work incorporated dialogue contexts into the recurrent neural networks (RNN) for improving understanding results~\cite{xu2014contextual,shi2015contextual,weston2015memory,chen2016end}. Recently, modeling speaker role information~\cite{chi2017speaker,chen2017dynamic,zhang2018addressee, su2018time, su2019dynamically} has been demonstrated to learn the notable variance in speaking behavior during conversations for better understanding performance. Another key component of a dialogue system, the goal of NLG is to generate natural language sentences given the semantics provided by the dialogue manager to feedback to users. As the endpoint of interacting with users, the quality of generated sentences is crucial for better user experience. The common and mostly adopted method is the rule-based (or template-based) method~\cite{mirkovic2011dialogue}, which can ensure the natural language quality and fluency. In spite of robustness and adequacy of the rule-based methods, frequent repetition of identical, tedious output makes talking to a template-based machine unsatisfactory. Furthermore, scalability is an issue, because designing sophisticated rules for a specific domain is time-consuming. \end{comment} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/nlu-nlg.pdf} \vspace{-3mm} \caption{NLU and NLG emerge as a dual form.} \label{fig:framework} \vspace{-3mm} \end{figure} Many artificial intelligence tasks come with a \emph{dual} form; that is, we could directly swap the input and the target of a task to formulate another task. 
Machine translation is a classic example \cite{wu2016google}; for example, translating from English to Chinese has a dual task of translating from Chinese to English; automatic speech recognition (ASR) and text-to-speech (TTS) also have structural duality \cite{tjandra2017listening}. Previous work first exploited the duality of such task pairs and proposed supervised \cite{xia2017dual} and unsupervised (reinforcement learning) \cite{he2016dual} training schemes. These studies underscored the importance of duality by exploiting it to boost the performance of both tasks. The goal of NLU is to extract core semantic concepts from given utterances, while the goal of NLG is to construct corresponding sentences based on given semantics. In other words, understanding and generating sentences form a dual problem pair, as shown in Figure~\ref{fig:framework}. In this paper, we introduce a novel training framework for NLU and NLG based on \emph{dual supervised learning} \cite{xia2017dual}, which is the first attempt at exploiting the duality of NLU and NLG. The experiments show that the proposed approach improves the performance of both tasks. \section{Proposed Framework} This section first describes the problem formulation, and then introduces the core training algorithm along with the proposed methods of estimating data distribution. \begin{comment} \subsection{Problem Formulation} Suppose we have two spaces: the semantics space $\mathcal{X}$ and the space $\mathcal{Y}$ of natural language. The goal of language generation is, given semantics, to generate corresponding utterance. In other words, the task is to learn a mapping function $f: \mathcal{X} \to \mathcal{Y}$ to transform semantic representation into natural language. On the other hand, language understanding is to capture the core meaning of utterances, the task is to find a function $g: \mathcal{Y} \to \mathcal{X}$ to predict semantic representation from given natural language.
Given $n$ data pairs $\{(x_i, y_i)\}^n_{i=1}$ $i.i.d.$ sampled from the joint space $\mathcal{X} \times \mathcal{Y}$. A typical strategy of these optimization problem is based on maximum likelihood estimation (MLE) of the parameterized conditional distribution by the learnable parameters $\theta_{x \to y}$ and $\theta_{y \to x}$: \begin{align*} f(x;\theta_{x \to y}) = \argmax_{y' \in \mathcal{Y}} P(y' \mid x ; \theta_{x \to y} ), \\ g(y;\theta_{y \to x}) = \argmax_{x' \in \mathcal{X}} P(x' \mid y ; \theta_{y \to x} ). \\ \end{align*} \end{comment} Assuming that we have two spaces, the semantics space $\mathcal{X}$ and the natural language space $\mathcal{Y}$, and $n$ data pairs $\{(x_i, y_i)\}^n_{i=1}$, the goal of NLG is to generate corresponding utterances based on given semantics. In other words, the task is to learn a mapping function $f(x;\theta_{x \to y})$ that transforms semantic representations into natural language. On the other hand, the goal of NLU is to capture the core meaning of utterances, i.e., to find a function $g(y;\theta_{y \to x})$ that predicts semantic representations given natural language. A typical strategy for these optimization problems is maximum likelihood estimation (MLE) of the conditional distributions parameterized by the learnable parameters $\theta_{x \to y}$ and $\theta_{y \to x}$. \subsection{Dual Supervised Learning} Considering the duality between the two tasks of a dual problem, it is intuitive to formalize their bidirectional relationship from a probabilistic perspective. If the models of the two tasks are optimal, we have \textit{probabilistic duality}: \begin{align*} P(x)P(y \mid x ; \theta_{x \to y}) &= P(y)P(x \mid y ; \theta_{y \to x} ) \\ &= P(x,y) \ \forall x,y, \end{align*} where $P(x)$ and $P(y)$ are marginal distributions of the data. This condition reflects the parallel, bidirectional relationship between the two tasks in the dual problem.
Although standard supervised learning with respect to a given loss function is a straightforward approach to MLE, it does not consider the relationship between the two tasks. \citet{xia2017dual} exploited the duality of dual problems to introduce a new learning scheme, which explicitly imposed the empirical probability duality on the objective function. The training strategy is based on standard supervised learning and incorporates the probability duality constraint, the so-called \textit{dual supervised learning}. Therefore, the training objective is extended to a multi-objective optimization problem: \begin{align*} \begin{cases} \min_{\theta_{x \to y}} (\mathbb{E}[l_{1}(f(x;\theta_{x \to y}),y)]), \\ \min_{\theta_{y \to x}} (\mathbb{E}[l_{2}(g(y;\theta_{y \to x}),x)]), \\ \text{s.t.} \ P(x)P(y \mid x ; \theta_{x \to y}) = P(y)P(x \mid y ; \theta_{y \to x}), \end{cases} \end{align*} where $l_{1,2}$ are the given loss functions. Such a constrained optimization problem can be solved by introducing Lagrange multipliers to incorporate the constraint: \begin{align*} \begin{cases} \min_{\theta_{x \to y}} (\mathbb{E}[l_{1}(f(x;\theta_{x \to y}),y)] + \lambda_{x \to y} l_{duality}), \\ \min_{\theta_{y \to x}} (\mathbb{E}[l_{2}(g(y;\theta_{y \to x}),x)] + \lambda_{y \to x} l_{duality}), \\ \end{cases} \end{align*} where $\lambda_{x \to y}$ and $\lambda_{y \to x}$ are the Lagrange parameters and the constraint is formulated as follows: \begin{align*} l_{duality} &= (\mathrm{log}\hat{P}(x) + \mathrm{log}P(y \mid x ; \theta_{x \to y}) \\ &- \mathrm{log}\hat{P}(y) - \mathrm{log}P(x \mid y ; \theta_{y \to x} ))^2. \end{align*} Now the entire objective can be viewed as standard supervised learning with an additional regularization term accounting for the duality between the tasks. Therefore, the learning scheme is to learn the models by minimizing the weighted combination of the original loss term and the regularization term.
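To make the regularised objectives concrete, the following schematic, framework-free sketch (ours) treats the log-probabilities as plain numbers; in practice they would come from the two models, and the marginal estimates $\log \hat{P}(x)$ and $\log \hat{P}(y)$ from the distribution estimators described below.

```python
def duality_loss(log_px, log_py_given_x, log_py, log_px_given_y):
    """Squared violation of P(x)P(y|x) = P(y)P(x|y), in log space."""
    return (log_px + log_py_given_x - log_py - log_px_given_y) ** 2

def dsl_objectives(l1, l2, log_px, log_py_given_x, log_py, log_px_given_y,
                   lam_xy=0.1, lam_yx=0.1):
    """Per-example dual supervised learning losses for NLG (x->y) and NLU (y->x).

    l1, l2 are the task losses (e.g. cross entropy); lam_* are the
    Lagrange parameters weighting the duality regulariser.
    """
    ld = duality_loss(log_px, log_py_given_x, log_py, log_px_given_y)
    return l1 + lam_xy * ld, l2 + lam_yx * ld

# When the two factorisations agree, the regulariser vanishes and the
# objectives reduce to the standard supervised losses.
print(dsl_objectives(0.5, 0.7, -3.0, -1.0, -2.5, -1.5))  # -> (0.5, 0.7)
```

The single scalar `ld` is shared by both minimisations, which is what couples the otherwise independent NLU and NLG updates.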
Note that the true marginal distributions of the data, $P(x)$ and $P(y)$, are often intractable, so here we replace them with the approximated empirical marginal distributions $\hat{P}(x)$ and $\hat{P}(y)$. \subsection{Distribution Estimation as Autoregression} With the above formulation, the remaining problem is how to estimate the empirical marginal distributions $\hat{P}(\cdot)$. To accurately estimate the data distribution, the data properties should be considered, because different data types have different structural natures. For example, natural language has a sequential structure and temporal dependencies, while other types of data may not. Therefore, we design a specific method of estimating the distribution for each data type based on expert knowledge. From the probabilistic perspective, we can decompose any data distribution $p(x)$ into the product of its nested conditional probabilities, \begin{align} p(x) = \prod_{d=1}^{D} p(x_{d} \mid x_{1}, ... , x_{d-1}), \label{eq:auto} \end{align} where $x$ can be any data type and $d$ is the index of a variable unit. \subsubsection{Language Modeling} Natural language has an intrinsic sequential nature; therefore, it is intuitive to leverage the autoregressive property to learn a language model. In this work, we learn the language model based on recurrent neural networks \cite{mikolov2010recurrent, sundermeyer2012lstm} with the cross entropy objective in an unsupervised manner: \begin{align} p(y) = \prod_{i=1}^{L} p(y_{i} \mid y_{1}, ... , y_{i-1}; \theta_{y}), \label{eq:lm} \end{align} where $y_{(\cdot)}$ are the words in the sentence $y$, and $L$ is the sentence length. \subsubsection{Masked Autoencoder} The semantic representation $x$ in our work consists of discrete semantic frames containing specific slots and corresponding values.
Each semantic frame contains the core concept of a certain sentence; for example, the slot-value pairs ``\texttt{name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near[Clare Hall]}'' correspond to the target sentence ``\emph{Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.}''. Even though the product rule in (\ref{eq:auto}) enables us to decompose any probability distribution into a product of a sequence of conditional probabilities, how we decompose the distribution reflects a specific physical meaning. For example, language modeling outputs the probability distribution over the vocabulary space for the $i$-th word $y_i$ by taking only the preceding word sequence $y_{<i}$. Natural language has an intrinsic sequential structure and temporal dependencies, so modeling the joint distribution of the words in a sequence via such an autoregressive property is reasonable. However, the slot-value pairs in a semantic frame have no single directional relationship among them; rather, they describe the same sentence in parallel, so treating a semantic frame as a sequence of slot-value pairs is not suitable. Furthermore, slot-value pairs are not independent, because the pairs in a semantic frame correspond to the same individual utterance; for example, French food would probably cost more. Therefore, this correlation should be taken into account when estimating the joint distribution. \begin{figure}[t!] \centering \includegraphics[width=.85\linewidth]{figures/MADE.pdf} \vspace{-1mm} \caption{The illustration of the masked autoencoder for distribution estimation (MADE).} \label{fig:ste} \vspace{-2mm} \end{figure} Considering the above issues, to model the joint distribution of flat semantic frames, the various dependencies between slot-value semantics should be leveraged.
In this work, we propose to utilize a masked autoencoder for distribution estimation (MADE) \cite{germain2015made}. By zeroing certain connections, we can force the variable unit $x_{d}$ to depend only on a specific set of variables, not necessarily on $x_{<d}$; we can still obtain the marginal distribution by the product rule: \begin{equation} p(x) = \prod_{d=1}^{D} p(x_{d} \mid S_{d} ), \end{equation} where $S_{d}$ is a specific set of variable units. \begin{comment} In practical, we elementwise-multiply every weight matrix by a binary mask matrix $M$ to interrupt some connections. The idea is simple, to impose the autoregressive property we first assign each hidden unit $k$ a integer number $m(k)$ ranging from 1 to the dimension of data $D-1$ inclusively. For the weight matrices of hidden layer $W^{l}$, we build binary mask matrices as follows: \begin{align*} M^{W^l}_{} = \begin{cases} 1 & \text{if } m^{l}(k') \geq m^{l-1}(k), \\ 0 & \text{otherwise }, \end{cases} \end{align*} where $l$ indicates the index of the hidden layer. For the input and output layer, we assign each unit a number ranging from 1 to $D$ exclusively, then we enforce build the mask for the output matrix $V$ as \begin{align*} M^{V}_{} = \begin{cases} 1 & \text{if } m^{L}(d) > m^{L-1}(k), \\ 0 & \text{otherwise }, \end{cases} \end{align*} where $L$ indicates the output layer. With the constructed mask matrices, the masked autoencoder is shown to be able to estimate the joint distribution as autoregression. Because there is no explicit rule specifying the exact dependencies between the slot-value pairs in our data, we consider various dependencies by ensemble of multiple decomposition, that is, to sample different sets $S_{d}$. \end{comment} In practice, we elementwise-multiply each weight matrix by a binary mask matrix $M$ to interrupt some connections, as illustrated in Figure~\ref{fig:ste}.
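As a concrete illustration of this masking idea, here is a minimal, dependency-free sketch of MADE-style mask construction for a single hidden layer (the sizes and the random degree assignment are our own illustrative choices, not the configuration used in this paper).

```python
import random

def made_masks(D, hidden, seed=0):
    """Binary masks for a one-hidden-layer MADE over D variable units.

    Hidden degrees are drawn from {1, ..., D-1}; input/output units carry
    degrees 1..D. Returns (input->hidden mask, hidden->output mask),
    each as a list of rows [to][from] of 0/1.
    """
    rng = random.Random(seed)
    m_in = list(range(1, D + 1))                      # degrees of inputs/outputs
    m_hid = [rng.randint(1, D - 1) for _ in range(hidden)]
    # hidden unit k' may see input k iff m_hid[k'] >= m_in[k]
    M1 = [[1 if m_hid[kp] >= m_in[k] else 0 for k in range(D)]
          for kp in range(hidden)]
    # output unit d may see hidden k iff m_in[d] > m_hid[k]
    M2 = [[1 if m_in[d] > m_hid[k] else 0 for k in range(hidden)]
          for d in range(D)]
    return M1, M2

def connectivity(M1, M2):
    """0/1 matrix saying which inputs can influence each output."""
    D, H = len(M2), len(M1)
    return [[1 if any(M2[d][k] and M1[k][i] for k in range(H)) else 0
             for i in range(D)] for d in range(D)]

M1, M2 = made_masks(D=4, hidden=6)
C = connectivity(M1, M2)
# Autoregressive property: output d never depends on inputs d, d+1, ...
assert all(C[d][i] == 0 for d in range(4) for i in range(4) if i >= d)
```

Sampling several seeds (and input orderings) yields the ensemble of decompositions used later to average the estimated distribution.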
To impose the autoregressive property, we first assign each hidden unit $k$ an integer degree $m(k)$ between 1 and $D-1$ (inclusive), where $D$ is the data dimension; for the input and output layers, we assign each unit a distinct number between 1 and $D$. Then the binary mask matrices can be built as follows: \begin{align*} M^{}_{} = \begin{cases} 1 & \text{if } m^{l}(k') \geq m^{l-1}(k), \\ 1 & \text{if } m^{L}(d) > m^{L-1}(k), \\ 0 & \text{otherwise.} \end{cases} \end{align*} Here $l$ indicates the index of a hidden layer, and $L$ that of the output layer. With the constructed mask matrices, the masked autoencoder is shown to be able to estimate the joint distribution autoregressively. Because there is no explicit rule specifying the exact dependencies between the slot-value pairs in our data, we consider various dependencies via an ensemble of multiple decompositions, that is, by sampling different sets $S_{d}$. \begin{table*} \centering \begin{tabular}{ | c| l | c | c c c c| } \hline \multicolumn{2}{|c|}{\multirow{2}{*}{\bf Learning Scheme}} & \bf NLU & \multicolumn{4}{c|}{\bf NLG}\\ \multicolumn{2}{|c|}{} & \bf \small F1 & \bf \small BLEU & \bf \small ROUGE-1 & \bf \small ROUGE-2 & \bf \small ROUGE-L \\ \hline \hline (a) & Baseline: Iterative training & 71.14 & 55.05 & 55.37 & 27.95 & 39.90 \\ (b) & Dual supervised learning, $\lambda = 0.1$ & \bf 72.32 & \bf 57.16 & \bf 56.37 & \bf 29.19 & \bf 40.44 \\ (c) & Dual supervised learning, $\lambda = 0.01$ & 72.08 & 55.07 & 55.56 & 28.42 & 40.04 \\ (d) & Dual supervised learning, $\lambda = 0.001$ & 71.71 & 56.17 & 55.90 & 28.44 & 40.08 \\ (e) & Dual supervised learning w/o MADE & 70.97 & 55.96 & 55.99 & 28.74 & 39.98 \\ \hline \end{tabular} \vspace{-1mm} \caption{NLU performance (micro-F1) and NLG performance (BLEU, ROUGE-1, ROUGE-2, ROUGE-L) of the models (\%).} \vspace{-3mm} \label{tab:results} \end{table*} \begin{comment} \subsection{Optimization} With the estimation of semantics
$\hat{P}(x)$ and natural language $\hat{P}(y)$, the multi-objective problem is optimized in a supervised fashion. \end{comment} \section{Experiments} To evaluate the effectiveness of the proposed framework, we conduct experiments whose settings and results are described as follows. \subsection{Settings} The experiments are conducted on the benchmark E2E NLG challenge dataset~\cite{novikova2017e2e}, a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. Each instance is a pair consisting of a semantic frame, which contains specific slots and corresponding values, and an associated natural language utterance expressing the given semantics. The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase. Although the original dataset is designed for NLG, whose goal is to generate sentences based on the given slot-value pairs, we further formulate an NLU task as predicting slot-value pairs based on the utterances, which is a multi-label classification problem. Each possible slot-value pair is treated as an individual label, and the total number of labels is 79. To evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics for NLG include BLEU and ROUGE (1, 2, L) scores with multiple references, while the F1 score is measured for the NLU results. \subsection{Model Details} Both the NLG and NLU models are a gated recurrent unit (GRU)~\cite{cho2014learning} with two identical fully-connected layers at the two ends of the GRU. Thus the model is symmetric: it takes the semantic frame representation as the initial and final hidden states and the sentence as the sequential input.
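Since NLU is cast here as multi-label classification over slot-value pairs, the mapping from a frame string to a multi-hot target vector is mechanical; the sketch below uses our own minimal parser for the \texttt{slot[value]} format and a toy label set in place of the full 79 labels.

```python
import re

def parse_frame(frame):
    """Split an E2E-style frame 'name[X], food[Y]' into slot-value pairs."""
    return re.findall(r"(\w+)\[([^\]]*)\]", frame)

def to_multi_hot(frame, label_index):
    """Multi-hot target vector over all slot-value labels (79 in the E2E setup)."""
    vec = [0] * len(label_index)
    for slot, value in parse_frame(frame):
        vec[label_index[(slot, value)]] = 1
    return vec

labels = {("name", "Bibimbap House"): 0, ("food", "English"): 1,
          ("priceRange", "moderate"): 2}   # tiny illustrative label set
y = to_multi_hot("name[Bibimbap House], food[English]", labels)
print(y)  # [1, 1, 0]
```

The NLU model then predicts one sigmoid score per label, and the F1 score is computed over the predicted versus gold label sets.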
In all experiments, we use mini-batch \textit{Adam} as the optimizer with a batch size of 64; 10 training epochs are performed without early stopping; the hidden size of the network layers is 200; and the word embeddings are of size 50 and trained in an end-to-end fashion. \subsection{Results and Analysis} The experimental results are shown in Table \ref{tab:results}, where each reported number is averaged over three runs. Row (a) is the baseline that trains NLU and NLG separately and independently, and rows (b)-(d) are the results of the proposed approach with different Lagrange parameters. The proposed approach incorporates the probability duality into the objective as a regularization term. To examine its effectiveness, we control the intensity of the regularization by adjusting the Lagrange parameters. The results (rows (b)-(d)) show that the proposed method outperforms the baseline on all automatic evaluation metrics. Furthermore, the performance improves more with stronger regularization (row (b)), demonstrating the importance of leveraging duality. In this paper, we design methods for estimating the marginal data distributions in the NLG and NLU tasks: language modeling is utilized for sequential data (natural language utterances), while the masked autoencoder is used for flat representations (semantic frames). The proposed method for estimating the distribution of semantic frames considers complex and implicit dependencies between semantics via an ensemble of multiple decompositions of the joint distribution. In our experiments, the empirical marginal distribution is the average over the results from 10 different masks and orders; in other words, 10 types of dependencies are modeled. Row (e) can be viewed as an ablation test, where the marginal distribution of semantic frames is estimated by treating slot-value pairs as independent of each other and computing statistics from the training set.
The performance is worse than that of the models accounting for the dependencies, demonstrating the importance of considering the nature of the input data and of modeling the data distribution via the masked autoencoder. We further analyze the understanding and generation results compared with the baseline model. In some cases, we find that our NLU model extracts the semantics of utterances better and our NLG model generates sentences with richer information under the proposed learning scheme. In sum, the proposed approach is capable of improving the performance of both NLU and NLG on the benchmark data, where the exploitation of duality and the way of estimating the distributions are demonstrated to be important. \section{Conclusion} This paper proposes a novel training framework for natural language understanding and generation based on dual supervised learning, which first exploits the duality between NLU and NLG and introduces it into the learning objective as a regularization term. Moreover, expert knowledge is incorporated to design suitable approaches for estimating the data distributions. The proposed methods demonstrate their effectiveness by boosting the performance of both tasks simultaneously in the benchmark experiments. \section*{Acknowledgements} We thank the anonymous reviewers for their insightful feedback on this work. This work was financially supported by the Young Scholar Fellowship Program of the Ministry of Science and Technology (MOST) in Taiwan, under Grants 108-2636-E-002-003 and 108-2634-F-002-019.
\section{Introduction} \subsection{Introduction and statement of the main results} We consider the nonlinear Korteweg-de Vries (KdV) equation in a bounded interval $(0, L)$ equipped with the Dirichlet boundary condition and the Neumann boundary condition on the right: \begin{equation}\label{KdV-NL}\left\{ \begin{array}{cl} u_t (t, x) + u_x (t, x) + u_{xxx} (t, x) + u (t,x) u_x (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] u(t, x=0) = u(t, x=L) = u_x(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] u(t = 0, \cdot) = u_0 & \mbox{ in } (0, L), \end{array}\right. \end{equation} where $u_0 \in L^2(0, L)$ is the initial data. The KdV equation has been introduced by Boussinesq \cite{1877-Boussinesq} and Korteweg and de Vries \cite{KdV} as a model for propagation of surface water waves along a channel. This equation also furnishes a very useful nonlinear approximation model including a balance between a weak nonlinearity and weak dispersive effects and has been studied extensively, see e.g.~\cite{Whitham74, Miura76}. Regarding \eqref{KdV-NL}, Rosier \cite{Rosier97} introduced a set of critical lengths ${\mathcal N}$ defined by \begin{equation}\label{def-cN} {\mathcal N} : = \left\{ 2 \pi \sqrt{\frac{k^2 + kl + l^2}{3}}; \, k, l \in \mathbb{N}_*\right\}. \end{equation} This set plays an important role in both the decay property of the solution $u$ of \eqref{KdV-NL} and the controllability property of the system associated with \eqref{KdV-NL} where $u_x(t, L)$ is a control instead of $0$. Let us briefly review the known results on the controllability of \eqref{KdV-NL} where $u_x(t, L)$ is a control: \begin{equation}\label{KdV-NLC}\left\{ \begin{array}{cl} u_t (t, x) + u_x (t, x) + u_{xxx} (t, x) + u (t, x) u_x (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] u(t, x=0) = u(t, x=L) = 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] u_x(\cdot , x= L) : \mbox{ is a control}, \\[6pt] u(t = 0, \cdot) = u_0 & \mbox{ in } (0, L). 
\end{array}\right. \end{equation} For initial and final data in $L^2(0, L)$ and controls in $L^2(0, T)$, Rosier~\cite{Rosier97} proved that system \eqref{KdV-NLC} is small-time locally controllable around 0 provided that the length $L$ is not critical, i.e., $L \not \in {\mathcal N}$. To this end, he studied the controllability of the linearized system using the Hilbert Uniqueness Method and compactness-uniqueness arguments. He also established that when the length $L$ is critical, i.e., $L \in {\mathcal N}$, the linearized system is not controllable. More precisely, he showed that there exists a non-trivial finite-dimensional subspace ${\mathcal M}$ ($= {\mathcal M}_L$) of $L^2(0, L)$ such that its orthogonal space in $L^2(0, L)$ is reachable from $0$ whereas ${\mathcal M}$ is not. To tackle the control problem for the critical length $L \in {\mathcal N}$, Coron and Cr\'epeau introduced the power series expansion method \cite{CC04}. The idea is to take into account the effect of the nonlinear term $u u_x$, which is absent in the linearized system. Using this method, they showed \cite{CC04} (see also \cite[section 8.2]{Coron07}) that system \eqref{KdV-NLC} is small-time locally controllable if $L = m 2 \pi$ for $m \in \mathbb{N}_*$ satisfying \begin{equation} \nexists (k, l) \in \mathbb{N}_* \times \mathbb{N}_* \mbox{ with } k^2 + kl + l^2 = 3 m^2 \mbox{ and } k \neq l, \end{equation} with initial and final data in $L^2(0, L)$ and controls in $L^2(0, T)$. In this case, $\dim {\mathcal M} = 1$ and ${\mathcal M}$ is spanned by $1 - \cos x$. Cerpa \cite{Cerpa07} developed the analysis in \cite{CC04} to prove that \eqref{KdV-NLC} is locally controllable in \emph{finite time} in the case $\dim {\mathcal M} = 2$. This corresponds to the case where \[ L = 2 \pi \sqrt{\frac{k^2 + kl + l^2}{3}} \] for some $k, \, l \in \mathbb{N}_*$ with $k>l$, and there is no $(m, n) \in \mathbb{N}_* \times \mathbb{N}_*$ with $m>n$ and $m^2 + mn + n^2 = k^2 + kl + l^2$.
Later, Cr\'epeau and Cerpa \cite{CC09} succeeded in extending the ideas in \cite{Cerpa07} to obtain the local controllability in {\it finite time} for all other critical lengths. Recently, Coron, Koenig, and Nguyen \cite{CKN-20} proved that when $(2k + l) / 3 \not \in \mathbb{N}_*$, one cannot achieve small-time local controllability for initial data in $H^3(0, L)$ and controls in $H^1$ (in time). In \cite{CKN-20}, the local controllability in finite time of \eqref{KdV-NLC} was also established for a subclass of these pairs $(k, l)$ with initial data in $H^3(0, L)$ and controls in $H^1(0, T)$. This is surprising when compared with known results on internal controls for system \eqref{KdV-NL}. It is known, see \cite{CPR15, PVZ02, Pazoto05}, that system \eqref{KdV-NL} is locally controllable using internal controls {\it whenever} the control region contains an {\it arbitrary} open subset of $(0, L)$. We next discuss the decay property of \eqref{KdV-NL}. Multiplying the equation for a (real) solution $u$ by $u$ and integrating by parts, one obtains \begin{equation}\label{key-identity} \int_{0}^L |u(t, x)|^2 \, dx + \int_0^t |u_x(s, 0)|^2 \, ds = \int_{0}^L |u(0, x)|^2 \, dx \mbox{ for all } t > 0. \end{equation} As a consequence of \eqref{key-identity}, one has \begin{equation}\label{key-identity-0} \int_{0}^L |u(t, x)|^2 \, dx \le \int_{0}^L |u(0, x)|^2 \, dx \mbox{ for all } t > 0. \end{equation} In the case $L \not \in {\mathcal N}$, Menzala, Vasconcellos, and Zuazua \cite{PVZ02} proved that the solutions of \eqref{KdV-NL} with small initial data in $L^2(0, L)$ decay exponentially to 0. Their analysis is based on the exponential decay of the linearized system, for which it holds, see \cite[Proposition 3.3]{Rosier97}, that \begin{equation}\label{key-identity-1} \int_0^t |u_x(s, 0)|^2 \, ds \ge c_t \int_{0}^L |u(0, x)|^2 \, dx \mbox{ for all } t > 0.
\end{equation} When a local damping was added, they also obtained global exponential stability using the multiplier technique, compactness arguments, and a unique continuation property for the KdV equations. Related results on modified nonlinear KdV equations can be found in \cite{RZ06,LP07}. It is known from the work of Rosier \cite{Rosier97} that for $u_0 \in {\mathcal M}$, the solution $u$ of the linearized system satisfies \begin{equation}\label{key-identity-2} \int_0^t |u_x(s, 0)|^2 \, ds = 0 \mbox{ for all } t > 0, \end{equation} which implies in particular that \eqref{key-identity-1} does not hold for any $t > 0$. The work of Menzala, Vasconcellos, and Zuazua naturally raises the question of whether or not the solutions of \eqref{KdV-NL} go to 0 as time goes to infinity (see \cite[Section 4]{PVZ02} and also \cite[Section 5]{Pazoto05}). Quite recently, progress has been made on this problem. Concerning the decay property of \eqref{KdV-NL} for critical lengths, when $\dim {\mathcal M} = 1$, Chu, Coron, and Shang \cite{CCS15} showed that the solution $u(t, \cdot)$ goes to 0 as $t \to+ \infty$ for all small initial data in $L^2(0, L)$. Moreover, they showed that there exists a constant $C$ depending only on $L$ such that \begin{equation}\label{CCS15} \| u(t, \cdot) \|_{L^2(0, L)} \le \frac{C}{\sqrt{t}} \mbox{ for } t > 0. \end{equation} It is worth mentioning that the set of $L \in {\mathcal N}$ such that $\dim {\mathcal M} = 1$ is infinite \cite{CC04}. When $k = 2$ and $l = 2$ (the smallest length for which $\dim {\mathcal M} = 2$), Tang, Chu, Shang, and Coron \cite{TCSC18} also established the decay to 0 of the solutions by proving an estimate equivalent to \eqref{CCS15} (see \cite[(1.20) in Theorem 1.1]{TCSC18}). The analysis in \cite{CCS15,TCSC18} is based on the center manifold theory in infinite dimensions, see e.g. \cite{HI11}, in particular the work \cite{VMW04}.
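As an aside, whether a given $L$ belongs to the critical set ${\mathcal N}$ of \eqref{def-cN}, and how many pairs $(k, l)$ realise it, can be checked by a finite search; the sketch below (ours, independent of any program from the literature) merely illustrates the definition.

```python
import math
from collections import defaultdict

def critical_length(k, l):
    """Rosier's critical length 2*pi*sqrt((k^2 + k*l + l^2)/3)."""
    return 2 * math.pi * math.sqrt((k * k + k * l + l * l) / 3)

def enumerate_critical(kmax):
    """Group the pairs (k, l), k >= l >= 1, by their common critical length.

    Equal lengths are grouped by rounding -- a shortcut that works here
    because equal lengths come from equal integers k^2 + k*l + l^2.
    """
    groups = defaultdict(list)
    for k in range(1, kmax + 1):
        for l in range(1, k + 1):
            groups[round(critical_length(k, l), 9)].append((k, l))
    return dict(groups)

groups = enumerate_critical(10)
# k = l = 1 gives L = 2*pi, the simplest critical length.
assert abs(critical_length(1, 1) - 2 * math.pi) < 1e-12
# (6, 5) and (9, 1) share k^2 + k*l + l^2 = 91, so this length is
# realised by two distinct pairs.
assert groups[round(critical_length(6, 5), 9)] == [(6, 5), (9, 1)]
```

Such multiply-realised lengths are exactly the ones for which several pairs $(k_m, l_m)$ enter the assumptions discussed below.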
To this end, the authors showed the existence and smoothness of a center manifold associated with \eqref{KdV-NL}, which are of independent interest. \medskip In this paper, we show that all solutions of \eqref{KdV-NL} decay to 0 at least with a rate $1/t^{1/2}$ provided that their initial data in $L^2(0, L)$ are small enough, when $\dim {\mathcal M} = 1$ or when condition \eqref{main-assumption} below holds (this requires in particular that $\dim {\mathcal M}$ is even). Given a critical length $L$, condition~\eqref{main-assumption} can be checked numerically; a Scilab program is given in the appendix (see \Cref{cor1} for the range of validation). Our approach is inspired by the spirit of the power series expansion method due to Coron and Cr\'epeau \cite{CC04} and involves the theory of quasi-periodic functions. Before stating our results, let us introduce some notations associated with the structure of ${\mathcal M}$, see e.g. \cite{Rosier97,CC04,Cerpa14}. Recall that, for each $L \in {\mathcal N}$, there exist exactly $n_L \in \mathbb{N}_*$ pairs $(k_m, l_m) \in \mathbb{N}_* \times \mathbb{N}_*$ ($1 \le m \le n_L$) such that $k_m \ge l_m$ and \begin{equation}\label{def-L} L = 2 \pi \sqrt{\frac{k_m^2 + k_m l_m + l_m^2}{3}}. \end{equation} For $1 \le m \le n_L$, set \begin{equation}\label{def-pm} p_m = p(k_m, l_m) = \frac{(2k_m + l_m)(k_m - l_m)(2 l_m + k_m)}{3 \sqrt{3}(k_m^2 + k_m l_m + l_m^2)^{3/2}}, \end{equation} and denote \begin{equation} {\mathcal P}_L = \Big\{p_m \mbox{ given by } \eqref{def-pm}; 1 \le m \le n_L \Big\}.
\end{equation} For $L \in {\mathcal N}$ and $1 \le m \le n_L$ with $p_m \neq 0$, let $\sigma_{j, m}$ ($1 \le j \le 3$) be the solutions of $$ \sigma^3 - 3 (k_m^2 + k_m l_m + l_m^2) \sigma + 2(2 k_m + l_m)(2 l_m + k_m) (k_m - l_m) = 0, $$ and set, with the convention $\sigma_{j+3, m} = \sigma_{j, m}$ for $j \ge 1$, \begin{equation}\label{def-sm} s_m = s(k_m, l_m) : = \sum_{j=1}^3 \sigma_{j, m} (\sigma_{j+2, m} - \sigma_{j+1, m} ) \left( e^{\frac{4 \pi i (k_m - l_m)}{ 3} } e^{2 \pi i \sigma_{j, m}} + e^{- 2 \pi i \sigma_{j, m}}\right). \end{equation} We are ready to state the main result of the paper: \begin{theorem}\label{thm1} Let $L \in {\mathcal N}$. Assume that either $\dim {\mathcal M} = 1$ or \begin{equation}\label{main-assumption} p_m \neq 0 \quad \mbox{ and } \quad s_m \neq 0 \quad \mbox{ for all } 1 \le m \le n_L. \end{equation} There exists $\varepsilon_0 > 0$ depending only on $L$ such that for all (real) $u_0 \in L^2(0, L)$ with $\| u_0 \|_{L^2(0, L)} \le \varepsilon_0$, the unique solution $u \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ of \eqref{KdV-NL} satisfies \begin{equation}\label{thm1-cl1} \lim_{t \to + \infty} \| u(t, \cdot) \|_{L^2(0, L)} = 0. \end{equation} More precisely, there exists a constant $C$ depending only on $L$ such that, for $t \ge C/ \| u_0 \|_{L^2(0, L)}^2$ and $\| u_0 \|_{L^2(0, L)} \le \varepsilon_0$, it holds that \begin{equation}\label{thm1-cl2} \|u(t, \cdot) \|_{L^2(0, L) } \le \frac{1}{2} \| u(0, \cdot) \|_{L^2(0, L)}. \end{equation} As a consequence, we have \begin{equation}\label{thm1-cl3} \| u(t, \cdot) \|_{L^2(0, L)} \le c /t^{1/2} \mbox{ for } t > 0, \end{equation} for some positive constant $c$ depending only on $L$. \end{theorem} \begin{remark} \rm Let $L \in {\mathcal N}$. The condition $p_m \neq 0$ for all $1 \le m \le n_L$ is equivalent to the fact that $\dim {\mathcal M}$ is even, see e.g. \cite{Cerpa14}.
\end{remark} \begin{remark}\rm Note that $s_m$ is an antisymmetric function of $(\sigma_{1, m}, \sigma_{2, m}, \sigma_{3, m})$ and hence the condition \eqref{main-assumption} does not depend on the order of $(\sigma_{1, m}, \sigma_{2, m}, \sigma_{3, m})$. \end{remark} \begin{remark} \rm Assume \eqref{main-assumption}. Applying \Cref{thm1}, one derives from \eqref{key-identity-0} that $0$ is (locally) asymptotically stable with respect to the $L^2(0, L)$-norm for system \eqref{KdV-NL}. \end{remark} \begin{remark} \rm Assume that \eqref{thm1-cl2} holds. By the regularity properties of the KdV equations, one derives that the same rate of decay holds for $t>1$ when $\| \cdot \|_{L^2(0, L) }$ is replaced by $\| \cdot \|_{H^m(0, L) }$ for $m \ge 1$. \end{remark} Condition \eqref{main-assumption} can be checked numerically. For example, using scilab (the program is given in the appendix), we can check $s_m \neq 0$ for all $(k_m, l_m) \in \mathbb{N}_* \times \mathbb{N}_*$ with $1 \le l_m < k_m < 2000$. As a consequence, we have \begin{corollary}\label{cor1} Let $L \in {\mathcal N}$. Assume that either $\dim {\mathcal M} = 1$ or $1 \le k_m, l_m \le 1000$ for some $1 \le m \le n_L$. Then \eqref{thm1-cl3} holds if $p_m \neq 0$ for all $1 \le m \le n_L$. \end{corollary} We thus rediscover the decay results in \cite{CCS15,TCSC18} by a different approach and obtain new results. \begin{remark} \rm Concerning \eqref{main-assumption}, we expect that $s_m \neq 0$ holds for all $L \in {\mathcal N}$ but we are unable to prove it. \end{remark} The optimality of the decay rate $1/ t^{1/2}$ given in \eqref{thm1-cl3} is open. However, we can establish the following result for all critical lengths. \begin{proposition}\label{pro-opt} Let $L \in {\mathcal N}$. There exists $c > 0$ such that for all $\varepsilon > 0$, there exists $u_0 \in L^2(0, L)$ such that $$ \| u_0\|_{L^2(0, L)} \le \varepsilon \quad \mbox{ and } \quad \| u(t, \cdot) \|_{L^2(0, L)} \ge c \ln (t+2) /t \mbox{ for some } t > 0.
$$ \end{proposition} It is natural to ask if the decay holds globally, i.e., without the assumption on the smallness of the initial data. In fact, this cannot hold even for non-critical lengths. More precisely, Doronin and Natali \cite{DN14} showed that there exist infinitely many stationary states of \eqref{KdV-NL} for any length $L$, whether critical or not. \subsection{Ideas of the analysis and structure of the paper} The key to the analysis of \Cref{thm1} is to observe and establish the following fact (see \Cref{lemK}): Let $L \in {\mathcal N}$. Under condition \eqref{main-assumption} or $\dim {\mathcal M} =1$, there exist two constants $T_0>0$ and $C>0$ depending only on $L$ such that for $T \ge T_0$, one has, for all $u_0 \in L^2(0, L)$ with $\| u_0\|_{L^2(0, L)}$ sufficiently small, \begin{equation}\label{decayK-I} \| u(T, \cdot) \|_{L^2(0, L)} \le \| u_0 \|_{L^2(0, L)} \Big(1 - C \| u_0 \|_{L^2(0, L)}^2 \Big), \end{equation} where $u$ is the unique solution of \eqref{KdV-NL}. To get an idea of how to prove \eqref{decayK-I}, let us consider the case $u_0 \in {\mathcal M} \setminus \{0 \}$, which is somehow the worst case. The analysis is inspired by the spirit of the power series expansion method \cite{CC04}. Let $\widetilde u_1$ be the unique solution of \begin{equation} \label{hu1-Int}\left\{ \begin{array}{cl} \widetilde u_{1, t} (t, x) + \widetilde u_{1, x} (t, x) + \widetilde u_{1, xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde u_1(t, x=0) = \widetilde u_1(t, x=L) = \widetilde u_{1, x} (t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \widetilde u_1(t = 0, \cdot) = u_0/ \varepsilon & \mbox{ in } (0, L), \end{array}\right.
\end{equation} with $\varepsilon = \| u_0\|_{L^2(0, L)} > 0$, and let $\widetilde u_2$ be the unique solution of \begin{equation}\label{hu2-Int}\left\{ \begin{array}{cl} \widetilde u_{2, t} (t, x) + \widetilde u_{2, x} (t, x) + \widetilde u_{2, xxx} (t, x) + \widetilde u_{1, x} (t, x) \widetilde u_1 (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde u_2(t, x=0) = \widetilde u_2(t, x=L) = \widetilde u_{2, x} (t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \widetilde u_2(t = 0, \cdot) = 0 & \mbox{ in } (0, L). \end{array}\right. \end{equation} By considering the system of $\varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 - u$, we can prove that, for arbitrary $\tau > 0$, \begin{equation}\label{diff-I} \| (\varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 - u)_x (\cdot, 0) \|_{L^2(0, \tau)} \le c_\tau \varepsilon^3, \end{equation} for some $c_\tau > 0$ depending only on $\tau$ and $L$, provided that $\varepsilon$ is sufficiently small. (Formally, $r : = \varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 - u$ satisfies $r(0, \cdot) = 0$ and $r_t + r_x + r_{xxx} = \frac{1}{2} \big( (\varepsilon^2 \widetilde u_2 - r)(u + \varepsilon \widetilde u_1) \big)_x$, whose right-hand side is of order $\varepsilon^3$.) Since $\widetilde u_{1}(t, \cdot) \in {\mathcal M}$ for all $t > 0$, one can then derive that $$ \widetilde u_{1, x} (t, 0) = 0 \mbox{ for } t \ge 0. $$ Thus, if one can show that, for some $\tau_0 > 0$ and some $c_0> 0$, \begin{equation}\label{cond-hu2-I} \| \widetilde u_{2, x}(\cdot, 0) \|_{L^2(0, \tau_0)} \ge c_0, \end{equation} then from \eqref{diff-I} one has, for $\varepsilon$ small enough, $$ \| u_{x}(\cdot, 0) \|_{L^2(0, \tau_0)} \ge c_0 \varepsilon^2. $$ This implies \eqref{decayK-I} with $T_0 = \tau_0$ by \eqref{key-identity}. To establish \eqref{cond-hu2-I}, we first construct a special solution $W$ of the system \begin{equation}\label{W-Int}\left\{ \begin{array}{cl} W_t (t, x) + W_{x} (t, x) + W_{xxx} (t, x) + \widetilde u_{1, x} (t, x) \widetilde u_1 (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] W(t, x=0) = W(t, x=L) = W_x (t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \end{array}\right. \end{equation} via a separation-of-variables process.
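Let us make this separation of variables explicit, as it motivates the systems studied below (this is a heuristic reformulation of ours, not an additional result): each time-frequency appearing in the source term $\widetilde u_{1, x} \widetilde u_1$ leads to an ODE in $x$. Indeed, inserting the ansatz $W(t, x) = e^{-i q t} \varphi(x)$, with a source of the form $e^{-i q t} g'(x)$, into the equation satisfied by $W$ yields $$ - i q \varphi(x) + \varphi'(x) + \varphi'''(x) + g'(x) = 0 \mbox{ in } (0, L), $$ together with the boundary conditions $\varphi(0) = \varphi(L) = \varphi'(L) = 0$; this is exactly the form of the systems solved in \Cref{sect-construction}, with $q = p_{m_1} \pm p_{m_2}$.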
Moreover, we can prove for such a solution $W$ that \begin{multline}\label{W_x-quasi-I} \mbox{$W$ is bounded by $\| \widetilde u_1(0, \cdot) \|_{L^2(0, L)}$ up to a positive constant,} \\ \mbox{and $W_{x}(\cdot, 0)$ is a non-trivial quasi-periodic function.} \end{multline} The proof of this property is based on some useful observations on $p_m$ and the boundary conditions considered in \eqref{KdV-NL}, and involves some arithmetic arguments. It is in the proof of the existence of $W$ and of the second assertion in \eqref{W_x-quasi-I} that assumption \eqref{main-assumption} or $\dim {\mathcal M} = 1$ is required. Note that, for all $\delta > 0$, there exists $T_\delta > 0$ such that it holds, for $\tau \ge T_\delta$, \begin{equation}\label{decay-I} \| y_x(\cdot, 0) \|_{L^2(\tau, 2 \tau)} \le \delta \| y_0 \|_{L^2(0, L)}, \end{equation} for every solution $y \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ of the system \begin{equation*}\left\{ \begin{array}{cl} y_t (t, x) + y_x (t, x) + y_{xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] y(t, x=0) = y(t, x=L) = y_x(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty). \end{array}\right. \end{equation*} Combining \eqref{W_x-quasi-I} and \eqref{decay-I}, we can derive \eqref{cond-hu2-I} after applying the theory of quasi-periodic functions, see e.g. \cite{Bohr47}. \medskip The proof of \Cref{pro-opt} is inspired by the approach used to prove \Cref{thm1} outlined above. \medskip The paper is organized as follows. The elements for the construction of $W$ are given in \Cref{sect-construction} and the elements for the proof of \eqref{W_x-quasi-I} are given in \Cref{sect-quasi}. The proof of \Cref{thm1} is given in \Cref{sect-thm1} where \eqref{decayK-I} is formulated in \Cref{lemK}. The proof of \Cref{pro-opt} is given in \Cref{sect-opt}.
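Condition \eqref{main-assumption} lends itself to machine verification. As an illustration, the following self-contained Python sketch (our own transcription, not the scilab program of the appendix; it assumes only the numpy package) evaluates $p(k, l)$ from \eqref{def-pm} and $s(k, l)$ from \eqref{def-sm}, the $\sigma_j$ being computed as the roots of $\sigma^3 - 3(k^2 + kl + l^2)\sigma + 2(2k + l)(2l + k)(k - l) = 0$:

```python
import numpy as np

def p_of(k, l):
    # p(k, l) as in (def-pm); it vanishes exactly when k = l
    A = k*k + k*l + l*l
    return (2*k + l)*(k - l)*(2*l + k) / (3*np.sqrt(3)*A**1.5)

def s_from(sig, k, l):
    # s evaluated on a given ordering (sigma_1, sigma_2, sigma_3),
    # following (def-sm); s is antisymmetric in the sigma_j, so only
    # |s| is independent of the ordering returned by the root finder
    w = np.exp(4j*np.pi*(k - l)/3)
    total = 0j
    for j in range(3):
        total += sig[j]*(sig[(j + 2) % 3] - sig[(j + 1) % 3]) \
            * (w*np.exp(2j*np.pi*sig[j]) + np.exp(-2j*np.pi*sig[j]))
    return total

def s_of(k, l):
    A = k*k + k*l + l*l
    B = 2*(2*k + l)*(2*l + k)*(k - l)
    sig = np.roots([1.0, 0.0, -3.0*A, B])   # sigma^3 - 3A sigma + B = 0
    return s_from(sig, k, l)

assert abs(p_of(3, 3)) < 1e-12              # k = l gives p = 0
assert p_of(2, 1) > 0
sig = np.roots([1.0, 0.0, -21.0, 40.0])     # the pair (k, l) = (2, 1)
# antisymmetry: swapping two roots flips the sign of s
assert abs(s_from(sig, 2, 1) + s_from(sig[[1, 0, 2]], 2, 1)) < 1e-6
assert abs(s_of(2, 1)) > 1e-6               # s != 0 for (k, l) = (2, 1)
```

Looping `s_of` over $1 \le l < k < 2000$ reproduces the verification range quoted above.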
In the appendix, we reproduce a proof of a technical result, which is obtained in \cite{CKN-20}, and provide the scilab code. \section{Construction of auxiliary functions} \label{sect-construction} Let us begin by recalling and introducing some useful notation motivated by the structure of ${\mathcal M}$, see e.g. \cite{Rosier97,CC04,Cerpa14}. For $L \in {\mathcal N}$ and for $1 \le m \le n_L$, denote \begin{equation}\label{def-etam} \left\{\begin{array}{c} \displaystyle \eta_{1, m} = - \frac{2 \pi i (2 k_m + l_m) }{3 L },\\[6pt] \displaystyle \eta_{2, m} = \eta_{1, m} + \frac{2 \pi i }{L} k_m = \frac{2 \pi i (k_m - l_m) }{3 L }, \\[6pt] \displaystyle \eta_{3, m} = \eta_{2, m} + \frac{2 \pi i }{L} l_m = \frac{2 \pi i (k_m + 2 l_m) }{3 L }. \end{array}\right. \end{equation} Set \begin{equation}\label{def-psi} \left\{ \begin{array}{cl} \psi_m(x) = \sum_{j=1}^3 (\eta_{j+1, m} - \eta_{j, m}) e^{\eta_{j+2, m} x} & \mbox{ for } x \in [0, L], \\[6pt] \Psi_m(t, x) = e^{- i t p_m} \psi_m(x) & \mbox{ for } (t, x) \in \mathbb{R} \times [0, L], \end{array} \right. \end{equation} (recall that $p_m$ is defined in \eqref{def-pm}). It is clear from the definition of $\eta_{j, m}$ in \eqref{def-etam} that \begin{equation}\label{pro-etam} e^{\eta_{1, m} L} = e^{\eta_{2, m} L} = e^{\eta_{3, m} L}. \end{equation} This property of $\eta_{j,m}$ associated with $L$ is used several times in our analysis. \begin{remark} \rm One can check that the $\eta_{j, m}$ are the solutions of the equation $$ \lambda^3 + \lambda - i p_m = 0. $$ This implies in particular that $p_{m_1} \neq p_{m_2}$ if $(k_{m_1}, l_{m_1}) \neq (k_{m_2}, l_{m_2})$ as observed in \cite{Cerpa07}.
\end{remark} It is known that $\Psi_m$ is a solution of the linearized KdV system; moreover, $$ \Psi_{m, x}(\cdot, 0) = 0, $$ i.e., \begin{equation}\label{KdV-Psi}\left\{ \begin{array}{cl} \Psi_{m, t} (t, x) + \Psi_{m, x} (t, x) + \Psi_{m, xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \Psi_m(t, 0) = \Psi_m(t, L) = \Psi_{m, x} (t , 0) =\Psi_{m, x}(t , L) = 0 & \mbox{ for } t \in (0, +\infty). \end{array}\right. \end{equation} These properties of $\Psi_m$ can be easily checked. It is known that, see e.g. \cite{Cerpa14}, \begin{equation}\label{span-M} {\mathcal M} = \mbox{span} \Big( \big\{ \Re( \psi_m(x)); 1 \le m \le n_L \big\} \cup \big\{ \Im( \psi_m(x)); 1 \le m \le n_L \big\} \Big). \end{equation} Here and in what follows, for a complex number $z$, we denote by $\Re z$, $\Im z$, and $\bar z$ its real part, its imaginary part, and its conjugate, respectively. In this section, we prepare elements to construct the function $W$ mentioned in the introduction. Assume that $u_0 \in {\mathcal M} \setminus \{0\}$ and let $\varepsilon = \| u_0 \|_{L^2(0, L)}$. By \eqref{span-M}, there exists $(\alpha_m)_{m=1}^{n_L} \subset \mathbb{C}$ such that \begin{equation}\label{tu1-***} \frac{1}{\varepsilon} u_{0} = \Re \left\{ \sum_{m=1}^{n_L} \alpha_m \Psi_m (0, x) \right\}. \end{equation} The function $\widetilde u_1$ defined by \eqref{hu1-Int} is then given by $$ \widetilde u_1(t, x) = \Re \left\{ \sum_{m=1}^{n_L} \alpha_m \Psi_m (t, x) \right\} = \Re \left\{ \sum_{m=1}^{n_L} \alpha_m e^{- i p_m t} \psi_m (x) \right\}.
$$ Using the fact that, for an appropriate complex function $f$, $$ \Re f(t, x) \Re f_x(t, x) = \frac{1}{2} \Big( (\Re f(t, x) )^2 \Big)_x= \frac{1}{8} \Big( \big( f(t, x)^2 \big)_x + \big( \bar f(t, x)^2 \big)_x + 2 \big( |f(t, x)|^2 \big)_x \Big), $$ we derive from \eqref{def-psi} and \eqref{tu1-***} that \begin{align}\label{motivation-sect-construction} \widetilde u_{1, x} (t, x) \widetilde u_1(t, x) = & \frac{1}{8} \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \Big( \alpha_{m_1} \alpha_{m_2} e^{-i (p_{m_1} + p_{m_2}) t} \psi_{m_1} (x) \psi_{m_2} (x) \Big)_x \\[6pt] & + \frac{1}{8} \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \Big( \overline{ \alpha_{m_1} \alpha_{m_2} e^{-i (p_{m_1} + p_{m_2}) t} \psi_{m_1} (x) \psi_{m_2} (x)} \Big)_x \nonumber \\[6pt] & + \frac{1}{4} \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \Big( \alpha_{m_1} \bar \alpha_{m_2} e^{-i (p_{m_1} - p_{m_2}) t} \psi_{m_1} (x) \bar \psi_{m_2} (x) \Big)_x. \nonumber \end{align} Motivated by \eqref{motivation-sect-construction}, in this section, we construct solutions of system \eqref{pro1-sys-1}-\eqref{pro1-sys-2} and system \eqref{pro1-sys-1-Co}-\eqref{pro1-sys-2-Co} below. \medskip We begin with the following simple result whose proof is omitted. \begin{lemma}\label{lem-der} Let $L \in {\mathcal N}$ and $1 \le m_1, m_2 \le n_L$. We have, in $[0, L]$, \begin{multline}\label{lem-der-cl1} \Big(\psi_{m_1} \psi_{m_2} \Big)'(x) \\[6pt] = \sum_{j=1}^3 \sum_{k=1}^3 (\eta_{j+1, m_1} - \eta_{j, m_1})(\eta_{k+1, m_2} - \eta_{k, m_2}) (\eta_{j+2, m_1} + \eta_{k+2, m_2}) e^{(\eta_{j+2, m_1} + \eta_{k+2, m_2}) x}, \end{multline} and \begin{multline}\label{lem-der-cl2} \Big(\psi_{m_1} \bar \psi_{m_2} \Big)'(x) \\[6pt] = \sum_{j=1}^3 \sum_{k=1}^3 (\eta_{j+1, m_1} - \eta_{j, m_1})(\bar \eta_{k+1, m_2} - \bar \eta_{k, m_2}) (\eta_{j+2, m_1} + \bar \eta_{k+2, m_2}) e^{(\eta_{j+2, m_1} + \bar \eta_{k+2, m_2}) x}.
\end{multline} \end{lemma} We next introduce \begin{definition}\label{def1} For $z \in \mathbb{C}$, let $\lambda_j = \lambda_{j} (z)$ $(1 \le j \le 3)$ be the roots of the equation \begin{equation}\label{def-lambda} \lambda^3 + \lambda - i z = 0, \end{equation} and set \begin{equation}\label{def-Q} Q (z) = \left(\begin{array}{ccc} 1 & 1 & 1 \\[6pt] e^{\lambda_1 L } & e^{\lambda_2 L} & e^{\lambda_3 L} \\[6pt] \lambda_1 e^{\lambda_1 L} & \lambda_2 e^{\lambda_2 L} & \lambda_3 e^{\lambda_3 L} \end{array}\right). \end{equation} \end{definition} \begin{remark}\rm Some comments on the definition of $Q$ are in order. The determinant of $Q$ is an antisymmetric function of $(\lambda_1, \lambda_2, \lambda_3)$, and the definition of $Q$ depends on a choice of the order of $(\lambda_1, \lambda_2, \lambda_3)$. Nevertheless, we later consider either the equation $\det Q = 0$ or a quantity depending on $Q$ in such a way that the order of $(\lambda_1, \lambda_2, \lambda_3)$ does not matter. The definition of $Q$ is only considered in these contexts. \end{remark} \begin{remark} \rm The definition of $\lambda_j(z)$ in \Cref{def1} is slightly different from the one given in \cite{CKN-20} where $i z$ is used instead of $-iz$ in \eqref{def-lambda}. \end{remark} \begin{remark} \rm \label{rem-pm-lambda} It is known that if $z \in {\mathcal P}_L$ for some $L \in {\mathcal N}$, then $$ \lambda_j = \eta_{j, m} \mbox{ for some } 1 \le m \le n_L. $$ Hence, by \eqref{pro-etam}, $$ e^{\lambda_1 L } = e^{\lambda_2 L } = e^{\lambda_3 L }. $$ \end{remark} \begin{remark}\label{rem-lambda} \rm Note that \eqref{def-lambda} has simple roots for $z \neq \pm 2/(3 \sqrt{3})$. Thus, a general solution of the equation $$ y'''(x) + y'(x) - i z y (x) = 0 \mbox{ in } [0, L], $$ is of the form $\sum_{j=1}^3 a_j e^{\lambda_j(z) x}$ when $z \neq \pm 2/ (3 \sqrt{3})$. For $z = \pm 2/(3 \sqrt{3})$, equation \eqref{def-lambda} has three roots $$ \lambda_1 = \mp 2i / \sqrt{3} \quad \mbox{ and } \quad \lambda_2 = \lambda_3 = \pm i/ \sqrt{3}.
$$ \end{remark} We now recall a useful property of solutions of the equation $\det Q = 0$ which is established in \cite{CKN-20} (a consequence of \cite[Remark 2.7]{CKN-20}). \begin{lemma}\label{lem-Q} Let $z \in \mathbb{R}$. Then $\det Q(z) = 0$ if and only if either $z = \pm 2/ (3 \sqrt{3})$ or ($L \in {\mathcal N}$ and $z \in {\mathcal P}_L$). Moreover, $$ \big\{ \pm 2/ (3 \sqrt{3}) \big\} \cap {\mathcal P}_{L} = \emptyset \mbox{ for all } L \in {\mathcal N}. $$ \end{lemma} The proof of \Cref{lem-Q} is reproduced in the appendix for the convenience of the reader. \medskip Let $L \in {\mathcal N}$ and $1 \le m_1, m_2 \le n_L$. As mentioned above, we are interested in constructing a solution of the system \begin{equation}\label{pro1-sys-1} - i (p_{m_1} + p_{m_2}) \varphi_{m_1, m_2}(x) + \varphi_{m_1, m_2}' (x) + \varphi_{m_1, m_2}''' (x) + \Big(\psi_{m_1} \psi_{m_2} \Big)'(x)= 0 \mbox{ in } (0, L), \end{equation} and \begin{equation}\label{pro1-sys-2} \varphi_{m_1, m_2}(0) = \varphi_{m_1, m_2}(L) = \varphi_{m_1, m_2}'(L)= 0. \end{equation} \medskip We have \begin{proposition}\label{pro1} Let $L \in {\mathcal N}$ and $1 \le m_1, m_2 \le n_L$. Let $\lambda_j = \lambda_j(p_{m_1} + p_{m_2})$ and $\mathcal Q = Q(p_{m_1} + p_{m_2})$ where $\lambda_j$ and $Q$ are defined by \eqref{def-lambda} and \eqref{def-Q}. When $p_{m_1} \neq 0$ and $p_{m_2} \neq 0$, set \begin{equation}\label{pro1-D} D = D_{m_1, m_2}= \sum_{j=1}^3 \sum_{k=1}^3 \frac{(\eta_{j+1, m_1} - \eta_{j, m_1})(\eta_{k+1, m_2} - \eta_{k, m_2}) }{3 \eta_{j+2, m_1} \eta_{k+2, m_2}}, \end{equation} and \begin{equation}\label{pro1-fm1m2} \chi_{m_1, m_2}(x) = - \sum_{j=1}^3 \sum_{k=1}^3 \frac{(\eta_{j+1, m_1} - \eta_{j, m_1})(\eta_{k+1, m_2} - \eta_{k, m_2}) }{3 \eta_{j+2, m_1} \eta_{k+2, m_2}} e^{(\eta_{j+2, m_1} + \eta_{k+2, m_2}) x} \mbox{ in } [0, L].
\end{equation} We have \begin{enumerate} \item[1)] Assume that $p_{m_1} \neq 0$, $p_{m_2} \neq 0$, and $p_{m_1} + p_{m_2} \not \in {\mathcal P}_L \cup \big\{ 2 / (3 \sqrt{3}) \big\}$. The unique solution of system \eqref{pro1-sys-1}-\eqref{pro1-sys-2} is given by \begin{equation}\label{pro1-wm1m2} \varphi_{m_1, m_2} (x) = \chi_{m_1, m_2}(x) + \sum_{j=1}^3 a_j e^{\lambda_j x}, \end{equation} where $(a_1, a_2, a_3) $ is uniquely determined via \eqref{pro1-sys-2}, i.e., \begin{equation}\label{pro1-aj} \mathcal Q (a_1, a_2, a_3)^\mathsf{T} = D(1, e^{(\eta_{1, m_1} + \eta_{1, m_2}) L }, 0)^\mathsf{T}. \end{equation} \item[2)] Assume that $p_{m_1} \neq 0$, $p_{m_2} \neq 0$, and $p_{m_1} + p_{m_2} \in {\mathcal P}_L$. A solution of system \eqref{pro1-sys-1}-\eqref{pro1-sys-2} is given by \eqref{pro1-wm1m2} where $(a_1, a_2, a_3)$ satisfies \begin{equation}\label{pro1-aj-2} a_1 + a_2 + a_3 = D \quad \mbox{ and } \quad \lambda_1 a_1 + \lambda_2 a_2 + \lambda_3 a_3 = 0. \end{equation} \item[3)] Assume that $p_{m_1} \neq 0$, $p_{m_2} \neq 0$, and $p_{m_1} + p_{m_2} = 2/ (3 \sqrt{3})$. Consider the convention \begin{equation}\label{lem2-lambda} \lambda_1 = - 2 i / \sqrt{3} \quad \mbox{ and } \quad \lambda_2 = \lambda_3 = i / \sqrt{3}. \end{equation} System \eqref{pro1-sys-1}-\eqref{pro1-sys-2} has a unique solution given by \begin{equation}\label{def-wmm-*} \varphi_{m_1, m_2} (x) = \chi_{m_1, m_2}(x) + a_1 e^{\lambda_1 x} + (a_2 + a_3 x) e^{\lambda_2 x}, \end{equation} where $(a_1, a_2, a_3)$ is uniquely determined via \eqref{pro1-sys-2}, i.e., \begin{equation}\label{pro1-aj-3} \mathcal Q_1 (a_1, a_2, a_3)^\mathsf{T} = D(1, e^{(\eta_{1, m_1} + \eta_{1, m_2}) L }, 0)^\mathsf{T}, \end{equation} where \begin{equation}\label{def-Qm} \mathcal Q_1 = \left(\begin{array}{ccc} 1 & 1 & 0 \\[6pt] e^{\lambda_1 L } & e^{\lambda_2 L} & L e^{\lambda_2 L} \\[6pt] \lambda_1 e^{\lambda_1 L} & \lambda_2 e^{\lambda_2 L} &(\lambda_2 L + 1) e^{\lambda_2 L} \end{array}\right). 
\end{equation} \item[4)] Assume that $p_{m_1} = p_{m_2} = 0$ and thus $m_1 = m_2 = m$. A solution of system \eqref{pro1-sys-1}-\eqref{pro1-sys-2} is \begin{equation}\label{def-wmm-0} \varphi_{m, m} (x) = 4 \left( L \sin x + \frac{1}{6} - x \sin x - \frac{1}{6} \cos (2x) \right). \end{equation} \end{enumerate} \end{proposition} \begin{proof} We proceed with the proof of 1), 2), 3), and 4) in Steps 1, 2, 3, and 4 below, respectively. \medskip \noindent{\it Step 1}: Proof of 1). Since $\eta = \eta_{j, m}$ $(1 \le j \le 3)$ is a root of the equation \begin{equation*} \eta^3 + \eta - ip_m = 0, \end{equation*} it follows that \begin{equation*} \eta_{j, m_1} \neq - \eta_{k, m_2} \end{equation*} (since otherwise $p_{m_1} = - p_{m_2}$ which is impossible), and \begin{equation*} (\eta_{j, m_1} + \eta_{k, m_2})^3 + (\eta_{j, m_1} + \eta_{k, m_2}) - i (p_{m_1} + p_{m_2}) = 3 \eta_{j, m_1} \eta_{k, m_2} (\eta_{j,m_1} + \eta_{k, m_2}). \end{equation*} Since $p_{m_1} \neq 0$ and $p_{m_2} \neq 0$, we derive from \Cref{lem-der} that $\chi_{m_1, m_2}$ is a solution of \eqref{pro1-sys-1}. Since a general solution of the equation $\xi''' + \xi' = i (p_{m_1} + p_{m_2}) \xi$ is of the form $\sum_{j=1}^3 a_j e^{\lambda_j x}$ by \Cref{rem-lambda}, it follows that \begin{equation}\label{pro1-gen-sol} \mbox{ a general solution of \eqref{pro1-sys-1} is of the form $\chi_{m_1, m_2} (x) + \sum_{j=1}^3 a_j e^{\lambda_j x}$}. \end{equation} We have \begin{equation}\label{pro1-pro-chi} - \chi_{m_1, m_2} (0) = D, \quad - \chi_{m_1, m_2} (L) \mathop{=}^{\eqref{pro-etam}} D e^{(\eta_{1, m_1} + \eta_{1, m_2}) L}, \quad \mbox{ and } \quad - \chi_{m_1, m_2, x} (L) \mathop{=}^{\eqref{pro-etam}} 0. 
\end{equation} It follows that a function of the form $\chi_{m_1, m_2} (x) + \sum_{j=1}^3 a_j e^{\lambda_j x}$ satisfies \eqref{pro1-sys-2} if and only if \begin{equation*} \sum_{j=1}^3 a_j = D, \quad \sum_{j=1}^3 a_j e^{\lambda_j L} = D e^{(\eta_{1, m_1} + \eta_{1, m_2}) L}, \quad \sum_{j=1}^3 a_j \lambda_j e^{\lambda_j L} = 0, \end{equation*} which is equivalent to \eqref{pro1-aj}. Since $p_{m_1} + p_{m_2} \not \in {\mathcal P}_{L} \cup \big\{2 /( 3 \sqrt{3}) \big\}$ and $p_{m_1} + p_{m_2} > 0$, it follows from \Cref{lem-Q} that $\det \mathcal Q \neq 0$. Therefore, one obtains 1). \medskip \noindent{\it Step 2:} Proof of 2). A solution of \eqref{pro1-sys-1} is of the form $\chi_{m_1, m_2} (x) + \sum_{j=1}^3 a_j e^{\lambda_j x}$. This function satisfies \eqref{pro1-sys-2} if and only if, by \Cref{rem-pm-lambda} (recall that $p_{m_1} + p_{m_2} \in {\mathcal P}_L$), \begin{equation*} \sum_{j=1}^3 a_j = D, \quad e^{\lambda_1 L} \sum_{j=1}^3 a_j \mathop{=}^{\eqref{pro-etam}} D e^{(\eta_{1, m_1} + \eta_{1, m_2}) L}, \quad \sum_{j=1}^3 a_j \lambda_j \mathop{=}^{\eqref{pro-etam}} 0. \end{equation*} This system has a solution if \begin{equation}\label{Step2-key} e^{\lambda_1 L} = e^{(\eta_{1, m_1} + \eta_{1, m_2}) L}, \end{equation} and a solution is given by \eqref{pro1-wm1m2} where $(a_1, a_2, a_3)$ satisfies \eqref{pro1-aj-2}. It remains to prove \eqref{Step2-key}. Assume, for some $p_{m_3} \in {\mathcal P}_L$, that \begin{equation}\label{pro1-S2-p} p_{m_1} + p_{m_2} = p_{m_3}. \end{equation} To establish \eqref{Step2-key}, it suffices to prove that, by \eqref{pro-etam} and \Cref{rem-pm-lambda}, $$ e^{( \eta_{2, m_1} + \eta_{2, m_2} ) L } = e^{\eta_{2, m_3} L } $$ which is equivalent to the fact, by \eqref{def-etam}, \begin{equation}\label{pro1-S2-mod} \frac{k_{m_3} - l_{m_3}}{3} - \frac{k_{m_1} - l_{m_1}}{3} - \frac{k_{m_2} - l_{m_2}}{3} \in \mathbb{Z}. 
\end{equation} From \eqref{pro1-S2-p} and the definition of $p_m$ in \eqref{def-pm}, we have \begin{multline}\label{pro1-S2-p1} (k_{m_3} - l_{m_3}) (2k_{m_3} + l_{m_3}) (2 l_{m_3} + k_{m_3}) \\[6pt] = (k_{m_1} - l_{m_1}) (2k_{m_1} + l_{m_1}) (2 l_{m_1} + k_{m_1}) + (k_{m_2} - l_{m_2}) (2k_{m_2} + l_{m_2}) (2 l_{m_2} + k_{m_2}). \end{multline} Since $$ (k_{m_j} - l_{m_j}) (2k_{m_j} + l_{m_j}) (2 l_{m_j} + k_{m_j}) = l_{m_j} - k_{m_j} \mod 3, $$ it follows from \eqref{pro1-S2-p1} that $$ k_{m_3} - l_{m_3} = k_{m_1} - l_{m_1} + \big( k_{m_2} - l_{m_2} \big) \mod 3, $$ which yields \eqref{pro1-S2-mod}. The proof of Step 2 is complete. \medskip \noindent{\it Step 3:} Proof of 3). A solution of \eqref{pro1-sys-1} is of the form $\chi_{m_1, m_2}(x) + a_1 e^{\lambda_1 x} + (a_2 + a_3 x) e^{\lambda_2 x}$. This function satisfies \eqref{pro1-sys-2} if and only if, by \eqref{pro1-pro-chi}, $$ a_1 + a_2 = D, \quad a_1 e^{\lambda_1 L } + a_2 e^{\lambda_2 L } + a_3 L e^{\lambda_2 L} = De^{(\eta_{1, m_1} + \eta_{1, m_2} ) L}, $$ and $$ a_1 \lambda_1 e^{\lambda_1 L } + a_2 \lambda_2 e^{\lambda_2 L } + a_3 (\lambda_2 L + 1) e^{\lambda_2 L} = 0, $$ which is equivalent to \eqref{pro1-aj-3}. Hence, it suffices to prove that $\mathcal Q_1$ is invertible. Replacing the third row of $\mathcal Q_1$ by itself minus $\lambda_2$ times the second row, we obtain \begin{equation} \mathcal Q_2 = \left(\begin{array}{ccc} 1 & 1 & 0 \\[6pt] e^{\lambda_1 L } & e^{\lambda_2 L} & L e^{\lambda_2 L} \\[6pt] (\lambda_1 - \lambda_2) e^{\lambda_1 L} & 0 &e^{\lambda_2 L} \end{array}\right). \end{equation} We have $$ \det \mathcal Q_2 = e^{2 \lambda_2 L} - \big(1 - L (\lambda_1 - \lambda_2) \big) e^{(\lambda_1 + \lambda_2 ) L }. $$ Using \eqref{lem2-lambda}, we derive that $\det \mathcal Q_2 = 0$ if and only if $$ e^{3 \lambda_2 L } = 1 + 3 \lambda_2 L. $$ Since the equation $e^{ix } = 1 + i x$ has only one solution $x = 0$ on the real line, one derives that $\det \mathcal Q_2 \neq 0$.
Therefore, $\mathcal Q_1$ is invertible. The proof of Step 3 is complete. \medskip \noindent{\it Step 4:} Proof of 4). Since $ p_{m} =0$, it follows that $k_m = l_m$, and $L = 2 \pi k_m$. One then has $$ \eta_{1, m} = -i, \quad \eta_{2, m} = 0, \quad \eta_{3, m} = i. $$ It follows from the definition of $\psi_m$ in \eqref{def-psi} that \begin{equation}\label{psi-complex} \psi_m(x) = 2 i (\cos x - 1). \end{equation} This implies $$ \big( \psi_m^2(x) \big)_x = 8 (\cos x - 1) \sin x. $$ A straightforward computation gives the conclusion. \medskip The proof of \Cref{pro1} is complete. \end{proof} \begin{remark} \rm In the case $p_{m_1} = 0$ and $p_{m_2} \neq 0$, one cannot construct a solution of \eqref{pro1-sys-1}-\eqref{pro1-sys-2} in general. In fact, one can check that \begin{multline} \chi_{m_1, m_2} (x) = - \sum_{j=1, 2} \sum_{k=1}^3 \frac{(\eta_{j+1, m_1} - \eta_{j, m_1})(\eta_{k+1, m_2} - \eta_{k, m_2}) }{3 \eta_{j+2, m_1} \eta_{k+2, m_2}} e^{(\eta_{j+2, m_1} + \eta_{k+2, m_2}) x} \\[6pt] - \sum_{k=1}^3 \frac{(\eta_{1, m_1} - \eta_{3, m_1})(\eta_{k+1, m_2} - \eta_{k, m_2}) \eta_{k+2, m_2} }{3 {\eta_{k+2, m_2}}^2 + 1} x e^{ \eta_{k+2, m_2} x} \end{multline} is a solution of \eqref{pro1-sys-1}. However, $$ \chi_{m_1, m_2} (0) \neq e^{-\eta_{1, m_2} L} \chi_{m_1, m_2} (L) $$ since, in general, $$ \sum_{k=1}^3 \frac{(\eta_{k+1, m_2} - \eta_{k, m_2}) \eta_{k+2, m_2} }{3 {\eta_{k+2, m_2}}^2 + 1} \neq 0. $$ Hence one cannot find $(a_1, a_2, a_3) \in \mathbb{C}^3$ such that the function $ \chi_{m_1, m_2}(x) + \sum_{j=1}^3 a_j e^{\lambda_j x}$, with $\lambda_j = \lambda_j (p_{m_2})$, verifies \eqref{pro1-sys-2}. \end{remark} Let $L \in {\mathcal N}$ and $1 \le m_1, m_2 \le n_L$.
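To make \Cref{pro1} concrete, here is a small numerical sanity check (a sketch of ours, assuming only the numpy package; it is not part of the paper's scilab code) for the pair $(k, l) = (2, 1)$, i.e., $m_1 = m_2 = m$ with $p_m \neq 0$, which falls under case 1). It assembles $\chi_{m, m}$ from \eqref{pro1-fm1m2}, solves \eqref{pro1-aj} for $(a_1, a_2, a_3)$, and verifies the boundary conditions \eqref{pro1-sys-2}, the identities of \Cref{lemE}, and the residual of \eqref{pro1-sys-1}:

```python
import numpy as np

k, l = 2, 1
A = k*k + k*l + l*l
L = 2*np.pi*np.sqrt(A/3)                                  # (def-L)
p = (2*k + l)*(k - l)*(2*l + k)/(3*np.sqrt(3)*A**1.5)     # (def-pm)

# eta_j from (def-etam)
eta = np.array([-2j*np.pi*(2*k + l)/(3*L),
                2j*np.pi*(k - l)/(3*L),
                2j*np.pi*(k + 2*l)/(3*L)])

def cyc(j, s):
    # cyclic index shift, implementing the convention eta_{j+3} = eta_j
    return (j + s) % 3

def chi(x, order=0):
    # chi_{m,m} from (pro1-fm1m2) and its derivatives, term by term
    out = 0j
    for j in range(3):
        for q in range(3):
            c = (eta[cyc(j, 1)] - eta[j])*(eta[cyc(q, 1)] - eta[q]) \
                / (3*eta[cyc(j, 2)]*eta[cyc(q, 2)])
            mu = eta[cyc(j, 2)] + eta[cyc(q, 2)]
            out -= c*mu**order*np.exp(mu*x)
    return out

D = -chi(0.0)
E = -27*k*l*(k + l)/((k + 2*l)*(2*k + l)*(k - l))
assert abs(D - E*E/3) < 1e-8      # Lemma lemE: D_{m,m} = E_m^2 / 3

# lambda_j: roots of lambda^3 + lambda - i(2p) = 0, i.e. (def-lambda) at z = 2p
lam = np.roots([1.0, 0.0, 1.0, -2j*p])

# solve Q (a_1, a_2, a_3)^T = D (1, e^{2 eta_1 L}, 0)^T, cf. (pro1-aj)
Q = np.array([np.ones(3), np.exp(lam*L), lam*np.exp(lam*L)])
a = np.linalg.solve(Q, D*np.array([1.0, np.exp(2*eta[0]*L), 0.0]))

def phi(x, order=0):
    # varphi_{m,m} from (pro1-wm1m2) and its derivatives
    return chi(x, order) + sum(a[j]*lam[j]**order*np.exp(lam[j]*x)
                               for j in range(3))

# boundary conditions (pro1-sys-2)
assert max(abs(phi(0.0)), abs(phi(L)), abs(phi(L, 1))) < 1e-8

def forcing(x):
    # (psi_m psi_m)'(x) from (lem-der-cl1)
    out = 0j
    for j in range(3):
        for q in range(3):
            mu = eta[cyc(j, 2)] + eta[cyc(q, 2)]
            out += (eta[cyc(j, 1)] - eta[j])*(eta[cyc(q, 1)] - eta[q]) \
                * mu*np.exp(mu*x)
    return out

# residual of the ODE (pro1-sys-1) at a sample point
x0 = 0.3
res = -2j*p*phi(x0) + phi(x0, 1) + phi(x0, 3) + forcing(x0)
assert abs(res) < 1e-8
```

The check exploits the identity $(\eta_{j} + \eta_{q})^3 + (\eta_{j} + \eta_{q}) - 2ip = 3 \eta_{j} \eta_{q} (\eta_{j} + \eta_{q})$ from Step 1 of the proof only implicitly, through the vanishing of the residual.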
We are next interested in constructing a solution of the system \begin{equation}\label{pro1-sys-1-Co} - i (p_{m_1} - p_{m_2}) \phi_{m_1, m_2}(x) + \phi_{m_1, m_2}'(x) + \phi_{m_1, m_2}''' (x) + \Big(\psi_{m_1} \bar \psi_{m_2} \Big)'(x)= 0 \mbox{ in } (0, L), \end{equation} and \begin{equation}\label{pro1-sys-2-Co} \phi_{m_1, m_2}(0) = \phi_{m_1, m_2}(L) = \phi_{m_1, m_2}'(L)= 0. \end{equation} We have \begin{proposition}\label{pro1-Co} Let $L \in {\mathcal N}$ and $1 \le m_1, m_2 \le n_L$. Let $\widetilde \lambda_j = \lambda_j(p_{m_1} - p_{m_2})$ and $\widetilde{\mathcal Q} = Q(p_{m_1} - p_{m_2})$ where $\lambda_j$ and $Q$ are defined by \eqref{def-lambda} and \eqref{def-Q}. When $p_{m_1} \neq 0$ and $p_{m_2} \neq 0$, set \begin{equation}\label{pro1-D-Co} \widetilde D = \widetilde D_{m_1, m_2} = \sum_{j=1}^3 \sum_{k=1}^3 \frac{(\eta_{j+1, m_1} - \eta_{j, m_1})(\bar \eta_{k+1, m_2} - \bar \eta_{k, m_2}) }{3 \eta_{j+2, m_1} \bar \eta_{k+2, m_2}} \end{equation} and \begin{equation}\label{pro1-chi-Co} \widetilde \chi_{m_1, m_2}(x) = - \sum_{j=1}^3 \sum_{k=1}^3 \frac{(\eta_{j+1, m_1} - \eta_{j, m_1})(\bar \eta_{k+1, m_2} - \bar \eta_{k, m_2}) }{3 \eta_{j+2, m_1} \bar \eta_{k+2, m_2}} e^{(\eta_{j+2, m_1} + \bar \eta_{k+2, m_2}) x} \mbox{ in } [0, L]. \end{equation} We have \begin{enumerate} \item[1)] Assume that $p_{m_1} \neq 0$, $p_{m_2} \neq 0$, $p_{m_1} \neq p_{m_2}$, and $p_{m_1} - p_{m_2} \not \in {\mathcal P}_L $. The unique solution of system \eqref{pro1-sys-1-Co}-\eqref{pro1-sys-2-Co} is given by \begin{equation}\label{pro1-wm1m2-Co} \phi_{m_1, m_2} (x) = \widetilde \chi_{m_1, m_2}(x) + \sum_{j=1}^3 a_j e^{\widetilde \lambda_j x}, \end{equation} where $(a_1, a_2, a_3)$ is uniquely determined via \eqref{pro1-sys-2-Co}, i.e., \begin{equation}\label{def-aj-Co} \widetilde{\mathcal Q} (a_1, a_2, a_3)^\mathsf{T} = \widetilde D(1, e^{(\eta_{1, m_1} + \bar \eta_{1, m_2}) L }, 0)^\mathsf{T}.
\end{equation} \item[2)] Assume that $p_{m_1} \neq 0$, $p_{m_2} \neq 0$, $p_{m_1} \neq p_{m_2}$, and $p_{m_1} - p_{m_2} \in {\mathcal P}_L$. A solution of system \eqref{pro1-sys-1-Co}-\eqref{pro1-sys-2-Co} is given by \eqref{pro1-wm1m2-Co} where $(a_1, a_2, a_3)$ satisfies \begin{equation} a_1 + a_2 + a_3 = \widetilde D \quad \mbox{ and } \quad \widetilde \lambda_1 a_1 + \widetilde \lambda_2 a_2 + \widetilde \lambda_3 a_3 = 0. \end{equation} \item[3)] Assume that $p_{m_1} = p_{m_2} \neq 0$ and thus $m_1 = m_2 = m$. System \eqref{pro1-sys-1-Co}-\eqref{pro1-sys-2-Co} has a unique solution \begin{multline*} \phi_{m, m} (x) = - \sum_{j=1}^3 \sum_{k=1}^3 \frac{(\eta_{j+1, m} - \eta_{j, m})(\bar \eta_{k+1, m} - \bar \eta_{k, m}) }{3 \eta_{j+2, m} \bar \eta_{k+2, m}} e^{(\eta_{j+2, m} + \bar \eta_{k+2, m}) x} \\[6pt] + \sum_{j=1}^3 \sum_{k=1}^3 \frac{(\eta_{j+1, m} - \eta_{j, m})(\bar \eta_{k+1, m} - \bar \eta_{k, m}) }{3 \eta_{j+2, m} \bar \eta_{k+2, m}}. \end{multline*} \item[4)] Assume that $p_{m_1} = p_{m_2} = 0$ and thus $m_1 = m_2 = m$. A solution of system \eqref{pro1-sys-1-Co}-\eqref{pro1-sys-2-Co} is \begin{equation}\label{def-wmm-0-Co} \phi_{m, m} (x) = - 4 \left( L \sin x + \frac{1}{6} - x \sin x - \frac{1}{6} \cos (2x) \right). \end{equation} \end{enumerate} \end{proposition} \begin{proof} We proceed with the proof of 1), 2), 3), and 4) in Steps 1, 2, 3, and 4 below, respectively. \medskip \noindent{\it Step 1}: Proof of 1). The proof is similar to Step 1 in the proof of \Cref{pro1}. One just notes that \begin{equation*} (\eta_{j, m_1} + \bar \eta_{k, m_2})^3 + (\eta_{j, m_1} + \bar \eta_{k, m_2}) - i (p_{m_1} - p_{m_2}) = 3 \eta_{j, m_1} \bar \eta_{k, m_2} (\eta_{j, m_1} + \bar \eta_{k, m_2}), \end{equation*} and $$ \eta_{j, m_1} + \bar \eta_{k, m_2} \neq 0 $$ since $p_{m_1} \neq p_{m_2}$. \medskip \noindent{\it Step 2:} Proof of 2). The proof is almost the same as Step 2 in the proof of \Cref{pro1}. The details are omitted.
\medskip \noindent{\it Step 3:} Proof of 3). One can check that $\phi_{m, m}$ is a solution of \eqref{pro1-sys-1-Co}-\eqref{pro1-sys-2-Co}. The uniqueness follows from the fact that equation \eqref{def-lambda} has simple roots for $z=0$. \medskip \noindent{\it Step 4:} Proof of 4). The conclusion is from 4) of \Cref{pro1} by noting that $$ |\psi_m(x)|^2 \mathop{=}^{\eqref{psi-complex}} - \psi_m(x)^2 \mbox{ if } p_m =0. $$ \medskip The proof is complete. \end{proof} \section{Properties of auxiliary functions} \label{sect-quasi} The main goal of this section is to establish, for $L \in {\mathcal N}$ and $1 \le m \le n_L$ with $p_m \neq 0$, that \begin{equation}\label{der-varphi-0} \varphi_{m, m}'(0) \neq 0 \end{equation} provided \eqref{main-assumption} holds (see \Cref{pro-chi}), where $\varphi_{m, m}$ is determined in \Cref{pro1}. We begin with \begin{lemma} \label{lemE} Let $L \in {\mathcal N}$ and $1 \le m \le n_L$ with $p_m \neq 0$. Set \begin{equation} E_m : = \sum_{j=1}^3 \frac{\eta_{j+1, m} - \eta_{j, m}}{\eta_{j+2, m}}. \end{equation} We have \begin{equation}\label{lemE-cl1} D_{m, m} = - \chi_{m, m}(0) = \frac{1}{3} E_m^2, \end{equation} and \begin{equation}\label{lemE-cl2} E_m = - \frac{27 k_m l_m (k_m+l_m)}{(k_m+2l_m) (2 k_m + l_m) (k_m-l_m)} \neq 0. \end{equation} \end{lemma} \begin{proof} It is clear from \eqref{pro1-fm1m2} that $$ D_{m, m} = - \chi_{m, m}(0) = \frac{1}{3} E_m^2. $$ With the notation $\gamma_{j, m} = L \eta_{j, m}/ (2 \pi i)$, we have \begin{equation}\label{lem-E-gamma} \gamma_{1, m} = - \frac{2 k_m + l_m}{3}, \quad \gamma_{2, m} = \frac{k_m -l_m}{3}, \quad \gamma_{3, m} = \frac{k_m+ 2 l_m}{3}. \end{equation} It follows that \begin{equation*} E_m = \sum_{j=1}^3 \frac{\gamma_{j+1, m} - \gamma_{j, m}}{\gamma_{j+2, m}} = \frac{3k_m}{k_m + 2 l_m } - \frac{3 l_m }{2 k_m + l_m} - \frac{3 (k_m+l_m)}{k_m -l_m}.
\end{equation*} Since \begin{multline*} k_m(2k_m + l_m) (k_m - l_m) - l_m(k_m + 2 l_m) (k_m - l_m) - (k_m + l_m) (k_m + 2 l_m) (2 k_m + l_m) \\[6pt] = 2 (k_m^2 - l_m^2) (k_m - l_m) - (k_m + l_m) (k_m + 2 l_m) (2 k_m + l_m) \\[6pt] = (k_m + l_m) \Big( 2 k_m^2 - 4 k_m l_m + 2 l_m^2 - 2 k_m^2 - 5 k_m l_m - 2 l_m^2 \Big) = - 9 k_m l_m (k_m + l_m), \end{multline*} we derive that \begin{equation*} E_m = - \frac{27 k_m l_m (k_m+l_m)}{(k_m+2l_m) (2 k_m + l_m) (k_m-l_m)} \neq 0. \end{equation*} The proof is complete. \end{proof} We next show in \Cref{lem-2/3,lem-2pm} below that for $L \in {\mathcal N}$ and for $1 \le m \le n_L$ with $p_m \neq 0$, it holds $$ 2 p_m \neq 2 / (3 \sqrt{3}) \quad \mbox{ and } \quad 2 p_m \not \in {\mathcal P}_L. $$ As a consequence, $\varphi_{m, m}$ is constructed via 1) and 4) in \Cref{pro1}. We begin with \begin{lemma}\label{lem-2/3} Let $L \in {\mathcal N}$ and $1 \le m \le n_L$. Then $$ 2 p_m \neq 2 / (3 \sqrt{3}). $$ \end{lemma} \begin{proof} We first claim that there are no $k, l \in \mathbb{N}_*$ with $k \ge l $ such that \begin{equation}\label{lem-2/3-p1} (2k + l) (2 l + k) (k-l) = (k^2 + l^2 + kl)^{3/2}. \end{equation} We prove this by contradiction. Assume that there exists such a pair $(k, l)$. Set $$ H = \Big\{(k, l) \in \mathbb{N}_*\times \mathbb{N}_*, \; k \ge l, \mbox{ and \eqref{lem-2/3-p1} holds}\Big\}. $$ Set $$ h = \min \Big\{ k + l; (k, l) \in H \Big\} > 0. $$ Fix $(k, l) \in H$ such that $k + l = h$. Since $$ (2k + l) (2 l + k) (k-l) \mbox{ is even}, $$ it follows from \eqref{lem-2/3-p1} that $k^2 + l^2 + kl $ is even. Hence both $k$ and $l$ are even. We write $k = 2 k_1$ and $l = 2 l_1$ for some $k_1, l_1 \in \mathbb{N}_*$. It is clear that $$ k_1 \ge l_1, $$ and \begin{equation*} (2k_1 + l_1) (2 l_1 + k_1) (k_1-l_1) = (k_1^2 + l_1^2 + k_1l_1)^{3/2}. \end{equation*} This implies $$ (k_1, l_1) \in H. $$ We have $$ k_1 + l_1 = (k+ l)/ 2 = h/2 \quad \mbox{ and } \quad h > 0. $$ This contradicts the definition of $h$.
The claim is proved. \medskip We are ready to derive the conclusion of \Cref{lem-2/3}. Since $2 p_m = 2 / (3 \sqrt{3})$ for some $1 \le m \le n_L$ and for some $L \in {\mathcal N}$ if and only if, by the definition of $p_m$ in \eqref{def-pm}, $$ (2k_m + l_m)(k_m - l_m)(2 l_m + k_m) = (k_m^2 + l_m^2 + k_m l_m)^{3/2}, $$ the conclusion follows from the claim. \end{proof} We next prove \begin{lemma}\label{lem-2pm} There is no quadruple $(k_1, l_1, k_2, l_2) \in \mathbb{N}_*^4$ satisfying the system \begin{equation}\label{sys-kl} \left\{\begin{array}{c} k_1 > l_1, \quad k_2 > l_2, \\[6pt] k_1^2 + k_1 l_1 + l_1^2 = k_2^2 + k_2 l_2 + l_2^2, \\[6pt] (2k_2 + l_2) (2 l_2 + k_2) (k_2 - l_2) = 2 (2k_1 + l_1) (2 l_1 + k_1) (k_1 - l_1). \end{array}\right. \end{equation} Consequently, for $L \in {\mathcal N}$ and $1 \le m \le n_L$, we have \begin{equation}\label{lem-2pm-cl2} 2 p_m \not \in {\mathcal P}_L \mbox{ if } p_m \neq 0. \end{equation} \end{lemma} \begin{proof} We prove the non-existence by contradiction. Assume that there exists a quadruple $(k_1, l_1, k_2, l_2) \in \mathbb{N}_*^4$ satisfying \eqref{sys-kl}. Set \begin{equation} G = \Big\{ (k_1, l_1, k_2, l_2) \in \mathbb{N}_*^4; \eqref{sys-kl} \mbox{ holds} \Big\}, \end{equation} and let \begin{equation} g = \min \Big\{ k_1 + l_1 + k_2 + l_2; (k_1, l_1, k_2, l_2) \in G \Big\} > 0. \end{equation} Fix $(k_1, l_1, k_2, l_2) \in G$ such that $k_1 + l_1 + k_2 + l_2 = g$. Set \begin{equation}\label{lem-2pm-A} A := k_1^2 + k_1 l_1 + l_1^2 = k_2^2 + k_2 l_2 + l_2^2 \quad (\mbox{by the second line of \eqref{sys-kl}}). \end{equation} Since, for $k, l \in \mathbb{R}$, $$ (2k + l)(2l + k) = 2 (k^2 + kl + l^2) + 3 kl \quad \mbox{ and } \quad (k-l)^2 = (k^2 + k l + l^2) - 3 k l, $$ it follows from the square of the last line of \eqref{sys-kl}, with \begin{equation}\label{def-x1x2} x_1 = 3 k_1 l_1 \quad \mbox{ and } \quad x_2 = 3 k_2 l_2, \end{equation} that $$ (2 A + x_2)^2 (A - x_2) = 4 (2 A + x_1)^2 (A - x_1). 
$$ This implies \begin{equation} (4 A^3 - 3A x_2^2 - x_2^3) = 4(4 A^3 - 3A x_1^2 - x_1^3), \end{equation} or equivalently \begin{equation}\label{lem-2pm-p1} 12 A^3 = 3 A (4 x_1^2 - x_2^2) + 4 x_1^3 - x_2^3. \end{equation} Since, by \eqref{def-x1x2}, the right-hand side of \eqref{lem-2pm-p1} is divisible by $27$, we derive that $A^3 = 0 \mod 3$, which yields $$ A = 0 \mod 3. $$ Putting this information into \eqref{lem-2pm-p1} and using again \eqref{def-x1x2}, we obtain $$ x_1^3 - x_2^3 = 0 \mod 3^4. $$ We deduce from \eqref{def-x1x2} that \begin{equation}\label{lem-2pm-p2} (k_1 l_1)^3 - (k_2 l_2)^3 = 0 \mod 3. \end{equation} By writing $k_1 l_1$ under the form $k_2 l_2 + 3 q + r$ with $q \in \mathbb{Z}$ and $r \in \mathbb{N}$ with $0 \le r \le 2$, we have \begin{equation}\label{lem-2pm-p3} (k_1 l_1)^3 - (k_2 l_2)^3 = 3 k_2^2 l_2^2 (3 q + r) + 3 k_2 l_2 (3 q + r)^2 + (3 q + r)^3. \end{equation} Combining \eqref{lem-2pm-p2} and \eqref{lem-2pm-p3} yields that $r = 0$. Putting this information into \eqref{lem-2pm-p1}, we obtain $$ A^3 = 0 \mod 3^4. $$ This implies $$ A = 0 \mod 9. $$ We deduce from \eqref{lem-2pm-A} that $$ k_1 = 0 \mod 3, \quad l_1 = 0 \mod 3, \quad k_2 = 0 \mod 3, \quad l_2 = 0 \mod 3. $$ Let $\hat k_1, \hat l_1, \hat k_2, \hat l_2 \in \mathbb{N}_*$ be such that $$ k_1 = 3 \hat k_1, \quad l_1 = 3 \hat l_1, \quad k_2 = 3 \hat k_2, \quad l_2 = 3 \hat l_2. $$ One can easily check that $(\hat k_1, \hat l_1, \hat k_2, \hat l_2) \in G$ and $$ \hat k_1 + \hat l_1 + \hat k_2 + \hat l_2 = g/3 < g. $$ We obtain a contradiction. The non-existence associated with \eqref{sys-kl} is proved. \medskip It is clear that \eqref{lem-2pm-cl2} is just a consequence of the non-existence by the definition of $L$ and $p_m$ as a function of $k_m$ and $l_m$ in \eqref{def-L} and \eqref{def-pm}. The proof is complete. \end{proof} We are ready to state and prove the main result of this section: \begin{proposition}\label{pro-chi} Let $L \in {\mathcal N}$ and $1 \le m \le n_L$. 
Then \begin{equation}\label{pro-chi-cl1} \varphi_{m, m}' (0) = 4 \pi L = - \phi_{m, m}'(0) \mbox{ if } p_m = 0, \end{equation} and, if $p_m \neq 0$ and $s_m \neq 0$ then \begin{equation}\label{pro-chi-cl2} \varphi_{m, m}' (0) \neq 0. \end{equation} \end{proposition} \begin{proof} Assertion \eqref{pro-chi-cl1} follows immediately from 4) of \Cref{pro1,pro1-Co}. We next consider the case $p_m \neq 0$. By \Cref{lem-2/3,lem-2pm}, we have $$ \varphi_{m, m}'(0) = 0 $$ only if, with $\alpha = e^{2 \eta_{2, m} L}$ and $\lambda_j = \lambda_j (2 p_m)$, \begin{equation} \left\{\begin{array}{c} \sum_{j=1}^3 \lambda_j a_j = 0 \quad (= \varphi_{m, m}' (0) \mbox{ since $\chi_{m, m}'(0) = 0$}), \\[6pt] \sum_{j=1}^3 \lambda_j e^{\lambda_j L} a_j = 0 \quad (= \varphi_{m, m}'(L) \mbox{ since $\chi_{m, m}'(L) = 0$} ), \\[6pt] \sum_{j=1}^3 (e^{\lambda_j L} - \alpha ) a_j = 0 \quad (= - \chi_{m, m}(L) + \alpha \chi_{m, m}(0) \mbox{ since $\chi_{m, m}(L) = \alpha \chi_{m, m } (0)$} ). \end{array}\right. \end{equation} Since $E_m \neq 0$ by \Cref{lemE}, one has a non-trivial solution $(a_1, a_2, a_3)$ of this system. This implies \begin{equation}\label{pro-chi-p1} \det K_1 = 0 \quad \mbox{ where } K_1 : = \quad \left(\begin{array}{ccc} \lambda_1 & \lambda_2 & \lambda_3 \\[6pt] \lambda_1 e^{\lambda_1 L} & \lambda_2 e^{\lambda_2 L} & \lambda_3 e^{\lambda_3 L} \\[6pt] e^{\lambda_1 L} - \alpha & e^{\lambda_2 L} - \alpha & e^{\lambda_3 L} - \alpha \end{array}\right). \end{equation} Set $$ \hat \lambda_j = \lambda_j L. $$ Condition \eqref{pro-chi-p1} is equivalent to \begin{equation}\label{pro-chi-p2} \det K_2 = 0 \quad \mbox{ where } \quad K_2 : = \left(\begin{array}{ccc} \hat \lambda_1 & \hat \lambda_2 & \hat \lambda_3 \\[6pt] \hat \lambda_1 e^{\hat \lambda_1} & \hat \lambda_2 e^{\hat \lambda_2} & \hat \lambda_3 e^{\hat \lambda_3} \\[6pt] e^{\hat \lambda_1} - \alpha & e^{\hat \lambda_2} - \alpha & e^{\hat \lambda_3} - \alpha \end{array}\right). 
\end{equation} A computation yields \begin{equation*} \det K_2 = \sum_{j=1}^3 \hat \lambda_j \Big( ( \hat \lambda_{j+1} - \hat \lambda_{j+2}) e^{\hat \lambda_{j+1} + \hat \lambda_{j+2}} - \alpha (\hat \lambda_{j+1}e^{\hat \lambda_{j+1}} - \hat \lambda_{j+2}e^{\hat \lambda_{j+2}} ) \Big), \end{equation*} which implies \begin{equation}\label{det-K2} \det K_2 = \sum_{j=1}^3 \hat \lambda_j ( \hat \lambda_{j+1} - \hat \lambda_{j+2}) \Big( e^{- \hat \lambda_{j}} + \alpha e^{\hat \lambda_{j}} \Big). \end{equation} Here we used the fact that $\sum_{j=1}^3 \hat \lambda_j = L \sum_{j=1}^3 \lambda_j =0 $. From the definition of $\lambda_j = \lambda_{j} (2 p_m)$ given in \Cref{def1}, we have \begin{equation*} \left\{\begin{array}{c} \hat \lambda_1 + \hat \lambda_2 + \hat \lambda_3 = 0, \\[6pt] \hat \lambda_1 \hat \lambda_2+ \hat \lambda_1 \hat \lambda_3 + \hat \lambda_2 \hat \lambda_3 = L^2, \\[6pt] \hat \lambda_1 \hat \lambda_2 \hat \lambda_3 = 2 i p_m L^3. \end{array}\right. \end{equation*} Define $\sigma_{j, m}$ by $$ \hat \lambda_{j} = \frac{2 \pi i \sigma_{j, m}}{3}. $$ We then have \begin{equation*} \left\{\begin{array}{c} \sigma_{1, m} + \sigma_{2, m} + \sigma_{3, m} = 0, \\[6pt] \sigma_{1, m} \sigma_{2, m} + \sigma_{1, m} \sigma_{3, m} + \sigma_{2, m} \sigma_{3, m} = - 3 (k_m^2 + l_m^2 + k_m l_m), \\[6pt] \sigma_{1, m} \sigma_{2, m} \sigma_{3, m} = - 2 (2k_m + l_m) (2 l_m + k_m) (k_m - l_m), \end{array}\right. \end{equation*} where in the last identity, we used the fact that $$ p_m L^3 = \frac{1}{27} (2 \pi)^3 (2k_m + l_m) (2 l_m + k_m) (k_m - l_m). $$ One can check that $\det K_2 \neq 0$ when \eqref{main-assumption} holds; this contradicts \eqref{pro-chi-p2}, and \eqref{pro-chi-cl2} follows. The proof is complete. \end{proof} \section{Useful properties related to quasi-periodic functions} In this section, we derive some properties of $W_x(\cdot, 0)$, given in the introduction, using the theory of quasi-periodic functions. The main result of this section is \Cref{pro-quasi}. We begin with its weaker version. 
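Before turning to the quasi-periodic setting, we note that the elementary algebra behind \Cref{lemE} and \Cref{lem-2/3} is easy to check by machine. The following hypothetical Python snippet (it plays no role in the proofs; ranges are chosen arbitrarily) verifies the closed form \eqref{lemE-cl2} with exact rational arithmetic, and confirms by brute force that the squared form of \eqref{lem-2/3-p1} has no small solutions:

```python
# Hypothetical sanity check (not part of the proofs): verify the closed form
# for E_m in the lemma above with exact arithmetic, and confirm by brute force
# that (2k+l)(2l+k)(k-l) = (k^2+l^2+kl)^{3/2} has no small solutions.
from fractions import Fraction

def E(k, l):
    # E_m = 3k/(k+2l) - 3l/(2k+l) - 3(k+l)/(k-l), assuming k > l >= 1
    return (Fraction(3 * k, k + 2 * l) - Fraction(3 * l, 2 * k + l)
            - Fraction(3 * (k + l), k - l))

def E_closed(k, l):
    # claimed closed form: -27 k l (k+l) / ((k+2l)(2k+l)(k-l))
    return Fraction(-27 * k * l * (k + l),
                    (k + 2 * l) * (2 * k + l) * (k - l))

# exact agreement for a range of pairs k > l >= 1
assert all(E(k, l) == E_closed(k, l) for k in range(2, 40) for l in range(1, k))

# squared form of the Diophantine claim: no solutions with 1 <= l <= k < 300
sols = [(k, l) for k in range(1, 300) for l in range(1, k + 1)
        if ((2*k + l) * (2*l + k) * (k - l)) ** 2 == (k*k + l*l + k*l) ** 3]
assert sols == []
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the first check, and squaring \eqref{lem-2/3-p1} keeps the second check entirely within integers.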
\begin{lemma}\label{lem-quasi} Let $\ell \in \mathbb{N}_*$, $a_j \in \mathbb{C}$, $q_j \ge 0$ for $1 \le j \le \ell$, and $M_{j_1, j_2}, N_{j_1, j_2} \in \mathbb{C}$ with $1 \le j_1, j_2 \le \ell$. Assume that \begin{equation}\label{lem-quasi-A1} \left\{\begin{array}{c} q_{j_1} \neq q_{j_2} \mbox{ for } 1 \le j_1 \neq j_2 \le \ell, \\[6pt] M_{j, j} \neq 0 \mbox{ for } 1 \le j \le \ell, \\[6pt] (M_{j, j} \mbox{ is real and } N_{j, j} \neq 0) \mbox{ if } q_j = 0, \\[6pt] a_{j} \in i \mathbb{R} \mbox{ if } q_j =0, \end{array}\right. \end{equation} and \begin{equation}\label{lem-quasi-A2} \sum_{j=1}^\ell |a_{j}|^2 > 0. \end{equation} Set, for $t \in \mathbb{R}$, \begin{multline}\label{lem-quasi-g} g(t) \\[6pt]:= \sum_{j_1 = 1}^\ell \sum_{j_2 = 1}^\ell \Big( a_{j_1} a_{j_2} M_{j_1, j_2} e^{- i (q_{j_1} + q_{j_2}) t} + \bar a_{j_1} \bar a_{j_2} \bar M_{j_1, j_2} e^{ i (q_{j_1} + q_{j_2}) t} + 2 a_{j_1} \bar a_{j_2} N_{j_1, j_2} e^{-i (q_{j_1} - q_{j_2}) t} \Big). \end{multline} There exists $t \in \mathbb{R}_+$ such that \begin{equation}\label{lem-quasi-cl} g(t) \neq 0. \end{equation} \end{lemma} \begin{proof} We prove \eqref{lem-quasi-cl} by induction on $\ell$. It is clear that the conclusion holds for $\ell=1$. Indeed, if $q_1 \neq 0$, then since the functions $e^{2 i q_1 t}$, $1$, and $e^{-2 i q_1 t}$ are linearly independent and $M_{1, 1} \neq 0$, the conclusion follows. Otherwise, $q_1 = 0$. Since $M_{1, 1}$ is real and $a_{1} \in i \mathbb{R}$, we have $$ g(t) = 2 |a_1|^2 N_{1, 1}. $$ The conclusion in the case $\ell = 1$ follows since $N_{1, 1} \neq 0$. Assume that the conclusion holds for some $\ell \ge 1$; we prove that it holds for $\ell +1$. Without loss of generality, one may assume that \begin{equation}\label{lem-quasi-q} 0 \le q_1 < q_2 < \dots < q_{\ell} < q_{\ell +1}. \end{equation} We will prove \eqref{lem-quasi-cl} for $\ell+1$ by contradiction. 
Assume that there exist $a_j$ and $q_j \ge 0$ with $1 \le j \le \ell +1$, $M_{j_1, j_2}, \, N_{j_1, j_2} \in \mathbb{C}$ with $1 \le j_1, j_2 \le \ell+1$ such that \eqref{lem-quasi-A1}, \eqref{lem-quasi-A2}, and \eqref{lem-quasi-q} hold, and, for all $t \in \mathbb{R}_+$, \begin{equation}\label{lem-quasi-p1} \sum_{j_1 = 1}^{\ell+1} \sum_{j_2 = 1}^{\ell + 1} \Big( a_{j_1} a_{j_2} M_{j_1, j_2} e^{- i (q_{j_1} + q_{j_2}) t} + \bar a_{j_1} \bar a_{j_2} \bar M_{j_1, j_2} e^{ i (q_{j_1} + q_{j_2}) t} + 2 a_{j_1} \bar a_{j_2} N_{j_1, j_2} e^{-i (q_{j_1} - q_{j_2}) t} \Big) = 0. \end{equation} Since the function $e^{- 2 i q_{\ell+1} t}$ defined in $\mathbb{R}_+$ does not belong to the space \begin{multline*} \mbox{span} \left( \Big \{e^{-it (q_{j_1} + q_{j_2})}; 1 \le j_1 \le \ell +1; 1 \le j_2 \le \ell \Big\}, \right. \\[6pt] \left. \Big \{e^{it (q_{j_1} + q_{j_2})}; 1 \le j_1 \le \ell +1; 1 \le j_2 \le \ell + 1 \Big\}, \right. \\[6pt] \left. \Big \{e^{-it (q_{j_1} - q_{j_2})}; 1 \le j_1 \le \ell +1; 1 \le j_2 \le \ell +1 \Big\} \right), \end{multline*} for $t \in \mathbb{R}_+$ by \eqref{lem-quasi-q}, we have $$ a_{\ell+1}^2 M_{\ell +1, \ell +1} = 0. $$ This yields, since $M_{\ell+1, \ell+1} \neq 0$, $$ a_{\ell+1} = 0. $$ It follows from \eqref{lem-quasi-p1} that \begin{equation} \sum_{j_1 = 1}^{\ell} \sum_{j_2 = 1}^{\ell} \Big( a_{j_1} a_{j_2} M_{j_1, j_2} e^{- i (q_{j_1} + q_{j_2}) t} + \bar a_{j_1} \bar a_{j_2} \bar M_{j_1, j_2} e^{ i (q_{j_1} + q_{j_2}) t} + 2 a_{j_1} \bar a_{j_2} N_{j_1, j_2} e^{-i (q_{j_1} - q_{j_2}) t} \Big) = 0. \end{equation} We can now apply the induction hypothesis to obtain a contradiction. The proof of \eqref{lem-quasi-cl} is complete. \end{proof} Using \Cref{lem-quasi} and the theory of quasi-periodic functions, see e.g. \cite{Bohr47}, we can derive the following useful result for the proof of \Cref{thm1}. 
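Before stating the quantitative version, the mechanism of \Cref{lem-quasi} in the base case $\ell = 1$ with $q_1 \neq 0$ can be illustrated numerically. The following hypothetical Python sketch (coefficients chosen arbitrarily; it plays no role in the arguments) samples the corresponding function $g$ on a window $[\tau, 2\tau]$ and observes that its sup-norm stays away from $0$:

```python
# Hypothetical illustration (not part of the proofs) of the base case l = 1
# of the lemma above: with q1 > 0, a1 != 0, M11 != 0,
#   g(t) = a1^2 M11 e^{-2i q1 t} + conj(a1)^2 conj(M11) e^{2i q1 t} + 2|a1|^2 N11
# cannot vanish identically, since e^{-2i q1 t}, e^{2i q1 t}, and 1 are
# linearly independent functions of t.
import cmath

# arbitrary sample coefficients satisfying the assumptions
q1, a1, M11, N11 = 1.3, 0.7 + 0.2j, -0.5 + 0.1j, 0.4

def g(t):
    return (a1**2 * M11 * cmath.exp(-2j * q1 * t)
            + a1.conjugate()**2 * M11.conjugate() * cmath.exp(2j * q1 * t)
            + 2 * abs(a1)**2 * N11)

# sample |g| densely on a far-out window [tau, 2*tau]
tau = 50.0
vals = [abs(g(tau + k * tau / 2000)) for k in range(2001)]

# the sup over the window stays bounded away from 0
assert max(vals) > 0.1
```

Here the first two terms are complex conjugates of each other and $N_{1,1}$ is taken real, so $g$ is real-valued; the window length covers many oscillation periods $\pi/q_1$, which is why dense sampling captures the sup-norm.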
\begin{proposition}\label{pro-quasi} Let $\ell \in \mathbb{N}_*$, $a_j \in \mathbb{C}$, $q_j \ge 0$ for $1 \le j \le \ell$, and $M_{j_1, j_2}, N_{j_1, j_2} \in \mathbb{C}$ with $1 \le j_1, j_2 \le \ell$. Assume that \eqref{lem-quasi-A1} holds and define $g$ by \eqref{lem-quasi-g}. For all $0 < \gamma_1 < \gamma_2 $ there exist $\gamma_0>0$ and $\tau_0 > 0$ depending only on $\gamma_1$, $\gamma_2$, $\ell$, $q_j$, $M_{j_1, j_2}$, and $N_{j_1, j_2}$ such that if \begin{equation} \gamma_1 \le \sum_{j=1}^\ell |a_j|^2 \le \gamma_2, \end{equation} then \begin{equation}\label{pro-cl} \| g \|_{L^2(\tau, 2 \tau)} \ge \gamma_0 \mbox{ for all } \tau \ge \tau_0. \end{equation} \end{proposition} \begin{proof} Since $|g'(t)| \le C$ in $\mathbb{R}$, instead of \eqref{pro-cl}, it suffices to prove \begin{equation}\label{pro-cl1} \| g \|_{L^\infty(\tau, 2 \tau)} \ge \gamma_0 \mbox{ for } \tau \ge \tau_0. \end{equation} We prove \eqref{pro-cl1} by contradiction. Assume that for all $n \in \mathbb{N}_*$ there exist $(a_{j, n})_{j=1}^\ell \subset \mathbb{C}$ and $(t_n) \subset \mathbb{R}$ such that $\gamma_1 \le \sum_{j=1}^\ell |a_{j, n}|^2 \le \gamma_2$, $t_n \ge n$, and \begin{equation}\label{pro-p1} \|g_n\|_{L^\infty(t_n, 2t_n)} \le 1/n, \end{equation} where $g_n$ is defined by \eqref{lem-quasi-g} with $a_{j_1}$ and $a_{j_2}$ replaced by $a_{j_1, n}$ and $a_{j_2, n}$. Without loss of generality, after passing to a subsequence, one may assume that $$ \lim_{n \to + \infty} a_{j, n} = a_j \in \mathbb{C} $$ and $\gamma_1 \le \sum_{j=1}^\ell |a_j|^2 \le \gamma_2$. Consider $g$ defined by \eqref{lem-quasi-g} with these $a_j$. We have \begin{equation}\label{pro-p2} \lim_{n \to + \infty} \| g_n - g\|_{L^\infty(\mathbb{R})} = 0. \end{equation} Since $g$ is an almost-periodic function with respect to $t$ (see e.g. \cite[Corollary on page 38]{Bohr47}), it follows from the definition of almost-periodic functions, see e.g. 
\cite[Section 44 on pages 32 and 33]{Bohr47}, that for every $\varepsilon > 0$, there exists ${\mathcal L}_\varepsilon > 0$ such that every interval $(\alpha, \alpha + {\mathcal L}_\varepsilon)$ contains a number $\tau(\varepsilon, \alpha)$ for which \begin{equation}\label{pro-ap} |g(t + \tau(\varepsilon, \alpha) ) - g(t)| \le \varepsilon \mbox{ for all } t \in \mathbb{R}. \end{equation} The proof is now divided into two cases. \medskip \noindent{\it Case 1:} $\liminf_{\varepsilon \to 0 } {\mathcal L}_\varepsilon < + \infty$. Denote ${\mathcal L}_0= \liminf_{\varepsilon \to 0} {\mathcal L}_\varepsilon$. We claim that $g$ is $T$-periodic for some period $T \le {\mathcal L}_0 + 1$. Indeed, by \eqref{pro-ap} applied with $\alpha = 1/2$, there exists a sequence $(\tau_n) \subset (1/2, {\mathcal L}_0 +1)$ such that, for large $n$, $$ |g(t+ \tau_n) - g(t)| \le 1/n \mbox{ for all } t \in \mathbb{R}. $$ By choosing $T = \liminf_{n \to + \infty} \tau_n$, we have $$ g(t + T) = g(t) \mbox{ for all } t \in \mathbb{R}. $$ The claim is proved. Since $g$ is $T$-periodic, we have $$ \|g \|_{L^\infty(t_n, t_n+ T + 1)} = \|g \|_{L^\infty(0, T + 1)} \mbox{ for } n \in \mathbb{N}_*, $$ and since $g$ is analytic and $g \neq 0$ by \Cref{lem-quasi}, we obtain $$ \|g \|_{L^\infty(0, T + 1)} > 0. $$ This contradicts \eqref{pro-p1} and \eqref{pro-p2}. The proof of Case 1 is complete. \medskip \noindent{\it Case 2:} $\lim_{\varepsilon \to 0} {\mathcal L}_\varepsilon = + \infty$. Set \begin{equation}\label{pro-quasi-rho} \rho = \|g \|_{L^\infty(0, 1)}. \end{equation} It follows from \Cref{lem-quasi} that $g$ is not identically equal to 0. Since $g$ is analytic, we derive that \begin{equation}\label{pro-quasi-rho>0} \rho > 0. \end{equation} Let $n_0 \ge 2$ be such that $$ \|g_n - g \|_{L^\infty(\mathbb{R})} < \rho/4, \quad \| g_n\|_{L^\infty(t_n, 2 t_n)} < \rho / 4 \mbox{ for } n \ge n_0. $$ Such an $n_0$ exists by \eqref{pro-p1}, \eqref{pro-p2}, and \eqref{pro-quasi-rho>0}. 
We have, for $n \ge n_0$, \begin{equation}\label{pro-p3} \|g \|_{L^\infty(t_n, 2 t_n)} \le \|g_n - g\|_{L^\infty(t_n, 2 t_n)} + \|g_n\|_{L^\infty(t_n, 2 t_n)} \le \rho/4 + \rho/4 = \rho/2. \end{equation} Fix $0< \varepsilon < \rho/4$ and fix $n \ge n_0$ such that $1 \le {\mathcal L}_\varepsilon \le t_n/2$. Such a number $n$ exists since $t_n \ge n$. It follows from the definition of $\tau( \varepsilon, t_n)$ that \begin{equation}\label{pro-tau1} \tau(\varepsilon, t_n) \in (t_n, t_n + {\mathcal L}_\varepsilon) \subset (t_n, 3 t_n/2), \end{equation} and \begin{equation}\label{pro-tau2} \big|g\big(t + \tau(\varepsilon, t_n)\big) - g(t)\big| \le \varepsilon \mbox{ for all } t \in \mathbb{R}. \end{equation} This yields \begin{equation}\label{pro-p4} \|g \|_{L^\infty(t_n, 2 t_n)} \mathop{\ge}^{\eqref{pro-tau1}} \|g \|_{L^\infty(\tau(\varepsilon, t_n), \tau(\varepsilon, t_n) + 1)} \mathop{\ge}^{\eqref{pro-tau2}} \|g \|_{L^\infty(0, 1)} - \varepsilon \ge \rho - \rho/4 = 3 \rho/4. \end{equation} Combining \eqref{pro-p3} and \eqref{pro-p4} yields a contradiction since $\rho > 0$ by \eqref{pro-quasi-rho>0}. The proof of Case 2 is complete. \end{proof} \section{An upper bound for the decay rate - Proof of \Cref{thm1}} \label{sect-thm1} This section, which consists of two subsections, is devoted to the proof of \Cref{thm1}. The main ingredient is given in the first subsection, and the proof is presented in the second one. \subsection{A key lemma} In this subsection, we prove \begin{lemma}\label{lemK} Let $L \in {\mathcal N}$. Assume that $\dim {\mathcal M} = 1$ or \eqref{main-assumption} holds. 
There exist $\varepsilon_0 > 0$, $C>0$, and $T_0 > 0$ depending only on $L$ such that for all (real) $u_0 \in L^2(0, L)$ with $\| u_0 \|_{L^2(0, L)} \le \varepsilon_0$, the unique solution $u \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ of system \eqref{KdV-NL} satisfies \begin{equation}\label{decayK} \| u(T, \cdot) \|_{L^2(0, L)} \le \| u_0 \|_{L^2(0, L)} \Big(1 - C \| u_0 \|_{L^2(0, L)}^2 \Big) \mbox{ for } T \ge T_0. \end{equation} \end{lemma} \begin{proof} We first collect several known facts. Let $T_1 > 0$ be such that \begin{equation}\label{lemK-T1} \| v_x(\cdot, 0) \|_{L^2(0, t)} \ge \frac{1}{2} \| v(0, \cdot) \|_{L^2(0, L)} \mbox{ for } t \ge T_1, \end{equation} for all solutions $v \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ of the system \begin{equation}\left\{ \begin{array}{cl} v_t (t, x) + v_x (t, x) + v_{xxx} (t, x) = 0 & \mbox{ in } (0, +\infty) \times (0, L), \\[6pt] v(t, x=0) = v(t, x=L) = v_x(t , x= L)= 0 & \mbox{ in } (0, +\infty), \end{array}\right. \end{equation} with $v(0, \cdot) \in L^2(0, L)$ satisfying the condition $$ v(0, \cdot) \perp {\mathcal M} $$ (the orthogonality is considered with respect to $L^2(0, L)$-scalar product). The existence of such a constant $T_1$ follows from \cite{Rosier97}. There exist two positive constants $\varepsilon_0$ and $c_1$ such that if $\| u_0 \|_{L^2(0, L)} \le \varepsilon_0$, then \begin{equation}\label{lemK-c1} \| u \|_{C\big([0, T_1]; L^2(0, L) \big)} + \| u \|_{L^2\big( (0, T_1); H^1(0, L) \big)} \le c_1 \| u_0\|_{L^2(0, L)} \end{equation} (see e.g., \cite[Proposition 14]{CC04}). 
There is a positive constant $c_2$ such that if $\widetilde u_0 \in L^2(0, L)$, $\widetilde f \in L^1\big((0, T_1); L^2(0, L) \big)$, and $\widetilde u \in C \big([0, T_1]; L^2(0, L) \big) \cap L^2 \big((0, T_1); H^1(0, L) \big) $ is the unique solution of the system \begin{equation}\left\{ \begin{array}{cl} \widetilde u_t (t, x) + \widetilde u_x (t, x) + \widetilde u_{xxx} (t, x) = \widetilde f & \mbox{ in } (0, T_1) \times (0, L), \\[6pt] \widetilde u(t, x=0) = \widetilde u(t, x=L) = \widetilde u_x(t , x= L)= 0 & \mbox{ in } (0, T_1), \\[6pt] \widetilde u(t= 0, \cdot) = \widetilde u_0 & \mbox{ in } (0, L), \end{array}\right. \end{equation} then \begin{multline}\label{lemK-c2} \| \widetilde u_x(\cdot, 0) \|_{L^2(0, T_1)} + \| \widetilde u \|_{C\big([0, T_1]; L^2(0, L) \big)} + \| \widetilde u \|_{L^2\big( (0, T_1); H^1(0, L) \big)} \\[6pt] \le c_2 \Big( \| \widetilde u_0\|_{L^2(0, L)} + \|\widetilde f\|_{L^1\big((0, T_1); L^2(0, L) \big)} \Big). \end{multline} There exists a positive constant $c_3$ depending only on $L$ such that, for {\it all} $T>0$, \begin{equation}\label{lemK-c3} \| \xi \xi_x \|_{L^1\big( (0, T); L^2(0, L) \big)} \le c_3 \| \xi \|_{L^2\big( (0, T); H^1(0, L) \big)}^2 \end{equation} (the constant $c_3$ is independent of $T$). \medskip We now decompose $u_0$ into two parts: \begin{equation}\label{thm-u0-decomp} u_0 = u_{0, 1} + u_{0, 2} \mbox{ in } (0, L), \end{equation} where \begin{equation*} u_{0, 1} = \mbox{Projection}_{{\mathcal M}} u_0 \end{equation*} with respect to the $L^2(0, L)$-scalar product. \medskip The proof is now divided into two cases, with $0< \varepsilon = \| u_{0} \|_{L^2(0, L)} < \varepsilon_0$ (the conclusion is clear if $\varepsilon =0$), \begin{itemize} \item Case 1: $\| u_{0, 2} \|_{L^2(0, L)} \ge \beta \varepsilon^2 = \beta \| u_0\|_{L^2(0, L)}^2$, \item Case 2: $\| u_{0, 2} \|_{L^2(0, L)} < \beta \varepsilon^2 = \beta \| u_0\|_{L^2(0, L)}^2$, \end{itemize} where \begin{equation}\label{def-beta} \beta = 4c_1^2 c_2 c_3. 
\end{equation} \medskip \noindent{\it Case 1}: Assume that \begin{equation}\label{Case1} \| u_{0, 2} \|_{L^2(0, L)} \ge \beta \varepsilon^2 = \beta \| u_0\|_{L^2(0, L)}^2. \end{equation} Let $\hat u \in C \big([0, T_1]; L^2(0, L) \big) \cap L^2 \big((0, T_1); H^1(0, L) \big)$ be the unique solution of \begin{equation}\label{lemK-hu}\left\{ \begin{array}{cl} \hat u_t (t, x) + \hat u_x (t, x) + \hat u_{xxx} (t, x) = 0 & \mbox{ in } (0, T_1) \times (0, L), \\[6pt] \hat u(t, 0) = \hat u(t, L) = \hat u_x(t , L)= 0 & \mbox{ in } (0, T_1), \\[6pt] \hat u(0, \cdot) = u_0 & \mbox{ in } (0, L). \end{array}\right. \end{equation} Then \begin{equation}\label{lemK-c1c2} \|(\hat u - u)_x(\cdot, 0) \|_{L^2(0, T_1)} \mathop{\le}^{\eqref{lemK-c2}} c_2 \| u u_x \|_{L^1 \big((0, T_1) ; L^2(0, L) \big)} \mathop{\le}^{\eqref{lemK-c1}, \eqref{lemK-c3}} c_1^2 c_2 c_3 \varepsilon^2. \end{equation} Let $\hat u_j \in C \big([0, T_1]; L^2(0, L) \big) \cap L^2\big((0, T_1); H^1(0, L) \big)$ with $j=1, \, 2$ be the unique solution of \begin{equation}\left\{ \begin{array}{cl} \hat u_{j, t} (t, x) + \hat u_{j, x} (t, x) + \hat u_{j, xxx} (t, x) = 0 & \mbox{ for } t \in (0, T_1), \, x \in (0, L), \\[6pt] \hat u_j(t, 0) = \hat u_j(t, L) = \hat u_{j, x} (t , L)= 0 & \mbox{ for } t \in (0, T_1), \\[6pt] \hat u_j(0, \cdot) = u_{0, j} & \mbox{ in } (0, L). \end{array}\right. \end{equation} Then $$ \hat u = \hat u_1 + \hat u_2 \mbox{ in } [0, T_1] \times [0, L]. $$ We have \begin{equation}\label{Case1-p1} \hat u_{1, x} (\cdot, 0) =0 \mbox{ in } [0, T_1], \end{equation} and, by the choice of $T_1$ via \eqref{lemK-T1}, \begin{equation}\label{Case1-p2} \| \hat u_{2, x} (\cdot, 0)\|_{L^2(0, T_1)} \ge \frac{1}{2} \| \hat u_2(0, \cdot)\|_{L^2(0, L)} = \frac{1}{2} \| u_{0,2}\|_{L^2(0, L)}. \end{equation} It follows from \eqref{Case1} that \begin{equation}\label{Case1-p3} \| \hat u_{x} (\cdot, 0)\|_{L^2(0, T_1)} \ge \frac{1}{2} \beta \varepsilon^2. 
\end{equation} From \eqref{lemK-c1c2} and \eqref{Case1-p3}, we obtain \begin{multline*} \| u_{x} (\cdot, 0)\|_{L^2(0, T_1)} \ge \| \hat u_{x} (\cdot, 0)\|_{L^2(0, T_1)} - \| (u - \hat u)_{x} (\cdot, 0)\|_{L^2(0, T_1)} \ge \left( \frac{1}{2} \beta - c_1^2 c_2 c_3 \right) \varepsilon^2 \mathop{\ge}^{\eqref{def-beta}} c_1^2 c_2 c_3 \varepsilon^2. \end{multline*} In other words, \begin{equation}\label{Case1-*} \| u_{x} (\cdot, 0)\|_{L^2(0, T_1)} \ge c_1^2 c_2 c_3 \| u_0\|_{L^2(0, L)}^2. \end{equation} \medskip \noindent{\it Case 2}: Assume that \begin{equation}\label{Case2} \| u_{0, 2} \|_{L^2(0, L)} < \beta \varepsilon^2 = \beta \| u_0 \|_{L^2(0, L)}^2. \end{equation} Since $$ \| u_{0, 1} \|_{L^2(0, L)}^2 + \| u_{0, 2} \|_{L^2(0, L)}^2 = \| u_{0} \|_{L^2(0, L)}^2 = \varepsilon^2, $$ by considering $\varepsilon$ sufficiently small, one can assume that $$ \| u_{0, 1} \|_{L^2(0, L)} \ge \varepsilon/2. $$ Let $\alpha_m \in \mathbb{C}$ ($1 \le m \le n_L$) be such that \begin{equation}\label{thm-def-y1} \frac{1}{\varepsilon} u_{0, 1} = \Re \left\{ \sum_{m=1}^{n_L} \alpha_m \Psi_m (0, x) \right\}. \end{equation} Since $u_{0, 1} \in {\mathcal M}$, such a family $(\alpha_m)_{m=1}^{n_L}$ exists. Since $$ 1/2 \le \| \frac{1}{\varepsilon} u_{0, 1} \|_{L^2(0, L)} \le 1 $$ and $\Big( \Psi_m (0, \cdot) \Big)$ is orthogonal in $L^2(0, L)$ (with respect to the complex field), it follows that $$ 0< \gamma_1 \le \sum_{m=1}^{n_L} |\alpha_m|^2 \le \gamma_2, $$ for some constants $\gamma_1$, $\gamma_2$ depending only on $L$. Moreover, since $\Psi_m (0, x) \in i \mathbb{R}$ for $x \in [0, L]$ if $p_m = 0$ (see e.g. \eqref{psi-complex}), one can also assume that $\alpha_m \in i \mathbb{R}$ if $p_m =0$. 
Let $\gamma_0 > 0$ and $\tau_0> 0$ be the constants given in \Cref{pro-quasi} with $\ell= n_L$, $\gamma_1$ and $\gamma_2$ determined above, $q_m= p_m$ given by \eqref{def-pm}, \begin{equation}\label{Case2-M} M_{m_1, m_2} = \frac{1}{8} \varphi_{m_1, m_2}' (0) \quad \mbox{ and } \quad N_{m_1, m_2} = \frac{1}{8} \phi_{m_1, m_2}' (0), \end{equation} where $\varphi_{m_1, m_2}$ and $\phi_{m_1, m_2}$ are defined in \Cref{pro1} and \Cref{pro1-Co}, respectively; in case the definitions of $\varphi_{m_1, m_2}$ and $\phi_{m_1, m_2}$ in \Cref{pro1} and \Cref{pro1-Co} are not unique, we fix a choice of $\varphi_{m_1, m_2}$ and $\phi_{m_1, m_2}$. By \Cref{pro-chi}, we have $$ M_{m,m} \neq 0, $$ and $$ (M_{m, m} \mbox{ is real and $N_{m, m} \neq 0$) if } p_m = 0. $$ Then, by \Cref{pro-quasi}, for all $a_j \in \mathbb{C}$ $(1 \le j \le n_L)$ satisfying $\gamma_1 \le \sum_{j=1}^{n_L} |a_j|^2 \le \gamma_2$, it holds \begin{equation}\label{Case2-pro-g} \| g \|_{L^2(\tau, 2 \tau)} \ge \gamma_0 \mbox{ for all } \tau \ge \tau_0, \end{equation} where \begin{multline}\label{Case2-def-g} g(t) = \sum_{m_1 = 1}^{n_L} \sum_{m_2 = 1}^{n_L} \Big( a_{m_1} a_{m_2} M_{m_1, m_2} e^{- i (p_{m_1} + p_{m_2}) t} \\[6pt] + \bar a_{m_1} \bar a_{m_2} \bar M_{m_1, m_2} e^{ i (p_{m_1} + p_{m_2}) t} + 2 a_{m_1} \bar a_{m_2} N_{m_1, m_2} e^{-i (p_{m_1} - p_{m_2}) t} \Big). \end{multline} Define \begin{equation}\label{Case2-defA} A = \beta + 2 \sum_{m_1 = 1}^{n_L} \sum_{m_2 = 1}^{n_L} \|\varphi_{m_1, m_2}\|_{L^2(0, L)} + 2 \sum_{m_1 = 1}^{n_L} \sum_{m_2 = 1}^{n_L} \|\phi_{m_1, m_2}\|_{L^2(0, L)}, \end{equation} and set \begin{equation}\label{Case2-def-c4} c_4 = 1/ (2 A). 
\end{equation} Let $T_2 \ge 2 \tau_0$ be such that \begin{equation}\label{Case2-T2} \| y_x(\cdot, 0) \|_{L^2(T_2/2, T_2)} \le c_4 \gamma_0 \| y(0, \cdot) \|_{L^2(0, L)}, \end{equation} for all solutions $y \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ of \begin{equation}\label{thm-sys1}\left\{ \begin{array}{cl} y_t (t, x) + y_x (t, x) + y_{xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] y(t, 0) = y(t, L) = y_x(t , L)= 0 & \mbox{ for } t \in (0, +\infty), \end{array}\right. \end{equation} with $y(0, \cdot) \in L^2(0, L)$. Note that $T_2$ is independent of $y(0, \cdot)$. The existence of $T_2$ can be proved by decomposing $y(0, \cdot) = y_1(0, \cdot) + y_2(0, \cdot)$ with $y_1(0, \cdot) \in {\mathcal M}$, noting that \eqref{Case2-T2} holds for the solution with initial datum $y_2(0, \cdot)$ since this solution decays exponentially, and that the contribution to $y_x(\cdot, 0)$ from the solution with initial datum $y_1(0, \cdot)$ is 0. Let $\widetilde u_1, \; \widetilde u_2 \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ be the unique solutions of \begin{equation}\label{sys-NL}\left\{ \begin{array}{cl} \widetilde u_{1, t} (t, x) + \widetilde u_{1, x} (t, x) + \widetilde u_{1, xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde u_1(t, 0) = \widetilde u_1(t, L) = \widetilde u_{1, x}(t , L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \displaystyle \widetilde u_1(0, \cdot ) = \frac{1}{\varepsilon} u_{0, 1} & \mbox{ in } [0, L], \end{array}\right. 
\end{equation} and \begin{equation}\label{Case2-sys}\left\{ \begin{array}{cl} \widetilde u_{2, t} (t, x) + \widetilde u_{2, x} (t, x) + \widetilde u_{2, xxx} (t, x) + \widetilde u_1 \widetilde u_{1, x} = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde u_2(t, 0) = \widetilde u_2(t, L) = \widetilde u_{2, x}(t , L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \displaystyle \widetilde u_2(0, \cdot) = \frac{1}{\varepsilon^2} u_{0, 2} & \mbox{ in } [0, L]. \end{array}\right. \end{equation} Set $$ V(t, x) = \sum_{m=1}^{n_L} \alpha_m \Psi_m(t, x) \quad \mbox{ and } \quad U(t, x) = \Re V(t, x). $$ We have \begin{equation}\label{sys-U}\left\{ \begin{array}{cl} U_t(t, x) + U_x (t, x) + U_{xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] U(t, x=0) = U(t, x=L) = U_x(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \displaystyle U(t=0, \cdot) = \frac{1}{\varepsilon} u_{0, 1} & \mbox{ in } [0, L]. \end{array}\right. \end{equation} This implies $$ \widetilde u_1 = U \mbox{ in } (0, + \infty) \times (0, L). $$ Define \begin{equation}\label{def-V1} V_1(t, x) = \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \alpha_{m_1} \alpha_{m_2} \varphi_{m_1, m_2}(x) e^{- i (p_{m_1} + p_{m_2} ) t}, \end{equation} and \begin{equation}\label{def-V2} V_2(t, x) = \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \alpha_{m_1} \bar \alpha_{m_2} \phi_{m_1, m_2}(x) e^{- i (p_{m_1} - p_{m_2} ) t}. \end{equation} Then, by the construction of $\varphi_{m_1, m_2}$, \begin{equation}\left\{ \begin{array}{cl} V_{1, t} (t, x) + V_{1, x} (t, x) + V_{1, xxx} (t, x) + \big( V(t, x) V(t, x) \big)_x = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] V_1(t, 0) = V_1(t, L) = V_{1, x}(t , L)= 0 & \mbox{ for } t \in (0, +\infty), \end{array}\right. 
\end{equation} and, by the construction of $\phi_{m_1, m_2}$, \begin{equation}\left\{ \begin{array}{cl} V_{2, t} (t, x) + V_{2, x} (t, x) + V_{2, xxx} (t, x) + \big( |V(t, x)|^2 \big)_x = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] V_2(t, 0) = V_2(t, L) = V_{2, x}(t , L)= 0 & \mbox{ for } t \in (0, +\infty). \end{array}\right. \end{equation} Set \begin{equation}\label{def-W} W = \frac{1}{8} \Big( V_1 + \bar V_1 + 2 V_2 \Big) \mbox{ in } (0, + \infty) \times (0, L). \end{equation} It follows from \eqref{Case2-def-g} that $W_x(t, 0) = g(t)$ in $\mathbb{R}_+$ and hence, by \eqref{Case2-pro-g}, \begin{equation}\label{Case2-pro-W} \| W_x(t, 0) \|_{L^2(\tau, 2 \tau)} \ge \gamma_0 \mbox{ for all } \tau \ge \tau_0. \end{equation} Since $$ \big( V(t, x) V(t, x) \big)_x + \overline{\big( V(t, x) V(t, x) \big)_x} + 2 \big( |V(t, x)|^2 \big)_x = 8 U(t, x) U_x(t, x), $$ we derive from \eqref{def-W} that \begin{equation}\left\{ \begin{array}{cl} W_{t} (t, x) + W_{x} (t, x) + W_{xxx} (t, x) + U(t, x) U_{x}(t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] W(t, 0) = W(t, L) = W_{x}(t , L)= 0 & \mbox{ for } t \in (0, +\infty). \end{array}\right. \end{equation} Let $\widetilde W \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ be the unique solution of \begin{equation}\left\{ \begin{array}{cl} \widetilde W_{t} (t, x) + \widetilde W_{x} (t, x) + \widetilde W_{xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde W(t, 0) = \widetilde W(t, L) = \widetilde W_{x}(t , L) = 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \displaystyle \widetilde W(0, \cdot) = \widetilde u_{2} (0, \cdot) - W(0, \cdot). \end{array}\right. \end{equation} Then \begin{equation}\label{Case2-hu2WW} \widetilde u_2 = \widetilde W + W \mbox{ in } (0, + \infty) \times (0, L). 
\end{equation} We have \begin{equation}\label{Case2-tt1} \| \widetilde u_{2, x} (\cdot, 0) \|_{L^2(T_2/2, T_2)} \mathop{\ge}^{\eqref{Case2-hu2WW}} \| W_{x} (\cdot, 0) \|_{L^2(T_2/2, T_2)} - \| \widetilde W_{x} (\cdot, 0) \|_{L^2(T_2/2, T_2)}, \end{equation} \begin{equation}\label{Case2-tt2} \| \widetilde W_{x} (\cdot, 0) \|_{L^2(T_2/2, T_2)} \mathop{\le}^{\eqref{Case2-T2}} c_4 \gamma_0 \|\widetilde W(0, \cdot) \|_{L^2(0, L)}, \end{equation} and, since $T_2 \ge \tau_0$, \begin{equation}\label{Case2-tt3} \| W_{x} (\cdot, 0) \|_{L^2(T_2/2, T_2)} \mathop{\ge}^{\eqref{Case2-pro-W}} \gamma_0. \end{equation} Since, by \eqref{def-V1}, \eqref{def-V2}, and \eqref{def-W} \begin{multline*} 8W(0, x) = \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \alpha_{m_1} \alpha_{m_2} \varphi_{m_1, m_2}(x) \\[6pt] + \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \bar \alpha_{m_1} \bar \alpha_{m_2} \bar \varphi_{m_1, m_2}(x) + 2 \sum_{m_1=1}^{n_L} \sum_{m_2=1}^{n_L} \alpha_{m_1} \bar \alpha_{m_2} \phi_{m_1, m_2}(x), \end{multline*} it follows that \begin{equation}\label{Case2-W0} \|W(0, \cdot) \|_{L^2(0, L)} \le 2 \sum_{m_1 = 1}^{n_L} \sum_{m_2 = 1}^{n_L} \|\varphi_{m_1, m_2}\|_{L^2(0, L)} + 2 \sum_{m_1 = 1}^{n_L} \sum_{m_2 = 1}^{n_L} \|\phi_{m_1, m_2}\|_{L^2(0, L)}. \end{equation} By the definition of $A$ in \eqref{Case2-defA}, we obtain from \eqref{Case2} and \eqref{Case2-W0} that \begin{equation}\label{Case2-A2} A \ge \| \widetilde u_2 (0, \cdot) \|_{L^2(0, L)} + \|W(0, \cdot) \|_{L^2(0, L)} \mathop{\ge}^{\eqref{Case2-hu2WW}} \| \widetilde W(0, \cdot) \|_{L^2(0, L)}. \end{equation} Combining \eqref{Case2-tt1}, \eqref{Case2-tt2}, \eqref{Case2-tt3}, and \eqref{Case2-A2} yields \begin{equation*} \|\widetilde u_{2, x} (\cdot, 0) \|_{L^2(T_2/2, T_2)} \ge \gamma_0 - c_4 \gamma_0 A. \end{equation*} Since $c_4 = 1 / (2 A)$ by \eqref{Case2-def-c4}, we obtain \begin{equation}\label{lemK-p1} \|\widetilde u_{2, x} (\cdot, 0) \|_{L^2(T_2/2, T_2)} \ge \gamma_0/2. 
\end{equation} Set \begin{equation*} u_d = \varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 - u \mbox{ in } (0, + \infty) \times (0, L), \end{equation*} and \begin{equation} f_d = u u_x - \varepsilon^2 \widetilde u_1 \widetilde u_{1, x} \mbox{ in } (0, + \infty) \times (0, L). \end{equation} We have, by \eqref{sys-NL} and \eqref{Case2-sys}, \begin{equation}\left\{ \begin{array}{cl} u_{d, t} (t, x) + u_{d, x} (t, x) + u_{d, xxx} (t, x) = f_d (t, x) & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] u_d(t, x=0) = u_d(t, x=L) = u_{d, x}(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] u_d(t = 0, \cdot) = 0 & \mbox{ in } (0, L). \end{array}\right. \end{equation} It is clear that \begin{equation*} \| f_d\|_{L^1\big((0, T_2); L^2(0, L) \big)} \le C \varepsilon^2, \end{equation*} where $C$ is a positive constant depending only on $T_2$ and $L$. It follows that \begin{equation*} \| u_d \|_{C\big([0, T_2]; L^2(0, L) \big)} + \| u_d \|_{L^2\big( (0, T_2); H^1(0, L) \big)} \le C \varepsilon^2. \end{equation*} This in turn implies that \begin{equation*} \| f_d\|_{L^1\big((0, T_2); L^2(0, L) \big)} \le C \varepsilon^3, \end{equation*} and therefore \begin{equation}\label{lemK-p2} \| u_{d, x} (\cdot, 0) \|_{L^2( 0, T_2) } \le C \varepsilon^3. \end{equation} Combining \eqref{lemK-p1} and \eqref{lemK-p2}, and noting that $\widetilde u_{1, x} (t, 0) = 0$, yields \begin{equation}\label{est-case2} \|u_x (\cdot, 0) \|_{L^2(T_2/2, T_2) } \ge C \varepsilon^2. \end{equation} The analysis of Step 2 is complete. \medskip The conclusion now follows from Case 1, where one obtains \eqref{Case1-*}, and Case 2, where one obtains \eqref{est-case2}, by choosing $T_0 = \max\{T_1, T_2\}$ and using \eqref{key-identity}. The proof is complete. \end{proof} We are ready to give \subsection{Proof of \Cref{thm1}} By \Cref{lemK}, we have $$ \|u(T_2, \cdot) \|_{L^2(0, L) } \le \| u(0, \cdot) \|_{L^2(0, L)} \Big(1 - C \| u(0, \cdot) \|_{L^2(0, L)}^2 \Big).
$$ This yields, with $\| u(0, \cdot) \|_{L^2(0, L)} = \varepsilon>0$ and $p$ being the largest integer less than $1/ (2 C \varepsilon^2)$, $$ \|u(p T_2, \cdot) \|_{L^2(0, L) } \le \frac{1}{2} \| u(0, \cdot) \|_{L^2(0, L)}. $$ Here we also used \eqref{key-identity}. Using \eqref{key-identity} again, it follows that, for $T \ge C/ \| u(0, \cdot) \|_{L^2(0, L)}^2$, $$ \|u(T, \cdot) \|_{L^2(0, L) } \le \frac{1}{2} \| u(0, \cdot) \|_{L^2(0, L)}. $$ This implies, by recurrence, that $$ \| u(T, \cdot) \|_{L^2(0, L)} \le 2^{-n} \| u_0 \|_{L^2(0, L)}\mbox{ for } T \ge C \sum_{p=0}^{n-1} 2^{2p} /\| u(0, \cdot) \|_{L^2(0, L)}^2, $$ since $\|u(t, \cdot) \|_{L^2(0, L)}$ is a non-increasing function of $t$. In particular, using this monotonicity again, we obtain \begin{equation} \|u(t, \cdot) \|_{L^2(0, L)} \le C/ t^{1/2}. \end{equation} The proof is complete. \qed \section{A lower bound for the decay rate - Proof of \Cref{pro-opt}} \label{sect-opt} Fix $1 \le m \le n_L$ and $\alpha_m \in \mathbb{C}$ with $|\alpha_m| = 1$ such that $$ \Re ( \alpha_m \varphi_{m, m}(x)) \mbox{ is not identically equal to 0 in } [0, L]. $$ Let $\widetilde u_1 \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ be the unique solution of \begin{equation}\label{sys-NL-L}\left\{ \begin{array}{cl} \widetilde u_{1, t} (t, x) + \widetilde u_{1, x} (t, x) + \widetilde u_{1, xxx} (t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde u_1(t, x=0) = \widetilde u_1(t, x=L) = \widetilde u_{1, x}(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] \displaystyle \widetilde u_1(0, \cdot ) = \Re ( \alpha_m \varphi_{m, m}). \end{array}\right.
\end{equation} Set \begin{equation}\label{def-V1-L} V_1(t, x) = \alpha_{m}^2 \varphi_{m, m}(x) e^{- 2 i p_{m} t}, \end{equation} \begin{equation}\label{def-V2-L} V_2(t, x) = |\alpha_{m}|^2 \phi_{m, m}(x), \end{equation} and denote \begin{equation}\label{def-W-L} \widetilde u_2 = \frac{1}{8} \Big( V_1 + \bar V_1 + 2 V_2 \Big) \mbox{ in } (0, + \infty) \times (0, L). \end{equation} Since $\phi_{m, m}$ is real by 3) of \Cref{pro1-Co}, it follows that $V_2$ is real and hence so is $\widetilde u_2$. As in the proof of \Cref{lemK}, we have $$ \widetilde u_1(t, x) = \Re \Big(\alpha_m \varphi_{m, m} (x)e^{- i p_m t} \Big), $$ and \begin{equation*}\left\{ \begin{array}{cl} \widetilde u_{2, t} (t, x) + \widetilde u_{2, x} (t, x) + \widetilde u_{2, xxx} (t, x) + \widetilde u_1(t, x) \widetilde u_{1, x}(t, x) = 0 & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] \widetilde u_2(t, x=0) = \widetilde u_2(t, x=L) = \widetilde u_{2, x}(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty). \end{array}\right. \end{equation*} Let $u \in C \big([0, + \infty); L^2(0, L) \big) \cap L^2_{\operatorname{loc}} \big([0, + \infty); H^1(0, L) \big)$ be a (real) solution of \eqref{KdV-NL} with $$ \| u(0, \cdot) \|_{L^2(0, L)} \le \Gamma \varepsilon, $$ where $$ \Gamma : = \sup_{t} \| \widetilde u_{1} (t, \cdot) \|_{L^2(0, L)} + 1. $$ Set $$ \widetilde u_2 (t, x) = W(t, x) \mbox{ in } (0, + \infty) \times (0, L), $$ \begin{equation*} u_d = \varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 - u \mbox{ in } (0, + \infty) \times (0, L), \end{equation*} \begin{equation*} f_d = u u_x - \varepsilon^2 \widetilde u_1 \widetilde u_{1, x} \mbox{ in } (0, + \infty) \times (0, L). 
\end{equation*} We have \begin{equation}\left\{ \begin{array}{cl} u_{d, t} (t, x) + u_{d, x} (t, x) + u_{d, xxx} (t, x) = f_d (t, x) & \mbox{ for } t \in (0, +\infty), \, x \in (0, L), \\[6pt] u_d(t, x=0) = u_d(t, x=L) = u_{d, x}(t , x= L)= 0 & \mbox{ for } t \in (0, +\infty), \\[6pt] u_d(t = 0, \cdot) = 0 & \mbox{ in } (0, L). \end{array}\right. \end{equation} Denote \begin{equation}\label{pro-opt-def-g} g_d = \varepsilon^3 ( \widetilde u_1 \widetilde u_{2, x} + \widetilde u_2 \widetilde u_{1, x} ) + \varepsilon^4 \widetilde u_2 \widetilde u_{2, x}. \end{equation} We write $f_d$ in the form \begin{align*} f_d = & (u- \varepsilon \widetilde u_1 - \varepsilon^2 \widetilde u_2) u_x + (\varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2) (u- \varepsilon \widetilde u_1 - \varepsilon^2 \widetilde u_2)_x + g_d \\[6pt] = & - u_d u_x - (\varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2) u_{d, x} + g_d. \end{align*} Multiplying the equation of $u_d$ by $u_d$ (which is real), integrating by parts in $(1, t) \times (0, L)$, and using the form of $f_d$ just above give \begin{multline}\label{pro-opt-p1} \int_0^L |u_d(t, x)|^2 \, dx \le \int_0^L |u_d(1, x)|^2 \, dx + 2 \int_1^t \int_0^L |u_d|^2 |u_x| \, dx \, ds \\[6pt] + \int_1^t \int_{0}^L \big| ( \varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 )_x \big| \, |u_d|^2 \,dx \, ds + 2 \int_1^t \int_0^L |g_d| |u_d|. \end{multline} Since $$ \int_0^L |u(t, x)|^2 \, dx \le \int_0^L |u(0, x)|^2 \, dx \le C \varepsilon^2 \mbox{ for } t \ge 0, $$ and by the regularizing effect of the equation, one has \begin{equation}\label{pro-opt-p2} |u(t, x)| + |u_x(t, x)| \le C \varepsilon \mbox{ for } t \ge 1, \, x \in [0, L]. \end{equation} Let $a$ be a (small) positive constant to be fixed later (the smallness of $a$ depending only on $L$). Let $t_0 \in [1, a/\varepsilon]$ be such that $$ \int_0^L |u_d(t_0, x)|^2 \, dx = \max_{t \in [1, a/\varepsilon]} \int_0^L |u_d(t, x)|^2 \, dx.
$$ Combining \eqref{pro-opt-p1} with $t = t_0$ and \eqref{pro-opt-p2} yields $$ \int_0^L |u_d(t_0, x)|^2 \, dx \le \int_0^L |u_d(1, x)|^2 \, dx + C a \int_0^L |u_d(t_0, x)|^2 \, dx + \int_1^{a/\varepsilon} \int_0^L \varepsilon^{-1} |g_d|^2 \, d x. $$ This implies, if $a$ is sufficiently small, $$ \int_0^L |u_d(t_0, x)|^2 \, dx \le C \int_0^L |u_d(1, x)|^2 \, dx + C \varepsilon^4 $$ by \eqref{pro-opt-def-g}. On the other hand, one has $$ \int_0^L |u_d(t, x)|^2 \, dx \le C \int_0^L |u_d(0, x)|^2 \, dx + C \varepsilon^4 \mbox{ for } t \in [0, 1]. $$ We have just proved that, for $a$ sufficiently small, $$ \sup_{t \in [0, a/\varepsilon]} \| u_d(t, \cdot) \|_{L^2(0, L)} \le C \Big( \| u_d(0, \cdot) \|_{L^2(0, L)} + \varepsilon^2 \Big). $$ Continuing this process, we obtain \begin{equation}\label{pro-opt-p3} \sup_{t \in [0, a n /\varepsilon]} \| u_d(t, \cdot) \|_{L^2(0, L)} \le C^n \| u_d(0, \cdot) \|_{L^2(0, L)} + \sum_{k=1}^n C^k \varepsilon^2. \end{equation} We now consider $u$ with $$ u(0, \cdot) = \varepsilon \widetilde u_1(0, \cdot) + \varepsilon^2 \widetilde u_2 (0, \cdot). $$ Thus \begin{equation}\label{pro-opt-p4} u_d(0, \cdot) = 0. \end{equation} Fix $ \gamma > 0$ such that \begin{equation} \inf_{t \in \mathbb{R}} \int_{0}^L |\widetilde u_1(t, x)|^2 \,dx \ge 4 \gamma. \end{equation} With $n$ being the largest integer such that $C^{n+1} \le \gamma \varepsilon^{-1}$ (we assume now and later on that $C \ge 2$), we derive from \eqref{pro-opt-p3} and \eqref{pro-opt-p4} that \begin{equation*} \sup_{t \in [0, a n /\varepsilon]} \| u_d(t, \cdot) \|_{L^2(0, L)} \le \gamma \varepsilon. \end{equation*} Since $$ u_d = \varepsilon \widetilde u_1 + \varepsilon^2 \widetilde u_2 - u, $$ by the choice of $\gamma$, we have, for $\varepsilon $ sufficiently small, \begin{equation*} \| u(a n/ \varepsilon, \cdot) \|_{L^2(0, L)} \ge \gamma \varepsilon.
\end{equation*} We deduce that, with $\tau = a n/ \varepsilon \sim \varepsilon^{-1} \ln \varepsilon^{-1}$ (hence $\varepsilon^{-1} \sim \tau / \ln \tau$), \begin{equation*} \| u(\tau, \cdot) \|_{L^2(0, L)} \ge \gamma \varepsilon \ge C \gamma \ln \tau/ \tau. \end{equation*} The proof is complete. \qed
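As a remark, the asymptotics in the last step can be spelled out; this is a bookkeeping sketch using only the constants already introduced:

```latex
% n is the largest integer with C^{n+1} \le \gamma \varepsilon^{-1}, and C \ge 2 is fixed, so
n = \left\lfloor \frac{\ln (\gamma \varepsilon^{-1})}{\ln C} \right\rfloor - 1
\sim \ln \varepsilon^{-1} \quad \mbox{as } \varepsilon \to 0^+,
\qquad
\tau = \frac{a n}{\varepsilon} \sim \varepsilon^{-1} \ln \varepsilon^{-1}.
```

Inverting $\tau \sim \varepsilon^{-1} \ln \varepsilon^{-1}$ gives $\varepsilon \sim \ln \tau / \tau$, which converts the lower bound $\gamma \varepsilon$ into the stated obstruction of order $\ln \tau / \tau$ to any faster decay rate.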
\section{INTRODUCTION} Disorder is ubiquitous in nature. The influence of quenched disorder on phase transitions has been of great interest in theoretical and experimental physics \cite{vojta}. In the three dimensional Ising model, quenched disorder leads to a new critical point with exponents different from the pure ones \cite{landau,diluteising}. In the McCoy-Wu model, the disorder causes the Griffiths-McCoy singularity, where some thermodynamic quantities are singular over a range of temperatures, rather than just at the critical point \cite{mccoy,mccoy1,fisher}. It has been shown rigorously that in 2D quenched randomness results in the suppression of first-order phase transitions in the random-field Ising model, the random bond Potts model and spin glasses \cite{aizenman}. In the present work we show two novel effects of disorder on the phase transition. We study the wetting transition in the McCoy-Wu Ising model with surface fields, which is depicted in Fig. 1. In this model on the two dimensional lattice, all the vertical bonds are the same, while the horizontal bonds are identical to each other within each column but differ from column to column \cite{mccoy}. The first effect is that the phase transition is first-order even in the presence of disorder. The second is that, for a fixed surface field, the wetting transition temperatures are sample dependent and do not converge to a limit as the size of the system goes to infinity. To our knowledge, in all previously studied disordered systems the phase transition temperatures converge to a limit as the system size goes to infinity (see examples in Refs. \cite{domany,chakravarty,bellafard}). This is perhaps the strongest effect of disorder on a phase transition discovered so far. Wetting transitions in the Ising model were first studied by Abraham with the two-dimensional Ising model, in which two opposite external fields are applied on the boundary \cite{abraham}.
In Abraham's exact solution, the wetting transition is continuous and the average distance of the interface (separating the predominantly $+$ and $-$ phases) from the boundary diverges smoothly to infinity. Forgacs, Svrakić and Privman found that the wetting transition becomes first-order if one adds a line defect in the bulk \cite{privman}. The McCoy-Wu model can be regarded as an Ising model with many line defects added. \begin{figure} \includegraphics[width=0.5\textwidth]{setup-mc.eps} \caption{ (a) The sketch of the McCoy-Wu model with surface fields. The surface fields act on the spins in the first and last columns. The first column is usually called the left wall. The horizontal segments in different colors represent random horizontal bonds. } \end{figure} We solve the model on finite size lattices with the Bond Propagation Algorithm (BPA) and Site Propagation Algorithm (SPA) \cite{loh,wu1,wu}. We reveal the physical picture of the wetting transition: the thermodynamics in the wet phase is dominated by the group with the most adjacent line defects and in the nonwet phase by the bonds near the boundary. This leads to a sample dependent wetting transition temperature, and the distribution width of the wetting transition temperature should remain finite even as the size of the lattice goes to infinity. The paper is organized as follows. In Sec. II, we define the model and introduce the numerical methods, BPA and SPA. In Sec. III, the evidence for a first-order transition is presented. In Sec. IV, we obtain the distribution of the wetting transition temperature on finite size lattices. In Sec. V, we reveal the physical picture of the wetting transition and argue that the transition temperature is sample dependent in the thermodynamic limit. In Sec. VI, we propose two semi-random models to support our arguments further. Sec. VII is a discussion and acknowledgment. \section{THE MODEL} The McCoy-Wu model with surface fields is sketched in Fig.
1, in which all the vertical bonds are the same, while the horizontal bonds are identical to each other within each column but differ from column to column. Consider a set of spins $\sigma(n,m)=\pm 1$ located at points $(n,m)$ of the planar square lattice such that $1\le n \le N, 1\le m \le M$. The energy of a configuration $\{ \sigma \}$ of spins is given by \begin{eqnarray} E & = & -J\sum_{m=1}^{ M-1}\sum_{n=1}^{N} \sigma_{n,m}\sigma_{n,m+1} \nonumber \\ & &-J\sum_{m=1}^{ M}\sum_{n=1}^{N-1} a_n\sigma_{n,m}\sigma_{n+1,m} \nonumber \\ & &-\sum_{m=1}^{M}[H_1\sigma_{1,m}+H_N\sigma_{N,m}] \label{eq:lattice1} \end{eqnarray} where $H_1$ and $H_N$ are the surface fields. The left and right boundaries are often referred to as competing walls. The random bond $a_n J$ is the horizontal bond in the $n$th column. In this paper we consider the binary bond disorder probability distribution: \begin{equation} p(a_n)=\frac{1}{2}[\delta(a_n-1)+\delta(a_n-0.9)]. \end{equation} The horizontal bonds are either strong, $J$, or weak, $0.9J$, with equal probability. We use this special case as an example, but the conclusions should be general. The normalized canonical probability is $P(\{ \sigma \})=Z^{-1}\exp [-\beta E]$ where $Z$ is the canonical partition function. We set $J/k_B=1$, where $k_B$ is the Boltzmann constant. Following convention, we call a given configuration of disorder $(\{ a_n \})$ a sample. For a given sample, we solve this model with BPA and SPA \cite{loh,wu1,wu}. We consider two types of boundary conditions, as in Abraham's model \cite{abraham}: \begin{eqnarray} +-: & ~~~H_N & =1,~~~H_1=-a_0 \nonumber \\ ++: & ~~~H_N & =1,~~~H_1=a_0 \label{eq:bound} \end{eqnarray} where $a_0>0$. Throughout this paper, we set \begin{equation} a_0=0.4 \end{equation} in the numerical calculations. Under the boundary condition $+-$, there is an interface.
Then the interfacial free energy (density) is defined as \begin{equation} f=-\frac{k_B T}{M}\ln \frac{Z_{+-}}{Z_{++}}. \label{eq:interfacial} \end{equation} Fixing the surface field $a_0$, there is a wetting transition for the boundary condition $+-$ as the temperature changes. There is no singularity in $\ln Z_{++}$; the subtraction of $\ln Z_{++}$ just eliminates the non-singular background term in $\ln Z_{+-}$. Correspondingly, we define the interfacial internal energy (density) by \begin{equation} u=\frac{\partial (\beta f)}{\partial \beta} \end{equation} and the interfacial specific heat by \begin{equation} c= \frac{\partial u}{\partial T} . \end{equation} These quantities can be calculated with BPA on finite-size lattices. The magnetization $\overline{\sigma}_{n,m}$, the thermodynamic average of the spin at site $(n,m)$, is defined by \begin{equation} \overline{\sigma}_{n,m}=Z^{-1}\sum_{\{\sigma_{l,k}\}}\sigma_{n,m}e^{-E(\{\sigma_{l,k}\})/k_BT}. \end{equation} Note that this average is carried out for a given sample, a specific disorder configuration; it is not an average over many samples. Note also that the top and bottom boundaries are open in this model because of our algorithm. In our numerical calculations, we set \begin{equation} M=N^2 \end{equation} with $N \ge 80$, so the lattices of size $N \times M$ are very narrow rectangles. Hence the effect of the open boundaries at the top and bottom can be ignored. We solve the model on finite size lattices with the Bond Propagation Algorithm (BPA) and Site Propagation Algorithm (SPA) \cite{loh,wu1,wu}. These algorithms are very accurate and can be applied to very large lattices; in our numerical calculations, the largest lattice size is $200 \times 200^2$. One can calculate the free energy, internal energy and specific heat with BPA, and the magnetization on each site with SPA. These calculations are highly accurate, with errors smaller than $10^{-6}$.
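On a temperature grid, $u$ and $c$ follow from the interfacial free energy by numerical differentiation. The following is a minimal sketch, not part of the BPA itself; the array \texttt{f} stands for interfacial free energies that would in practice be supplied by the BPA:

```python
import numpy as np

def interfacial_quantities(T, f):
    """Given the interfacial free energy f on a temperature grid T, return
    u = d(beta f)/d(beta) and c = du/dT (beta = 1/T, with k_B = 1),
    using finite differences on the non-uniform beta grid."""
    beta = 1.0 / T
    u = np.gradient(beta * f, beta)  # interfacial internal energy
    c = np.gradient(u, T)            # interfacial specific heat
    return u, c
```

Near a first-order-like jump the grid must be fine enough to resolve the peak; this is why the temperature search described later is refined iteratively.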
\section{The evidence for the first order phase transition} In this section, we show the wetting transitions in some samples on lattices with size $N=120$ and $N=200$. It is shown that the interfacial specific heat in these samples satisfies the scaling function of a first-order phase transition. \begin{figure} \includegraphics[width=0.5\textwidth]{u-c-sample.eps} \caption{ (a) The interfacial internal energy for four samples with $N=120$ and four samples with $N=200$. (b) The corresponding interfacial specific heat for the eight samples.} \end{figure} Fig. 2(a) shows the interfacial internal energy for four samples with $N=120$ (samples $1,2,3,4$) and four samples with $N=200$ (samples $5,6,7,8$). Each shows a jump in the interfacial internal energy as the temperature changes. The jump in $u$ is far more drastic for samples $5,6,7,8$ with $N=200$ than for samples $1,2,3,4$ with $N=120$. Figure 2(b) shows the corresponding interfacial specific heat for the eight samples. The interfacial specific heats for samples $5,6,7,8$ with $N=200$ show much narrower and higher peaks than those for samples $1,2,3,4$ with $N=120$. The wetting transition temperature $T_w$ is defined as the temperature at which the interfacial specific heat is maximal. Strictly, $T_w$ should be called the pseudo-phase-transition temperature because the systems in our calculation are finite; for simplicity, we call $T_w$ the wetting transition temperature in the following. The maximum of the interfacial specific heat is denoted by $c_m$. As shown in Fig. 2, the wetting temperatures $T_w$ differ from sample to sample. Moreover, the difference between the wetting transition temperatures of any two samples is usually much larger than the widths of their specific heat peaks.
We find that the interfacial specific heat peak satisfies the following scaling function, which is a typical finite-size scaling form for a first-order transition \cite{binder1}: \begin{equation} \frac{c}{c_m}=[(e^{t}+e^{-t})/2]^{-2} \label{eq:gauss} \end{equation} where \begin{equation} t=\frac{T-T_w}{\tau} \end{equation} and $\tau$ characterizes the width of the interfacial specific heat peak, which is obtained by fitting the numerical results. Fig. 3 shows the collapse of the interfacial specific heats for the eight samples rescaled with Eq. (\ref{eq:gauss}). The parameters $c_m, T_w, \tau$ are given in Table I. We can see that the main part of the interfacial specific heat peak is perfectly described by the scaling function Eq. (\ref{eq:gauss}), although the parameters $c_m,\tau$ differ greatly between samples. \begin{figure} \includegraphics[width=0.5\textwidth]{c-rescale.eps} \caption{ Scaling of the interfacial specific heat for the eight samples shown in Fig. 2(b). The full curve represents Eq. (\ref{eq:gauss}).} \end{figure} \begin{table}[htbp] \caption{ The parameters $c_m, T_w, \tau$ for the samples in Fig. 2(b). } \begin{tabular}{cccl} \hline & $c_m$ & $T_w$ & $\tau$ \\ sample 1~~~~~ & $36.5774$~~ & $1.951219$~~ & $0.0117$ \\ sample 2~~~~~ & $28.5872$~~ & $1.901140$~~ & $0.0214$ \\ sample 3~~~~~ & $547.562$~~ & $1.943223$~~ & $0.000791$ \\ sample 4~~~~~ & $167.495$~~ & $1.987095$~~ & $0.00184$ \\ sample 5~~~~~ & $702.838$~~ & $1.999803$~~ & $0.000401$ \\ sample 6~~~~~ & $2291.39$~~ & $1.934441$~~ & $0.000214$ \\ sample 7~~~~~ & $1166.27$~~ & $1.947504$~~ & $0.000298$ \\ sample 8~~~~~ & $1915.94$~~ & $1.912100$~~ & $0.000232$ \\ \hline \end{tabular} \label{a1} \end{table} The agreement with Eq. (\ref{eq:gauss}) is much better for the four samples with $N=200$ (samples $5,6,7,8$) than for the four samples with $N=120$ (samples $1,2,3,4$) in Fig. 2.
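Note that $[(e^t+e^{-t})/2]^{-2}=\cosh^{-2} t$, so Eq. (\ref{eq:gauss}) is a squared hyperbolic secant. A small sketch of this scaling form, of the kind used when fitting $c_m$, $T_w$ and $\tau$ to the computed peaks (the parameter values in the usage note below are placeholders, not data from Table I):

```python
import numpy as np

def scaled_peak(T, c_m, T_w, tau):
    """Finite-size scaling form of the interfacial specific heat peak:
    c(T) = c_m / cosh^2((T - T_w) / tau)."""
    t = (np.asarray(T, dtype=float) - T_w) / tau
    return c_m / np.cosh(t) ** 2
```

The peak height is $c_m$ at $T=T_w$, and the half-height points sit at $T_w \pm \tau\,\mathrm{arccosh}\sqrt{2} \approx T_w \pm 0.881\,\tau$, so $\tau$ directly sets the peak width.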
In the samples with $N=200$ the interfacial specific heat peaks $c_m$ are much higher and the widths $\tau$ are much narrower than in those with $N=120$. As shown later, as the system size grows, the average of $c_m$ diverges and the average of $\tau$ converges to zero. This implies that the interfacial specific heat for larger systems can be better described by Eq. (\ref{eq:gauss}), which is a typical finite-size scaling form for first-order transitions \cite{binder1}. This is the first clue that this wetting transition is first-order. Here we give a derivation of the scaling function Eq. (\ref{eq:gauss}). It can be obtained by following Binder's argument on the finite size scaling of the first order transition in the Potts model; see the argument leading to Eq. (2.41) in Ref. \cite{binder1}. Letting $q=1$ and $c_{+}/c_{-}=1$ gives rise to Eq. (\ref{eq:gauss}). In Ref. \cite{binder1}, the $q$-state Potts model is discussed and there are $q$-fold degenerate ordered states. Here, both below and above the wetting transition the states are ordered and there is only one state, so one can set $q=1$. The wetting transition takes place under the boundary condition $+-$, and $c_+$ and $c_-$ are the specific heats of the whole system below and above the wetting transition, respectively. The heat capacity difference below and above the transition is then given by $(c_+-c_- )MN$. On the other hand, this heat capacity difference equals $M\Delta c$, where $\Delta c$ is the difference of the interfacial specific heat below and above the transition. Therefore $(c_+-c_- )MN=M\Delta c$, so $c_+ -c_- =M\Delta c/(MN)=\Delta c/N$. The difference of the interfacial specific heat below and above the transition, $\Delta c$, is obviously finite, so in the thermodynamic limit, i.e. $N \rightarrow \infty$, $c_+ -c_-$ vanishes. Therefore, in the thermodynamic limit, $c_+/c_-$ equals $1$.
We calculate the magnetization $\overline{\sigma}_{n,m}$ at site $(n,m)$ to obtain the magnetization profile. Because the system is translation invariant in the vertical direction, the magnetization $\overline{\sigma}_{n,m}$ depends only on the $x$-coordinate $n$. The magnetization in the middle row, $\overline{\sigma}_{n,M/2}$, $1\le n \le N$, can therefore represent the magnetization profile, as shown in Fig. 4(a). From the magnetization profile, we can get the position $x_d$ of the interface (or domain wall), where the magnetization is zero. It is obtained by a simple interpolation between the magnetizations in the $n_d$th and $(n_d+1)$th columns, \begin{equation} x_d=n_d+\frac{\overline{\sigma}_{n_d,M/2}}{\overline{\sigma}_{n_d,M/2}-\overline{\sigma}_{n_d+1,M/2}} \end{equation} where $n_d$ is the column such that $\overline{\sigma}_{n_d,M/2}<0$ and $\overline{\sigma}_{n_d+1,M/2}>0$, i.e., where the magnetization changes sign. We also call $x_d$ the interface position. \begin{figure} \includegraphics[width=0.5\textwidth]{interface-120.eps} \caption{ (a) The magnetization profiles for the four samples $1,2,3,4$ of Figs. 2 and 3 at two temperatures. At $T=1.852$ the four samples are in the nonwet phase and at $T=2.083$ they are in the wet phase. (b) The interface position $x_d$ vs the temperature.} \end{figure} In Fig. 4, we show the magnetization profiles and the interface positions for samples $1,2,3,4$ of Figs. 2 and 3. Fig. 4(a) shows the magnetization profiles of the four samples at two temperatures. At $T=2.083$, the systems are in the wet phase, where the interfaces are far from the left wall. At $T=1.852$, the systems are in the nonwet phase, where the interfaces are pinned at the left wall. Fig. 4(b) shows the interface position vs. temperature for the four samples. As the temperature increases the interface unbinds discontinuously from the left wall, at $n=0$, and becomes localized far from the left wall. The distance $x_d$ of the interface from the left wall has a jump at the wetting transition point.
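The interpolation above is straightforward to implement. A minimal sketch (the profile values in the usage note are hypothetical; in practice they come from the SPA):

```python
def interface_position(profile):
    """Locate the interface position x_d from a middle-row magnetization profile,
    by linear interpolation between the last column with negative magnetization
    and the first with positive magnetization.  Columns are numbered from 1, as
    in the text; profile[n-1] holds the magnetization of column n."""
    for i in range(len(profile) - 1):
        m0, m1 = profile[i], profile[i + 1]
        if m0 < 0.0 and m1 > 0.0:
            n_d = i + 1                  # 1-based column index n_d
            return n_d + m0 / (m0 - m1)  # x_d = n_d + sigma_{n_d}/(sigma_{n_d}-sigma_{n_d+1})
    return None  # no sign change: interface is outside the window
```

For example, a profile $(-1, -0.5, 0.5, 1)$ gives $x_d = 2.5$, halfway between columns 2 and 3.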
The rounding of the jump in $x_d$ is due to the finite size effect. The temperatures at the jump of the interface position $x_d$ coincide with those at the maxima of the interfacial specific heat in Fig. 2. \section{Finite size scaling of the average quantities and their deviations} Since the physical quantities $T_w,c_m,\tau$ depend on the configuration of the disorder, we study their distributions over more than $1000$ disorder configurations for $N=80,120,160,200$. For each configuration of disorder on a finite lattice, we calculate the interfacial specific heat at different temperatures. To locate the wetting transition point precisely and efficiently we adopt the following procedure. First, we calculate the specific heat at five equally spaced points, with the temperature lower bound being $1.78$ and the upper bound being $2.12$; the transition usually takes place in this range. We then compare the specific heats of the middle three points and pick out the one at which the specific heat is maximal. We take this point as the central point and its two neighboring points as the new lower and upper bounds, and repeat the first step. After repeating this procedure $13$ times, we take the point at which the specific heat is maximal as the transition point $T_w$ and the specific heat at this point as the specific heat maximum $c_m$. The precision of $T_w$ is then about $10^{-5}$. We use the data obtained during the iterations to get $c_m$ and $\tau$ with Eq. (\ref{eq:gauss}). \begin{figure} \includegraphics[width=0.5\textwidth]{ptw-s.eps} \caption{ The distribution of the wetting transition temperature and its distribution width for $N=80,120,160,200$. The inset shows the deviations of the transition temperatures for $N=80,120,160,200$.} \end{figure} Figure 5 shows the distribution of the wetting transition temperature $T_w$ for $4$ lattice sizes $N=80,120,160,200$.
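The iterative refinement used above to locate $T_w$ can be sketched as follows; \texttt{specific\_heat} is a hypothetical callable standing in for one BPA evaluation at a given temperature:

```python
def locate_peak(specific_heat, lo=1.78, hi=2.12, n_iter=13):
    """Bracket the maximum of specific_heat: evaluate at five equally spaced
    temperatures, recenter on the best of the three interior points, halve the
    bracket, and repeat.  After n_iter = 13 rounds the bracket width is
    (hi - lo) / 2**13, i.e. a precision of about 1e-5."""
    for _ in range(n_iter):
        step = (hi - lo) / 4.0
        pts = [lo + k * step for k in range(5)]
        vals = [specific_heat(T) for T in pts]
        best = max(range(1, 4), key=lambda k: vals[k])  # best interior point
        lo, hi = pts[best - 1], pts[best + 1]
    T_w = 0.5 * (lo + hi)
    return T_w, specific_heat(T_w)
```

For a unimodal peak inside the initial bracket this halves the bracket each round, so the number of evaluations grows only logarithmically with the target precision.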
The number of samples (disorder configurations) is $N_s=3956,1191,1080,1170$ for $N=80,120,160,200$, respectively. The distribution $P(T_w)$ is defined as $P(T_w)=n_{T}/N_s$ where $n_{T}$ is the number of samples with $T<T_w<T+\Delta T$. The average of the wetting transition temperature is obtained from $\overline{T}_w=\sum_i^{N_s}T_{wi}/N_s$, where $T_{wi}$ is the transition temperature of the $i$th sample. For the system sizes $N=80,120,160,200$ the averages of the transition temperature are $\overline{T}_w=1.93(3),1.92(6),1.92(6),1.92(6)$, respectively. The average of the transition temperature remains the same as the lattice size increases. The deviation of the wetting transition temperature is obtained from $\delta T_w=\sqrt{\sum_i^{N_s}(T_{wi}-\overline{T}_w)^2/N_s}$. The inset of Fig. 5 shows the deviations of the wetting transition temperatures for lattice sizes $N=80,120,160,200$. One can see that the deviation $\delta T_w$ does not decrease as the lattice size increases. It is not surprising that the average transition temperature $\overline{T}_w$ does not change with the lattice size, but it is surprising that the deviation also does not change with the lattice size. \begin{figure} \includegraphics[width=0.5\textwidth]{pcm-s.eps} \caption{ (a) The distributions of the interfacial specific heat maximum for $N=80,120,160,200$. (b) The average of $\ln c_m$ vs the system size. (c) The deviation of $\ln c_m$ vs the system size.} \end{figure} Now we discuss the statistics of $c_m$, the maximal interfacial specific heat. Since the $c_m$ are distributed over a wide range, we calculate the distribution $P(\ln c_m)$, shown in Fig. 6. As one can see, the qualitative properties of the distributions are different. For $N=80$ the distribution has a single sharp peak. For $N=120$, the distribution has a long tail. For $N=160,200$, it has two peaks, one on the smaller $c_m$ side and another on the larger $c_m$ side.
Obviously, we cannot simply rescale these distributions to collapse. Because the distribution of the specific heat maxima is broad, we study the distribution of their logarithms, i.e. $P(\ln c_m)$, defined by $P(\ln c_m)\Delta \ln c_m=n_{c}/N_s$ where $n_{c}$ is the number of samples satisfying $\ln c_m<\ln c_{mi}< \ln c_{m} +\Delta \ln c_m$. Fig. 6(a) shows the distribution $P(\ln c_m)$ for lattice sizes $N=80,120,160,200$. The averages are defined by $(\ln c_m)_{av}=\sum_i^{N_s}\ln c_{mi} /N_s$ and the deviations by $\delta (\ln c_m)=\sqrt{\sum_i(\ln c_{mi}-(\ln c_m)_{av})^2/N_s}$. They are shown in Figs. 6(b) and 6(c). $(\ln c_m)_{av}$ and $\delta (\ln c_m)$ vs $\ln N$ lie almost on straight lines. Simple linear fitting yields \begin{equation} (\ln c_m)_{av}=2.4(1)\ln N-7.6(5) \end{equation} and \begin{equation} \delta (\ln c_m)=1.22(4)\ln N-4.3(2). \end{equation} The average of $\ln c_m$ and its deviation increase with the lattice size $N$. \begin{figure} \includegraphics[width=0.5\textwidth]{ptau-s.eps} \caption{ The distributions of the specific heat peak width for $N=40,60,80,100,120,160,200$ are presented in panels (a)-(g). Panel (h) shows the average of the specific heat peak width and its deviation vs the system size.} \end{figure} For the width of the specific heat peak $\tau$, we calculate the distributions $P(\ln \tau)$, shown in Fig. 7. The distributions seem to fall into three types. For $N=40,60,80$ the distribution has a single sharp peak. For $N=100,120$, the distribution has a plateau. For $N=160,200$, it has a plateau and a peak on the smaller $\tau$ side. However, the average width $(\ln \tau)_{av}=\sum_i^{N_s}\ln \tau_i/N_s$ over $N_s$ samples and its deviation $\delta (\ln \tau)=\sqrt{\sum_i(\ln \tau_i-(\ln \tau)_{av})^2/N_s}$, shown in Fig. 7(h), indicate some scaling. $(\ln \tau)_{av}$ vs $\ln N$ lies approximately on a straight line except for the point at $N=100$.
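The fits quoted here and below are ordinary least squares of the log-averaged quantities against $\ln N$. A minimal sketch (the coefficients in the test are illustrative placeholders, not refitted data):

```python
import numpy as np

def fit_vs_logN(N, ln_y):
    """Least-squares fit of ln_y = a * ln(N) + b, the form used for
    (ln c_m)_av, delta(ln c_m), (ln tau)_av and delta(ln tau)."""
    a, b = np.polyfit(np.log(np.asarray(N, dtype=float)), ln_y, 1)
    return a, b
```

With only four system sizes the quoted uncertainties on the slope and intercept are, of course, dominated by the small number of points.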
Simple linear fitting yields \begin{equation} (\ln \tau)_{av}=-2.7(1)\ln N+8.0(5). \end{equation} Similarly, $\delta (\ln \tau)$ vs $\ln N$ lies almost on a straight line except for the point at $N=100$. Simple linear fitting yields \begin{equation} \delta (\ln \tau)=1.09(6)\ln N-3.5(3). \end{equation} The above two equations tell us that $\tau$ is distributed over a wider range for larger lattices although its average becomes smaller. These results tell us that the interfacial specific heat peak becomes higher and narrower as the system size increases; it is expected to become a $\delta$-function in the limit $N\rightarrow \infty$. This is precisely the most important characteristic of a first-order transition. \begin{figure} \includegraphics[width=0.5\textwidth]{am-s.eps} \caption{ The distributions of $a=c_m \tau$, which characterizes the jump of the interfacial internal energy below and above the transition temperature, for $N=80,120,160,200$. The inset shows the average of $a$ and its deviation $\delta a$ vs the system size.} \end{figure} For the $i$th sample, the interfacial specific heat peak has a maximum $c_{mi}$ and a width $\tau_i$. We define their product \begin{equation} a_i=c_{mi} \tau_i, \end{equation} which approximately equals the integral over the interfacial specific heat peak (divided by $2$); see Eq. (\ref{eq:gauss}). It thus approximately equals the jump of the interfacial internal energy (divided by $2$) below and above the transition temperature. As seen in Fig. 2(a), the jump of the interfacial internal energy at the transition point differs from sample to sample. We calculate the distributions $P(a)$ of $a$, shown in Fig. 8. As we can see, the distribution functions have similar shapes. The averages $a_{av}$ seem to converge to a limit as the lattice size increases and the deviation $\delta a$ changes only slightly, as shown in the inset of Fig. 8.
The deviation of the transition temperature does not converge to zero as the lattice size increases, as shown in the inset of Fig. 5. If this feature persists as the system size goes to infinity, the wetting transition temperature is sample dependent. Of course, we cannot guarantee the validity of extrapolating our finite-size results to infinite size. However, we will present substantial evidence that the wetting transition temperature should be sample dependent in the thermodynamic limit. \section{PHYSICAL PICTURE OF THE WETTING TRANSITION: COMPETITION BETWEEN THE GROUPS OF ADJACENT LINE DEFECTS} To understand the present wetting transition, we first recall the previous results on the wetting transitions of the two-dimensional Ising model. With $a_n=1$ for all $n$, the system is known as Abraham's model \cite{abraham}, which was solved exactly by Abraham. It undergoes a continuous wetting transition at a temperature $T_w$ below the critical temperature $T_C$ of the 2D Ising model. For $T_w<T<T_C$, the interface is infinitely far from the left wall and the interfacial free energy is given by Onsager \cite{onsager} \begin{equation} f_{O}=2k_BT (K-K^*) \label{eq:onsager} \end{equation} where $K=J/k_B T$ and $\exp(-2K^*)=\tanh 2K$. For $T<T_w$, the interface is pinned at the left wall and the interfacial free energy is given by Abraham \cite{abraham} \begin{equation} f_{A}=-k_B T\ln (A-\sqrt{A^2-1}) \label{eq:abraham} \end{equation} with $A=\frac{1}{2}(B+1/B)+1-\frac{1}{2}(S+1/S)$, $B=\tanh K^* \coth K$, and $ S=e^{2K} (\cosh 2K-\cosh 2a_0 K)/\sinh 2K$. At the wetting transition point, one has \begin{equation} f_O(T_w)=f_A(T_w). \end{equation} Equivalently, at $T=T_w$ one has $ e^{2K}[\cosh 2K-\cosh 2a_0 K]=\sinh 2K$. Forgacs {\sl et al.} found that the wetting transition is first-order if one adds a line defect in the bulk \cite{privman}, say the $N_1$th column bonds being $a_{N_1}J$, where $a_0<a_{N_1}<1$ and $1\ll N_1<N$.
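Abraham's wetting condition at $T=T_w$ can be solved numerically. A minimal sketch (with $k_B=J=1$ and $a_0=0.4$ as in the present model; the bisection bracket just below the 2D Ising $T_c\approx 2.269$ is our choice):

```python
import math

def g(T, a0=0.4, J=1.0):
    """Abraham's wetting condition: g(T) = 0 at T = T_w, from
    e^{2K}[cosh 2K - cosh 2 a0 K] = sinh 2K with K = J / (k_B T)."""
    K = J / T
    return (math.exp(2 * K) * (math.cosh(2 * K) - math.cosh(2 * a0 * K))
            - math.sinh(2 * K))

def wetting_temperature(a0=0.4, lo=1.0, hi=2.26):
    """Bisect g(T) between lo and hi (hi sits below T_c ~ 2.269,
    where g changes sign)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo, a0) * g(mid, a0) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

Tw = wetting_temperature(0.4)
```

For $a_0=0.4$ the root lies a little above $T=2$, consistent with the defect-induced transition temperatures $T_w\approx 2.022$ and $1.970$ quoted below, which sit slightly lower.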
In the wetting phase, the interface is pinned at the line defect. The interfacial free energy is given by \begin{equation} f_{FSP}=-k_B T\ln |-x-\sqrt{x^2-1}| \label{eq:x} \end{equation} where $x=(c c^*-c_2\sqrt{s_2^2 s^4+1})/(s^2_2 s^2-1)$ with $c=\cosh 2K$, $s=\sinh 2K$, $c^*=\cosh 2K^*$, $c_2=\cosh 2(K_2^*-K^*)$, $s_2=\sinh 2(K_2^*-K^*)$ and $K_2=a_{N_1}K$, $\exp(-2K_2^*)=\tanh 2K_2$. Simply put, this first-order wetting transition is a competition between two interface situations: one with the interface pinned at the left wall, and one with the interface pinned at the line defect. For $T_1<T<T_C$, where $T_1$ is the first-order wetting transition temperature \cite{privman}, one has $f_{FSP}<f_A$. Then the interface is pinned at the defect line and the interfacial free energy is given by $f_{FSP}$. On the contrary, for $T<T_1$, one has $f_{FSP}>f_{A}$. Then the interface is pinned at the left wall and the interfacial free energy is given by $f_{A}$. The first-order transition temperature $T_1$ is given by \begin{equation} f_A(T_1)=f_{FSP}(T_1). \end{equation} In a sense the present model is an extension of Forgacs {\sl et al.}'s idea, i.e. adding many line defects into the system randomly. To understand the complicated random-bond cases, we consider the following inhomogeneous models with size $N=120$ \begin{eqnarray} S2:~~~~a_n & =& 0.9~~~ \mathrm{for} ~~~n=60,61; \nonumber \\ S4:~~~~a_n & =& 0.9~~~ \mathrm{for} ~~~n=59,60,61,62; \nonumber \\ D24:~~~~a_n & = & 0.9~~~ \mathrm{for} ~~~n=40,41,79,80,81,82; \nonumber \\ D42:~~~~a_n & = & 0.9~~~ \mathrm{for} ~~~n=39,40,41,42,80,81; \nonumber \end{eqnarray} and one still has $H_1=a_0=0.4$ and $a_n=1.0$ for other $n$. In the case $S2$, there is a single group of two adjacent line defects located around $n=60$. In the case $S4$, there is a single group of four adjacent line defects located around $n=60$.
In the case $D24$, both a group of two adjacent line defects and a group of four adjacent line defects are present: one group is located around $n=40$ and the other around $n=80$. In the case $D42$, the two groups swap their positions. We want to know how the two groups of adjacent line defects determine the wetting transition in the cases $D24$ and $D42$. The two cases $S2$ and $S4$ with a single group are studied as references. \begin{figure} \includegraphics[width=0.5\textwidth]{specific-2-4.eps} \caption{ The interfacial specific heat vs the temperature for the cases S2, S4, D24 and D42. } \end{figure} We calculate the interfacial specific heat, magnetization profile and interface position for the four cases. The results are shown in Figs. 9, 10 and 11. We conclude that the group of four adjacent line defects dominates the transition in the $D42$ and $D24$ cases. The first clue is that the wetting transition temperature $T_w$ is the same in the cases $D24$, $D42$ and $S4$. As shown in Fig. 9(a), the wetting transition takes place at $T_w\approx 2.022$ in the $S2$ case, and at $T_w\approx 1.970$ in the $S4$ case. In both cases $D24$ and $D42$, one finds $T_w\approx 1.970$, the same as in the case $S4$. The second clue is the interfacial free energy. As shown in Fig. 10, the interfacial free energies almost coincide in the cases $D24$, $D42$ and $S4$. In Fig. 10, the solid lines in black and red are given by Eqs. (\ref{eq:onsager}) and (\ref{eq:abraham}) respectively. The solid line in green (blue) is the interfacial free energy in the $S2$ ($S4$) case with $a_0=1$, which corresponds to model B in Ref. \cite{privman}, where the interface is always pinned at the adjacent line defects. The interfacial free energy for $S2$ with $a_0=0.4$ is divided into two parts by the wetting transition point (marked by the blue arrow): it coincides with the solid red line for $T<T_w$ and with the green line for $T>T_w$.
The interfacial free energies in the cases $D24$ and $D42$ almost coincide with that in $S4$. They are divided into two parts by the wetting transition point marked by the red arrow: they coincide with the solid red line for $T<T_w$ and with the solid blue line for $T>T_w$. \begin{figure} \includegraphics[width=0.5\textwidth]{free-2-4.eps} \caption{ The interfacial free energy vs the temperature for the cases S2, S4, D24 and D42. See the text for the solid lines. } \end{figure} The third clue is that in the wet phase the interface is localized at the group of four adjacent line defects in both cases $D24$ and $D42$. This is direct evidence that the group of four adjacent line defects dominates the wetting transition in the cases $D24$ and $D42$. At the temperature $T=1.961$, the system is in the nonwet phase for both $D24$ and $D42$ and the interface is pinned at the left wall, as shown in Fig. 11(a). At the temperature $T=2.083$, the system is in the wet phase and the interface is pinned at the group of four adjacent line defects. For $D24$ and $D42$, the interface is pinned at about $n=80$ and $n=40$ respectively, where the group of four adjacent line defects is located. Fig. 11(b) shows that above the wetting transition temperature $T_w=1.970$ the interface position is pinned at the group of four adjacent line defects: $x_d\approx 80$ in the $D24$ case and $x_d \approx 40$ in the $D42$ case. Below the transition temperature the interface is pinned at the left wall, $x_d\approx 0$, in both cases $D42$ and $D24$. \begin{figure} \includegraphics[width=0.5\textwidth]{interface-2-4.eps} \caption{ (a) The magnetization profile for the cases $D24$ and $D42$ at two temperatures. (b) The interface position vs the temperature for the cases $D24$ and $D42$. } \end{figure} Guided by these inhomogeneous models, we conclude that the phase transition is a competition among the interface locations. See Fig.
10: above the transition temperature, the interfacial free energy of the group of four adjacent line defects in the $S4$ case (solid blue line) is lower than that of the two adjacent line defects in the $S2$ case (solid green line). In both the $D24$ and $D42$ cases, the interfaces are pinned at the group of four adjacent line defects. Below the transition temperature the interfacial free energy at the left wall (solid red line) is lower, so the interface jumps from the group of four line defects to the left wall. \section{Two semi-random models} Extending the above discussion, one can conjecture that the wetting transition in the random-bond model Eq. (1) should be a competition among interface locations. From renormalization group theory, the adjacent line defects can be treated as a single effective line defect. The more adjacent line defects there are, the weaker the bond of the effective line defect is. Therefore, in the wet phase, the interface should be pinned at the group with the most adjacent line defects away from the left wall. This is the case for the eight samples shown in Fig. 2. For an infinite system, one can find a group with an arbitrarily large number of adjacent line defects. The lower bound of the interfacial free energy is attained when the interface is pinned at a group with infinitely many line defects. It can be obtained from Onsager's exact result \cite{onsager} and is given by $ f=2k_BT (0.9K-K^*)$, where $0.9K$ is the weak bond in the present model. As the system size goes to infinity, the interfacial free energy for the interface pinned at the group with a large number of adjacent line defects should converge to this lower bound. In the nonwet phase, the interface is pinned at the left wall. As seen in Figs. 4(a) and 11(a), the absolute value of the magnetization is depressed notably only near the left wall. From Eq.
(\ref{eq:interfacial}) we know that the interfacial free energy is defined as the free energy with boundary condition $+-$ minus that with boundary condition $++$. Under the boundary condition $++$, the magnetization is close to $1$ because the system is deep in the ordered phase. The interfacial free energy is thus determined by the difference of the magnetization between the boundary conditions $+-$ and $++$; it is precisely these differences that give rise to the interfacial free energy. The magnetization differences decay rapidly as the distance from the left wall increases. As we can see in Figs. 4(a) and 11(a), the absolute value of the magnetization is about $0$ near the left wall, but approaches $1$ for $n>20$. The interfacial free energy is related not only to the magnetization but also to the configuration of bonds. Obviously the bonds far from the left wall, say $n>20$ in Fig. 11(a), are not closely related to the interfacial free energy. Only the bonds close to the left wall are related to the interfacial free energy, since the interface is pinned at the left wall. In other words, the interfacial free energy for the interface pinned at the left wall should be related only to a finite number of column bonds near the left wall. Therefore, as the system size goes to infinity, i.e. $N\rightarrow \infty$, the interfacial free energy for the interface pinned at the left wall should not converge to a definite limit, because it depends only on the configuration of the disorder near the left wall and not on the disorder in other regions. Hence the spread of the interfacial free energy for the interface pinned at the left wall should remain the same as $N\rightarrow \infty$. \begin{figure} \includegraphics[width=0.5\textwidth]{ptw-s1.eps} \caption{ The distribution of wetting transition temperatures in the SR1 case with $N=80,120,160,200$.
The inset shows the deviation of the wetting transition temperatures.} \end{figure} The intersection between the interfacial free energies at the left wall and at the group with the most adjacent line defects determines the phase transition temperature. As discussed above, the spread of the interfacial free energy at the left wall does not decrease as the size of the system goes to infinity, so the distribution width $\delta T_w$ will not converge to zero as it does for the usual phase transitions in disordered systems. To test this argument, we design two semi-random lattices, which we call SR1 and SR2. In the SR1 case, we set $a_n=1.0$ for $n\le 20$ and $a_n$ random for $n> 20$. In other words, the bonds near the left wall are nonrandom for $n\le 20$. In the SR2 case, we set $a_n$ random for $n \le 20 $, $a_n=0.9$ for $N/2< n \le N/2+20$, and $a_n=1.0$ for other $n$. In this case only the bonds near the left wall are random. We solve the model on these two semi-random lattices for $N=80,120,160,200$ and study the distribution of $T_w$ over more than $1000$ samples. The distributions of $T_w$ for the SR1 and SR2 cases are shown in Figs. 12 and 13 respectively. In the SR1 case, the number of samples is $1197,1221,1292,1174$ for $N=80,120,160,200$ respectively. In the SR2 case, the number of samples is $1598,1193,1141,1231$ for $N=80,120,160,200$ respectively. \begin{figure} \includegraphics[width=0.5\textwidth]{ptw-s2.eps} \caption{ The distribution of wetting transition temperatures in the SR2 case with $N=80,120,160,200$. The inset shows the deviation of the wetting transition temperatures.} \end{figure} In the SR1 case, the interfacial free energy at the left wall is fixed, since the bonds near the left wall are nonrandom and fixed. As the system size increases, the maximum number of adjacent line defects increases.
Then the interfacial free energy for the interface pinned at the group with the most adjacent line defects converges to the limit of infinitely many line defects. Therefore the distribution width $\delta T_w$ should decrease as the system size $N$ increases. In Fig. 12, the distribution width $\delta T_w$ is $1.973\times 10^{-2}$, $1.648\times 10^{-2}$, $1.467\times 10^{-2}$, $1.282\times 10^{-2}$ for $N=80,120,160,200$ respectively. The deviation of $T_w$ indeed shows a trend of converging to zero as the lattice size increases. On the contrary, in the SR2 case the interfacial free energy at the left wall does not converge as the system size increases, so the distribution width $\delta T_w(N)$ should not decrease. The numerical results for SR2 are shown in Fig. 13, in which the distribution width $\delta T_w$ is $3.197\times 10^{-2}$, $3.342 \times 10^{-2}$, $3.384 \times 10^{-2}$, $3.320\times 10^{-2}$ for $N=80,120,160,200$ respectively. It indeed does not decrease as the lattice size increases. The numerical results on the two semi-random models are consistent with our expectation. \section{DISCUSSION} It is very unusual for the wetting transition temperature to be sample dependent. For the usual phase transitions in disordered systems, the phase transition temperatures converge to a limit as the system size goes to infinity. For the usual phase transition, the free energy is related to the whole system. As first argued by Brout \cite{brout}, we may divide the system into $n$ large subsystems (much larger than the correlation length). If we assume that the coupling between neighboring subsystems is negligible, then the value of any density of an extensive quantity over the whole sample is equal to the average of the (independent) values of this quantity over the subsystems. The pseudo-phase-transition temperature fluctuates from sample to sample due to finite-size effects.
However, as the system size goes to infinity, the pseudo-phase-transition temperatures should converge to a limit $T_C(\infty)$ \cite{aharony}. In the present wetting transition the interfacial free energy is related only to the left wall and to the group with the most adjacent line defects. Obviously, the present model cannot be divided into two similar subsystems in the horizontal direction. If the left and right walls are separated, there is no wetting transition. The groups of adjacent line defects are the so-called rare regions \cite{vojta}. Near the critical point of the McCoy-Wu model, the rare regions dominate the phase transition \cite{vojta,fisher}. In this wetting transition, the situation is more extreme: only the largest rare region matters. In fact, the wetting transition in random-bond systems has also been studied extensively \cite{kadar,lipowsky,kadar1,lipowsky1,huang,wuttke}. However, in these previous studies the random bonds are not correlated. In the McCoy-Wu Ising model the random bonds are perfectly correlated in one direction \cite{mccoy}. This should be the main reason that makes this transition so different. Because the McCoy-Wu model is equivalent to the one-dimensional random transverse field quantum Ising model \cite{fisher}, it is expected that a similar wetting transition exists in the one-dimensional random transverse field quantum Ising model. Quantum Ising chains with boundary fields have been studied by Campastrini {\sl et al.} \cite{campastrini}. There is a magnet-to-kink transition similar to the critical wetting transition in the Abraham model \cite{abraham}. The random quantum Ising chain with boundary fields is the quantum version of the wetting transition in the McCoy-Wu model. It is a pleasure to thank Professor Kurt Binder for useful discussions. The author also thanks Professor Wenan Guo and the SGI in the Department of Physics at Beijing Normal University for providing computing time.
\section{Introduction}\label{sec1} Fluorescence is a widely used optical method for chemical and biological detection~\cite{Shcheslavskiy:18,giljohann2009}; however, its performance and efficiency need to be enhanced, especially in sensing and imaging applications~\cite{ribeiro2017artefact,moerland2013shaping,zhao2014gold,wang2015plasmon}. Plasmonics appears to be the best method for tailoring and enhancing fluorescence emission~\cite{Stockman2018,bauch2014}. It is known that light can be confined in the close vicinity of metallic surfaces or nano-particles due to the coupling between electromagnetic (EM) waves and oscillations of electrical charges at the surface. The idea is to employ this interaction, which is referred to as surface plasmon resonance (SPR), in such a way that a large enhancement in the optical density of states is obtained in the neighborhood of fluorophores. This is an effective means of elevating the excitation rate and raising the quantum yield, as well as controlling the angular distribution of the fluorescence emission~\cite{kwon2008surface,kinkhabwala2009large,aouani2011plasmonic,lozano2013plasmonics,langguth2013plasmonic}. Although the presence of a fluorophore in the vicinity of metallic structures leads to the above-mentioned appealing features, this adjacency may also increase the probability of quenching and non-radiative energy loss for the excited fluorophore~\cite{anger2006enhancement,pons2007quenching,li2009fluorescence,reineck2013distance}. This inadequacy can be resolved by means of a hybrid photonic-plasmonic structure (HPPS), which is created by attaching a photonic crystal (PC) to the metal surface (plasmonic structure). In other words, coupling between the surface plasmon polaritons (SPPs), i.e.
the EM waves confined along the metal-dielectric interface, and the guided or trapped modes of the photonic crystal leads to the striking features of increasing the propagation length of the SPPs, confining light at a deep-subwavelength scale, and supporting highly guided modes and cavity resonances in the PC structure~\cite{romanov2011,yang2011,zhang2012,schokker2017}. Moreover, by using an HPPS, the effective length of the evanescent normal component of the SPPs can extend to tens of nanometers above the metal surface; therefore, the enhancement and directionality are obtained without any significant quenching~\cite{zhu2012broadband,lopez2010,ding2013spectral}. On the other hand, due to the weak correlation between the spontaneous emissions of fluorophores, the resulting light is isotropic in space and broad in spectrum, which in turn lowers the detectability. Therefore, it is potentially advantageous to utilize a technique that creates coherent light from such spontaneous emitters. It must be noted that a desirable technique should also prevent any significant loss in the emission intensity~\cite{raghunathan2012,greffet2002,de2012conversion}. In order to address these requirements, Shi and his coworkers~\cite{shi2014spatial,shi2014coherent} proposed using an HPPS whose optical attributes had previously been investigated by the group~\cite{shi2010optical}. It is shown that the interaction between the leaky modes of the HPPS (those that can escape from propagating along the metal-dielectric interface) and the fluorescent molecules successfully provides both temporal and spatial coherence for the fluorescence emission. Such a technique can play an important role in different fluorescence applications. However, further investigations are needed to obtain a more efficient structure with an inherent capability to be adjusted for different spontaneous emitters. This is the aim of the present work.
Among the different options for the photonic-crystal part of an HPPS, anodic aluminium oxide (AAO) presents a special feature: the vertical cylindrical cavities in the structure facilitate one's control over the direction of propagation of the leaky modes. This provides directionality of the emitted light and therefore can enhance the spatial coherence. Moreover, the frequency of the leaky modes can be adjusted by adapting the geometry of the cavities. In the literature, the capability of AAO in enhancing the intensity of fluorescence emission has also been addressed~\cite{li2012aluminum}. Nevertheless, AAO has not yet been used in forming an HPPS to achieve coherent fluorescence emission. In this work, a robust and easy-to-fabricate HPPS using the AAO structure is proposed that substantially enhances the coherence, while bringing the flexibility required to adjust it for spontaneous emitters with different excitation/emission frequencies. Here, the previously proposed method~\cite{ARXIV}, which enables finite-difference time-domain simulation of fluorescent molecules, is applied to a novel HPPS that is constructed by placing an inverse photonic crystal, made of pore-opened\cite{Bruschi2015} anodic aluminium oxide (AAO), on top of a 200 nm thick silver (Ag) layer (Fig.\ref{fig:AAOstr}). The cylindrical holes of the PC are filled with S101-doped PVA, which also forms a layer of 50 nm thickness on top of the AAO. The structural parameters, i.e., the thickness of the AAO layer $h=500$ nm, the diameter of the cylindrical holes $d=200$ nm, and the center-to-center distance between neighboring cylinders $a=250$ nm, have been chosen so that the structure is not only easy to fabricate, but also significantly enhances the coherence, as seen in the rest of this paper.
\begin{figure}[t] \hspace{5ex}\includegraphics[width=0.99\columnwidth]{anodic.pdf}\\ \caption{\label{fig:AAOstr} The overall 3D view of the proposed hybrid anodic structure (left) and its $yz$ cross-section (right).} \end{figure} Similar to the previous test case, the proposed HPPS is illuminated by an incident continuous EM wave with a wavelength of 575 nm from its side that is perpendicular to the $y$-axis (Fig.~\ref{fig:AAOstr}), and the resulting vertical emission (along the $z$-axis) is recorded. In this way, the detected wave is not masked by the incident wave. Moreover, this is a judicious choice of excitation and detection geometry, appropriate for imaging and LED applications. In Fig.~\ref{fig:AAOFlu}, the recorded field is shown in the frequency domain. The spectrum peaks at $\lambda=616$ nm with a FWHM of $|\Delta \lambda| = 3$~nm. On the other hand, using the TCF, a coherence time of $\tau_c=2.1\times10^{-13}$ s, or equivalently $|\Delta \lambda| = 4$~nm, is calculated. It is observed that the presence of the proposed HPPS leads to an almost eight times greater coherence length for the fluorescence emission. \begin{figure}[t] \includegraphics[width=0.99\columnwidth]{FluoEmi_Vis.pdf} \caption{\label{fig:AAOFlu}Wavelength spectrum of the detected wave obtained for the proposed HPPS. The emission peak is observed at $\lambda=616$ nm, while a lower peak occurs at $\lambda=575$ nm, corresponding to the unabsorbed portion of the excitation wave. Shown in the inset is the fringe visibility as a function of the separation distance of the double slits.} \end{figure} In order to identify the underlying cause of such a narrow emission bandwidth, all modes propagating horizontally (in the $xy$ plane) in the PC of the proposed HPPS are also calculated and shown in Fig.~\ref{fig:HAAOmode}. It is known that p-polarized modes cannot propagate vertically, since the associated electric field is aligned with the $z$-axis.
On the other hand, the electric field is directed horizontally for s-polarized modes and thus, for each wavelength at which horizontal propagation is prevented by the HPPS, a vertical reflection is possible. Therefore, considering that the excitation input wave is aligned with the $y$-axis, one should focus on the $y$-directed s-polarized propagation graph in Fig.~\ref{fig:HAAOmode}, which reveals a trough at $\lambda=616$ nm. In this sense, the portion of the emission band of the S101 molecules that coincides with this trough is expected to reflect vertically, while the rest propagates along the metal surface. Since for S101 molecules the excitation band overlaps partially with the emission band, the horizontally propagating modes can excite the neighboring fluorophores. This synchronizes the transitions of the molecules, and therefore a spatial coherence is also expected. The visibilities obtained as results of the double-slit tests are shown in Fig.~\ref{fig:AAOFlu} (inset), which clearly shows a spatial coherence width greater than $10\:\mu$m. This coherence width is proportional to the propagation length of the horizontally propagating modes, which is determined by the imaginary part of the corresponding wave-vectors. One can state that in the near field the plasmonic modes are responsible for the coherence, while the s-polarized modes are capable of transferring this coherence to the far field. It must also be noted that conversion between the s- and p-polarized modes is possible due to the random orientation of the dipole moments of the molecules and internal reflections inside the cavities. It is worth noting that the wider the spectral range of the horizontally propagating modes, the larger the portion of energy absorbed by the structure.
This is the case for the proposed HPPS, as seen in Fig.~\ref{fig:HAAOmode}, where only a small portion of the modes (corresponding to the troughs) is prevented from propagating horizontally, and thus a large portion of the incident energy goes into exciting the fluorophores (see Fig.~\ref{fig:AAOFlu}). This is another desirable feature of the proposed HPPS compared with previously proposed structures. \begin{figure}[t] \includegraphics[width=0.99\columnwidth]{TrPCAg.pdf} \caption{\label{fig:HAAOmode}Wavelength spectrum of horizontally propagating modes in the PC of the proposed HPPS, obtained for different directions of propagation in the $xy$-plane with either s- or p-polarization.} \end{figure}
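The equivalence quoted earlier between the coherence time $\tau_c=2.1\times10^{-13}$ s and $|\Delta\lambda|=4$~nm at $\lambda=616$ nm is reproduced if a Gaussian line shape is assumed, for which $\Delta\nu\,\tau_c=\sqrt{2\ln 2/\pi}$; the line-shape convention is our assumption, not stated in the text:

```python
import math

c = 2.998e8       # speed of light, m/s
lam = 616e-9      # peak wavelength, m
tau_c = 2.1e-13   # coherence time from the TCF, s

# Gaussian line-shape convention: d_nu * tau_c = sqrt(2 ln 2 / pi)
d_nu = math.sqrt(2 * math.log(2) / math.pi) / tau_c

# convert the frequency FWHM to a wavelength FWHM, in nm
d_lam_nm = d_nu * lam**2 / c * 1e9
```

With these numbers the conversion gives approximately 4 nm, matching the value quoted from the TCF analysis.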
\section{Introduction} The motion of a compressible isentropic perfect fluid with self-gravitation is modeled by the Euler-Poisson equations in three space dimensions (cf. \cite{ch}): \begin{equation}\label{1.1} \begin{cases} &\rho_t+\nabla\cdot(\rho {\bf v})=0,\\ &(\rho {{\bf v}})_t+\nabla\cdot(\rho {\bf v}\otimes {\bf v})+\nabla p(\rho)=-\rho \nabla \Phi,\\ &\Delta \Phi=4\pi \rho. \end{cases} \end{equation} Here $\rho$, ${\bf v}=(v_1, v_2, v_3)$, $p(\rho)$ and $\Phi$ denote the density, velocity, pressure and gravitational potential, respectively. The gravitational potential is given by \begin{equation}\label{phi}\Phi(x)=-\int_{\RR^3} \frac{\rho(y)}{|x-y|}dy =-\rho\ast \frac{1}{|x|},\end{equation} where $\ast$ denotes convolution. The momentum $\rho{\bf v}$ is denoted by ${\bf m}=(m_1, m_2, m_3)$. System (\ref{1.1}) is used to model the evolution of a Newtonian gaseous star (\cite{ch}). In the study of time-independent solutions of system (\ref{1.1}), there are two important cases: non-rotating stars and rotating stars. A non-rotating star solution is a time-independent spherically symmetric solution of the form $(\rho_N, 0, \Phi_N)(x)$ (the velocity is zero), with $\Phi_N(x)=-\rho_N\ast \frac{1}{|x|}$. A rotating star solution models a star rotating around the $x_3$-axis ($x=(x_1,\ x_2,\ x_3)$) with prescribed angular momentum (per unit mass) or angular velocity. The existence and properties of stationary non-rotating star solutions are classical (cf. \cite{ch}). In contrast, the study of rotating stars is more challenging and of significance in both astrophysics and mathematics. A rigorous mathematical theory for rotating stars of compressible fluids was initiated by Auchmuty \& Beals (\cite{AB}) in 1971.
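Equation (\ref{phi}) can be checked numerically in a simple case. The sketch below Monte-Carlo integrates $\Phi(x)=-\int\rho(y)/|x-y|\,dy$ for a uniform ball of radius $R$ and compares with Newton's exact exterior value $-M/|x|$; the test geometry and sample count are illustrative choices:

```python
import math
import random

random.seed(1)
R, rho0 = 1.0, 1.0
M = 4.0 / 3.0 * math.pi * R**3 * rho0   # total mass of the ball
x = (2.0 * R, 0.0, 0.0)                 # field point outside the ball

# Monte Carlo estimate of Phi(x) = -int_{|y|<R} rho0 / |x - y| dy,
# sampling y uniformly in the ball by rejection from the cube
n, acc, count = 200_000, 0.0, 0
while count < n:
    y = tuple(random.uniform(-R, R) for _ in range(3))
    if sum(v * v for v in y) <= R * R:
        d = math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))
        acc += 1.0 / d
        count += 1

vol = 4.0 / 3.0 * math.pi * R**3
phi_mc = -rho0 * vol * acc / n
phi_exact = -M / (2.0 * R)   # Newton: -M/|x| outside a uniform ball
```

The agreement to well under a percent illustrates that, outside the support of $\rho$, the convolution reduces to the point-mass potential.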
The existence and properties of rotating star solutions were obtained by Auchmuty \& Beals (\cite{AB}), Auchmuty (\cite{Au}), Caffarelli \& Friedman (\cite{CF}), Friedman \& Turkington (\cite{FT1}, \cite{FT2}), Li (\cite{Li1}), Chanillo \& Li (\cite{Li2}), and Luo \& Smoller (\cite{LS}). In \cite{mc}, McCann proved an existence result for rotating binary stars. The existence of rotating star solutions of compressible fluids was first obtained by Auchmuty \& Beals (\cite{AB}), who formulated this problem as the variational problem of finding a minimizer of the energy functional $F(\rho)$ (which will be defined in Section 2) in the class of functions $W_{M, S}=W_M\cap W_S$, where $W_M$ is the set of integrable functions $\rho: \RR^3\to \RR^+$ which are a.e. non-negative, axi-symmetric, of total mass $M=\int_{\RR^3}\rho(x)dx$, and have finite rotational kinetic energy (precise statements can be found in Section 2). $W_S$ is defined by \begin{equation}\label{W2'} W_S=\{\rho: \RR^3\to \RR^+,\ \rho(x_1, x_2, -x_3)=\rho(x_1, x_2, x_3),\ x_i\in \RR,\ i=1, 2, 3\}. \end{equation} In this paper, we first give a proof of the existence of a minimizer of the energy functional $F(\rho)$ in the wider class of functions $W_M$. Our proof is quite different from that in \cite{AB}. As in \cite{AB}, the main difficulty in the proof is the loss of compactness due to the unboundedness of $\RR^3$. The method in \cite{AB} is to minimize the functional $F$ on $W_R=\{\rho\in W_{M,S}:\ \rho(x)=0\ \mbox{for}\ |x|>R\}$ and to obtain some uniform estimates on the support of the minimizer. Our method is to use the concentration-compactness method due to P. L. Lions (\cite{lions}), which was also used in \cite{rein1} to prove the existence of non-rotating star solutions. The reason that we seek minimizers in $W_M$ instead of $W_{M, S}$ is that we want to discuss the full stability problem dynamically in a more general context, with fewer restrictions on the symmetry of solutions.
The dynamical stability of these steady-state solutions is an important question. The linearized stability and instability of non-rotating and rotating stars were discussed by Lin (\cite{lin}), Lebovitz (\cite{lebovitz}) and Lebovitz \& Lifschitz (\cite{lebovitz1}). The nonlinear dynamical stability of {\it non-rotating} star solutions was studied by Rein (\cite{Rein}) via an energy-Casimir technique. It should be mentioned here that the energy-Casimir technique was used in \cite{guo1} to study the stability problem in stellar dynamics. Roughly speaking, for $p(\rho)=\rho^{\gamma}$, the result in \cite{Rein} says that if the initial data of the Euler-Poisson equations (\ref{1.1}) are close to the non-rotating star solution in some topology, then the solution of (\ref{1.1}) with the same total mass as the non-rotating star stays close to the non-rotating solution in the same topology, as long as the solution preserves both the energy $E(t)$, defined by \begin{equation}\label{energy1} E(t)=\int_{\RR^3}\left(\frac{p(\rho)}{\gamma-1}+\frac{1}{2}\rho|{\bf v}|^2\right)(x, t)dx-\frac{1}{8\pi}\int_{\RR^3}|\nabla \Phi|^2(x, t)dx,\end{equation} and the total mass $\int_{\RR^3}\rho(x, t)dx$. An interesting feature of the energy is that it has both positive and negative parts, making the analysis difficult. For solutions of (\ref{1.1}) without shock waves, the energy is conserved. For solutions with shock waves, the energy $E(t)$ is non-increasing due to the entropy condition associated with shock waves (cf. \cite{lax} and \cite{smoller}). In this paper we extend the above nonlinear stability results to {\it rotating} stars. As in the non-rotating star case (\cite{Rein}), our nonlinear stability result is in the class of solutions having the same total mass as that of the rotating steady-state solution.
For solutions with different total masses, we investigate the nonlinear dynamical stability of a solution $\bar u=(\bar \rho, \bar {\bf v}, \bar \Phi)\in W^{1, \infty}_{loc}$ (which includes both rotating and non-rotating stars) in the context of weak entropy solutions, for more general perturbations not necessarily having the same mass as $\bar u$, under some assumptions on the $L^{\infty}$-norm and the support of the solutions. This is achieved by using the technique of relative entropies together with a careful analysis of the gravitational energy, i.e., the negative part of the total energy $E(t)$. It should be mentioned here that the method of relative entropies was used by Dafermos (\cite{dafermos}) and Chen \& Frid (\cite{chen}) to study the stability and behavior of solutions of hyperbolic conservation laws. The main difficulty in applying this method to the Euler-Poisson equations (\ref{1.1}) is again due to the non-definiteness of the energy density. We also give a uniform a priori estimate for weak solutions of the Cauchy problem for (\ref{1.1}) satisfying the entropy conditions. This paper is organized as follows: in Section 2, we prove the existence of rotating star solutions, which are the minimizers of an energy functional $F$ in $W_M$ with prescribed total mass and angular momentum and finite rotational kinetic energy. We also derive some properties of the minimizing sequence. These properties are interesting in their own right, and are important for our stability analysis. In Section 3, we prove our nonlinear stability result for rotating stars. Section 4 is devoted to the stability result for entropy weak solutions, and in Section 5, we obtain uniform in time a priori estimates for entropy weak solutions. Throughout this paper, for simplicity of presentation, we assume that the pressure function $p(\rho)$ satisfies the usual $\gamma$-law, \begin{equation} p(\rho)=\rho^{\gamma},\ \rho\ge 0, \end{equation} for some $\gamma>1$. 
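For later orientation, we record an elementary identity satisfied by the $\gamma$-law (a direct computation, included here only for the reader's convenience): writing $A(\rho)=\frac{p(\rho)}{\gamma-1}=\frac{\rho^{\gamma}}{\gamma-1}$ for the internal energy density appearing in (\ref{energy1}), we have $$\rho A'(\rho)-A(\rho)=\frac{\gamma\rho^{\gamma}-\rho^{\gamma}}{\gamma-1}=\rho^{\gamma}=p(\rho),$$ the standard relation between internal energy and pressure, which is implicit both in the variational formulation of Section 2 and in the relative entropy estimates described above.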
We now introduce some notation which will be used throughout this paper. We use $\int$ to denote $\int_{\RR^3}$, and use $||\cdot||_q$ to denote $||\cdot||_{L^q(\RR^3)}$. For any point $x=(x_1, x_2, x_3)\in \RR^3$, let \begin{equation}\label{1.5'} r(x)=\sqrt{x_1^2+x_2^2}, \ z(x)=x_3,\ B_R(x)=\{y\in \RR^3, \ |y-x|<R\}.\end{equation} For any function $f\in L^{1}(\RR^3)$, we define the operator $B$ by \begin{equation}\label{B} B f(x)=\int \frac{f(y)}{|x-y|}dy =f\ast \frac{1}{|x|}.\end{equation} Also, we use $\nabla$ to denote the spatial gradient, i.e., $\nabla=\nabla_x=(\partial_{x_1},\ \partial_{x_2}, \ \partial_{x_3})$. $C$ will denote a generic positive constant. \section{Existence of Rotating Star Solutions} A rotating star solution $(\tilde \rho, \tilde {\bf v},\tilde \Phi)(r, z)$, where $r=\sqrt{x_1^2+x_2^2}$ and $z=x_3$, $x=(x_1, x_2, x_3)\in \RR^3,$ is an {\it axi-symmetric} time-independent solution of system (\ref{1.1}), which models a star rotating about the $x_3$-axis. Suppose the angular momentum (per unit mass) $J(m_{\tilde \rho}(r))$ is prescribed, where \begin{equation} m_{\tilde \rho}(r)=\int_{\sqrt{x_1^2+x_2^2}<r}\tilde \rho(x)dx=\int_0^r 2\pi s\int_{-\infty}^{+\infty}\tilde \rho(s, z)\,dz\,ds, \end{equation} is the mass in the cylinder $\{x=(x_1, x_2, x_3): \sqrt{x_1^2+x_2^2}<r\}$, and $J$ is a given function. In this case, the velocity field $\tilde {\bf v}(x)=(v_1, v_2, v_3)$ takes the form $$\tilde {\bf v}(x)=(-\frac{x_2 J(m_{\tilde \rho}(r))}{r^2}, \frac{x_1 J(m_{\tilde \rho}(r))}{r^2}, 0). 
$$ Substituting this in (\ref{1.1}), we find that $\tilde \rho(r, z)$ satisfies the following two equations: \begin{equation}\label{03}\begin{cases} &\partial_r p(\tilde \rho)=\tilde \rho\partial_r (B \tilde \rho)+\tilde \rho L(m_{\tilde \rho}(r))r^{-3}, \\ &\partial_z p(\tilde \rho)=\tilde \rho\partial_z (B \tilde \rho), \end{cases}\end{equation} where the operator $B$ is defined in (\ref{B}), and $$ L(m_{\tilde \rho})=J^2(m_{\tilde \rho})$$ is the square of the angular momentum. For any function $\rho\ge 0$ and $\gamma>1$, we define \begin{equation}\label{z1} A(\rho)=\frac{p(\rho)}{\gamma-1}=\frac{\rho^\gamma}{\gamma-1}.\end{equation} It is easy to verify (cf. \cite{AB}) that (\ref{03}) is equivalent to \begin{equation}\label{z2} A'(\tilde \rho(x))+\int_{r(x)}^{\infty}L(m_{\tilde \rho}(s))s^{-3}ds-B\tilde \rho(x)=\lambda, \qquad {\rm where~} \tilde \rho(x)>0,\end{equation} for some constant $\lambda$. Here $r(x)$ and $z(x)$ are as in (\ref{1.5'}). In \cite{AB}, Auchmuty and Beals formulated the problem of finding solutions of (\ref{z2}) as the following variational problem. First, let $M$ be a positive constant and let $W_M$ be the set of functions $\rho$ defined by (cf. (\ref{W2'})), \begin{align*}W_M=&\{\rho: \RR^3\to \RR,\ \rho {\rm~is~axisymmetric, ~}\rho\ge 0\ a.e.,\ \rho\in L^1(\RR^3)\cap L^{\gamma}(\RR^3),\\ &\int\rho(x)dx=M, \ \int\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}dx<+\infty\}.\end{align*} For $\rho\in W_M$, we define the {\bf energy functional} $F$ by \begin{align}\label{E} F(\rho)&=\int [A(\rho(x))+\frac{1}{2}\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}-\frac{1}{2}\rho(x)\cdot B\rho(x)]dx\notag\\ &=\int [A(\rho(x))+\frac{1}{2}\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}]dx-\frac{1}{8\pi}||\nabla B\rho||_2^2. \end{align} ($\frac{1}{8\pi}||\nabla B\rho||_2^2<+\infty$ follows from $\rho\in L^1(\RR^3)\cap L^{\gamma}(\RR^3)$ and Lemma \ref{lem2.2} if $\gamma\ge 4/3$.) 
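For the reader's convenience, we sketch why (\ref{z2}) implies (\ref{03}); the computation is standard and is included only for completeness. Differentiating (\ref{z2}) with respect to $r=r(x)$ on the set where $\tilde \rho>0$, with $m_{\tilde \rho}$ regarded as a fixed function of $s$, gives $$A''(\tilde \rho)\partial_r\tilde \rho-L(m_{\tilde \rho}(r))r^{-3}-\partial_r(B\tilde \rho)=0.$$ Multiplying by $\tilde \rho$ and using $$\tilde \rho A''(\tilde \rho)\partial_r\tilde \rho=\gamma\tilde \rho^{\gamma-1}\partial_r\tilde \rho=\partial_r p(\tilde \rho),$$ which follows from (\ref{z1}), yields the first equation of (\ref{03}); differentiating (\ref{z2}) with respect to $z$ gives the second equation, since the integral term does not depend on $z$.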
In (\ref{E}), the first term denotes the potential energy, the middle term denotes the rotational kinetic energy and the third term is the gravitational energy. Assume that the function $L\in C^1[0, M]$ and satisfies \begin{equation}\label{L} \ L(0)=0,\ L(m)\ge 0, \ for~ 0\le m\le M.\end{equation} Auchmuty and Beals (cf. \cite{AB}) proved the existence of a minimizer of the functional $F(\rho)$ in the class of functions $W_{M, S}=W_M\cap W_S$, where \begin{equation}\label{W2} W_S=\{ \rho: \RR^3\to \RR,\ \rho(x_1, x_2, -x_3)=\rho(x_1, x_2, x_3),\ x_i\in \RR, i=1,\ 2, \ 3\}. \end{equation} Their result is given in the following theorem. \begin{thm}\label{aa1}(\cite{AB}). If $\gamma>4/3$ and (\ref{L}) holds, then there exists a function $\hat \rho(x)\in W_{M, S}$ which minimizes $F(\rho)$ in $W_{M, S}$. Moreover, if \begin{equation}\label{G} G=\{x\in \RR^3:\ \hat \rho(x)>0\},\end{equation} then $\bar G$ is a compact set in $\RR^3$, and $\hat \rho\in C^1(G)\cap C^{\beta}(\RR^3)$ for some $0<\beta<1$. Furthermore, there exists a constant $\mu<0$ such that \begin{equation}\label{lambda} \begin{cases} & A'(\hat \rho(x))+\int_{r(x)}^{\infty}L(m_{\hat \rho}(s))s^{-3}ds-B\hat \rho(x)=\mu, \qquad x\in G,\\ &\int_{r(x)}^{\infty}L(m_{\hat \rho}(s))s^{-3}ds-B\hat \rho(x)\ge \mu, \qquad x\in \RR^3-G.\end{cases}\end{equation} \end{thm} In this paper, we are interested in the minimizer of the functional $F$ in the {\it larger} class $W_M$. By the same argument as in \cite{AB}, it is easy to prove the following theorem on the regularity of the minimizer. \begin{thm}\label{ab} Let $\tilde \rho$ be a minimizer of the energy functional $F$ in $W_M$ and let \begin{equation}\label{G1} \Gamma=\{x\in \RR^3:\ \tilde \rho(x)>0\}.\end{equation} If $\gamma>6/5$, then $\tilde \rho\in C(\RR^3)\cap C^1(\Gamma)$. 
Moreover, there exists a constant $\lambda$ such that \begin{equation}\label{lambda1} \begin{cases} & A'(\tilde \rho(x))+\int_{r(x)}^{\infty}L(m_{\tilde \rho}(s))s^{-3}ds-B\tilde \rho(x)=\lambda, \qquad x\in \Gamma,\\ &\int_{r(x)}^{\infty}L(m_{\tilde \rho}(s))s^{-3}ds-B\tilde \rho(x)\ge \lambda, \qquad x\in \RR^3-\Gamma.\end{cases}\end{equation} \end{thm} We call such a minimizer $\tilde \rho$ a {\it rotating star} solution with total mass $M$ and angular momentum $\sqrt{ L(m)}$. In this paper, we prove the existence of a minimizer for the functional $F$ in the class $W_M$. For this purpose, in addition to (\ref{L}), we require that $L$ satisfies the following conditions: \begin{equation}\label{L1} L(a m)\ge a^{4/3}L(m), \ 0<a\le 1,\ 0\le m\le M,\end{equation} \begin{equation}\label{L2'} L'(m)\ge 0,\qquad 0\le m\le M.\end{equation} \begin{rem} Condition (\ref{L2'}) is called the S{\"o}lberg stability criterion, see [33, Section 7.3]. This condition was also used by Auchmuty in \cite{Au} for the study of global branching of rotating star solutions.\end{rem} Our main result in this section is the following theorem. \begin{thm}\label{aa} Suppose that $\gamma>4/3$ and the square of the angular momentum $L$ satisfies (\ref{L}), (\ref{L1}) and (\ref{L2'}). 
Then the following hold:\\ (1) the functional $F$ is bounded below on $W_M$ and $\inf_{W_M} F(\rho)<0$,\\ (2) if $\{\rho^i\}\subset W_M $ is a minimizing sequence for the functional $F$, then there exist a sequence of vertical shifts $a_i{\bf e_3}$ ($a_i\in \RR$, ${\bf e_3}=(0, 0, 1)$), a subsequence of $\{\rho^i\}$ (still labeled $\{\rho^i\}$), and a function $\tilde \rho\in W_M$, such that for any $\epsilon>0$ there exists $R>0$ with \begin{equation}\label{2.15} \int_{a_i{\bf e_3}+B_R(0)}\rho^i(x)dx\ge M-\epsilon, \quad i\in \mathbb{N},\end{equation} and \begin{equation}\label{2.16} T\rho^i(x):=\rho^i(x+a_i{\bf e_3})\rightharpoonup \tilde \rho,\ weakly~in~L^{\gamma}(\RR^3),\ as\ i\to \infty.\end{equation} \noindent Moreover, (3) \begin{equation}\label{2.17}\nabla B (T\rho^i)\to \nabla B(\tilde \rho)~ strongly~ in~L^2(\RR^3),\ as\ i\to \infty. \end{equation} (4) $\tilde \rho$ is a minimizer of $F$ in $W_M$. \end{thm} Thus $\tilde \rho$ is a rotating star solution with total mass $M$ and angular momentum $\sqrt {L}$. \begin{rem} It is easy to verify that the functional $F$ is invariant under any vertical shift, i.e., if $\rho(\cdot)\in W_M$, then $\bar \rho(x):=\rho(x+a{\bf e_3})\in W_M$ and $F(\bar \rho)=F(\rho)$ for any $a\in \RR$. Therefore, if $\{\rho^i\}$ is a minimizing sequence of $F$ in $W_M$, then $\{T\rho^i\}$ defined in (\ref{2.16}) is also a minimizing sequence in $W_M$. \end{rem} \begin{rem} In \cite{FT1}, \cite{FT2} and \cite{Li2}, diameter estimates for rotating star solutions with the symmetry $\tilde \rho(r,-z)= \tilde \rho(r, z)$ were obtained. The ideas and techniques developed in \cite{FT1}, \cite{FT2} and \cite{Li2} can also be applied to obtain diameter estimates for the rotating star solutions in Theorem \ref{aa}. Due to the length of this paper, we leave this issue for future study. \end{rem} Theorem \ref{aa} is proved in a sequence of lemmas. We first give some inequalities which will be used later. 
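Before turning to these lemmas, we sketch, for completeness, the verification of the shift invariance asserted in the first remark above. For $\bar \rho(x)=\rho(x+a{\bf e_3})$ we have $r(x+a{\bf e_3})=r(x)$, so $$m_{\bar \rho}(r)=\int_{r(x)<r}\rho(x+a{\bf e_3})dx=m_{\rho}(r),\qquad \int A(\bar \rho)dx=\int A(\rho)dx,$$ and, after the change of variables $x\to x-a{\bf e_3}$, $y\to y-a{\bf e_3}$, $$\int \bar \rho B\bar \rho dx=\int\int\frac{\rho(x+a{\bf e_3})\rho(y+a{\bf e_3})}{|x-y|}dydx=\int \rho B\rho dx.$$ Hence every term of $F$ in (\ref{E}) is unchanged, and $F(\bar \rho)=F(\rho)$.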
We begin with Young's inequality (see \cite{GT}, p.~146). \begin{lem} If $f\in L^p\cap L^r$, $1\le p<q<r\le +\infty$, then \begin{equation}\label{young} ||f||_q\le ||f||_p^a||f||_r^{1-a}, \qquad a=\frac{q^{-1}-r^{-1}}{p^{-1}-r^{-1}}.\end{equation} \end{lem} The following two lemmas are proved in \cite{AB}. \begin{lem}\label{bf1'} Suppose the function $f\in L^1(\RR^3)\cap L^{q}(\RR^3)$. If $1<q\le 3/2$, then $Bf:=f\ast\frac{1}{|x|}$ is in $L^{r}(\RR^3)$ for $3<r<3q/(3-2q)$, and \begin{equation}\label{bf1} || Bf||_r\le C \left(||f||_1^b||f||_q^{1-b}+||f||_1^c||f||_q^{1-c}\right),\end{equation} for some constants $C>0$, $0<b<1$, and $0<c<1$. If $q>3/2$, then $Bf(x)$ is a bounded continuous function, and satisfies (\ref{bf1}) with $r=\infty.$ \end{lem} \begin{lem}\label{lem2.2} For any function $f\in L^1(\RR^3)\cap L^{\gamma}(\RR^3)$, if $\gamma\ge 4/3$, then $\nabla Bf\in L^2(\RR^3)$. Moreover, \begin{equation}\label{bf2} |\int f(x)Bf(x)dx|=\frac{1}{4\pi}||\nabla Bf||_2^2\le C \left(\int|f|^{4/3}(x)dx\right)\left(\int|f|(x)dx\right)^{2/3},\end{equation} for some constant $C$. \end{lem} Throughout this paper, we assume that the function $L$, the square of the angular momentum, satisfies conditions (\ref{L}), (\ref{L1}) and (\ref{L2'}). Let \begin{equation}\label{fm} f_M=\inf_{\rho\in W_M}F(\rho).\end{equation} We begin our analysis with the following lemma. \begin{lem}\label{lem4.2} Suppose $\gamma>4/3$. If $\rho\in W_M$, then there exist two positive constants $C_1$ and $C_2$ depending only on $\gamma$ and $M$ such that \begin{equation}\label{us} \int [\rho^{\gamma}(x)+\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}] dx\le C_1 F(\rho)+C_2.\end{equation} This implies $$f_M>-\infty, $$ where $f_M$ is defined in (\ref{fm}). 
\end{lem} \begin{proof} Using (\ref{bf2}), we have, for $\rho\in W_M$, \begin{align}\label{00} F(\rho)&=\int [A(\rho)+\frac{1}{2}\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}-\frac{1}{2}\rho B\rho]dx\notag\\ &\ge \int [A(\rho)+\frac{1}{2}\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}]dx -C\int \rho^{4/3}dx(\int \rho dx)^{2/3}\notag\\ &=\int [A(\rho)+\frac{1}{2}\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}]dx-CM^{2/3}\int \rho^{4/3}dx. \end{align} Taking $p=1$, $q=4/3$, $r=\gamma$, and $a=\frac{\frac{3}{4}\gamma-1}{\gamma-1}$ in Young's inequality (\ref{young}), we obtain \begin{equation} ||\rho||_{4/3}\le ||\rho||_1^a||\rho||_{\gamma}^{1-a}=M^a||\rho||_{\gamma}^{1-a}.\end{equation} That is, \begin{equation}\label{haha}\int\rho^{4/3}dx\le M^{\frac{4}{3}a}(\int\rho^\gamma dx)^b, \end{equation} where $b=\frac{1}{3(\gamma-1)}$. Since $\gamma>4/3$, we have $0<b<1$. Therefore, (\ref{00}) and (\ref{haha}) imply \begin{equation}\label{0001} \int [A(\rho)+\frac{1}{2}\frac{\rho(x)L(m_{\rho}(r(x)))}{r(x)^2}]dx\le F(\rho)+C(\gamma-1)^bM^{\frac{4}{3}a+\frac{2}{3}}(\int A(\rho)dx)^b.\end{equation} Using (\ref{haha}) and the inequality (cf. \cite{GT}, p.~145) \begin{equation}\label{0002}\alpha\beta\le \epsilon\alpha^s+\epsilon^{-t/s}\beta^t,\end{equation} valid if $s^{-1}+t^{-1}=1$ ($s, t>1$) and $\epsilon>0$, and since $b<1$, we can bound the last term in (\ref{0001}) by $\frac{1}{2}\int A(\rho)dx+C_2$, where $C_2$ is a constant depending only on $M$ and $\gamma$ (we can take $\epsilon=1/2$, $s=1/b$ and $t=(1-s^{-1})^{-1}$ in (\ref{0002}), since $s>1$ due to $0<b<1$). This implies (\ref{us}).\end{proof} We also need the following lemma. \begin{lem}\label{lem4.4} Suppose $\gamma>4/3$. Then \\ (a) $f_M<0$ for every $M>0$,\\ (b) if (\ref{L1}) holds, then $f_{\bar M}\ge (\bar M/M)^{5/3}f_M$ for every $M>\bar M>0$.\end{lem} \begin{proof} It follows from \cite{AB} that there exists $\hat \rho\in W_{M, S}\subset W_M$ such that $F(\hat \rho)=\inf_{\rho\in W_{M, S}}F(\rho)$. 
By Theorem \ref{aa1}, it is easy to verify that the triple $(\hat \rho, \hat {\bf v}, \hat\Phi)$ is a time-independent solution of the Euler-Poisson equations (\ref{1.1}) in the region $G=\{x\in \RR^3:\ \hat \rho(x)>0\},$ where $\hat {\bf v}=(-\frac{x_2 J(m_{\hat \rho}(r))}{r^2}, \frac{x_1 J(m_{\hat \rho}(r))}{r^2}, 0)$ and $\hat \Phi=-B\hat \rho$. Therefore \begin{equation}\label{04}\nabla_x p(\hat \rho)=\hat \rho\nabla_x(B\hat \rho)+\hat \rho L(m_{\hat \rho})r(x)^{-3}{\bf e}_r, \ x\in G, \end{equation} where ${\bf e}_r=(\frac{x_1}{r(x)}, \frac{x_2}{r(x)}, 0)$. Moreover, it is proved in \cite{CF} that the boundary $\partial G$ of $G$ is smooth enough to apply the Gauss-Green formula (cf. \cite{evans}) on $G$. Applying the Gauss-Green formula on $G$ and noting that $\hat \rho|_{\partial G}=0$, we obtain \begin{equation}\label{05} \int_G x\cdot \nabla_x p(\hat \rho)dx=-3\int_G p(\hat \rho)dx=- 3\int p(\hat \rho)dx.\end{equation} By an argument in \cite{Ta} (used also in \cite{LY}), we obtain \begin{equation}\label{jjyy} \int_G x\cdot \hat \rho\nabla_x B\hat \rho dx=-\frac{1}{2}\int_G\hat \rho B\hat \rho dx=-\frac{1}{2}\int\hat \rho B\hat \rho dx. \end{equation} (In fact, this can be verified as follows. Let $$I=\int_G x\cdot \hat \rho\nabla_x B\hat \rho dx=-\int_G \hat \rho(x)\int_{G} \frac{\hat \rho(y)(x-y)\cdot x}{|x-y|^3}dydx.$$ Then \begin{align} I&=-\int_G \hat \rho(x)\int_{G} \frac{\hat \rho(y)(x-y)\cdot (x-y)}{|x-y|^3}dydx-\int_G \hat \rho(x)\int_{G} \frac{\hat \rho(y)(x-y)\cdot y}{|x-y|^3}dydx\notag\\ &=-\int_G\hat \rho(x)\int_G \frac{\hat \rho(y)(x-y)\cdot (x-y)}{|x-y|^3}dydx-I\notag\\ &=-\int_G\hat \rho B\hat \rho dx-I,\end{align} which is (\ref{jjyy}).) 
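The identity (\ref{05}) can be checked in the same direct manner; we include the computation only for completeness. Since ${\rm div}\, x=3$ and $\hat \rho$, and hence $p(\hat \rho)=\hat \rho^{\gamma}$, vanish on $\partial G$, the Gauss-Green formula gives $$\int_G x\cdot \nabla_x p(\hat \rho)dx=\int_{\partial G} p(\hat \rho)\, x\cdot {\bf n}\, dS-\int_G p(\hat \rho)\,{\rm div}\, x\, dx=-3\int_G p(\hat \rho)dx,$$ where ${\bf n}$ denotes the outward unit normal on $\partial G$.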
Next, since $x\cdot {\bf e}_r=r(x)$, we have \begin{align}\label{07} &\int_G x\cdot \hat \rho(x) L(m_{\hat \rho}(r(x)))r^{-3}(x){\bf e}_rdx\notag\\&=\int_G \hat \rho(x) L(m_{\hat \rho}(r(x)))r^{-2}(x)dx\notag\\&=\int \hat \rho(x) L(m_{\hat \rho}(r(x)))r^{-2}(x)dx.\end{align} Therefore, from (\ref{05})-(\ref{07}) we have \begin{equation}\label{08} -3\int p(\hat \rho)dx=-\frac{1}{2}\int\hat \rho B\hat \rho dx+\int \hat \rho(x) L(m_{\hat \rho}(r(x)))r^{-2}(x)dx, \end{equation} so that $$ F(\hat \rho)=\frac{4-3\gamma}{\gamma-1}\int p(\hat \rho)dx-\frac{1}{2}\int \hat \rho(x) L(m_{\hat \rho}(r(x)))r^{-2}(x)dx.$$ Thus, if $\gamma>4/3$, then $F(\hat \rho)<0$, since $L(m)\ge 0$ for $0\le m\le M$. Since $\hat \rho\in W_{M, S}\subset W_M$, it follows that $\inf_{\rho\in W_M}F(\rho)<0.$ This completes the proof of part (a). \\ The proof of part (b) follows from a scaling argument as in \cite{rein1}. Take $b=(M/\bar M)^{1/3}$ and let $\bar \rho(x)=\rho(bx)$ for any $\rho\in W_M$. It is easy to verify that $\bar \rho \in W_{\bar M}$ and that the following identities hold, \begin{equation}\label{010} \int \bar \rho B\bar \rho dx=b^{-5}\int \rho B \rho dx, \end{equation} \begin{equation}\label{0101} \int A(\bar\rho )dx=b^{-3}\int A(\rho) dx. 
\end{equation} Moreover, for $r\ge 0$, \begin{align} m_{\bar \rho}(r)&=2\pi \int_0^r s\int_{-\infty}^{\infty} \bar \rho(s, z)\,dz\,ds\notag\\ &=2\pi \int_0^r s\int_{-\infty}^{\infty} \rho(bs, bz)\,dz\,ds\notag\\ &=\frac{1}{b^3}2\pi \int_0^{br} s'\int_{-\infty}^{\infty} \rho(s', z')\,dz'\,ds'\notag\\ &=\frac{1}{b^3}m_{ \rho}(br).\end{align} Since $L$ satisfies (\ref{L1}) and $b> 1$, we have \begin{equation}\label{L2} L(m_{\bar \rho}(r))\ge \frac{1}{b^4} L(m_\rho(br)).\end{equation} Thus, \begin{align}\label{012} \int\frac{\bar\rho(x) L(m_{\bar \rho}(r(x)))}{r(x)^2}dx&\ge \frac{1}{b^4}\int_0^{+\infty}\frac{2\pi r}{r^2}L(m_{\rho}(br))\int_{-\infty}^{\infty}\rho(br, bz)dzdr\notag\\ &=\frac{1}{b^5}\int_0^{+\infty}\frac{2\pi r'}{r'^2}L(m_{\rho}(r'))\int_{-\infty}^{\infty}\rho(r', z')dz'dr'\notag\\ &=\frac{1}{b^5}\int\frac{\rho(x) L(m_{\rho}(r(x)))}{r(x)^2}dx.\end{align} Therefore, since $b\ge 1$, it follows from (\ref{010})-(\ref{012}) that \begin{align} F(\bar\rho)&\ge b^{-3}\int A(\rho)dx-\frac{b^{-5}}{2}\int \rho B\rho dx+\frac{b^{-5}}{2}\int\frac{\rho(x) L(m_{\rho}(r(x)))}{r(x)^2}dx\notag\\ &\ge b^{-5}\left(\int A(\rho)dx-\frac{1}{2}\int \rho B\rho dx+\frac{1}{2}\int\frac{\rho(x) L(m_{\rho}(r(x)))}{r(x)^2}dx\right)\notag\\ &=(\bar M/M)^{5/3} F(\rho).\end{align} Since $\rho\to\bar\rho$ is one-to-one between $W_M$ and $W_{\bar M}$, this proves part (b). \end{proof} The following lemma gives the boundedness of a minimizing sequence of $F$ in $L^{\gamma}(\RR^3)$. \begin{lem}\label{lem4.3} Suppose $\gamma>4/3$. Let $\{\rho^i\}\subset W_M $ be a minimizing sequence of $F$. Then $\{\rho^i\}$ is bounded in $L^{\gamma}(\RR^3)$, and moreover, the rotational kinetic energy $$\frac{1}{2}\int \frac{\rho^i(x)L(m_{\rho^i}(r(x)))}{r(x)^2}dx$$ is also uniformly bounded. \end{lem} \begin{proof} By Lemma \ref{lem4.2}, we have \begin{equation}\label{us1} \int [(\rho^i)^{\gamma}(x)+\frac{\rho^i(x)L(m_{\rho^i}(r(x)))}{r(x)^2}] dx\le C_1 F(\rho^i)+C_2,\ i\ge 1. 
\end{equation} The lemma follows from this and Part (a) of Lemma \ref{lem4.4}. \end{proof} \begin{lem}\label{lem4.5} Suppose $\gamma>4/3$. Let $\{\rho^i\}\subset W_M $ be a minimizing sequence for $F$. Then there exist constants $r_0>0$, $\delta_0>0$, $i_0\in \mathbb{N}$ and points $x^i\in \RR^3$ with $r(x^i)\le r_0$, such that \begin{equation}\label{keynote} \int_{B_1(x^i)}\rho^i(x)dx\ge \delta_0, \ i\ge i_0. \end{equation}\end{lem} \begin{proof} First, since $\lim_{i\to\infty}F(\rho^i)= f_M$ and $f_M<0$ (see part (a) of Lemma \ref{lem4.4}), for large $i$, \begin{equation}\label{y1} -\frac{f_M}{2}\le -F(\rho^i)\le \frac{1}{2}\int \rho^iB\rho^idx.\end{equation} For any $i$, let \begin{equation} \delta_i=\sup_{x\in \RR^3}\int_{|y-x|<1}\rho^i(y)dy.\end{equation} Now \begin{align}\label{y2'} &\int \rho^iB\rho^i(x)dx\notag\\ &=\int_{\RR^3}\rho^i(x)\int_{\RR^3} \frac{\rho^i(y)}{|y-x|}dydx\notag\\ &=\int_{\RR^3}\rho^i(x)\int_{|y-x|<1}\frac{\rho^i(y)}{|y-x|}dydx+\int_{\RR^3}\rho^i(x) \int_{1<|y-x|<r}\frac{\rho^i(y)}{|y-x|}dydx+\int_{\RR^3}\rho^i(x)\int_{|y-x|>r}\frac{\rho^i(y)}{|y-x|}dydx\notag\\ &=:B_1+B_2+B_3,\end{align} and $B_3\le M^2r^{-1}$. The shell $1<|y-x|<r$ can be covered by at most $ Cr^3$ balls of radius 1, so $B_2\le C M \delta_ir^3$. By using H{\"o}lder's inequality and (\ref{bf1}), we get \begin{align}\label{new111} B_1&=\int_{\RR^3}\rho^i(x)\int_{\RR^3}\frac{\chi_{\{y||y-x|<1\}}(y)\rho^i(y)}{|y-x|}dydx\notag\\ &\le \|\rho^i\|_{4/3}\|B\{\chi_{\{y||y-x|<1\}}(y)\rho^i(y)\}\|_4\notag\\ &\le C \|\rho^i\|_{4/3}\left(\|\chi_{\{y||y-x|<1\}}(y)\rho^i(y)\|_1^b\|\rho^i\|_{4/3}^{1-b}+\|\chi_{\{y||y-x|<1\}}(y)\rho^i(y)\|_1^c\|\rho^i\|_{4/3}^{1-c}\right)\notag\\ &\le C \|\rho^i\|_{4/3}\left(\delta_i^b\|\rho^i\|_{4/3}^{1-b}+\delta_i^c\|\rho^i\|_{4/3}^{1-c}\right),\end{align} where $\chi$ is the indicator function, $0<b<1$ and $0<c<1$. 
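The elementary bounds on $B_2$ and $B_3$ asserted above can be checked directly; we include the computation only for completeness. Since $|y-x|>r$ in $B_3$, $$B_3\le \frac{1}{r}\int\rho^i(x)dx\int\rho^i(y)dy=\frac{M^2}{r},$$ and, covering the shell $\{y:\ 1<|y-x|<r\}$ by $N\le Cr^3$ balls $B_1(z_k)$ of radius $1$ and using $|y-x|\ge 1$ there, $$\int_{1<|y-x|<r}\frac{\rho^i(y)}{|y-x|}dy\le \sum_{k=1}^{N}\int_{B_1(z_k)}\rho^i(y)dy\le Cr^3\delta_i,$$ so that integrating against $\rho^i(x)$ gives $B_2\le CM\delta_i r^3$.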
By Lemma \ref{lem4.3}, we know that $\|\rho^i\|_{\gamma}$ is bounded, so $\|\rho^i\|_{4/3}$ is bounded if $\gamma\ge 4/3$, in view of (\ref{young}) and the fact that $\|\rho^i\|_1=M$. This gives $B_1\le C(\delta_i^b+\delta_i^c)$. It follows that we could choose $r$ so large that the above estimates give $\int \rho^iB\rho^i(x)dx<-f_M$ {\it if $\delta_i$ were small enough}. This would contradict (\ref{y1}). So there exists $\delta_0>0$ such that $\delta_i\ge \delta_0$ for large $i$. Thus, for large $i$, there exist $x^i\in \RR^3$ and $i_0\in \mathbb{N}$ such that \begin{equation}\label{keynote1} \int_{B_1(x^i)}\rho^i(x)dx\ge \delta_0, \ i\ge i_0. \end{equation} We now prove that there exists $r_0>0$ independent of $i$ such that those $x^i$ must satisfy $r(x^i)\le r_0$ for $i$ large. Namely, since $\rho^i$ has mass at least $\delta_0$ in the unit ball centered at $x^i$, and is axially symmetric, it has mass $\ge Cr(x^i)\delta_0$ in the torus obtained by revolving this ball around the $x_3$-axis (or $z$-axis). Therefore $r(x^i)\le (C\delta_0)^{-1}M.$ \end{proof} In order to prove Theorem \ref{aa}, we will need the following lemma, which is proved in \cite{rein1} using a concentration-compactness argument. \begin{lem}\label{lem4.6} Suppose $\gamma>4/3$. Let $\{f^i\}$ be a bounded sequence in $L^{\gamma}(\RR^3)$ and suppose $$f^i\rightharpoonup f^0~~ weakly~in~ L^{\gamma}(\RR^3).$$ Then\\ (a) For any $R>0$, $$\nabla B(\chi_{B_R(0)}f^i)\to \nabla B(\chi_{B_R(0)}f^0) ~~strongly~in ~ L^2(\RR^3),$$ where $\chi$ is the indicator function.\\ (b) If in addition $\{f^i\}$ is bounded in $L^1(\RR^3)$, $f^0\in L^1(\RR^3)$, and for any $\epsilon>0$ there exist $R>0$ and $i_0\in \mathbb{N}$ such that \begin{equation}\label{y3}\int_{|x|>R}|f^i(x)|dx<\epsilon,\qquad i\ge i_0,\end{equation} then $$\nabla Bf^i\to \nabla Bf^0 ~strongly~in ~ L^2(\RR^3).$$ \end{lem} Before giving the proof of Theorem \ref{aa}, we first outline the main steps. In Step 1, we first show (\ref{2.15}) and (\ref{2.16}). 
In Step 2 we show that if $\tilde \rho$ is a weak limit in $L^{\gamma}(\RR^3)$ of $\{T\rho^i\}$, then $m_{\tilde \rho}(r)$ is a continuous function of $r$ for all $r\ge 0$. The third step is to prove that $F$ is lower semi-continuous with respect to the weak topology in $L^\gamma(\RR^3)$. \vskip 0.2cm \noindent {\it Proof of Theorem \ref{aa}} \vskip 0.2cm \noindent\underline{Step 1}. We prove (\ref{2.15}) and (\ref{2.16}), and apply Lemma \ref{lem4.6} to prove (\ref{2.17}). We begin with a splitting as in \cite{rein1}. For $\rho\in W_M$ and any $0<R_1<R_2$, we have \begin{equation}\label{017} \rho=\rho\chi_{|x|\le R_1}+\rho\chi_{R_1<|x|\le R_2}+\rho\chi_{|x|>R_2} =:\rho_1+\rho_2+\rho_3,\end{equation} where $\chi$ is the indicator function. It is easy to verify that \begin{equation}\label{018} \int A(\rho)dx=\sum_{j=1}^3\int A(\rho_j)dx,\end{equation} and \begin{equation} \int \rho B\rho dx=\sum_{j=1}^3\int \rho_jB \rho_jdx+I_{12}+I_{13}+I_{23},\end{equation} where $$I_{ij}=\int_{\RR^3}\int_{\RR^3} |x-y|^{-1}\rho_i(x)\rho_j(y)dxdy,\qquad 1\le i<j\le 3.$$ Also, \begin{align}\label{020} \int\frac{\rho(x)L(m_{\rho}(r(x)))}{r^2(x)}dx&=\sum_{j=1}^3\int\frac{\rho_j(x)L(m_{\rho_j}(r(x)))}{r^2(x)}dx\notag\\ &+\sum_{j=1}^3\int\frac{\rho_j(x)(L(m_{\rho}(r(x)))-L(m_{\rho_j}(r(x))))}{r^2(x)}dx.\end{align} It follows from (\ref{017})-(\ref{020}) that \begin{align}\label{021} F(\rho)&=\sum_{j=1}^3F(\rho_j)-\frac{1}{2}\sum_{1\le i<j\le 3}I_{ij}\notag\\ &+\frac{1}{2}\sum_{j=1}^3\int\frac{\rho_j(x)(L(m_{\rho}(r(x)))-L(m_{\rho_j}(r(x))))}{r^2(x)}dx. \end{align} Since $\rho\ge \rho_j$, we have $m_\rho(r)\ge m_{\rho_j}(r)$ for any $r\ge 0$ and $j=1, 2, 3$. 
By (\ref{L2'}), \begin{equation}\label{022} F(\rho)\ge \sum_{j=1}^3F(\rho_j)-\frac{1}{2}\sum_{1\le i<j\le 3}I_{ij}.\end{equation} Using (\ref{022}) and Lemma \ref{lem4.4}, by the same argument as in the proof of Theorem 3.1 in \cite{rein1}, we can show that \begin{equation}\label{rx1} f_M-F(\rho)\le C f_M M_1M_3+C(R_2^{-1}+||\rho||_{\gamma}^{(q+1)/6}||\nabla B\rho_2||_2),\end{equation} by choosing $R_2>2R_1$ in the splitting (\ref{017}), where $M_1=\int \rho_1(x)dx=\int_{|x|\le R_1}\rho(x)dx$, $M_3=\int \rho_3(x)dx=\int_{|x|> R_2}\rho(x)dx$ and $q=1/(\gamma-1)$. Let $\{\rho^i\}$ be a minimizing sequence of $F$ in $W_M$. By Lemma \ref{lem4.5}, we know that there exist $i_0 \in \mathbb{N}$ and $\delta_0>0$ independent of $i$ such that \begin{equation}\label{y5} \int_{B_1(x^i)}\rho^i(x)dx\ge \delta_0, \qquad i\ge i_0, \end{equation} for some $x^i\in \RR^3$ with $r(x^i)\le r_0$, where $r_0>0$ is a constant independent of $i$. Let $a_i=z(x^i)$ and $R_0=r_0+1$; then (\ref{y5}) implies \begin{equation}\label{y4} \int_{a_i{\bf e_3}+B_{R_0}(0)}\rho^i(x)dx\ge \delta_0, \qquad if~i\ge i_0,\end{equation} where ${\bf e_3}=(0, 0, 1)$. Having proved (\ref{y4}), we can follow the argument in the proof of Theorem 3.1 in \cite{rein1} to verify (\ref{y3}) for $$f^i(x)=T\rho^i(x):=\rho^i(x+a_i{\bf e_3}),$$ by using (\ref{022}) and (\ref{y4}) and choosing suitable $R_1$ and $R_2$ in the splitting (\ref{017}). We sketch this as follows. The sequence $T\rho^i:=\rho^i(\cdot+a_i{\bf e_3})$, $i\ge i_0$, is a minimizing sequence of $F$ in $W_M$ (see Remark 2 after Theorem \ref{aa}). 
We rewrite (\ref{y4}) as \begin{equation}\label{y4'} \int_{B_{R_0}(0)}T\rho^i(x)dx\ge \delta_0, \ i\ge i_0.\end{equation} Applying (\ref{rx1}) with $T\rho^i$ replacing $\rho$, and noticing that $\{T\rho^i\}$ is bounded in $L^\gamma(\RR^3)$ (see Lemma \ref{lem4.3}), we obtain, if $R_2>2R_1$, \begin{equation}\label{rx3} -C f_M M^i_{1}M^i_{3}\le C(R_2^{-1}+||\nabla BT\rho^i_{2}||_2)+F(T\rho^i)-f_M,\end{equation} where $M^i_{1}=\int T\rho^i_1(x)dx=\int_{|x|\le R_1}T\rho^i(x)dx$, $M^i_{3}=\int T\rho^i_3(x)dx=\int_{|x|>R_2}T\rho^i(x)dx$ and $T\rho^i_{2}=\chi_{R_1<|x|\le R_2}T\rho^i.$ Since $\{T\rho^i\}$ is bounded in $L^{\gamma}(\RR^3)$, there exists a subsequence, still labeled $\{T\rho^i\}$, and a function $\tilde \rho\in W_M$ such that $$T\rho^i\rightharpoonup \tilde \rho{~\rm weakly~ in~} L^{\gamma}(\RR^3).$$ This proves (\ref{2.16}). By (\ref{y4'}), we know that $M^i_{1}$ in (\ref{rx3}) satisfies $M^i_{1}\ge \delta_0$ for $i\ge i_0$, by choosing $R_1\ge R_0$, where $R_0$ is the constant in (\ref{y4'}). Therefore, by (\ref{rx3}) and the fact that $f_M<0$ (cf. Part (a) of Lemma \ref{lem4.4}), we have \begin{equation}\label{rx4} -C f_M \delta_0 M^i_{3}\le CR_2^{-1}+C||\nabla B\tilde \rho_{2}||_2+C||\nabla BT\rho^i_{2}-\nabla B\tilde \rho_{2}||_2+F(T\rho^i)-f_M,\end{equation} where $\tilde \rho_{2}=\chi_{R_1<|x|\le R_2}\tilde \rho$. Given any $\epsilon>0$, by the same argument as in \cite{rein1}, we can increase $R_1>R_0$ so that the second term on the right hand side of (\ref{rx4}) is small, say less than $\epsilon/4$. Next choose $R_2>2R_1$ such that the first term is small. Now that $R_1$ and $R_2$ are fixed, the third term on the right hand side of (\ref{rx4}) converges to zero by Lemma \ref{lem4.6}(a). Since $\{T\rho^i\}$ is a minimizing sequence of $F$ in $W_M$, we can make $F(T\rho^i)-f_M$ small by taking $i$ large. 
Therefore, for $i$ sufficiently large, we can make \begin{equation}\label{rx5} M^i_{3}=\int_{|x|>R_2}T\rho^i(x)dx<\epsilon.\end{equation} This verifies (\ref{y3}) in Lemma \ref{lem4.6} for $f^i=T\rho^i$. By weak convergence, we have that for any $\epsilon>0$ there exists $R>0$ such that $$M-\epsilon\le \int_{B_R(0)}\tilde \rho(x)dx\le M,$$ which implies $\tilde \rho\in L^1(\RR^3)$ with $\int \tilde \rho dx=M$. Therefore, by Lemma \ref{lem4.6}(b), we have \begin{equation}\label{rx6} ||\nabla BT\rho^i-\nabla B\tilde \rho||_2\to 0, \qquad i\to +\infty.\end{equation} This proves (\ref{2.17}), and (\ref{2.15}) in Theorem \ref{aa} follows from (\ref{rx5}) by taking $R=R_2$. \vskip 0.2cm \noindent\underline{Step 2}. Let $\tilde \rho$ be a weak limit of a subsequence of $\{T\rho^i\}$ in $L^{\gamma}(\RR^3)$ (we still label the subsequence by $\{T\rho^i\}$). We claim that the mass function \begin{equation}\label{pr1} m_{\tilde \rho}(r):=\int_{\sqrt {x_1^2+x_2^2}\le r} \tilde \rho(x) dx{\rm~~~ is~ continuous~ for~} r\ge 0.\end{equation} This is proved as follows. By the lower semicontinuity of norms (cf. \cite{lieb}, p.~51) and Lemma \ref{lem4.3}, we have \begin{equation}\label{pr2}||\tilde \rho||_{\gamma}\le \liminf_{i\to \infty} ||T\rho^i||_{\gamma}=\liminf_{i\to \infty} ||\rho^i||_{\gamma}\le C, \end{equation} for some positive constant $C$. For any $\epsilon>0$, by the weak convergence and (\ref{2.15}), which we have already proved, there exists $R>0$ such that \begin{equation}\label{rx9} \int_{|x|> R} T\rho^i(x)dx<\epsilon, \qquad i\in \mathbb{N},\end{equation} and \begin{equation}\label{rx8} \int_{|x|> R} \tilde \rho(x)dx=\lim_{i\to \infty}\int_{|x|> R} T\rho^i(x)dx\le \epsilon.\end{equation} For any $r\ge 0$ and $r_1\ge r$, \begin{align}\label{pr3} &0\le m_{\tilde \rho}(r_1)-m_{\tilde \rho}(r)\notag\\&=\int_{r\le \sqrt{x_1^2+x_2^2}\le r_1}\tilde \rho (x)dx\notag\\ &=\int_{r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|> R}\tilde \rho (x)dx+\int_{r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|\le R}\tilde \rho (x)dx. 
\end{align} Since $\{x=(x_1, x_2, x_3)\in\RR^3: r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|> R\}\subset \{x=(x_1, x_2, x_3)\in\RR^3: |x|> R\}$, by (\ref{rx8}), we have \begin{equation}\label{pr4} \int_{r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|> R}\tilde \rho (x)dx<\epsilon. \end{equation} By (\ref{pr2}) and H{\"o}lder's inequality, \begin{align}\label{pr5} &\int_{r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|\le R}\tilde \rho (x)dx\notag\\ &\le ||\tilde \rho||_{\gamma}\left(meas\{x=(x_1, x_2, x_3)\in \RR^3: r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|\le R\}\right)^{1/\gamma'}\notag\\ &\le C [2\pi R (r_1+r)(r_1-r)]^{1/\gamma'}, \end{align} where $meas$ denotes the Lebesgue measure and $\gamma'=\gamma/(\gamma-1).$ Now, if we take $\delta=\min\{\frac{(\epsilon/C)^{\gamma'}}{2\pi R(2r+1)}, 1\}$, then by (\ref{pr5}), we obtain \begin{equation}\label{pr6}\int_{r\le \sqrt{x_1^2+x_2^2}\le r_1, |x_3|\le R}\tilde \rho (x)dx<\epsilon,\end{equation} whenever $0\le r_1-r<\delta.$ It follows from (\ref{pr3}), (\ref{pr4}) and (\ref{pr6}) that \begin{equation}\label{pr61} | m_{\tilde \rho}(r_1)-m_{\tilde \rho}(r)|<2\epsilon, \end{equation} whenever $0\le r_1-r<\delta.$ This proves that $m_{\tilde \rho}(r)$ is continuous from the right for any $r\ge 0$. By the same method, we can show that $m_{\tilde \rho}(r)$ is continuous from the left for any $r> 0$. Since $m_{\tilde \rho}(0)=0$, this proves (\ref{pr1}). \vskip 0.2cm \noindent\underline{Step 3}. Let $\{\rho^i\}$ be a minimizing sequence of the energy functional $F$, and let $\tilde \rho$ be a weak limit of $\{T\rho^i\}$ in $L^{\gamma}(\RR^3)$. 
We will prove that $\tilde \rho$ is a minimizer of $F$ in $W_M$; since $\lim_{i\to \infty}F(T\rho^i)=f_M$, it suffices to show \begin{equation}\label{2.80} F(\tilde \rho)\le \liminf_{i\to \infty} F(T\rho^i).\end{equation} First, by (\ref{pr2}), we have \begin{equation}\label{rx7} \int A(\tilde \rho)dx\le \liminf_{i\to \infty} \int A(T\rho^i)dx.\end{equation} We fix a positive number $\delta$ and show that \begin{equation}\label{rx10}\lim_{i\to \infty}\int_{r(x)\ge \delta}\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))-\tilde \rho(x)L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx=0.\end{equation} To see this, we write \begin{align}\label{rx11} &\int_{r(x)\ge \delta}\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))-\tilde \rho(x)L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &=\int_{r(x)\ge \delta}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\&+\int_{r(x)\ge \delta}\frac{T\rho^i(x)(L(m_{T\rho^i}(r(x)))-L(m_{\tilde \rho}(r(x))))}{r^2(x)}dx. \end{align} For any $R>0$, we have \begin{align}\label{rx12} &\int_{r(x)\ge \delta}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &=\int_{r(x)\ge \delta, |x|\le R}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\&+\int_{r(x)\ge \delta, |x|\ge R}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx.\end{align} \vskip 0.1cm \noindent In view of (\ref{rx9}) and (\ref{rx8}), for any $\epsilon>0$, we can choose $R$ such that \vskip 0.1cm \begin{equation}\label{rx13}|\int_{r(x)\ge \delta, |x|\ge R}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx| \le \frac{2L(M)\epsilon}{\delta^2}.\end{equation} \vskip 0.1cm \noindent Since $L$ is bounded on $[0, M]$, we have $L(m_{\tilde \rho}(r(x)))\chi_{\{r(x)\ge \delta, |x|\le R\}}(x){r^{-2}(x)}\in L^{\gamma'}(\RR^3)$, where as before $\chi$ is the indicator function, and $\gamma'=\frac{\gamma}{\gamma-1}$ (satisfying $1/\gamma+1/\gamma'=1$). 
We have \begin{align}\label{rx14} &\lim_{i\to \infty}\int_{r(x)\ge \delta, |x|\le R}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &=\lim_{i\to \infty}\int (T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))\chi_{\{r(x)\ge \delta, |x|\le R\}}(x){r^{-2}(x)}dx\notag\\ &=0, \end{align} because $T\rho^i$ converges weakly to $\tilde \rho$. Since $\epsilon$ is arbitrary, (\ref{rx13}) and (\ref{rx14}) imply \begin{equation}\label{rx15} \lim_{i\to \infty}\int_{r(x)\ge \delta}\frac{(T\rho^i(x)-\tilde \rho(x))L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx=0.\end{equation} We handle the second term in (\ref{rx11}) as follows. By weak convergence, we know that $m_{T\rho^i}(r)$ converges to $m_{\tilde \rho}(r)$ pointwise for $r\ge 0$. Since $m_{T\rho^i}(r)$ and $m_{\tilde \rho}(r)$ are non-decreasing functions of $r$ for $r\ge 0$ and $m_{\tilde \rho}(r)$ is continuous on $[0, +\infty)$ (see (\ref{pr1})), by a variation on Dini's theorem (\cite{Rudin}, p.167)$^*$, \begin{figure}[b]\rule[-2.5truemm]{5cm}{0.1 truemm} {\footnotesize \\ $^*$ We thank Dmitry Khanvinson for pointing this out to us. } \end{figure} we know that $m_{T\rho^i}(r)$ converges to $m_{\tilde\rho}(r)$ uniformly on the interval $[0, R]$ for any $R>0$. Since $L\in C^1[0, M]$, it follows that $L(m_{T\rho^i}(r))$ converges to $L(m_{\tilde \rho}(r))$ uniformly on any interval $[0, R]$. For any $\epsilon>0$, we can fix $R>0$ such that (\ref{rx9}) and (\ref{rx8}) hold. Since $L(m_{T\rho^i}(r))$ converges uniformly to $L(m_{\tilde \rho}(r))$ on any interval $[0, R]$, we have \begin{equation}\label{pr9} \lim_{i \to \infty}||L(m_{T\rho^i}(\cdot))-L(m_{\tilde \rho}(\cdot))||_{L^{\infty}[0, R]}=0.
\end{equation} Let \begin{equation}\label{rx17} A_{\delta}=\{x\in\RR^3: r(x)\ge \delta\}, \end{equation} then, using (\ref{rx9}) and (\ref{rx8}), we have \begin{align}\label{rx18} &|\int_{r(x)\ge \delta}\frac{T\rho^i(x)(L(m_{T\rho^i}(r(x)))-L(m_{\tilde \rho}(r(x))))}{r^2(x)}dx|\notag\\ &\le |\int_{A_\delta\cap B_{R}(0)}\frac{T\rho^i(x)(L(m_{T\rho^i}(r(x)))-L(m_{\tilde \rho}(r(x))))}{r^2(x)}dx|\notag\\ &+|\int_{A_\delta-B_{R}(0)}\frac{T\rho^i(x)(L(m_{T\rho^i}(r(x)))-L(m_{\tilde \rho}(r(x))))}{r^2(x)}dx|\notag\\ &\le ||L(m_{T\rho^i}(\cdot))-L(m_{\tilde \rho}(\cdot))||_{L^{\infty}[0, R]}\delta^{-2}M +2\delta^{-2}L(M) \epsilon. \end{align} Since $\epsilon$ is arbitrary, it follows from (\ref{pr9}) and (\ref{rx18}) that \begin{equation}\label{rx19}\lim_{i\to \infty}\int_{r(x)\ge \delta}\frac{T\rho^i(x)(L(m_{T\rho^i}(r(x)))-L(m_{\tilde \rho}(r(x))))}{r^2(x)}dx=0. \end{equation} This, together with (\ref{rx11}) and (\ref{rx15}), implies (\ref{rx10}). Next, we show that \begin{equation}\label{rx20} \lim\inf_{i\to \infty}\int\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))-\tilde \rho(x)L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\ge 0, \end{equation} by using (\ref{rx10}) and the monotone convergence theorem for integrals. In fact we have \begin{align}\label{rx21} &\int\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))-\tilde \rho(x)L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &=\int\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))(1-\chi_{A_\delta})}{r^2(x)}dx\notag\\ &+\int\frac{[T\rho^i(x)L(m_{T\rho^i}(r(x)))-\tilde \rho(x)L(m_{\tilde \rho}(r(x)))]\chi_{A_\delta}}{r^2(x)}dx\notag\\ &+\int\frac{\tilde \rho(x)L(m_{\tilde \rho}(r(x)))(\chi_{A_\delta}-1)}{r^2(x)}dx, \end{align} where $\chi$ is the indicator function, and $A_\delta$ is the set defined in (\ref{rx17}). For any $i\ge 1$, \begin{equation}\label{rx21'} \int\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))(1-\chi_{A_\delta})}{r^2(x)}dx\ge 0.
\end{equation} We fix $\delta$, and by (\ref{rx10}), we know that the second term on the right hand side of (\ref{rx21}) approaches zero as $i\to \infty$. Therefore, in view of (\ref{rx21'}), \begin{align}\label{rx22} &\lim\inf_{i\to \infty}\int\frac{T\rho^i(x)L(m_{T\rho^i}(r(x)))-\tilde \rho(x)L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &\ge \int\frac{\tilde \rho(x)L(m_{\tilde \rho}(r(x)))(\chi_{A_\delta}-1)}{r^2(x)}dx. \end{align} By the monotone convergence theorem for integrals, we have \begin{equation}\label{rx23} \lim_{\delta\to 0}|\int\frac{\tilde \rho(x)L(m_{\tilde \rho}(r(x)))(\chi_{A_\delta}-1)}{r^2(x)}dx|=0. \end{equation} Letting $\delta\to 0$ in (\ref{rx22}) gives (\ref{rx20}). By (\ref{rx6}), (2.71) and (\ref{rx20}), we obtain \begin{equation}\label{2.801} F(\tilde \rho)\le \lim\inf_{i\to \infty} F(T\rho^i).\end{equation} Since $\{T\rho^i\}$ is a minimizing sequence, $\tilde \rho$ is a minimizer of $F$ in $W_M$. This completes the proof of Theorem \ref{aa}. \section{Nonlinear Stability of Rotating Star Solutions} We consider the Cauchy problem for (\ref{1.1}) with the initial data \begin{equation}\label{initial}\rho(x,0)=\rho_0(x),\ {\bf v}(x,0)={\bf v}_0(x).\end{equation} We begin by giving the definition of a weak solution.\\ \noindent {\bf Definition:} Let $\rho {\bf v}={\bf m}$.
The triple $(\rho, {\bf m}, \Phi)(x, t)$ ($x\in\RR^3$, $t\in[0, T]$, $T>0$), with $\Phi$ given by (\ref{phi}) and with $\rho\ge 0$, ${\bf m}$, ${\bf m}\otimes{\bf m}/\rho$ and $\rho\nabla\Phi$ belonging to $L^1_{loc}(\RR^3\times [0, T])$, is called a {\it weak solution} of the Cauchy problem (\ref{1.1}) and (\ref{initial}) on $\RR^3\times [0, T]$ if for any Lipschitz continuous test functions $\psi$ and ${\bf \Psi}=(\psi_1, \psi_2, \psi_3)$ with compact supports in $\RR^3\times [0, T]$,\\ \begin{equation} \int_0^T\int \left(\rho\psi_t+{\bf m}\cdot \nabla\psi\right)dxdt+\int\rho_0(x)\psi(x,0)dx=0, \end{equation} and \begin{equation}\label{3.3} \int_0^T\int \left({\bf m}\cdot{\bf \Psi}_t+\frac{{\bf m}\otimes{\bf m}}{\rho}\cdot \nabla{\bf \Psi}\right)dxdt+\int{\bf m}_0(x)\cdot{\bf \Psi}(x,0)dx=\int_0^T \int\rho\nabla \Phi\cdot{\bf \Psi}\, dxdt, \end{equation} both hold. Here ${\bf m}_0=\rho_0{\bf v}_0$. \vskip 0.2cm For any weak solution, it is easy to verify that the total mass is conserved by using a generalized divergence theorem for $L^{r}$ functions ($r\ge 1$) (cf. \cite{chenfrid}), \begin{equation}\label{5.1} \int\rho(x, t)dx=\int \rho(x, 0)dx,\qquad t\ge 0.\end{equation} The {\it total energy} of system (\ref{1.1}) at time $t$ is \begin{equation}\label{energy} E(t)=E(\rho(t), {\bf v}(t))=\int\left(A(\rho)+\frac{1}{2}\rho|{\bf v}|^2\right)(x, t)dx-\frac{1}{8\pi}\int|\nabla \Phi|^2(x, t)dx,\end{equation} where as before, \begin{equation}\label{A}A(\rho)=\frac{p(\rho)}{\gamma-1}.\end{equation} Note that the energy $E(t)$ has both a positive and a negative part. This makes the stability analysis highly nontrivial, as noted in \cite{Rein}. For a solution of (\ref{1.1}) without shock waves, the total energy is conserved, i.e., $E(t)=E(0)$ ($t\ge0$) (cf. \cite{Ta}). For solutions with shock waves, the energy should be non-increasing in time, so that for all $t\ge 0$, \begin{equation}\label{denergy} E(t)\le E(0),\end{equation} due to the entropy conditions, which are motivated by the second law of thermodynamics (cf.
\cite{lax} and \cite{smoller}). This will be proved in Theorem 5.1, below. We consider axi-symmetric initial data, which takes the form \begin{align}\label{5.2'} &\rho_0(x)=\rho_0(r, z),\notag\\ & {\bf v}_0(x)=v^r_0(r, z){\bf e}_r+v^{\theta}_0(r, z){\bf e}_{\theta}+v^3_0(r, z){\bf e}_3. \end{align} Here $r=\sqrt {x_1^2+x_2^2},\ z=x_3$, $x=(x_1, x_2, x_3)\in \RR^3$ (as before), and \begin{equation} {\bf e}_r=(x_1/r, x_2/r, 0)^\mathrm{T},\ {\bf e}_{\theta}=(-x_2/r, x_1/r,\ 0)^\mathrm{T},\ {\bf e}_3=(0, 0, 1)^\mathrm{T}.\end{equation} We seek axi-symmetric solutions of the form \begin{align}\label{5.3'} &\rho(x, t)=\rho(r, z, t),\notag\\ & {\bf v}(x, t)=v^r(r, z, t){\bf e}_r+v^{\theta}(r, z, t){\bf e}_{\theta}+v^3(r, z, t){\bf e}_3,\\ &\Phi(x, t)=\Phi(r, z, t)=-B\rho(r, z, t). \end{align} We call a vector field ${\bf u}(x)=(u_1, u_2, u_3)(x)$ ($x\in \RR^3$) axi-symmetric if it can be written in the form $${\bf u}(x)=u^r(r, z){\bf e}_r+u^{\theta}(r, z){\bf e}_{\theta}+u^3(r, z){\bf e}_3.$$ For the velocity field ${\bf v}=(v_1, v_2, v_3)(x, t)$, we define the angular momentum $j(x,t)$ about the $x_3$-axis at $(x, t)$, $t\ge 0$, by \begin{equation}\label{5.3} j(x, t)=x_1v_2-x_2v_1.\end{equation} For an axi-symmetric velocity field \begin{equation}\label{asv} {\bf v}(x, t)=v^r(r, z, t){\bf e}_r+v^{\theta}(r, z, t){\bf e}_{\theta}+v^3(r, z, t){\bf e}_3,\end{equation} we have \begin{equation}\label{comp} v_1=\frac{x_1}{r}v^r-\frac{x_2}{r}v^{\theta},\ v_2=\frac{x_2}{r}v^r+\frac{x_1}{r}v^{\theta},\ v_3=v^3,\end{equation} so that \begin{equation}\label{j}j(x, t)= r v^{\theta}(r, z, t).
\end{equation} In view of (\ref{asv}) and (\ref{j}), we have \begin{equation}\label{V} |{\bf v}|^2=|v^r|^2+\frac{j^2}{r^2}+|v^3|^2.\end{equation} Therefore, the total energy at time $t$ can be written as \begin{align}\label{en} E(\rho(t), {\bf v}(t)) &=\int A(\rho)(x, t)dx+\frac{1}{2}\int \frac{\rho j^2(x,t)}{r^2(x)}dx\notag\\ &-\frac{1}{8\pi}\int|\nabla B\rho|^2(x, t)dx+\frac{1}{2}\int \rho(|v^r|^2+|v^3|^2)(x, t)dx.\end{align} There are two important conserved quantities for the Euler-Poisson equations (\ref{1.1}); namely the total mass and the angular momentum. In order to describe these, we define $D_t$, the non-vacuum region of the solution at time $t\ge 0$, by \begin{equation}\label{nonvacuum} D_t=\{x\in \RR^3: \rho(x, t)>0\}. \end{equation} We will make the following physically reasonable assumptions A1)-A4) on weak solutions of the Cauchy problem (\ref{1.1}) and (\ref{initial}): \vskip 0.2cm A1) For any $t\ge0$, there exists a measurable subset $G_t\subset D_t$ with $meas(D_t-G_t)=0$ ($meas$ denotes the Lebesgue measure) such that, for any $x\in G_t$, there exists a unique (backwards) particle path $\xi(\tau, x, t)$ for $0\le \tau\le t$ satisfying \begin{equation}\label{particlepath} \partial_{\tau}\xi(\tau, x, t)={\bf v}(\xi(\tau, x, t), \tau),\ \xi(t, x, t)=x.\end{equation} \vskip 0.2cm For $x\in G_t$, we write $$\xi(0, x, t)=\xi_{-t}(x).$$ Also, for $x\in \RR^3$ and $t\ge 0$, we denote the total mass at time $t$ in the cylinder $\{y\in \RR^3: r(y)\le r(x)\}$ by $m_{\rho(t)}(r(x))$, i.e., \begin{equation}\label{mass} m_{\rho(t)}(r(x))=\int_{r(y)\le r(x)}\rho(y, t)dy.\end{equation} For axi-symmetric motion, we assume \vskip 0.2cm A2) \begin{equation}\label{mass1} m_{\rho(t)}(r(x))=m_{\rho_0}(r(\xi_{-t}(x))), \qquad {\rm for~} x\in G_t, t\ge 0.\end{equation} Moreover, the angular momentum is conserved along the particle path: \vskip 0.2cm A3) \begin{equation}\label{angular1}j (x, t)=j( \xi_{-t}(x), 0), \qquad {\rm for~} x\in G_t, t\ge 0.\end{equation} (Both
(3.21) and (3.22) are shown in \cite{Ta} if the solution has some regularity.) \vskip 0.2cm \noindent Finally, for $L=j^2$, we need a technical assumption; namely, \\ A4) \begin{equation}\label{extra1} \lim_{r\to 0+}\frac{L(m_{\rho(t)}(r)+m_{\tilde \rho}(r))m_{\sigma(t)}(r)}{r^2}=0, \end{equation} for $t\ge 0$, where $\sigma(t)=\rho(t)-\tilde \rho$. \begin{rem} (\ref{extra1}) can be understood as follows. For any $\rho\in W_M$, we have $\lim_{r\to 0+} m_{\rho}(r)=0$. Therefore $\lim_{r\to 0+}L(m_{\rho(t)}(r)+m_{\tilde \rho}(r))=L(0)=0$, so if we define $$\hat \rho(s, t)-\hat{\tilde \rho}(s)=\int_{-\infty}^{+\infty} (\rho(s,z, t)- \tilde \rho(s,z))dz, $$ then (\ref{extra1}) will hold whenever \begin{equation}\label{good5} \frac{m_{\sigma(t)}(r)}{r^2}=\frac{\int_0^r 2\pi s (\hat \rho(s, t)-\hat {\tilde \rho}(s))ds}{r^2}\in L^{\infty}(0, \delta)\ \mbox{for some}\ \delta>0. \end{equation} If $\hat \rho(\cdot, t)-\hat{\tilde \rho}(\cdot)\in L^{\infty}(0, \delta)$, then (\ref{good5}) holds. This can be assured by assuming that $\rho(r, z, t)-\tilde \rho(r, z)\in L^{\infty}((0, \delta)\times \RR\times \RR^+)$ and decays fast enough in the $z$ direction. For example, when $\rho(x, t)-\tilde \rho(x)$ has compact support in $\RR^3$ and $\rho(\cdot, t)-\tilde \rho(\cdot)\in L^{\infty}(\RR^3)$, then (\ref{extra1}) holds. \end{rem} \vskip 0.2cm Now we make some assumptions on the initial data; namely, we assume that the initial total mass and angular momentum are the same as those of the rotating star solution (both are conserved quantities). Therefore, we require \vskip 0.2cm I$_1$) \begin{equation}\label{initial mass} \int \rho_0(x)dx=\int \tilde \rho(x)dx=M.
\end{equation} Moreover we assume \vskip 0.2cm I$_2$) For the initial angular momentum $j(x, 0)=rv_0^{\theta}(r, z)=: j_0(r, z)$ ($r=\sqrt {x_1^2+x_2^2}$, $z=x_3$ for $x=(x_1, x_2, x_3)$), we assume that $j(x, 0)$ only depends on the total mass in the cylinder $\{y\in\RR^3: r(y)\le r(x)\}$, i.e., \begin{equation}\label{ia} j(x, 0)=j_0\left(m_{\rho_0}(r(x))\right).\end{equation} Finally, we assume that the initial profile of the angular momentum per unit mass is the same as that of the rotating star solution, i.e., \vskip 0.2cm I$_3$) \begin{equation}\label{ia1} j_0^2(m)=L(m), \qquad 0\le m\le M,\end{equation} where $L(m)$ is the profile of the square of the angular momentum of the rotating star defined in Section 2. ((\ref{ia}) implies that we require that $v_0^{\theta}(r, z)$ only depends on $r$.)\\ In order to state our stability result, we need some notation. Let $\lambda$ be the number in Theorem 2.2, i.e., \begin{equation}\label{lam} \begin{cases} & A'(\tilde \rho(x))+\int_{r(x)}^{\infty}L(m_{\tilde \rho}(s))s^{-3}ds-B\tilde \rho(x)=\lambda, \ x\in \Gamma,\\ &\int_{r(x)}^{\infty}L(m_{\tilde \rho}(s))s^{-3}ds-B\tilde \rho(x)\ge \lambda, \qquad x\in \RR^3-\Gamma,\end{cases}\end{equation} with $A$ defined in (2.3), and $\Gamma$ defined in (2.10).\\ For $\rho\in L^1\cap L^{\gamma}$, we define \begin{equation} d(\rho, \tilde \rho)=\int \left[A(\rho)-A(\tilde \rho) +(\rho-\tilde \rho)\left(\int_{r(x)}^{\infty}\frac{L(m_{\tilde \rho}(s))}{s^3}ds-\lambda-B\tilde \rho\right)\right]dx. \end{equation} \begin{rem} For $x\in \Gamma$, in view of (\ref{A}) and (3.28), we have \begin{align} &(A(\rho)-A(\tilde \rho))(x) +(\int_{r(x)}^{\infty}\frac{L(m_{\tilde \rho}(s))}{s^3}ds-\lambda-B\tilde \rho(x))(\rho-\tilde \rho)\notag\\ &= (A(\rho)-A(\tilde \rho)-A'(\tilde \rho)(\rho-\tilde \rho))(x)\notag\\ &= \frac{p(\rho)-p(\tilde \rho)-p'(\tilde \rho)(\rho-\tilde \rho)}{\gamma-1}(x)\ge 0.\end{align} Thus, for $\rho\in W_M$, \begin{equation} d(\rho, \tilde \rho)\ge 0.
\end{equation} Moreover, $d(\rho, \tilde \rho)= 0$ if and only if $\rho=\tilde \rho$, and if $\gamma\le 2$, \begin{equation} d(\rho, \tilde \rho)\ge C||\rho-\tilde \rho||_2^2, \qquad \rho\in W_M.\end{equation}\end{rem} We also define \begin{align}\label{d1}d_1(\rho, \tilde \rho) &=\frac{1}{2}\int\frac{\rho(x) L(m_{\rho}(r(x)))-\tilde \rho(x) L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &-\int \int_{r(x)}^{\infty}s^{-3}L(m_{\tilde \rho}(s))ds(\rho(x)-\tilde \rho(x))dx, \end{align} for $\rho\in W_M$. We shall show later that $d_1\ge 0$. Our main stability result in this paper is the following global-in-time stability theorem. \begin{thm}\label{th5.1} Let $\tilde \rho$ be a minimizer of the functional $F$ in $W_M$, and assume that it is unique up to a vertical shift. Suppose $\gamma>4/3$ and the above assumptions A1)-A4) and I$_1$)-I$_3$) hold. Moreover, assume that the angular momentum of the rotating star solution $\tilde \rho$ satisfies (\ref{L}), (\ref{L1}) and (2.13). Let $(\rho, {\bf v}, \Phi)(x, t)$ be an axi-symmetric weak solution of the Cauchy problem (\ref{1.1}), (\ref{initial}) with $\rho(\cdot, t)\in L^1\cap L^{\gamma}$, $\rho|{\bf v}|^2(\cdot, t)\in L^1$ and $\nabla\Phi(\cdot, t)=-\nabla B\rho(\cdot, t)\in L^2$. If the total energy $E(t)$ (cf.
(3.5)) is non-increasing with respect to $t$, then for every $\epsilon>0$, there exists a number $\delta>0$ such that if \begin{align} &d(\rho_0, \tilde \rho)+\frac{1}{8\pi}||\nabla B\rho_0-\nabla B\tilde \rho||_2^2+ |d_1(\rho_0, \tilde \rho)|\notag\\ &+\frac{1}{2}\int \rho_0(x)(|v^r_0|^2+|v^3_0|^2)(x)dx <\delta,\end{align} then there is a vertical shift $a{\bf e_3}$ ($a\in \RR$, ${\bf e_3}=(0, 0, 1)$) such that, for every $t>0$, \begin{align} &d(\rho(t), T^a\tilde \rho)+\frac{1}{8\pi}||\nabla B\rho(t)-\nabla BT^a\tilde \rho||_2^2+|d_1(\rho(t), T^a\tilde \rho)|\notag\\ &+\frac{1}{2}\int \rho(x, t)(|v^r(x, t)|^2+|v^3(x, t)|^2)dx <\epsilon, \end{align} where $T^a\tilde \rho(x):=\tilde \rho(x+a{\bf e_3})$. \end{thm} \begin{rem} The vertical shift $a{\bf e_3}$ appearing in the theorem is analogous to a phenomenon which appears in the study of stability of viscous traveling waves in conservation laws, whereby convergence is to a ``shift'' of the original traveling wave. \end{rem} \begin{rem} Without the uniqueness assumption for the minimizer of $F$ in $W_M$, we can obtain the following type of stability result, as observed in \cite{Rein} for the non-rotating star solutions. Suppose the assumptions in Theorem \ref{th5.1} hold. Let $\mathcal{S}_M$ be the set of all minimizers of $F$ in $W_M$ and let $(\rho, {\bf v}, \Phi)(x, t)$ be an axi-symmetric weak solution of the Cauchy problem (\ref{1.1}), (\ref{initial}) with $\rho(\cdot, t)\in L^1\cap L^{\gamma}$, $\rho|{\bf v}|^2(\cdot, t)\in L^1$ and $\nabla\Phi(\cdot, t)=-\nabla B\rho(\cdot, t)\in L^2$.
If the total energy $E(t)$ is non-increasing with respect to $t$, then for every $\epsilon>0$, there exists a number $\delta>0$ such that if \begin{align} &\inf_{\tilde \rho\in \mathcal{S}_M}\left[ d(\rho_0, \tilde \rho)+\frac{1}{8\pi}||\nabla B\rho_0-\nabla B\tilde \rho||_2^2+ |d_1(\rho_0, \tilde \rho)|\right]\notag\\ &+\frac{1}{2}\int \rho_0(x)(|v^r_0|^2+|v^3_0|^2)(x)dx <\delta,\end{align} then for every $t>0$, \begin{align} &\inf_{\tilde \rho\in \mathcal{S}_M}\left[d(\rho(t), \tilde \rho)+\frac{1}{8\pi}||\nabla B\rho(t)-\nabla B\tilde \rho||_2^2+|d_1(\rho(t), \tilde \rho)|\right]\notag\\ &+\frac{1}{2}\int \rho(x, t)(|v^r(x, t)|^2+|v^3(x, t)|^2)dx <\epsilon. \end{align} (Here no separate shift is needed, since $\mathcal{S}_M$ contains all vertical shifts of its members.) The proof of this follows exactly along the same lines as that for Theorem \ref{th5.1}. \end{rem} In order to prove Theorem \ref{th5.1}, we need several lemmas. First we have \begin{lem}\label{lem5.2} Suppose the angular momentum of the rotating star solutions satisfies (\ref{L}), (\ref{L1}) and (2.13). For any $\rho(x)\in W_M$, if \begin{equation}\label{extra} \lim_{r\to 0+}\frac{L(m_\rho(r)+m_{\tilde \rho}(r))m_{\sigma}(r)}{r^2}=0, \end{equation} where $\sigma=\rho-\tilde \rho,$ then \begin{equation}\label{dd1} d_1(\rho, \tilde \rho)\ge 0, \end{equation} where $d_1$ is defined by (\ref{d1}). \end{lem} \begin{proof} First, we introduce some notation.
For an axi-symmetric function $f(x)=f(r, z)$ ($r=\sqrt {x_1^2+x_2^2},\ z=x_3$ for $x=(x_1, x_2, x_3)$), we let \begin{equation} \hat f(r)=2\pi r\int_{-\infty}^{+\infty} f(r, z)dz,\end{equation} \begin{equation}\label{xd1} m_f(r)=\int_{\{x: \sqrt{x_1^2+x_2^2}\le r\}}f(x)dx=\int_0^r \hat f(s)ds,\end{equation} so that \begin{equation}\label{dx2} m'_f(r)=\hat f(r).\end{equation} In order to show (\ref{dd1}), we let \begin{equation} \sigma(x)=(\rho-\tilde \rho)(x),\end{equation} and for $0\le \alpha\le 1$, we define \begin{align} Q(\alpha)&=\frac{1}{2}\int\frac{(\tilde \rho+\alpha\sigma)(x) L(m_{\tilde \rho+\alpha\sigma}(r(x)))-\tilde \rho(x) L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &-\alpha\int \int_{r(x)}^{\infty}s^{-3}L(m_{\tilde \rho}(s))ds\sigma(x)dx. \end{align} Then \begin{equation}\label{5.37} Q(0)=0,\ Q(1)=d_1(\rho,\ \tilde \rho). \end{equation} Since \begin{equation} m_{\tilde \rho+\alpha\sigma}(r(x))=\int_0^{r(x)}2\pi s \int_{-\infty}^{+\infty} (\tilde \rho+\alpha \sigma)(s, z)dzds, \end{equation} we have \begin{equation}\frac{d}{d\alpha} m_{\tilde \rho+\alpha\sigma}(r(x))=\int_0^{r(x)}2\pi s \int_{-\infty}^{+\infty}\sigma(s, z)dzds= m_{\sigma}(r(x)).\end{equation} Therefore, \begin{align}\label{dx1} Q'(\alpha)&=\frac{1}{2}\int\frac{\sigma(x) L(m_{\tilde \rho+\alpha\sigma}(r(x)))}{r^2(x)}dx\notag\\ &+\frac{1}{2}\int\frac{(\tilde \rho+\alpha\sigma)(x) L'(m_{\tilde \rho+\alpha\sigma}(r(x)))m_{\sigma}(r(x))}{r^2(x)}dx\notag\\ &-\int \int_{r(x)}^{\infty}s^{-3}L(m_{\tilde \rho}(s))ds\sigma(x)dx, \end{align} and in view of (\ref{dx2}), \vskip 0.2cm \begin{equation}\label{dx3} \frac{d}{dr} L(m_{\tilde \rho+\alpha\sigma}(r))=L'(m_{\tilde \rho+\alpha\sigma}(r))(\ \hat{\tilde \rho} +\alpha \hat \sigma)(r).\end{equation} \vskip 0.2cm \noindent Therefore, by virtue of (\ref{dx3}) and (\ref{dx2}), we obtain \begin{align}\label{dx5'} & \frac{1}{2}\int\frac{(\tilde \rho+\alpha\sigma)(x) L'(m_{\tilde \rho+\alpha\sigma}(r(x)))m_{\sigma}(r(x))}{r^2(x)}dx\notag\\ 
&=\frac{1}{2}\int_0^{+\infty}(\hat{\tilde \rho}+\alpha\hat\sigma)(r) L'(m_{\tilde \rho+\alpha\sigma}(r))m_{\sigma}(r)r^{-2}dr\notag\\ &=\frac{1}{2}\int_0^{+\infty}\frac{d}{dr}[L(m_{\tilde \rho+\alpha\sigma}(r))]m_{\sigma}(r)r^{-2}dr.\end{align} \vskip 0.2cm \noindent For $0\le \alpha\le 1$, since (cf. (2.13)) $L'(m)\ge 0$, we have \begin{equation} \label{good1} L(m_{\tilde \rho+\alpha\sigma}(r))\le L(m_{\tilde \rho+\rho}(r)).\end{equation} This, together with (\ref{extra}), implies \begin{equation}\label{good2} \lim_{r\to 0+}L(m_{\tilde \rho+\alpha\sigma}(r))m_{\sigma}(r)r^{-2}=0.\end{equation} Moreover, since $m_{\sigma}(+\infty)=\int\sigma(x)dx=\int (\rho-\tilde \rho)(x)dx=0$ and $$\lim_{r\to \infty}L(m_{\tilde \rho+\alpha\sigma}(r))=L(M),$$ we have \begin{equation}\label{good3} \lim_{r\to \infty}L(m_{\tilde \rho+\alpha\sigma}(r))m_{\sigma}(r)r^{-2}=0.\end{equation} \vskip 0.2cm \noindent It follows from (\ref{dx5'}), (\ref{good2}), (\ref{good3}) and integration by parts that \begin{align}\label{dx5} & \frac{1}{2}\int\frac{(\tilde \rho+\alpha\sigma)(x) L'(m_{\tilde \rho+\alpha\sigma}(r(x)))m_{\sigma}(r(x))}{r^2(x)}dx\notag\\ &=-\frac{1}{2}\int_0^{+\infty}\hat \sigma (r)L(m_{\tilde \rho+\alpha\sigma}(r))r^{-2}dr\notag\\ &+\int_0^{+\infty}L(m_{\tilde \rho+\alpha\sigma}(r))m_{\sigma}(r)r^{-3}dr. \end{align} Since \begin{equation} \int_0^{+\infty}\hat \sigma (r)L(m_{\tilde \rho+\alpha\sigma}(r))r^{-2}dr=\int\frac{\sigma(x) L(m_{\tilde \rho+\alpha\sigma}(r(x)))}{r^2(x)}dx,\end{equation} and \begin{equation} \int \int_{r(x)}^{\infty}s^{-3}L(m_{\tilde \rho}(s))ds\sigma(x)dx=\int_0^{+\infty}\hat \sigma(r)\int_{r}^{\infty}s^{-3}L(m_{\tilde \rho}(s))dsdr,\end{equation} (\ref{dx1}) and (\ref{dx5}) imply \begin{align}\label{dx6} Q'(\alpha)&=\int_0^{+\infty}L(m_{\tilde \rho+\alpha\sigma}(r))m_{\sigma}(r)r^{-3}dr\notag\\ &-\int_0^{+\infty}\hat \sigma(r)\int_{r}^{\infty}s^{-3}L(m_{\tilde \rho}(s))dsdr.\end{align} Using (\ref{xd1}), we have $m_\sigma(r)=\int_0^r\hat \sigma(s)ds$, so substituting this into the first term in (\ref{dx6}) and interchanging the order of integration gives \begin{align}\label{dx7}&\int_0^{+\infty}L(m_{\tilde \rho+\alpha\sigma}(r))m_{\sigma}(r)r^{-3}dr\notag\\ &=\int_0^{+\infty}\int_0^r r^{-3}L(m_{\tilde \rho+\alpha\sigma}(r))\hat \sigma(s)dsdr\notag\\ &=\int_0^{+\infty}\hat \sigma(s)\int_s^{+\infty}r^{-3}L(m_{\tilde \rho+\alpha\sigma}(r))drds\notag\\ &=\int_0^{+\infty}\hat \sigma(r)\int_r^{+\infty}s^{-3}L(m_{\tilde \rho+\alpha\sigma}(s))dsdr.\end{align} Hence (\ref{dx6}) and (\ref{dx7}) yield \begin{equation}\label{dx8} Q'(\alpha)=\int_0^{+\infty}\hat \sigma(r)\int_{r}^{\infty}s^{-3}(L(m_{\tilde \rho+\alpha\sigma}(s))-L(m_{\tilde \rho}(s)))dsdr,\end{equation} and therefore \begin{equation}\label{dx9} Q(0)=Q'(0)=0.\end{equation} Differentiating (\ref{dx8}) again, we obtain \begin{equation}\label{dx91} \frac{d^2Q(\alpha)}{d\alpha^2}=\int_0^{+\infty}\hat \sigma(r)\int_{r}^{\infty}s^{-3}L'(m_{\tilde \rho+\alpha\sigma}(s))m_\sigma(s)dsdr,\end{equation} and interchanging the order of integration gives \begin{equation}\label{dx10} \frac{d^2Q(\alpha)}{d\alpha^2}=\int_0^{+\infty}s^{-3}\int_0^s\hat \sigma(r)dr\, L'(m_{\tilde \rho+\alpha\sigma}(s))m_\sigma(s)ds.\end{equation} Noting that $\int_0^s\hat \sigma(r)dr=m_\sigma(s)$, we obtain \begin{equation}\label{dx101} \frac{d^2Q(\alpha)}{d\alpha^2}=\int_0^{+\infty}s^{-3}L'(m_{\tilde \rho+\alpha\sigma}(s))(m_\sigma(s))^2ds.\end{equation} Therefore, if $L'(m)\ge 0$ for $0\le m\le M$, then \begin{equation}\label{dx11}\frac{d^2Q(\alpha)}{d\alpha^2}\ge 0, \qquad 0\le\alpha\le 1.\end{equation} This, together with (\ref{dx9}) and (\ref{5.37}), yields $d_1(\rho, \tilde \rho)=Q(1)\ge 0.$ \end{proof} \begin{lem}\label{lem5.3} Let $(\rho, {\bf v})$ be a solution of the Cauchy problem (\ref{1.1}), (\ref{initial}) as stated in Theorem 3.1. Then \begin{align}\label{ed} &E(\rho, {\bf v})(t)-F(\tilde \rho)\notag\\ &=d(\rho(t),\tilde \rho)+d_1(\rho(t), \tilde \rho) -\frac{1}{8\pi}||\nabla (B\rho(\cdot, t)-B\tilde \rho)||_2^2\notag\\ &+\frac{1}{2}\int\rho (|v^r|^2+|v^3|^2)(x, t)dx.\end{align}\end{lem} \begin{proof} From A1)-A3), for any $x\in G_t$ we have \begin{equation}\label{jj1}j^2(x, t)=j_0^2(\xi_{-t}(x)),\end{equation} (see (3.26)). In view of (3.22) and (3.27), \begin{equation}\label{jj2} j^2(x, t)=j_0^2(\xi_{-t}(x))=L(m_{\rho_0}(r(\xi_{-t}(x)))), \end{equation} for $x\in G_t$.
This, together with (3.21), yields \begin{equation}\label{jj3} j^2(x, t)=L(m_{\rho(t)}(r(x))),\qquad x\in G_t.\end{equation} Therefore, by (\ref{en}), we have \begin{align}\label{jj4} E(\rho(t), {\bf v}(t)) &=\int A(\rho)(x, t)dx+\frac{1}{2}\int \frac{\rho(x, t) L(m_{\rho(t)}(r(x)))}{r^2(x)}dx\notag\\ &-\frac{1}{8\pi}\int|\nabla B\rho|^2(x, t)dx+\frac{1}{2}\int \rho(|v^r|^2+|v^3|^2)(x, t)dx.\end{align} Here we have used the fact that $$\int \frac{\rho(x, t) L(m_{\rho(t)}(r(x)))}{r^2(x)}dx=\int_{G_t} \frac{\rho(x, t) L(m_{\rho(t)}(r(x)))}{r^2(x)}dx,$$ which holds because $D_t=\{x\in \RR^3: \rho(x, t)>0\}$, $G_t\subset D_t$ and $meas(D_t-G_t)=0.$ It follows from (2.5) and (\ref{jj4}) that \begin{align}\label{jj5} &E(\rho, {\bf v})(t)-F(\tilde \rho)\notag\\ &=\int (A(\rho)(x,t)-A(\tilde \rho)(x))dx\notag\\ &+\frac{1}{2}\int\frac{\rho(x, t) L(m_{\rho(t)}(r(x)))-\tilde \rho(x) L(m_{\tilde \rho}(r(x)))}{r^2(x)}dx\notag\\ &-\frac{1}{8\pi}(||\nabla B\rho(\cdot, t)||_2^2-||\nabla B\tilde \rho||_2^2)\notag\\ &+\frac{1}{2}\int\rho (|v^r|^2+|v^3|^2)(x, t)dx.\end{align} On the other hand, \begin{align}\label{jj6} &-\frac{1}{8\pi}(||\nabla B\rho(\cdot, t)||_2^2-||\nabla B\tilde \rho||_2^2)\notag\\ &=-\frac{1}{8\pi}||\nabla (B\rho(\cdot, t)-B\tilde \rho)||_2^2-\frac{1}{4\pi}\int \nabla B\tilde \rho(x)\cdot (\nabla B\rho(x, t)-\nabla B\tilde \rho(x))dx.\end{align} Noting that $\Delta (B\rho-B\tilde \rho)=-4\pi (\rho-\tilde \rho)$, and integrating by parts (this is legitimate, cf.
\cite{rein1}) gives, \begin{align}\label{jj8} &-\frac{1}{4\pi}\int \nabla B\tilde \rho(x)\cdot (\nabla B\rho(x, t)-\nabla B\tilde \rho(x))dx\notag\\ &=\frac{1}{4\pi}\int B\tilde \rho(x)(\Delta B\rho(x, t)-\Delta B\tilde \rho(x))dx\notag\\ &=-\int B\tilde \rho(x) (\rho(x, t)- \tilde \rho(x))dx.\end{align} By (\ref{jj5})-(\ref{jj8}), and noting (\ref{d1}), we have \begin{align}\label{jj9} & E(\rho, {\bf v})(t)-F(\tilde \rho)\notag\\ &=\int \left(A(\rho)-A(\tilde \rho)+(\rho-\tilde \rho)\{\int_{r(x)}^{\infty}\frac{L(m_{\tilde \rho}(s))}{s^3}ds-B\tilde \rho\}\right)dx\notag\\ &+d_1(\rho(t), \tilde \rho)-\frac{1}{8\pi}||\nabla (B\rho(\cdot, t)- B\tilde \rho)||_2^2\notag\\ &+\frac{1}{2}\int\rho (|v^r|^2+|v^3|^2)(x, t)dx.\end{align} Since $\rho(\cdot, t)\in W_M$, $\int \rho(x, t)dx=\int\tilde \rho(x)dx=M.$ Thus $\int \lambda(\rho(x, t)-\tilde \rho(x))dx=0.$ Therefore, the first term in (\ref{jj9}) is the same as $d(\rho(t), \tilde \rho)$ defined by (3.29). This completes the proof of the lemma. \end{proof} \noindent We are now in a position to prove Theorem 3.1. \vskip 0.3cm \noindent {\it Proof of Theorem 3.1.} Assume the theorem is false. Then there exist $\epsilon_0>0$, $t_n>0$ and initial data $\rho_n(x,0)\in W_M$ and ${\bf v}_n(x, 0)$ such that for all $n\in \mathbb{N}$, \begin{align}\label{bn1} &d(\rho_n(0), \tilde \rho)+d_1(\rho_n(0), \tilde \rho)+\frac{1}{8\pi}||\nabla B\rho_n(0)-\nabla B\tilde \rho||_2^2\notag\\ &+\frac{1}{2}\int \rho_n(x, 0)(|v_n^r(x, 0)|^2+|v_n^3(x, 0)|^2)dx<\frac{1}{n}, \end{align} but for any $a\in \RR$, \begin{align}\label{bn2} &d(\rho_n(t_n), T^a\tilde \rho)+ d_1(\rho_n(t_n), T^a\tilde \rho)+\frac{1}{8\pi}||\nabla B\rho_n(t_n)-\nabla BT^a\tilde \rho||_2^2\notag\\ &+\frac{1}{2}\int \rho_n(x, t_n)(|v_n^r(x, t_n)|^2+|v_n^3(x, t_n)|^2)dx\ge \epsilon_0. \end{align} By (\ref{ed}) and (\ref{bn1}), we have \begin{equation}\label{bn3} \lim_{n\to \infty}E(\rho_n(0), {\bf v}_n(0))=F(\tilde \rho).
\end{equation} Since $E(\rho_n(t), {\bf v}_n(t))$ is non-increasing in time, \begin{equation} \lim\sup_{n\to \infty} F(\rho_n(t_n))\le \lim\sup_{n\to \infty} E(\rho_n(t_n), {\bf v}_n(t_n)) \le \lim_{n\to \infty} E(\rho_n(0), {\bf v}_n(0))=F(\tilde \rho).\end{equation} (The first inequality holds because we have, similar to (3.71), $$E(\rho, {\bf v})(t)-F(\rho(t))=\frac{1}{2}\int \rho(|v^r|^2+|v^3|^2)(x, t)dx\ge 0,\ t\ge 0.)$$ Therefore $\{\rho_n(\cdot, t_n)\}\subset W_M$ is a minimizing sequence for the functional $F$. We apply Theorem \ref{aa} to conclude that there exists a sequence $\{a_n\}\subset \RR$ such that up to a subsequence, \begin{equation} ||\nabla(B\rho_n(t_n)-BT^{a_n}\tilde \rho)||_2\to 0, \end{equation} as $n\to \infty$; this is where we use the assumption that the minimizer is unique up to a vertical shift. Note also that for any $\rho\in W_M$ and $a\in \RR$, \begin{equation} \begin{cases}& ||\nabla B(T^a\rho)-\nabla B\tilde \rho||_2= ||\nabla B(\rho)-\nabla BT^{-a}\tilde \rho||_2,\\ &d(T^a \rho, \tilde \rho)=d(\rho, T^{-a}\tilde \rho),\ \mbox{and}\ d_1(T^a \rho, \tilde \rho)=d_1(\rho, T^{-a}\tilde \rho).\end{cases}\end{equation} Thus, by (\ref{ed}), the fact that the energy is non-increasing in time, and $F(T^a\rho)=F(\rho)$, we have \begin{align} &E(\rho_n(t_n), {\bf v}_n(t_n))-F(T^{a_n}\tilde \rho)\notag\\ &=d(\rho_n(t_n),T^{a_n}\tilde \rho)+d_1(\rho_n(t_n), T^{a_n}\tilde \rho)\notag\\ &-\frac{1}{8\pi}||\nabla(B\rho_n(t_n)-BT^{a_n}\tilde \rho)||_2^2\notag\\ &+\frac{1}{2}\int\rho_n (|v_n^r|^2+|v_n^3|^2)(x, t_n)dx\notag\\ &\le E(\rho_n(0), {\bf v}_n(0))-F(T^{a_n}\tilde \rho)\notag\\ &=E(\rho_n(0), {\bf v}_n(0))-F(\tilde \rho)\to 0,\end{align} as $n\to\infty$. Since $$||\nabla B\rho_n(t_n)-\nabla BT^{a_n}\tilde \rho||_2\to 0,$$ as $n\to \infty$, $d(\rho_n(t_n), T^{a_n}\tilde \rho)\ge 0$ (cf. (3.31)) and $d_1(\rho_n(t_n), T^{a_n}\tilde \rho)\ge 0$ (cf.
A4) and (3.37)), we have \begin{align}&d(\rho_n(t_n),T^{a_n}\tilde \rho)+d_1(\rho_n(t_n), T^{a_n}\tilde \rho)\notag\\ &+\frac{1}{8\pi}||\nabla(B\rho_n(t_n)-BT^{a_n}\tilde \rho)||_2^2\notag\\ &+\frac{1}{2}\int\rho_n (|v_n^r|^2+|v_n^3|^2)(x, t_n)dx\to 0, \end{align} as $n\to \infty$. This contradicts (\ref{bn2}), and completes the proof. \\ \section{Stability of General Entropy Solutions} In this section, we shall obtain a stability theorem for general entropy weak solutions. We begin with the definition of an entropy weak solution. \vskip 0.2cm \noindent {\bf Definition 4.1}. A weak solution (defined in Section 3) on $\RR^3\times [0, T]$ is called an {\it entropy weak solution} of (\ref{1.1}) if it satisfies the following ``entropy inequality'': \begin{equation}\label{entropy1} \partial_t \eta +\sum_{j=1}^{3}\partial_{x_j}q_j+\rho \sum_{j=1}^{3}\eta_{m_j}\Phi_{x_j}\le 0,\end{equation} in the sense of distributions; i.e., \begin{equation}\label{entro} \int_0^T\int_{\RR^3}\left(\eta \beta_t+{\bf q}\cdot \nabla \beta-\rho \sum_{j=1}^{3}\eta_{m_j}\Phi_{x_j}\beta\right)dxdt+\int_{\RR^3}\beta(x, 0)\eta(x, 0)dx\ge 0,\end{equation} for any nonnegative Lipschitz continuous test function $\beta$ with compact support in $[0, T)\times \RR^3$. Here the ``entropy'' function $\eta$ and ``entropy flux'' functions $q_j$ and ${\bf q}$ are defined by \begin{equation} \begin{cases} &\eta=\frac{|{\bf m}|^2}{2\rho}+\rho\int_0^\rho\frac{p(s)}{s^2}ds=\frac{|{\bf m}|^2}{2\rho}+\frac{\rho^\gamma}{\gamma-1},\\ &q_j=\frac{|{\bf m}|^2m_j}{2\rho^2}+m_j\int_0^{\rho}\frac{p'(s)}{s}ds=\frac{|{\bf m}|^2m_j}{2\rho^2}+\frac{\gamma\rho^{\gamma-1}}{\gamma-1}m_j\quad (j=1, 2, 3),\\ &{\bf q}=(q_1,\ q_2, \ q_3). \end{cases} \end{equation} \begin{rem} The inequality (\ref{entropy1}) is motivated by the second law of thermodynamics (\cite{lax}), and plays an important role in shock wave theory (\cite{smoller}). For smooth solutions, the inequality in (\ref{entropy1}) can be replaced by equality.
\end{rem} For a general entropy weak solution, our stability result is given by the following theorem: \begin{thm}\label{thm5.1} Suppose $1<\gamma\le 2$. Let $(\rho, {\bf m}, \Phi)(x, t)$ ($t\in [0, T]$, $x\in\RR^3$) with $(\rho, {\bf m})\in L^{\infty}(\RR^3\times [0, T])$, be a weak solution of (\ref{1.1}) satisfying the entropy condition (\ref{entropy1}) and let $(\bar\rho, \bar{ \bf m}, \bar \Phi)(x, t)$, $t\in [0, T],\ x\in \RR^3$ be any solution of (\ref{1.1}) satisfying $(\bar\rho, \bar {\bf m})\in W^{1, \infty}_{loc}(\RR^3\times [0, T]).$ Assume \begin{equation} Z(T):= \sup_{0\le t \le T}(||\rho(\cdot, t)||_{\infty}(||\rho(\cdot, t)||_{\infty}+||\bar \rho(\cdot, t)||_{\infty})^{2-\gamma}({\rm vol}\, S(t))^{2/3}+||\nabla_x\bar{\bf v}(\cdot, t)||_{\infty})<+\infty,\end{equation} and \begin{equation}\label{v13}\frac{{\bf m}}{ \rho}, \frac{\bar {\bf m}}{\bar \rho}\in L^{\infty}(\RR^3\times [0, T]),\end{equation} where $S(t)={\rm supp}\, |\rho-\bar \rho|(\cdot, t)$. Then there is a constant $C(T)$ depending on $T$ and $Z(T)$ such that \begin{equation} Y(t)\le C(T) Y(0),\qquad 0\le t\le T,\end{equation} where $$Y(t)= D(\rho, \bar \rho)(t)+||\sqrt{\rho}(\nabla \Phi-\nabla \bar\Phi)||_2^2(t)+\int \rho(x, t)|{\bf v}-\bar {\bf v}|^2(x, t) dx, $$ $$\Phi=-B\rho, \bar \Phi=-B \bar\rho,$$ and $$D(\rho,\ \bar \rho)= \int\frac{p(\rho)-p(\bar\rho)-p'(\bar\rho)(\rho-\bar\rho)}{\gamma-1}dx.$$ \end{thm} \begin{rem} The function $(\bar \rho, \bar{\bf m}, \bar \Phi)$ in the theorem could be, but is not necessarily, a rotating star solution. \end{rem} \begin{rem} For $1<\gamma\le 2$, it is easy to see $$D(\rho,\ \bar \rho)\ge C||\rho-\bar \rho||_2^2, $$ for some constant $C>0$ if $\rho, \bar\rho \in L^{\infty}(\RR^3\times [0, T])$.
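Indeed, with $p(s)=s^{\gamma}$ this follows from a short Taylor-expansion computation: writing $$p(\rho)-p(\bar\rho)-p'(\bar\rho)(\rho-\bar\rho)=(\rho-\bar\rho)^2\int_0^1(1-\theta)\,p''(\bar\rho+\theta(\rho-\bar\rho))d\theta,$$ and noting that $p''(s)=\gamma(\gamma-1)s^{\gamma-2}\ge \gamma(\gamma-1)K^{\gamma-2}$ for $0<s\le K:=\max(||\rho||_{L^{\infty}(\RR^3\times [0, T])},\ ||\bar\rho||_{L^{\infty}(\RR^3\times [0, T])})$ (since $\gamma-2\le 0$), we obtain $$D(\rho,\ \bar\rho)\ge \frac{\gamma}{2}K^{\gamma-2}||\rho-\bar \rho||_2^2.$$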
\end{rem} \noindent {\it Proof of Theorem \ref{thm5.1}}\\ \noindent Letting $U=(\rho, {\bf m})^{\mathrm T}$ with ${\bf m}=(m_1, m_2, m_3)=\rho{\bf v}$ and $\bar U=(\bar \rho, \bar {\bf m})^{\mathrm T}$, we can write system (\ref{1.1}) as \begin{equation}\label{1.2} \begin{cases} &U_t+\sum_{j=1}^{3}F_j(U)_{x_j}=-\rho \nabla \Phi,\\ &\Delta \Phi=4\pi \rho. \end{cases} \end{equation} Here the flux functions $F_j(U)$ are given by \begin{equation} \begin{cases} & F_1(U)=\left(m_1, p(\rho)+\frac{m_1^2}{\rho},\ \frac{m_1m_2}{\rho},\ \frac{m_1m_3}{\rho}\right)^{\mathrm T},\\ & F_2(U)=\left(m_2, \frac{m_1m_2}{\rho}, p(\rho)+\frac{m_2^2}{\rho},\ \frac{m_2m_3}{\rho}\right)^{\mathrm T},\\ &F_3(U)=\left(m_3, \frac{m_1m_3}{\rho}, \frac{m_2m_3}{\rho},\ p(\rho)+\frac{m_3^2}{\rho}\right)^{\mathrm T}.\end{cases}\end{equation} The entropy and entropy fluxes $\eta$ and ${\bf q}$ are as in (4.3) and satisfy \begin{equation}\label{entropy2} \nabla q_j(U)=\nabla \eta (U)\nabla F_j(U), \qquad j=1,\ 2, 3,\end{equation} as is easily verifiable. Since $U$ is an entropy weak solution \begin{equation}\label{entrox1} \partial_t \eta(U) +\sum_{j=1}^{3}\partial_{x_j}q_j(U)+\rho \sum_{j=1}^{3}\eta_{m_j}(U)\Phi_{x_j}\le 0,\end{equation} in the sense of distributions. 
Because $\bar U\in W_{loc}^{1,\ \infty}$ is a weak solution of (\ref{1.1}), we have \begin{equation}\label{entrox2} \partial_t \eta(\bar U) +\sum_{j=1}^{3}\partial_{x_j}q_j(\bar U)+\bar\rho \sum_{j=1}^{3}\eta_{m_j}(\bar U)\bar \Phi_{x_j}= 0.\end{equation} We define the relative entropy-entropy flux pairs by \begin{equation}\label{re} \begin{cases}&\eta^*(U, \bar U)=\eta(U)-\eta(\bar U)-\nabla \eta(\bar U)(U-\bar U),\\ &q_j^*(U, \bar U)=q_j(U)-q_j(\bar U)-\nabla \eta(\bar U)(F_j(U)-F_j(\bar U))\ (j=1, 2, 3).\end{cases}\end{equation} Using (\ref{entrox1}) and (\ref{entrox2}) gives \begin{align}\label{entropy} &\partial_t \eta^*+\sum_{j=1}^3 \partial_{x_j}q^*_j\notag\\ &=(\partial_t \eta(U)+\sum_{j=1}^3 \partial_{x_j}q_j(U))-(\partial_t \eta(\bar U)+\sum_{j=1}^3 \partial_{x_j}q_j(\bar U))\notag\\ &-\nabla^2\eta(\bar U)\{(\bar U_t, U-\bar U)+(\sum_{j=1}^3\partial_{x_j}\bar U, F_j(U)-F_j(\bar U))\}\notag\\ &-\nabla \eta(\bar U)\{(U-\bar U)_t+\sum_{j=1}^3\partial_{x_j}(F_j(U)-F_j(\bar U))\}\notag\\ &\le (\nabla \eta(U)-\nabla\eta(\bar U))R-\nabla^2 \eta (\bar U)(\bar R, U-\bar U)\notag\\ &-\nabla^2 \eta (\bar U)\sum_{j=1}^3\left(\partial_{x_j}\bar U,\ F_j(U)-F_j(\bar U)-F_j'(\bar U)(U-\bar U)\right),\end{align} in the sense of distributions, where \begin{equation} R=(0, -\rho \nabla \Phi)^{\mathrm T},\ {\rm and}\ \bar R=(0, -\bar \rho \nabla \bar\Phi)^{\mathrm T}.\end{equation} It is easy to check that \begin{align} &(\nabla \eta(U)-\nabla\eta(\bar U))R-\nabla^2 \eta (\bar U)(\bar R, U-\bar U)\notag\\ &=-\rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi),\end{align} so that \begin{align} &\partial_t \eta^*+\sum_{j=1}^3 \partial_{x_j}q^*_j\notag\\ &\le -\rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi)\notag\\ &-\nabla^2 \eta (\bar U)\sum_{j=1}^3\left(\partial_{x_j}\bar U,\ F_j(U)-F_j(\bar U)-F_j'(\bar U)(U-\bar U)\right),\end{align} in the sense of distributions.
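The identity for $(\nabla \eta(U)-\nabla\eta(\bar U))R-\nabla^2 \eta (\bar U)(\bar R, U-\bar U)$ can be verified directly using the Hessian (\ref{qua}) computed below. Since $\eta_{m_j}(U)=v_j$ and $R=(0, -\rho \nabla \Phi)^{\mathrm T}$, $$(\nabla \eta(U)-\nabla\eta(\bar U))R=-\rho({\bf v}-\bar {\bf v})\cdot \nabla \Phi,$$ while, because the first component of $\bar R$ vanishes, $$\nabla^2 \eta (\bar U)(\bar R, U-\bar U)=\sum_{j=1}^3(-\bar\Phi_{x_j})\Big(-\frac{\bar m_j}{\bar \rho}(\rho-\bar\rho)+(m_j-\bar m_j)\Big)=-\rho({\bf v}-\bar {\bf v})\cdot \nabla \bar\Phi,$$ where we used $m_j-\bar m_j=\rho(v_j-\bar v_j)+\bar v_j(\rho-\bar\rho)$; subtracting the two identities gives the claim.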
That is, for any nonnegative, Lipschitz continuous test function $\psi$ on $\RR^3\times [0, T)$, with compact support, we have \begin{align}\label{121} &\int_0^T\int_{\RR^3}\left(\partial_t\psi \eta^*+\sum_{j=1}^3 \partial_{j}\psi q^*_j\right)dxdt+\int_{\RR^3}\psi(x, 0)\eta^*(x, 0)dx\notag\\ &\ge \int_0^T\int_{\RR^3}\psi \rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi)dxdt\notag\\ &+\int_0^T\int_{\RR^3}\psi\nabla^2 \eta (\bar U)\sum_{j=1}^3\left(\partial_{x_j}\bar U,\ F_j(U)-F_j(\bar U)-F_j'(\bar U)(U-\bar U)\right)dxdt.\end{align} A calculation gives \begin{equation}\label{qua} \nabla^2\eta(U)=\begin{pmatrix} \frac{|{\bf m}|^2}{\rho^3}+\frac{p''(\rho)}{\gamma-1} & -\frac{m_1}{\rho^2} & -\frac{m_2}{\rho^2} & -\frac{m_3}{\rho^2}\\ -\frac{m_1}{\rho^2} & \frac{1}{\rho} & 0 & 0\\ -\frac{m_2}{\rho^2} & 0 & \frac{1}{\rho} & 0\\ -\frac{m_3}{\rho^2} & 0 & 0 & \frac{1}{\rho} \end{pmatrix},\end{equation} and also \begin{equation}\label{eta} \eta^*=\frac{p(\rho)-p(\bar\rho)-p'(\bar\rho)(\rho-\bar\rho)}{\gamma-1}+\frac{1}{2}\rho|{\bf v}-\bar{\bf v}|^2. \end{equation} So, for $1<\gamma\le 2$, \begin{equation}\label{eta2} \eta^*\ge c_1 (||\rho(\cdot, t)||_{\infty}+||\bar \rho(\cdot, t)||_{\infty})^{\gamma-2}(\rho-\bar \rho)^2 +\frac{1}{2}\rho|{\bf v}-\bar {\bf v}|^2\ge 0, \end{equation} for some positive constant $c_1$. A further calculation yields, using (\ref{qua}), \begin{align}\label{122} & \nabla^2 \eta (\bar U)\sum_{j=1}^3\left(\partial_{x_j}\bar U,\ F_j(U)-F_j(\bar U)-F_j'(\bar U)(U-\bar U)\right)\notag\\ &=\{p(\rho)-p(\bar\rho)-p'(\bar\rho)(\rho-\bar\rho)\}\sum_{j=1}^3\partial_j\bar {v}_j\notag\\ &+\frac{1}{2}\sum_{i, j=1}^3\rho(v_i-\bar v_i)(v_j-\bar v_j)(\partial_j\bar v_i+\partial_i\bar v_j).
\end{align} Here and in the following, we use the notation: $$\partial_j=\frac{\partial}{\partial{x_j}},\qquad j=1,\ 2,\ 3.$$ Therefore, by (\ref{eta}) and (\ref{122}), we have \begin{align}\label{123} & |\nabla^2 \eta (\bar U)\sum_{j=1}^3\left(\partial_{x_j}\bar U,\ F_j(U)-F_j(\bar U)-F_j'(\bar U)(U-\bar U)\right)|(x, t)\notag\\ & \le C ||\nabla_x \bar {\bf v}(\cdot, t)||_{\infty}\eta^*(x, t), \end{align} for $x\in\RR^3$, $t\in [0, T)$ and some constant $C>0$. Thus, (\ref{121})-(\ref{123}) yield \begin{align}\label{124} &\int_0^T\int_{\RR^3}\left(\partial_t\psi \eta^*+\sum_{j=1}^3 \partial_{j}\psi q^*_j\right)dxdt+\int_{\RR^3}\psi(x, 0)\eta^*(x, 0)dx\notag\\ &\ge \int_0^T\int_{\RR^3}\psi \rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi)dxdt\notag\\ &- C \sup_{0\le t\le T}||\nabla_x \bar {\bf v}(\cdot, t)||_{\infty}\int_0^T\int_{\RR^3}\psi\eta^*(x, t)dxdt. \end{align} Using (\ref{v13}), it is easy to see that there exists a positive constant $\Lambda$, which may depend on $T$, such that \begin{equation}\label{125} (\sum_{j=1}^3|q^*_j|^2)^{1/2}(x, t)\le \Lambda \eta^*(x, t), \qquad (x, t)\in \RR^3\times [0, T]. 
\end{equation} For fixed $L>0$, $t\in (0, T)$ and small $\epsilon>0$, we consider the test function $\psi (x, \tau)=\varsigma(x, \tau)\vartheta(\tau)$ defined by \begin{equation}\label{test1} \vartheta (\tau)=\begin{cases} &1, \qquad\qquad\qquad 0\le \tau<t\\ &\frac{1}{\epsilon}(t-\tau)+1, \quad t\le \tau<t+\epsilon\\ & 0, \qquad\qquad\qquad t+\epsilon\le \tau<T, \end{cases} \end{equation} \begin{equation}\label{test2} \varsigma(x, \tau)=\begin{cases} &1, \qquad\qquad\qquad\qquad\qquad\qquad (x, \tau)\in R_1\\ &\frac{1}{\epsilon}[L+\Lambda(t-\tau)-|x|]+1,\qquad (x, \tau)\in R_2\\ &0, \qquad\qquad\qquad\qquad\qquad\qquad(x, \tau)\in R_3, \end{cases} \end{equation} where $$R_1=\{(x, \tau): 0\le \tau<T,\ 0\le |x|<L+\Lambda(t-\tau)\},$$ $$R_2=\{(x, \tau): 0\le \tau<T, L+\Lambda(t-\tau)\le |x|<L+\Lambda(t-\tau)+\epsilon\},$$ $$R_3=\{(x, \tau): 0\le \tau<T, \ |x|>L+\Lambda(t-\tau)+\epsilon\},$$ and $\Lambda$ is the constant given in (\ref{125}). Substituting this into (\ref{124}), a straightforward calculation yields \begin{align}\label{test3} &\frac{1}{\epsilon}\int_{t}^{t+\epsilon}\int_{|x|<L}\eta^*(x, \tau)dxd\tau\notag\\ &\le \int_{|x|<L+\Lambda t}\eta^*(x, 0)dx\notag\\ &-\frac{1}{\epsilon}\int_0^t\int_{L+\Lambda(t-\tau)\le |x|<L+\Lambda(t-\tau)+\epsilon}\left\{\Lambda\eta^*+\sum_{j=1}^3\frac{x_j}{|x|}q^*_j\right\}dxd\tau\notag\\ &-\int_0^t\int_{|x|<L+\Lambda(t-\tau)}\rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi)dxd\tau\notag\\ &+C\sup_{0\le \tau \le T}||\nabla_x \bar {\bf v}(\cdot, \tau)||_{\infty}\int_0^t\int_{|x|<L+\Lambda(t-\tau)}\eta^*(x, \tau)dxd\tau+O(\epsilon). \end{align} \vskip 0.2cm \noindent The second term on the right-hand side of (\ref{test3}) is nonpositive in view of (\ref{125}), together with the Cauchy-Schwarz inequality.
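To see this, note that on the region $R_2$, by the Cauchy-Schwarz inequality and (\ref{125}), $$\Lambda\eta^*+\sum_{j=1}^3\frac{x_j}{|x|}q^*_j\ge \Lambda\eta^*-\Big(\sum_{j=1}^3|q^*_j|^2\Big)^{1/2}\ge 0,$$ so the integrand in that term is nonnegative and the term enters with a minus sign.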
Letting $\epsilon\to 0^+$ in (\ref{test3}) gives \begin{align}\label{test4} &\int_{|x|<L}\eta^*(x, t)dx\notag\\ &\le \int_{|x|<L+\Lambda t}\eta^*(x, 0)dx\notag\\ &-\int_0^t\int_{|x|<L+\Lambda(t-\tau)}\rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi)dxd\tau\notag\\ &+C\sup_{0\le \tau \le T}||\nabla_x \bar {\bf v}(\cdot, \tau)||_{\infty}\int_0^t\int_{|x|<L+\Lambda(t-\tau)}\eta^*(x, \tau)dxd\tau. \end{align} We now let $L\to +\infty$ in (\ref{test4}) to get \begin{align}\label{test5} &\int \eta^*(x, t)dx\notag\\ &\le \int \eta^*(x, 0)dx\notag\\ &-\int_0^t\int\rho({\bf v}-\bar {\bf v})\cdot(\nabla \Phi-\nabla \bar \Phi)dxd\tau\notag\\ &+C\sup_{0\le \tau \le T}||\nabla_x \bar {\bf v}(\cdot, \tau)||_{\infty}\int_0^t\int\eta^*(x, \tau)dxd\tau. \end{align} The second term on the right-hand side can be estimated as follows. By the Cauchy-Schwarz inequality, we have \begin{align}&\label{x9}|\int\rho({\bf v}-\bar {\bf v})\cdot (\nabla \Phi-\nabla \bar \Phi)(x, \tau)dx| \notag\\ &\le \frac{1}{2}\int\rho|{\bf v}-\bar {\bf v}|^2(x, \tau)dx +\frac{1}{2}\int\rho|\nabla \Phi-\nabla \bar \Phi|^2(x, \tau)dx.\end{align} Applying Lemma \ref{lem2.2}, we obtain \begin{align}\label{x10}&\int\rho|\nabla \Phi-\nabla \bar \Phi|^2(x, \tau)dx\notag\\ &\le ||\rho(\cdot,\tau)||_{\infty}||\nabla (\Phi-\bar \Phi)(\cdot, \tau)||_2^2\notag\\ &\le C ||\rho(\cdot,\tau)||_{\infty}\left(\int|\rho-\bar \rho|^{4/3}(x, \tau)dx\right)\left(\int|\rho-\bar \rho|(x, \tau)dx\right)^{2/3}\notag\\ &=C ||\rho(\cdot,\tau)||_{\infty}\left(\int_{S(\tau)}|\rho-\bar \rho|^{4/3}(x, \tau)dx\right)\left(\int_{S(\tau)}|\rho-\bar \rho|(x, \tau)dx\right)^{2/3}, \end{align} where $$S(\tau)={\rm supp}\, |\rho-\bar \rho|(\cdot, \tau).$$ It follows from H\"{o}lder's inequality that \begin{equation}\label{x11}\int_{S(\tau)}|\rho-\bar \rho|^{4/3}(x, \tau)dx\le \left(\int_{S(\tau)}|\rho-\bar \rho|^2(x, \tau)dx\right)^{2/3}({\rm vol}\, S(\tau))^{1/3},\end{equation} and \begin{equation}\label{x12}\left(\int_{S(\tau)}|\rho-\bar \rho|(x,
\tau)dx\right)^{2/3}\le \left(\int_{S(\tau)}|\rho-\bar \rho|^2(x, \tau)dx\right)^{1/3}({\rm vol}\, S(\tau))^{1/3}.\end{equation} Then using (\ref{x10})-(\ref{x12}) we obtain \begin{equation} \label{x13}\int\rho|\nabla \Phi-\nabla \bar \Phi|^2(x, \tau)dx\le C ||\rho(\cdot, \tau)||_{\infty}||(\rho-\bar \rho)(\cdot, \tau)||_2^2({\rm vol}\, S(\tau))^{2/3}.\end{equation} In view of (\ref{eta2}), (4.29), (4.30) and (4.34), we have \begin{equation} \int \eta^*(x, t)dx\le \int \eta^*(x, 0)dx + C Z(T)\int_0^t \int\eta^*(x, \tau)dxd\tau,\end{equation} for $0\le t\le T$, where $$Z(T):= \sup_{0\le t \le T}(||\rho(\cdot, t)||_{\infty}(||\rho(\cdot, t)||_{\infty}+||\bar \rho(\cdot, t)||_{\infty})^{2-\gamma} ({\rm vol}\, S(t))^{2/3}+||\nabla_x\bar{\bf v}(\cdot, t)||_{\infty}).$$ Then (4.6) follows from Gronwall's inequality applied to (4.35) and using (4.19) and (4.20). This completes the proof of Theorem 4.1. \section{Uniform A Priori Estimates} The theorem proved in this section gives a uniform a priori estimate for the entropy weak solution (defined in Section 4) of the Cauchy problem (\ref{1.1}) and (\ref{initial}). As we shall see, this estimate justifies some assumptions made in Section 3 and should be useful for obtaining the existence of global weak solutions for the Cauchy problem. \begin{thm}\label{thm4.2} If $(\rho, {\bf m})\in L^{\infty}([0, T]; L^1(\RR^3))$ satisfies the first equation in (1.1) in the sense of distributions, then \begin{equation}\label{6.1}\int_{\RR^3} \rho(x, t)dx=\int_{\RR^3}\rho(x, 0)dx=:M, \qquad 0<t<T.\end{equation} Let $(\rho, {\bf m}, \Phi)$ be a weak solution defined in Definition 3.1.
Suppose $(\rho, {\bf m}, \Phi)$ satisfies the entropy condition (4.2), $\rho\in L^{\infty}([0, T]; L^1(\RR^3))\cap L^{\infty}([0, T]; L^r(\RR^3))$ for some $r$ satisfying $r>3/2$ and $r\ge \gamma$, ${\bf m}\in L^{\infty}([0, T]; L^s(\RR^3))$ ($s>3$), and $(\eta, {\bf q})\in L^{\infty}([0, T]; L^1(\RR^3))$, where $\eta$ and ${\bf q}$ are given in (4.3). Moreover, we assume that $(\rho, {\bf m})$ has the following additional regularity: \begin{equation}\label{ca} \lim_{h\to 0}\int_0^t\int_{\RR^3}|\rho(x, \tau+h)-\rho(x, \tau)|dxd\tau=0, \quad t\in (0, T), a.e.\end{equation} Then \begin{equation}\label{6.2} E(t)\le E(0), \qquad 0<t<T,\end{equation} and if $\gamma>\frac{4}{3}$, then \begin{equation}\label{estimate1} H(t)\le C_1 H(0)+ C_2, \qquad 0<t<T, \end{equation} where $C_1$ and $C_2$ are two positive constants depending only on $\gamma$ and $M$ (cf. (\ref{6.1})), and where $$H(t)= \int_{\RR^3}\{\frac{\rho^\gamma}{\gamma-1}+\frac{|{\bf m}|^2}{2\rho}+\frac{1}{8\pi}|\nabla\Phi|^2\}(x, t)dx, \quad t\in [0, T).$$ \end{thm} \begin{rem} (\ref{6.1}) and (\ref{6.2}) justify some assumptions made in Section 3 on the conservation of total mass and the non-increase of energy. \end{rem} \begin{rem} The boundedness of $\int_{\RR^3}\rho^\gamma(x,t) dx$ was proved in \cite{LY} for smooth solutions if $\gamma>4/3$. Here we prove that this is still true for general weak solutions satisfying the entropy condition, even without assuming that $\rho\in L^{\infty}$. In fact, the global existence of radial $L^{\infty}$-solutions was proved in \cite{Wang} for (\ref{1.1}) outside a ball. The blow-up of the $L^{\infty}$-norm of the radial solutions of (\ref{1.1}) in the entire $\RR^3$ space was discussed in \cite{MK} and \cite{DXY}, respectively.
\end{rem} \begin{rem}\label{rem12} Condition (\ref{ca}) can be assured by the following condition \begin{equation}\label{cb} \lim_{\epsilon\to 0}\sup_{0\le \tau\le T, |y|\le 1}\int_{\RR^3}|\rho(x, \tau)-\rho(x-\epsilon y, \tau)|dx=0, \end{equation} if $(\rho, {\bf m})\in L^{\infty}([0, T]; L^1(\RR^3))$; this is proved in the Appendix. Note that (\ref{ca}) is the $L^1$ modulus of continuity in time and (\ref{cb}) is the $L^1$ modulus of continuity in space. \end{rem} In order to prove this theorem, we begin with the following lemma. \begin{lem} If $f\in L^{r}(\RR^3)$ ($r\ge 1$), then \begin{equation}\label{0751}Bf\in \begin{cases}L^p(\RR^3), {\rm ~with~} 1/p=1/r-2/3, \qquad {\rm if~} r<3/2,\\ L^{\infty}(\RR^3), \qquad {\rm if~} r\ge 3/2;\end{cases}\end{equation} and \begin{equation}\label{0752}\nabla (Bf)\in \begin{cases} L^q(\RR^3), {\rm ~with~} 1/q=1/r-1/3, \qquad {\rm if~} r<3,\\ L^{\infty}(\RR^3), \qquad {\rm if~} r\ge 3.\end{cases}\end{equation} \end{lem} The proof of this lemma follows from the extended Young's inequality (cf. \cite{RS}, p. 32). \begin{lem} Suppose $0\le \rho\in L^{\infty}([0, T]; L^1(\RR^3))$ and $\frac{{\bf m}}{\sqrt\rho}\in L^{\infty}([0, T]; L^2(\RR^3))$; then \begin{equation}\label{0753} {\bf m}\in L^{\infty}([0, T]; L^1(\RR^3)).\end{equation}\end{lem} \begin{proof} Using H\"{o}lder's inequality, we have \begin{equation}\label{0754} \int |{\bf m}|dx=\int \sqrt {\rho}\frac{|{\bf m}|}{\sqrt {\rho}}dx\le (\int \rho dx)^{1/2}(\int \frac{|{\bf m}|^2}{\rho}dx)^{1/2}.\end{equation} Note that (\ref{0754}) implies ${\bf m}\in L^{\infty}([0, T]; L^1(\RR^3))$. \end{proof} \begin{rem} $\eta\in L^{\infty}([0, T]; L^1(\RR^3))$ implies $\frac{{\bf m}}{\sqrt\rho}\in L^{\infty}([0, T]; L^2(\RR^3))$.\end{rem} \begin{lem} Let $(\rho, {\bf m}, \Phi)$ be a weak solution defined in Definition 3.1.
Suppose $(\rho, {\bf m}, \Phi)$ satisfies the entropy condition (4.2), $\rho\in L^{\infty}([0, T]; L^1(\RR^3))\cap L^{\infty}([0, T]; L^r(\RR^3))$ for some $r$ satisfying $r>3/2$ and $r\ge \gamma$, ${\bf m}\in L^{\infty}([0, T]; L^s(\RR^3))$ ($s>3$), $(\eta, {\bf q})\in L^{\infty}([0, T]; L^1(\RR^3))$, where $\eta$ and ${\bf q}$ are given in (4.3). Then, for any $\tau\in [0, T)$, we have \begin{equation}\label{entropy12} \int_{\RR^3}\eta(x, \tau)dx-\int_0^{\tau}\int_{\RR^3} {\bf m}\cdot \nabla \Phi dxdt\le \int_{\RR^3}\eta(x, 0)dx, \quad \tau\in (0, T), a.e. \end{equation} \end{lem} \begin{proof} For a fixed $\tau\in (0, T)$, and small positive $\epsilon$ and $R>0$, we define \begin{equation}\label{thetat} \theta(t)=\begin{cases} & 1, \qquad\qquad\qquad 0\le t\le \tau,\\ &-\frac{1}{\epsilon}(t-\tau)+1, \qquad \tau\le t\le \tau+\epsilon, \\ & 0, \qquad\qquad\qquad \tau+\epsilon\le t\le T, \end{cases}\end{equation} and for $x\in \RR^3$, \begin{equation}\label{alphax} \alpha(x)=\begin{cases} & 1, \qquad\qquad\qquad |x|\le R,\\ &-\frac{1}{\epsilon}(|x|-R)+1, \qquad R\le |x|\le R+\epsilon, \\ & 0, \qquad\qquad\qquad |x|\ge R+\epsilon. \end{cases}\end{equation} Let $\beta(x, t)=\theta(t)\alpha(x)$, then $\beta(x, t)$ is Lipschitz continuous, with compact support in $[0, T)\times \RR^3$. Using (\ref{entro}), a calculation yields \begin{align}\label{entropyj} &-\frac{1}{\epsilon}\int_{\tau}^{\tau+\epsilon}\int_{|x|\le R}\eta(x, t)dxdt-\frac{1}{\epsilon}\int_{\tau}^{\tau+\epsilon}\int_{R\le |x|\le R+\epsilon} \eta(x, t)\alpha(x)dxdt\notag\\ &-\frac{1}{\epsilon}\int_0^{\tau+\epsilon}\int_{R\le |x|\le R+\epsilon}(\sum_{j=1}^3q_j\frac{x_j}{|x|})\theta(t)dxdt\notag\\ &+\int_{|x|\le R}\eta(x,0)dx+\int _{R\le |x|\le R+\epsilon}\eta(x, 0) \alpha(x)dx\notag\\ &+\int_0^{\tau}\int_{|x|\le R} {\bf m}\cdot \nabla \Phi dxdt+\int_{\tau}^{\tau+\epsilon}\int_{|x|\le R+\epsilon}{\bf m}\cdot\nabla \Phi \beta(x, t)dxdt\ge 0. 
\end{align} Since $(\eta, {\bf q})\in L^{\infty}([0, T]; L^1(\RR^3))$, we have \begin{equation}\label{rxyz1} \lim_{R\to \infty}\int_{R\le |x|\le R+\epsilon}\eta(x, t)\alpha(x)dx=0, \qquad a.e.,\ t\in [0, T], \end{equation} \begin{equation}\label{rxyz2} \lim_{R\to \infty}\int_{R\le |x|\le R+\epsilon}\eta(x, 0)\alpha(x)dx=0, \end{equation} and \begin{equation}\label{rxyz3} \lim_{R\to \infty}\int_{R\le |x|\le R+\epsilon}(\sum_{j=1}^3 q_j\frac{x_j}{|x|})\theta(t) dx=0, \qquad a.e.,\ t\in [0, T]. \end{equation} We let $R\to \infty$ in (\ref{entropyj}) to get \begin{align}\label{entropyk} &-\frac{1}{\epsilon}\int_{\tau}^{\tau+\epsilon}\int_{\RR^3}\eta(x, t)dxdt +\int_{\RR^3}\eta(x,0)dx\notag\\ &+\int_0^{\tau}\int_{\RR^3} {\bf m}\cdot \nabla \Phi dxdt+\int_{\tau}^{\tau+\epsilon}\int_{\RR^3}{\bf m}\cdot\nabla \Phi \beta(x, t)dxdt\ge 0. \end{align} Because $\rho\in L^{\infty}([0, T]; L^r(\RR^3))$ with $r>3/2$, by (5.7) we have $\nabla\Phi\in L^{\infty}([0, T]; L^q(\RR^3))$ with $q>3$. It then follows from (5.8), the fact that ${\bf m}\in L^{\infty}([0, T]; L^1(\RR^3))\cap L^{\infty}([0, T]; L^s(\RR^3))$ with $s>3$, and H\"{o}lder's inequality that $${\bf m}\cdot \nabla \Phi\in L^{\infty}([0, T]; L^1(\RR^3)).$$ This implies $$\lim_{\epsilon\to 0}\int_{\tau}^{\tau+\epsilon}\int_{\RR^3}{\bf m}\cdot\nabla \Phi \beta(x, t)dxdt=0.$$ Letting $\epsilon\to 0$ in (\ref{entropyk}), we obtain (\ref{entropy12}). \end{proof} \begin{lem}\label{phit} Let $(\rho, {\bf m}, \Phi)$ be an entropy weak solution defined in Section 4 satisfying the conditions in Lemma 5.3. Then \begin{equation}\label{171} \partial_t\Phi(x, t)=-\int_{\RR^3}{\bf m}(y, t)\cdot \nabla_y(\frac{1}{|y-x|})dy.\end{equation} Moreover, \begin{equation}\label{172} \partial_t\Phi\in L^{\infty}([0, T]; L^1(\RR^3)),\end{equation} and \begin{equation}\label{173} \partial_t \Phi\in L^{\infty}([0, T]\times\RR^3). \end{equation}\end{lem} \begin{proof} The key is to prove (\ref{171}).
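Formally, (\ref{171}) follows from the continuity equation and an integration by parts: $$\partial_t\Phi(x, t)=-\int_{\RR^3}\frac{\partial_t\rho(y, t)}{|y-x|}dy=\int_{\RR^3}\frac{{\rm div}_y\, {\bf m}(y, t)}{|y-x|}dy=-\int_{\RR^3}{\bf m}(y, t)\cdot \nabla_y\Big(\frac{1}{|y-x|}\Big)dy,$$ since $\Phi=-B\rho$ and $\partial_t\rho+{\rm div}\,{\bf m}=0$. The argument below makes this computation rigorous, since $(\rho, {\bf m})$ is only a weak solution.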
Once (\ref{171}) is proved, (\ref{172}) and (\ref{173}) follow from the fact that ${\bf m}\in L^{\infty}([0, T]; L^1(\RR^3))\cap L^{\infty}([0, T]; L^s(\RR^3))$ and the extended Young's inequality (cf. \cite{RS}, p.32). In order to prove (\ref{171}), we use the fact that $(\rho, {\bf m})$ satisfies the first equation of (\ref{1.1}) in the sense of distributions. For this purpose, we choose a $C^{\infty}$ function $\delta(z) $ ($z\in \RR^1$) with compact support in the interval $[1, 2]$ satisfying $0\le \delta(z)\le 1$ and $\int_{-\infty}^{+\infty}\delta(z)dz=1$, and let \begin{equation}\label{174} \delta_{\epsilon}(z)=\frac{1}{\epsilon}\delta(\frac{z}{\epsilon}),\ \alpha_{\epsilon}(z)=\int_{-\infty}^z \delta_{\epsilon}(s)ds, \quad z\in \RR^1, \end{equation} for small positive $\epsilon$. For $y\in \RR^3$, $0<\epsilon<\frac{1}{2}$ and $R>1$, we set \begin{equation} f_{\epsilon}^R(y)=\begin{cases} &\alpha_{\epsilon}(|y|), \qquad\qquad\qquad |y|\le \frac{R}{2}+\epsilon,\\ & \alpha_{\epsilon}(R+2\epsilon-|y|), \qquad |y|\ge \frac{R}{2}+\epsilon.\end{cases} \end{equation} Then \begin{equation} \begin{cases} & f_{\epsilon}^R(y)=0,\qquad\qquad as\ |y|\le \epsilon,\ or\ |y|\ge R+\epsilon, \\ & 0\le f_{\epsilon}^R(y)\le 1, \qquad as\ \epsilon\le |y|\le 2\epsilon, \ or\ R\le |y|\le R+\epsilon,\\ & f_{\epsilon}^R(y)=1,\qquad\qquad as\ 2\epsilon\le |y|\le R.\end{cases}\end{equation} For $x\in \RR^3$, we choose \begin{equation} g_{\epsilon}^R(y)=f_{\epsilon}^R(y-x)\frac{1}{|y-x|}.\end{equation} Then $g_{\epsilon}^R(y)\in C^{\infty}_0(\RR^3)$ for any fixed $x\in \RR^3.$ Since $(\rho, {\bf m})$ satisfies the first equation of (\ref{1.1}) in the sense of distributions, it is easy to show (see \cite{freisel} for instance), $\int_{\RR^3}\rho(y, t) g_{\epsilon}^R(y)dy$ is differentiable in $t$ for $t\in [0, T]$ a. 
e., and satisfies \begin{equation}\label{keyidea} \frac{d}{dt}\int_{\RR^3}\rho(y, t) g_{\epsilon}^R(y)dy=\int_{\RR^3}{\bf m}(y, t)\cdot \nabla_yg_{\epsilon}^R(y)dy, \qquad t\in [0, T],\ a.e.\end{equation} We also let \begin{equation}\label{gepsilon} g_{\epsilon}(y)=\lim_{R\to \infty} g_{\epsilon}^R(y), \qquad y\in\RR^3. \end{equation} Then we show (\ref{171}) in the following steps.\\ \underline{Step 1}. We show that $\int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy$ is differentiable for $t\in (0, T]$, a.e., and \begin{equation}\label{step1}\frac{d}{dt}\int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy=\int_{\RR^3}{\bf m}\cdot \nabla_y g_{\epsilon}(y)dy,\end{equation} for $t\in (0, T]$, a.e.\\ For this purpose, we prove that \begin{align}\label{step11}& \int_{\RR^3}\frac{\rho(y, t+h)-\rho(y, t)}{h}g_{\epsilon}^R(y)dy \to \int_{\RR^3}{\bf m}(y, t)\cdot \nabla_yg_{\epsilon}^R(y)dy\notag\\&\qquad{\rm as~} h\to 0 {\rm ~uniformly~in~} R {\rm ~for~} R\ge 1.\end{align} This is proved as follows. Since $(\rho, {\bf m})$ satisfies the first equation of (\ref{1.1}) in the sense of distributions and $g_{\epsilon}^R(y)\in C^{\infty}_c(\RR^3)$, it is easy to verify (see \cite{freisel} for instance), \begin{equation}\label{tt}\int_{\RR^3}(\rho(y, t+h)-\rho(y, t))g_{\epsilon}^R(y)dy=\int_t^{t+h}\int_{\RR^3}{\bf m}(y,s)\cdot \nabla_yg_{\epsilon}^R(y)dyds, \end{equation} for $[t, t+h]\subset [0, T].$ Thus, \begin{equation}\label{tt1}\lim_{h\to 0}\int_{\RR^3}\frac{(\rho(y, t+h)-\rho(y, t))}{h}g_{\epsilon}^R(y)dy=\lim_{h\to 0}\frac{1}{h}\int_t^{t+h}\int_{\RR^3}{\bf m}(y,s)\cdot \nabla_yg_{\epsilon}^R(y)dyds.
\end{equation} On the other hand, \begin{align}\label{geg} &\frac{1}{h}\int_t^{t+h}\int_{\RR^3}{\bf m}(y,s)\cdot \nabla_yg_{\epsilon}^R(y)dyds-\int_{\RR^3}{\bf m}(y,t)\cdot \nabla_yg_{\epsilon}^R(y)dy\notag\\ &=\frac{1}{h}\int_t^{t+h}\int_{\RR^3}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_yg_{\epsilon}^R(y)dyds\notag\\ &=\frac{1}{h}\int_t^{t+h}\int_{\epsilon\le |y-x|\le R}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_y(\frac{\alpha_{\epsilon}(|y-x|)}{|y-x|})dyds\notag\\ &+\frac{1}{h}\int_t^{t+h}\int_{R\le |y-x|\le R+\epsilon}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_yg_{\epsilon}^R(y)dyds.\end{align} The first term can be handled as follows. For $h>0$, \begin{align}\label{gege1} &|\frac{1}{h}\int_t^{t+h}\int_{\epsilon\le |y-x|\le R}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_y(\frac{\alpha_{\epsilon}(|y-x|)}{|y-x|})dyds|\notag\\ &\le \frac{1}{h}\int_t^{t+h}\int_{\epsilon\le |y-x|\le R}|{\bf m}(y,s)-{\bf m}(y,t)|(\frac{\delta_{\epsilon}(|y-x|)}{|y-x|}+\frac{\alpha_{\epsilon}(|y-x|)}{|y-x|^2})dyds\notag\\ &\le \frac{2}{\epsilon h}\int_t^{t+h}\int_{\epsilon\le |y-x|\le 2\epsilon}|{\bf m}(y,s)-{\bf m}(y,t)|\frac{1}{|y-x|}dyds\notag\\ &+ \frac{1}{ h}\int_t^{t+h}\int_{2\epsilon\le |y-x|\le R}|{\bf m}(y,s)-{\bf m}(y,t)|\frac{1}{|y-x|^2}dyds\notag\\ &\le \frac{2}{\epsilon^2 h}\int_t^{t+h}\int_{\epsilon\le |y-x|\le 2\epsilon}|{\bf m}(y,s)-{\bf m}(y,t)|dyds\notag\\ &+ \frac{1}{ 4\epsilon^2 h}\int_t^{t+h}\int_{2\epsilon\le |y-x|\le R}|{\bf m}(y,s)-{\bf m}(y,t)|dyds.\end{align} The last term in (\ref{geg}) can be estimated as follows.
\begin{align}\label{gege2} &|\frac{1}{h}\int_t^{t+h}\int_{R\le |y-x|\le R+\epsilon}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_yg_{\epsilon}^R(y)dyds|\notag\\ &\le (\frac{1}{\epsilon }+\frac{1}{R})\frac{1}{h}\int_t^{t+h}\int_{R\le |y-x|\le R+\epsilon}|{\bf m}(y,s)-{\bf m}(y,t)|\frac{1}{|y-x|}dyds\notag\\ &\le (\frac{1}{\epsilon }+\frac{1}{R})\frac{1}{h} \int_t^{t+h}\int_{\RR^3}|{\bf m}(y,s)-{\bf m}(y,t)|dyds.\end{align} Since we choose $R>1$, (\ref{geg}), (\ref{gege1}) and (\ref{gege2}) yield \begin{align}\label{geg1} &|\frac{1}{h}\int_t^{t+h}\int_{\RR^3}{\bf m}(y,s)\cdot \nabla_yg_{\epsilon}^R(y)dyds-\int_{\RR^3}{\bf m}(y,t)\cdot \nabla_yg_{\epsilon}^R(y)dy|\notag\\ &\le (\frac{9}{4\epsilon^2}+\frac{1}{\epsilon}+1)\frac{1}{h}\int_t^{t+h}\int_{\RR^3}|{\bf m}(y,s)-{\bf m}(y,t)|dyds.\end{align} Since ${\bf m}\in L^{\infty}([0, T]; L^1(\RR^3))$, we have \begin{equation} \lim_{h\to 0^+}\frac{1}{ h}\int_t^{t+h}\int_{\RR^3}|{\bf m}(y,s)-{\bf m}(y,t)|dyds=0, \quad t\in [0, T],\ a.e.\end{equation} Therefore $\frac{1}{h}\int_t^{t+h}\int_{\RR^3}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_yg_{\epsilon}^R(y)dyds$ converges to zero as $h\to 0^+$ uniformly in $R$ for $R>1$. By a similar approach, we can show that $\frac{1}{h}\int_{t+h}^{t}\int_{\RR^3}({\bf m}(y,s)-{\bf m}(y,t))\cdot \nabla_yg_{\epsilon}^R(y)dyds$ converges to zero as $h\to 0^-$ uniformly in $R$ for $R>1$. This verifies (\ref{step11}). Then (\ref{step1}) follows from the following argument, using (5.22) and (5.24).
\begin{align}\label{step12}&\frac{d}{dt}\int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy\notag\\ & =\lim_{h\to 0}\lim_{R\to \infty}\int_{\RR^3}\frac{\rho(y, t+h)-\rho(y, t)}{h}g_{\epsilon}^R(y)dy\notag\\ &= \lim_{R\to \infty}\lim_{h\to 0}\int_{\RR^3}\frac{\rho(y, t+h)-\rho(y, t)}{h}g_{\epsilon}^R(y)dy\notag\\ &=\lim_{R\to \infty}\frac{d}{dt}\int_{\RR^3}\rho(y, t)g_{\epsilon}^R(y)dy\notag\\ &=\lim_{R\to \infty}\int_{\RR^3}{\bf m}\cdot \nabla_y g_{\epsilon}^R(y)dy\notag\\ &=\int_{\RR^3}{\bf m}\cdot \nabla_y g_{\epsilon}(y)dy.\end{align} \underline{Step 2.} In this step, we show that \begin{align}\label{71211} & \int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy \to \int_{\RR^3}\frac{\rho(y, t)}{|y-x|}dy{\rm~ as~}\epsilon\to 0\notag\\ &{\rm~uniformly~in~} t {\rm~ for~} t\in (0, T),\end{align} and \begin{equation}\label{71212}\lim_{\epsilon\to 0}\int_{\RR^3}{\bf m}(y,t)\cdot\nabla_yg_{\epsilon}(y)dy=\int_{\RR^3}{\bf m}(y,t)\cdot\nabla_y(\frac{1}{|y-x|})dy, \end{equation} for $t\in (0, T)$. \\ We prove (\ref{71211}) as follows. Since $\rho\in L^{\infty}([0, T]; L^r(\RR^3))$ with $r>3/2$ and $r\ge \gamma$, and since $0\le \alpha_{\epsilon}\le 1$ with $g_{\epsilon}(y)=0$ for $|y-x|\le \epsilon$, we have, by H\"{o}lder's inequality, \begin{align}\label{71213} &|\int_{\RR^3}\frac{\rho(y, t)}{|y-x|}dy-\int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy|\le \int_{|y-x|\le 2\epsilon}\frac{\rho(y, t)}{|y-x|}dy\notag\\ &\le (\int_{|y-x|\le 2\epsilon}\rho^r(y, t)dy)^{1/r}(\int_{|y-x|\le 2\epsilon}\frac{1}{|y-x|^l}dy)^{1/l}\notag\\ &\le ||\rho||_{L^{\infty}([0, T]; L^r(\RR^3))}{\left(\int_{0}^{2\epsilon} 4\pi s^{2-l}ds\right)}^{1/l}, \end{align} where $l=\frac{r}{r-1}$. Since $r>3/2$, we have $l<3$, so the right-hand side tends to zero as $\epsilon\to 0$, and (\ref{71211}) follows. Next, (\ref{71212}) can be shown as follows.
Since ${\bf m}\in L^{\infty}([0, T]; L^s(\RR^3))$ for $s>3$, we have \begin{align}\label{71214} &|\int_{\RR^3}{\bf m}(y, t)\cdot\nabla_yg_{\epsilon}(y)dy-\int_{\RR^3}{\bf m}(y, t)\cdot\nabla_y\frac{1}{|y-x|}dy|\notag\\ &=|\int_{|y-x|\le 2\epsilon} {\bf m}(y, t)\cdot \nabla_y(\frac{1}{|y-x|}(\alpha_{\epsilon}(|y-x|)-1))dy|\notag\\ &\le\int_{|y-x|\le 2\epsilon} |{\bf m}(y, t)|(\frac{1}{|y-x|^2}+\frac{1}{|y-x|}\delta_{\epsilon}(|y-x|))dy\notag\\ &\le C\int_{|y-x|\le 2\epsilon} \frac{|{\bf m}(y, t)|}{|y-x|^2}dy\notag\\ &\le C||{\bf m}||_{L^{\infty}([0, T]; L^s(\RR^3))}(\int_{|y-x|\le 2\epsilon} \frac{1}{|y-x|^{2q}}dy)^{1/q}\notag\\ &= C||{\bf m}||_{L^{\infty}([0, T]; L^s(\RR^3))}\left(\int_{0}^{ 2\epsilon}4\pi \tau^{2-2q}d\tau \right)^{1/q}, \end{align} where $q=\frac{s}{s-1}$ and we used $\delta_{\epsilon}(|y-x|)\le 1/\epsilon\le 2/|y-x|$ on its support. Since $s>3$, we have $q<3/2$, so $2-2q>-1$ and the right-hand side tends to zero as $\epsilon\to 0$. Therefore, (\ref{71212}) is proved. \\ By (\ref{71211}) and (5.24), we have that $\int_{\RR^3}\frac{\rho(y, t)}{|y-x|}dy$ is differentiable with respect to $t$ for $(t, x)\in (0, T)\times \RR^3$, a.e. Moreover, by (5.24), (\ref{71211}) and (\ref{71212}), we obtain \begin{align} \frac{d}{dt}\int_{\RR^3}\frac{\rho(y, t)}{|y-x|}dy&=\frac{d}{dt}(\lim_{\epsilon\to 0}\int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy)\notag\\ &=\lim_{\epsilon\to 0}\frac{d}{dt}\int_{\RR^3}\rho(y, t)g_{\epsilon}(y)dy=\lim_{\epsilon\to 0}\int_{\RR^3}{\bf m}(y, t)\cdot \nabla_yg_{\epsilon}(y)dy\notag\\ &=\int_{\RR^3}{\bf m}(y, t)\cdot \nabla_y(\frac{1}{|y-x|})dy. \end{align} This proves (5.18). (5.19) and (5.20) then follow, as shown at the beginning of the proof of this lemma. \end{proof} {\it Proof of Theorem 5.1}\\ We prove Theorem 5.1 in the following steps. \\ \vskip 0.2cm \noindent \underline{Step 1} In this step, we prove (5.1).
This can be proved by using (5.25) in which $g_{\epsilon}^R(y)$ is replaced by $f_{\epsilon}^R(y)$, i.e., \begin{equation}\label{5.441}\frac{d}{dt}\int_{\RR^3}\rho(y, t)f_{\epsilon}^R(y)dy=\int_{\RR^3}{\bf m}(y, t)\cdot \nabla_y f_{\epsilon}^R(y)dy, \qquad t\in [0, T],\ a.e., \end{equation} where $f_{\epsilon}^R$ is defined in (5.22). We integrate (\ref{5.441}) to get \begin{equation}\label{5.442}\int_{\RR^3}\rho(y, t)f_{\epsilon}^R(y)dy-\int_{\RR^3}\rho(y, 0)f_{\epsilon}^R(y)dy=\int_0^t\int_{\RR^3}{\bf m}(y,s)\cdot \nabla_y f_{\epsilon}^R(y)dyds. \end{equation} By the same argument as in the proof of Lemma 5.4, we can prove $$\lim_{\epsilon\to 0}\lim_{R\to \infty}\int_{\RR^3}\rho(y, t)f_{\epsilon}^R(y)dy=\int_{\RR^3}\rho(y, t)dy,$$ $$\lim_{\epsilon\to 0}\lim_{R\to \infty}\int_{\RR^3}\rho(y, 0)f_{\epsilon}^R(y)dy=\int_{\RR^3}\rho(y, 0)dy, $$ and $$\lim_{\epsilon\to 0}\lim_{R\to \infty}\int_0^t\int_{\RR^3}{\bf m}(y,s)\cdot \nabla_y f_{\epsilon}^R(y)dy=0.$$ (5.1) follows from (\ref{5.442}) by letting $R\to \infty$ and $\epsilon\to 0$. \vskip 0.2cm \noindent \underline{Step 2} In this step, we show that \begin{equation}\label{5.41} \int_0^t\int_{\RR^3} \rho(x, s)\partial_s \Phi(x, s)dxds=\frac{1}{2}\left(\int_{\RR^3}(\rho\Phi)(x, t)dx-\int_{\RR^3}(\rho\Phi)(x, 0)dx\right), \ t\in [0, T). \end{equation} This can be proved as follows.
\begin{align}\label{5.42} & \int_0^t \int_{\RR^3}\rho(x, s)\partial_s \Phi(x, s)dxds\notag\\ &=-\lim_{ h\to 0}\frac{1}{h}\int_0^t\int_{\RR^3}\rho(x, s)\int_{\RR^3}\frac{\rho(y, s+h)-\rho(y, s)}{|x-y|}dydxds\notag\\ &= -\lim_{h\to 0}\frac{1}{h}\left(\int_h^{t+h}\int_{\RR^3}\int_{\RR^3}\frac{\rho(x, s-h)\rho(y, s)}{|x-y|}dydxds-\int_0^t\int_{\RR^3}\int_{\RR^3}\frac{\rho(x, s)\rho(y, s)}{|x-y|}dydxds\right)\notag\\ &=-\lim_{h\to 0}\frac{1}{h}\int_h^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, s-h)-\rho(x, s))\rho(y, s)}{|x-y|}dydxds\notag\\ &-\lim_{h\to 0}\frac{1}{h}\int_t^{t+h}\int_{\RR^3}\int_{\RR^3}\frac{\rho(x, s-h)\rho(y, s)}{|x-y|}dydxds +\lim_{h\to 0}\frac{1}{h}\int_0^{h}\int_{\RR^3}\int_{\RR^3}\frac{\rho(x, s)\rho(y, s)}{|x-y|}dydxds\notag\\ &=-\lim_{h\to 0}\frac{1}{h}\int_h^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, s-h)-\rho(x, s))\rho(y, s)}{|x-y|}dydxds\notag\\ &+\int_{\RR^3}(\rho \Phi)(x, t)dx-\int_{\RR^3}(\rho\Phi)(x, 0)dx. \end{align} On the other hand, \begin{align}\label{5.43} &\frac{1}{h}\int_h^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, s-h)-\rho(x, s))\rho(y, s)}{|x-y|}dydxds\notag\\ &=\frac{1}{h}\int_0^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))\rho(y, \tau)}{|x-y|}dydxd\tau\notag\\ &+\frac{1}{h}\int_0^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))(\rho(y, \tau+h)-\rho(y, \tau))}{|x-y|}dydxd\tau\notag\\ &-\frac{1}{h}\int_{t-h}^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))\rho(y, \tau+h)}{|x-y|}dydxd\tau.\end{align} Since \[\lim_{h\to 0}\frac{1}{h}\int_{\RR^3}\frac{\rho(y, \tau+h)-\rho(y, \tau)}{|x-y|}dy=-\partial_{\tau}\Phi(x, \tau),\] we have \begin{equation}\label{5.44}|\frac{1}{h}\int_{\RR^3}\frac{\rho(y, \tau+h)-\rho(y, \tau)}{|x-y|}dy|\le |\partial_{\tau}\Phi(x, \tau)|+1,\end{equation} for small $|h|$.
Therefore, \begin{align}\label{5.45}&|\frac{1}{h}\int_0^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))(\rho(y, \tau+h)-\rho(y, \tau))}{|x-y|}dydxd\tau |\notag\\ &\le (||\partial_t\Phi ||_{L^{\infty}([0, T]\times \RR^3)}+1)\int_0^{t}\int_{\RR^3}|\rho(y, \tau+h)-\rho(y, \tau)|dyd\tau. \end{align} Then (\ref{ca}), (5.20) and (\ref{5.45}) imply \begin{equation}\label{5.46} \lim_{h\to 0}\frac{1}{h}\int_0^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))(\rho(y, \tau+h)-\rho(y, \tau))}{|x-y|}dydxd\tau=0.\end{equation} Similarly, we have, for small $|h|$, \begin{align}\label{5.47} &|\frac{1}{h}\int_{t-h}^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))\rho(y, \tau+h)}{|x-y|}dydxd\tau|\notag\\ &\le (||\partial_t\Phi ||_{L^{\infty}([0, T]\times \RR^3)}+1)\int_{t-h}^{t}\int_{\RR^3}\rho(y, \tau+h)dyd\tau. \end{align} Since $\rho\in L^{\infty}([0, T]; L^1(\RR^3))$, (\ref{5.47}) implies, \begin{equation}\label{5.48}\lim_{h\to 0}\frac{1}{h}\int_{t-h}^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))\rho(y, \tau+h)}{|x-y|}dydxd\tau=0. \end{equation} Hence, (\ref{5.43}), (\ref{5.46}) and (\ref{5.48}) yield \begin{align}\label{5.49} &\lim_{h\to 0}\frac{1}{h}\int_h^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, s-h)-\rho(x, s))\rho(y, s)}{|x-y|}dydxds\notag\\ &=\lim_{h\to 0}\frac{1}{h}\int_0^{t}\int_{\RR^3}\int_{\RR^3}\frac{(\rho(x, \tau)-\rho(x, \tau+h))\rho(y, \tau)}{|x-y|}dydxd\tau. \end{align} This, together with (\ref{5.42}), implies (\ref{5.41}). \vskip 0.2cm \noindent\underline{Step 3} In this step, we prove (5.3). \\ Since $\rho\in L^{\infty}([0, T]; L^1(\RR^3))\cap L^{\infty}([0, T]; L^r(\RR^3))$, where $r>3/2$ and $r\ge \gamma$, we have, in view of (5.7) that \begin{equation}\label{5.52} \nabla \Phi\in L^{\infty} ([0, T]; L^{3/2}(\RR^3))\cap L^{\infty}([0, T]; L^{\lambda}(\RR^3)), \end{equation} if $r<3$, where $\frac{1}{\lambda}=\frac{1}{r}-\frac{1}{3}$. We also know that $\lambda>3$ if $r>3/2$.
Similarly, by (5.7), we have \begin{equation}\label{5.53} \nabla \Phi\in L^{\infty} ([0, T]; L^{3/2}(\RR^3))\cap L^{\infty}([0, T]\times \RR^3 ), \end{equation} if $r\ge 3$. Furthermore, because $(\rho, {\bf m})$ satisfies the first equation of (1.1) in the sense of distributions, then by a density argument as in \cite{freisel}, in view of (5.19), (5.20), (\ref{5.52}) and (\ref{5.53}), we have, \begin{align}\label{5.54} &\int_{\RR^3}(\rho\Phi)(x, t)dx-\int_{\RR^3}(\rho\Phi)(x, 0)dx\notag\\ &=\int_0^t\int_{\RR^3}\rho(x, s)\partial_s\Phi(x,s)dxds+\int_0^t\int_{\RR^3}{\bf m}(x, s)\cdot \nabla\Phi(x,s)dxds, \end{align} for $t\in [0, T)$. This, together with (5.10) and (5.44), implies (5.3), due to the fact \begin{equation}\label{5.55} E(t)=\int_{\RR^3}\eta(x, t)dx-\frac{1}{2}\int_{\RR^3}(\rho\Phi)(x, t)dx, \end{equation} for $t\in [0, T)$. \vskip 0.2cm \noindent\underline{Step 4} In this step, we prove (5.4).\\ First, since $\rho\in L^{\infty}([0, T]; L^1(\RR^3))\cap L^{\infty}([0, T]; L^r(\RR^3))$ with $r>3/2$, it follows from \cite{lieb}, \cite{rein1} and \cite{Rein} that $$\frac{1}{2}\int_{\RR^3}(\rho\Phi)(x, t)dx= -\frac{1}{8\pi}\int_{\RR^3}|\nabla \Phi|^2(x, t)dx, \quad t\in [0, T].$$ Using (2.19), we have, for $\gamma>4/3$, \begin{equation}\label{00x}\frac{1}{8\pi}\int_{\RR^3}|\nabla\Phi|^2dx=\int \frac{1}{2}\rho B\rho dx \le C\Big(\int \rho^{4/3}dx\Big)\Big(\int \rho dx\Big)^{2/3}=CM^{2/3}\int \rho^{4/3}dx, \end{equation} where $A(\rho)$ is given by (2.3). Taking $p=1$, $q=4/3$, $r=\gamma$, and $a=\frac{\frac{3}{4}\gamma-1}{\gamma-1}$ in Young's inequality (2.17), we obtain, \begin{equation} ||\rho||_{4/3}\le ||\rho||_1^a||\rho||_{\gamma}^{1-a}=M^a||\rho||_{\gamma}^{1-a}.\end{equation} That is, \begin{equation}\label{hahax}\int\rho^{4/3}dx\le M^{\frac{4}{3}a}(\int\rho^\gamma dx)^b, \end{equation} where $b=\frac{1}{3(\gamma-1)}$. Since $\gamma>4/3$, we have $0<b<1$.
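As a quick check of the exponents in (\ref{hahax}) (a direct computation with $p=1$, $q=4/3$, $r=\gamma$):
\[
\frac{3}{4}=a+\frac{1-a}{\gamma}\ \Longrightarrow\ a=\frac{\frac{3}{4}\gamma-1}{\gamma-1},\qquad 1-a=\frac{\gamma}{4(\gamma-1)},
\]
and raising $||\rho||_{4/3}\le M^{a}||\rho||_{\gamma}^{1-a}$ to the power $4/3$ gives
\[
\int\rho^{4/3}dx\le M^{\frac{4}{3}a}\Big(\int\rho^{\gamma}dx\Big)^{\frac{4(1-a)}{3\gamma}},\qquad \frac{4(1-a)}{3\gamma}=\frac{1}{3(\gamma-1)}=b.
\]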
Therefore, (\ref{00x}) and (\ref{hahax}) imply \begin{equation}\label{0001x} \int\frac{1}{2}\rho B\rho dx \le C(\gamma-1)^bM^{\frac{4}{3}a+\frac{2}{3}}(\int A(\rho)dx)^b.\end{equation} Using the inequality (cf. [15], p. 145) \begin{equation}\label{0002x}\alpha\beta\le \epsilon\alpha^s+\epsilon^{-t/s}\beta^t,\end{equation} if $s^{-1}+t^{-1}=1$ ($s, t>1$) and $\epsilon>0$, since $b<1$, we can bound $C(\gamma-1)^bM^{\frac{4}{3}a+\frac{2}{3}}(\int A(\rho)dx)^b$ by $\frac{1}{2}\int A(\rho)dx+C_2$, where $C_2$ is a constant depending only on $M$ and $\gamma$ (we can take $\epsilon=1/2$, $s=1/b$ and $t=(1-s^{-1})^{-1}$ in (\ref{0002x}) since $s>1$ due to $0<b<1$). Therefore, \begin{equation}\label{5.56} \frac{1}{2}|\int_{\RR^3}(\rho\Phi)(x,t)dx|=\frac{1}{8\pi}||\nabla\Phi(\cdot, t)||_2^2\le \frac{1}{2}\int_{\RR^3}\frac{\rho^\gamma(x, t)}{\gamma-1}dx+C,\end{equation} for $t\in [0, T),$ where $C$ is a constant depending only on $M=\int_{\RR^3}\rho(x,t)dx=\int_{\RR^3}\rho(x, 0)dx$ (cf. (5.1)) and $\gamma$. This, together with (5.3), implies (5.4). \section{ Appendix} In this appendix, we prove the following theorem, which is Remark \ref{rem12} in Section 5.
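Explicitly, the application of (\ref{0002x}) here reads as follows (with $\alpha=(\int A(\rho)dx)^{b}$, $\beta=C(\gamma-1)^{b}M^{\frac{4}{3}a+\frac{2}{3}}$, $\epsilon=1/2$, $s=1/b$ and $t=(1-b)^{-1}$):
\[
C(\gamma-1)^{b}M^{\frac{4}{3}a+\frac{2}{3}}\Big(\int A(\rho)dx\Big)^{b}
\le \frac{1}{2}\int A(\rho)dx+2^{t/s}\Big(C(\gamma-1)^{b}M^{\frac{4}{3}a+\frac{2}{3}}\Big)^{t}
=:\frac{1}{2}\int A(\rho)dx+C_{2},
\]
and $C_{2}$ depends only on $M$ and $\gamma$.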
\\ \noindent {\bf Theorem } If $(\rho, {\bf m})\in L^{\infty}([0, T]; L^1(\RR^3))$ satisfies the first equation of (\ref{1.1}) in the sense of distributions, then \begin{equation}\label{cb1} \lim_{\epsilon\to 0}\sup_{0\le t\le T, |y|\le 1}\int_{\RR^3}|\rho(x, t)-\rho(x-\epsilon y, t)|dx=0 \end{equation} implies \begin{equation}\label{ca1} \lim_{h\to 0}\int_{\RR^3}|\rho(x, t+h)-\rho(x, t)|dx=0, \quad t\in (0, T), a.e.\end{equation} \begin{proof} For any fixed $t \in (0, T)$ and small $h$, we let $$w(x)=\rho(x, t+h)-\rho(x, t).$$ First, we note that if $\psi(x)\in C^1(\RR^3)$ with $\psi$ and $\nabla\psi$ being bounded in $\RR^3$, then \begin{equation}\label{7.1} \int_{\RR^3}w(x)\psi(x)dx=\int_{t}^{t+h}\int_{\RR^3}{\bf m}(x, s)\cdot\nabla \psi(x)dxds.\end{equation} This is because $(\rho, {\bf m}) \in L^{\infty}([0, T]; L^1(\RR^3))$ satisfies the first equation of (\ref{1.1}) in the sense of distributions. The justification of (\ref{7.1}) is standard, for instance, see \cite{freisel}. In view of (\ref{7.1}), we have \begin{equation}\label{7.2} |\int_{\RR^3}w(x)\psi(x)dx|\le |h| \sup_{x\in \RR^3}|\nabla\psi(x)|\,||{\bf m}||_{L^{\infty}([0, T]; L^1(\RR^3))}.\end{equation} We choose $\psi$ as $$\psi(x)=\int_{\RR^3}sgn (w(x-\epsilon y))\delta(y)dy, $$ where $sgn$ is the sign function, $\delta\in C_0^\infty(\RR^3)$ is a smooth function satisfying $0\le \delta(y)\le 1$, $\int_{\RR^3}\delta(y)dy=1$ and $supp\ \delta\subset \{y\in \RR^3: |y|\le 1\}$. Then $|\nabla\psi|\le \frac{C}{\epsilon}$ for some constant $C$; indeed, writing $\psi(x)=\int_{\RR^3}sgn(w(z))\epsilon^{-3}\delta(\frac{x-z}{\epsilon})dz$, we have $|\nabla\psi(x)|\le \epsilon^{-4}\int_{\RR^3}|(\nabla\delta)(\frac{x-z}{\epsilon})|dz=\frac{1}{\epsilon}\int_{\RR^3}|\nabla\delta(y)|dy$. Moreover, \begin{align}\label{7.3} &|\int_{\RR^3}w(x)\psi(x)dx-\int_{\RR^3}|w(x)|dx|\notag\\ &=|\int_{\RR^3}\int_{\RR^3}(w(x)-w(x-\epsilon y))sgn(w(x-\epsilon y))\delta(y)dydx|\notag\\ &\le \sup_{|y|\le 1}\int_{\RR^3}|w(x)-w(x-\epsilon y)|dx\notag\\ &\le \sup_{|y|\le 1}\int_{\RR^3}|\rho(x, t)-\rho(x-\epsilon y, t)|dx+\sup_{|y|\le 1}\int_{\RR^3}|\rho(x, t+h)-\rho(x-\epsilon y, t+h)|dx.
\end{align} Therefore, \begin{align}\label{7.4} &\int_{\RR^3}|w(x)|dx\notag\\ & \le \sup_{|y|\le 1}\int_{\RR^3}|\rho(x, t)-\rho(x-\epsilon y, t)|dx+\sup_{|y|\le 1}\int_{\RR^3}|\rho(x, t+h)-\rho(x-\epsilon y, t+h)|dx\notag\\ &+\frac{C|h|}{\epsilon}||{\bf m}||_{L^{\infty}([0, T]; L^1(\RR^3))}.\end{align} Letting $h\to 0$ first in (\ref{7.4}) and then $\epsilon\to 0$, (\ref{ca1}) follows from (\ref{cb1}). \end{proof} \centerline{\bf Acknowledgments} Luo was supported in part by the National Science Foundation under Grant DMS-0606853. Smoller was supported in part by the National Science Foundation under Grant DMS-0603754. We thank Wen-Qing Xu for a helpful discussion. Luo is grateful to Cathleen Morawetz for her encouragement and interest in this work. \bibliographystyle{plain}
\section{Introduction} In this paper, we study a generalized self-dual Chern-Simons-Higgs gauge theory introduced by Burzlaff, Chakrabarti, and Tchrakian in \cite{BCT}. The Lagrangian density of the model in $(2+1)$ dimensions is \[\mathcal{L}=\sqrt{2}\e\epsilon^{\mu\nu\alpha}\Big[A_\alpha-2i\Big(1-\frac{|\phi|^2}{2}\Big)\phi\overline{D_\mu\phi}\Big]F_{\mu\nu}+2(1-|\phi|^2)^2|D_{\mu}\phi|^2-V,\] where $A=(A_0,A_1,A_2)$ is a 3-vector gauge field, $F_{\alpha\beta}=\frac{\partial}{\partial x_\alpha} A_\beta-\frac{\partial}{\partial x_\beta}A_\alpha$ is the corresponding curvature, $\phi=\phi_1+i\phi_2$ is a complex scalar field called the Higgs field, $D_j=\frac{\partial}{\partial x_{j}}-iA_{j}$, $j=0,1,2$, is the gauge covariant derivative associated with $A$, $\alpha, \beta, \mu, \nu=0, 1, 2,$ $\e>0$ is a constant referred to as the Chern-Simons coupling parameter, $\epsilon^{\alpha\beta\gamma}$ is the Levi-Civita totally skew-symmetric tensor with $\epsilon^{012}=1$, and $V$ is the Higgs potential function. The corresponding Bogomol'nyi equations for the unknowns $\phi$, $A$ defined on $\RN$ are \begin{equation*} \begin{aligned} \left\{ \begin{array}{ll} & D_1\phi=iD_2\phi, \\& (1-|\phi|^2)F_{12}=i(D_1\phi\overline{D_2\phi}-\overline{D_1\phi}D_2\phi)+\frac{1}{2\e^2}|\phi|^2(1-|\phi|^2)^2. \end{array}\right.\end{aligned} \end{equation*} In view of Jaffe-Taubes' argument in \cite{JT}, we introduce the unknown $v$ defined by \[\phi(z)=\exp\Big(\frac{v(x)}{2}+i\sum_{j=1}^N\mbox{arg}(z-p_j)\Big), \ z=x_1+ix_2\in\mathbb{C},\]where $\{p_j\}_{j=1}^N$ are the zeros of $\phi(z)$, counted with multiplicity. Then we obtain the following reduced equation: \begin{equation} (1-e^{v})\Delta v-e^{v(x)}|\nabla v|^2 +\frac{1}{\e^{2}}e^{v(x)}\left( 1-e^{v(x)}\right)^2 =4\p {\displaystyle \sum \limits_{j=1}^{N}} \delta_{p_j}.\label{01} \end{equation} Here $p_j$ is called a vortex point.
The equation \eqref{01} can be considered in $\RN$ or on a two-dimensional flat torus $\Omega$, due to the theory suggested by 't Hooft in \cite{'tH}. We fix $\e>0$ for a while. In $\RN$, a solution $v(x)$ is called a topological solution if $\lim_{|x|\to+\infty}v(x)=0$, and is called a non-topological solution if $\lim_{|x|\to+\infty}v(x)=-\infty$. Yang in \cite{Yang} found topological multi-vortex solutions of \eqref{01} by using the variational structure of the elliptic problem to produce an iteration scheme that yields the desired solution. Later, Chae and Imanuvilov in \cite{CI} constructed a non-topological multi-vortex solution $v(x)$ of \eqref{01} satisfying $v(x)=-(2N+4+\sigma)\ln|x|+O(1)$ as $|x|\to+\infty$ for some $\sigma>0$. To obtain the non-topological solution of \eqref{01}, the authors in \cite{CI} observed that \eqref{01} is a perturbation of the Liouville equation and applied the arguments developed in \cite{CI0}. In \cite{CI0}, Chae and Imanuvilov showed the existence of non-topological multi-vortex solutions of the relativistic Chern-Simons-Higgs model (see \eqref{sceq} below), using the implicit function theorem argument with the Lyapunov-Schmidt reduction method. Now we consider the equation \eqref{01} on the flat two-torus $\Omega$ as $\e$ goes to $0$.
Since $(1-e^{v})\Delta v-e^{v(x)}|\nabla v|^2 =\mbox{div}((1-e^{v})\nabla v)$, any solution $v(x)$ to \eqref{01} satisfies \begin{equation}\label{uniforml1}\int_{\Omega}\frac{1}{\e^2}e^{v(x)}(1-e^{v(x)})^2dx=4\pi N.\end{equation} Moreover, from the maximum principle (see also \cite[Lemma 3.1]{H}), we note that any solution $v(x)$ to \eqref{01} satisfies \begin{equation}\label{negative}v(x)\le0\ \ \textrm{on}\ \ \Omega.\end{equation} For the well-known Chern-Simons-Higgs equation with $\e\to0$ (see \eqref{sceq} below), the corresponding properties \eqref{uniforml1} and \eqref{negative} were important for classifying the solutions according to their asymptotic behavior as $\e\to0$ (see \cite{DJLPW,CK}). So it is natural to expect that the solutions to \eqref{01} can also be classified according to their asymptotic behavior. Now we have the following theorem: \begin{theorem}\label{Lp} For any given vortex configuration $\{p_j\}$, let $v_{\e}$ be a sequence of solutions of (\ref{01}). Then, up to a subsequence, one of the following holds true: (i) $v_{\e}\to0$ a.e. as $\varepsilon\to0$. Moreover, $v_{\e}\to0$ in $L^p(\Omega)$ for any $p>1$ (topological type); (ii) $v_{\e}\to-\infty$ a.e. as $\varepsilon\to0$ (non-topological type). \end{theorem} Recently, Han in \cite[Theorem 3.1]{H} proved the existence of a critical value of the coupling parameter $\e_c=\e_c(p_1,...,p_N)>0$ such that there is a solution to \eqref{01} on $\Omega$ if and only if $0<\e\le\e_c$. He obtained a maximal solution $v_{\e,M}$ to \eqref{01} by using a super-/sub-solution method (see \cite[Theorem 2.1]{H}). Here the maximal solution means that $v_{\e, M}\ge v_\e$ on $\Omega$ for any solution $v_\e$ to \eqref{01}. In \cite[Lemma 3.5]{H}, he also showed that the maximal solutions $v_{\e,M}$ of \eqref{01} form a monotone family in the sense that $v_{\e_1,M}>v_{\e_2,M}$ whenever $0<\e_1 < \e_2 < \e_c$.
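For the reader's convenience, \eqref{uniforml1} is simply the integrated form of \eqref{01} over the boundaryless torus:
\[
0=\int_{\Omega}\mbox{div}((1-e^{v})\nabla v)dx
=4\pi\sum_{j=1}^{N}\int_{\Omega}\delta_{p_j}dx-\int_{\Omega}\frac{1}{\e^{2}}e^{v}(1-e^{v})^{2}dx
=4\pi N-\int_{\Omega}\frac{1}{\e^{2}}e^{v}(1-e^{v})^{2}dx.
\]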
Therefore, in view of Theorem \ref{Lp}, the maximal solution obtained in \cite{H} is a topological solution. At this point, one might ask about the existence of non-topological solutions to \eqref{01} on $\Omega$. In this paper, we obtain the affirmative answer to this question by constructing a bubbling non-topological solution $v_\e$ to \eqref{01} on $\Omega$ satisfying \begin{equation}\label{finalgoal} \lim_{\e\to0} \sup_{\Omega}v_{\e}=-\infty,\ \frac{e^{v_{\e}}}{\int_{\Omega}e^{v_{\e}}dx }\rightarrow \frac{1}{k}\sum_{i=1}^{k}\delta_{q_{i}}, \ q_i\in\Omega\setminus[\cup_{i=1}^{2k}\{p_{i}\}], \end{equation} in the sense of measure as $\e\to0$.\\ For the construction of a bubbling solution $v_\e$ to \eqref{01} on $\Omega$ satisfying \eqref{finalgoal}, we assume that $N=2k\in2\mathbb{N}$. We note that since the equation \eqref{01} is quasi-linear, it is not easy to deal with it directly. As in \cite{H,TY,Yang}, we introduce a new dependent variable $u$ defined by \[u=F(v):=1+v-e^v.\] We have that $F'(v)=1-e^v$ and $F''(v)=-e^v$, which implies $F$ is strictly increasing and invertible over $(-\infty,0)$. Let $G$ be the inverse function of $F$ over $(-\infty,0]$. Then we see that $G(u)=v=G(1+v-e^v)$. Let $u_\e=F(v_\e)=1+v_\e-e^{v_\e}$. Then $v_\e$ satisfies \eqref{01} if and only if $u_\e$ satisfies \begin{equation} \Delta u_\e+\frac{1}{\varepsilon^{2}}e^{G(u_\e(x))}\left( 1-e^{G(u_\e(x))}\right)^2=4\p {\displaystyle \sum \limits_{i=1}^{2k}} \delta_{p_i}. \label{001} \end{equation} We remark that if $\lim_{\e\to0} \sup_{\Omega}u_{\e}=-\infty$, then the equation \eqref{001} would be a perturbation of bubbling solutions $W_\e$ of the following Chern-Simons-Higgs equation: \begin{equation} \Delta W_\e+\frac{1}{\varepsilon^{2}}e^{W_\e(x)}\left( 1-e^{W_\e(x)}\right) =4\p {\displaystyle \sum \limits_{i=1}^{2k}} \delta_{p_i}\ \ \textrm{on}\ \ \Omega.
\label{sceq} \end{equation} The relativistic Chern-Simons-Higgs model was proposed in \cite{HKP} and independently in \cite{JW} to describe vortices in high temperature superconductivity. The above equation was derived from the Euler-Lagrange equations of the CSH model via a vortex ansatz, see \cite{HKP, JW, T2, Y}. The equation \eqref{sceq} has been extensively studied not only in a flat torus $\Omega$ but also in the whole $\RN$. We refer the readers to \cite{CY,CI0, CFL, Choe0, Choe, CK,LY0, LY1, NT, NT1, SY0, T1,T2} and references therein. Among them, in a recent paper \cite{LY1}, Lin and Yan succeeded in constructing bubbling non-topological solutions to \eqref{sceq} on $\Omega$. Compared to \eqref{sceq}, our equation has a difficulty caused by the nonlinear terms involving the implicit function $G$. Therefore, to choose a suitable approximate solution, we should investigate the behavior of the function $G$ near $-\infty$ and carry out the analysis carefully. To state our result precisely, we introduce the following notations:\\ Let $G$ be the Green function satisfying \begin{equation*} -\Delta_x G(x,y)=\delta_y -\frac{1}{|\Omega|} \quad\mbox{for }~ x, y\in \Omega, \quad\mbox{and}\quad \int_\Omega G(x,y)dx=0. \end{equation*} We let $\gamma(x,y)=G(x,y)+\frac{1}{2\pi}\ln|x-y|$ be the regular part of the Green function $G(x,y)$, and \begin{equation*} u_0(x) \equiv -4\pi\sum^{N}_{i=1}G(x,p_{i}). \end{equation*} Then $u_0$ satisfies the following problem \begin{equation*} \begin{aligned} \left\{ \begin{array}{ll} &\Delta u_0 =-\frac{4\pi N}{|\Omega|}+4\pi\sum_{i=1}^{N}\delta_{p_i},\\ &\int_{\Omega}u_0dx=0. \end{array}\right.\end{aligned} \end{equation*}We recall that $N=2k$. We denote $\Omega^{(k)}:=\{(x_1,...,x_k)\ |\ x_i\in\Omega\setminus[\cup_{i=1}^{2k}\{p_{i}\}]\ \textrm{for}\ 1\le i\le k,\ x_i\neq x_j \ \textrm{if}\ i\neq j\}$.
Let ${\bf{q}}= \left( q_{1},...,q_{k}\right)\in\Omega^{(k)}$ be the critical point of the following function: \[ G^{\ast}\left( \bf{q}\right) :=\sum_{i=1}^{k}u_0\left( q_{i}\right) +8\pi \sum_{i\neq j}G\left( q_{i},q_{j}\right) . \] We define \[ D\left( \bf{q}\right) :=\lim_{r\rightarrow0}\left( \sum_{i=1}^{k}\rho_{i}\left( \int_{\Omega_{i}\setminus B_{r}\left( q_{i}\right) }\frac{e^{f_{{\bf{q}},i}}-1}{\left \vert y-q_{i}\right \vert ^{4}}dy-\int_{\mathbb{R}^2\setminus \Omega_{i}}\frac {1}{\left \vert y-q_{i}\right \vert ^{4}}dy\right) \right) , \] where $\Omega_{i}$ is any open set satisfying $\Omega_{i}\cap \Omega _{j}=\emptyset$ if $i\neq j$, $\cup_{i=1}^{k}\bar{\Omega}_{i}=\bar{\Omega}$, $B_{d_{i}}\left( q_{i}\right) \subset\subset \Omega_{i}$, $i=1,...,k$, \[ f_{{\bf{q}},i}\left( y\right) :=8\pi \left( \gamma \left( y,q_{i}\right) -\gamma \left( q_{i},q_{i}\right) +\sum_{j\neq i}\left( G\left( y,q_{j}\right) -G\left( q_{i},q_{j}\right) \right) \right) +u_0\left( y\right) -u_0\left( q_{i}\right), \] and \[ \rho_{i}=e^{8\pi \left( \gamma \left( q_{i},q_{i}\right) +\sum_{j\neq i}G\left( q_{i},q_{j}\right) \right) +u_0\left( q_{i}\right) }. \] At this point, we introduce our main result. \begin{theorem}\label{blmix} Let ${\bf{q}}=(q_1,...,q_k)\in\Omega^{(k)}$ be a non-degenerate critical point of $G^{\ast}\left( \bf{q} \right) $. Suppose that $D\left( \bf{q}\right) <0$. Then for $\varepsilon>0$ small, there exists a non-topological solution $v_{\varepsilon}$ to \eqref{01} such that \[\lim_{\e\to0} \sup_{\Omega}v_{\e}=-\infty,\ \ \frac{e^{v_{\e}}}{\int_{\Omega}e^{v_{\e}}dx }\rightarrow \frac{1}{k}\sum_{i=1}^{k}\delta_{q_{i}}\text{ in the sense of measure as }\e\to0. \] \end{theorem}To the best of our knowledge, Theorem \ref{blmix} is the first result for the existence of non-topological solutions to \eqref{01} on $\Omega$.
We remark that in our paper, a limiting equation for \eqref{01} is the Liouville equation since $\lim_{\e\to0} \sup_{\Omega}v_{\e}=-\infty$. It would be an interesting problem to find other types of non-topological solutions to \eqref{1}, for example, satisfying $\sup_\Omega v_{\e}\ge -c_0>-\infty$ for some constant $c_0>0$. The organization of this paper is as follows. In Section 2, we prove Theorem \ref{Lp}. In Section 3, to prove Theorem \ref{blmix}, we present some preliminary results and discuss the invertibility of a linearized operator. Moreover, we find a suitable approximate solution and complete the proof of Theorem \ref{blmix}. \section{Proof of Theorem \ref{Lp}} \emph{Proof of Theorem \ref{Lp}:}\\ Our arguments will be based on \cite[Theorem 3.1]{DJLPW}. We consider the equation \eqref{001}, which is equivalent to \eqref{01}. Let $\{u_\e\}$ be a sequence of solutions of \eqref{001}. Let $d_{\varepsilon}=\frac{1}{|\Omega|}\int_{\Omega} u_{\e} dx$ and $u_{\e}=w_{\varepsilon}+u_{0} +d_{\varepsilon}$. Then $w_\e$ satisfies \begin{equation} \begin{aligned}\label{wemain} \Delta w_{\e}+\frac{1}{\varepsilon^2} e^{G(u_{\e})}(1-e^{G(u_{\e})})^2=\frac{4\pi N}{|\Omega|}\quad\mbox{on }~ \Omega, \end{aligned} \end{equation} and $\int_{\Omega} w_{\varepsilon} dx=0$.\par We claim that there exists $C_q>0$ such that $\|\nabla w_{\varepsilon}\|_{L^q(\Omega)}\le C_q$ for any $q\in(1,2)$. Let $q'=\frac{q}{q-1}>2$. Then \begin{equation}\begin{aligned}\label{normexpression} &\|\nabla w_{\varepsilon}\|_{L^q(\Omega)} \\&\le\sup\Big\{\Big|\int_{\Omega}\nabla w_{\varepsilon}\nabla\phi dx\Big|\ \Big|\ \ \phi\in W^{1,q'}(\Omega),\ \int_{\Omega}\phi dx=0,\ \|\phi\|_{W^{1,q'}(\Omega)}=1\Big\}. \end{aligned}\end{equation} By Lemma 7.16 in \cite{GT}, if $\int_{\Omega}\phi dx=0$, then there exist $c,\ C>0$ such that \begin{equation}\label{GTineq} |\phi(x)|\le c\int_{\Omega}\frac{|\nabla\phi|}{|x-y|}dy\le C\|\nabla\phi\|_{L^{q'}(\Omega)}\ \ \textrm{for}\ x\in\Omega.
\end{equation} Thus in view of (\ref{wemain}), (\ref{GTineq}), and \eqref{uniforml1}, we see that there exists a constant $C>0$, independent of $\phi$ satisfying $\int_{\Omega}\phi dx=0$ and $\|\phi\|_{W^{1,q'}(\Omega)}=1$, such that \begin{equation} \begin{aligned} \Big|\int_{\Omega}\nabla w_{\varepsilon}\nabla\phi dx\Big|&=\Big|\int_{\Omega}\Delta w_{\varepsilon}\phi dx\Big| \\&\le\|\phi\|_{L^\infty(\Omega)}\Big(\int_{\Omega} \frac{1}{\varepsilon^2} e^{G(u_{\varepsilon})}(1-e^{G(u_{\e})})^2dx+4\pi N \Big)\le C. \end{aligned} \end{equation} Now using (\ref{normexpression}), we complete the proof of our claim.\par In view of the Poincar\'{e} inequality, we also have $\|w_{\varepsilon}\|_{L^q(\Omega)}\le c\|\nabla w_{\varepsilon}\|_{L^q(\Omega)}$. Then there exist $w \in W^{1,q}(\Omega)$ and $p>1$ such that, as $\varepsilon\to0$, \begin{equation}\label{wecone} w_{\varepsilon}\rightharpoonup w \ \ \textrm{weakly in}\ W^{1,q}(\Omega),\ w_{\varepsilon}\to w \ \ \textrm{strongly in}\ L^p(\Omega),\ w_{\varepsilon}\to w \ \textrm{a.e.}. \end{equation} Since $v_{\e}\le0$ on $\Omega$, we see that $u_{\e}\le0$ and $0\le e^{d_{\varepsilon}}\le1$. Then there exists $A \ge0$ such that $\limsup_{\varepsilon\to0}e^{d_{\varepsilon}}=A $. If $A=0$, that is, $\lim_{\varepsilon\to0}d_{\varepsilon}=-\infty$, then by using \eqref{wecone}, we get that $u_{\e}=w_{\varepsilon}+d_{\varepsilon}+u_{0}\to -\infty$ a.e. in $\Omega$.\\ If $A>0$, then by using Fatou's lemma, \eqref{uniforml1} and (\ref{wecone}), we see that \begin{equation*} \begin{aligned} 0=\lim_{\varepsilon\to0}4\pi N \varepsilon^2= \liminf_{\varepsilon\to0}\int_{\Omega}e^{G(u_{\varepsilon})}(1-e^{G(u_{\e})})^2dx \ge\int_{\Omega} e^{G(w+u_0+\ln A)}(1- e^{G(w +u_{0}+\ln A)})^2dx, \end{aligned} \end{equation*} which implies that $G(w+u_0+\ln A)=-\infty$ or $G(w+u_0+\ln A)=0$ a.e. in $\Omega$. Since $G$ is strictly increasing on $(-\infty,0)$ and $G(0)=0$, we see that $w+u_0+\ln A=-\infty$ or $w+u_0+\ln A=0$ a.e. in $\Omega$. By $A>0$ and $w, u_0\in L^p(\Omega)$, we have $w+u_0+\ln A=0$ a.e.
in $\Omega$. From $\int_{\Omega} (w+u_0)dx=0$, we see that $A= 1$, and $w+u_0=0$ a.e. in $\Omega$. By using \eqref{wecone}, we get that $u_{\e}=w_{\varepsilon}+d_{\varepsilon}+u_{0}\to w +\ln A +u_{0}=0$ a.e. in $\Omega$ and $u_{\e}\to0$ in $L^p(\Omega)$ for any $p>1$ (since $q\in(1,2)$ in \eqref{normexpression} can be arbitrary). Now we complete the proof of Theorem \ref{Lp}. \hfill$\Box$ \section{Existence of bubbling non-topological solutions} In this section, we want to construct a bubbling non-topological solution $v_\e$ to \eqref{01} satisfying $\lim_{\e\to0} \sup_{\Omega}v_{\e}=-\infty$, and \[ \frac{e^{v_{\e}}}{\int_{\Omega}e^{v_{\e}}dx }\rightarrow \frac{1}{k}\sum_{i=1}^{k}\delta_{q_{i}}\text{ in the sense of measure as }\e\to0, \] where ${\bf{q}}=(q_1,...,q_k)\in\Omega^{(k)}$ is a non-degenerate critical point of $G^{\ast}\left( \bf{q} \right) $ and $D\left( \bf{q}\right) <0$.\\ Without loss of generality, from now on, we assume that $|\Omega|=1$. We note that $v_\e$ satisfies \eqref{01} if and only if $u_\e=F(v_\e)=1+v_\e-e^{v_\e}$ satisfies \begin{equation} \Delta u_\e+\frac{1}{\varepsilon^{2}}e^{G(u_\e(x))}\left( 1-e^{G(u_\e(x))}\right)^2 =4\p {\displaystyle \sum \limits_{i=1}^{2k}} \delta_{p_i}. \label{1} \end{equation} As we mentioned in the introduction, if $\lim_{\e\to0}\sup_{\Omega}u_{\varepsilon}=-\infty$, then $u_{\varepsilon}$ would be related to the following Chern-Simons-Higgs equation \[ \Delta W_\e+\frac{1}{{\varepsilon}^{2}}e^{W_\e (y)}\left( 1-e^{W_\e(y)}\right) =4\pi\sum_{j=1}^{2k} \delta_{p_{j}}.
\] In \cite{LY1}, bubbling solutions for the above Chern-Simons-Higgs equation have been constructed in the following form: \[ \begin{array}[c]{ccc} W_{\varepsilon}(y) & \simeq & u_0(y)+w_{\bf{x},\mu}^{\ast}(y)-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz +c\left( w_{\bf{x},\mu}\right) \end{array} \] where ${\bf{x}}= ( x_{1},...,x_{k} ),$ $x_i\in\Omega$, $\mu\in \lbrack \frac{\beta_{0}}{\sqrt{{\varepsilon}}},\frac{\beta_{1}}{\sqrt{{\varepsilon}}}]\ \textrm{ for some}\ 0<\beta_{0}\ll1,\ \beta_{1}\gg1,$ \[ \rho_{i}:=e^{8\pi \gamma \left( x_{i},x_{i}\right) +8\pi \sum_{j\neq i}G\left( x_{j},x_{i}\right) +u_0\left( x_{i}\right) },\] \[ \left( \mu_{1},...,\mu_{k}\right):=\Big(\mu,\sqrt{\frac{\rho_{1}}{\rho_{2}}}\mu,...,\sqrt{\frac{\rho_{1}}{\rho_{k}}}\mu\Big), \] and $d>0$ is a fixed small constant, $d_{i}^{2}:=d-1/\mu_{i}^{2},\ u_{x_{i},\mu_{i}}(y):=\ln \frac{8\mu_{i}^{2}}{\left( 1+\mu_{i}^{2}\left \vert y-x_{i}\right \vert ^{2}\right) ^{2}},$ \[ \begin{array}[c]{rcl} w_{\bf{x},\mu}^{\ast}\left( y\right) & := & \sum_{i=1}^{k}w_{x_{i},\mu_{i}}^{\ast}\left( y\right) ,\\ w_{x_{i},\mu_{i}}^{\ast}\left( y\right) & := & \left \{ \begin{array}[c]{ll} u_{x_{i},\mu_{i}}\left( y\right) +8\pi \gamma \left( y,x_{i}\right) \left( 1-\frac{1}{d \mu_{i}^{2}}\right) , & y\in B_{d_{i}}\left( x_{i}\right) ,\\ u_{0,\mu_{i}}\left( d_{i}\right) +8\pi \left( G\left( y,x_{i}\right) -\frac{1}{2\pi}\ln \frac{1}{d_{i}}\right) \left( 1-\frac{1}{d \mu_{i}^{2}}\right) , & y\in \Omega\setminus B_{d_{i}}\left( x_{i}\right) , \end{array} \right. \end{array} \] \[w_{\bf{x},\mu}(y):=w_{\bf{x},\mu}^{\ast}(y)-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz,\] \[ c\left( w_{\bf{x},\mu}\right):=\ln\frac{16k\pi {\varepsilon}^{2}}{\int_{\Omega }e^{u_0+w_{\bf{x},\mu}}dy\left( 1+\sqrt{1-32k\pi {\varepsilon}^{2}\frac{\int_{\Omega }e^{2( u_0+w_{\bf{x},\mu}) }dy}{\left( \int_{\Omega}e^{u_0 +w_{\bf{x},\mu}}dy\right) ^{2}}}\right) }.
\] We note that $u_{x_{i},\mu_{i}}$ satisfies \[ \left \{ \begin{array}[c]{rcl} -\Delta u_{x_{i},\mu_{i}}\left( y\right) & = & e^{u_{x_{i},\mu_{i}}\left( y\right) }\text{ in }\mathbb{R}^2,\\ \int_{\mathbb{R}^2}e^{u_{x_{i},\mu_{i}}\left( y\right) } dy& = & 8\pi. \end{array} \right. \] We denote \[\tilde{W}_{\bf{x},\mu}(y):=w_{\bf{x},\mu}^{\ast}(y)-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz +c\left( w_{\bf{x},\mu}\right).\] We want to find a solution $u_\e$ to \eqref{1} in the following form: \begin{equation}\label{approximatesol} \begin{aligned} u_\e(y)&=1+ u_0(y)+w_{\bf{x},\mu}^{\ast}(y)-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz +c\left( w_{\bf{x},\mu}\right) +\eta_{{\bf{x}},\mu}(y)\\&=1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}, \end{aligned} \end{equation} where $\eta_{{\bf{x}},\mu}$ is a perturbation term. To find $\eta_{{\bf{x}},\mu}$ which makes $u_\e$ of the form \eqref{approximatesol} a solution to \eqref{1}, we consider the following linearized operator \begin{equation*} \begin{aligned} L_{{\bf{x}},\mu}\left( \eta\right) := \left( \Delta+h_{\mu}\left( y\right) \right) \eta\quad \textrm{with}\ h_{\mu}\left( y\right) :=\sum_{i=1}^{k}1_{B_{d_{i}}\left( x_{i}\right) }e^{u_{x_{i},\mu_{i}}\left( y\right)}. \end{aligned} \end{equation*} We see that $u_\e$ is a solution to \eqref{1} if $\eta_{{\bf{x}},\mu}$ satisfies \begin{equation}\label{err} L_{\bf{x},\mu}\eta_{{\bf{x}},\mu}= g_{{\bf{x}},\mu}\left( \eta_{{\bf{x}},\mu}\right), \end{equation}where \begin{equation*} \begin{aligned} &g_{{\bf{x}},\mu}(\eta_{{\bf{x}},\mu}) := h_{\mu}\left( y\right) \eta_{{\bf{x}},\mu}+ {8k\pi}-\Delta \tilde{W}_{\bf{x},\mu}-\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}(1- e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2 . \end{aligned} \end{equation*} To show the invertibility of the linear operator $L_{\bf{x},\mu}$, we need to introduce suitable function spaces.
For a fixed small constant $\alpha\in(0,\frac{1}{2})$, we define \[ \rho \left( y\right) =\left( 1+\left \vert y\right \vert \right) ^{1+\frac{\alpha}{2}},\ \hat{\rho}\left( y\right) =\frac{1}{\left( 1+\left \vert y\right \vert \right) \left( \ln \left( 2+\left \vert y\right \vert \right) \right) ^{1+\frac{\alpha}{2}}}. \] Let $\Omega^{\prime}:=\cup_{i=1}^{k}B_{d_{i}}\left( x_{i}\right) $ and $\tilde{\xi}_{i}\left( y\right) :=\xi \left( x_{i}+\mu^{-1}y\right)$. We say that $\xi\in \mathbb{X}_{\alpha,{\bf{x}},\mu}$ if \begin{align*}\left \Vert \xi \right \Vert _{\mathbb{X}_{\alpha,{\bf{x}},\mu}}^{2}&:=\sum_{i=1}^{k}\left( \left \Vert \Delta \tilde{\xi}_{i}\rho \right \Vert _{L^{2}\left( B_{2d_{i}\mu_{i}}(0)\right) }^{2}+\left \Vert \tilde{\xi}_{i}\hat{\rho}\right \Vert _{L^{2}\left( B_{2d_{i}\mu_{i}}(0)\right) }^{2}\right) +\left \Vert \Delta \xi \right \Vert _{L^{2}\left( \Omega \setminus \Omega^{\prime}\right) }^{2}+\left \Vert \xi \right \Vert _{L^{2}\left( \Omega \setminus \Omega^{\prime}\right) }^{2}\\&<+\infty,\end{align*} and $\xi\in \mathbb{Y}_{\alpha,{\bf{x}},\mu}$ if \[\left \Vert \xi \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}^{2}:=\sum_{i=1}^{k}\frac{1}{\mu_{i}^{4}}\left \Vert \tilde{\xi}_{i}\rho \right \Vert _{L^{2}\left( B_{2d_{i}\mu_{i}}(0)\right) }^{2}+\left \Vert \xi \right \Vert _{L^{2}\left( \Omega \setminus \Omega^{\prime}\right) }^{2}<+\infty. \] Let $\chi_{i}\left( \left \vert y\right \vert \right) $ be a smooth function satisfying $\chi_{i}=1$ in $B_{d_{i}}\left( 0\right) $, $\chi_{i}=0$ in $\mathbb{R}^2\setminus B_{2d_{i}}\left( 0\right) $, and $0\leq \chi_{i}\leq1$.
We use the following notations \begin{align*} Y_{{\bf{x}},\mu,0} & :=-\frac{1}{\mu_{1}}+\sum_{i=1}^{k}\sqrt{\frac {\rho_{1}}{\rho_{i}}}\frac{2\chi_{i}\left( y-x_{i}\right) }{\mu_{i}\left( 1+\mu_{i}^{2}\left \vert y-x_{i}\right \vert ^{2}\right) },\\ Y_{x_{i},\mu_{i},j} & :=\chi_{i}\left( y-x_{i}\right) \frac{\mu_{i}^{2}\left( y_{j}-x_{ij}\right) }{1+\mu_{i}^{2}\left \vert y-x_{i}\right \vert ^{2}},\text{ }i=1,...,k,\text{ }j=1,2, \end{align*}where $x_i=(x_{i1},x_{i2})$ and $y=(y_1,y_2)$. The following estimates for $Y_{{\bf{x}},\mu,0}$, $Y_{x_{i},\mu_{i},j}$ are known: \begin{lemma}\label{kernelapp}\cite{LY1}\[ L_{{\bf{x}},\mu}Y_{{\bf{x}},\mu,0}=O\left( \mu^{-3}\right) \text{, }L_{{\bf{x}},\mu}Y_{x_{i},\mu_{i},j}=O\left( 1\right) \text{, }i=1,...,k,\text{ }j=1,2.\] \end{lemma} \begin{proof} See the estimation (3.8) in \cite{LY1}. \end{proof} From Lemma \ref{kernelapp}, we see that $Y_{{\bf{x}},\mu,0}$, $Y_{x_{i},\mu_{i},j}$ are the approximate kernels for $L_{{\bf{x}},\mu}$.\\ Let \[ Z_{{\bf{x}},\mu,0}=-\Delta Y_{{\bf{x}},\mu,0}+h_{\mu}\left( y\right) Y_{{\bf{x}},\mu,0}, \] and \[ Z_{x_{i},\mu_{i},j}=-\Delta Y_{x_{i},\mu_{i},j}+h_{\mu}\left( y\right) Y_{x_{i},\mu_{i},j},\text{ }i=1,...,k\text{, }j=1,2. \] We define two subspaces of $\mathbb{X}_{\alpha,{\bf{x}},\mu}$, $\mathbb{Y}_{\alpha,{\bf{x}},\mu}$ as \begin{align*} E_{{\bf{x}},\mu} & :=\{\xi \in \mathbb{X}_{\alpha,{\bf{x}},\mu}\ | \int_{\Omega}Z_{{\bf{x}},\mu,0}\xi dx=\int_{\Omega}Z_{x_{i},\mu_{i},j}\xi dx=0\text{, }i=1,...,k,\text{ }j=1,2\},\\ F_{{\bf{x}},\mu} & :=\{\xi \in \mathbb{Y}_{\alpha,{\bf{x}},\mu}\ | \int_{\Omega}Y_{{\bf{x}},\mu,0}\xi dx=\int_{\Omega}Y_{x_{i},\mu_{i},j}\xi dx=0\text{, }i=1,...,k,\text{ }j=1,2\}.
\end{align*} and the projection operator $Q_{\bf{x},\mu}:\mathbb{Y}_{\alpha,{\bf{x}},\mu }\rightarrow F_{\bf{x},\mu}$ by \[ Q_{\bf{x},\mu}\xi=\xi-c_{0}Z_{{\bf{x}},\mu,0}-\sum_{j=1}^{2}\sum _{i=1}^{k}c_{ij}Z_{x_{i},\mu_{i},j}, \]where $c_{0},\ c_{ij}$ are chosen so that $Q_{\bf{x},\mu}\xi\in F_{\bf{x},\mu}$. For the projection operator $Q_{\bf{x},\mu}$, we have the following result. \begin{lemma}\cite[Lemma 3.1]{LY1}\label{pp1} There is a constant $C>0$, independent of $\bf{x}$ and $\mu$, such that \[\|Q_{{\bf{x}},\mu}u\|_{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}\le C\|u\|_{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}.\] \end{lemma} The following lemma will be useful for our arguments. \begin{lemma}\label{tildeU} $\frac{1}{{\e}^2}e^{\tilde{W}_{\bf{x},\mu}}=O\Big(\sum_{i=1}^ke^{u_{x_i,\mu_i}}1_{B_{d_i}(x_i)}\Big)+O({\e})\Big(1-\sum_{i=1}^k1_{B_{d_i}(x_i)}\Big).$ \end{lemma} \begin{proof} On $B_{d_i}(x_i)$, we see that \begin{equation*}\begin{aligned}\frac{1}{{\e}^2}e^{\tilde{W}_{\bf{x},\mu}}&=\frac{1}{{\e}^2}e^{w_{\bf{x},\mu}^{\ast}-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz +c\left( w_{\bf{x},\mu}\right)}\\&=\frac{1}{{\e}^2}e^{u_{x_{i},\mu_{i}}\left( y\right) +\sum_{j\neq i}u_{0,\mu_{j}}\left( d_j\right)+\Gamma_i-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz +c\left( w_{\bf{x},\mu}\right) },\end{aligned}\end{equation*} where $\Gamma_i:=8\pi\Big[\gamma \left( y,x_{i}\right) \left( 1-\frac{1}{{d} \mu_{i}^{2}}\right) +\sum_{j\neq i}( {G}_{{\bf{x}},\mu}\left( y,x_{j}\right)+\frac{1}{2\pi}\ln d_j) \left( 1-\frac{1}{{d} \mu_{j}^{2}}\right) \Big].$ In the proof of \cite[Proposition 2.1]{LY1}, the following estimates were obtained (see (2.13) and (2.22) in \cite{LY1}): \[-\int_\Omega w^*_{x_i,\mu_i}(y)dy=2\ln\mu_i+O(1), \quad c(w_{\bf{x},\mu})=-6\ln\mu+O(1).\] Moreover, $u_{0,\mu_{j}}(d_j)=O(\ln\frac{1}{\mu_j^2})$ on $\Omega\setminus B_{d_j}(x_j)$ and $\mu_i=O(\frac{1}{\sqrt{{\e}}})$ for all $i=1,...,k$.
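Collecting these facts, the powers of $\mu$ in the exponent can be counted as follows (here we use the comparability $\mu_{i}\sim \mu$ for all $i$, which is implicit in $\mu_i=O(\frac{1}{\sqrt{{\e}}})$ and $\mu\in \lbrack \frac{\beta_{0}}{\sqrt{\varepsilon}},\frac{\beta_{1}}{\sqrt{\varepsilon}}]$):
\[ \frac{1}{{\e}^{2}}=O\left( \mu^{4}\right),\quad e^{\sum_{j\neq i}u_{0,\mu_{j}}\left( d_{j}\right) }=O\left( \mu^{-2(k-1)}\right),\quad e^{-\int_{\Omega}w_{\bf{x},\mu}^{\ast}(z)dz}=O\left( \mu^{2k}\right),\quad e^{c\left( w_{\bf{x},\mu}\right) }=O\left( \mu^{-6}\right), \]
while $e^{\Gamma_{i}}=O(1)$, so the total power of $\mu$ is $4-2(k-1)+2k-6=0$.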
Thus we get that \begin{equation*}\begin{aligned}\frac{1}{{\e}^2}e^{\tilde{W}_{\bf{x},\mu}}&= \frac{1}{{\e}^2}e^{u_{x_{i},\mu_{i}}\left( y\right)}O(\mu^{-2(k-1)+2k-6})=O(e^{u_{x_{i},\mu_{i}}\left( y\right)}) \ \ \textrm{on}\ \ B_{d_i}(x_i).\end{aligned}\end{equation*} Similarly, we get that \begin{equation*}\begin{aligned}\frac{1}{{\e}^2}e^{\tilde{W}_{\bf{x},\mu}}&=O({\e})\ \ \textrm{on}\ \ \Omega\setminus[\cup_{i}B_{d_i}(x_i)].\end{aligned}\end{equation*} \end{proof} The following invertibility result for the operator $Q_{\bf{x},\mu} L_{{\bf{x}},\mu}$, obtained in \cite{LY1}, is essential for our arguments: \begin{theorem}\cite[Theorem A.2]{LY1} \label{pp2} The operator $Q_{\bf{x},\mu} L_{{\bf{x}},\mu}$ is an isomorphism from $E_{\bf{x},\mu}$ to $F_{\bf{x},\mu}$. Moreover, if $w\in E_{\bf{x},\mu}$ and $h\in F_{\bf{x},\mu}$ satisfy \[Q_{\bf{x},\mu} L_{{\bf{x}},\mu}w=h,\] then there is a constant $C>0$, independent of $\bf{x}$ and $\mu$, such that \[\|w\|_{\LI}+\|w\|_{\mathbb{X}_{\alpha,{\bf{x}},\mu}}\le C\ln\mu\|h\|_{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}.\] \end{theorem} We define $\tilde{g}_{{\bf{x}},\mu}\left( \eta\right) $ as \[ \tilde{g}_{{\bf{x}},\mu}\left( \eta\right) :=h_{\mu}\left( y\right) \eta-\frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}\left( 1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}\right) -\Delta \tilde {W}_{\bf{x},\mu}+8k\pi.
\] The function $\tilde{g}_{{\bf{x}},\mu}\left( \eta\right) $ was introduced in \cite{LY1}, and the following estimates were obtained: \begin{lemma}\cite[Proposition 3.2]{LY1}\label{estg} There is an $\varepsilon_{0}>0$, such that for each $\varepsilon \in(0,\varepsilon_{0}]$, $\bf{x}$ close to $\bf{q}$ with $\left \vert DG^{\ast }\left( {\bf{x}}\right) \right \vert \leq \frac{C}{\mu}$, and $\mu \in \lbrack \frac{\beta_{0}}{\sqrt{\varepsilon}},\frac{\beta_{1}}{\sqrt{\varepsilon}}]$, if $\eta, \eta'\in E_{{\bf{x}},\mu}$ satisfy $\|\bar{\eta}\|_{L^\infty(\Omega)}+\|\bar{\eta}\|_{\mathbb{X}_{\alpha,{\bf{x}},\mu}}\le\frac{1}{\mu}$ for $\bar{\eta}\in\{\eta, \eta'\}$, then we have \begin{equation}\label{p12} \begin{aligned}\left \Vert \tilde{g}_{{\bf{x}},\mu}\left( \eta\right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}\leq \frac {C}{\mu^{2-\frac{\alpha}{2}}},\end{aligned}\end{equation} and \begin{equation} \left \Vert \tilde{g}_{{\bf{x}},\mu}\left( \eta\right) -\tilde{g}_{{\bf{x}},\mu }\left( \eta'\right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}\leq \frac {C}{\mu }\left \Vert \eta-\eta'\right \Vert _{L^{\infty }\left( \Omega \right) }, \label{n1} \end{equation}where $C>0$ is a constant, independent of ${\bf{x}}, \mu, \eta, \eta'$. \end{lemma} \begin{proof} See \cite[(3.16)]{LY1} for the estimate \eqref{p12} and \cite[(3.21),(3.22)]{LY1} for the estimate \eqref{n1}. \end{proof} We recall that ${\bf{q}}$ is a non-degenerate critical point of $G^{\ast}$ with $D\left( \bf{q}\right) <0$. Now we have the following proposition.
\begin{proposition}\label{pp} There is an $\varepsilon_{0}>0$, such that for each $\varepsilon \in(0,\varepsilon_{0}]$, $\bf{x}$ close to $\bf{q}$ with $\left \vert DG^{\ast }\left( {\bf{x}}\right) \right \vert \leq \frac{C}{\mu}$, and $\mu \in \lbrack \frac{\beta_{0}}{\sqrt{\varepsilon}},\frac{\beta_{1}}{\sqrt{\varepsilon}}]$, there exists $\eta_{{\bf{x}},\mu}\in E_{\bf{x},\mu}$ satisfying \begin{equation}\label{fix2} Q_{\bf{x},\mu}( L_{{\bf{x}},\mu}\left( \eta_{{\bf{x}},\mu}\right)) =Q_{\bf{x},\mu}(g_{{\bf{x}},\mu}(\eta_{{\bf{x}},\mu})). \end{equation} Moreover, \begin{align*} \left \Vert \eta_{{\bf{x}},\mu}\right \Vert _{L^{\infty}}+\left \Vert \eta_{{\bf{x}},\mu }\right \Vert _{\mathbb{X}_{\alpha,{\bf{x}},\mu}} \leq \frac{C\ln \mu}{\mu^{2-\frac{\alpha}{2}}},\end{align*} where $C>0$ is independent of $\e>0$. Here $\alpha\in(0,\frac{1}{2})$ is the same constant as in $\mathbb{X}_{\alpha,{\bf{x}},\mu}$ and $\mathbb{Y}_{\alpha,{\bf{x}},\mu}$. \end{proposition} \begin{proof} Define \[ \begin{array}[c]{c} S_{\bf{x},\mu}:=\left \{ \eta \in E_{\bf{x},\mu}\ |\ \left \Vert \eta\right \Vert _{L^{\infty}\left( \Omega \right) }+\left \Vert \eta\right \Vert _{\mathbb{X}_{\alpha,{\bf{x}},\mu}}\leq \frac{1}{\mu}\right \}. \end{array} \] We denote $\left \Vert \eta\right \Vert _{S_{{\bf{x}},\mu}}:= \left \Vert \eta\right \Vert _{L^{\infty}\left( \Omega \right) }+\left \Vert \eta\right \Vert _{\mathbb{X}_{\alpha,{\bf{x}},\mu}}$ as the norm in $S_{\bf{x},\mu}$. We consider the following mapping: \[ \begin{array}[c]{c} B_{\bf{x},\mu}:\eta \rightarrow (Q_{\bf{x},\mu}L_{{\bf{x}},\mu})^{-1}[Q_{\bf{x},\mu}{g}_{{\bf{x}},\mu}\left( \eta\right)]. \end{array} \] Step 1. First, we claim that $B_{\bf{x},\mu}$ maps $S_{\bf{x},\mu}$ to $S_{\bf{x},\mu}$.
In view of Theorem \ref{pp2}, and Lemma \ref{pp1}, we have for some constant $C>0$, independent of $\e>0$, \begin{align*} \left \Vert B_{\bf{x},\mu}\left( \eta\right) \right \Vert _{S_{\bf{x},\mu}} \leq C\ln \mu \left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}. \end{align*} From the definition of $\tilde{g}_{{\bf{x}},\mu}\left( \eta\right) $ and $G(1+s-e^{s})=s$, we see that \begin{equation*} \begin{aligned} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) \\&=\frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) -\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)})^2 \\&=\frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) -\frac{1}{{\varepsilon}^{2}}e^{ u_0+\tilde{W}_{\bf{x},\mu}+\eta } (1-e^{ u_0+\tilde{W}_{\bf{x},\mu}+\eta})^2 \\& +\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})^2 \\&-\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)})^2 . 
\end{aligned}\end{equation*}If $t=1+s-e^s$, then we see that $G(t)=G(1+s-e^{s})=s$, $dt=(1-e^{s})ds$, and $\frac{d}{dt}G(t)=\frac{ds}{dt}=\frac{1}{1-e^{s}}=\frac{1}{1-e^{G(t)}}$.\\ Since $\frac{d}{dt}e^{G(t)}(1-e^{G(t)})^2=e^{G(t)}(1-e^{G(t)})(1-3e^{G(t)})\frac{d}{dt}G(t)=e^{G(t)}(1-3e^{G(t)})$, we have for some $\theta\in(0,1)$, \begin{equation*} \begin{aligned} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) =\frac{1}{{\varepsilon}^{2}}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) \\&-\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})} (1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}.\end{aligned}\end{equation*} Since $\frac{d}{dt}e^{G(t)}(1-3e^{G(t)})=\frac{e^{G(t)}(1-6e^{G(t)})}{1-e^{G(t)}}$ and $\frac{d}{dt}G(t)= \frac{1}{1-e^{G(t)}}$, we have for some $\theta'\in(\theta,1)$ and $\theta''\in(\theta',1)$, \begin{equation*} \begin{aligned} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) \\&=\frac{1}{{\varepsilon}^{2}}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) \\&-\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})} (1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} \\& +\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})} (1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} \\& -\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})} (1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} 
\\&=\frac{1}{{\varepsilon}^{2}}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) -\frac{1}{{\varepsilon}^{2}}e^{ 2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta} (1-3e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) \\& +\frac{e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta'e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})} (1-6e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})(\theta-1) e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta}}{\e^2(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})}, \end{aligned}\end{equation*} and thus \begin{equation*} \begin{aligned} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) =\frac{2e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta}}{{\varepsilon}^{2}} \\& +\frac{\{e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta'e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})}-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})}\} }{\e^2(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})}\\&\times (1-6e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})(\theta-1) e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta} \\& +\frac{ (1-6e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})(\theta-1) e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta}}{\e^2(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})} \\&=\frac{2e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta}}{{\varepsilon}^{2}} \\& +\frac{(1-6e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})(1-\theta')(\theta-1) e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta} }{\e^2(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta'' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})} \\& +\frac{ (1-6e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})(\theta-1) 
e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta}}{\e^2(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta- \theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})})}. \end{aligned}\end{equation*} By Lemma \ref{tildeU} and $G(-\infty)=-\infty$, we see that as $\e\to0$, $e^{\tilde{W}_{\bf{x},\mu}}=O(\e)$ and $u_0+\tilde{W}_{\bf{x},\mu}+\eta\to -\infty$ uniformly on $\Omega$, which implies that \begin{equation} \begin{aligned}\label{totaldiff} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) = O\Big( \frac{e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta}}{\e^2}\Big)\ \ \textrm{on}\ \ \Omega.\end{aligned}\end{equation} In view of Lemma \ref{tildeU}, we see that $e^{\tilde{W}_{\bf{x},\mu}}=O(\e^3)$ on $\Omega\setminus[\cup_{i=1}^k B_{d_i}(x_i)]$, and thus \begin{equation} \begin{aligned}\label{l1} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right)=O(\e^7) \ \textrm{on} \ \Omega\setminus[\cup_{i=1}^k B_{d_i}(x_i)]. \end{aligned}\end{equation} On $B_{d_i}(x_i)$, from Lemma \ref{tildeU}, we see that $e^{\tilde{W}_{\bf{x},\mu}}=O\Big(\frac{\mu_i^2\e^2}{(1+\mu_i^2|y-x_i|^2)^2}\Big)=O\Big(\frac{\e}{(1+\mu_i^2|y-x_i|^2)^2}\Big)$ and \begin{equation*} \begin{aligned} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) =O\Big(\frac{\e}{(1+\mu_i^2|y-x_i|^2)^6}\Big), \end{aligned}\end{equation*} which implies that \begin{equation}\label{l3} \begin{aligned} & \frac{1}{\mu_i^2} \|[ {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right)](x_i+\mu_i^{-1}y) (1+|y|)^{1+\frac{\alpha}{2}}\|_{L^2(B_{d_i\mu_i}(0))} \\& = \Big\|\frac{O(\e^2)(1+|y|)^{1+\frac{\alpha}{2}}}{(1+|y|^2)^6} \Big\|_{L^2(B_{d_i\mu_i}(0))} = O(\e^2).
\end{aligned}\end{equation} From the above arguments \eqref{l1}-\eqref{l3}, we have \begin{equation}\label{p11} \begin{aligned}\left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left( \eta\right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}\leq \frac{C}{\mu^{2}}.\end{aligned}\end{equation} Combining Lemma \ref{estg} and the estimate \eqref{p11}, we obtain \begin{equation} \left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}\leq \frac{C}{\mu^{2-\frac{\alpha}{2}}}. \label{p09} \end{equation} By \eqref{p09}, we see that for large $\mu>0$ (i.e. for small $\e>0$), $B_{\bf{x},\mu}$ maps $S_{\bf{x},\mu}$ to $S_{\bf{x},\mu}$. Step 2. Now we claim that $B_{\bf{x},\mu}$ is a contraction map. In view of Theorem \ref{pp2} and Lemma \ref{pp1}, there is a constant $C>0$, independent of $\e>0$, satisfying for any $ \eta ,\eta' \in S_{\bf{x},\mu}$, \begin{equation}\begin{aligned}\label{p06} & \left \Vert B_{\bf{x},\mu}\left( \eta\right) -B_{\bf{x},\mu}\left( \eta' \right) \right \Vert _{S_{\bf{x},\mu }} \leq C \ln \mu \left \Vert g_{\bf{x},\mu}\left( \eta\right) -g_{\bf{x},\mu}( \eta' ) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}. \end{aligned}\end{equation} To estimate $\left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) - g_{{\bf{x}},\mu}(\eta') \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}$, we observe that \begin{align*} \left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) - g_{{\bf{x}},\mu}(\eta') \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}} & \leq \left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde {g}_{{\bf{x}},\mu}\left( \eta\right) -\left( g_{{\bf{x}},\mu}(\eta') -\tilde{g}_{{\bf{x}},\mu}\left( \eta ^{\prime}\right) \right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}\\ & +\left \Vert \tilde{g}_{{\bf{x}},\mu}\left( \eta\right) -\tilde{g}_{{\bf{x}},\mu}\left( \eta'\right) \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}}.
\end{align*} We see that \begin{align*} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde {g}_{{\bf{x}},\mu}\left( \eta\right) -\left( g_{{\bf{x}},\mu}(\eta') -\tilde{g}_{{\bf{x}},\mu}\left( \eta ^{\prime}\right) \right) \\&= \frac{1}{\e^2}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta}(1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) -\frac{1}{\e^2}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\eta'}(1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta'}) \\& +\frac{1}{\e^2}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta})}(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} )})^2 \\&-\frac{1}{\e^2}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta'-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta'})}(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta'-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta'} )})^2 \\&-\frac{1}{\e^2}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)}(1- e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta )})^2 +\frac{1}{\e^2}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta')}(1- e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta' )})^2.\end{align*} Then for some numbers $\xi_0, \xi_1, \xi_2, \xi_3$ between $\eta$ and $\eta'$, and some $\theta\in(0,1)$, $\theta'\in(\theta,1)$, we have \begin{align*} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde {g}_{{\bf{x}},\mu}\left( \eta\right) -\left( g_{{\bf{x}},\mu}(\eta') -\tilde{g}_{{\bf{x}},\mu}\left( \eta ^{\prime}\right) \right) \\&= \frac{1}{\e^2}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\xi_0}(2-3e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_0})(\eta-\eta') \\& +\frac{ e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})}}{\e^2}(1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1} )})(1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})(\eta-\eta') \\&+\frac{1}{\e^2}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_2)}(1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_2)})(\eta'-\eta) \\&= \frac{1}{\e^2}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\xi_0}(2-3e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_0})(\eta-\eta') \\& +\Big\{\frac{ 
e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})}}{\e^2}(1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1} )})\\&-\frac{ e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_2)}}{\e^2}(1-3e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_2 )})\Big\}(\eta-\eta') \\&-\frac{1}{\e^2}e^{2u_0+2\tilde{W}_{\bf{x},\mu}+2\xi_1}(1-3e^{G(u_0+\tilde{W}_{\bf{x},\mu}+\xi_1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})})(\eta-\eta') \\&=O\Big(\frac{e^{2u_0+2\tilde{W}_{\bf{x},\mu}}}{\e^2}\Big)(\eta-\eta') \\&+ \frac{(1-6e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_3-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})})}{\e^2(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_3-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})})}(\eta-\eta')(\xi_1-\xi_2-e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1}) \\&\times \Big\{\frac{e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_3}-\theta e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1}}{(1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\xi_3-\theta' e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_1})})}+e^{u_0+\tilde{W}_{\bf{x},\mu}+\xi_3}\Big\} \\&=O\Big(\frac{e^{2u_0+2\tilde{W}_{\bf{x},\mu}}}{\e^2}\Big)(\eta-\eta')+ O\Big(\frac{e^{u_0+\tilde{W}_{\bf{x},\mu}}}{\e^2}\Big)(\eta-\eta')^2.\end{align*} On $\Omega\setminus [\cup B_{d_i}(x_i)]$, we see that from Lemma \ref{tildeU}, \begin{equation} \begin{aligned}\label{n2} & \| {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde {g}_{{\bf{x}},\mu}\left( \eta\right) -\left( g_{{\bf{x}},\mu}(\eta') -\tilde{g}_{{\bf{x}},\mu}\left( \eta ^{\prime}\right) \right) \|_{L^\infty(\Omega\setminus [\cup B_{d_i}(x_i)])} \\&\le O(\e)(\|\eta-\eta'\|_{L^\infty(\Omega)}+(\|\eta\|_{L^\infty(\Omega)}+\|\eta'\|_{L^\infty(\Omega)})\|\eta-\eta'\|_{L^\infty(\Omega)}) . 
\end{aligned}\end{equation} On $B_{d_i}(x_i)$, from Lemma \ref{tildeU}, we see that $e^{\tilde{W}_{\bf{x},\mu}}=O\Big(\frac{\mu_i^2\e^2}{(1+\mu_i^2|y-x_i|^2)^2}\Big)=O\Big(\frac{\e}{(1+\mu_i^2|y-x_i|^2)^2}\Big)$ and \begin{equation} \begin{aligned}\label{n3} & \frac{1}{\mu_i^2} \|[ {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde {g}_{{\bf{x}},\mu}\left( \eta\right) -\left( g_{{\bf{x}},\mu}(\eta') -\tilde{g}_{{\bf{x}},\mu}\left( \eta ^{\prime}\right) \right)](x_i+\mu_i^{-1}y) (1+|y|)^{1+\frac{\alpha}{2}}\|_{L^2(B_{d_i\mu_i}(0))} \\& =(\|\eta\|_{L^\infty(\Omega)}+\|\eta'\|_{L^\infty(\Omega)}+O(\e))\|\eta-\eta'\|_{L^\infty(\Omega)}. \end{aligned}\end{equation} From Lemma \ref{estg} and the estimates \eqref{n2}-\eqref{n3}, we have \begin{equation}\label{p07} \begin{aligned} & \left \Vert {g}_{{\bf{x}},\mu}\left(\eta\right) - g_{{\bf{x}},\mu}(\eta') \right \Vert _{\mathbb{Y}_{\alpha,{\bf{x}},\mu}} \le(\|\eta\|_{L^\infty(\Omega)}+\|\eta'\|_{L^\infty(\Omega)}+O(\e^{\frac{1}{2}}))\|\eta-\eta'\|_{L^\infty(\Omega)}. \end{aligned}\end{equation} In view of the estimates (\ref{p06})-(\ref{p07}), we obtain that $B_{\bf{x},\mu}$ is a contraction map on $S_{\bf{x},\mu}$. Step 3. In view of Step 1, Step 2, and the contraction mapping theorem, there exists a unique solution $ \eta_{{\bf{x}},\mu} \in S_{\bf{x},\mu}$ of (\ref{fix2}). Moreover, from Theorem \ref{pp2}, Lemma \ref{pp1}, and \eqref{p09}, we obtain that \begin{align*} \left \Vert \eta_{{\bf{x}},\mu}\right \Vert _{L^{\infty}}+\left \Vert \eta_{{\bf{x}},\mu }\right \Vert _{\mathbb{X}_{\alpha,{\bf{x}},\mu}} \leq \frac{C\ln \mu}{\mu^{2-\frac{\alpha}{2}}}, \end{align*} where $C>0$ is independent of $\e>0$. Now we complete the proof of Proposition \ref{pp}.
\end{proof} By Proposition \ref{pp}, we get that for any $\mu\in \left[ \frac{\beta_{0}}{\sqrt{\varepsilon}},\frac{\beta_{1}}{\sqrt{\varepsilon}}\right] $, and any $\bf{x}$ close to $\bf{q}$, where ${\bf{q}}$ is a non-degenerate critical point of $G^{\ast}$ with $D\left( \bf{q}\right) <0$, there is $\eta_{{\bf{x}},\mu} \in S_{\bf{x},\mu}$ such that \begin{equation}\begin{aligned} \label{final} &\Delta(\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}) \\&=- \frac{1}{{\varepsilon}^{2}}e^{G(u_0+\tilde {W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}(1-e^{G(u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2 +8k\pi+c_{0}Z_{{\bf{x}},\mu,0}+\sum_{i=1}^{k}\sum_{j=1}^{2} c_{ij}Z_{x_{i},\mu_{i},j},\end{aligned} \end{equation} where $c_{0}$, $c_{ij}$ are constants satisfying \[ L_{{\bf{x}},\mu}\left( \eta_{{\bf{x}},\mu}\right) -g_{{\bf{x}},\mu}\left( \eta_{{\bf{x}},\mu}\right)-c_{0}Z_{{\bf{x}},\mu,0}-\sum_{i=1}^{k}\sum_{j=1}^{2} c_{ij}Z_{x_{i},\mu_{i},j} \in F_{{\bf{x}},\mu}.\] In the following, we will choose ${\bf{x}},\mu$ suitably (depending on $\varepsilon$) such that the corresponding $c_{0}$, $c_{ij}$ are zero, and hence the solution $ \eta_{{\bf{x}},\mu} $ is exactly the solution to (\ref{err}), which implies that $ u_\e = 1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}$ is a solution to \eqref{1}. It is standard to prove the following lemma.
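Indeed, the proof of the next lemma reduces to linear algebra (we only sketch the idea): testing \eqref{final} against $Y_{{\bf{x}},\mu,0}$ and $Y_{x_{i},\mu_{i},j}$, the hypotheses of the lemma say that the left-hand sides vanish, which yields the homogeneous linear system
\[ c_{0}\int_{\Omega}Z_{{\bf{x}},\mu,0}Y_{a}dx+\sum_{i=1}^{k}\sum_{j=1}^{2}c_{ij}\int_{\Omega}Z_{x_{i},\mu_{i},j}Y_{a}dx=0 \]
for each $Y_{a}\in \{Y_{{\bf{x}},\mu,0},Y_{x_{i},\mu_{i},j}\}$. For small $\e>0$, the matrix of the pairings $\int_{\Omega}Z_{a}Y_{b}dx$ is diagonally dominant, so the system admits only the trivial solution $c_{0}=c_{ij}=0$.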
\begin{lemma}\label{ppp} If \begin{equation*}\begin{aligned} &\int_{\Omega}\Big[ \Delta \eta_{{\bf{x}},\mu} +\frac{1}{{\varepsilon}^{2}}e^{G(u_0+\tilde {W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}(1-e^{G(u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2+\Delta\tilde{W}_{\bf{x},\mu}-8k\pi \Big] Y_{x_{i},\mu_{i},j}dx=0, \end{aligned} \end{equation*} and \begin{equation*}\begin{aligned} &\int_{\Omega}\Big[ \Delta \eta_{{\bf{x}},\mu} +\frac{1}{{\varepsilon}^{2}}e^{G(u_0+\tilde {W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}(1-e^{G(u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2+\Delta\tilde{W}_{\bf{x},\mu}-8k\pi \Big] Y_{{\bf{x}},\mu,0}dx=0, \end{aligned} \end{equation*} then $c_{0}=c_{ij}=0$ for $i=1,...,k$ and $j=1,2$. \end{lemma} Let $x_i=(x_{i1},x_{i2})$. By using the proof of \cite[Theorem 1.2]{LY1}, we get the following result. \begin{proposition}\label{pppp} We have \begin{equation}\begin{aligned} &\int_{\Omega}\Big[ \Delta \eta_{{\bf{x}},\mu} +\frac{1}{{\varepsilon}^{2}}e^{G(u_0+\tilde {W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}(1-e^{G(u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2+\Delta\tilde{W}_{\bf{x},\mu}-8k\pi \Big] Y_{x_{i},\mu_{i},j}dx \\&=A_0\frac{\partial G^{\ast}\left( {\bf{x}}\right) }{\partial x_{ij}}+O\left( \frac{\ln \mu}{\mu^{2-\frac{\alpha}{2}}}\right)\ \ \textrm{for}\ \ j=1,2, \label{1p1} \end{aligned} \end{equation} and \begin{equation}\begin{aligned} &\int_{\Omega}\Big[\Delta \eta_{{\bf{x}},\mu} +\frac{1}{{\varepsilon}^{2}}e^{G(u_0+\tilde {W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}(1-e^{G(u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2+\Delta\tilde{W}_{\bf{x},\mu}-8k\pi \Big] Y_{{\bf{x}},\mu,0}dx \\&=\frac{8}{\rho_{1}\mu^{3}}\left( \sum_{i=1}^{k} \rho_{i}\left( \int_{\Omega_{i}\setminus B_{d_i}\left( x_i\right)} \frac{e^{f_{{\bf{x}},i}}-1}{\left \vert y-x_{i}\right \vert ^{4}}-\int_{\mathbb{R}^2\setminus \Omega_{i}}\frac{1}{\left \vert y-x_{i}\right \vert ^{4}}\right) \right) \\&+B_0{\varepsilon}^{2}\mu +\frac{1}{\mu^{3}}O\left( \left \vert DG^{\ast}\left( {\bf{x}}\right)
\right \vert ^{2}\ln \mu+\delta^{2}\right) +O\left( \frac{1}{\mu^{5}}\right), \label{1p2} \end{aligned} \end{equation} where $A_0, B_0>0$ are constants, $\delta>0$ is any small constant, $\Omega _{1},...,\Omega_{k}$ are any open sets with $\Omega_{i}\cap \Omega_{j}=\emptyset$ for $i\neq j$, $\cup_{i=1}^{k}\bar{\Omega}_{i}=\Omega$, $B_{d_{i}}\left( x_{i}\right) \subset\subset \Omega_{i}$, $i=1,...,k$, and \begin{equation*}\begin{aligned} &f_{{\bf{x}},i}\left( y\right) = 8\pi \left( \gamma \left( y,x_{i}\right) -\gamma \left( x_{i},x_{i}\right) +{\displaystyle \sum \limits_{j\neq i}^{k}} \left( G\left( y,x_{j}\right) -G\left( x_{i},x_{j}\right) \right) \right) +u_0\left( y\right) -u_0\left( x_{i}\right). \end{aligned}\end{equation*} \end{proposition} \begin{proof} In \cite{LY1}, if $\eta_{{\bf{x}},\mu}\in \mathbb{X}_{\alpha,{\bf{x}},\mu}$ satisfies $\left \Vert \eta_{{\bf{x}},\mu}\right \Vert _{L^{\infty}\left( \Omega \right) }+\left \Vert \eta_{{\bf{x}},\mu}\right \Vert _{\mathbb{X}_{\alpha,{\bf{x}},\mu}}\leq \frac{C\ln \mu}{\mu^{2-\frac{\alpha}{2}}}$, then the following hold: \begin{equation}\begin{aligned} &\int_{\Omega}\left( \Delta \left( \tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}\right) +\frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}\left( 1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}\right) -8k\pi \right) Y_{x_{i},\mu_{i},j}dx\\& =A_0\frac{\partial G^{\ast}\left( {\bf{x}}\right) }{\partial x_{ij}}+O\left( \frac{\ln \mu}{\mu^{2-\frac{\alpha}{2}}}\right)\ \ \textrm{for}\ \ j=1,2, \end{aligned}\label{1p11} \end{equation} and \begin{equation}\begin{aligned}& \int_{\Omega}\left( \Delta \left( \tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}\right) +\frac {1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}\left( 1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}\right) -8k\pi \right) Y_{{\bf{x}},\mu,0}dx\\& =\frac{8}{\rho_{1}\mu^{3}}\left( {\displaystyle \sum \limits_{i=1}^{k}} \rho_{i}\left(
\int_{\Omega_{i}\setminus B_{d_i}\left( x_i\right)} \frac{e^{f_{{\bf{x}},i}}-1}{\left \vert y-x_{i}\right \vert ^{4}}-\int_{\mathbb{R}^2\setminus \Omega_{i}}\frac{1}{\left \vert y-x_{i}\right \vert ^{4}}\right) \right) +B_0{\varepsilon}^{2}\mu \\& +\frac{1}{\mu^{3}}O\left( \left \vert DG^{\ast}\left( {\bf{x}}\right) \right \vert ^{2}\ln \mu+\delta^{2}\right) +O\left( \frac{1}{\mu^{5}}\right) . \end{aligned}\label{1p12} \end{equation} Indeed, the estimate \eqref{1p11} was obtained in \cite[(4.26)]{LY1} and the estimate \eqref{1p12} was obtained in \cite[(4.28)]{LY1}.\\ Comparing our integrals \eqref{1p1} and \eqref{1p2} with \eqref{1p11} and \eqref{1p12}, the differences are the following integrals: \begin{equation*}\begin{aligned} &\int_{\Omega}\Big\{ \frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}) \\& -\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2\Big\} Y_{x_{i},\mu_{i},j}dx, \end{aligned}\end{equation*} and \begin{equation*}\begin{aligned} &\int_{\Omega}\Big\{ \frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}) \\& -\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2\Big\} Y_{{\bf{x}},\mu,0}dx. \end{aligned}\end{equation*} From \eqref{totaldiff}, we recall that \begin{equation}\label{estforu21} \begin{aligned} & {g}_{{\bf{x}},\mu}\left(\eta\right) -\tilde{g}_{{\bf{x}},\mu}\left(\eta\right) \\&=\frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta}) -\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta)})^2 \\&=O\Big( \frac{e^{3u_0+3\tilde{W}_{\bf{x},\mu}+3\eta}}{\e^2}\Big)\ \ \textrm{on}\ \ \Omega.
\end{aligned}\end{equation} From Lemma \ref{tildeU}, we see that \begin{equation}\begin{aligned} e^{\tilde{W}_{\bf{x},\mu}}=O\Big(\frac{\e}{(1+\mu_i^2|y-x_i|^2)^2}\Big)\ \textrm{on}\ B_{2d_i}(x_i)\ \textrm{and}\ e^{\tilde{W}_{\bf{x},\mu}}=O(\e^3)\ \textrm{on}\ \Omega\setminus[\cup B_{d_i}(x_i)]. \label{estforu2}\end{aligned}\end{equation} Then from \eqref{estforu21}-\eqref{estforu2}, we see that \begin{equation}\begin{aligned}\label{estforu3} &\int_{\Omega}\Big\{ \frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}) \\& -\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2\Big\} Y_{x_{i},\mu_{i},j}dy\\& =\int_{\Omega}O\Big(\frac{e^{3u_0+3\tilde{W}_{\bf{x},\mu}}}{{\varepsilon}^{2}}\Big)Y_{x_{i},\mu_{i},j}dy =O\Big(\frac{1}{\mu^{3}}\Big).\end{aligned}\end{equation} Similarly, we also see that from \eqref{estforu21}-\eqref{estforu2}, \begin{equation}\begin{aligned}\label{estforu4} &\int_{\Omega}\Big\{ \frac{1}{{\varepsilon}^{2}}e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}} (1-e^{u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}}) \\&-\frac{1}{{\varepsilon}^{2}}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})} (1-e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})})^2\Big\} Y_{{\bf{x}},\mu,0}dy \\& =\int_{\Omega}O\Big(\frac{e^{3u_0+3\tilde{W}_{\bf{x},\mu}}}{{\varepsilon}^{2}}\Big) Y_{{\bf{x}},\mu,0}dy =O\Big(\frac{1}{\mu^5}\Big).\end{aligned}\end{equation} In view of \eqref{1p11}-\eqref{estforu4}, we complete the proof of Proposition \ref{pppp}. \end{proof} \textbf{Completion of the proof of Theorem \ref{blmix}}: In view of Proposition \ref{pp}, we can find $\eta_{{\bf{x}},\mu}$ satisfying \eqref{final}. To complete the proof of Theorem \ref{blmix}, we need to find $({\bf{x}},\mu)$ suitably depending on $\varepsilon>0$ such that the corresponding $c_{0}$, $c_{ij}$ are zero in \eqref{final}. 
By using Proposition \ref{pppp}, we see that the conditions in Lemma \ref{ppp} are equivalent to \begin{equation}DG^*({\bf{x}})=O(\frac{\ln\mu}{\mu^{2-\frac{\alpha}{2}}}),\label{final1}\end{equation}and \begin{equation}\begin{aligned}\label{final2}&\frac{8}{\rho_{1}\mu^{3}}\left( {\displaystyle \sum \limits_{i=1}^{k}} \rho_{i}\left( \int_{\Omega_{i}\setminus B_{d_i}\left( x_i\right)} \frac{e^{f_{{\bf{x}},i}}-1}{\left \vert y-x_{i}\right \vert ^{4}}-\int_{\mathbb{R}^2\setminus \Omega_{i}}\frac{1}{\left \vert y-x_{i}\right \vert ^{4}}\right) \right) + B_0{\varepsilon}^{2}\mu \\&\ =\frac{1}{\mu^{3}}O\left( \left \vert DG^{\ast}\left( {\bf{x}}\right) \right \vert ^{2}\ln \mu+\delta^{2}\right) +O\left( \frac{1}{\mu^{5}}\right).\end{aligned}\end{equation} Since $D({\bf{q}})<0$, we can find a small $\delta>0$, such that for $\bf{x}$ close to $\bf{q}$, we have \begin{equation*}\begin{aligned} {\displaystyle \sum \limits_{i=1}^{k}} \rho_{i}\left( \int_{\Omega_{i}\setminus B_{d_i}\left( x_i\right)} \frac{e^{f_{{\bf{x}},i}}-1}{\left \vert y-x_{i}\right \vert ^{4}}-\int_{\mathbb{R}^2\setminus \Omega_{i}}\frac{1}{\left \vert y-x_{i}\right \vert ^{4}}\right) +O(\delta^2) <0.\end{aligned}\end{equation*} Then we obtain a solution $({\bf{x}},\mu )=({\bf{x}}(\e),\mu(\e))$ of \eqref{final1}-\eqref{final2} satisfying \begin{equation*}|DG^*({\bf{x}}(\e))|=O(\e^{1-\frac{\alpha}{4}}\ln\e), \ \ \mu(\e)\in\Big(\frac{\beta_0}{\sqrt{\e}},\frac{\beta_1}{\sqrt{\e}}\Big),\end{equation*} which implies the existence of a solution $u_\e$ to \eqref{1}. In view of $e^{\tilde{W}_{\bf{x},\mu}}=O(\e)$ on $\Omega$, $u_\e=F(v_\e)$, and \begin{equation*} \begin{aligned} &u_\e(y)=1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu}, \end{aligned} \end{equation*} we obtain that $\lim_{\e\to0} \sup_{\Omega}v_{\e}=-\infty$.
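We briefly indicate, as a heuristic only, why $\mu(\e)$ has the order $\e^{-1/2}$: neglecting the error terms in \eqref{final2}, the two leading terms must balance, i.e.
\[ B_{0}{\varepsilon}^{2}\mu \approx-\frac{8}{\rho_{1}\mu^{3}}\sum_{i=1}^{k}\rho_{i}\left( \int_{\Omega_{i}\setminus B_{d_i}\left( x_i\right)} \frac{e^{f_{{\bf{x}},i}}-1}{\left \vert y-x_{i}\right \vert ^{4}}-\int_{\mathbb{R}^2\setminus \Omega_{i}}\frac{1}{\left \vert y-x_{i}\right \vert ^{4}}\right), \]
and since the right-hand side is positive for $\bf{x}$ close to $\bf{q}$ (this is where $D({\bf{q}})<0$ enters) while $B_{0}>0$, the balance ${\varepsilon}^{2}\mu \sim \mu^{-3}$ forces $\mu^{4}\sim {\varepsilon}^{-2}$, that is, $\mu \sim {\varepsilon}^{-1/2}$, consistent with $\mu(\e)\in\Big(\frac{\beta_0}{\sqrt{\e}},\frac{\beta_1}{\sqrt{\e}}\Big)$.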
Moreover, we recall that from \cite{LY1}, \begin{equation}\label{est2}\begin{aligned}\int_{B_{d_i}(x_i)}e^{w_{{\bf{x}},\mu}^*+u_0}dx=\frac{8^{k-1}\rho_1}{\Pi_{i=2}^k\mu_i^2}\Big(8\pi+O(\frac{\ln\mu}{\mu^2})\Big),\end{aligned}\end{equation} \begin{equation}\label{est3}\begin{aligned} \int_{\Omega}e^{w_{{\bf{x}},\mu}^*+u_0}dx=\frac{8^{k-1}\rho_1}{\Pi_{i=2}^k\mu_i^2}\Big(8k\pi+O(\frac{\ln\mu}{\mu^2})\Big), \end{aligned}\end{equation} and \begin{equation}\label{est4}\begin{aligned}w_{{\bf{x}},\mu}^*(x)=\sum_{i=1}^k w_{x_i,\mu_i}^*(x)=-2k\ln\mu+O(1)\ \textrm{ on}\ \Omega\setminus[\cup_{i=1}^{k}B_\delta(x_i)]\ \textrm{for any}\ \delta>0.\end{aligned}\end{equation} Indeed, the estimate \eqref{est2} was obtained in \cite[(2.9)]{LY1}, the estimate \eqref{est3} was obtained in \cite[(2.10)]{LY1}, and the estimate \eqref{est4} was obtained in \cite[(2.12)]{LY1}. Then we obtain that \[\frac{e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}}{\int_{\Omega}e^{G(1+u_0+\tilde{W}_{\bf{x},\mu}+\eta_{{\bf{x}},\mu})}dx}=\frac{e^{G(u_{\e})}}{\int_{\Omega}e^{G(u_{\e})}dx }= \frac{e^{v_{\e}}}{\int_{\Omega}e^{v_{\e}}dx }\rightarrow \frac{1}{k}\sum_{i=1}^{k}\delta_{q_{i}}, \]in the sense of measures as $\e\to0$. At this point, we complete the proof of Theorem \ref{blmix}.\hfill$\square$ {\bf Acknowledgement}\\ The author wishes to thank the anonymous referees very much for careful reading and valuable comments.
\section{Introduction} Deep neural networks have gained great success on a wide range of tasks such as visual recognition and machine translation~\citep{lecun2015deep}. They usually require a large amount of labeled data that can be prohibitively expensive to collect, and even with sufficient supervision their performance can still be poor when generalized to a new environment. The problem of discrepancy between training and testing data distributions is commonly referred to as \textit{domain shift} or \textit{covariate shift}~\citep{shimodaira2000improving}. To alleviate the effect of such shift, \textit{domain adaptation} sets out to make a model trained in a label-rich source domain generalize well in an unlabeled target domain~\citep{pan2010survey}. Domain adaptation has benefited various applications in many practical scenarios, including but not limited to object detection under challenging conditions~\citep{chen2018domain}, cost-effective learning using only synthetic data to generalize to real-world imagery~\citep{vazquez2013virtual}, etc. Prevailing methods for unsupervised domain adaptation (UDA) are mostly based on \textit{domain alignment}, which aims to learn domain-invariant features by reducing the distribution discrepancy between the source and target domain using some pre-defined metrics such as maximum mean discrepancy~\citep{gretton2007kernel,gretton2012optimal}. Recently,~\citet{ganin15} proposed to achieve domain alignment by domain-adversarial training (DAT), which reverses the gradients of a domain classifier to maximize domain confusion. Having yielded remarkable performance gains, DAT was employed in many subsequent UDA methods~\citep{long2017conditional, chen2019transferability, liu2019transferable}. Nevertheless, DAT with gradient reversal layers still faces three critical restrictions when applied to practical scenarios. (1) DAT cannot continuously provide effective gradients for learning domain-invariant representations.
The reason is that the binary domain classifier has high capacity to discriminate the two domains and thus overwhelms the adversarial training, which is usually addressed by manually adjusting the weights of the adversarial loss according to specific tasks, as in~\cite{shu2018dirt}. (2) DAT cannot deal with pixel-level \textit{domain shift} that is frequently encountered in visual tasks~\citep{hoffman18a}. (3) The domain-invariant features learned by DAT are justified only by intuition and learning theory~\cite{ben2010theory} and are difficult to interpret, which impedes the investigation of the underlying mechanism of adversarial domain adaptation. To overcome the aforementioned difficulties, we propose a novel adversarial approach, namely Max-margin Domain-Adversarial Training (MDAT), to realize stable and comprehensive (\textit{i.e.} both feature-level and pixel-level) domain alignment. MDAT works based on a carefully-designed Adversarial Reconstruction Network (ARN). Specifically, ARN consists of a shared feature extractor, a label predictor, and a reconstruction network (\textit{i.e.} decoder) that serves as a domain classifier. MDAT enables an adversarial game between the feature extractor and the decoder. The decoder focuses on reconstructing features from the source domain and pushing target features away from a \textit{margin}, while the feature extractor aims to fool the decoder by generating target features that can be reconstructed. In this adversarial way, three critical issues are neatly solved: (1) the max-margin loss reduces the discriminative capacity of the domain classifier, balancing and stabilizing domain-adversarial training; (2) without involving any new network structures, MDAT achieves both pixel-level and feature-level domain alignment; (3) reconstructing adapted features to images reveals how the source and target domains are aligned by adversarial training. We evaluate ARN with MDAT on both visual and non-visual UDA benchmarks.
It shows a more stable training procedure and achieves significant improvements over DAT on all tasks with pixel-level or higher-level \textit{domain shift}. We also observe that it is insensitive to the choices of hyperparameters and as such is favorable for replication in practice. In principle, our approach is generic and can be used to enhance any domain adaptation method that leverages domain alignment as an ingredient. \section{Related Work} Domain adaptation aims to transfer knowledge from one domain to another. \citet{ben2010theory} provide an upper bound of the test error on the target domain in terms of the source error and the $\mathcal{H}\triangle\mathcal{H}$-distance. As the source error is stationary for a fixed model, the goal of most UDA methods is to minimize the $\mathcal{H}\triangle\mathcal{H}$-distance by reducing some metrics such as Maximum Mean Discrepancy (MMD)~\citep{TzengHZSD14,Longicml15} and CORAL~\citep{sun2016deep}. Inspired by Generative Adversarial Networks (GAN)~\citep{goodfellow2014generative}, \citet{ganin15} proposed to learn domain-invariant features by Domain-Adversarial Training (DAT), which has inspired many UDA methods thereafter. For example, \citet{zhang2019bridging} propose a new divergence for distribution comparison based on minimax optimization, and \citet{wang2019negative} discover that filtering out unrelated source samples helps avoid negative transfer in DAT. Adversarial Discriminative Domain Adaptation (ADDA)~\citep{tzeng2017adversarial} fools the domain classifier by adversarial training, though not in an end-to-end manner. CyCADA~\citep{hoffman18a} and PixelDA~\citep{bousmalis2017unsupervised} leveraged GAN to conduct both feature-level and pixel-level domain adaptation, which yields significant improvements yet at the cost of high network complexity.
Recent works find that DAT deteriorates feature learning, and hence propose to overcome it by generating transferable examples~\citep{liu2019transferable} or involving an extra regularizer to retain discriminability~\citep{chen2019transferability}. These approaches can also be directly applied to MDAT for further enhancement. Another line of approaches relevant to our method uses reconstruction networks (\textit{i.e.} decoder networks), which enable unsupervised image-to-image translation by learning pixel-level features~\citep{zhu2017unpaired}. In UDA, \citet{ghifary2016deep} employed a decoder network for pixel-level adaptation, and Domain Separation Network (DSN)~\citep{bousmalis2016domain} further leveraged multiple decoder networks to learn domain-specific features. These approaches treat the decoder network as an independent component for augmented feature learning that is irrelevant to domain alignment~\citep{glorot2011domain}. In this paper, we innovatively utilize the decoder network as the domain classifier in MDAT, which enables both feature-level and pixel-level domain alignment in a stable and straightforward fashion. \begin{figure*}[!htp] \centering \includegraphics[width=0.85\textwidth]{figure/ARN.pdf} \caption{The proposed architecture is composed of a shared feature extractor $G_e$ for two domains, a label predictor $G_y$ and a reconstruction network $G_r$. In addition to the basic supervised learning in the source domain, our adversarial reconstruction training enables the extractor $G_e$ to learn domain-invariant features.
Specifically, the network $G_r$ aims to reconstruct the source samples $x^s$ and to impede the reconstruction of the target samples $x^t$, while the extractor $G_e$ tries to fool the reconstruction network in order to reconstruct the target samples $x^t$.} \label{fig:framework} \end{figure*} \section{Problem Formulation} \subsection{Problem Definition and Notations} In unsupervised domain adaptation, we assume that a model works with a labeled dataset $\textbf{X}_S$ and an unlabeled dataset $\textbf{X}_T$. Let $\textbf{X}_S=\{(\textbf{x}^s_i, y^s_i)\}_{i\in [{N_s}]}$ denote the labeled dataset of $N_s$ samples from the source domain, where each label $y^s_i$ belongs to the finite label space $Y=\{1,2,\dots,K\}$. The other dataset $\textbf{X}_T=\{\textbf{x}^t_i\}_{i\in [{N_t}]}$ has $N_t$ samples from the target domain but has no labels. We further assume that the two domains have different distributions, \textit{i.e.} $\textbf{x}^s_i \sim \mathcal{D}_S$ and $\textbf{x}^t_i \sim \mathcal{D}_T$. In other words, there exists some \textit{domain shift}~\citep{ben2010theory} between $\mathcal{D}_S$ and $\mathcal{D}_T$. The ultimate goal is to learn a model that can predict the label $y^t_i$ given the target input $\textbf{x}^t_i$. \subsection{Unbalanced Minimax Game in Domain-Adversarial Training} To achieve domain alignment, Domain-Adversarial Training (DAT) is a minimax game between a shared feature extractor $F$ for two domains and a domain classifier $D$. The domain classifier is trained to determine whether the input sample belongs to the source or the target domain, while the feature extractor learns to deceive the domain classifier, which is formulated as: \begin{equation} \min_{F} \max_{D} \mathbb{E}_{x\sim \mathcal{D}_S}[\ln{D(F(x))}]+\mathbb{E}_{x\sim \mathcal{D}_T}[\ln{(1-D(F(x)))}]. \end{equation} We usually utilize a Convolutional Neural Network (CNN) as the feature extractor and fully connected layers (FC) as the domain classifier.
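As a concrete illustration of the minimax objective above, the following numpy sketch evaluates a toy domain classifier's loss on fixed one-dimensional features and applies the gradient-reversal update that the feature extractor receives in DAT; the shapes, feature values, and the linear classifier are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy 1-D "features" produced by a frozen extractor F for each domain.
z_src = rng.normal(loc=-1.0, scale=0.5, size=64)  # source features
z_tgt = rng.normal(loc=+1.0, scale=0.5, size=64)  # target features

w = -1.0  # linear domain classifier D(z) = sigmoid(w * z); with w = -1
          # it already outputs high "source" probability on the source side

def domain_loss(z_src, z_tgt, w):
    # Negated inner objective of the minimax game: D wants D(F(x)) -> 1
    # on the source domain and -> 0 on the target domain.
    p_src, p_tgt = sigmoid(w * z_src), sigmoid(w * z_tgt)
    return -(np.log(p_src).mean() + np.log(1.0 - p_tgt).mean())

# Finite-difference gradient of the domain loss w.r.t. one target feature.
eps = 1e-5
def grad_wrt_feature(z0):
    def loss_with(z):
        zt = z_tgt.copy()
        zt[0] = z
        return domain_loss(z_src, zt, w)
    return (loss_with(z0 + eps) - loss_with(z0 - eps)) / (2.0 * eps)

g = grad_wrt_feature(z_tgt[0])
lr = 0.1
# Plain descent on this loss would *help* D; the gradient reversal layer
# flips the sign so the extractor ascends the domain loss instead,
# dragging the target feature toward the source cluster.
z_confused = z_tgt[0] - lr * (-g)
```

The reversed update moves the target feature toward the source side, which is exactly the "maximize domain confusion" behavior that the gradient reversal layer implements inside backpropagation.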
Theoretically, DAT reduces the cross-domain discrepancy and helps learn domain-invariant representations~\citep{ganin2016domain}. However, the training of DAT is rather unstable. Without sophisticated tuning of the hyper-parameters, DAT often cannot reach convergence. Through empirical experiments, we observe that such instability is due to the imbalanced adversarial game between $D$ and $F$. The binary domain discriminator $D$ can easily achieve convergence with very high accuracy at an early training epoch, while it is much harder for the feature extractor $F$ to fool the domain discriminator and to simultaneously perform well on the source domain. In this sense, there is a strong likelihood that the domain classifier overwhelms DAT, and the only remedy is to weaken the training of $D$ by tuning the hyper-parameters according to different tasks. In our method, we restrict the capacity of the domain classifier so as to form a minimax game in a harmonious manner. Inspired by the max-margin loss in Support Vector Machines (SVM)~\citep{cristianini2000introduction} (\textit{i.e.} the hinge loss), if we push the source domain and the target domain away from a margin rather than as far as possible, then the task of $F$ to fool $D$ becomes much easier. For a binary domain classifier, we define the margin loss as \begin{equation} \mathcal{L}_{mg}(y)=[m-t \cdot y]^{+}, \end{equation} where $y$ is the predicted domain label, $[\cdot]^{+}$ denotes $\max(0,\cdot)$, $m$ is a positive margin and $t$ is the ground-truth label for the two domains (assuming $t=-1$ for the source domain and $t=1$ for the target domain). Then we introduce our MDAT scheme based on an innovative network architecture. \subsection{Max-margin Domain-Adversarial Training} Besides the training instability issue, DAT also suffers from restrictive feature-level alignment -- lack of pixel-level alignment.
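A minimal numerical sketch of the margin loss $\mathcal{L}_{mg}$ defined in the previous subsection (the function follows the definition in the text; the sample predictions are made up):

```python
import numpy as np

def margin_loss(y, t, m=1.0):
    # L_mg(y) = [m - t*y]^+ = max(0, m - t*y): the loss vanishes once the
    # prediction y sits past the margin m on the correct side for the
    # ground-truth domain label t (t = -1: source, t = +1: target).
    return np.maximum(0.0, m - t * y)

# Source sample (t = -1): zero loss once y <= -m.
print(margin_loss(-2.0, t=-1))   # -> 0.0 (already past the margin)
print(margin_loss(0.5, t=-1))    # -> 1.5 (on the wrong side)

# Target sample (t = +1): zero loss once y >= m.
print(margin_loss(2.0, t=+1))    # -> 0.0
print(margin_loss(0.5, t=+1))    # -> 0.5
```

Unlike the unbounded logistic loss of a binary domain classifier, this loss saturates at zero once a sample is pushed past the margin, so a well-separating classifier gains nothing from separating the two domains any further.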
To realize stable and comprehensive domain alignment together, we first propose an Adversarial Reconstruction Network (ARN) and then elaborate MDAT. As depicted in Figure \ref{fig:framework}, our model consists of three parts including a shared feature extractor $G_e$ for both domains, a label predictor $G_y$ and a reconstruction network $G_r$. Let the feature extractor $G_e(\textbf{x};\theta_e)$ be a function parameterized by $\theta_e$ which maps an input sample $\textbf{x}$ to a deep embedding $\textbf{z}$. Let the label predictor $G_y(\textbf{z};\theta_y)$ be a task-specific function parameterized by $\theta_y$ which maps an embedding $\textbf{z}$ to a task-specific prediction $\hat{y}$. The reconstruction network $G_r(\textbf{z};\theta_r)$ is a decoding function parameterized by $\theta_r$ that maps an embedding $\textbf{z}$ to its corresponding reconstruction $\hat{\textbf{x}}$. The first learning objective for the feature extractor $G_e$ and the label predictor $G_y$ is to perform well in the source domain. For a supervised $K$-way classification problem, it is simply achieved by minimizing the negative log-likelihood of the ground truth class for each sample: \begin{equation} \mathcal{L}_{task}=\sum_{i=1}^{N_s}\mathcal{L}_y(\textbf{x}_i^s, \textbf{y}_i^s)=-\sum_{i=1}^{N_s}\textbf{y}^s_i \cdot \log G_y(G_e(\textbf{x}_i^s;\theta_e);\theta_y), \end{equation} where $\textbf{y}^s_i$ is the one-hot encoding of the class label $y^s_i$ and the logarithm operation is conducted on the softmax predictions of the model. The second objective is to render the learned features domain-invariant.
This is motivated by the \textit{covariate shift} assumption~\citep{shimodaira2000improving}, which indicates that if the feature distributions $S(\textbf{z})=\{G_e(\textbf{x};\theta_e)|\textbf{x}\sim \mathcal{D}_S \}$ and $T(\textbf{z})=\{G_e(\textbf{x};\theta_e)|\textbf{x}\sim \mathcal{D}_T \}$ are similar, the source label predictor $G_y$ can achieve a similar accuracy in the target domain. To this end, we first design a decoder network $G_r$ that serves as a domain classifier. In MDAT, we train the decoder network $G_r$ to only reconstruct the features in the source domain and to push the features in the target domain away from a margin. In this way, the decoder has the functionality of distinguishing the source domain from the target domain. The objective of training $G_r$ is formulated as \begin{equation}\label{eq:r_loss}\small \min_{\theta_r} \sum_{i=1}^{N_s+N_t} \mathcal{L}_{mg}(\mathcal{L}_r (\textbf{x}_i))= \min_{\theta_r} \sum_{i=1}^{N_s} \mathcal{L}_r (\textbf{x}_i^s) + \sum_{j=1}^{N_t}[m-\mathcal{L}_r(\textbf{x}^t_j)]^+, \end{equation} where $m$ is a positive margin and $\mathcal{L}_r(\cdot)$ is the mean squared error (MSE) term for the reconstruction loss defined as \begin{equation} \mathcal{L}_r(\textbf{x}) = || G_r(G_e(\textbf{x};\theta_e);\theta_r) - \textbf{x} ||^2_2, \end{equation} where $||\cdot||^2_2$ denotes the squared $L_2$-norm. Compared with a normal binary domain classifier (\textit{e.g.} fully connected layers), the decoder network acts as a milder domain discriminator that separates the two domains by a specific margin rather than as far as possible. Conversely, to form an adversarial game, the feature extractor $G_e$ learns to deceive $G_r$ such that the learned target features are indistinguishable from the source ones, which is formulated by: \begin{equation}\label{eq:e_loss} \min_{\theta_e} \sum_{j=1}^{N_t} \mathcal{L}_r(\textbf{x}_j^t).
\end{equation} Then the whole learning procedure of ARN with MDAT can be formulated by: \begin{align} & \min_{\theta_e, \theta_y} \sum_{i=1}^{N_s}\mathcal{L}_y(\textbf{x}_i^s, \textbf{y}_i^s) + \alpha \sum_{j=1}^{N_t} \mathcal{L}_r(\textbf{x}_j^t), \\ & \min_{\theta_r} \sum_{i=1}^{N_s} \mathcal{L}_r (\textbf{x}_i^s) + \sum_{j=1}^{N_t}[m-\mathcal{L}_r(\textbf{x}^t_j)]^+, \end{align} where $\mathcal{L}_y$ denotes the negative log-likelihood of the ground truth class for the labeled sample $(\textbf{x}_i^s, \textbf{y}_i^s)$ and $\alpha$ controls the interaction of the loss terms. In the following section, we derive an optimal solution of MDAT and provide theoretical justifications on how MDAT reduces the distribution discrepancy for UDA. \subsection{Optimal Solution of MDAT} Considering the adversarial game between a reconstruction network $R$ and a feature extractor $E$ (\textit{i.e.} $G_r$ and $G_e$ in our network, respectively), we prove that if the feature extractor $E$ maps both the source domain $x^s\sim P_S(x^s)$ and the target domain $x^t\sim P_T(x^t)$ to a common feature space $\mathcal{Z}=\{z=E(x)|z\sim P_z(z) \}$, the MDAT system reaches a Nash equilibrium. This theoretically explains how MDAT enables the feature extractor to learn domain-invariant features. Similar to EBGAN~\cite{zhao2016energy}, we assume $E$ and $R$ have infinite capacity. With a slight abuse of notation, denote by $R(\cdot)$ the MSE of the reconstruction network. We first define two objectives: \begin{equation} V(E,R)=\int_{x^s,x^t}\mathcal{L}_{mg}(x^s,x^t)P_S(x^s)P_T(x^t)dx^s dx^t \end{equation} \begin{equation} U(E,R)=\int_{x^t}\mathcal{L}_r(x^t)P_T(x^t)dx^t \end{equation} In MDAT, we train the reconstruction network $R$ to minimize the quantity $V(E,R)$ and train the feature extractor $E$ to minimize the quantity $U(E,R)$.
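Before turning to the equilibrium analysis, the two empirical objectives (the decoder objective in Eq.\ref{eq:r_loss} and the extractor objective in Eq.\ref{eq:e_loss}) can be sketched numerically; the per-sample reconstruction errors below are made-up scalars standing in for $\mathcal{L}_r(\cdot)$, with $m=5$ matching the value used later in the experiments:

```python
import numpy as np

def decoder_objective(lr_src, lr_tgt, m=5.0):
    # Eq. (r_loss): the decoder G_r reconstructs source samples (small L_r
    # on source) and pushes target reconstruction errors beyond the margin m.
    return lr_src.sum() + np.maximum(0.0, m - lr_tgt).sum()

def extractor_objective(lr_tgt):
    # Eq. (e_loss): the extractor G_e tries to make target features
    # reconstructable, i.e. to drive L_r on target samples toward zero.
    return lr_tgt.sum()

# Hypothetical per-sample reconstruction errors L_r for a small batch.
lr_src = np.array([0.1, 0.2, 0.1])
lr_tgt = np.array([6.0, 4.0, 5.5])

# Only the target sample with L_r = 4.0 < m still contributes a hinge
# penalty to the decoder (of 5 - 4 = 1); the other two already sit beyond
# the margin, so the decoder's total is approximately 0.4 + 1 = 1.4.
d_obj = decoder_objective(lr_src, lr_tgt)
e_obj = extractor_objective(lr_tgt)
```

The saturation of the hinge terms is the point of MDAT: once a target feature is pushed past the margin, the decoder receives no further reward, leaving the extractor a winnable game.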
A Nash equilibrium of our system is a pair $(E^*,R^*)$ that satisfies: \begin{equation}\label{eq:v} V(E^*,R^*) \le V(E^*,R) \quad \forall R \end{equation} \begin{equation}\label{eq:u} U(E^*,R^*) \le U(E,R^*) \quad \forall E \end{equation} \begin{theorem} If a feature extractor $E$ maps both source domain $x^s\sim P_S(x^s)$ and target domain $x^t\sim P_T(x^t)$ to a common feature space $\mathcal{Z}=\{z=E(x)|z\sim P_z(z) \}$, the system reaches a Nash equilibrium $(E^*,R^*)$ and $V(E^*,R^*)=m$. \end{theorem} \textit{Proof.} We first prove Eq.\ref{eq:v}: \begin{align}\nonumber V(E^*,R)&=\int_{x^s}P_S(x^s)R(E^*(x^s))dx^s\\ &+\int_{x^t}P_T(x^t)\big[m-R\big(E^*(x^t)\big)\big]^+dx^t\\ &=\int_z\Big(P_z(z)R(z)+P_z(z)[m-R(z)]^+\Big)dz \end{align} Since $f(x)=x+[m-x]^+$ is non-decreasing on $[0,+\infty)$ and attains its minimum value $m$ at $x=0$, $V(E^*,R)$ is minimized by taking $R^*(z)=0$: \begin{align} V(E^*,R^*)&=m\int_z P_z(z)dz=m \end{align} When $R^*(z)=0$, we expand $U(E,R^*)$ in Eq.\ref{eq:u}: \begin{align} U(E,R^*)&=\int_{x^t}R^*\big(E(x^t)\big)P_T(x^t)dx^t\\ &=\int_z R^*(z)P_z(z)dz=0 \end{align} As $U(E,R^*)\geq 0$, we get $U(E^*,R^*) \le U(E,R^*)$. It can be easily observed that the optimal solution of MDAT is a Nash equilibrium when the feature extractor maps the two domains into a common feature space, \textit{i.e.} aligning the distributions in the feature space. \subsection{Connection to Domain Adaptation Theories}\label{sec:da-theory} We further investigate how the proposed method connects to the learning theory of domain adaptation. The rationale behind domain alignment is motivated by the learning theory of the non-conservative domain adaptation problem by Ben-David et al.~\citep{ben2010theory}: \begin{theorem} Let $\mathcal{H}$ be the hypothesis space where $h \in \mathcal{H}$. Let $(\mathcal{D}_S, \epsilon_s)$ and $(\mathcal{D}_T, \epsilon_t)$ be the two domains and their corresponding generalization error functions.
The expected error for the target domain is upper bounded by \begin{equation} \epsilon_t(h) \le \epsilon_s(h) + \frac{1}{2} d_{\mathcal{H}\triangle\mathcal{H}}(\mathcal{D}_S,\mathcal{D}_T) + \lambda, \forall h \in \mathcal{H}, \end{equation} where the ideal risk $\lambda=\min_h [\epsilon_s(h)+\epsilon_t(h)]$, and $d_{\mathcal{H}\triangle\mathcal{H}}(\mathcal{D}_S,\mathcal{D}_T)=2\sup_{h_1,h_2 \in \mathcal{H}} |\Pr_{x\sim \mathcal{D}_S}[h_1(x)\ne h_2(x)] - \Pr_{x\sim \mathcal{D}_T}[h_1(x)\ne h_2(x)]|$. \end{theorem} Theoretically, when we minimize the $\mathcal{H}\triangle\mathcal{H}$-distance, the upper bound of the expected error for the target domain is reduced accordingly. As derived in DAT~\citep{ganin15}, assuming a family of domain classifiers $\mathcal{H}_d$ to be rich enough to contain the symmetric difference hypothesis set of $\mathcal{H}_p$, such that $\mathcal{H}_p\triangle\mathcal{H}_p=\{ h|h=h_1 \oplus h_2, \, h_1,h_2 \in \mathcal{H}_p \}$ where $\oplus$ is the XOR-function, the empirical $\mathcal{H}_p\triangle\mathcal{H}_p$-distance has an upper bound \textit{w.r.t.} the optimal domain classifier $h$: \begin{equation}\label{eq:bound}\footnotesize d_{\mathcal{H}_p \triangle \mathcal{H}_p}(\hat{\mathcal{D}}_S,\hat{\mathcal{D}}_T) \le 2\sup_{h\in \mathcal{H}_d}|\Pr_{\textbf{z}\sim \hat{\mathcal{D}}_S}[h(\textbf{z})=0] + \Pr_{\textbf{z}\sim \hat{\mathcal{D}}_T}[h(\textbf{z})=1]-1|, \end{equation} where $\hat{\mathcal{D}}_S$ and $\hat{\mathcal{D}}_T$ denote the distributions of the source and target feature spaces $\mathcal{Z}_S$ and $\mathcal{Z}_T$, respectively. Note that the MSE of $G_r$ plus a ceiling function is a form of domain classifier $h(\textbf{z})$, \textit{i.e.} $\lceil [m-\mathcal{L}_r(\cdot)]^+-0.5 \rceil$ for $m=1$. It maps source samples to $1$ and target samples to $0$; up to swapping the two domain labels, which leaves the supremum unchanged, this is exactly the classifier attaining the upper bound in Eq.\ref{eq:bound}.
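As a sanity check on this construction, the induced hard classifier can be evaluated on hypothetical per-sample reconstruction errors (the error values are made up; $m=1$ as in the text):

```python
import numpy as np

def h(recon_error, m=1.0):
    # h(z) = ceil([m - L_r(z)]^+ - 0.5): a hard domain label obtained by
    # thresholding the margin-clipped reconstruction error at m/2.
    return int(np.ceil(np.maximum(0.0, m - recon_error) - 0.5))

# Source features are reconstructed well (small L_r); target features are
# pushed beyond the margin (L_r >= m). The two groups therefore receive
# two distinct constant labels; which domain gets which label is
# immaterial to the bound, whose supremum also ranges over complements.
src_labels = [h(e) for e in (0.05, 0.2, 0.4)]  # small errors -> one label
tgt_labels = [h(e) for e in (1.0, 1.5, 3.0)]   # large errors -> the other
```

When the reconstruction errors are perfectly separated by the margin, this classifier labels the two feature distributions consistently, so the empirical quantity inside the supremum is maximized.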
Hence, our reconstruction network $G_r$ maximizes the domain discrepancy with a margin and the feature extractor learns to minimize it adversarially. \begin{table*}[!htp] \small \centering \caption{We compare with general, statistics-based (\textbf{S}), reconstruction-based (\textbf{R}) and adversarial-based (\textbf{A}) domain adaptation approaches. We repeated each experiment 3 times and report the average and standard deviation (std) of the test accuracy in the target domain.} \label{tab:benchmark} \vspace{1mm} \begin{tabular}{lcccc} \toprule \multicolumn{1}{r|}{Source} & MNIST & USPS & SVHN & SYN \\ \multicolumn{1}{r|}{Target} & USPS & MNIST & MNIST & SVHN \\ \midrule \multicolumn{1}{l|}{\textit{Source-Only model}} & 78.2 & 63.4 & 54.9 & 86.7 \\ \multicolumn{1}{l|}{\textit{Train on target}} & 96.5 & 99.4 & 99.4 & 91.3 \\ \midrule \multicolumn{1}{l|}{[\textbf{S}] MMD~\citep{Longicml15}} & 81.1 & - & 71.1 & 88.0 \\ \multicolumn{1}{l|}{[\textbf{S}] CORAL~\citep{sun2016deep}} & 80.7 & - & 63.1 & 85.2 \\ \midrule \multicolumn{1}{l|}{[\textbf{R}] DRCN~\citep{ghifary2016deep}} & 91.8 & 73.7 & 82.0 & 87.5 \\ \multicolumn{1}{l|}{[\textbf{R}] DSN~\citep{bousmalis2016domain}} & 91.3 & - & 82.7 & 91.2 \\ \midrule \multicolumn{1}{l|}{[\textbf{A}] DANN (DAT)~\citep{ganin2016domain}} & 85.1 & 73.0 & 74.7 & 90.3 \\ \multicolumn{1}{l|}{[\textbf{A}] ADDA~\citep{tzeng2017adversarial}} & 89.4 & 90.1 & 76.0 & - \\ \multicolumn{1}{l|}{[\textbf{A}] CDAN~\citep{long2017conditional}} & 93.9 & 96.9 & 88.5 & - \\ \multicolumn{1}{l|}{[\textbf{A}] CyCADA~\citep{hoffman18a}} & 95.6 & 96.5 & 90.4 & - \\ \multicolumn{1}{l|}{[\textbf{A}] BSP+DANN~\citep{chen2019transferability} } & 94.5 & 97.7 & 89.4 & - \\ \multicolumn{1}{l|}{[\textbf{A}] MCD~\citep{saito2018maximum} } & 96.5 & 94.1 & 96.2 & - \\ \multicolumn{1}{l|}{[\textbf{A}] CADA~\citep{zou2019consensus} } & 96.4 & 97.0 & 90.9 & - \\ \midrule \multicolumn{1}{l|}{\textbf{ARN w.o.
MDAT}} & 93.1$\pm$0.3 & 76.5$\pm$1.2 & 67.4$\pm$0.9 & 86.8$\pm$0.5 \\ \multicolumn{1}{l|}{\textbf{ARN with MDAT (proposed)}} & \textbf{98.6$\pm$0.3} & \textbf{98.4$\pm$0.1} & \textbf{97.4$\pm$0.3} & \textbf{92.0$\pm$0.2 } \\ \bottomrule \end{tabular} \end{table*} \subsection{Discussions} Compared with conventional DAT-based methods that are usually based on a binary logistic network~\citep{ganin15}, the proposed ARN with MDAT is more attractive and incorporates new merits conceptually and theoretically: \textbf{(1) Effective gradients and balanced adversarial training.} By using the decoder as the domain classifier with a margin loss that restrains its overwhelming capacity in adversarial training, the adversarial game can continuously provide effective gradients for training the feature extractor, leading to better alignment and balanced adversarial training. Moreover, through the experiments in Section \ref{sec:exp}, we discover that our method shows a more stable training procedure and strong robustness to the hyper-parameters, \textit{i.e.} $\alpha$ and $m$, greatly alleviating parameter tuning for model selection. \textbf{(2) Richer information for comprehensive domain alignment.} Rather than typical DAT that uses only a single bit of domain information, MDAT utilizes the reconstruction network as the domain classifier, which captures more domain-specific and pixel-level features during the unsupervised reconstruction~\citep{bousmalis2016domain}. Therefore, MDAT further helps address pixel-level domain shift apart from the feature-level shift, leading to comprehensive domain alignment in a straightforward manner. \textbf{(3) Feature interpretability for method validation.} MDAT allows us to visualize the features by directly reconstructing target features to images with the decoder network. It is crucial to understand to what extent the features are aligned, since this helps to reveal the underlying mechanism of adversarial domain adaptation.
We interpret these adapted features in Section \ref{sec:analyses}. \begin{table}\small \centering \caption{Comparisons on WiFi gesture recognition.}\label{tab:ida} \vspace{1mm} \begin{tabular}{l|c}\toprule Method & Accuracy (\%)\\\midrule \textit{Source-only} & 58.4$\pm$0.7 \\ {[\textbf{S}] MMD~\cite{Longicml15}} & 61.2$\pm$0.5 \\ {[\textbf{R}] DRCN~\cite{ghifary2016deep}} & 69.3$\pm$0.3 \\ {[\textbf{A}] DANN (DAT)~\cite{ganin15}} & 68.2$\pm$0.2 \\ {[\textbf{A}] ADDA~\cite{tzeng2017adversarial}} & 71.5$\pm$0.3 \\ {[\textbf{A}] CADA~\cite{zou2019consensus}} & 88.8$\pm$0.1 \\ \textbf{ARN with MDAT} & \textbf{91.3$\pm$0.2} \\ \bottomrule \end{tabular} \end{table} \section{Experiment}\label{sec:exp} We evaluate the proposed approach on several visual and non-visual UDA tasks with varying degrees of \textit{domain shift}. Then detailed analyses are conducted \textit{w.r.t.} toy dataset, parameter sensitivity, gradient and feature visualization. Dataset descriptions and implementation details are attached in the supplementary materials. \subsection{Setup} \textbf{Digits}~\cite{ganin2016domain}. We utilize four digit datasets including \textbf{MNIST}, \textbf{USPS}, \textbf{SVHN} and Synthetic Digits (\textbf{SYN}) that form four transfer tasks: MNIST$\to$USPS, USPS$\to$MNIST, SVHN$\to$MNIST and SYN$\to$SVHN. \textbf{Office-Home}~\cite{venkateswara2017deep} is a challenging UDA dataset including 15,500 images from 65 categories. It comprises four extremely distinct domains: Artistic images (\textbf{Ar}), ClipArt (\textbf{Cl}), Product images (\textbf{Pr}), and Real-World images (\textbf{Rw}). We evaluate on all twelve transfer tasks. \textbf{WiFi Gesture Recognition}~\citep{zou2019consensus} consists of six gestures recorded by Channel State Information (CSI)~\citep{xie2018precise}. Each CSI sample is a 2D matrix that depicts the gesture with the surrounding layout environment. 
Thus, the CSI data collected in two environments forms two domains, which formulates a spatial adaptation problem. We compare with state-of-the-art UDA methods that perform three ways of domain alignment. Specifically, \textbf{MMD} regularization~\citep{Longicml15} and \textbf{CORAL}~\citep{sun2016deep} are based on statistical distribution matching. \textbf{DRCN}~\citep{ghifary2016deep} and \textbf{DSN}~\citep{bousmalis2016domain} use the reconstruction network for UDA, while more prevailing UDA methods adopt domain-adversarial training including \textbf{DANN}~\cite{ganin15}, \textbf{ADDA}~\citep{tzeng2017adversarial}, \textbf{CyCADA}~\citep{hoffman18a}, \textbf{CDAN}~\cite{long2017conditional}, \textbf{MCD}~\cite{saito2018maximum}, \textbf{CADA}~\citep{zou2019consensus}, \textbf{TransNorm}~\cite{wang2019transferable} and \textbf{BSP}~\cite{chen2019transferability}. The baseline results are reported from their original papers where available. We used \textbf{PyTorch} to implement our model. For the \textbf{Digits} dataset, we follow the same protocol in~\cite{hoffman18a} and the same network architecture of~\cite{ganin15}. For \textbf{Office-Home}, we adopt ResNet-50 pretrained on ImageNet as our backbone. According to the standard protocols in~\cite{Longicml15}, we employ all the labeled source samples and unlabeled target samples for training. For the \textbf{WiFi Gesture Recognition} data, we employ the modified LeNet and the standard protocol in~\citep{zou2019consensus}. The design of $G_r$ is the inverse of $G_e$, with pooling operations replaced by upsampling. We fix $\alpha=0.02$ and $m=5$ in all the experiments, which are obtained on \textbf{SVHN}$\to$\textbf{MNIST} by Bayesian optimization~\citep{malkomes2016bayesian}. We adopt a mini-batch SGD optimizer with momentum of 0.9 and the progressive training strategy in DANN~\citep{ganin15}. \subsection{Overall Results} The classification accuracies on Digits are shown in Table~\ref{tab:benchmark}.
Our method outperforms all other methods on four transfer tasks. Specifically, for \textbf{SVHN$\to$MNIST} where severe pixel-level domain shift exists, our method significantly improves \textbf{DANN} by 22.7\%, which justifies the efficacy of \textbf{ARN} for addressing pixel-level shift. Our method also performs well when the target domain is quite small, achieving 98.6\% accuracy on \textbf{MNIST$\to$USPS}. In Table~\ref{tab:ida}, our method improves the source-only model by 32.9\% on the WiFi spatial adaptation problem, which indicates that \textbf{MDAT} is also helpful for non-visual domain adaptation problems. Table~\ref{tab:office-home} shows the performance on the large-scale dataset, where \textbf{MDAT} yields better performance against other domain alignment approaches. \begin{table*}[htbp] \centering \caption{Accuracy (mean) of unsupervised domain adaptation on \textit{Office-Home} datasets across 3 independent runs.}\label{tab:office-home} \resizebox{\textwidth}{!}{ \begin{tabular}{l|cccccccccccc|c} \toprule Method & Ar$\to$Cl & Ar$\to$Pr & Ar$\to$Rw & Cl$\to$Ar & Cl$\to$Pr & Cl$\to$Rw & Pr$\to$Ar & Pr$\to$Cl & Pr$\to$Rw & Rw$\to$Ar & Rw$\to$Cl & Rw$\to$Pr & Avg \\ \midrule ResNet-50~\cite{he2016deep} & 34.9 & 50.0 & 58.0 & 37.4 & 41.9 & 46.2 & 38.5 & 31.2 & 60.4 & 53.9 & 41.2 & 59.9 & 46.1 \\ DAN~\cite{Longicml15} & 43.6 & 57.0 & 67.9 & 45.8 & 56.5 & 60.4 & 44.0 & 43.6 & 67.7 & 63.1 & 51.5 & 74.3 & 56.3 \\ DANN (DAT)~\cite{ganin15} & 45.6 & 59.3 & 70.1 & 47.0 & 58.5 & 60.9 & 46.1 & 43.7 & 68.5 & 63.2 & 51.8 & 76.8 & 57.6 \\ TransNorm+DANN~\cite{wang2019transferable} & 43.5 & 60.9 & 72.1 & 51.0 & 61.5 & 62.5 & 49.6 & 46.8 & 70.4 & 63.7 & 52.2 & 77.9 & 59.3 \\ CDAN~\cite{long2017conditional} & 49.0 & 69.3 & 74.5 & 54.4 & 66.0 & 68.4 & 55.6 & 48.3 & \textbf{75.9} & 68.4 & \textbf{55.4} & 80.5 & 63.8 \\ \textbf{ARN with MDAT} (Proposed) & \textbf{51.3} & \textbf{69.7} & \textbf{76.2} & \textbf{59.5} & \textbf{68.3} & \textbf{70.0} & \textbf{57.2} & \textbf{48.9} &
75.8 & \textbf{69.1} & 55.3 & \textbf{80.6} & \textbf{65.2} \\ \bottomrule \end{tabular} } \end{table*} \begin{figure*}[!htp] \centering \subfigure[Source-only Model]{ \includegraphics[width=0.26\textwidth]{figure/toy/process/source-only.png}\label{fig:toy-source}} \subfigure[DANN]{ \includegraphics[width=0.26\textwidth]{figure/toy/process/DANN.png}\label{fig:toy-dann}} \subfigure[MDAT]{ \includegraphics[width=0.26\textwidth]{figure/toy/process/MDAT2.png}\label{fig:toy-mdat}} \caption{The \textit{inter-twining moons} toy problem. Red and green dots indicate source samples, while blue dots are target samples. Black lines indicate the changes of decision boundaries during 10 training epochs.} \label{fig:toydata} \end{figure*} \begin{figure}[!htp] \centering \subfigure[Convergence]{ \label{fig:loss-compare} \includegraphics[width=0.23\textwidth]{figure/loss_comparison.pdf}\label{fig:train-loss}} \subfigure[Test Accuracy]{ \label{fig:acc-compare} \includegraphics[width=0.23\textwidth]{figure/accuracy_comparison.pdf}} \caption{The training procedure \textit{w.r.t.} loss and test accuracy. $\mathcal{L}_e$ is the training loss of reconstructing target samples in Eq. \ref{eq:e_loss}. $\mathcal{L}_r$ is the training loss of the reconstruction network in Eq. \ref{eq:r_loss}. $\mathcal{L}_d$ is the domain loss in DAT~\citep{ganin15}. $\alpha$ is the penalty term of $\mathcal{L}_e$ and $\mathcal{L}_d$ in MDAT and DAT, respectively.} \label{fig:training} \end{figure} \subsection{Analyses}\label{sec:analyses} \textbf{Ablation study.} To verify the contribution of the reconstruction network $G_r$ and \textbf{MDAT}, we discard the term $\mathcal{L}_r(\textbf{x}^t)$ in Eq.\ref{eq:r_loss}, and evaluate the method, denoted as \textbf{ARN w.o. MDAT} in Table \ref{tab:benchmark}. Comparing it with the source-only model, we can infer the improvement brought by reconstructing target samples. \textbf{ARN w.o.
MDAT} improves tasks with low-level \textit{domain shift} such as \textbf{MNIST$\leftrightarrow$USPS}, which conforms with our discussion that unsupervised reconstruction is instrumental in learning pixel-level features. Comparing \textbf{ARN w.o. MDAT} with the original \textbf{ARN}, we can infer the contribution of \textbf{MDAT}. Table \ref{tab:benchmark} shows that the \textbf{MDAT} achieves an impressive margin-of-improvement. For \textbf{USPS$\to$MNIST} and \textbf{SVHN$\to$MNIST}, the MDAT improves \textbf{ARN w.o. MDAT} by around 30\%. It demonstrates that MDAT that helps learn domain-invariant features is the main reason for the tremendous improvement. \textbf{Toy dataset.} We study the behavior of MDAT on a variant of \textit{inter-twinning moons} 2D problem, where the target samples are rotated $30^{\circ}$ from the source samples. 300 samples are generated for each domain using \textbf{scikit-learn}~\cite{pedregosa2011scikit}. The adaptation ability is investigated by comparing MDAT with DANN and source-only model. As shown in Figure~\ref{fig:toydata}, we visualize the changing boundaries during 10 epochs training. In Figure~\ref{fig:toy-source}, the model is overfitting the source domain, and the decision boundary does not change. In Figure~\ref{fig:toy-dann} and~\ref{fig:toy-mdat}, both DANN and MDAT adapt the boundaries to the target samples, but MDAT shows faster and better adaptation during 10 epochs. Integrating the training procedure of \textbf{SVHN}$\to$\textbf{MNIST} in Figure~\ref{fig:train-loss}, we justify that more effective gradients are provided by MDAT for better adaptation performance. 
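The toy setup above can be reproduced in a few lines. The sketch below is a minimal numpy stand-in for the \textbf{scikit-learn} \texttt{make\_moons} generator used in the paper; the noise level and random seed are assumptions made for illustration only.

```python
import numpy as np

def make_moons(n_per_moon=150, noise=0.05, seed=0):
    """Hand-rolled stand-in for sklearn.datasets.make_moons: two
    inter-twining half-circles with Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, np.pi, n_per_moon)
    upper = np.c_[np.cos(t), np.sin(t)]               # first moon
    lower = np.c_[1.0 - np.cos(t), 0.5 - np.sin(t)]   # second moon
    X = np.vstack([upper, lower])
    X += rng.normal(0.0, noise, X.shape)
    y = np.r_[np.zeros(n_per_moon, int), np.ones(n_per_moon, int)]
    return X, y

def rotate(X, degrees):
    """Rotate 2-D samples about the origin (the target-domain shift)."""
    a = np.deg2rad(degrees)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return X @ R.T

Xs, ys = make_moons()        # 300 labeled source samples
Xt = rotate(Xs, 30.0)        # 300 unlabeled target samples, rotated 30 deg
```

Any classifier trained on \texttt{Xs} and adapted to \texttt{Xt} reproduces the qualitative behavior shown in Figure~\ref{fig:toydata}.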
\begin{table}[!t] \centering \caption{Accuracy (\%) on \textbf{SVHN$\to$MNIST}.}\label{tab:sensitivity} \vspace{1mm} \scalebox{0.72}{ \begin{tabular}{c|cccccccc} \toprule $\alpha$ & 0.01 & 0.03 & 0.07 & 0.1 & 0.2 & 0.3 & 0.5 & 1.0 \\ \hline DANN & 71.1 & 74.1 & 72.7 & 74.1 & 74.7 & 9.6 & 9.7 & 10.3\\ ARN ($m=1$) & 95.7 & 95.9 & 93.3 & 93.2 & 80.1 & 75.3 & 73.1 & 67.5 \\ \midrule $m$ & 0 & 0.1 & 0.3 & 0.5 & 1.0 & 2.0 & 5.0 & 10.0 \\ \hline ARN ($\alpha=2e^{-2}$) & 64.3 & 64.5 & 75.2 & 90.0 & 96.0 & 97.4 & 97.7 & 96.7 \\ \bottomrule \end{tabular}} \end{table} \textbf{Gradients and stability analysis.} We further study the training procedure of MDAT on \textbf{SVHN$\to$MNIST} \textit{w.r.t.} loss and target accuracy in Figure~\ref{fig:loss-compare} and \ref{fig:acc-compare}, respectively. In Figure \ref{fig:loss-compare}, ARN has a steadily decreasing loss ($\mathcal{L}_r$) for all $\alpha$, but the domain loss in DAT ($\mathcal{L}_d$) becomes extremely small at the beginning. These observations conform with our intuition: the domain classifier in DAT is so strong that it impedes the adversarial training, while MDAT provides more effective gradients for training the feature extractor by restricting the capacity of the domain classifier. With effective gradients, the adversarial game is more balanced, which is validated in Figure \ref{fig:acc-compare}, where the test accuracy of ARN is more stable than that of DAT across training epochs.
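The vanishing-gradient intuition behind this observation can be illustrated with a toy calculation that is independent of the exact losses used here: once a logistic domain classifier becomes nearly perfect, the cross-entropy gradient it passes back through the logit collapses. A minimal numpy sketch (generic binary cross-entropy, an illustration rather than the actual DAT objective):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_grad_wrt_logit(z, y):
    """Gradient of the binary cross-entropy -[y log D + (1-y) log(1-D)]
    with respect to the logit z, where D = sigmoid(z): dL/dz = D - y."""
    return sigmoid(z) - y

# An uncertain domain classifier (logit near 0) passes a large gradient
# back toward the features; an overwhelming one passes almost none.
grad_uncertain = abs(bce_grad_wrt_logit(0.0, y=1.0))    # = 0.5
grad_confident = abs(bce_grad_wrt_logit(12.0, y=1.0))   # vanishingly small
```

This is the mechanism the curves in Figure~\ref{fig:loss-compare} suggest: an overwhelming domain classifier saturates its loss and starves the feature extractor of gradient signal.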
\begin{table*}[t] \centering \caption{Visualizing the source image, target images and reconstructed target images (R-Target Images) for four digit adaptation tasks.}\label{tab:visual} \vspace{1mm} \scalebox{0.9}{ \begin{tabular}{|c|c|c|c|}\toprule & Source Images & Target Images & R-Target Images \\ \midrule \textbf{MNIST$\to$USPS} & \includegraphics[width=0.22\textwidth]{figure/interpret/mu1.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/mu2.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/mu3.png} \\ \midrule % \textbf{USPS$\to$MNIST} & \includegraphics[width=0.22\textwidth]{figure/interpret/um1.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/um2.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/um3.png} \\ \midrule % \textbf{SVHN$\to$MNIST} & \includegraphics[width=0.22\textwidth]{figure/interpret/sm1.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/sm2.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/sm3.png} \\ \midrule % \textbf{SYN$\to$SVHN} & \includegraphics[width=0.22\textwidth]{figure/interpret/ds1.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/ds2.png} & \includegraphics[width=0.22\textwidth]{figure/interpret/ds3.png} \\ \bottomrule \end{tabular}} \end{table*} \begin{figure*}[t] \centering \subfigure[Source-only Model]{ \label{fig:source-tsne} \includegraphics[width=0.3\textwidth]{figure/source_tsne.pdf}} \subfigure[DANN]{ \label{fig:dann2-tsne} \includegraphics[width=0.3\textwidth]{figure/dann_tsne.pdf}} \subfigure[ARN]{ \label{fig:arn-tsne} \includegraphics[width=0.3\textwidth]{figure/arn_tsne.pdf}} \caption{T-SNE visualization on \textbf{SVHN$\to$MNIST} with their corresponding domain labels (red: target; blue: source) and category labels (10 classes) shown in the left and right subfigures, respectively.} \label{fig:visualize-tsne} \end{figure*} \textbf{Parameter sensitivity.} We investigate the sensitivity of $\alpha$ and $m$ on 
\textbf{SVHN$\to$MNIST}. In Table \ref{tab:sensitivity}, the results show that ARN achieves good performance for $\alpha \in [0.01,0.1]$. Even with larger $\alpha$, ARN is able to achieve convergence. In comparison, denoting $\alpha$ as the weight of the adversarial domain loss ($\mathcal{L}_d$), DANN cannot converge when $\alpha>0.2$ due to the imbalanced adversarial game between the overwhelming domain classifier and the feature extractor. Regarding the sensitivity of $m$, the accuracy of ARN exceeds 96.0\% for $m\geq1$. As discussed in Section~\ref{sec:da-theory}, when $m\geq1$ the decoder serves as a domain classifier. These analyses validate that ARN is less sensitive to its hyperparameters than DANN. Even in the worst cases, ARN can always achieve convergence. \textbf{T-SNE embeddings.} We analyze the performance of domain alignment for DANN (DAT)~\citep{ganin15} and ARN (MDAT) by plotting T-SNE embeddings of the features $\textbf{z}$ on the task \textbf{SVHN$\to$MNIST}. In Figure \ref{fig:source-tsne}, the source-only model obtains diverse embeddings for each category, but the domains are not aligned. In Figure \ref{fig:dann2-tsne}, DANN aligns the two domains but the decision boundaries of the classifier are vague. In Figure \ref{fig:arn-tsne}, the proposed ARN effectively aligns the two domains for all categories and the classifier boundaries are much clearer. \textbf{Interpreting adapted features via reconstruction.} One of the key advantages of ARN is that, by visualizing the reconstructed target images, we can infer how the features are domain-invariant. We reconstruct the MDAT features of the target domain and visualize them in Table \ref{tab:visual}. It is observed that the target features are reconstructed to source-like images by the decoder $G_r$. As discussed before, intuitively, MDAT forces the target features to mimic the source feature distribution, which conforms with the visualization.
Similar to image-to-image translation, this indicates that our method conducts an implicit feature-to-feature translation that transfers the target features to source-like features, and hence the features become domain-invariant. \section{Conclusion} We proposed a new domain alignment approach, namely Max-margin Domain-adversarial Training (MDAT), and an MDAT-based deep neural network for unsupervised domain adaptation. The proposed method offers effective and stable gradients for feature learning via an adversarial game between the feature extractor and the reconstruction network. The theoretical analysis provides justification of how it minimizes the distribution discrepancy. Extensive experiments demonstrate the effectiveness of our method, and we further interpret the features through visualizations that conform with our insight. Extending the evaluation to semi-supervised learning constitutes our future work.
\section{Introduction} \label{sec:intro} \IEEEPARstart{M}{etasurfaces}, defined as artificial structures with subwavelength thickness, have enabled the realization of novel compact devices with unprecedented electromagnetic control. Frequency selectivity, absorption, anomalous reflection/transmission, polarization conversion, and focusing are among the many electromagnetic functionalities that can be achieved through the careful design of the metasurfaces \cite{Chen2016, Glybovski2016}. With such unprecedented control of the response to the impinging wave, metasurfaces have led to important breakthroughs in electromagnetic cloaking and imaging, as well as in the creation of ultra-efficient, miniaturized antennas for sensors and implantable communication devices \cite{Chen2011, ChenXZ2012, Li2017b, Vellucci2017, Tasolamprou2017, Tsilipakos2018a}. A metasurface is generally defined as a planar array of periodic or quasi-periodic subwavelength elements, whose structure and coupling determine the electromagnetic function. As long as the elements remain subwavelength in size, the working principle of metasurfaces can be applied from microwaves to the visible range \cite{Glybovski2016}. Between these two extremes lies the terahertz (THz) band, for which designs have been reported to manipulate the phase, amplitude or polarization of the waves reflected or transmitted by the metasurface \cite{Qu2015, Zhang2016a, Liu2016a, Qu2017}. The main issue of conventional metasurfaces is the lack of adaptivity and reconfigurability since, in most designs, the electromagnetic function and its scope are fixed once the unit cell is designed. In order to avoid re-designing and re-fabricating metasurfaces each time a change in frequency or functionality is required, one can introduce tunable or switchable elements in the design of unit cells \cite{Oliveri2015}.
The resulting reconfigurable metasurfaces can be globally or locally tunable depending on the specific design, and, better yet, through appropriate control means, they can become programmable \cite{Liu2018ISCAS, Liu2018}. Coding metamaterials, sometimes also referred to as \emph{digital} metamaterials, are a particular type of programmable metamaterials that discretize the number of states of a unit cell \cite{liu2017concepts, Liang2015a, DellaGiovampaola2014, Cui2014}. Each state is represented by a number of bits that are used to make the actual metasurface. A desired global response is achieved through a medium profile that is not necessarily periodic. Such a structure, when built using locally switchable elements, can be elegantly described as a bit or state matrix and digitally controlled through reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) \cite{Cui2014}. Several examples implementing polarization control, focusing control, or beam manipulation in the GHz range can be found in the literature \cite{Cui2014, Yang2016, Huang2017}. Graphene, with its outstanding optoelectrical properties, has been recently introduced as a key enabler of a myriad of applications in countless domains \cite{Novoselov2012, Wu2012, Low2014, Hosseininejad2017}. It is well known that graphene naturally supports Surface Plasmon Polaritons (SPP) in the terahertz band, and therefore, becomes an excellent option for the implementation of terahertz sources \cite{Jornet2014TRANSCEIVER} and antennas \cite{Correas2017}, among others. The plasmonic nature of graphene at terahertz frequencies leads to miniaturized devices \cite{LlatserComparison}, whereas its inherent tunability has been leveraged in frequency-agile or reconfigurable concepts \cite{tamagnone2012reconfigurable, Hosseininejad2016, Hosseininejad2018EuCAP}. Some of these designs are array-based and, similar to programmable metasurfaces, they achieve reconfigurability by switching the state of their elements, i.e.
\emph{tuning them in or out} \cite{Huang2012ARRAY, Xu2014MIMO, Hosseininejad2018WCNC}. \begin{figure*}[!ht] \centering {\includegraphics[width=1\textwidth]{./figures/Figure1.png} \vspace{-0.3cm} \caption{Sketch of the programmable graphene-based digital metasurface for THz beam steering and its design flow: from the unit cell to the global controller.\label{fig:summary}} \vspace{-0.3cm} \end{figure*} The above-mentioned properties turn graphene into a unique material for the implementation of terahertz reconfigurable metasurfaces. First explorations in this regard considered graphene reflectarrays and studied their amplitude-phase responses when tuning the chemical potential of the graphene \cite{Carrasco2013a}. By means of local tuning of the graphene elements through electrostatic biasing, the scattering profile of the reflectarray can be modified to achieve beam steering \cite{Orazbayev2017}, focusing \cite{Hosseininejad2019}, diffusive scattering \cite{Rouhi2017a}, cloaking \cite{Biswas2018} or wave vorticity control \cite{Chang2016}. \hl{These functionalities have been achieved in the microwave regime thanks to the use of} phase change materials (PCMs) \cite{Chen2015SR}, semiconductor diodes \cite{Perruisseau2010}, \hl{or microelectromechanical systems (MEMS)} \cite{Ma2014light}. \hl{However, as we approach terahertz frequencies, diodes and MEMS become lossy and too large to be integrated within individual unit cells. On the other hand, PCMs offer limited reconfigurability as they generally switch between two states only}. \hl{With graphene}, the design can be greatly simplified and the device can be reconfigured much faster. Although the natural switchability of graphene in the terahertz band matches the coding metamaterial paradigm perfectly, as first explored in \cite{Wang2015}, the literature still lacks a clear methodology for the design of graphene-based unit cells and the coding of the metasurface.
To bridge this gap, this paper presents a comprehensive methodology for the design of programmable metasurfaces from the unit cell to the metasurface controller (Figure \ref{fig:summary}). The proposed methodology is then applied to develop a metasurface for fine-grained beam steering at terahertz frequencies. The metasurface acts as a reflectarray that forms dynamically reconfigurable phase gradients in the X and Y directions, through which the reflected beam can be driven to any desired direction. The unit cells of the reflectarray are based on a graphene-insulator-graphene stack that achieves wide phase tuning via electrostatic biasing of the graphene patches. With two bits per unit cell and the appropriate controller, the proposed metasurface achieves a very wide steering range with low beam width. The proposed metasurface is particularly suitable for wireless communication applications. In this context, the use of the lower part of the THz spectrum (our design operates at $f=2$ THz) becomes extremely attractive due to the abundance of bandwidth, which makes it possible to satisfy the extreme data rate demands of 5G networks and beyond \cite{Akyildiz2014a}. Communication in the THz band, however, requires overcoming high path losses, mainly through directive antennas with very narrow beams and through the use of smart programmable reflectors \cite{Akyildiz2018, Akyildiz2016, Tan2018, Liaskos2018a, Hosseininejad2018WCNC}. It is thus fundamental that these devices be capable of steering the THz beam with high precision to track the users and avoid interrupting communication. In summary, the main contributions of this paper are: \begin{itemize} \item The development of a comprehensive methodology for the design of graphene-based programmable terahertz metasurfaces for beam steering, from the unit cell up to the global controller. \item The use of the proposed methodology to design and evaluate a 2-bit coding metasurface for beam steering.
A wide steering range with a sharp reflected beam and low overheads is demonstrated. The chosen frequency of operation is $f=2$ THz, within the range expected for THz wireless communication applications. \item A scalability analysis illustrating the relation between the different design parameters and performance metrics, and uncovering several co-design opportunities. \end{itemize} The rest of this paper is organized as shown schematically in Fig. \ref{fig:summary}. Section \ref{sec:geometry} presents a design space exploration of graphene-based unit cells from the perspectives of size, chemical potential, and number of states. Section \ref{sec:coding} formulates a design flow for beam steering coding metasurfaces, which is then tested by showing the effective steering of the antenna beam in several directions. Section \ref{sec:antenna} discusses and evaluates the implementation of the scheme that actually controls and (re)programs the metasurface. Finally, Section \ref{sec:disc} outlines the main scalability trends and co-design opportunities of the proposed design. Section \ref{sec:conclusion} concludes the paper. \begin{figure*}[!ht] \centering \subfigure[Full layer unit cell (\textsc{1L}).\label{fig:1G}]{\includegraphics[width=1\columnwidth]{./figures/2a.PNG}} \subfigure[Single patch unit cell (\textsc{1G}).\label{fig:1G-patch}]{\includegraphics[width=1\columnwidth]{./figures/2b.png}} \subfigure[Dual patch unit cell (\textsc{2G}).\label{fig:2G-patch}]{\includegraphics[width=1\columnwidth]{./figures/2c.png}} \vspace{-0.2cm} \caption{A schematic representation of the graphene unit cells with their respective equivalent circuit models.} \label{fig:unit cells} \vspace{-0.3cm} \end{figure*} \section{Graphene-based unit cell} \label{sec:geometry} The design of any metasurface starts with its most basic building block, namely, the unit cell.
For beam manipulation, we need to provide a unit cell with the ability to control the phase response over a wide range of values \cite{Qu2017}. Moreover, since the proposed device acts as a reflectarray, the unit cell needs to yield a high reflection amplitude at all times. To enable dynamic reconfigurability, it is necessary that both objectives can be met without physically changing any geometry. In this paper, reconfigurability is achieved at THz frequencies by means of the electrostatic tuning of graphene. \subsection{Graphene Modeling} We analyze different unit cells that leverage the tunability of graphene to achieve the desired phase variation with reasonable losses and without the need to change any geometry. To drive the design and to perform an accurate evaluation of different proposals, we model graphene as an infinitesimally thin sheet with surface impedance $Z = 1/\sigma(\omega)$, where $\sigma(\omega)$ is the frequency-dependent conductivity of graphene. The complex conductivity is given by \begin{equation} \sigma\left(\omega\right)=\frac{2e^{2}}{\pi\hbar}\frac{k_{B}T}{\hbar}\ln\left[2\cosh\left[\frac{\mu_{c}}{2k_{B}T}\right]\right]\frac{i}{\omega+i\tau^{-1}},\label{eq:sigma_graphene} \end{equation} where $e$, $\hbar$ and $k_{B}$ are constants corresponding to the charge of an electron, the reduced Planck constant and the Boltzmann constant, respectively \cite{Hanson2008}. Variables $T$, $\tau$ and $\mu_{c}$ correspond to the temperature, the relaxation time and the chemical potential of the graphene layer. Note that this expression neglects the edge effects of the graphene and considers that the Drude-like intraband contribution dominates, which are experimentally validated assumptions at the sizes and frequencies considered in this work \cite{AbadalTCOM}.
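As a sanity check, Eq.~\eqref{eq:sigma_graphene} and the surface impedance $Z = 1/\sigma(\omega)$ can be evaluated numerically. The Python sketch below uses the parameter values quoted in this section ($f = 2$ THz, $\tau = 0.6$ ps; room temperature $T = 300$ K is an assumption); it illustrates the analytical model only, not the full-wave simulations.

```python
import numpy as np

# CODATA physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
K_B = 1.380649e-23           # Boltzmann constant, J/K

def graphene_sigma(f, mu_c_ev, tau, temp=300.0):
    """Intraband (Drude-like) graphene conductivity of the expression
    above. f: frequency in Hz; mu_c_ev: chemical potential in eV;
    tau: relaxation time in s; temp: temperature in K."""
    omega = 2.0 * np.pi * f
    mu_c = mu_c_ev * E_CHARGE                      # eV -> J
    prefactor = (2.0 * E_CHARGE**2 / (np.pi * HBAR)) * (K_B * temp / HBAR)
    thermal = np.log(2.0 * np.cosh(mu_c / (2.0 * K_B * temp)))
    return prefactor * thermal * 1j / (omega + 1j / tau)

# Values quoted in this section: f = 2 THz, mu_c = 0.7 eV, tau = 0.6 ps
sigma = graphene_sigma(2e12, 0.7, 0.6e-12)
Z = 1.0 / sigma   # surface impedance of the graphene sheet
```

At 2 THz with $\tau = 0.6$ ps we have $\omega\tau > 1$, so the imaginary (inductive) part of $\sigma$ dominates over the real (lossy) part, which is precisely the plasmonic regime exploited throughout this section.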
On the one hand, the phase control in a graphene metasurface is achieved via changes in its complex conductivity when biased, an effect that can be modeled through the chemical potential value $\mu_{c}$. The chemical potential can be controlled through electrostatic biasing, and therefore, we can meet the phase change requirement. On the other hand, the amplitude response depends on the losses within graphene, which are mostly influenced by the relaxation time value $\tau$. Note that the relaxation time is proportional to the carrier mobility, which depends on the quality of the material. For the purpose of this work, losses will be affordable as long as the carrier mobility of the graphene sheets is on the order of 10,000 cm\textsuperscript{2}V\textsuperscript{-1}s\textsuperscript{-1}, which is achievable with current fabrication and encapsulation techniques \cite{Banszerus2015}. Thus, the amplitude requirement can be met as well. \subsection{Unit Cell Design} Figure \ref{fig:unit cells} shows a schematic representation of the proposed unit cells together with their approximate equivalent circuit models. We numerically simulate different designs in CST Microwave Studio \cite{CST} to obtain the amplitude and phase responses. Then, through the equivalent circuit models, we verify the results of the numerical approach and reason about the behavior of the different unit cells. In all cases, we assume a lateral size of $d_{u} = 20\,\upmu$m. This value is around $\lambda_{0}/8$ for the targeted frequency of operation (2 THz), enough to provide the subwavelength behavior required in the metasurface. The relaxation time of graphene is assumed to be $\tau = 0.6$ ps, which is compatible with the carrier mobility requirements mentioned above. The first unit cell (Fig.
\ref{fig:1G}) consists of a fully covered layer of graphene on top of a silicon substrate with refractive index $n_{Si} = 3.45$ and thickness $d_{Si} = 10 \upmu$m along with a metallic ground plane on the backside. In such a unit cell, graphene is a lossy medium that can be modeled through an $RL$ circuit. Fig. \ref{fig:results1G} plots the amplitude and phase responses of the reflection coefficient for such a unit cell versus frequency and the chemical potential. It is observed that the amplitude response is within an acceptable range, whereas the phase range is not wide enough -- around 135\textsuperscript{o} at 2 THz assuming a maximum chemical potential range of 1 eV. A very good agreement is also obtained between the numerical results and the equivalent circuit model. \begin{figure*}[!ht] \centering \label{fig:1G1}{\includegraphics[width=2\columnwidth]{./figures/Figure3.png}} \vspace{-0.2cm} \caption{a) Amplitude and b) phase responses of the reflection coefficient for the \textsc{1L} unit cell. Effect of chemical potential variation in c) amplitude and d) phase for various frequencies. Unless noted, $E_{F} = 0.7$ eV, $\tau = 0.6$ ps, and $f = 2$ THz. \label{fig:results1G}} \vspace{-0.1cm} \end{figure*} \begin{figure*}[!ht] \centering \label{fig:1Gpatch1}{\includegraphics[width=2\columnwidth]{./figures/Figure4.png}} \vspace{-0.2cm} \caption{a) Amplitude and b) phase responses of the reflection coefficient for the \textsc{1G} unit cell. Unless noted, $E_{F} = 0.2$ eV, $\tau = 0.6$ ps, $d_{G} = 16 \upmu$m, and $f = 2$ THz. Effect of chemical potential and patch size variation in c) amplitude and d) phase for a constant frequency $f = 2$ THz.\label{fig:results1Gp}} \vspace{-0.1cm} \end{figure*} \begin{figure*}[!ht] \centering \label{fig:2Gpatch1}{\includegraphics[width=2\columnwidth]{./figures/Figure5.png}} \vspace{-0.2cm} \caption{a) Amplitude and b) phase responses of the reflection coefficient for the \textsc{2G} unit cell.
In the frequency response figures, $E_{F} = 0.2$ eV, $\tau = 0.6$ ps, $d_{G} = 16 \upmu$m. Effect of the top and bottom layer chemical potentials in c) amplitude and d) phase for a constant frequency (the four chosen coding states used in the simulation are marked by different symbols). In these figures, $\tau = 0.6$ ps, $d_{G} = 12 \upmu$m and $f = 2$ THz. \label{fig:results2Gp}} \vspace{-0.2cm} \end{figure*} The second unit cell (Fig. \ref{fig:1G-patch}) consists of a graphene patch that partially covers the unit cell. The substrate and ground plane remain unchanged. In this case, a capacitance is introduced to model the coupling effects generated between the edges of adjacent graphene patches. As shown in Fig. \ref{fig:results1Gp}, the size of the graphene patch provides an extra degree of freedom to deliver the target amplitude and phase responses. Exploring the \{chemical potential, patch size\} design space, we observe that there is a tradeoff between the amplitude and phase variation. For a patch size of 6 $\upmu$m, the phase range covers almost 360\textsuperscript{o} with less than 0.8 eV chemical potential variation. However, the amplitude response also has a very large variation, which discourages the choice of such a design point. A patch size of 8 $\upmu$m or larger provides a better amplitude response with a reasonable phase change variation. The third unit cell (Fig. \ref{fig:2G-patch}) is composed of a graphene-insulator-graphene stack placed over the substrate along with a ground plane on the backside. High-density polyethylene (HDPE) is chosen as the insulator due to its particularly low losses in the terahertz band \cite{Zhou2014a}. The refractive index of the insulator is $n_{HDPE} = 1.54$ and its thickness is $d_{HDPE} = 4 \upmu$m. The equivalent circuit model of this structure consists of two parallel $RLC$ cells representing each of the graphene sheets. As shown in Fig.
\ref{fig:results2Gp}, this unit cell achieves a much wider phase variation, and by addressing each graphene patch independently, provides an extra degree of freedom to choose the states of the metasurface. \hl{According to the formulation presented in [], the initial values of the RLC model parameters are estimated. For the dual patch unit cell, due to the coupling effect between the two graphene layers, small changes can occur in the RLC parameters, which are then optimized by a genetic algorithm. The parameters for the three unit cells are calculated respectively as....} Regarding the fabrication feasibility of the proposed low-profile structure (10 $\upmu$m substrate), there are advanced silicon substrate thinning techniques that can be used to achieve an ultra-thinning down to 4 $\upmu$m without damage caused by the thinning processes \cite{kim2014ultra}. \subsection{Unit Cell Discrete States} \label{sec:states} The results above show the response of the metasurface for a continuous range of chemical potentials. However, in order to design a bit-programmable metasurface, we need to discretize the potentials to obtain a finite set of addressable states. The first decision concerns the number of target states, and thus, the number of bits required to address a unit cell. Here, the number of states will determine the phase difference between consecutive unit cell states. For the application at hand, there is a relation between the phase difference and the steering resolution, i.e., the angle difference between consecutive achievable beam directions (see Sec. \ref{sec:disc} for details). Therefore, the number of bits must be chosen carefully. Let us now exemplify this process by deriving the states required for the metasurface to work at 2 THz. We start by addressing the \textsc{1G} unit cell with a single bit. From the design space exploration shown in Fig.
\ref{fig:results1Gp}, we choose design points that have high amplitude and a phase difference of approximately 180\textsuperscript{o}. A good choice is $d_{G} = 8 \upmu$m with $\mu_{c} = \{0.6, 1.28\}$ eV corresponding to a bit combination of $B=\{0,1\}$. The resulting amplitude and phase responses, illustrated in Fig. \ref{fig:1code}, provide a constant reflection coefficient of around 0.7 and deliver the targeted 180\textsuperscript{o} phase shift. Two-bit coding leads to a phase shift resolution of 90\textsuperscript{o} and would improve the beam steering accuracy substantially. The \textsc{1G} unit cell, however, barely meets the amplitude and phase shift requirements with a 90\textsuperscript{o} resolution. With $d_{G} = 8 \upmu$m, there is no combination of chemical potentials capable of avoiding the region of low amplitude around 0.9 eV. For larger patch sizes, the phase response is not wide enough to accommodate two bits. For three or more bits, this unit cell would not be suitable for beam steering, at least for the relaxation time values and geometry considered in this work. \begin{figure}[!t] \centering \label{fig:1code1}{\includegraphics[width=0.7\columnwidth]{./figures/Figure6.png}} \vspace{-0.2cm} \caption{a) Amplitude and b) phase of the reflection coefficient for the 1-bit digital metasurface based on the \textsc{1G} unit cell. \label{fig:1code}} \vspace{-0.2cm} \end{figure} Alternatively, the \textsc{2G} unit cell offers much more freedom and is capable of accommodating two or more bits. Addressing the \textsc{2G} unit cell with two bits, one can find suitable design points with $d_{G}=12 \upmu$m. Using the design space exploration from Fig. \ref{fig:results2Gp}, good performance is obtained for the following top-layer and bottom-layer chemical potentials, respectively: $\mu_{c,1} = \{0.6, 1.3, 0.1, 0.4\}$ eV and $\mu_{c,2} = \{0, 0.6, 0.1, 0.1\}$ eV corresponding to the bit combinations $B = \{00, 01, 10, 11\}$. It is observed in Fig.
\ref{fig:2code} that these states consistently achieve a reflection coefficient around 0.7 and a phase difference of 90\textsuperscript{o} covering the whole phase space. In Section \ref{sec:antenna}, we discuss how to electronically achieve these states. \begin{figure}[!t] \centering \label{fig:2code1}{\includegraphics[width=0.7 \columnwidth]{./figures/Figure7.png}} \vspace{-0.2cm} \caption{a) Amplitude and b) phase of the reflection coefficient for the 2-bit digital metasurface based on the \textsc{2G} unit cell. \label{fig:2code}} \vspace{-0.2cm} \end{figure} \section{Coding Metasurface Terahertz Antenna} \label{sec:coding} To illustrate the design approach of a terahertz metasurface for a beam steering application, a metasurface including $M\times N$ controllable unit cells is considered. Our design allows one to introduce a phase gradient by smartly changing the chemical potential $\mu_{c}$ of the graphene sheets from one unit cell to another. In this case, we need to use the generalized reflection law to evaluate the response of the metasurface \cite{yu2011light}. In the following, we first derive the conditions required to achieve beam steering to a desired direction $\{\theta_{r}, \phi_{r}\}$ in Section \ref{sec:formula}. Then, we define the design and configuration flow to achieve the desired direction with our proposed design in Section \ref{sec:flow}. Finally, we evaluate the performance of the proposed metasurface in Section \ref{sec:evaluation}. \subsection{Generalized Reflection Law Formulation} \label{sec:formula} Consider a reflective metasurface under illumination of an incident plane wave at elevation angle $\theta_{i}$ and azimuth angle $\phi_{i}$ according to the coordinate system shown in Fig. \ref{fig:coor}.
The incident wave vector $k_{i}$ can be written as \begin{equation} k_{i} = k_{ix} \hat{x} + k_{iy} \hat{y} + k_{iz} \hat{z} \end{equation} where $\{k_{ix}, k_{iy}, k_{iz}\}$ are the wave vector coordinates, \hl{given} by \begin{equation} \begin{array}{l} \label{eq:ki} k_{ix} = k_{i} \sin{\theta_{i}}\cos{\phi_{i}} = k_{0} n_{i} \sin{\theta_{i}}\cos{\phi_{i}} \\ k_{iy} = k_{i} \sin{\theta_{i}}\sin{\phi_{i}} = k_{0} n_{i} \sin{\theta_{i}}\sin{\phi_{i}} \\ k_{iz} = k_{i} \cos{\theta_{i}} = k_{0} n_{i} \cos{\theta_{i}} \end{array} \end{equation} The same formulation can be applied to the reflected wave vector $k_{r}$ given the elevation angle $\theta_{r}$ and azimuth angle $\phi_{r}$ of the reflected wave. Assuming that the metasurface imposes the phase profile $\Phi(x,y)$, we assign it the virtual wave vector $k_{\Phi}$ so that \begin{equation} \label{eq:kphi} k_{\Phi} = k_{\Phi x} \hat{x} + k_{\Phi y} \hat{y} = \frac{d\Phi}{dx}\hat{x} + \frac{d\Phi}{dy}\hat{y} = \nabla_{x} \Phi \, \hat{x} + \nabla_{y} \Phi \, \hat{y} \end{equation} where $\nabla_{x} \Phi = \tfrac{d\Phi}{dx}$ and $\nabla_{y} \Phi = \tfrac{d\Phi}{dy}$ are the phase gradients along the $x$ and $y$ directions, respectively.
Applying the boundary conditions of the tangential components of the electromagnetic fields, the momentum conservation law for wave vectors can be expressed as \begin{equation}\label{eq:BC} \begin{array}{l} k_{ix} + k_{\Phi x} = k_{rx} \\ k_{iy} + k_{\Phi y} = k_{ry} \end{array} \end{equation} and substituting \eqref{eq:ki} and \eqref{eq:kphi} in \eqref{eq:BC} yields \begin{equation}\label{eq:dphi} \begin{array}{l} k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx} = k_{r} \sin{\theta_{r}}\cos{\phi_{r}} \\ k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy} = k_{r} \sin{\theta_{r}}\sin{\phi_{r}} \end{array} \end{equation} \begin{figure}[!t] \centering {\includegraphics[width=1\columnwidth]{./figures/coordinateSystem.pdf}} \vspace{-0.3cm} \caption{Coordinate system used in the formulations of generalized reflection law.\label{fig:coor}} \vspace{-0.1cm} \end{figure} By mathematically simplifying the above equations as shown in the Appendix, the reflected elevation angle $\theta_{r}$ and azimuth angle $\phi_{r}$ are obtained as \begin{equation} \label{eq:simp} \begin{array}{l} \theta_{r} = \arcsin{\frac{\sqrt{(k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx})^{2} + (k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy})^{2}}}{k_{r}}} \\ \phi_{r} = \arctan{\frac{k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy}}{k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx}}} \end{array} \end{equation} When the metasurface is illuminated by a normally incident wave ($\theta_{i}=\phi_{i}=0$), and assuming air as the medium of the incident and reflected wave, we can simplify the formulas as \begin{equation} \label{eq:reflected} \begin{array}{l} \theta_{r} = \arcsin{\frac{\sqrt{(\nabla_{x}\Phi)^{2}+(\nabla_{y}\Phi)^{2}}}{k_{0}}} \\ \phi_{r} = \arctan{\frac{\nabla_{y}\Phi}{\nabla_{x}\Phi}}, \end{array} \end{equation} which relates the phase gradient in the metasurface to the direction of the reflected wave. 
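The steering relation of Eq.~\eqref{eq:reflected} is easy to verify numerically. The sketch below implements it for a normally incident wave with air on both sides; the example gradient value and the 2 THz operating frequency follow this paper, while the chosen steering angle is an assumption for illustration.

```python
import numpy as np

C0 = 299_792_458.0   # speed of light in vacuum (m/s)

def reflected_direction(grad_x, grad_y, f, n_r=1.0):
    """Reflected angles (theta_r, phi_r), in radians, for a normally
    incident wave: the metasurface phase gradient acts as an in-plane
    wave vector added to the incident one (simplified generalized
    reflection law)."""
    kr = 2.0 * np.pi * f / C0 * n_r
    theta_r = np.arcsin(np.hypot(grad_x, grad_y) / kr)
    phi_r = np.arctan2(grad_y, grad_x)
    return theta_r, phi_r

f = 2e12                                   # 2 THz
k0 = 2.0 * np.pi * f / C0
# A gradient of k0*sin(30 deg) along x alone steers to theta_r = 30 deg:
theta_r, phi_r = reflected_direction(k0 * np.sin(np.deg2rad(30.0)), 0.0, f)
```

A zero gradient yields $\theta_r = 0$, i.e. specular reflection, consistent with a homogeneous surface.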
\subsection{Design Flow} \label{sec:flow} Using \eqref{eq:reflected}, the reflected angles for all phase profiles of the metasurface can be calculated. The design of the metasurface can then be thought of as an inversion process: we need to estimate the phase profile necessary to achieve the desired elevation and azimuth angles of the reflected wave. To this end, and again assuming a normal incident plane wave and air as the medium, we can rearrange \eqref{eq:dphi} as \begin{equation} \label{eq:dphi2} \begin{array}{l} dx = \frac{\lambda_{0} d\Phi}{2\pi\cos{\phi_{r}}\sin{\theta_{r}}} \\ dy = \frac{\lambda_{0} d\Phi}{2\pi\sin{\phi_{r}}\sin{\theta_{r}}} \end{array} \end{equation} where $d\Phi$ describes the phase difference between adjacent unit cell states. From here, the design methodology requires knowledge of the unit cell dimensions and the number of states. The design flow is as follows. \vspace{0.1cm} \noindent \textbf{1. Obtaining the cluster size (in $\upmu$m):} Assume that the coding metasurface can choose among $2^{n}$ states for each unit cell (referred to as $n$-bit coding). In this case, the granularity of the gradient is $d\Phi = \pi/2^{n-1}$. Therefore, the lateral dimensions of the required cluster of unit cells ($d_{cx}$ and $d_{cy}$), as shown in Fig. \ref{fig:summary}, are obtained by substituting $d\Phi$ in Eq. \eqref{eq:dphi2}, \begin{equation} \label{eq:dphi3} \begin{array}{l} d_{cx} = \frac{\lambda_{0}}{2^{n}\cos{\phi_{r}}\sin{\theta_{r}}} \\ d_{cy} = \frac{\lambda_{0}}{2^{n}\sin{\phi_{r}}\sin{\theta_{r}}}. \end{array} \end{equation} It is worth noting that the results in \eqref{eq:dphi2} can be negative if $d\Phi$ becomes negative, which implies that the gradient needs to be reversed at the coding stage. We will later see that the algorithm for metasurface coding already takes the direction of the gradient into consideration.
Also, the value of $\phi_{r}$ determines the difference between $d_{cx}$ and $d_{cy}$, which affects the shape of the reflected beam: a larger difference results in a more elliptical beam. \vspace{0.1cm} \noindent \textbf{2. Obtaining the cluster size (in number of unit cells):} The nature of a metasurface, consisting of an array of unit cells, dictates the discretization of space. Therefore, the values of $d_{cx}$ and $d_{cy}$ need to be approximated by an integer number of unit cells. For this purpose, we consider that the numbers of unit cells in the $x$ and $y$ directions, designated by $c_{x} \in \mathbb{Z}$ and $c_{y} \in \mathbb{Z}$, respectively, are rounded as \begin{equation} \label{eq:cluster} c_{x} = \lfloor \frac{d_{cx}}{d_{u}} \rceil \,\, ,\,\, c_{y} = \lfloor \frac{d_{cy}}{d_{u}} \rceil. \end{equation} Figure \ref{fig:cx} shows the absolute value of $c_{x}$ as a function of the target direction for a normally incident plane wave. It is observed that $c_{x}$ becomes arbitrarily large as the reflected angle approaches $\theta_{r} = 0$ (white areas of the figure). This is consistent with the fact that such a direction implies specular reflection, which can only be realized with a homogeneous surface, i.e. zero gradient. For directions approaching $\phi=\pi/2,~3\pi/2$, $c_{x}$ also becomes large because the gradient is only needed in the $y$ axis. Conversely, $c_{x}$ approaches zero in the co-planar directions, where an infinite gradient would be required. The black area in Fig. \ref{fig:cx} denotes $c_{x} < 1$, which is unfeasible. \begin{figure}[!t] \centering {\includegraphics[width=0.8\columnwidth]{./figures/cx2.png}} \vspace{-0.4cm} \caption{Absolute value of $c_{x}$ as a function of the desired direction of reflection. \label{fig:cx}} \vspace{-0.1cm} \end{figure} \vspace{0.1cm} \noindent \textbf{3.
Obtaining the size of the super unit cell:} To calculate the size of the super unit cell, designated by $s_{x} \in \mathbb{Z}$ and $s_{y} \in \mathbb{Z}$, in number of unit cells, one needs to apply \begin{equation} \label{eq:dphi4} s_{x} = 2^{n}c_{x} \,\, ,\,\, s_{y} = 2^{n}c_{y}. \end{equation} \subsection{Evaluation} \label{sec:evaluation} In this section, we evaluate the proposed metasurface both numerically and analytically. We numerically model the four states of the $2G$ unit cell in CST \cite{CST}, and apply the formulation developed above to assign the states to different unit cells of an $M\times N$ metasurface. Then, we obtain the response of the metasurface in the form of the far field pattern produced by a normally incident plane wave. We assume that the beam covers the whole metasurface. \begin{figure*}[!t] \centering \label{fig:Meta1N}{\includegraphics[width=1.8\columnwidth]{./figures/Figure10.png}} \vspace{-0.2cm} \caption{Radiation pattern of the metasurface structure at 2 THz with the main reflected beam pointed at $\{\phi_{r}=30^{o},\theta_{r}=45^{o}\}$ by a) numerical approach and b) analytical approach. The incident wave is normal to the metasurface. \label{fig:Meta1}} \vspace{-0.2cm} \end{figure*} \begin{figure*}[!t] \centering \label{fig:Meta22}{\includegraphics[width=1\textwidth]{./figures/Figure11.png}} \vspace{-0.5cm} \caption{Radiation pattern of the metasurface structure at 2 THz with the main reflected beam pointed at the directions (numerical approach): a) $\{\phi_{r}=130^{o},\theta_{r}=30^{o}\}$, b) $\{\phi_{r}=230^{o},\theta_{r}=20^{o}\}$, and c) $\{\phi_{r}=340^{o},\theta_{r}=60^{o}\}$. The incident wave is normal to the metasurface. 
\label{fig:Meta2}} \vspace{-0.2cm} \end{figure*} \begin{table*}[!t] \caption{Design and Performance Results.} \vspace{-0.2cm} \label{tab:performance} \footnotesize \centering \begin{tabular}{|c||cc|cc|cc|cc|} \hline Direction & $d_{cx}$ & $d_{cy}$ & $c_{x}$ & $c_{y}$ & $\phi_{3dB}$ & $\theta_{3dB}$ & $Err_{\phi}$ & $Err_{\theta}$ \\ $\{\phi_{r}, \theta_{r}\}$ & \multicolumn{2}{c|}{($\upmu$m)} & \multicolumn{2}{c|}{(cells)} & \multicolumn{2}{c|}{(\textsuperscript{o})} & \multicolumn{2}{c|}{(\%)} \\ \hline \{30\textsuperscript{o}, 45\textsuperscript{o}\} & 61.24 & 106.07 & 3 & 5 & 5 & 5.25 & 2.5 & 3.3 \\ \{130\textsuperscript{o}, 30\textsuperscript{o}\} & 116.68 & 97.91 & 6 & 5 & 8.25 & 4.75 & 0.58 & 4.16 \\ \{230\textsuperscript{o}, 20\textsuperscript{o}\} & 170.57 & 143.13 & 8 & 7 & 10.5 & 4 & 0.11 & 6.25 \\ \{340\textsuperscript{o}, 60\textsuperscript{o}\} & 46.08 & 126.6 & 2 & 6 & 4.5 & 8.25 & 0.15 & 3.75 \\ \hline \end{tabular} \vspace{-0.3cm} \end{table*} In the analytical approach that is used to verify the numerical results, the reflection phase $\Phi(p,q)$ of each unit cell of size $d_{u}$ is assumed to be exactly either 0, $\pi/2$, $\pi$, or $3\pi/2$. Assuming a designed phase distribution assigned to the unit cells, we can express the far-field scattering pattern $F(\theta,\phi)$ as \begin{equation} F(\theta,\phi) = f_{E}(\theta,\phi)\times f_{A}(\theta,\phi) \end{equation} where $\theta$ and $\phi$ are the elevation and azimuth angles of an arbitrary direction, respectively, and $f_{E}(\theta,\phi)$ and $f_{A}(\theta,\phi)$ are the element factor (pattern function of unit cell) and array factor (pattern function of unit cell arrangement), respectively. 
Here, the unit cells are assumed to be isotropic, and therefore the scattering pattern depends only on the array factor \begin{equation} \begin{array}{l} F(\theta,\phi) = \sum_{p=1}^{M}\sum_{q=1}^{N}\exp\{-j[\Phi(p,q)+ \\ +\, kd_{u}(p-1/2)\sin{\theta}\cos{\phi}\, + \\ +\, kd_{u}(q-1/2)\sin{\theta}\sin{\phi}]\}. \end{array} \end{equation} We first evaluate the metasurface when configured to steer the beam at $\{\phi_{r}=30^{o},\theta_{r}=45^{o}\}$ with $M=N=100$. Following the design flow from Section \ref{sec:flow} and assuming 20-$\upmu$m unit cells and 2-bit coding, we obtain $d_{cx} = 61.24 \,\upmu$m and $d_{cy} = 106.07 \,\upmu$m, which leads to a cluster of $3\times 5$. The super unit cell thus extends over $12\times 20$ unit cells. Figure \ref{fig:Meta1} shows the far field pattern of the resulting metasurface, which confirms that there is a good agreement between the numerical and analytical solutions. The small differences in sidelobe levels can be attributed to the marginal unit cells of the clusters, super unit cells, and the whole structure. We obtain the amplitude and phase of the reflection coefficient of the proposed unit cells by assuming a periodic boundary condition, which accounts for the mutual coupling between identical adjacent unit \hl{cells}. In the real beam steering metasurface, however, the unit cells at the margins are adjoined by dissimilar cells, so the periodicity assumption breaks down. The full-structure simulation captures the coupling of these marginal unit cells correctly, whereas the theoretical analysis of the entire metasurface necessarily ignores it. It is seen that the reflected beam indeed points to the target direction. The steering error, evaluated as the difference between the target and achieved angles, is 2.5\% and 3.3\% in $\phi$ and $\theta$, respectively.
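The numbers in this design example follow directly from \eqref{eq:dphi3}, \eqref{eq:cluster}, and \eqref{eq:dphi4}, and can be reproduced with a few lines of Python (a minimal sketch; the function name is ours):

```python
import math

def design_clusters(f, phi_r_deg, theta_r_deg, n_bits, d_u):
    """Steps 1-3 of the design flow: cluster size in metres (Eq. dphi3),
    cluster size in unit cells (Eq. cluster, rounded to the nearest
    integer), and super-unit-cell size (Eq. dphi4)."""
    lam0 = 3e8 / f                      # free-space wavelength [m]
    phi_r = math.radians(phi_r_deg)
    theta_r = math.radians(theta_r_deg)
    d_cx = lam0 / (2**n_bits * math.cos(phi_r) * math.sin(theta_r))
    d_cy = lam0 / (2**n_bits * math.sin(phi_r) * math.sin(theta_r))
    c_x = round(d_cx / d_u)             # may be negative -> reversed gradient
    c_y = round(d_cy / d_u)
    s_x, s_y = 2**n_bits * c_x, 2**n_bits * c_y
    return d_cx, d_cy, c_x, c_y, s_x, s_y

# Worked example from the text: 2 THz, target (30 deg, 45 deg),
# 2-bit coding, 20-um unit cells.
d_cx, d_cy, c_x, c_y, s_x, s_y = design_clusters(2e12, 30, 45, 2, 20e-6)
```

Running it recovers the quoted $61.24\,\upmu$m and $106.07\,\upmu$m cluster dimensions, the $3\times 5$ cluster, and the $12\times 20$ super unit cell.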
The 3-dB width of the beam is approximately 5\textsuperscript{o} in both cases. To further verify the validity of the approach, we reconfigure the metasurface to operate at three different steering directions. Figure \ref{fig:Meta2} shows how the proposed metasurface design is capable of achieving the desired responses and Table \ref{tab:performance} summarizes the characteristics and performance of the resulting configurations. A wide range of reflected angles is achieved with clusters of 2--8 unit cells, achieving in all cases beam widths below 11\textsuperscript{o} (minimum 4\textsuperscript{o}) with steering error below 7\% (minimum 0.11\%). Note that the error and beam width generally increase when approaching \emph{forbidden areas} in the design space, where the gradient tends to zero or infinity. Also, the reflected beam for the cases \{30\textsuperscript{o}, 45\textsuperscript{o}\} and \{340\textsuperscript{o}, 60\textsuperscript{o}\} (especially the latter) tends to be elliptical due to the larger difference between $d_{cx}$ and $d_{cy}$, as hinted in Section \ref{sec:flow}. In addition, to achieve continuous beam scanning with minimum angle variation in a coding metasurface, the convolution approach can be leveraged to steer the far-field pattern to a predetermined direction \cite{liu2016convolution}. Given the Fourier relation between the field distribution on the coding metasurface and the resultant scattering pattern in the far field, one can shift the reflected pencil beam by adding the two calculated phase gradient coding \hl{patterns} so that the total phase gradient deflection angle equals the desired angle. \section{Programmability and Implementation Issues} \label{sec:antenna} The final steps in the design of our beam steering device relate to the elements that control and excite the metasurface. More precisely, we need to conceive a setup that takes the target reflected angle as input and modifies the metasurface accordingly.
To this end, in Sec. \ref{sec:controller} we propose a controller that automatically converts the target reflected angle into a bit matrix defining the states of each unit cell. Then, in Sec. \ref{sec:biasing} we discuss the biasing scheme required to address each unit cell with the appropriate voltage (chemical potential). Finally, we review source considerations in Sec. \ref{sec:source}. \subsection{Controller Design} \label{sec:controller} To achieve programmability, it is necessary to attach the metasurface to a digital device capable of translating the beam steering requirements into the global metasurface state. Algorithm \ref{alg1} shows pseudocode that exemplifies this function. The process starts by calculating the size of the unit cell clusters $c_{x}\times c_{y}$ as a function of the number of bits per cell $n$ and the unit cell dimension $d_{u}$. Then, the gradient can be built easily by assigning consecutive states to adjacent clusters of unit cells. As already mentioned in Section \ref{sec:flow}, \eqref{eq:dphi3} and \eqref{eq:cluster} can produce negative values, in which case the order of states is reversed. Algorithm \ref{alg1} assumes that all unit cells are addressed by a centralized device, probably an FPGA. However, since the metasurface implements a discretized gradient, it would be relatively straightforward to come up with an algorithm that can calculate the required state in a distributed way, relying only on the state of the immediate neighbour. Such a simplified scheme would be suitable for the rising Software-Defined Metamaterial (SDM) paradigm \cite{Liaskos2015, AbadalACCESS}, which aims to provide programmable metamaterials that can be reconfigured via an integrated network of controllers that drive unit cells individually. In that case, an external entity called a gateway would receive the command of changing the direction of the beam.
The gateway would compute $c_{x}$ and $c_{y}$, then relay them to the first controller together with $n$. The first controller would be initialized and pass its state along with $c_{x}$, $c_{y}$, and $n$ to its neighbours, which would repeat the process until the whole metasurface is programmed. \subsection{Actuator Design} \label{sec:biasing} The actuator is a circuit that translates the state matrix $[B]$ provided by the controller into the matrix of appropriate voltages $[V_{G}]$ that, in turn, leads to the required chemical potentials in each graphene patch of the metasurface. As shown in Fig. \ref{fig:actuator}, a set of voltage level shifters and a matrix of multiplexers would be enough for this purpose. Note that several independent sets of multiplexers (two in our case) may be required to drive the graphene patches of individual unit cells. It is also worth noting that only five distinct voltages are needed in our case, because several states share the same target chemical potentials according to the calculations made in Section \ref{sec:states}. \begin{algorithm}[!t] \caption{Algorithm for clustered gradient formation.} \label{alg1} \begin{algorithmic} \small \STATE Inputs: du, phiR, thetaR, n, f \STATE \STATE /* CALCULATION OF THE CLUSTER SIZES */ \STATE lambda = 3e8/f; \STATE dcx = lambda/(2\^{}n*cos(phiR)*sin(thetaR)); \STATE dcy = lambda/(2\^{}n*sin(phiR)*sin(thetaR)); \STATE cx = round(dcx/du); // MAY BE NEGATIVE \STATE cy = round(dcy/du); // MAY BE NEGATIVE \STATE \STATE /* CALCULATION OF THE STATE MATRIX */ \STATE for(i=1; i$\leq$M; i++) \{ \STATE $\,\,\,\,$for(j=1; j$\leq$N; j++) \{ \STATE $\,\,\,\,\,\,\,\,$B(i,j) = (round(i/cx) + round(j/cy)) mod(2\^{}n); \STATE $\,\,\,\,$\} \STATE \} \end{algorithmic} \end{algorithm} The actual voltages required at the output of the level shifters mainly depend on the graphene biasing structure and the required chemical potential \cite{Huang2012ARRAY, Gomez2015}.
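The state-matrix step of Algorithm \ref{alg1} can be made concrete in a few lines of Python (our sketch, not the authors' implementation; we use floor division for the cluster index, since rounding $i/c_{x}$ as in the pseudocode mixes cells of adjacent clusters):

```python
def state_matrix(M, N, c_x, c_y, n_bits):
    """Assign an n-bit state to each of the M x N unit cells so that
    consecutive states form a phase gradient over clusters of
    |c_x| x |c_y| cells. A negative c_x or c_y reverses the gradient
    along that axis, as required by the design flow."""
    B = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            # cluster index along each axis, wrapped over the 2^n states
            B[i][j] = (i // c_x + j // c_y) % 2**n_bits
    return B

# 2-bit coding with the 3 x 5 clusters of the design example: the state
# advances by one every 3 cells in x and every 5 cells in y.
B = state_matrix(12, 20, 3, 5, 2)
```

Because Python's floor division rounds toward negative infinity, a negative $c_{x}$ or $c_{y}$ automatically produces the reversed gradient without any special case.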
The configuration assumed in this paper is similar to that used in \cite{Huang2012ARRAY}, which couples graphene capacitively with a back gate through a thin Al$_{2}$O$_{3}$ layer. Essentially, this scheme shifts the operation of graphene between the Dirac point, where the chemical potential is minimum, and the point where the chemical potential reaches the maximum desired value. The resulting chemical potential $\mu_{c}$ relates to the change of voltage $\Delta v_{g}$ as \begin{equation} \label{eq:SLG} \Delta v_{g} = \frac{e \mu_{c}^2 t}{\pi \hbar^2 v_{F}^2 \varepsilon_{0} \varepsilon_{r}}, \end{equation} where $e$ is the elementary charge, $\hbar$ is the reduced Planck constant, $v_{F} \approx 10^{6} $ m/s is the Fermi velocity, $\varepsilon_{0}$ is the vacuum permittivity, whereas $\varepsilon_{r}$ and $t$ are the permittivity and thickness of the material below graphene \cite{Yu2009Chemical}. Figure \ref{fig:voltage} illustrates the voltage ranges required to achieve a certain target chemical potential range. As directly implied by \eqref{eq:SLG}, the voltage requirements increase quadratically with the target chemical potential range. To limit the requirements, one can either minimize the spacing between the gate and the graphene layer or use materials with a high dielectric constant. However, the former is determined by technological constraints, and the latter needs to take into consideration the cost and other characteristics of the material. The chemical potential range required by our metasurface can be obtained easily once the unit cell states are defined. In the present design, $\Delta\mu_{c} = 1.3$ eV. Assuming an Al$_{2}$O$_{3}$ layer ($\varepsilon_{r} = 9.1$) with thickness $t = 10$ nm, achievable with current technologies \cite{Huang2012ARRAY}, the resulting voltage range is 24.9 V.
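The quoted voltage range can be checked directly against \eqref{eq:SLG}. A short Python sketch (constant values are standard CODATA figures; the small discrepancy with the quoted 24.9~V stems from rounding in the constants):

```python
import math

# Physical constants (SI, CODATA values)
e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
v_F = 1e6                # Fermi velocity in graphene [m/s]

def gate_voltage(mu_c_eV, t, eps_r):
    """Gate-voltage swing for a chemical-potential swing mu_c, Eq. (SLG):
    dV = e * mu_c^2 * t / (pi * hbar^2 * v_F^2 * eps0 * eps_r)."""
    mu_c = mu_c_eV * e   # convert eV -> J
    return e * mu_c**2 * t / (math.pi * hbar**2 * v_F**2 * eps0 * eps_r)

# Design point from the text: 1.3 eV swing, 10 nm Al2O3 (eps_r = 9.1).
dV = gate_voltage(1.3, 10e-9, 9.1)   # ~25 V, in line with the quoted 24.9 V
```

The quadratic dependence on $\mu_{c}$ is also easy to verify: doubling the target chemical-potential swing quadruples the required voltage.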
\begin{figure}[!t] \centering {\includegraphics[width=0.7\columnwidth]{./figures/actuator.pdf}} \vspace{-0.3cm} \caption{Sample implementation of the actuator for the metasurface based on the $2G$ unit cell and 2-bit coding.\label{fig:actuator}} \vspace{-0.3cm} \end{figure} \begin{figure*}[!ht] \centering \subfigure[Voltage as a function of the dielectric thickness.\label{fig:voltT}]{\includegraphics[width=1\columnwidth]{./figures/voltage_t.pdf}} \subfigure[Voltage as a function of the dielectric constant.\label{fig:voltE}]{\includegraphics[width=1\columnwidth]{./figures/voltage_er.pdf}} \vspace{-0.1cm} \caption{Voltage range required at the level shifting stage to achieve a given target chemical potential range.\label{fig:voltage}} \vspace{-0.3cm} \end{figure*} \subsection{Towards an Experimental Setup} \label{sec:source} Figure \ref{fig:setup} illustrates a possible measurement setup for the experimental \hl{validation} of the metasurface. \hl{The testbed would mainly consist of a fiber-coupled time domain spectroscopy (THz-TDS) system with a fixed source (generally harder to calibrate) and a movable receiver placed on a rotatory platform. The source is based on a photoconductive antenna coupled to a focusing or collimating lens that minimizes spreading losses. Additional optics such as parabolic reflectors can be incorporated to meet the receiver sensitivity as well as the source--metasurface--receiver distance requirements. It is worth noting that such a scheme has been used successfully in other works proving anomalous reflection in the THz band} \cite{Liu2016a, Liang2015a}\hl{. 
A very similar system has been built in} \cite{Kokkoniemi2016} \hl{for the measurement of the reflection coefficient of surfaces in the terahertz band entirely with commercial solutions.} \begin{figure}[!ht] \centering {\includegraphics[width=0.8\columnwidth]{./figures/setup.pdf}} \vspace{-0.3cm} \caption{\hl{Sketch of a potential measurement setup for the proposed device.}\label{fig:setup}} \vspace{-0.3cm} \end{figure} \section{Discussion} \label{sec:disc} In this section, we qualitatively discuss several cross-cutting issues related to the design of the metasurface. \vspace{0.1cm} \noindent \textbf{Scalability analysis:} To accommodate the proposed design flow to different beam steering specifications, it is crucial to understand what the key design parameters are and which performance metrics they affect. Here, we highlight some: \begin{itemize} \item The size of the unit cell presents an interesting tradeoff. While it may be difficult to achieve a wide phase range if the unit cell is too small with respect to the wavelength (see Figs. \ref{fig:results1G} and \ref{fig:results1Gp}), reducing its dimensions leads to a rise in the maximum achievable phase gradient. This is useful for achieving better control at the end-fire directions ($\theta_{r}\to 90$\textsuperscript{o}) of the metasurface, as exemplified by Fig. \ref{fig:rangeDX}. For instance, we can achieve beam steering at $\theta_{r} > 70$\textsuperscript{o} only for $d_{u} < 5~\upmu$m. For angles even closer to $\theta_{r}=90$\textsuperscript{o}, a design converting the incident wave into a surface wave may be required \cite{Tcvetkova2017}. In any case, note that such fine-grained control at THz frequencies can only be achieved with graphene, thanks to its support of plasmonic slow-wave propagation in this frequency band. \item Increasing the number of bits provides better control of the phase, as it allows the phase gradient to be drawn more accurately, with more clusters and fewer unit cells per cluster.
This is of special importance in directions close to the boresight ($\theta_{r}\to 0$\textsuperscript{o}), where a subtle gradient is required. Fig. \ref{fig:resDX} exemplifies this for a design targeting $\theta_{r} = 5.37$\textsuperscript{o} with fixed size but an increasing number of bits. The 3-bit instance greatly reduces the side lobes and has its maximum at $\theta_{r} = 5.26$\textsuperscript{o}, whereas the beam moves away from the desired direction for the 2-bit and 1-bit cases (4.92\textsuperscript{o} and 4.22\textsuperscript{o}, respectively). Note, however, that the gain in accuracy comes at the cost of a substantially higher complexity at the controller and the actuator. \item In arrays, adding more antennas allows the beam width to be reduced. The same principle should apply in our design, as exemplified in Section \ref{sec:evaluation} with the array factor formulation. We verified this hypothesis by fixing the gradient and doubling the number of super cells once and then again (from $M=N=40$ to $M=N=160$). The resulting far field patterns, shown in Fig. \ref{fig:sizeDX}, clearly demonstrate that the beam is sharpened without significantly changing the direction of maximum energy. In fact, the beam width is reduced by a factor proportional to the increase in metasurface size, i.e. from 50\textsuperscript{o} and 5\textsuperscript{o} to 12\textsuperscript{o} and 1.3\textsuperscript{o} in the $\theta$ and $\phi$ angles, respectively.
\end{itemize} \begin{figure*}[!ht] \centering \subfigure[Achieved $\theta_{r}$ for different values of $d_{u}$.\label{fig:rangeDX}]{\includegraphics[width=0.68\columnwidth]{./figures/range_dx.pdf}} \subfigure[Far field patterns for different number of bits (top) and metasurface sizes (bottom).\label{fig:resDX} \label{fig:sizeDX}]{\includegraphics[width=1.3\columnwidth]{./figures/res+size_dx.png}} \vspace{-0.1cm} \caption{Performance scalability analysis of the metasurface from the unit cell, coding, and complete device perspectives.} \vspace{-0.3cm} \end{figure*} \vspace{0.1cm} \noindent \textbf{Co-design opportunities:} Understanding the design flow from the point of view of the unit cell, the metasurface, or the device as a whole helps to identify possible co-design opportunities. For instance, the unit cell configuration and the coding determine the complexity in terms of the number of required voltage levels, as well as the quantity and size of the multiplexers. For an $n$-bit coding with $g$ independently biased graphene patches, we may require up to $g\cdot 2^{n}$ levels and $g$ multiplexers with $2^{n}$ inputs. However, advanced design exploration techniques may allow us to find design points that reduce the voltage range, the number of required levels, and the multiplexer inputs while gracefully degrading the performance of the system. For instance, in our 2-bit implementation with the $2G$ unit cell, we could take $\mu_{c,1} = \mu_{c,2} = \{0.1, 0.6\}$ and still achieve reasonable performance, but with a $4\times$ reduction of the voltage range and number of levels and a great simplification of the multiplexing circuits. \vspace{0.1cm} \noindent \textbf{Adaptive clusterization:} The strength of the proposed design flow is its simplicity. By fixing the cluster size first and then statically building the super cells, the state matrix can be calculated easily.
However, the rounding operation used in the number of unit cells per cluster (Equation \eqref{eq:cluster}) introduces an error, especially for large unit cells, that is later amplified by the static building of super cells. Both issues can be alleviated by simply inverting the design flow, i.e. obtaining the size of the super cell first, and then breaking it down into unequal clusters. For instance, a super cell composed of 18 unit cells can be coded with 4, 5, 4, and 5 unit cells per state; otherwise, the super cell would be statically coded to either 4 or 5 unit cells per state, leading to a significant error. In a similar approach, the coding algorithm could dynamically adapt the number of states, using fewer bits in those directions that require a very large gradient. \section{Conclusion} \label{sec:conclusion} This paper has presented the complete design, from the unit cell up to the programming algorithm, of a reconfigurable digital metamaterial for beam steering in the terahertz band. The tunability of graphene is exploited at the unit cell level to provide a phase range close to $2\pi$, whereas the generalized Snell's law of reflection has been used to derive the phase gradients required to steer the beam to the desired direction. The results confirm the validity of the approach, which for normal incidence achieves a very broad reflection range with angle-dependent beam widths and steering errors. Considering normal incidence, the analytical formulation also models forbidden (and unreasonable) reflection directions effectively as infinite gradients. Finally, the scalability analysis confirms that the beam width depends on the size of the metasurface, the reflection range depends on the size of the unit cells, and the steering error and side lobe levels depend on the number of phases that the graphene-based unit cells can implement.
Future works could leverage the comprehensive methodology developed herein to optimize the unit cell design and phase gradient formation, reducing the overhead of the solution and further improving the beam steering performance. \section*{Acknowledgment} This work has been partially funded by Iran's National Elites Foundation (INEF), the Spanish Ministry of \emph{Econom\'ia y Competitividad} under grant PCIN-2015-012, and by ICREA under the ICREA Academia programme. Also, the authors would like to thank Christoph S{\"u}{\ss}meier and the anonymous reviewers for their invaluable feedback. \section*{Appendix} To extract $\phi_{r}$ from \eqref{eq:dphi}, we divide both expressions and apply basic trigonometry to obtain \begin{equation} \tan{\phi_{r}} = \frac{k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy}}{k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx}} \end{equation} which yields \begin{equation} \phi_{r} = \arctan{\frac{k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy}}{k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx}}} \end{equation} To extract $\theta_{r}$ from \eqref{eq:dphi}, we square and sum both expressions: \begin{equation} \begin{array}{l} k_{r}^{2} \sin{\theta_{r}}^{2}\cos{\phi_{r}}^{2} + k_{r}^{2} \sin{\theta_{r}}^{2}\sin{\phi_{r}}^{2} = \\ = (k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx})^{2} + (k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy})^{2} \end{array} \end{equation} which, after applying basic trigonometry, becomes \begin{equation} \begin{array}{l} k_{r}^{2} \sin{\theta_{r}}^{2} = \\ = (k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx})^{2} + (k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy})^{2} \end{array} \end{equation} Isolating $\theta_{r}$, we obtain \begin{equation} \begin{array}{l} \theta_{r} = \arcsin{ \frac{\sqrt{(k_{i} \sin{\theta_{i}}\cos{\phi_{i}} + \frac{d\Phi}{dx})^{2} + (k_{i} \sin{\theta_{i}}\sin{\phi_{i}} + \frac{d\Phi}{dy})^{2} }}{k_{r}} } \end{array} \end{equation}
\section{Modified MOND inertia vs. modified MOND gravity} MOND is a modification of non-relativistic dynamics involving an acceleration constant $a_{0}$. In the formal limit $a_{0}\rar0$ standard Newtonian dynamics is restored. In the deep MOND limit, $a_{0}\rightarrow \infty$, $a_{0}$ and $G$ appear in the combination $(Ga_{0})$. Much of the NR phenomenology follows from this simple prescription, including the asymptotic flatness of rotation curves, the mass-velocity relations (baryonic Tully-Fisher and Faber-Jackson relations), mass discrepancies in LSB galaxies, etc.. There are many realizations (theories) that embody the above dictates, relativistic and non-relativistic. \par The possibly very significant fact that $a_{0}\sim cH_0\sim c(\Lambda/3)^{1/2} $ may hint at the origin of MOND, and is most probably telling us that a. MOND is an effective theory having to do with how the universe at large shapes local dynamics, and b. in a Lorentz universe (with $H_0=0,~ \Lambda=0$) $a_{0}=0$ and standard dynamics holds. \par We can broadly classify modified theories into two classes (with the boundary not so sharply defined): In modified-gravity (MG) formulations the field equation of the gravitational field (potential, metric) is modified; the equations of motion of other degrees of freedom (DoF) in the field are not. In modified-inertia (MI) theories the opposite is true. More precisely, in theories derived from an action, modifying inertia is tantamount to modifying the kinetic (free) actions of the non-gravitational degrees of freedom. Local, relativistic theories in which the kinetic actions are of the standard form with some physical metric are of the MG type; so, relativistic MI theories are non-local or non-metric. \par Start, for example, from the standard NR action $$ S=-{1\over 8\pi G}\intd^3r~(\grad\phi)^2-\sum_i m_i\phi( \textbf{r}_i) +\sum_i m_i\int dt ~v_i^2(t)/2, $$ which describes a system of masses $m_i$ interacting gravitationally.
Modifying gravity would be modifying the free action of the gravitational potential (the first term) into something like $-(a_{0}^2/8\pi G)\intd^3r~F(a_{0},\phi,\grad\phi, ...)$, where in the deep MOND limit $F\propto a_{0}^{-3}$ (e.g. the theory of Bekenstein and Milgrom 1984). In MI we replace the particle kinetic action by $~\sum m_i S_K[a_{0},\{ \textbf{r}_i(t)\}]$, where $\{ \textbf{r}_i(t)\}$ represents the full trajectory of particle $i$ and the kinetic action is a functional of it. In the deep MOND limit $S_K\rightarrow{1\over a_{0}}s_K[\{ \textbf{r}(t)\}].$ In such theories the equation of motion of a particle in the (unmodified) gravitational potential, $\phi$, is of the form $~ \textbf{A}[\{ \textbf{r}(t)\}, \textbf{r}(t),a_{0}]=-\grad\phi[ \textbf{r}(t)],$ where the inertia functional $ \textbf{A},$ of the dimensions of acceleration, is a functional of the whole trajectory and a function of the instantaneous position; it reduces to the acceleration for $a_{0}\rightarrow 0$. For $a_{0}\rightarrow\infty$ the equation of motion takes the form $\textbf{U} [\{ \textbf{r}(t)\}, \textbf{r}(t)]=-a_{0}\grad\phi[ \textbf{r}(t)].$ Special relativity entails a familiar example of modified (non-MOND) inertia with the standard NR particle kinetic action being replaced by $S_K=-\int\delta\tau=-\int[1-(v/c)^2]^{1/2}~dt$ such that the equation of motion becomes $\textbf{F}=md(\gamma\textbf{v})/dt=m\textbf{A}=m\gamma[\textbf{a}+\gamma^2(\textbf{a}\cdot\textbf{v})\textbf{v}/c^2].$ \par With the exception of some heuristic proposals described in Milgrom (1994, 1999), all MOND theories proposed to date are of the MG type (e.g. Bekenstein \& Milgrom 1984, Soussa \& Woodard 2003, Bekenstein 2004, Sanders 2005). 
\section{Some properties of non-relativistic modified inertia theories} In Milgrom (1994, 1999) I derived certain general properties of NR MI formulations of MOND for particle dynamics: If we retain Galilei invariance in addition to the requirements of Newtonian and MOND limits, the particle kinetic action has to be non-local in time. For example, an action of the form $\int f(a/a_{0})v^2~dt$ can give the desired MOND dynamics, but is not Galilei invariant. The Lorentz invariant action $-\int F(a^\mu a_\mu/a_{0}^2)d\tau$ ($a^\mu=d^2x^\mu/d\tau^2$), replacing the Lorentz free particle action $-\int d\tau$, does have a Galilei invariant NR limit, but this is, alas, $-\int F(a^2/a_{0}^2)dt$, which is not the correct NR action. It seems to me that if we forgo Galilei invariance we should replace it with a more general symmetry, one that involves $a_{0}$, and that reduces to Galilei when $a_{0}\rightarrow 0$. This must then entail a corresponding extension of Lorentz invariance (see below). Given a particle kinetic action, $S_K$, bound trajectories satisfy an integral, virial relation of the form $S_K(1+\pd{\ln S_K}{\ln a_{0}})={1\over 2}\langle \textbf{r}\cdot\grad\phi\rangle$ ($\langle\rangle$ is the time average). From this it follows that for any circular orbit in an axi-symmetric potential we have $$\mu(g/a_{0})g=g_N,$$ where $g=v^2/r$ is the correct (MOND) acceleration, $g_N=-\partial \phi/\partial r$ the Newtonian acceleration, and $\mu(x)$ is simply derived from the action as restricted to circular orbits (we only have to know the action values for circular orbits to get $\mu(x)$). \section{Observable differences} While the most salient aspects of galaxy dynamics are very similar in mondified inertia and mondified gravity, there are important differences that may eventually help reject one in favor of the other. 1.
The predictions of the two differ when forces other than gravity are present; e.g., in a Millikan-like experiment where strong gravity is almost balanced by an electric force, resulting in a sub-$a_{0}$ acceleration. In MG there should not be a MOND departure, as the gravitational field is large; in MI there should, as the total acceleration is small. Such an experiment does not seem feasible at present. 2. The definition of conserved quantities and adiabatic invariants (momentum, angular momentum, etc.) in terms of the non-gravitational degrees of freedom is different in the two approaches: these quantities are derived from the kinetic actions, which are modified in MI, but not in MG (for example, in SR the momentum is $m\gamma\textbf{v}$). All significant tests of MOND to date concern stationary situations and do not involve the conservation laws. But future studies involving formation, mergers, accretion, relaxation, etc. of and in galaxies may become accurate enough to constrain the type of underlying modification. 3. Even in simple stationary situations, predictions of observables, such as galaxy rotation curves, may differ somewhat in the two classes of theories. For example, we saw above that MI predicts $\mu(g/a_{0})g=g_N$ for the rotation curves, while MG (e.g. the NR modified-Poisson theory propounded by Bekenstein and Milgrom 1984) gives somewhat different results. The differences were considered by Brada \& Milgrom (1995); they are not large and are also partly masked by uncertainties in the form of the interpolating function $\mu$. But, with the number of galaxies with good data increasing, the time may be ripe for a detailed analysis that might constrain $\mu(x)$ and simultaneously perhaps distinguish between the alternatives (see e.g. Famaey \& Binney 2005). 4.
With MG we still have in the NR regime $\textbf{a}\equiv\dot{\textbf{v}}=-\grad\phi;$ so all test bodies have the same acceleration at the same position in the modified potential $\phi$, irrespective of their trajectory. With MI, the inertial force per unit mass, $\textbf{A}$, is no longer the acceleration; so the measured acceleration depends not only on position but on details of the trajectory as well. (In SR, e.g., electrons running perpendicular or parallel to an electric field have the same $d(\gamma\textbf{v})/dt$, but undergo different accelerations.) In particular, the function $\mu(a/a_{0})$ appearing above in the description of circular orbits in MI is not relevant for other trajectories, for which we do not even have a simple relation between the MOND and Newtonian accelerations. For instance, a term in the action of the form $\int~dt~f(a/a_{0})(\textbf{a}\cdot\textbf{v}/a_{0})^2$ enters strongly for linear trajectories, but does not affect circular trajectories at all (since it vanishes for them). The fact that in MI we have to specify an action that is a functional of the trajectory permits us infinitely more freedom than in MG. So we can make the modification strongly dependent on orbital eccentricity, or on the degree of binding of the orbit, and so on. \par In galaxies, one measures instantaneous velocities and distances, assumes an orbit, and deduces the acceleration from these. If it were possible to directly measure the accelerations of bodies in the same position but on different orbits, they should agree in MG but may differ in MI. It is difficult to estimate the expected differences without a specific theory.
In the Newtonian regime the differences are small, of course, whereas in the NR MOND regime we saw that the equation of motion is of the form $\textbf{U} [\{ \textbf{r}(t)\}, \textbf{r}(t)]=-a_{0}\grad\phi[ \textbf{r}(t)],$ where $\textbf{U}$ has dimensions of acceleration$^2$, and has the same value for all particles at the same position. The differences in the actual accelerations might then not be so strongly dependent on the orbit if, for example, $\textbf{U}$ is dominated by $a^2$. Perhaps a comparison between the behavior of massive bodies and light rays will enlighten us on this point, but for that we would need a relativistic version of MI. \par Closer to home, the Pioneer anomaly, if verified as a new-physics effect (Anderson et al. 2002), might provide a decisive test. It can be naturally explained in the context of MOND as MI but is difficult to explain in the context of a MG theory (Milgrom 2002): The Pioneer anomaly has no match in planetary motions for which a constant, unmodelled acceleration of the magnitude shown by the spacecraft is ruled out by a large margin. The planets probe heliocentric radii smaller than where the Pioneer anomaly has been found. So a MG theory may still have a little leeway by having the anomaly set in rather abruptly with distance just at the interim heliocentric radii (e.g., Sanders 2005). A MI explanation will build on the fact that the orbits of the spacecraft differ greatly from those of the planets: the former are close to linear and unbound, the latter quasi circular and bound. It is intriguing in this connection that the analysis for Pioneer 11 (Anderson et al. 2002) shows an onset of the anomaly just around the time where the spacecraft was kicked from a bound, nearly elliptical orbit to the unbound, almost linear orbit on which it is now (the corresponding event for Pioneer 10 is not covered). The onset still awaits verification, but if real, it would be a signature of MI.
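The circular-orbit relation $\mu(g/a_{0})g=g_N$ quoted earlier, and the vanishing of $(\textbf{a}\cdot\textbf{v})$ terms on circular orbits, can be illustrated numerically. The sketch below adopts the interpolating function $\mu(x)=x/(1+x)$ purely for illustration; MI does not fix this form:

```python
import math
import numpy as np

a0 = 1.0  # work in units of a0

def g_mond(gN):
    """Solve mu(g/a0)*g = gN for g, with the illustrative choice
    mu(x) = x/(1+x) (an assumption of this sketch, not of the theory);
    the relation reduces to the quadratic g^2 - gN*g - gN*a0 = 0."""
    return 0.5 * (gN + math.sqrt(gN**2 + 4.0 * gN * a0))

# Newtonian limit g_N >> a0: g -> g_N
assert abs(g_mond(1e6) / 1e6 - 1.0) < 1e-5
# Deep-MOND limit g_N << a0: g -> sqrt(g_N * a0), i.e. asymptotically
# flat rotation curves with v^4 = G*M*a0
assert abs(g_mond(1e-6) / math.sqrt(1e-6 * a0) - 1.0) < 1e-2

# A term built from (a.v), such as f(a/a0)*(a.v/a0)^2, vanishes on
# circular orbits: for r(t) = (cos t, sin t), a is orthogonal to v
t = np.linspace(0.0, 2.0*np.pi, 1001)
v = np.stack([-np.sin(t), np.cos(t)], axis=1)
a = np.stack([-np.cos(t), -np.sin(t)], axis=1)
assert np.allclose(np.sum(a * v, axis=1), 0.0)
```

This makes concrete why an $(\textbf{a}\cdot\textbf{v})$-dependent term can distinguish the near-linear Pioneer trajectories from the quasi-circular planetary orbits while leaving the latter untouched.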
\par In the sense discussed here, the dark matter doctrine is a kind of MG; so any indication that the mass discrepancy in galactic systems is due to MI will also argue against DM. \section{Possible approaches to MOND inertia} We do not have a MI theory for MOND at the level of satisfaction achieved for MG formulations. This line of inquiry has attracted relatively little attention, perhaps because MI is technically more difficult to implement as a fundamental theory. But, instances of MI in effective theories are rife in physics, from the kinematics of electrons in solids and bodies in fluids, to mass renormalization and the Higgs mechanism in field theory. MOND too could result as such an effective theory. Special relativity is another possible source of inspiration in seeking to modify inertia. It entails a modification of newtonian inertia, brought about by the imposition of a new symmetry: Lorentz invariance. Whichever idea we follow we should be guided by the cosmological connection of $a_{0}$, hinting that MOND might result only in the context of a non-Minkowskian universe, with $a_{0}$ reflecting the departure from flatness of space time. \subsection{Derived, effective inertia} It is well known that objects moving in a medium with which they interact (electrons in solids, photons in refractive media, bodies in fluids) may exhibit a revised form of inertia. Surprisingly, it often happens that the interactions with the medium can all be encapsulated, at some level of approximation, as a reshaping of the inertial properties of the object: its motion is governed by a modified, effective ``free'' action with the degrees of freedom of the medium disappearing from the problem. MOND inertia, or indeed the whole of inertia, may result in a similar way.
We then have to find an appropriate omnipresent medium, describe the interaction of all known physical DoF with it, and show that to a sufficient approximation this interaction can lead to inertia as we know it (with MOND). In other words, we want to show that the known actions of all DoF result as effective actions from such a mechanism. This would shed new light on Mach's principle because MOND brings into account a new connection between the universe at large and inertia. \par An effective theory can violate some of the hallowed principles of relativity even though the fundamental theory from which it is derived does not: Effective theories may be non-local, violate the equivalence principle at different levels, etc.. An effective theory also has a more limited applicability than its parent theory. So, if we derive an effective MOND inertia as we now apply it to galactic systems, with the acceleration constant and the interpolating function coming out of the model in the context of cosmology, this theory need then not be applicable to cosmology itself (perhaps not even to local systems involving strong gravitational fields). The hope is, however, that when we understand the origin of MOND in such terms, the role played by the inertia-modifying medium and the way it affects cosmology and other strong-field systems can also be understood. \par As discussed in Milgrom (1999), the vacuum might constitute an appropriate medium: we know it can define an inertial frame since an accelerated observer can detect its acceleration with respect to the vacuum through the Unruh effect. And the vacuum is also affected by the cosmological state of the universe (e.g., the Gibbons-Hawking effect) so it has the potential to explain the nonzero $a_{0}$ as a result of non-Minkowskian cosmology.
(The field or medium responsible for the observed acceleration of the universe is also a potential candidate: its deduced present day density is numerically related to $a_{0}$, which could underlie the link.) I presented in Milgrom (1999) a heuristic argument showing how a MOND-like inertia could follow in this context. There are also pieces of evidence suggesting that kinetic actions can form spontaneously solely through interaction of DoF with the vacuum. For example, the mere interaction of the electromagnetic field with the charged DoF of the vacuum produces a contribution (of the standard form) to its kinetic action--the so-called Heisenberg-Euler action (see e.g. Itzykson and Zuber 1980). But we are still a far cry from having a theory based on this idea. \par Some general questions arise when one embarks on such a program: The known instances of derived inertia start from standard physics; so all degrees of freedom start with their standard inertia, which is then modified by the interaction with the medium. Is MOND then also a correction on a preexisting inertia? Are there two contributions to inertia, one the standard, whose origin is just assumed by fiat, and another that modifies it into the MOND form? Or is there only one origin to inertia giving the standard form at $a_{0}\rightarrow 0$ and MOND at the other end? I suspect the latter because in the formal limit $a_{0}\rightarrow \infty$ inertia disappears; so it may require fine tuning to have the two contributions to inertia cancel in the limit, standard inertia being independent of $a_{0}$. (But the MOND correction could also be multiplicative, in which case this argument is neutralized.) \par And, if inertia is to be produced totally from scratch, does that include the purported inertia-endowing medium itself?
In the instances we have of derived inertia, the Newtonian inertial law is still obeyed exactly, and the difference between the effective inertial force and the actual rate of change of momentum of the object is taken up by the medium. This means that the medium itself can have momentum, hence must have inertia to begin with. It remains to be seen whether real-world inertia can be produced with a medium itself devoid of it. \par Another course of research in this vein is to construct mechanical models for inertia based on well understood physics, such as the inertia that is acquired by bodies moving in fluids; then to see in this framework whether MOND-like behavior can result in a context resembling cosmology. If successful this will tell us at least that the above program is feasible, and will perhaps teach us how to go about it. \subsection{New symmetries} In another approach we may try to construct MOND inertia on lines similar to those of special relativistic inertia, which follows from Lorentz invariance of the kinetic action. We could then seek a new symmetry that forces a form of the free actions compatible with MOND. (See, for example, an attempt by Kowalski-Glikman \& Smolin 2004 along such lines, using an extension of SR having two more constants besides the speed of light--the so-called ``triply special relativity''.) \par What is the space on which this new symmetry acts? Is it still space-time or a larger one? The extended, or modified, symmetry should appear, according to the cosmological connection of MOND, because we live in a cosmologically curved space-time; it should then disappear or return to Lorentz invariance when $a_{0}\rightarrow 0$. Presumably $a_{0}$ is to play a role similar to that of the speed of light in SR, whose appearance as a limiting speed has to do with the Minkowskian signature of space-time. But, in contrast, $a_{0}$ is not a limiting acceleration and there are no discontinuities as we cross it.
This may be telling us that we should be looking for rotations between axes that span a manifold with Riemannian signature. \par Without having a concrete application in mind, I am personally intrigued by the following observations, which may give some reader a clue in the right direction. A de Sitter Universe (dSU), which approximates our universe as it is at present, is a maximally symmetric space-time with positive curvature and Minkowskian signature. It can be viewed as a 4-D pseudo-sphere embedded in a flat 5-D Minkowski space, $M^5,$ centered at the origin, say. Consider an arbitrary, time-like world line $x^\mu(\tau)$ in the dSU having a local acceleration $a^\mu\equiv D^2 x^\mu/D\tau^2$, of magnitude $a=(-a^\mu g_{\mu\nu} a^\nu)^{1/2}$. Then the acceleration in the $M^5$ embedding space $a_5^A\equiv d^2x^A/d\tau^2$ has magnitude $a_5=(-a_5^A \eta_{AB} a_5^B)^{1/2}$, which can be shown to be related to $a$ by $a_5=(a^2+c^2\Lambda/3)^{1/2}$. Above, $g_{\mu\nu}$ is the metric in the dSU, $\eta_{AB}$ that of $M^5$, and $\Lambda=3/R^2$ the cosmological constant specifying the curvature radius, $R$, of the dSU. So, if we make the connection with MOND by defining $\hat\az=c(\Lambda/3)^{1/2}$ to play a similar role to $a_{0}$, we can write $a_5=(a^2+\hat\az^2)^{1/2}$. \par Inertial world lines, with $a^\mu=0$, are time-like geodesics of the dSU: great pseudo circles, which are the intersects of the dSU with (2-D) planes through the origin in the embedding space. It can be shown that world lines of finite, constant acceleration $a$ are the intersects of the dSU with planes at a (Minkowskian) distance $d$ from the origin with $d/R=a/(a^2+\hat\az^2)^{1/2}\equiv\lambda(a/\hat\az)$. For a body at some point $p$ on its world line, compare two observers whose world lines go through $p$ and are tangent there to the body's world line: one is inertial, the other has the same acceleration, $a$, as our body at $p$.
These two reference world lines correspond to two planes, one through the origin and one a distance $R\lambda(a/\hat\az)$ from the origin ($p$ itself is by definition a distance $R$ from the origin). We can transform one plane to the other by a rotation through $p$ by an angle $\theta$ with $\sin\theta=\lambda(a/\hat\az)$. So, kinematic factors such as $\lambda(a/\hat\az)$ resembling MOND's $\mu(a/\hat\az)$ appear in this context as geometrical quantities: matrix elements of a rotation taking one from an inertial observer to an accelerated one, just as the Lorentz factor $\gamma$ appears in the context of Lorentz transformations. Perhaps, in a similar manner, such factors can find their way into the equation of motion of particles to give a desired MOND behavior.
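The kinematic factor $\lambda(a/\hat\az)$ indeed has the limiting behaviors expected of a MOND interpolating function; a short numerical check (the function name is ours):

```python
import math

def lam(x):
    """lambda(a/a0hat) = a / (a^2 + a0hat^2)^(1/2), written in terms
    of x = a/a0hat, as in the de Sitter embedding discussed in the text."""
    return x / math.sqrt(x**2 + 1.0)

# a >> a0hat: lambda -> 1 (the plane approaches distance R; Newtonian regime)
assert abs(lam(1e4) - 1.0) < 1e-7
# a << a0hat: lambda -> a/a0hat, the MOND-like behavior mu(x) ~ x
assert abs(lam(1e-4) / 1e-4 - 1.0) < 1e-7
# Consistency with a5 = (a^2 + a0hat^2)^(1/2): lambda = a/a5
a, a0hat = 0.7, 1.0
a5 = math.sqrt(a**2 + a0hat**2)
assert abs(lam(a / a0hat) - a / a5) < 1e-12
```

Nothing here singles out $x/(1+x^2)^{1/2}$ as the physical $\mu$; the point is only that such factors arise geometrically, with smooth behavior across $a\approx\hat\az$.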
\section{Introduction} Ultracold molecules are important for several applications in physics and chemistry. Cold molecules have already been used to test theories that extend the Standard Model of particle physics, for example by measuring the electron's electric dipole moment \cite{Hudson(1)11, Baron(1)14} or searching for changes in the fundamental constants \cite{Hudson:2006, Truppe(1)13}. The precision of those measurements can be improved by cooling the molecules to far lower temperatures \cite{Tarbutt(1)09, Tarbutt(1)13}. A lattice of ultracold polar molecules makes a well-controlled many-body quantum system where each particle interacts with all others through the long-range dipole-dipole interaction. This array can be used as a model system to study other strongly-interacting many-body quantum systems whose complexity is far too high to simulate on a computer \cite{Micheli:2006}. Ultracold polar molecules offer several advantages for storing and processing quantum information \cite{DeMille:2002, Andre:2006}, notably strong coupling to microwave photons and, through dipole-dipole interactions, to one another. The availability of ultracold molecules will also open up opportunities for studying and controlling chemical reaction dynamics in a whole new regime \cite{Krems:PCCP:2008}. Some species of ultracold polar molecules can be produced by association of ultracold atoms, either by photoassociation \cite{Deiglmayr(1)08, Shimasaki(1)15} or by magnetoassociation through a Feshbach resonance \cite{Ni(1)08, Takekoshi:RbCs:2014, Molony:RbCs:2014}. Often though, the molecules of interest cannot be formed this way, and then more direct cooling methods are needed. Molecules have been magnetically trapped at temperatures of about 0.5\,K by buffer-gas cooling with cryogenic helium \cite{Weinstein:CaH:1998, Tsikata(1)10}. 
Molecules in supersonic beams have been decelerated to rest and then trapped electrically and magnetically, typically with temperatures in the range 1-50\,mK \cite{Bethlem:trap:2000, Sawyer:2007}. Recently, laser cooling has been applied to SrF \cite{Shuman:2010, Barry:beam:2012}, YO \cite{Hummon:2013} and CaF \cite{Zhelyazkova(1)14}, and a magneto-optical trap of SrF has been demonstrated, producing molecules at a temperature of a few mK and a density of 4000\,cm$^{-3}$ \cite{Barry(1)14, McCarron(1)15}. It is likely that higher densities will be reached using more efficient loading methods, and lower temperatures may be reached if sub-Doppler cooling mechanisms are effective. A promising method to cool molecules to lower temperatures is sympathetic cooling through collisions with ultracold atoms. The main difficulty with this approach is that static electric and magnetic traps can only confine molecules in weak-field seeking states, but the lowest-energy state is always high-field-seeking. It follows that inelastic collisions can heat the molecules or can transfer them from trapped to untrapped states. This observation has motivated experimental \cite{Parazzoli:2011} and theoretical work \cite{Parazzoli:2011, Lara:PRA:2007, Wallis:MgNH:2009, Gonzalez-Martinez:hyperfine:2011, Wallis:LiNH:2011, Tscherbul:poly:2011, Gonzalez-Martinez:H+mol:2013} to search for atom-molecule systems where the ratio of elastic to inelastic collision cross sections is large. However, for most systems of interest, this ratio is too small for sympathetic cooling to work well. Notable exceptions are the Mg + NH system \cite{Wallis:MgNH:2009}, and the use of ultracold hydrogen as a coolant \cite{Gonzalez-Martinez:H+mol:2013}, but the experimental realization of those systems is exceptionally challenging. 
An alternative approach is to use a dynamic trap, which could be an alternating current (ac) trap, an optical dipole trap, or a microwave trap, so that molecules can be confined in their lowest energy states. In this case, inelastic collisions can only excite the molecule, but the energy available in the collision is typically too small for that and so all inelastic channels are energetically inaccessible. In previous work \cite{Tokunaga:2011} sympathetic cooling of a cloud of LiH molecules by ultracold Li atoms was simulated using a very simple model. The scattering was assumed to be isotropic, corresponding to either s-wave scattering or classical collisions of hard spheres. This is appropriate for collisions at very low energy. However, the differential cross sections at higher collision energies are typically peaked at low deflection angles, because many collisions sample mainly the long-range attraction. In the present work, we introduce a new collision model that takes account of the full energy dependence of the differential cross sections. We show that this model produces significantly slower sympathetic cooling in the early stages than the original energy-independent hard-sphere model. We also consider approximations to the full model and show that a model that uses hard-sphere scattering based on the energy-dependent transport cross section $\sigma_\eta^{(1)}$ \cite{Frye:2014} produces accurate results for the cooling of the molecules but not for heating and loss of the coolant atoms. The previous modeling work \cite{Tokunaga:2011} explored sympathetic cooling in three different types of trap: a static electric trap, an alternating current (ac) trap, and a microwave trap. A static electric trap can confine molecules only in rotationally excited states, and it was found that for Li+LiH the ratio of elastic to rotationally inelastic collisions was too small for such molecules to be cooled before they were ejected from the trap. 
An ac trap can confine molecules in the rotational ground state, so there are no inelastic collisions, but elastic collisions can transfer molecules from stable to unstable trajectories and it was found that this eventually causes all the molecules to be lost. A microwave trap \cite{DeMille:2004, Dunseith(1)15} can confine molecules in the absolute ground state, around the antinodes of a standing-wave microwave field, and sympathetic cooling in this trap was found to be feasible on a timescale of 10\,s \cite{Tokunaga:2011}. The microwave trap brings the benefits of a high trap depth and large trapping volume for polar molecules, especially compared to an optical dipole trap. In the present work, we simulate sympathetic cooling in a microwave trap in detail. We consider the following specific, experimentally realistic, scenario. Cold CaF molecules are produced either in a magneto-optical trap \cite{Barry(1)14, McCarron(1)15} or by Stark deceleration \cite{Wall(1)11, VanDenBerg(1)14}. In the first case the temperature might be about 2\,mK, and in the second about 30\,mK. The molecules are loaded into a magnetic trap, and then transported into a microwave trap. Here, the molecule cloud is compressed in order to improve the overlap with the atomic coolant, and this raises the initial temperature of the molecules to 20\,mK and 70\,mK respectively. A distribution of atoms, either $^{7}$Li or $^{87}$Rb, with an initial temperature of 100\,$\mu$K, is trapped magnetically and is overlapped with the cloud of molecules. We simulate the way in which elastic collisions reduce the molecular temperature towards the atomic temperature. Black-body heating out of the rovibrational ground state can be reduced below $10^{-4}$\,s$^{-1}$ by cooling the microwave trap to 77\,K \cite{Buhmann(1)08}. We start by describing our scattering calculations and the cross sections we obtain. 
Then we describe the simulation method we use, and study how the choice of collision model affects the simulation results. Next, we examine the cooling dynamics and evaluate which coolant, Rb or Li, is likely to be the best in practical situations. Because the cross section is very sensitive to the exact form of the atom-molecule interaction potential, especially at low energies, we study sympathetic cooling for a range of typical values of the s-wave scattering length. In addition to cooling the molecules, collisions either heat the atoms, raising the final temperature, or eject atoms from the trap, reducing the atomic density. These effects are particularly important if the atom number does not greatly exceed the molecule number. We study these effects and explain the results in terms of appropriate partial integrals over differential cross sections. Finally, we investigate how evaporative cooling of the atoms can be used to speed up the sympathetic cooling rate and lower the final temperature obtained. \section{Scattering calculations} \label{sec:crosssection} \begin{figure*}[tb] \centering \includegraphics[width=0.75\textwidth]{figures/crosssection} \caption{\label{crosssection} (Color online) Total elastic cross section, $\sigma_{\text{el}}$ (solid lines), and transport cross section, $\sigma_{\eta}^{(1)}$ (dashed lines), for positive (black) and negative (red/gray) signs of the scattering length. (a) CaF+$^{87}$Rb, $\left|a\right|=1.5\bar{a}$; (b) CaF+$^{87}$Rb, $\left|a\right|=0.5\bar{a}$; (c) CaF+$^{7}$Li, $\left|a\right|=1.5\bar{a}$; (d) CaF+$^{7}$Li, $\left|a\right|=0.5\bar{a}$.} \end{figure*} Exact scattering calculations on systems as complex as Li+CaF and Rb+CaF are not currently feasible. The combination of a deep chemical well, very large anisotropy of the interaction potential, and small CaF rotational constant mean that a very large rotational basis set would be needed for convergence. 
In addition, even if converged results could be achieved, uncertainties in the potential surface mean that no single calculation could be taken to represent the true system and many calculations on many surfaces would be needed to explore the range of possible behaviors \cite{Cvitas:li3:2007}. Instead we model the interactions with a simple single-channel model potential which we choose to be the Lennard-Jones potential, $V(r)=-C_6/r^6+C_{12}/r^{12}$, where $r$ is the intermolecular distance. We have shown previously \cite{Frye:2015} that, while a simple single-channel model cannot be expected to reproduce a full coupled-channel calculation, it can quantitatively reproduce the {\em range} of behaviors shown by full calculations. We obtain Lennard-Jones parameters for Li+CaF from {\em ab initio} calculations \cite{Morita:unpub:2015}. We obtain $C_{6 \rm ,Li+CaF} =1767\,E_{\rm h}a_0^6$ from direct fitting to the isotropic part of the long-range potential, where $E_{\rm h}$ is the Hartree energy and $a_{0}$ is the Bohr radius. We set $C_{12 \rm ,Li+CaF}=2.37 \times 10^{7}\,E_{\rm h}a_0^{12}$ to reproduce the depth of the complete potential, which is 7224 cm$^{-1}$. We use the depth of the complete potential in preference to the depth of the isotropic part of the potential because the very large anisotropy at short range means the isotropic part of the potential is not representative of the interaction. To obtain a $C_6$ parameter for Rb+CaF we first separate $C_{6 \rm ,Li+CaF}$ into induction and dispersion contributions. Induction contributions for both systems are readily calculated from known values of the CaF dipole moment \cite{Childs:1984} and the static polarizabilities of the atoms \cite{Derevianko:2010}.
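As a consistency check on the quoted Lennard-Jones parameters: for $V(r)=-C_6/r^6+C_{12}/r^{12}$ the minimum lies at $r_{\rm min}^6=2C_{12}/C_6$ with well depth $C_6^2/(4C_{12})$, which reproduces the quoted Li+CaF depth to within rounding of the published $C_{12}$ (the conversion factor is the standard value of $E_{\rm h}$ in cm$^{-1}$):

```python
# Standard Lennard-Jones relations applied to the values quoted in the text.
HARTREE_IN_INVCM = 219474.63  # 1 E_h expressed in cm^-1 (standard value)

C6 = 1767.0    # E_h a0^6   (Li+CaF, from the text)
C12 = 2.37e7   # E_h a0^12  (Li+CaF, from the text)

depth = C6**2 / (4.0 * C12)  # well depth in E_h
# Quoted depth of the complete potential: 7224 cm^-1
assert abs(depth * HARTREE_IN_INVCM - 7224.0) < 50.0

# The minimum really is where stated: V(r_min) = -depth at r_min^6 = 2*C12/C6
r6 = 2.0 * C12 / C6
V_min = -C6/r6 + C12/r6**2
assert abs(V_min + depth) < 1e-12
```

The same two relations fix the Rb+CaF $C_{12}$ once the estimated 2.5-times-shallower well is imposed on the combined-rule $C_6$.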
The dispersion contribution for Rb+CaF can then be calculated from the dispersion contribution for Li+CaF using Tang's combining rule \cite{Tang:1969} with known homonuclear diatomic dispersion coefficients \cite{Derevianko:2010}, atomic polarizabilities \cite{Derevianko:2010} and a calculated CaF polarizability of $\alpha_{\rm CaF}=137 a_0^3$. The sum of these contributions gives $C_{6 \rm ,Rb+CaF}=3084\,E_{\rm h}a_0^6$. We estimate, by analogy to calculations on methyl fluoride \cite{Lutz:2014}, that the well depth for Rb+CaF will be about 2.5 times shallower than for Li+CaF. This sets $C_{12 \rm ,Rb+CaF}=1.8 \times 10^{8} \,E_{\rm h}a_0^{12}$. For our purposes, the key property of a potential is the s-wave scattering length, $a$, that it produces. In the present work, we vary the $C_{12}$ coefficient over a small range (with $C_6$ fixed) to vary the scattering length. We focus on four typical scattering lengths, $a= -1.5\bar{a}$, $-0.5\bar{a}$, $+0.5\bar{a}$, $+1.5\bar{a}$, where $\bar{a}$ is the mean scattering length of Gribakin and Flambaum \cite{Gribakin:1993}, $\bar{a}=20.2$ \AA\ for Li+CaF and $35.7$ \AA\ for Rb+CaF. Discussions of thermalization have usually assumed that the relevant cross section is the elastic cross section $\sigma_{\rm el}$, which is the unweighted integral of the differential cross section $d\sigma/d\omega$, \begin{equation} \sigma_{\rm el} = 2\pi \int \frac{d\sigma}{d\omega}\sin\Theta\,d\Theta, \end{equation} where $d\omega$ is an element of solid angle and $\Theta$ is the deflection angle in the center-of-mass frame. However, small-angle scattering contributes fully to $\sigma_{\rm el}$ but contributes relatively little to thermalization. The transport cross section that takes proper account of this is $\sigma_\eta^{(1)}$ \cite{Frye:2014}, \begin{equation} \sigma_\eta^{(1)}= 2\pi \int \frac{d\sigma}{d\omega} (1-\cos\Theta)\sin\Theta\,d\Theta. 
\label{eqn:sig_eta} \end{equation} In the present work, scattering calculations are carried out using the MOLSCAT package \cite{molscat:v14}. We use the DCS post-processor \cite{DCS} to calculate differential cross sections, and the SBE post-processor \cite{SBE} to calculate $\sigma_\eta^{(1)}$. The calculated elastic and transport cross sections for Li+CaF and Rb+CaF are shown in Fig.\,\ref{crosssection} for a variety of scattering lengths. At low energy, in the s-wave regime, the cross sections have constant limiting values of $4 \pi |a|^2$. This is the same for both $\sigma_{\rm el}$ and $\sigma_\eta^{(1)}$, because pure s-wave scattering is isotropic. The cross sections for positive and negative scattering lengths go to the same low-energy limit. However, as energy increases, the cross sections all diverge from one another. Those for negative scattering lengths, especially $a=-0.5\bar{a}$, show dramatic Ramsauer-Townsend minima as the scattering phase shift, and hence the s-wave cross section, passes through a zero \cite{Child:1974}. For $\sigma_\eta^{(1)}$ this minimum is further deepened by destructive interference between s-wave and p-wave scattering \cite{Frye:2014}. For $a=+1.5 \bar{a}$ a peak in both cross sections is seen (near $\sim 10^{-3}$\,K for Rb+CaF). This is a d-wave feature corresponding to the energy of the centrifugal barrier maximum. At higher energies, there are various shape resonances present for all cases. Nevertheless, once many partial waves contribute, the cross sections become less dependent on scattering length and approach classical limits. It may be noted that the cross sections for the two systems for the same value of $a/\bar{a}$ are very similar, apart from constant factors in energy and cross section.
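The distinction between $\sigma_{\rm el}$ and $\sigma_\eta^{(1)}$ can be illustrated with toy differential cross sections (the analytic forms below are made up for illustration, not MOLSCAT output):

```python
import numpy as np

def cross_sections(dcs, n=40001):
    """Integrate a model differential cross section dcs(Theta) to get
    (sigma_el, sigma_eta1), following the two integrals in the text."""
    th = np.linspace(0.0, np.pi, n)
    dth = th[1] - th[0]
    w = dcs(th) * np.sin(th)               # common integrand factor
    sig_el = 2.0 * np.pi * np.sum(w) * dth
    sig_eta = 2.0 * np.pi * np.sum(w * (1.0 - np.cos(th))) * dth
    return sig_el, sig_eta

# Pure s-wave scattering is isotropic, dcs = |a|^2, and both cross
# sections reach the limiting value 4*pi*a^2
a = 2.0
s_el, s_eta = cross_sections(lambda th: a**2 * np.ones_like(th))
assert abs(s_el - 4.0*np.pi*a**2) < 1e-3
assert abs(s_eta - s_el) < 1e-3

# Forward-peaked scattering: small-angle collisions contribute fully to
# sigma_el but little to thermalization, so sigma_eta^(1) << sigma_el
s_el, s_eta = cross_sections(lambda th: np.exp(-(th/0.3)**2))
assert s_eta < 0.1 * s_el
```

The $(1-\cos\Theta)$ weight is what suppresses the forward peak, which is why $\sigma_\eta^{(1)}$ is the better single-number proxy for thermalization at higher collision energies.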
In fact they would be nearly identical if the cross sections were in units of $\bar{a}^2$ and energy in units of $\bar{E}=\hbar^2/(2\mu\bar{a}^2)$ \cite{Frye:2014, Gao:2001}, where $\bar{E}=9.51$ mK for Li+CaF and $0.543$ mK for Rb+CaF. This scaling means that, while the Rb+CaF cross sections are almost independent of scattering length at 10 mK and above, the Li+CaF cross sections are highly sensitive to scattering length at any energy below 100 mK. For stationary atoms the molecular kinetic energy in the laboratory frame, $E^\text{lab}_\text{CaF}$, is related to the collision energy in the center-of-mass frame, $E^\text{CM}$, by $E^\text{lab}_\text{CaF}=(m_{\text{CaF}}/\mu)E^\text{CM}$, where $\mu = m_{\text{CaF}}m_{\text{at}}/(m_{\text{CaF}}+m_{\text{at}})$ is the reduced mass of the collision system, $m_{\text{CaF}}$ is the molecular mass and $m_{\text{at}}$ is the atom mass. The ratio $E^\text{lab}_\text{CaF}/E^\text{CM}$ is 9.40 for Li+CaF and 1.68 for Rb+CaF. This introduces a further energy scaling between the two systems in addition to the difference in $\bar{E}$. Because the molecules are in the ground state, and the rotational excitation energy is far greater than the available collision energy, we assume that there are no inelastic collisions. It is known that there can be molecule-molecule inelastic collisions in the presence of the microwave field, even when the microwave frequency is well below the first rotational resonance \cite{Kajita(1)07, Avdeenkov:2009}. This is a concern for evaporative cooling of molecules, but less so for sympathetic cooling, where the density of molecules can be low. It is worth studying whether there can be atom-molecule inelastic collisions induced by the microwave field, but that is beyond the scope of this paper. 
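The quoted lab-to-CM energy ratios follow directly from the masses, since $E^\text{lab}_\text{CaF}/E^\text{CM}=m_\text{CaF}/\mu=1+m_\text{CaF}/m_\text{at}$; a sketch (the isotope masses in u are our assumed standard values, not given in the text):

```python
# E_lab = (m_CaF/mu) * E_CM for a stationary atom.
M_CA, M_F = 39.9626, 18.9984     # 40Ca, 19F masses in u (assumed)
M_LI7, M_RB87 = 7.0160, 86.9092  # 7Li, 87Rb masses in u (assumed)

m_caf = M_CA + M_F

def lab_to_cm_ratio(m_at):
    """Ratio m_CaF/mu for an atom of mass m_at."""
    mu = m_caf * m_at / (m_caf + m_at)
    return m_caf / mu

# Reproduces the quoted values: 9.40 for Li+CaF and 1.68 for Rb+CaF
assert round(lab_to_cm_ratio(M_LI7), 2) == 9.40
assert round(lab_to_cm_ratio(M_RB87), 2) == 1.68
```

The light Li coolant thus maps a given CM collision energy onto a much larger molecular lab energy than Rb does, compounding the $\bar{E}$ scaling noted above.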
\section{Simulation method} \label{sec:method} We assume that ground state CaF molecules are confined around the central antinode of a standing-wave microwave field, formed at the center of an open microwave cavity \cite{Dunseith(1)15}. The interaction potential of the molecules with the microwave field is \begin{equation} U({\bf r}) = -\Delta U \exp\left[-\frac{x^{2}}{w_x^{2}}-\frac{y^{2}}{w_y^{2}}\right] \cos^{2}\left(\frac{2\pi z}{\lambda}\right), \end{equation} where $\Delta U$ is the trap depth and we take $\Delta U/k_{\text{B}}=400$\,mK, $w_x=16.3$\,mm, $w_y=15.3$\,mm, and $\lambda=21.3$\,mm \cite{Dunseith(1)15}. The initial phase-space distribution of the molecules is assumed to be \begin{eqnarray} \label{phasespacedensity} f({\bf r},{\bf p}) &=&\frac{n_{0,\text{CaF}}}{(2 \pi m_{\rm CaF} k_{\text{B}} T)^{3/2}}\nonumber\\ &\times& \exp\left[-\frac{U({\bf r})-U(0)\! + p^2/(2m_{\rm CaF})}{k_{\text{B}} T}\right], \end{eqnarray} where $T$ is the initial temperature of the molecules and $n_{0,\text{CaF}}$ is the initial density at the center of the trap, which is fixed such that the total number of molecules is $N_{\text{CaF}}=10^{5}$. For most simulations, we take $T=70$\,mK in order to study sympathetic cooling from a high temperature. A distribution of ultracold atoms is overlapped with the molecules. The atoms are in a harmonic magnetic trap whose depth is 1\,mK. We assume that the distribution of atoms in phase space depends only on their energy. Therefore, at all times, the atoms have a Gaussian spatial distribution and a thermal velocity distribution with temperature $T_{\text{at}}$. They have an initial temperature of 100\,$\mu$K, an initial central density of $10^{11}$\,cm$^{-3}$, and an initial number of $10^{9}$. The corresponding initial $1/e$ radius is 1.2\,mm. 
This initial temperature and density can be reached by first collecting and cooling the atoms in a magneto-optical trap, followed by a brief period of sub-Doppler cooling in a molasses before loading into the magnetic trap. For Rb, polarization gradient cooling is an effective sub-Doppler cooling mechanism, while for Li velocity-selective coherent population trapping in a gray molasses can be used \cite{Grier(1)13, Burchianti(1)14}. Our approximation that the molecules are confined only by the microwave field, and the atoms only by the magnetic field, is a reasonable one, though our model could be extended to use the complete potential of both species in the combined fields. For each molecule, the simulation proceeds as follows. We solve the equation of motion in the microwave trap for a time step $\Delta t$ which is much smaller than the mean time between collisions. Then, using the current position, ${\bf r}$ and velocity, ${\bf v}$, of the molecule, we determine whether or not there should be a collision as follows. The velocity of an atom is chosen at random from a thermal distribution with temperature $T_{\text{at}}$. From the atomic and molecular velocities we calculate the collision energy in the center-of-mass frame, $E^\text{CM}$. The collision probability is $P=n_{\text{at}}({\bf r}) \sigma(E^\text{CM}) v_{\text{r}} \Delta t$, where $v_{\text{r}}$ is the relative speed of the atom and molecule, $n_{\text{at}}$ is the atomic density, and $\sigma(E^\text{CM})$ is either $\sigma_{\text{el}}$ or $\sigma_{\eta}^{(1)}$ (see Sec.\,\ref{Sec:CollisionModels}). A random number is generated in the interval from 0 to 1, and if this is less than $P$ a collision occurs. If there is no collision, the velocity of the molecule is unchanged. If there is a collision, the velocities are transformed into the center-of-mass frame, a deflection angle is determined as described below, and the new velocities transformed back into the laboratory frame. 
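The collision test just described can be sketched as follows. This is an illustrative snippet, not the actual simulation code: all names are our own choices, and an isotropic deflection direction \verb|n_hat| stands in for the DCS sampling of Sec.\,\ref{Sec:CollisionModels}.

```python
import math, random

kB = 1.380649e-23

def thermal_velocity(T_at, m_at):
    """Atom velocity drawn from a Maxwell-Boltzmann distribution at T_at."""
    s = math.sqrt(kB * T_at / m_at)
    return [random.gauss(0.0, s) for _ in range(3)]

def collision_probability(v_mol, v_at, n_at, sigma, m_mol, m_at, dt):
    """P = n_at * sigma(E_CM) * v_r * dt, as in the text."""
    v_r = math.dist(v_mol, v_at)
    mu = m_mol * m_at / (m_mol + m_at)
    E_cm = 0.5 * mu * v_r**2
    return n_at * sigma(E_cm) * v_r * dt

def scatter(v_mol, v_at, m_mol, m_at, n_hat):
    """Elastic collision: rotate the relative velocity onto the unit vector
    n_hat in the center-of-mass frame, then transform back to the lab frame."""
    M = m_mol + m_at
    v_cm = [(m_mol * a + m_at * b) / M for a, b in zip(v_mol, v_at)]
    v_r = math.dist(v_mol, v_at)
    v_mol_new = [c + (m_at / M) * v_r * n for c, n in zip(v_cm, n_hat)]
    v_at_new = [c - (m_mol / M) * v_r * n for c, n in zip(v_cm, n_hat)]
    return v_mol_new, v_at_new
```

A collision is accepted when a uniform random number falls below \verb|collision_probability|; by construction, \verb|scatter| conserves total momentum and total kinetic energy.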
If the new total energy (kinetic energy plus trapping potential) is sufficient for the atom to escape from the trap, the atom, and its energy prior to the collision, are removed. The change in energy is shared among all the remaining atoms. Otherwise, the atom remains in the trap and the change in kinetic energy is shared between all the atoms. This algorithm is followed for each molecule in the distribution. The density and temperature of the atom cloud are updated to account for the atom loss and atom heating at this time step, and then the simulation proceeds to the next time step. With our choice of trap depth and initial atom temperature, there is a small evaporative cooling effect due to atom-atom collisions. For Rb, over the 50\,s timescale of our simulations, 8\% of the atoms are lost and the temperature falls to $80\,\mu$K. Prior to Sec.\,\ref{sec:evaporative}, we neglect this evaporative cooling effect in our simulations because we wish to isolate effects that are due to atom-molecule collisions. Then, in Sec.\,\ref{sec:evaporative}, we include atom-atom collisions and explore the effects of evaporative cooling. As we will see, the molecular velocity distributions obtained during the cooling process are far from thermal. There are some molecules that never have a collision during the whole simulation and so remain at high energy throughout. Almost all these molecules have a kinetic energy greater than 10\,mK, and they disproportionately skew the mean kinetic energy of the sample as a whole. Our interest is in the molecules that cool, and so we separate the kinetic energy distribution into two parts, above and below 10\,mK. To express how well the cooling works, we give the fraction of molecules in the low-energy part, and their mean kinetic energy, both as functions of time. \section{Collision models} \label{Sec:CollisionModels} In previous modeling \cite{Tokunaga:2011}, atoms and molecules collided like hard spheres. 
In this model, the momenta in the center-of-mass frame before and after a collision, ${\bf p}_{\text{c}}$ and ${\bf p}_{\text{c}}'$, are related by \begin{equation} {\bf p}_{\text{c}}' = {\bf p}_{\text{c}} - 2({\bf p}_{\text{c}} \cdot {\bf \hat{e}}){\bf \hat{e}}, \end{equation} where ${\bf \hat{e}}$ is a unit vector along the line joining the centers of the spheres, given by \begin{equation} {\bf \hat{e}} ={\bf \hat{p}}_{\text{c}} \sqrt{1-|{\bf b}|^{2}} + {\bf b}, \end{equation} where ${\bf \hat{p}}_{\text{c}}$ is a unit vector in the direction of ${\bf p}_{\text{c}}$ and ${\bf b}$ is a vector that lies in a plane perpendicular to ${\bf p}_{\text{c}}$ and whose magnitude is the impact parameter divided by the sum of the radii of the two spheres. For each collision, ${\bf b}$ is chosen at random from a uniform distribution, subject to the constraints ${\bf b}\cdot {\bf p}_{\text{c}} = 0$ and $|{\bf b}| \le 1$. \begin{figure}[tbh!] \centering \includegraphics[width=0.45\textwidth]{figures/collisionmodel} \caption{\label{collisionmodel} (Color online) Results of various collision models: (i) hard-sphere model with energy-independent cross section $4\pi\bar{a}^2$; (ii) full energy-dependent differential cross section model; (iii) hard-sphere model with $\sigma_{\text{el}}(E^\text{CM})$; (iv) hard-sphere model with $\sigma_{\eta}^{(1)}(E^\text{CM})$; (v) hard-sphere model with classical approximation to $\sigma_{\eta}^{(1)}(E^\text{CM})$. The graphs show: (a) Cross section versus collision energy; (b) fraction of molecules with kinetic energy below 10\,mK versus time; (c) mean kinetic energy of that fraction versus time. The coolant is Rb and $a=+1.5\bar{a}$.} \end{figure} The lines labeled (i) in Fig.~\ref{collisionmodel} show how the cooling proceeds for CaF + Rb when we use the hard-sphere model and choose the cross section to be independent of energy and equal to $4\pi\bar{a}^2=1.59 \times 10^{-16}$\,m$^{2}$. 
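The hard-sphere update above can be sketched as below (an illustrative snippet, not the code of Ref.\,\cite{Tokunaga:2011}). Note that ${\bf \hat{e}}$ is automatically a unit vector because ${\bf b}\perp{\bf \hat{p}}_{\text{c}}$, so the reflection preserves $|{\bf p}_{\text{c}}|$, and for ${\bf b}=0$ the collision is head-on with ${\bf p}_{\text{c}}'=-{\bf p}_{\text{c}}$.

```python
import math, random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def random_b(p_hat):
    """Uniform vector in the unit disk perpendicular to the unit vector p_hat."""
    # Orthonormal basis {e1, e2} of the plane perpendicular to p_hat
    a = [1.0, 0.0, 0.0] if abs(p_hat[0]) < 0.9 else [0.0, 1.0, 0.0]
    e1 = [a[i] - dot(a, p_hat) * p_hat[i] for i in range(3)]
    n1 = math.sqrt(dot(e1, e1))
    e1 = [c / n1 for c in e1]
    e2 = [p_hat[1] * e1[2] - p_hat[2] * e1[1],
          p_hat[2] * e1[0] - p_hat[0] * e1[2],
          p_hat[0] * e1[1] - p_hat[1] * e1[0]]
    # sqrt of a uniform deviate gives a uniform density over the disk
    r, phi = math.sqrt(random.random()), 2 * math.pi * random.random()
    return [r * (math.cos(phi) * e1[i] + math.sin(phi) * e2[i]) for i in range(3)]

def hard_sphere_deflect(p, b):
    """p' = p - 2 (p.e) e  with  e = p_hat * sqrt(1 - |b|^2) + b."""
    pn = math.sqrt(dot(p, p))
    p_hat = [c / pn for c in p]
    e = [p_hat[i] * math.sqrt(1.0 - dot(b, b)) + b[i] for i in range(3)]
    return [p[i] - 2.0 * dot(p, e) * e[i] for i in range(3)]
```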
The cross section is shown in Fig.~\ref{collisionmodel}(a), while the cold fraction and the mean kinetic energy of that fraction are shown in parts (b) and (c), both as functions of time. As explained in Sec.\,\ref{sec:method}, the cold fraction is defined as the fraction with kinetic energy below 10\,mK. The cold fraction increases rapidly, and that fraction thermalizes quickly with the atoms. After just 4\,s, 85\% of the molecules are in the cold fraction and their mean energy is within 50\% of the 100\,$\mu$K temperature of the coolant atoms. The energy-independent hard-sphere (EIHS) model described above is reasonable at very low energy, but it has three deficiencies. First, it neglects the fact that the low-energy cross sections are actually $4\pi a^2$, where $a$ is the true scattering length as opposed to the mean scattering length. The true scattering length can take any value between $-\infty$ and $+\infty$, but is generally unknown for a specific system until detailed measurements are available to determine it. Second, the EIHS model neglects the fact that real cross sections are strongly energy-dependent, usually showing resonance structure on a background that drops off sharply with increasing energy, as shown in Fig.\ \ref{collisionmodel}(a). Third, collisions with small deflection angles (forwards scattering) do not contribute efficiently to cooling, and the EIHS model neglects the fact that differential cross sections (DCS) at higher energies tend to be dominated by such forwards scattering, because many collisions encounter only the attractive long-range tail of the interaction potential. To remedy all these deficiencies, we introduce here a new model that we call the full DCS model. For this we calculate realistic integral and differential cross sections, as described above, for a variety of choices of the scattering length $a$. 
We use the elastic cross section $\sigma_{\text{el}}(E^\text{CM})$ from these calculations to determine the collision probability. This cross section is curve (ii) in Fig.~\ref{collisionmodel}(a), and it is smaller than in the EIHS model at collision energies above 8\,mK, but larger below 8\,mK. We then select a deflection angle $\Theta$ from a random distribution that reproduces the full differential cross section, $d\sigma/d\omega$, at energy $E^\text{CM}$. To select a deflection angle at random from this distribution, we form the cumulative distribution function, \begin{equation} S(\Theta) = \frac{2\pi}{\sigma_{\text{el}}}\int_{0}^{\Theta} \frac{d\sigma}{d\omega} \sin(\Theta') d\Theta', \end{equation} select a random number $r$ between 0 and 1, and find the value of $\Theta$ where $S(\Theta) = r$. The full DCS model is our most complete one and we have used it for all the simulations in the following sections. Its results for the choice $a=+1.5\bar{a}$ are shown by the lines labeled (ii) in Fig.~\ref{collisionmodel}. It may be seen that the cooling proceeds more slowly than in the EIHS model. It takes 14\,s for the cold fraction to reach 80\% and for the energy of that fraction to be within 50\% of the temperature of the atoms. The slower cooling is mainly due to the dominance of forward scattering at higher energies. There are three approximations to the full-DCS model that are worth considering because they avoid the tabulation of differential cross sections and cumulative distributions. The first of these is to use a hard-sphere collision model but to take the full energy-dependent elastic cross section from Fig.\ \ref{collisionmodel}(a). This produces the cooling behavior labeled (iii) in Figs.\ \ref{collisionmodel}(b) and (c). It may be seen that this model produces cooling slightly slower than the EIHS model, but considerably faster than the full DCS model. 
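The inverse-CDF sampling of the deflection angle can be sketched as below. This is a hedged illustration: the tabulated $d\sigma/d\omega$ is replaced by an arbitrary forward-peaked toy function, and the grid size is an illustrative choice.

```python
import bisect, math

def make_sampler(dcs, n=2000):
    """Tabulate S(Theta) on a uniform grid and return a function mapping a
    uniform random number r in [0, 1] to a deflection angle Theta, by linear
    interpolation of the inverse cumulative distribution."""
    thetas = [math.pi * i / n for i in range(n + 1)]
    S = [0.0]
    for i in range(n):
        tm = 0.5 * (thetas[i] + thetas[i + 1])          # midpoint rule
        S.append(S[-1] + dcs(tm) * math.sin(tm) * (math.pi / n))
    S = [s / S[-1] for s in S]                          # normalize: S(pi) = 1
    def sample(r):
        j = min(max(bisect.bisect_left(S, r), 1), n)
        f = (r - S[j - 1]) / (S[j] - S[j - 1])
        return thetas[j - 1] + f * (thetas[j] - thetas[j - 1])
    return sample

# Forward-peaked toy DCS (illustrative only, not a computed cross section)
forward = make_sampler(lambda t: 1.0 + 20.0 * math.exp(-(t / 0.2) ** 2))
```

A simple consistency check: for an isotropic DCS, $S(\Theta)=(1-\cos\Theta)/2$, so $r=1/2$ must map to $\Theta=\pi/2$.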
The second and more satisfactory approximation is to use a hard-sphere collision model but to take the full energy-dependent transport cross section $\sigma_\eta^{(1)}$, shown as line (iv) in Fig.\ \ref{collisionmodel}(a). We label this approach EDT-HS. It produces the cooling behavior labeled (iv) in Figs.\ \ref{collisionmodel}(b) and (c). It may be seen that it models the cooling of the molecules very accurately, because it takes proper account of the reduced efficiency of small-angle collisions for sympathetic cooling. However, as will be seen in Sec.\,\ref{Sec:HeatingAndLoss}, the EDT-HS approach does {\em not} adequately model heating and loss of the coolant atoms. It is worth exploring whether a classical calculation of $\sigma_{\eta}^{(1)}$ would suffice. Unlike the elastic cross section, $\sigma_{\eta{\rm ,class}}^{(1)}$ is finite because the factor of $1-\cos\Theta$ suppresses the divergence due to forwards scattering. We have calculated $\sigma_{\eta{\rm ,class}}^{(1)}$ for the Lennard-Jones potentials described above, \begin{equation} \sigma_{\eta{\rm ,class}}^{(1)}=2\pi\int_0^\infty b[1-\cos\Theta(b)]db, \end{equation} where $b$ is the impact parameter and $\Theta(b)$ is the classical deflection function \cite{Child:1974}. We find that it is very well approximated by the power law $\sigma_{\eta}^{(1)}(E^\text{CM}) = A (E^\text{CM}/C_{6})^{-1/3}$, with the dimensionless constant $A=4.79$. This cross section is labeled (v) in Fig.~\ref{collisionmodel}(a). It agrees well with the quantum-mechanical $\sigma_{\eta}^{(1)}(E^\text{CM})$ for Rb+CaF at high energies, as we would expect when many partial waves contribute. Remarkably, the temperature and cold fraction shown for this model in Fig.\ \ref{collisionmodel} agree very well with those for model (ii), even as the temperature approaches 100\,$\mu$K. 
This is an atypical result because, for $a=+1.5\bar{a}$, $\sigma_{\eta{\rm ,class}}^{(1)}$ is within a factor of about three of the quantum-mechanical $\sigma_{\eta}^{(1)}$ at all energies above 3\,$\mu$K. For other values of $a$, the two cross sections can differ by more than a factor of three at energies below about $2\bar{E}$, which is around 1\,mK for Rb+CaF. Note that the classical approximation will be less successful for a lighter coolant such as Li where $\bar{E}$ is far higher. \section{Approximate cooling rates} \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{figures/coolingrate} \caption{\label{coolingrate} (Color online) Cooling rate of molecules as a function of their kinetic energy, estimated from Eq.\,(\ref{dEdtApprox}), when the coolant is (a) Rb and (b) Li, and for various values of the s-wave scattering length: $a=+1.5\bar{a}$ (red solid line), $a=+0.5\bar{a}$ (blue dash-dot line), $a=-0.5\bar{a}$ (green dotted line), and $a=-1.5\bar{a}$ (black dashed line).} \end{figure} From the transport cross sections, $\sigma^{(1)}_{\eta}$, in Fig.~\ref{crosssection} we can make a useful estimate of the cooling rate of molecules as a function of their kinetic energy. For this estimate, we assume stationary atoms with a uniform density $n_{\text{at}}=10^{11}$\,cm$^{-3}$. The cooling rate is \begin{equation} \label{dEdtApprox} \frac{d{E^\text{lab}_\text{CaF}}}{d t}={n_{\text{at}}}{\sigma}(E^\text{CM}) v \overline{{\Delta}E}, \end{equation} where $v=(2E^\text{lab}_\text{CaF}/m_{\text{CaF}})^{1/2}$ is the speed of the molecule and $\overline{{\Delta}E}$ is the average energy transfer for a hard-sphere collision. $\Delta E$ is given explicitly as \begin{equation} \label{DeltaE} \Delta E=-\left(\frac{2\mu}{m_{\rm CaF}+m_{\rm at}}\right)(1-\cos\Theta)E^\text{lab}_\text{CaF}. 
\end{equation} Figure~\ref{coolingrate} shows the cooling rates obtained this way, which although only approximate are helpful for understanding the numerical results presented later. For collisions with Rb at energies above 10\,mK, the cooling rate does not depend strongly on the s-wave scattering length. This is the energy regime where the $a$-independent classical approximation to $\sigma_{\eta}^{(1)}(E^\text{CM})$ described in Sec.~\ref{Sec:CollisionModels} is accurate. Due to the small reduced mass in the lithium case, the classical limit is only reached for temperatures above 200\,mK, and so the cooling rate depends sensitively on $a$ over the whole energy range of interest. When $a$ is negative there is a minimum in the cooling rates corresponding to the Ramsauer-Townsend minimum in $\sigma_{\eta}^{(1)}(E^\text{CM})$. For Rb, at $a=-1.5\bar{a}$, this minimum is near 100\,$\mu$K, which is close to the temperature of the atoms in our simulations and so will not have a significant impact on the thermalization. For Li, the minimum occurs for kinetic energies between 1 and 10\,mK, and so it has a strong effect on the thermalization. Finally, we note that in the ultracold limit the cooling rate is almost an order of magnitude higher for Rb than for Li, reflecting the larger value of $\bar{a}$ for Rb + CaF relative to Li + CaF. \section{Cooling dynamics} \label{Sec:CoolingDynamics} \begin{figure}[tb] \centering \includegraphics[width=0.48\textwidth]{figures/phasespace} \caption{\label{phasespace}(Color online) Time evolution of the phase-space distribution of molecules in the $x$ direction. The cooling times are (a) 0\,s, (b) 2\,s, (c) 10\,s, and (d) 20\,s. The coolant is Rb and $a=+1.5\bar{a}$.} \end{figure} Figure~\ref{phasespace} shows the evolution of the $(x,v_{x})$ phase-space distribution of CaF when Rb atoms are used as the coolant, for the case where the s-wave scattering length is $a=+1.5\bar{a}$. 
At $t=0$ (Fig.~\ref{phasespace}(a)), the molecules fill the phase-space acceptance of the trap. At later times, more and more molecules congregate at the trap center as they are cooled by collisions with the atoms. After 20\,s (Fig.~\ref{phasespace}(d)), the distribution has separated into two parts. The majority are cooled to the center, but there are some that remain uncooled. These are molecules that have large angular momentum around the trap center and so are unable to reach the center where the atomic density is high. At $x=3$\,mm for example, the atomic density, and hence the collision rate, is a factor of 1000 smaller than at the center, and so molecules at this distance are unlikely to collide with atoms on the 20\,s timescale shown in the figure. These molecules can be cooled by expanding the size of the atom cloud, but only at the expense of the overall cooling rate \cite{Tokunaga:2011}. \begin{figure}[tb] \centering \includegraphics[width=0.48\textwidth]{figures/rbhistogram} \caption{\label{rbhistogram}(Color online) Kinetic energy distributions after 2\,s, 10\,s, and 20\,s. The coolant is Rb. Left panels have $a=+1.5\bar{a}$ while right panels have $a=-1.5\bar{a}$.} \end{figure} Figure \ref{rbhistogram}(a) shows histograms of the kinetic energy distribution of the molecules at three different times, 2, 10 and 20\,s, when the coolant is Rb and $a=+1.5\bar{a}$. These are the same times as chosen for the phase space distributions in Fig.\,\ref{phasespace}, and the results come from the same simulation. The initial distribution is a Maxwell-Boltzmann distribution with a temperature of 70\,mK, truncated at the trap depth of 400\,mK. The distribution rapidly separates into two parts, those that cool and those that do not. The latter are the molecules that never reach the trap center because of their large angular momentum, as discussed above. A significant fraction of molecules are cooled below 1\,mK after just 2\,s. 
After 10\,s the majority are in this group, and after 20\,s this cold fraction is almost fully thermalized with the atoms. We return to part (b) of Fig.\,\ref{rbhistogram} in the next section. \section{Sensitivity to the scattering length and the choice of coolant} \label{sec:sensitivity} At low energies, cross sections are very sensitive to the exact form of the scattering potential, as shown in Fig.\ \ref{crosssection}, and cannot be calculated accurately without independent knowledge of the scattering length. In our model Lennard-Jones potential, the full energy-dependence of the cross section is determined once the s-wave scattering length, $a$, is fixed. Here, we study how the simulation results change as we vary the value of $a$. The choice of coolant is also a crucial consideration, and so we compare the results for Li and Rb as coolants. \subsection{Evolution of the kinetic energy distributions} Figure \ref{rbhistogram} compares how the kinetic energy distributions evolve for two cases: $a=+1.5\bar{a}$ and $a=-1.5\bar{a}$, with Rb as the coolant. At 2\,s the two distributions are similar. The main difference is that the distribution extends to lower energies for $a=+1.5\bar{a}$. The similarity is due to the similar cooling rates at the high energies, as shown in Fig.\,\ref{coolingrate}(a), while the difference at low energy is due to the far higher cooling rate for $a=+1.5\bar{a}$ at energies below 1\,mK (compare the solid red and black dashed lines in Fig.\,\ref{coolingrate}(a)). Exactly the same trend is seen after 10\,s of cooling. Once again, the high-energy parts of the distributions are very similar, but the distribution extends to lower energies for the $a=+1.5\bar{a}$ case. After 20\,s the majority of the molecules have fully thermalized with the atoms and the two distributions are very similar to one another. Figure \ref{lihistogram} shows the corresponding histograms for the case of Li. 
Here, the cooling proceeds more slowly and so we have added a fourth pair of histograms showing the distributions after 40\,s. There is a great contrast between the positive and negative scattering lengths in this case. For $a=+1.5\bar{a}$ the distribution evolves in a very similar manner to the Rb case, but when $a=-1.5\bar{a}$ it takes a long time for the molecules to reach energies below 10\,mK. This is the effect of the Ramsauer-Townsend minimum which reduces the cooling rate estimated in Fig.\,\ref{coolingrate}(b) to 0.25\,s$^{-1}$ for kinetic energies near 20\,mK. Because the minimum is broad in energy, and there is a large mass mismatch between CaF and Li, a collision cannot take a molecule directly across the minimum. The molecules have to be cooled \textit{through} the minimum by multiple collisions, and that takes a long time. Once molecules have passed through this minimum, cooling to ultracold temperatures occurs on a similar timescale to the $a=+1.5\bar{a}$ case. \begin{figure}[tb] \centering \includegraphics[width=0.48\textwidth]{figures/lihistogram} \caption{\label{lihistogram}(Color online) Kinetic energy distributions after 2\,s, 10\,s, 20\,s and 40\,s. The coolant is Li. Left panels have $a=+1.5\bar{a}$ while right panels have $a=-1.5\bar{a}$.} \end{figure} \subsection{Cold fraction and mean kinetic energy} \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{figures/fraction} \caption{\label{fraction}(Color online) Fraction of molecules with kinetic energy below 10\,mK as a function of time for (a) Rb, and (b) Li, for four different values of the scattering lengths: $a=+1.5\bar{a}$ (red), $a=+0.5\bar{a}$ (blue), $a=-0.5\bar{a}$ (green), and $a=-1.5\bar{a}$ (black).} \end{figure} Figure~\ref{fraction}(a) shows the fraction of molecules with kinetic energy less than 10\,mK, as a function of time, for various values of $a$ when the coolant is Rb. This fraction is entirely insensitive to $a$. 
This is because the cooling rate is independent of $a$ for energies above 10\,mK, as we saw in Fig.\,\ref{coolingrate}. After 5\,s about 50\% of the molecules are in this cold fraction, and after 20\,s this exceeds 80\%. Figure~\ref{fraction}(b) shows the cold fraction versus time when the coolant is Li. We find a strong dependence on $a$ in this case. When $a=+1.5\bar{a}$, the increase in the cold fraction with time is similar to the Rb case. For this value of $a$ there is a maximum in the cooling rate at a kinetic energy of about 70\,mK (see Fig.\,\ref{coolingrate}(b)), which happens to match the initial temperature of the molecules, and so the cooling to below 10\,mK proceeds rapidly. The cold fraction reaches 50\% after 4\,s in this case. The increase in the cold fraction is slower for $a=+0.5\bar{a}$, reaching 50\% after 16\,s. The accumulation of cold molecules is exceedingly slow when $a$ is negative. When $a=-1.5\bar{a}$, the Ramsauer-Townsend minimum is at $E^\text{lab}_\text{CaF}= 20$\,mK, and it takes a long time for the molecules to cool through this minimum. The cold fraction reaches 50\% after 40\,s in this case. When $a=-0.5\bar{a}$, the Ramsauer-Townsend minimum is shifted to $E^\text{lab}_\text{CaF}= 10$\,mK, but the cross section at the minimum is a factor of five smaller, and so the cooling is even slower, taking 50\,s to reach 50\%. \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{figures/temperature} \caption{\label{temperature} (Color online) Mean kinetic energy of the cold fraction as a function of time when the coolant is (a) Rb and (b) Li, and for various values of the s-wave scattering length: $a=+1.5\bar{a}$ (red), $a=+0.5\bar{a}$ (blue), $a=-0.5\bar{a}$ (green), and $a=-1.5 \bar{a}$ (black).} \end{figure} Figure \ref{temperature}(a) shows the mean kinetic energy of the cold fraction as a function of time for various values of $a$ when Rb is used as the coolant. 
As for the cold fraction itself, this measure is almost independent of $a$. This may seem surprising, since the cooling rates estimated in Fig.\,\ref{coolingrate}(a) show a strong dependence on $a$ below a few mK. However, the mean kinetic energy is strongly influenced by molecules with kinetic energies close to the 10\,mK cutoff that defines the cold fraction, and at this energy the cooling rates show little dependence on $a$. We find a small difference in the cooling rates between positive and negative scattering lengths. For the positive $a$ values the molecular temperature is within a factor of two of the atomic temperature after 10\,s, while for the negative $a$ values this takes 14\,s. Figure \ref{temperature}(b) shows how the mean kinetic energy of the cold fraction evolves when Li is used as a coolant. In this case, the cooling depends sensitively on $a$. When $a=+1.5\bar{a}$ the evolution is similar to the Rb case. The cooling is much slower when $a=+0.5\bar{a}$ because the low-energy cross-section is nine times smaller. The cooling is even slower when $a$ is negative. This is because, in the energy region between 1 and 10\,mK, the Ramsauer-Townsend minimum greatly suppresses the cooling rate relative to the positive $a$ case, and because molecules with energies in this range have a strong influence on the mean. The fraction of molecules that are cooled below 10\,mK depends on the initial temperature, $T_{\text{i}}$. Figure~\ref{fraction20mK} compares this fraction for $T_{\text{i}}=20$\,mK and 70\,mK, for the case where Rb is the coolant and $a=+1.5\bar{a}$. These two initial temperatures correspond to temperatures of 2\,mK and 30\,mK prior to compression of the cloud in the microwave trap. When $T_{\text{i}}=20$\,mK, more than 99\% of the molecules are cold within 10\,s. 
\begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{figures/fraction20mK} \caption{\label{fraction20mK}(Color online) Fraction of cold molecules as a function of time for the initial temperatures of $T_{\text{i}}=20$\,mK (red solid line), and $T_{\text{i}}=70$\,mK (black dashed line). The coolant is Rb and $a=+1.5\bar{a}$.} \end{figure} \section{Atom heating and loss} \label{Sec:HeatingAndLoss} \begin{figure}[tb] \centering \includegraphics[width=0.47\textwidth]{figures/heatinglossrate} \caption{\label{lossandheating} (a) Atom heating rate per molecule and (b) atom loss rate per molecule for the EDT-HS model (dashed line) and the full DCS model (solid line). The coolant is Rb, $a=+1.5\bar{a}$, and the molecules have an initial temperature of 70\,mK. There are $10^{5}$ molecules and $10^{9}$ atoms.} \end{figure} The energy transferred from molecules to atoms will either eject atoms from the trap, or will heat them up. As described in Sec.\,\ref{sec:method}, we suppose that atoms are lost from the trap if their total energy exceeds 1\,mK. This could be the actual depth of the trap, or an ``rf knife'' might be used to cut off the trap at this depth. Here, we investigate the heating and loss of atoms and the consequences for sympathetic cooling. We note that while the EDT-HS collision model correctly captures the molecule cooling dynamics when $\sigma_{\eta}^{(1)}$ is used as the cross section, it does not model correctly the atom heating and loss. Here, we highlight the difference between these two approaches by comparing the results obtained from the EDT-HS model and the full DCS model. Figure~\ref{lossandheating} shows how the heating and loss rates of the atoms change with time in the full DCS model and the EDT-HS model for the case of $10^5$ molecules and $10^9$ atoms. The two models show similar trends, so we first discuss these trends and then consider the differences between the models. 
At early times the majority of the molecules have energies far above the atom trap depth and so most collisions cause atom loss, rather than heating. The loss rate is high while the heating rate is low. Nevertheless, there is still some heating due to small-angle collisions with the molecules which transfer only a little energy to the atoms. The loss rate increases during the first second because the collision cross section and the atom-molecule overlap both increase as the molecules are cooled. As time goes on the loss rate falls because the molecules are cooler and there are fewer collisions with enough energy to kick atoms out of the trap. For the same reason the heating rate initially increases, but then decreases again as the molecules have less energy to transfer to the atoms. For most of the 20\,s period, the full DCS model gives more atom heating and more atom loss than the EDT-HS model. Only at long times, once the atoms and molecules are almost fully thermalized, do the two models give the same results. Integrating the results of the full DCS model shown in Fig.\,\ref{lossandheating}, we find that the total temperature increase of the trapped atoms is 1.3\,pK per molecule, while the total loss is 10 atoms per molecule. The energy deposited into the trapped atom cloud is only 1.8\% of the initial energy of the molecular cloud. In this sense, the sympathetic cooling process is remarkably efficient. We now turn to how the atom heating and loss rates can be {\em understood}, and explain why the two models give different results. Whether an atom is heated or lost depends on the kinetic energy kick it receives in the collision, as given by Eq.\,(\ref{DeltaE}) if the atoms are assumed to be stationary. An atom at the center of the trap is lost from the trap if the energy transferred in the collision exceeds the trap depth, $\Delta E > E_{\text{trap}}$. 
This occurs if the deflection angle exceeds a critical angle $\Theta_{\text{crit}}$ given by \begin{equation} \cos\Theta_{\text{crit}} = 1 - \left(\frac{m_{\rm CaF}+m_{\rm at}}{2\mu}\right) \left(\frac{E_{\text{trap}}}{E^\text{lab}_\text{CaF}}\right). \end{equation} At laboratory-frame energies below the critical energy $E_{\rm crit} = [(m_{\rm CaF}+m_{\rm at})/(4\mu)]E_{\text{trap}}$, no loss is possible, assuming stationary atoms at the center of the trap. This energy is 2.63 mK for Li+CaF and 1.04 mK for Rb+CaF. All collisions below this energy and collisions above this energy where $\Theta < \Theta_{\text{crit}}$ will not eject atoms from the trap, but still transfer energy and so heat the atom cloud, by an amount proportional to $1-\cos\Theta$. This suggests the possibility of defining cross sections for atom heating and loss as partial integrals of the differential cross section, \begin{equation} \label{sigmaloss} \sigma_{\text{loss}}=2\pi\int_{-1}^{\cos\Theta_{\text{crit}}} \frac{d\sigma}{d\omega} d\cos\Theta, \end{equation} \begin{equation} \label{sigmaheat} \sigma_{\text{heat}}=2\pi\int_{\cos\Theta_{\text{crit}}}^{1} \frac{d\sigma}{d\omega} (1-\cos\Theta) d\cos\Theta. \end{equation} It is convenient to write these as integrals over $d\cos\Theta$ instead of $\sin\Theta\,d\Theta$ because the $\cos\Theta$ form allows us to show plots in which the integrals are simply areas that can be estimated by eye. Note that if $E^\text{lab}_\text{CaF}<E_{\rm crit}$ then $\sigma_{\text{heat}} = \sigma_{\eta}^{(1)}$, because all collisions cause heating rather than loss. 
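As a numerical check (ours, with assumed isotope masses), the critical energies follow from the maximum fractional energy transfer $4\mu/(m_{\rm CaF}+m_{\rm at})$ in a head-on collision ($\cos\Theta=-1$):

```python
# Assumed isotope masses (amu) for 40Ca19F, 7Li, 87Rb
m_CaF = 39.963 + 18.998
m_Li, m_Rb = 7.016, 86.909
E_trap = 1.0                 # trap depth in mK

def E_crit(m_at):
    mu = m_CaF * m_at / (m_CaF + m_at)
    f_max = 4 * mu / (m_CaF + m_at)   # fractional transfer at cos(Theta) = -1
    return E_trap / f_max

print(round(E_crit(m_Li), 2))   # 2.63 mK
print(round(E_crit(m_Rb), 2))   # 1.04 mK
```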
For the full DCS model these integrals must be evaluated numerically, but in the hard-sphere model the DCS are isotropic, $d\sigma_{\rm HS}/d\omega = \sigma_{\rm HS}/(4\pi)$, and the integrals can be evaluated analytically to give \begin{equation} \sigma_{\rm loss,HS}=\frac{1}{2}(1+\cos\Theta_{\rm crit}) \sigma_{\rm HS} \end{equation} and \begin{equation} \sigma_{\rm heat,HS}=\frac{1}{4}(1-\cos\Theta_{\rm crit})^2 \sigma_{\rm HS}. \end{equation} \begin{figure}[htb] \centering \includegraphics[width=0.47\textwidth]{figures/DCS_20mK} \includegraphics[width=0.47\textwidth]{figures/DCS_2mK} \caption{\label{dcsplots} (Color online) Differential cross sections and their contributions to heating and loss. The solid black line shows the full quantum-mechanical $d\sigma/d\omega$, while the solid red line shows $(1-\cos\Theta)d\sigma/d\omega$; the dashed lines show the corresponding quantities for the EDT-HS model. The vertical line shows the value of $\Theta_{\text{crit}}$. (a) $E^\text{lab}_\text{CaF} = 2$\,mK. (b) $E^\text{lab}_\text{CaF} = 20$\,mK. The coolant is Rb and $a=+1.5\bar{a}$.} \end{figure} Figure \ref{dcsplots} shows differential cross sections at two energies that correspond to $E^\text{lab}_\text{CaF} = 2$\,mK and 20\,mK for Rb+CaF. Both the full differential cross sections and those from the EDT-HS model are shown (solid and dashed black lines, respectively), and the corresponding quantities weighted by $1-\cos\Theta$ are shown in red. The values of $\Theta_{\rm crit}$ at the two energies are shown as vertical lines. Integrals over the complete range of $\cos\Theta$ under the black lines correspond to $\sigma_{\rm el}$, and under the red lines to $\sigma_\eta^{(1)}$; the latter is the same for the full DCS and EDT-HS models by construction. $\sigma_{\rm loss}$ is the area under the black lines to the left of $\Theta_{\rm crit}$, and $\sigma_{\rm heat}$ is the area under the red lines to the right of $\Theta_{\rm crit}$.
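The analytic hard-sphere results can be cross-checked by carrying out the integrals in Eqs.~(\ref{sigmaloss}) and (\ref{sigmaheat}) numerically for an isotropic DCS. A minimal sketch, with arbitrary illustrative values of $\sigma_{\rm HS}$ and $\cos\Theta_{\rm crit}$:

```python
# Cross-check of the analytic hard-sphere loss/heating cross sections
# against direct numerical quadrature of the isotropic DCS,
# dsigma/domega = sigma_HS/(4*pi).  The inputs are arbitrary test values.
import numpy as np

def quad(f, a, b, n=100001):
    """Simple trapezoidal quadrature on [a, b]."""
    c = np.linspace(a, b, n)
    v = f(c)
    return np.sum(0.5 * (v[:-1] + v[1:]) * np.diff(c))

sigma_hs, cos_crit = 1.7, 0.3
dcs = sigma_hs / (4.0 * np.pi)                     # isotropic DCS

# Eq. (sigmaloss): 2*pi * integral_{-1}^{cos_crit} (dsigma/domega) dcos
loss_num = 2.0 * np.pi * quad(lambda c: dcs * np.ones_like(c), -1.0, cos_crit)
# Eq. (sigmaheat): 2*pi * integral_{cos_crit}^{1} (dsigma/domega)(1-cos) dcos
heat_num = 2.0 * np.pi * quad(lambda c: dcs * (1.0 - c), cos_crit, 1.0)

loss_ana = 0.5 * (1.0 + cos_crit) * sigma_hs        # sigma_loss,HS
heat_ana = 0.25 * (1.0 - cos_crit) ** 2 * sigma_hs  # sigma_heat,HS
assert abs(loss_num - loss_ana) < 1e-8
assert abs(heat_num - heat_ana) < 1e-8
```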
It can be seen that at 20 mK the full DCS has a very large forwards peak; this dominates $\sigma_{\rm heat}$, even though its contribution is suppressed by the $1-\cos\Theta$ weighting. The resulting $\sigma_{\rm heat}$ is many times larger than in the EDT-HS model, which has no forward peak. The full DCS also has a secondary peak near $\cos\Theta=0.75$, which is outside $\Theta_{\rm crit}$ and so contributes to atom loss; the resulting $\sigma_{\rm loss}$ is also larger than in the EDT-HS model. At the lower energy of 2 mK, $\Theta_{\rm crit}$ is near $\Theta=\pi/2$. There is still a large forwards peak but it no longer dominates due to the changed range of integration, leading to similar cross sections for the two models. \begin{figure}[tb] \centering \includegraphics[width=0.47\textwidth]{figures/heatloss_crosssections} \caption{\label{heat_loss_cross_sec} (Color online) Loss (red) and heating (blue) cross sections as a function of CaF laboratory energy for the EDT-HS model (dashed lines) and the full DCS model (solid lines). $\sigma_\text{el}$ (solid black line) and $\sigma_\eta^{(1)}$ (dashed black line) are shown for comparison. The coolant is Rb and $a=+1.5\bar{a}$. } \end{figure} Figure \ref{heat_loss_cross_sec} shows how the heating and loss cross sections vary over the range of energies relevant to the cooling process. As explained above, at low energy, $E^\text{lab}_\text{CaF}<E_\text{crit}$, we have $\sigma_\text{heat}=\sigma_\eta^{(1)}$ and $\sigma_\text{loss}=0$. Above $E_\text{crit}$ the heating cross section falls off rapidly; for the EDT-HS model it falls to negligibly small values by a few mK. The cross section for the full DCS is several times larger than that for the EDT-HS model in this tail, but it also falls towards zero. 
The loss cross sections for the two models agree surprisingly well ($\pm\sim30\%$) in an intermediate energy range from about 2\,mK to 60\,mK; the extent of this similarity is greatest for this particular scattering length ($a=+1.5\bar{a}$), but it also exists up to about 20\,mK for the other scattering lengths investigated. Above this intermediate range, $\sigma_\text{loss}$ for the full DCS model does become larger than for the EDT-HS model, as we expect. The large peak around 1.5 mK in the elastic cross section is a d-wave feature that causes a large amount of backwards scattering around that energy; this significantly enhances the loss cross section because at this energy $\Theta_\text{crit}$ is still near backwards scattering. The overall effect is that the full DCS model gives significantly larger rates of both atom heating and atom loss than the EDT-HS model, especially at higher energies, exactly as we see in Fig.\,\ref{lossandheating}. This is at first sight surprising because each atom-molecule collision causes either atom heating or atom loss. However, at higher energies the total collision rate is considerably greater in the full DCS model than in the EDT-HS model, because the former is determined by $\sigma_{\rm el}$ and the latter by $\sigma_\eta^{(1)}$. The effects of atom heating and loss will, of course, be most significant when the atom number does not greatly exceed the molecule number. Table \ref{moleculenumber} shows the results of simulations for a variety of molecule numbers, with the atom number fixed at $10^{9}$, and once again compares the full DCS and EDT-HS models. In the first three rows, the trap depth for the atoms is 1\,mK. When the atom number is 100 times the molecule number, atom heating and loss are not significant effects. For each molecule, the first few collisions carry away most of the energy, and almost all of these collisions cause atom loss, rather than heating. 
Thus, for this case, 11\% of the atoms are lost, and the atom cloud heats up by just 13\,$\mu$K. The molecules thermalize completely with the atoms, and the majority are in the cold fraction. When the atom number is only 10 times the molecule number, the effects are far more dramatic. At the end of the simulation only 2.2\% of the atoms remain, and the temperature of those remaining has increased to 259\,$\mu$K. Since there are so few atoms remaining, only 70\% of the molecules now reach kinetic energies below 10\,mK, and the temperature of this fraction is increased to 596\,$\mu$K. The EDT-HS collision model underestimates the atom loss and atom heating, and it predicts more cold molecules, with a lower final temperature, than the full DCS model. It is interesting to explore whether the atomic trap depth of 1\,mK used in the simulations above is optimum. The last three rows of Table \ref{moleculenumber} show the results of simulations with the atomic trap depth increased to 5\,mK. As expected, this results in less atom loss and more atom heating. The fraction of cold molecules increases a little, but the temperature of the cold fraction increases significantly. This is especially evident when the atom number is only 10 times the molecule number. It is clear that large atomic trap depths are not necessarily beneficial for sympathetic cooling, and indeed there might be advantages in adjusting the trap depth as cooling proceeds. \begin{table}[tb] \caption{\label{moleculenumber} The effect of different molecule numbers ($N_{\text{mol}}$), with atom number fixed at $10^{9}$, for two values of the trap depth $E_{\text{trap}}$: 1\,mK and 5\,mK. The columns give the fraction of remaining atoms $f_{\text{at}}$, the atomic temperature $T_{\text{at}}$, the fraction of cold molecules $f_{\text{mol}}$, and the molecular temperature $T_{\text{mol}}$ after 50\,s. 
The main values are for the full DCS model, and the values in brackets are for the EDT-HS model.} \centering \begin{tabular}{c|c|cccc} \hline \hline $E_{\text{trap}}$ & $N_{\text{mol}}$ & $f_{\text{at}}$ (\%) & $T_{\text{at}}$ (${\mu}$K) & $f_{\text{mol}}$ (\%) & $T_{\text{mol}}$ (${\mu}$K)\\ \hline & $10^7$ & 89 (92) & 113 (107) & 89 (89) & 113 (108)\\ 1\,mK & 5${\times}10^7$ & 38 (59) & 159 (136) & 88 (88) & 168 (144)\\ & $10^8$ & 2.2 (18) & 259 (180) & 70 (83) & 596 (246)\\ \hline & $10^7$ & 95 (96) & 151 (133) & 90 (89) & 153 (134)\\ 5\,mK & 5${\times}10^7$ & 75 (79) & 396 (291) & 90 (91) & 435 (299)\\ & $10^8$ & 50 (57) & 704 (518) & 85 (87) & 927 (624)\\ \hline \hline \end{tabular} \end{table} \section{The effect of evaporative cooling} \label{sec:evaporative} Evaporative cooling can be used to reduce the temperature further. It seems most efficient to apply the evaporation to the atoms, and sympathetically cool the molecules, rather than to apply the evaporation directly to the molecules. Therefore, we suppose that the evaporation is done in the magnetic trap by applying an rf field which induces transitions between trapped and anti-trapped Zeeman states at a value of magnetic field only reachable by the most energetic atoms (an ``rf knife''). We study the sympathetic cooling of CaF when this evaporative cooling is applied to Rb, for the two cases $a=+1.5\bar{a}$ and $a=-0.5\bar{a}$. As the molecules cool, the molecular cloud shrinks: by choosing an appropriate evaporative cooling ramp, the size of the atom cloud can be optimized throughout the sympathetic cooling process. We follow the theory and notation of evaporative cooling detailed in \cite{Ketterle(1)96}. For simplicity, we assume that the atoms are held in a harmonic trap. The rf knife is set so that an atom is lost if its energy exceeds $\eta k_{\text{B}} T$, where $\eta$ is set quite large so that only the high-energy tail of the distribution is cut off. 
The rate of change of atom number $N_{\text{at}}$ follows \begin{equation}\label{evap1} \frac{d N_{\text{at}}}{d t} = -\frac{N_{\text{at}}}{\tau_{\text{ev}}}, \end{equation} where $1/\tau_{\text{ev}}$ is the evaporation rate. It is given by \begin{equation}\label{tauevap} \tau_{\text{ev}} = \frac{\sqrt{2} e^{\eta}}{\eta} \tau_{\text{el}}, \end{equation} where $\tau_{\text{el}}$ is the mean time between atom-atom elastic collisions at the trap center. This scales with atom number as \begin{equation}\label{collisionTimeScaling} \frac{\tau_{\text{el}}}{\tau_{\text{el,i}}} = \left(\frac{N_{\text{at}}}{N_{\text{at,i}}}\right)^{\alpha - 1}, \end{equation} where $\alpha = \eta/3 - 1$ and the subscript i denotes the initial value. Using Eqs.\,(\ref{evap1}), (\ref{tauevap}) and (\ref{collisionTimeScaling}), we obtain \begin{equation}\label{evap2} \frac{1}{N_{\text{at,i}}} \frac{d N_{\text{at}}}{d t} = -\frac{\kappa}{\tau_{\text{el,i}}}\left(\frac{N_{\text{at}}}{N_{\text{at,i}}}\right)^{2-\alpha}, \end{equation} where $\kappa = \eta/(\sqrt{2} e^{\eta})$. The solution to this equation is \begin{equation} \frac{N_{\text{at}}(t)}{N_{\text{at,i}}} = \left(1-\left(\alpha-1\right)\kappa\frac{t}{\tau_{\text{el,i}}}\right)^{1/(\alpha - 1)}. \end{equation} The mean time between collisions at the start of evaporation is $\tau_{\text{el,i}} = 1/(\rho_0 \sigma \sqrt{2}\bar{v})=70.5$\,ms, where $\rho_{0} = 10^{11}$\,cm$^{-3}$ is the initial density at the trap center, $\sigma = 8\pi \times (95 a_{0})^{2}$ is the elastic cross section of $^{87}$Rb at low temperature~\cite{Egorov(1)13}, and $\sqrt{2}\bar{v} = 0.22$\,m/s is the mean relative velocity between two atoms at the initial temperature of 100\,$\mu$K. The temperature of the atoms scales as $T_{\text{at}}/T_{\text{at,i}} = (N_{\text{at}}/N_{\text{at,i}})^{\alpha}$, while the density scales as $n_{\text{at}}/n_{\text{at,i}} = (N_{\text{at}}/N_{\text{at,i}})^{1 -3\alpha/2}$. 
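As an illustrative aside (the physical constants and the $^{87}$Rb mass are assumed, not taken from the paper), the numbers above can be reproduced directly: the script below evaluates $\tau_{\text{el,i}}$ from the stated density, cross section and mean relative velocity, and checks the closed-form solution for $N_{\text{at}}(t)/N_{\text{at,i}}$ against a direct integration of Eq.~(\ref{evap2}).

```python
# Sketch: evaluate the initial elastic collision time tau_el,i and check
# the closed-form evaporation solution N(t)/N_i against an RK4
# integration of the rate equation dy/dt = -(kappa/tau_el) y^(2-alpha).
import math

a0 = 5.29177210903e-11                    # Bohr radius (m)
kB = 1.380649e-23                         # Boltzmann constant (J/K)
m_rb = 86.909 * 1.66053907e-27            # 87Rb mass (kg)

rho0 = 1e17                               # peak density 1e11 cm^-3 in m^-3
sigma = 8 * math.pi * (95 * a0) ** 2      # Rb-Rb elastic cross section
vbar = math.sqrt(8 * kB * 100e-6 / (math.pi * m_rb))   # mean speed at 100 uK
tau_el = 1.0 / (rho0 * sigma * math.sqrt(2) * vbar)    # ~70 ms

eta = 6.67                                # intermediate rf-knife parameter
alpha = eta / 3.0 - 1.0
kappa = eta / (math.sqrt(2) * math.exp(eta))

def y_analytic(t):
    """N(t)/N_i from the closed-form solution."""
    return (1.0 - (alpha - 1.0) * kappa * t / tau_el) ** (1.0 / (alpha - 1.0))

def y_numeric(t_end, n=20000):
    """Fourth-order Runge-Kutta integration of the rate equation."""
    f = lambda u: -(kappa / tau_el) * u ** (2.0 - alpha)
    dt, y = t_end / n, 1.0
    for _ in range(n):
        k1 = f(y); k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y

assert abs(y_numeric(10.0) - y_analytic(10.0)) < 1e-6
```

With these assumed constants the collision time comes out close to the quoted 70.5\,ms (the small residual depends on the rounding of the input values), and for $\eta = 6.67$ roughly 40\% of the atoms remain after 10\,s of evaporation.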
In our simulations, we change the atom number, temperature, density and radius in time, according to these results. Otherwise, the simulation is unchanged. We stop the evaporation when the atoms reach 1\,$\mu$K. \begin{figure}[tb] \centering \includegraphics[width=0.47\textwidth]{figures/evhistogram} \caption{\label{evhistogram} (Color online) Kinetic energy distributions at four different times (2\,s, 10\,s, 20\,s and 50\,s) and for two values of the evaporative cooling parameter: (a) $\eta = 5.52$, (b) $\eta = 8.14$. For comparison, a Maxwell-Boltzmann distribution at 1\,$\mu$K is shown by a red line. The coolant is Rb and $a=+1.5\bar{a}$.} \end{figure} Figure~\ref{evhistogram}(a) shows how the kinetic energy distribution of the molecules evolves with time when $\eta=5.52$ and $a=+1.5\bar{a}$. At 2\,s, the distribution is similar to the case without evaporation (see Fig.~\ref{rbhistogram}(a)), but by 10\,s there is a large difference. For this value of $\eta$ the atoms initially cool quickly, many atoms are ejected, and the density gradually increases. About half the molecules cool along with the atoms and these have kinetic energy below 100\,$\mu$K at 10\,s. The other half remain uncooled because they find themselves outside the rapidly shrinking atom cloud. After 50\,s the cold fraction is fully thermalized to the 1\,$\mu$K temperature of the atom cloud. Figure~\ref{evhistogram}(b) shows the corresponding evolution when $\eta =8.14$. In this case, the evaporation initially proceeds slowly, and the molecule distribution remains similar to the case without evaporation for the first 10\,s. Because the atom cloud shrinks more slowly a larger number of molecules are captured into the cold fraction, and these then cool to 1\,$\mu$K on a 50\,s timescale. \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{figures/evaporative} \caption{\label{evaporative} (Color online) Sympathetic cooling of molecules with evaporative cooling applied to the atoms. 
Graphs show the time evolution of (a) the fraction of molecules with kinetic energy below 1\,mK, when $a= +1.5\bar{a}$; (b) the mean kinetic energy of the ultracold fraction when $a= +1.5\bar{a}$; (c) the mean kinetic energy of the ultracold fraction when $a= -0.5\bar{a}$. (i, black) $\eta = 5.52$, (ii, red) $\eta = 6.67$, (iii, blue) $\eta = 8.14$. In (b) and (c), the dashed lines show how the atomic temperature evolves. The long-dash green line shows the atom temperature without evaporative cooling.} \end{figure} Figure~\ref{evaporative}(a,b) show the fraction of molecules with kinetic energy below 1\,mK, and the mean kinetic energy of that fraction, using $a=+1.5\bar{a}$ and three different values of $\eta$: 5.52, 6.67, and 8.14. When $\eta=8.14$ the atom cloud cools slowly at early times, and this gives the molecules enough time to thermalize with the atoms before the atom cloud shrinks too much. After this initial thermalization to the atom temperature, the molecular temperature follows the evaporative cooling of the atoms very closely. The ultracold fraction is high in this case, reaching 85\% after 50\,s. However, it takes the full 50\,s for this fraction to reach 1\,$\mu$K. For this value of $\eta$, the atom density increases by a factor of 70 over 50\,s, and the mean atom-molecule collision rate increases from 4\,s$^{-1}$ to 45\,s$^{-1}$. When $\eta = 6.67$ the atoms cool more rapidly and the cloud size shrinks more rapidly. Consequently, the ultracold fraction of molecules is reduced to 74\% but this fraction now reaches 1\,$\mu$K in 30\,s. When $\eta = 5.52$ the atoms initially cool quickly, but the cooling rate slows down as time goes on because the density does not increase rapidly enough to compensate for the decrease in atom velocity. The ultracold fraction of molecules reduces to 59\%. The mean kinetic energy of this fraction falls quickly, reaching 100\,$\mu$K in 3.4\,s, and 10\,$\mu$K in 17\,s. 
Therefore, evaporative cooling with a relatively low $\eta$ is a good strategy for cooling rapidly to temperatures above 10\,$\mu$K. However, the cooling slows down at longer times and it ultimately takes longer to reach 1\,$\mu$K than for the intermediate value of $\eta$. Finally, we consider the case where $a=-0.5\bar{a}$. This is a highly unfavorable case compared to $a=+1.5\bar{a}$, both because the elastic cross section in the ultracold limit is nine times smaller and because there is a deep Ramsauer-Townsend minimum in the cross section for collision energies slightly below 100\,$\mu$K, as can be seen in Fig.\,\ref{crosssection}. We find that the fraction of molecules with kinetic energy below 1\,mK is almost unchanged from that shown in Fig.\,\ref{evaporative}(a). This is to be expected since, at energies higher than 1\,mK, the cross sections for the two values of $a$ are not too different. Figure \ref{evaporative}(c) shows how the mean kinetic energy of the ultracold fraction evolves when $a=-0.5\bar{a}$. Because of the lower collision rate, the mean kinetic energy of the molecules lags behind that of the atoms, instead of the two being locked together as they are in the case of $a=+1.5\bar{a}$. The molecules are slow to reach 20\,$\mu$K for all values of $\eta$, because they have to cool through the Ramsauer-Townsend minimum to do so. For $\eta=5.52$, the atoms cool too quickly and the molecules have not thermalized with the atoms even after 50\,s. For $\eta=8.14$ the initial cooling rate of the atoms is slow enough that the molecule temperature can more closely follow the atom temperature, both reaching 1\,$\mu$K in about 50\,s. The cooling of the molecules is fastest for the intermediate value of $\eta$. In particular, the mean kinetic energy of the molecules falls rapidly as soon as it is below 20\,$\mu$K, and it reaches 1\,$\mu$K in 36\,s. 
We see that, even for this unfavorable value of $a$, evaporative cooling of the atoms can bring the molecule temperature down to 1\,$\mu$K on a reasonable timescale, provided a suitable value of $\eta$ is chosen. It is clear that knowledge of the actual atom-molecule scattering length will be needed to choose the optimum conditions for evaporative cooling. \section{Conclusions} \label{sec:conclusion} In this paper, we have addressed the methodology for modeling sympathetic cooling of molecules by ultracold atoms, and we have studied in detail the results of simulations for a prototype case where ground-state CaF molecules in a microwave trap are overlapped with ultracold Li or Rb atoms in a magnetic trap. This work leads to a number of conclusions which we now summarize. Previous work on sympathetic cooling used a hard-sphere model of collisions based on an elastic cross section. This is appropriate at very low energies (in the s-wave regime), but breaks down badly for heavy molecules in the millikelvin regime. We have shown that a hard-sphere model based on an elastic cross section significantly over-estimates the cooling rate for collision energies above the s-wave scattering regime. A hard-sphere collision model that uses the energy-dependent momentum transport cross section, $\sigma_{\eta}^{(1)}$, gives the correct molecule cooling rate, but underestimates both the heating of the atoms and the loss of atoms from the trap. We have therefore used the full differential cross section to model atom-molecule collisions, so that the cooling of the molecules and the associated heating and loss of atoms are all modelled accurately. We have studied sympathetic cooling of CaF with both Rb and Li over a range of typical values of the atom-molecule scattering length $a$. We find that Rb offers significant advantages over Li as a coolant for ground-state molecules. 
The mean scattering length ${\bar a}$ is almost twice as large for Rb, and so it is likely that the true scattering length will also be larger for Rb. The mean energy transfer is proportional to $\mu/(m_{{\rm CaF}}+m_{{\rm at}})$ which is 0.48 for Rb, but only 0.19 for Li. If $a$ happens to be negative there can be a deep Ramsauer-Townsend minimum in the cross section. For Li, the minimum typically occurs when $E^{{\rm lab}}_{{\rm CaF}}$ is between 1 and 10\,mK, and the molecules cool very slowly because their energies must pass through this minimum. For Rb, the minimum is shifted down an order of magnitude in energy, and so the molecules do not encounter the minimum until they have reached the ultracold regime. For Li, the cooling rate is very sensitive to the actual value of $a$, while for Rb the initial cooling rate is fairly insensitive to $a$ because the Rb+CaF cross section conforms closely to a classical result, independent of $a$, down to temperatures near 1\,mK. This brings less uncertainty about the likely results of sympathetic cooling experiments if Rb is used. These advantages of Rb as a coolant are likely to extend to other molecules of a similar or greater mass. Finally, it is experimentally easier to prepare large, dense samples of ultracold Rb than of ultracold Li. It should be noted that the preference for Rb over Li applies only to ground-state molecules that cannot be lost from the trap through inelastic collisions. For molecules in static magnetic or electric traps, a light collision partner such as Li, Mg or H provides a higher centrifugal barrier than a heavy one such as Rb, and this may be important for suppressing low-energy inelastic collisions \cite{Wallis:MgNH:2009, Wallis:LiNH:2011, Gonzalez-Martinez:H+mol:2013}. 
For molecules with an initial temperature of 70\,mK, cooled by Rb with a temperature of 100\,$\mu$K and a peak density $10^{11}$\,cm$^{-3}$, we find that, after 10\,s, 75\% of the molecules have cooled into a distribution with a temperature of 200\,$\mu$K. If the initial temperature of the molecules is reduced to 20\,mK, this fraction increases to 99\% due to improved overlap between molecule and atom clouds. By arranging for the atom trap depth to be far below the initial molecule temperature, we can ensure that the majority of the energy in the molecule cloud is removed by atoms that are lost from the trap, instead of heating the atom cloud. For efficient cooling the atom number should exceed the molecule number by at least a factor of 100. By applying evaporative cooling to the atoms, the molecules can be sympathetically cooled more rapidly, or they can be cooled to far lower temperatures. For values of the scattering length in the likely range, and with a suitable choice of evaporation ramp, 70\% of the molecules can be cooled to 1\,$\mu$K within about 30\,s. These are all encouraging results: using experimentally achievable atom numbers, densities and temperatures, sympathetic cooling to ultracold temperatures can work on a timescale that is short compared to achievable trap lifetimes. A good starting point for such experiments would be a mixed-species magneto-optical trap of molecules and atoms. Data underlying this article can be accessed at http://dx.doi.org/10.5281/zenodo.32993 and used under the Creative Commons CCZero licence. \acknowledgements This work has been supported by the UK Engineering and Physical Sciences Research Council (grant EP/I012044/1 and EP/M027716/1), and by the European Research Council. \bibliographystyle{apsrev4-1}
\section{Introduction} Quantum state discrimination plays a fundamental role in quantum information processing. It is well known that a set of quantum states can be perfectly distinguished by a positive operator-valued measure (POVM) if and only if these states are pairwise orthogonal \cite{nils}. In a multipartite setting, physical constraints sometimes prevent global measurements, so that only local operations with classical communication (LOCC) are available. Bennett et al. \cite{Ben99} presented examples of orthogonal product states that are indistinguishable under LOCC and termed this phenomenon quantum nonlocality without entanglement. The nonlocality here is in the sense that there exists some quantum information that can be inferred from a global measurement but cannot be read from the local correlations of the subsystems. A set of orthogonal states that is indistinguishable under LOCC is also said to be locally indistinguishable, or nonlocal. Local indistinguishability has been applied in quantum cryptographic primitives such as data hiding \cite{Terhal01,DiVincenzo02} and secret sharing \cite{Markham08,Rahaman15,WangJ17}. Since the work of Bennett et al. \cite{Ben99}, the problem of local discrimination of quantum states has attracted much attention. The maximally entangled states and the product states, being two extreme classes of pure states, have attracted the most interest regarding local distinguishability. Here we present an incomplete list of the results about the local distinguishability of maximally entangled states \cite{Gho01,Wal00,Wal02,Fan04,Nathanson05,Cohen07,Bandyopadhyay11,Li15,Fan07,Yu12,Cos13,Yu115,Wang19,Xiong19,Li20} and product states \cite{Ben99,Ran04,Hor03,Ben99b,DiVincenzo03,Zhang14,Zhang15,Zhang16,Xu16b,Xu16m,Zhang16b,Wang15,Wang17,Feng09, Yang13,Zhang17,Zhangj17,Halder18,Li18,Halder1909,Xu20a,Xu20b}.
Another direction of related research is to study how much entanglement is needed as a resource in order to distinguish quantum states that are locally indistinguishable \cite{Cohen08,Bandyopadhyay16,Zhang16E,Bandyopadhyay18,Lilv19}. Other important sets known to be locally indistinguishable are the unextendible product bases (UPBs): incomplete orthonormal sets of product states whose complementary space contains no product state \cite{Ben99b,DiVincenzo03,Feng06,J14,CJ15}. Recently, Halder \emph{et al.} \cite{Halder19} introduced a stronger form of local indistinguishability, i.e., local irreducibility. A set of multipartite orthogonal quantum states is said to be locally irreducible if it is not possible to locally eliminate one or more states from the set while preserving the orthogonality of the postmeasurement states. In this setting, they proposed the concept of strong nonlocality without entanglement. A set of orthogonal multipartite product states is said to be strongly nonlocal if it is locally irreducible for every bipartition of the systems. They provided the first two examples of strongly nonlocal sets of product states in $\mathbb{C}^3\otimes\mathbb{C}^3 \otimes\mathbb{C}^3$ and $\mathbb{C}^4\otimes\mathbb{C}^4 \otimes\mathbb{C}^4$ and raised the question of how to extend their results to multipartite quantum systems and to sets of unextendible product bases \cite{Halder19}. Quite recently, Zhang \emph{et al.} \cite{Zhang1906} extended the concept of strong nonlocality to more general settings. However, only a few sets have been proven to be strongly nonlocal \cite{Halder19,Zhang1906,Rout1909,Rout1910,Tian20,Shi20S}. Most of the known results are in the tripartite setting.
Here we propose a form of nonlocality, called genuine nonlocality, which lies between the local-indistinguishability-based nonlocality and the local-irreducibility-based strong nonlocality (the definition here is slightly different from that given by Rout \emph{et al.} in Ref. \cite{Rout1909}). A set of orthogonal multipartite quantum states is said to be genuinely nonlocal if it is locally indistinguishable for every bipartition of the systems. A natural question arises: are there genuinely nonlocal sets of fully product states for all possible multipartite quantum systems? In this paper, we address this problem. The rest of this article is organized as follows. In Sec. \ref{second}, we give some necessary notation and definitions and a basic result on the nonlocality of bipartite product bases. In Sec. \ref{third}, we present a general method for obtaining genuinely nonlocal sets of multipartite product states. Finally, we draw a conclusion and present some interesting problems in Sec. \ref{fifth}. \vskip 8pt \section{Locally indistinguishable set of bipartite product states}\label{second} For any integer $n\geq 2$, we denote by $U(n)$ the set of all unitary matrices of dimension $n$. Throughout this paper, we use the following subset of unitary matrices $$ U_{FL}(n):=\{(h_{ij})_{i,j=1}^{n}\in U(n) \;|\; h_{1k},h_{nk} \neq 0, k=1,\cdots,n \},$$ that is, the set of $n$-dimensional unitary matrices whose elements in the first and last rows are all nonzero. Let $n\geq 3$ be an integer and $\mathcal{H}$ be a Hilbert space of dimension $n$. Assume that $\mathcal{B}=(|1\rangle,\cdots, |n\rangle)$ is an $n$-tuple of vectors in $\mathcal{H}$ and that these $n$ vectors constitute an orthonormal basis of $\mathcal{H}$. We call $\mathcal{B}$ an ordered orthonormal basis of $\mathcal{H}$.
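As an illustrative aside (not from the paper), membership in $U_{FL}(n)$ is easy to check numerically; the discrete Fourier matrix, all of whose entries have modulus $1/\sqrt{n}$, is one example:

```python
# Illustrative check: the discrete Fourier matrix
# F_{jk} = exp(2*pi*i*j*k/n)/sqrt(n) is unitary with every entry of
# modulus 1/sqrt(n), so in particular its first and last rows have no
# zero entries and it belongs to U_FL(n).
import numpy as np

def dft_matrix(n):
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

n = 5
F = dft_matrix(n)
assert np.allclose(F.conj().T @ F, np.eye(n))   # unitary
assert np.all(np.abs(F[0]) > 1e-12)             # first row nonzero
assert np.all(np.abs(F[-1]) > 1e-12)            # last row nonzero
```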
For any $H=(h_{jk})_{j,k=1}^{n-1}\in U_{FL}(n-1)$, we define two operations on $\mathcal{H}$ with respect to $\mathcal{B}$ {\small $$H_{\mathcal{B}}^{(U)}:=\displaystyle\sum_{j=1}^{n-1}\sum_{k=1}^{n-1} h_{jk} |j\rangle\langle k|, H_{\mathcal{B}}^{(D)}:=\displaystyle\sum_{j=1}^{n-1}\sum_{k=1}^{n-1} h_{jk} |j+1\rangle\langle k+1|.$$ } That is, under the computational basis $\{|1\rangle,\cdots, |n\rangle\}$, their matrix representations are as follows $$ \begin{array}{c} H_{\mathcal{B}}^{(U)}=\left[\begin{array}{ll} H& \mathbf{0}_{(n-1)\times 1}\\ \mathbf{0}_{1\times (n-1)}& 0 \end{array} \right],\\[3mm] H_{\mathcal{B}}^{(D)}=\left[\begin{array}{ll} 0& \mathbf{0}_{1\times (n-1)}\\ \mathbf{0}_{(n-1)\times 1}& H \end{array} \right]. \end{array} $$ We call them the up extension and down extension of $H$ with respect to $\mathcal{B}$ respectively. \begin{figure}[h] \includegraphics[width=0.48\textwidth,height=0.38\textwidth]{bipartite_fig.eps} \caption{\label{couple_two_bi}States structure corresponding to $\{| \psi\rangle\}$ in Theorem \ref{biparite_small_number} (or $\{| \Psi\rangle\}$ in Theorem \ref{triparite_nonlocal}). } \end{figure} Motivated by the constructions of nonlocal sets of product states in Ref. \cite{Xu20a}, the following theorem is a generalized version of their results but the proof is more elegant. \begin{theorem}\label{biparite_small_number} Let $x,y\geq 3$ be integers and $X\in U_{FL}(x-1)$, $Y\in U_{FL}(y-1)$. Let $\mathcal{H}_A$($\mathcal{H}_B$) be a Hilbert space of dimension $x$($y$) with an ordered orthonormal basis $\mathcal{A} =(|1\rangle_A,\cdots, |x\rangle_A)$($\mathcal{B} =(|1\rangle_B,\cdots, |y\rangle_B)$). The following $2(x+y)-4$ product states in $\mathcal{H}_A\otimes\mathcal{H}_B$ are locally indistinguishable (See Fig. 
\ref{couple_two_bi}) $$ \begin{array}{l} |\psi_{i}\rangle:=|1\rangle_A \otimes (Y_{\mathcal{B}}^{(U)} |i\rangle_B),\\ |\psi_{y-1+j}\rangle:=(X_{\mathcal{A}}^{(U)} |j\rangle_A) \otimes |y\rangle_B,\\ |\psi_{x+y-3+k}\rangle:=|x\rangle_A \otimes (Y_{\mathcal{B}}^{(D)} |k\rangle_B),\\ |\psi_{x+2y-4+l}\rangle:=(X_{\mathcal{A}}^{(D)} |l\rangle_A) \otimes |1\rangle_B,\\ \end{array} $$ where $1\leq i\leq y-1, 1\leq j\leq x-1, 2\leq k \leq y, 2\leq l\leq x.$ \end{theorem} \noindent {\emph{ Proof}.} Suppose Alice starts with a measurement $\{M_a^\dagger M_a\}_{a=1}^S$. The postmeasurement states should be orthogonal to each other, i.e. $$\langle\psi_i|M_a^\dagger M_a \otimes \mathbb{I}_B|\psi_j\rangle=0, \text{ for } i\neq j.$$ Let $M:=M_a^\dagger M_a$. Suppose its matrix representation under the ordered basis $\mathcal{A}$ is $(m_{ij})_{i,j=1}^x$. Then one finds $$M=\displaystyle \sum_{i=1}^x\sum_{j=1}^x m_{ij} |i\rangle_A \langle j|.$$ \begin{figure*} \includegraphics[width=0.4\textwidth,height=0.28\textwidth]{figures/bipartite_formal.eps} \includegraphics[width=0.4\textwidth,height=0.28\textwidth]{figures/bipartite_permute.eps} \caption{\label{two-compare} The left-hand side draws the boundary states corresponding to the ordered bases $\mathcal{A}:=(|1\rangle_A,|2\rangle_A,|3\rangle_A,|4\rangle_A,|5\rangle_A,|6\rangle_A)$ and $\mathcal{B}:=(|1\rangle_B,|2\rangle_B,|3\rangle_B,|4\rangle_B,|5\rangle_B,|6\rangle_B,|7\rangle_B,|8\rangle_B,|9\rangle_B)$.
The right-hand side presents the boundary states corresponding to the ordered bases $\mathcal{A}':=(|2\rangle_A,|1\rangle_A,|3\rangle_A,|4\rangle_A,|5\rangle_A,|6\rangle_A)$ and $\mathcal{B}':=(|2\rangle_B,|1\rangle_B,|3\rangle_B,|4\rangle_B,|5\rangle_B,|6\rangle_B,|7\rangle_B,|9\rangle_B,|8\rangle_B)$, but draws them under the ordered bases $\mathcal{A}$ and $\mathcal{B}$.} \end{figure*} Because $M_a\otimes \mathbb{I}_y|\psi_1\rangle$ is orthogonal to the set of states $\{M_a\otimes \mathbb{I}_y|\psi_{x+2y-4+l}\rangle \big | l=2,3,\cdots, x\}$, we have the following equations $${}_{A}\langle 1| M X_{\mathcal{A}}^{(D)} |l\rangle_A=0, l=2,3,\cdots, x.$$ These equalities are equivalent to the matrix equality $[m_{12},m_{13},\cdots,m_{1x}]X=[0,0,\cdots,0]$. As $X$ is invertible, we have $[m_{12},m_{13},\cdots,m_{1x}]=[0,0,\cdots,0]$. As $M$ is Hermitian, we also have $[m_{21},m_{31},\cdots,m_{x1}]=[0,0,\cdots,0]$. Because $M_a\otimes \mathbb{I}_y |\psi_{x+y-3+2}\rangle$ is orthogonal to the set of states $\{M_a\otimes \mathbb{I}_y |\psi_{y-1+j}\rangle \big | j=1,2,\cdots, x-1\}$, we have the following equations $${}_{A}\langle x| M X_{\mathcal{A}}^{(U)}|j\rangle_A=0, j=1,2,\cdots, x-1.$$ These equalities are equivalent to the matrix equality $[m_{x1},m_{x2},\cdots,m_{x(x-1)}]X=[0,0,\cdots,0]$. As $X$ is invertible, we have $[m_{x1},m_{x2},\cdots,m_{x(x-1)}]=[0,0,\cdots,0]$. As $M$ is Hermitian, we also have $[m_{1x},m_{2x},\cdots,m_{(x-1)x}]=[0,0,\cdots,0]$. Because the states in $\{M_a\otimes \mathbb{I}_y |\psi_{y-1+j}\rangle \big | j=1,2,\cdots, x-1\}$ are pairwise orthogonal to each other, we have the following equations $${}_A\langle j_1|{X_{\mathcal{A}}^{(U)}}^\dagger M X_{\mathcal{A}}^{(U)} |j_2\rangle_A =0, \text{ for } 1\leq j_1\neq j_2\leq x-1. $$ If we define $M^{(u)}:=(m_{ij})_{i,j=1}^{x-1}$, the above equalities are equivalent to $X^\dagger M^{(u)} X=\text{diag}(\alpha_1,\cdots,\alpha_{x-1})$.
Since $X$ is a unitary matrix, we have $$M^{(u)} X=X\text{diag}(\alpha_1,\cdots,\alpha_{x-1}).$$ In the following, we compare the first rows of the matrices on both sides of the above equality. Suppose that $X=(X_{ij})_{i,j=1}^{x-1}$. Then the first row of $M^{(u)}X$ is $[m_{11}X_{11},m_{11}X_{12},\cdots, m_{11}X_{1(x-1)}]$. Meanwhile, the first row of $X\text{diag}(\alpha_1,\cdots,\alpha_{x-1})$ is $[\alpha_1X_{11},\alpha_2X_{12},\cdots, \alpha_{x-1}X_{1(x-1)}].$ Comparing these two vectors, one can derive $[\alpha_1,\alpha_2,\cdots,\alpha_{x-1}]=[m_{11},m_{11},\cdots,m_{11}]$ as $X_{11},X_{12},\cdots,X_{1(x-1)}$ are all nonzero. Hence $$M^{(u)}=X\text{diag}(\alpha_1,\cdots,\alpha_{x-1})X^\dagger= m_{11}\mathbb{I}_{x-1}.$$ Using the orthogonality relations among the states in $\{M_a\otimes \mathbb{I}_y |\psi_{x+2y-4+l}\rangle \big | l=2,3,\cdots, x\}$, we have $${}_A\langle l_1|{X_{\mathcal{A}}^{(D)}}^\dagger M X_{\mathcal{A}}^{(D)} |l_2\rangle_A =0, \text{ for } 2\leq l_1\neq l_2\leq x. $$ If we define $M^{(d)}:=(m_{ij})_{i,j=2}^x$, the above equalities are equivalent to $X^\dagger M^{(d)} X=\text{diag}(\beta_2,\cdots,\beta_{x})$. In a similar way (now comparing the last row instead of the first), we can get $ M^{(d)}=m_{xx}\mathbb{I}_{x-1}. $ Therefore, the Hermitian matrix $M$ is of the form $m_{11} \mathbb{I}_x$. That is, Alice can only start with a trivial measurement. By the symmetry of the constructed states, Bob can also only start with a trivial measurement.\hfill \vrule height7pt width 7pt depth 0pt \vskip 5pt \noindent {\bf Remark:} Notice that the product states we constructed in Theorem \ref{biparite_small_number} span the same subspace as the ``boundary states" with respect to the ordered bases $\mathcal{A}$ and $\mathcal{B}$ (the outermost layer of a rectangle under the ordered bases $\mathcal{A}$ and $\mathcal{B}$).
One finds that $ \text{span}_\mathbb{C}\{|\psi_1\rangle,\cdots, |\psi_{2x+2y-4}\rangle\}$ is equal to $$\text{span}_{\mathbb{C}}\{\mathcal{A}_i\otimes\mathcal{B}_j | i\in\{1,x\} \text{ or } j\in \{1,y\}\}$$ where $\mathcal{A}_i$ ($\mathcal{B}_j$) is the $i$-th ($j$-th) element of the $x$-tuple $\mathcal{A}$ (the $y$-tuple $\mathcal{B}$) (see Fig. \ref{two-compare}). \section{Constructing genuinely nonlocal sets from known ones}\label{third} As any set of orthogonal product states in $\mathbb{C}^2\otimes\mathbb{C}^d$ is locally distinguishable \cite{DiVincenzo03}, a necessary condition for an orthogonal set of fully product states in $\bigotimes_{i=1}^L\mathbb{C}^{d_i}$ to be genuinely nonlocal is $d_i\geq 3$ for all $i$. In this section, we show that there always exists some genuinely nonlocal set of fully product states in $\bigotimes_{i=1}^L\mathbb{C}^{d_i}$ whenever this necessary condition is fulfilled. \begin{theorem}\label{MainTheorem} Let $L\geq 3$ and $d_i\geq 3 \ (i=1,2,\cdots, L)$ be integers. Then there always exists an orthogonal set of fully product states in $\otimes_{i=1}^L\mathbb{C}^{d_i}$ that is genuinely nonlocal. \end{theorem} This conclusion can be derived from Theorem \ref{biparite_small_number}, Theorem \ref{triparite_nonlocal}, and Proposition \ref{Strong_Product_four} of this paper, together with the genuinely nonlocal set in $\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3$ constructed in Ref. \cite{Rout1909}. \vskip 5pt Notice that if $S=\{|\phi_k\rangle_A|\theta_k\rangle_B\}_{k=1}^N$ is $A|B$ locally indistinguishable, then $\mathcal{S}=\{|\phi_k\rangle_A|\theta_k\rangle_B|\varphi\rangle_{A_1}|\vartheta\rangle_{B_1}\}_{k=1}^N$ is also $AA_1|BB_1$ locally indistinguishable. Otherwise, Alice and Bob could prepare the ancillary qudit states $|\varphi\rangle_{A_1}$ and $|\vartheta\rangle_{B_1}$ on their respective sides and then use the strategy that locally distinguishes $\mathcal{S}$ to locally distinguish the states in $S$, a contradiction.
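Before proceeding, we note that the orthogonality of the boundary-state construction in Theorem \ref{biparite_small_number} is easy to check numerically. The sketch below is an illustration only, not part of any proof: it assumes that $U_{FL}(n)$ consists of $n\times n$ unitaries all of whose first-row and last-row entries are nonzero, and that $X_{\mathcal{A}}^{(U)}$ ($X_{\mathcal{A}}^{(D)}$) acts as $X$ on the span of the first (last) $x-1$ vectors of $\mathcal{A}$, with the discrete Fourier matrix (all of whose entries are nonzero) standing in for $X$ and $Y$.

```python
import numpy as np

def dft(n):
    # Discrete Fourier matrix: unitary with all entries nonzero, so in
    # particular its first and last rows are nonzero (our reading of U_FL(n)).
    w = np.exp(2j * np.pi / n)
    return np.array([[w**(j*k) for k in range(n)] for j in range(n)]) / np.sqrt(n)

def embed_up(X, d):
    # Assumed action of X^{(U)}: X on span{|1>,...,|d-1>}, identity on |d>.
    M = np.eye(d, dtype=complex)
    M[:d-1, :d-1] = X
    return M

def embed_down(X, d):
    # Assumed action of X^{(D)}: X on span{|2>,...,|d>}, identity on |1>.
    M = np.eye(d, dtype=complex)
    M[1:, 1:] = X
    return M

def boundary_states(x, y):
    X, Y = dft(x - 1), dft(y - 1)
    XU, XD = embed_up(X, x), embed_down(X, x)
    YU, YD = embed_up(Y, y), embed_down(Y, y)
    eA, eB = np.eye(x), np.eye(y)
    states  = [np.kron(eA[0], YU @ eB[i]) for i in range(y - 1)]      # |1>_A (Y^U |i>_B)
    states += [np.kron(XU @ eA[j], eB[y - 1]) for j in range(x - 1)]  # (X^U |j>_A) |y>_B
    states += [np.kron(eA[x - 1], YD @ eB[k]) for k in range(1, y)]   # |x>_A (Y^D |k>_B)
    states += [np.kron(XD @ eA[l], eB[0]) for l in range(1, x)]       # (X^D |l>_A) |1>_B
    return np.array(states)

S = boundary_states(4, 5)        # x = 4, y = 5: 2x + 2y - 4 = 14 states
gram = S.conj() @ S.T
ortho = np.allclose(gram, np.eye(len(S)), atol=1e-10)
print(len(S), ortho)             # 14 True
```

The four families are mutually orthogonal either through the $A$-factor (the ranges of $X^{(U)}$ and $X^{(D)}$ avoid $|x\rangle_A$ and $|1\rangle_A$ respectively) or through the $B$-factor, exactly as used in the proof above.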
Moreover, we have the following observation (see also Ref. \cite{Rout1910}). \begin{observation}\label{obser1} Let $S=\{\ket{\Psi_k}_{AB} \}_{k=1}^{N}$ be a nonlocal product set shared between Alice and Bob. Consider the set $\mathcal{S}:=\{\ket{\Psi_k}_{AB}\otimes\ket{\Phi_0}_{A_1 \cdots A_m}\otimes\ket{\Theta_0}_{B_1 \cdots B_n}\}_{k=1}^{N}$, where $\ket{\Phi_0}_{A_1 \cdots A_m}$ and $\ket{\Theta_0}_{B_1 \cdots B_n}$ are fully product states whose subsystems $\{A_i\}_{i=1}^m$ and $\{B_j\}_{j=1}^n$ are held by additional parties. The resulting set $\mathcal{S}$ is also nonlocal between $\mathcal{H}_A\otimes (\otimes_{i=1}^m\mathcal{H}_{A_i})$ and $\mathcal{H}_B\otimes (\otimes_{j=1}^n\mathcal{H}_{B_j})$. \end{observation} With this observation, we show how to use the special structure of the nonlocal sets in Theorem \ref{biparite_small_number} to construct a genuinely nonlocal set of product states in tripartite systems. \begin{figure}[h] \includegraphics[width=0.44\textwidth,height=0.33\textwidth]{tripartite_yz.eps} \caption{\label{tripart_xy}Structure of the states $\{| \Phi\rangle\}$ in Theorem \ref{triparite_nonlocal}. } \end{figure} \begin{theorem}\label{triparite_nonlocal} Let $x,z\geq 3,y\geq 4$ be integers and $X$, $Y$, $Z$ belong to $U_{FL}(x-1)$, $U_{FL}(y-1)$ and $U_{FL}(z-1)$ respectively. Let $\mathcal{H}_A$, $\mathcal{H}_B$, $\mathcal{H}_C$ be Hilbert spaces of dimension $x,y,z$ respectively. Suppose that $\mathcal{A} =(|1\rangle_A,\cdots, |x\rangle_A)$, $\mathcal{B} =(|1\rangle_B,\cdots, |y\rangle_B)$, and $\mathcal{C} =(|1\rangle_C,\cdots, |z\rangle_C)$ are ordered orthonormal bases with respect to $\mathcal{H}_A$, $\mathcal{H}_B$, $\mathcal{H}_C$. The following $2x+4y+2z-8$ product states in $\mathbb{C}^x\otimes\mathbb{C}^y\otimes \mathbb{C}^z$ are pairwise orthogonal and form a genuinely nonlocal set (see Fig.
\ref{couple_two_bi}) $$ \begin{array}{l} |\Psi_{i}\rangle:=|1\rangle_A\otimes (Y_\mathcal{B}^{(U)} |i\rangle_B)\otimes |1\rangle_C,\\ |\Psi_{y-1+j}\rangle:=(X_\mathcal{A}^{(U)}|j\rangle_A)\otimes |y\rangle_B\otimes |1\rangle_C,\\ |\Psi_{x+y-3+k}\rangle:=|x\rangle_A \otimes (Y_\mathcal{B}^{(D)}|k\rangle_B)\otimes |1\rangle_C,\\ |\Psi_{x+2y-4+l}\rangle:=(X_\mathcal{A}^{(D)}|l\rangle_A)\otimes |1\rangle_B\otimes |1\rangle_C,\\ \end{array} $$ where $1\leq i\leq y-1, 1\leq j\leq x-1, 2\leq k \leq y, 2\leq l\leq x$ and (see Fig. \ref{tripart_xy}) $$ \begin{array}{l} |\Phi_{i}\rangle:=|2\rangle_A\otimes|1'\rangle_B\otimes (Z_{\mathcal{C}'}^{(U)}|i'\rangle_C),\\ |\Phi_{z-1+j}\rangle:=|2\rangle_A\otimes(Y_{\mathcal{B}'}^{(U)}|j'\rangle_B) \otimes |z'\rangle_C,\\ |\Phi_{y+z-3+k}\rangle:=|2\rangle_A\otimes|y'\rangle_B\otimes (Z_{\mathcal{C}'}^{(D)}|k'\rangle_C),\\ |\Phi_{y+2z-4+l}\rangle:=|2\rangle_A\otimes(Y_{\mathcal{B}'}^{(D)}|l'\rangle_B)\otimes |1'\rangle_C,\\ \end{array} $$ where $1\leq i\leq z-1, 1\leq j\leq y-1, 2\leq k \leq z, 2\leq l\leq y$. Here the ordered bases $\mathcal{B}'$ and $\mathcal{C}'$ are defined as follows $$\begin{array}{l} \mathcal{B}':=(|1'\rangle_B,|2'\rangle_B,\cdots,|(y-1)'\rangle_B,|y'\rangle_B),\\ \mathcal{C}':=(|1'\rangle_C,|2'\rangle_C,\cdots, |z'\rangle_C) \end{array} $$ where $|1'\rangle_B=|2\rangle_B$, $|2'\rangle_B=|1\rangle_B$, $|(y-1)'\rangle_B=|y\rangle_B$, $|y'\rangle_B=|y-1\rangle_B$, $|j'\rangle_B=|j\rangle_B$ for $3\leq j\leq y-2$, and $|1'\rangle_C=|2\rangle_C$, $|2'\rangle_C=|1\rangle_C$, $|k'\rangle_C=|k\rangle_C$ for $3\leq k\leq z$.
\end{theorem} \begin{figure}[h] \includegraphics[width=0.48\textwidth,height=0.34\textwidth]{tripartite_cubic.eps} \caption{\label{tripart_cubic}A schematic diagram of the states in Theorem \ref{triparite_nonlocal}.} \end{figure} \noindent \emph{Proof.} We notice that $ \text{span}_\mathbb{C}\{|\Psi_1\rangle,|\Psi_2\rangle,\cdots, |\Psi_{2x+2y-4}\rangle\}$ is equal to the linear space spanned by $$\mathcal{S}_{\Psi}:=\{|i\rangle_A|j\rangle_B|1\rangle_C \big | \ \ i\in\{1,x\} \text{ or } j\in \{1,y\}\}$$ while $ \text{span}_\mathbb{C}\{|\Phi_1\rangle,|\Phi_2\rangle,\cdots, |\Phi_{2y+2z-4}\rangle\}$ is equal to the linear space spanned by $$\begin{array}{l} \mathcal{S}_{\Phi}:=\{|2\rangle_A|j'\rangle_B|k'\rangle_C \big | \ \ j\in\{1,y\} \text{ or } k\in \{1,z\}\}\\ =\{|2\rangle_A|j\rangle_B|k\rangle_C \big | \ \ j\in\{2,y-1\} \text{ or } k\in \{2,z\}\}. \end{array}$$ As $\mathcal{S}_{\Psi}\cap \mathcal{S}_{\Phi}=\emptyset$ and $\mathcal{S}_{\Psi},\mathcal{S}_{\Phi}\subseteq \{|i\rangle_A|j\rangle_B|k\rangle_C \big | \ 1\leq i\leq x,1\leq j\leq y,1\leq k\leq z\}$, which is an orthonormal basis of $\mathcal{H}_A\otimes \mathcal{H}_B\otimes\mathcal{H}_C$, we have $\langle \Psi_u |\Phi_v\rangle=0$ for integers $u,v$ with $1\leq u\leq 2x+2y-4,1\leq v\leq 2y+2z-4.$ Therefore, the states in $\{|\Psi_u\rangle\}_{u=1}^{2x+2y-4} \cup \{|\Phi_v\rangle\}_{v=1}^{2y+2z-4}$ are pairwise orthogonal (Fig. \ref{tripart_cubic} gives a more intuitive picture of the orthogonality). To prove that the constructed set is genuinely nonlocal, note that there are only three ways to separate $ABC$ into two sets: $A|BC$, $B|CA$, and $C|AB$. By Theorem \ref{biparite_small_number} and Observation \ref{obser1}, the set $\{|\Psi_u\rangle\}_{u=1}^{2x+2y-4}$ is locally indistinguishable under the partitions $A|BC$ and $B|CA$, and the set $\{|\Phi_v\rangle\}_{v=1}^{2y+2z-4}$ is locally indistinguishable under the partitions $B|CA$ and $C|AB$. Hence the given set is genuinely nonlocal.
\hfill \vrule height7pt width 7pt depth 0pt \vskip 6pt In the following, we strengthen the results of Ref. \cite{Zhang17}, where locally indistinguishable multipartite product states were constructed from known bipartite ones. The following two propositions extend those constructions to the genuinely nonlocal setting. \begin{figure}[h] \includegraphics[width=0.45\textwidth,height=0.24\textwidth]{couple_one_to_l} \caption{\label{couple_one_to_l}Structure of the states in Proposition \ref{Strong_Product_four}. } \end{figure} \begin{proposition}\label{Strong_Product_four} Let $L\geq 4$ be an integer and $d_i\geq 3$ for all $1\leq i\leq L$. Let $S_i=\{|\psi_j^{(i)}\rangle |\phi_j^{(i)}\rangle\}_{j=1}^{n_i}\subseteq\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_{i+1}}$ be sets of product states that are locally indistinguishable for $i=1,2,\cdots,L-1$. Then the union of the following sets (see Fig. \ref{couple_one_to_l}) \begin{equation}\label{states_all_to_L1} \begin{array}{lllll} \mathcal{S}_1=\{|\psi_j^{(1)}\rangle|\phi_j^{(1)}\rangle|1\rangle|1\rangle|1\rangle\cdots|2\rangle\}_{j=1}^{n_1}, \\[2mm] \mathcal{S}_2=\{|\psi_j^{(2)}\rangle|2\rangle|\phi_j^{(2)}\rangle|1\rangle|1\rangle\cdots|1\rangle\}_{j=1}^{n_2},\\[2mm] \mathcal{S}_3=\{|\psi_j^{(3)}\rangle|1\rangle|2\rangle|\phi_j^{(3)}\rangle|1\rangle\cdots|1\rangle\}_{j=1}^{n_3},\\[2mm] \ \ \ \ \ \ \ \ \ \ \ \vdots\\[2mm] \mathcal{S}_{L-1}=\{|\psi_j^{(L-1)}\rangle|1\rangle|1\rangle\cdots|1\rangle|2\rangle|\phi_j^{(L-1)}\rangle\}_{j=1}^{n_{L-1}} \end{array} \end{equation} is also a genuinely nonlocal set of product states in $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}\otimes\cdots\otimes\mathbb{C}^{d_L}$. \end{proposition} \emph{Proof.} To distinguish the states of $\mathcal{S}_1$, by Observation \ref{obser1}, the first two parties must come together and perform a global measurement. Similarly, to distinguish the states of $\mathcal{S}_i$, the first and the $(i+1)$-th parties must come together and perform a global measurement.
Therefore, all the parties must come together to distinguish all the states in Eq. (\ref{states_all_to_L1}). Hence such a set of states is genuinely nonlocal. \hfill \vrule height7pt width 7pt depth 0pt \begin{figure}[h] \includegraphics[width=0.5\textwidth,height=0.14\textwidth]{couple_two.pdf} \caption{\label{couple_two}Structure of the states in Proposition \ref{Strong_Product_five}. } \end{figure} \begin{proposition}\label{Strong_Product_five}Let $L\geq 5$ be an integer and $d_i\geq 3$ for all $1\leq i\leq L$. Let $S_i:=\{|\psi_j^{(i)}\rangle |\phi_j^{(i)}\rangle\}_{j=1}^{n_i}\subseteq\mathbb{C}^{d_i}\otimes\mathbb{C}^{d_{i+1}}$ be sets of product states that are locally indistinguishable for $i=1,2,\cdots,L-1$. Then the union of the following sets (see Fig. \ref{couple_two}) \begin{equation}\label{states_two_to_L1} \begin{array}{lllll} \mathcal{S}_1=\{|\psi_j^{(1)}\rangle|\phi_j^{(1)}\rangle|2\rangle|1\rangle|1\rangle|1\rangle\cdots|1\rangle\}_{j=1}^{n_1}, \\[2mm] \mathcal{S}_2=\{|1\rangle|\psi_j^{(2)}\rangle|\phi_j^{(2)}\rangle|2\rangle|1\rangle|1\rangle\cdots|1\rangle\}_{j=1}^{n_2},\\[2mm] \mathcal{S}_3=\{|1\rangle|1\rangle|\psi_j^{(3)}\rangle|\phi_j^{(3)}\rangle|2\rangle|1\rangle\cdots|1\rangle\}_{j=1}^{n_3},\\[2mm] \ \ \ \ \ \ \ \ \ \ \ \vdots\\[2mm] \mathcal{S}_{L-1}=\{|2\rangle|1\rangle|1\rangle|1\rangle\cdots|1\rangle|\psi_j^{(L-1)}\rangle|\phi_j^{(L-1)}\rangle\}_{j=1}^{n_{L-1}} \end{array} \end{equation} is also a genuinely nonlocal set of product states in $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}\otimes\cdots\otimes\mathbb{C}^{d_L}$. \end{proposition} Its proof is analogous to that of Proposition \ref{Strong_Product_four}. To construct multipartite genuinely nonlocal sets, we can also start from some known genuinely nonlocal sets of product states in tripartite systems instead of bipartite nonlocal product states. \begin{proposition}\label{Strong_Three_To_Mul} Let $L\geq 3$ be an integer.
Let $\{|\psi_j\rangle |\phi_j\rangle|\chi_j\rangle\}_{j=1}^{n}\subseteq\mathbb{C}^{3}\otimes\mathbb{C}^{3}\otimes\mathbb{C}^{3}$ be a set of product states that is genuinely nonlocal. Then the union of the following sets (see Fig. \ref{couple_one_with_twoall}) $$ \begin{array}{lllll} \mathcal{S}_1=\{ |\psi_j\rangle|\phi_j\rangle|\chi_j\rangle |1\rangle|1\rangle|1\rangle\cdots|1\rangle|2\rangle|2\rangle\}_{j=1}^{n}, \\[2mm] \mathcal{S}_2=\{ |\psi_j\rangle|2\rangle|2\rangle|\phi_j\rangle|\chi_j\rangle|1\rangle \cdots|1\rangle|1\rangle|1\rangle\}_{j=1}^{n}, \\[2mm] \mathcal{S}_3=\{ |\psi_j\rangle|1\rangle|1\rangle|2\rangle|2\rangle|\phi_j\rangle|\chi_j\rangle|1\rangle\cdots|1\rangle\}_{j=1}^{n}, \\[2mm] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots\\[2mm] \mathcal{S}_L=\{ |\psi_j\rangle|1\rangle|1\rangle|1\rangle|1\rangle\cdots|2\rangle|2\rangle|\phi_j\rangle|\chi_j\rangle\}_{j=1}^{n}, \end{array} $$ is also a genuinely nonlocal set of product states in $\bigotimes_{i=1}^{2L+1} \mathcal{H}_{A_i}$, where $\mathcal{H}_{A_i}=\mathbb{C}^3$. \end{proposition} \begin{figure}[h] \includegraphics[width=0.45\textwidth,height=0.24\textwidth]{couple_one_with_twoall} \caption{\label{couple_one_with_twoall}Structure of the states in Proposition \ref{Strong_Three_To_Mul}. } \end{figure} In fact, the above constructions can be extended to much more general settings. Let $L\geq 3$ be an integer and $\mathcal{P}:=\{1,2,3,\cdots,L\}$. Let $\mathcal{H}:=\otimes_{j\in\mathcal{P}}\mathcal{H}_j$ be an $L$-party quantum system. Suppose there are $s$ proper subsets of $\mathcal{P}$: $\mathcal{P}_1, \mathcal{P}_2,\cdots,\mathcal{P}_s$, and denote $\overline{\mathcal{P}}_i:=\mathcal{P}\setminus \mathcal{P}_i$ for each $i$.
We make the following assumptions: \begin{enumerate} \item [{\rm (a)}] ${S}_i=\{|\Psi_j\rangle_{\mathcal{P}_i}\}_{j=1}^{n_i} $ is a genuinely nonlocal product set in $\mathcal{H}_{\mathcal{P}_i}:=\otimes_{j\in\mathcal{P}_i} \mathcal{H}_j$ for each $i\in \{1,2,\cdots,s\} $. \item [{\rm (b)}] There is a fully product state $|\Phi_i\rangle_{\overline{\mathcal{P}}_i}\in \otimes_{j\in\overline{\mathcal{P}}_i}\mathcal{H}_j$ for each $i$ such that the states in the union of the sets $\mathcal{S}_i=\{|\Psi_j\rangle_{\mathcal{P}_i}|\Phi_i\rangle_{\overline{\mathcal{P}}_i}\}_{j=1}^{n_i}$ ($1\leq i\leq s$) are mutually orthogonal. \end{enumerate} We use the notation $\mathfrak{P}:=(\mathcal{P}, \{\mathcal{P}_1, \mathcal{P}_2,\cdots,\mathcal{P}_s\})$. For each $\mathfrak{P}$, we associate a graph $G_{\mathfrak{P}}=(V_\mathfrak{P},E_\mathfrak{P})$ defined as follows: its vertex set is $V_\mathfrak{P}=\mathcal{P}$ and its edge set is $$E_\mathfrak{P}=\bigcup_{i=1}^s\{(u_i,v_i)| u_i,v_i\in \mathcal{P}_i \text{ and } u_i\neq v_i \}.$$ \begin{theorem}\label{general_genuine_nonlocal} Under the notation and assumptions of the preceding two paragraphs, if $G_\mathfrak{P}$ is connected, then the set $\mathcal{S}:=\cup_{i=1}^s \mathcal{S}_i$ is a genuinely nonlocal set of product states in $\otimes_{j\in\mathcal{P}}\mathcal{H}_j$. \end{theorem} \noindent \emph{Proof.} Suppose not; then there exists a nontrivial bipartition of $\mathcal{P}$, say $U\ |\ V$ (both $U$ and $V$ being nonempty subsets of $\mathcal{P}$), such that the set $\mathcal{S}$ is locally distinguishable when considered as a set of bipartite states in $(\otimes_{j\in{U}}\mathcal{H}_j)\bigotimes (\otimes_{j\in V}\mathcal{H}_j).$ By the connectivity of $G_\mathfrak{P}$, there must exist some edge $(u,v)\in E_\mathfrak{P}$ which connects the two sets $U$ and $V$, i.e. $u\in U$ and $v\in V$. By the definition of $E_\mathfrak{P}$, there exists some $i$ such that $u,v\in\mathcal{P}_i$.
However, the set $S_i$ is genuinely nonlocal in $\mathcal{H}_{\mathcal{P}_i}$ by assumption (a) above. So it is locally indistinguishable for the partition $$(U\cap \mathcal{P}_i) \ | \ (V\cap \mathcal{P}_i)$$ of $\mathcal{P}_i$, as both $U\cap \mathcal{P}_i$ and $V\cap \mathcal{P}_i$ are nonempty. By Observation \ref{obser1}, the set $\mathcal{S}_i$ is locally indistinguishable in the bipartite system $(\otimes_{j\in{U}}\mathcal{H}_j)\bigotimes (\otimes_{j\in V}\mathcal{H}_j).$ Since $\mathcal{S}_i\subseteq \mathcal{S}$, the set $\mathcal{S}$ must also be locally indistinguishable as a set of bipartite states in $(\otimes_{j\in{U}}\mathcal{H}_j)\bigotimes (\otimes_{j\in V}\mathcal{H}_j).$ Hence we obtain a contradiction, and the set $\mathcal{S}$ must be genuinely nonlocal. \hfill \vrule height7pt width 7pt depth 0pt \section{Conclusion and Discussion}\label{fifth} We study a strong form of local indistinguishability for sets of fully product states, the so-called genuinely nonlocal sets. We generalize the results on locally indistinguishable product states in bipartite systems of Ref. \cite{Xu20a} and provide a much more elegant proof. Based on a simple observation, we extend the results of Zhang \emph{et al.} in Ref. \cite{Zhang17} to the case of genuinely nonlocal sets. Moreover, we extend these results to a much more general setting by relating the construction of genuinely nonlocal sets to the connectivity of certain graphs. As a consequence, we can show that there always exists some genuinely nonlocal set of fully product states in $\otimes_{i=1}^L\mathbb{C}^{d_i}$ provided $d_i\geq 3$ for all $i$. One should note that the genuinely nonlocal sets we constructed here may be locally reducible in the sense introduced in Ref. \cite{Halder19}. It would therefore be interesting to find methods for characterizing the locally irreducible case.
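The connectivity criterion of Theorem \ref{general_genuine_nonlocal} is straightforward to apply in practice. The following sketch (with hypothetical subset families of our own choosing, purely for illustration) builds the graph $G_\mathfrak{P}$ from a family $\{\mathcal{P}_i\}$ and tests its connectivity by breadth-first search; the chain-like family mirrors the pattern of Proposition \ref{Strong_Product_five}, the star-like family mirrors Proposition \ref{Strong_Product_four}, while omitting a party disconnects $G_\mathfrak{P}$ so that the criterion no longer applies.

```python
from collections import deque
from itertools import combinations

def attached_graph(parts):
    # E_P: for each subset P_i, all pairs {u, v} with u, v in P_i and u != v.
    return {frozenset(e) for Pi in parts for e in combinations(sorted(Pi), 2)}

def is_connected(P, edges):
    # Plain breadth-first search over the vertex set P.
    adj = {v: set() for v in P}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    start = min(P)
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return seen == set(P)

P = {1, 2, 3, 4, 5}                       # hypothetical L = 5 parties
chain = [{1, 2}, {2, 3}, {3, 4}, {4, 5}]  # overlapping pairs (cf. Proposition 5)
star  = [{1, 2}, {1, 3}, {1, 4}, {1, 5}]  # party 1 in every subset (cf. Proposition 4)
split = [{1, 2}, {4, 5}]                  # party 3 touched by no subset
print(is_connected(P, attached_graph(chain)))  # True: the criterion applies
print(is_connected(P, attached_graph(star)))   # True
print(is_connected(P, attached_graph(split)))  # False: the criterion fails
```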
\vspace{2.5ex} \noindent{\bf Acknowledgments}\, \, This work is supported by the National Natural Science Foundation of China (11771419, 11875160, 11901084, 12005092, and U1801661), the China Postdoctoral Science Foundation (2020M681996), the Natural Science Foundation of Guangdong Province (2017B030308003), the Key R$\&$D Program of Guangdong Province (2018B030326001), the Guangdong Innovative and Entrepreneurial Research Team Program (2016ZT06D348), the Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20170412152620376, JCYJ20170817105046702, and KYTDPT20181011104202253), the Economy, Trade and Information Commission of Shenzhen Municipality (201901161512), the research startup funds of DGUT (GC300501-103), the Fundamental Research Funds for the Central Universities, and the Anhui Initiative in Quantum Information Technologies under Grant No. AHY150200.
\section{Introduction} Many real-world networks have modular structure~\cite{GirvanNewman,Ravasz,NewmanGirvan,Radicchi}: their nodes are organized into tightly-knit communities where the node-node connections are dense, with sparser connections in-between communities. This structure is often hierarchically nested, with groups of communities themselves organized into higher level modules. Given the relevance of modularity to features like functional units in metabolic networks~\cite{Ravasz}, it is not surprising that community structure has become one of the most intensely studied aspects of complex networks. Recently, Song, Havlin and Makse~\cite{Song1, Song2} discovered that certain modular networks also possess another remarkable characteristic: fractal scaling, where the hierarchy of modules shows a self-similar nesting at all length scales. Examples of such fractal networks include the WWW, the actor collaboration network, protein interaction networks in {\it E. coli}, yeast, and humans, the metabolic pathways in a wide variety of organisms~\cite{Song1}, and genetic regulatory networks in {\it S. cerevisiae} and {\it E. coli}~\cite{Yook}. Despite the widespread occurrence of fractal topologies, little is yet known about the nature of cooperative behavior on these networks, or even more generally on how modular structure affects collective ordering or correlations among interacting objects. In particular the Ising model has been investigated extensively on non-fractal scale-free networks~\cite{Aleksiejuk,Bianconi,Dorogov0,Leone,Igloi,Goltsev,Indekeu,Giuraniuc1,Giuraniuc2}, but only recently has a form of community structure been included: an Ising ferromagnet was studied on two weakly coupled Barabasi-Albert scale-free networks with a varying density of inter-network links~\cite{Suchecki}, finding stable parallel and antiparallel orderings of the two communities at low temperatures. 
It would be interesting to examine a system with a large number of interacting communities, capturing more fully the complex modular organization of real-world examples. In this paper, we introduce a hierarchical lattice~\cite{BerkerOstlund,KaufmanGriffiths1,KaufmanGriffiths2} network model exhibiting a nested modular structure with fractal scaling. Hierarchical lattices (part of a broader class of deterministically constructed networks~\cite{Barabasi,Comellas,RavaszBarabasi,Andrade,Doye,Zhang1,ZhangComellas,ZhangRong,Zhang2,Zhang3}) have been the focus of increasing attention recently~\cite{Dorogov2,HinczewskiBerker,Rozenfeld,Zhang}, since they can be tailored to exhibit various features---including scale-free degree distributions, small-world behavior, and fractal structure---for which exact analytical expressions can be derived. The explicit results from such deterministic models can serve as a testing ground for approximate phenomenological approaches, and a starting point for extensions incorporating additional realistic features like randomness~\cite{Zhang}. Here we exploit another advantage of such lattices: the ferromagnetic Ising model can be solved through an exact renormalization-group (RG) transformation. Varying the strength of interactions between communities, we find an unusual combination of thermodynamic properties. At high temperatures or weak inter-community coupling the system is disordered, but the free energy as a function of magnetic field $H$ is nonanalytic at $H=0$. This is due to the presence of rare large communities, similar to the Griffiths singularity in bond-diluted ferromagnets below the percolation threshold~\cite{Griffiths}: there the system is partitioned into disjoint clusters of connected sites, and the small probability of arbitrarily large clusters leads to an analogous nonanalyticity in the free energy above $T_c$. 
As we lower the temperature in our network, true long-range order is never achieved at $T>0$, even for the strongest inter-community coupling. Surprisingly, we find instead a low-temperature phase with algebraic order, just as in the $XY$ model: the magnetization is zero, but there is power-law decay of pair correlations with distance, and the thermodynamic functions throughout the entire phase behave as if at a critical point. The organization of the paper is as follows: in Sec.~II we describe the network's construction (Sec.~II.A) and summarize its topological properties (Sec.~II.B), including its community structure and fractal scaling characteristics. Sec.~III examines thermodynamic properties of the Ising model on the network, derived from an exact renormalization-group approach (Sec.~III.A). We discuss the phase diagram and critical behavior in Sec.~III.B, and then focus on two particularly interesting aspects of the results: the presence of Griffiths singularities in the free energy (Sec.~III.C), and the nature of long-range pair correlations in the low-temperature phase (Sec.~III.D). We present our conclusions in Sec.~IV, and note that the behavior described here is characteristic of a broader class of hierarchical lattice complex networks---a fact that will be explored in future studies. \section{Network Properties} \subsection{Construction procedure} Our lattice has two types of bonds, depicted as solid and dashed lines respectively. At each construction step, every solid bond is replaced by the connected cluster on the right of Fig.~\ref{fig:1}(a), and this procedure is iterated $t$ times. The initial $t=0$ lattice is two sites connected by a single solid bond. Fig.~\ref{fig:1}(b) shows the network at $t=4$. All results quoted below are for the infinite lattice limit, $t \to \infty$. \begin{figure}[t] \centering \includegraphics*{latticefig2.eps} \caption{(a) Construction of the hierarchical lattice. (b) The lattice after $t=4$ steps in the construction. 
Communities at the $n=0,1,2$ levels of the hierarchy are shown with white, dark gray, and light gray backgrounds respectively.}\label{fig:1} \end{figure} \subsection{Topological characteristics} {\it Size of network:} The total number of sites $N = \frac{2}{3}(4^t+2)$, the total number of bonds $N_b = \frac{1}{3}(4^{t+1}-1)$, and the diameter of the network (the maximum possible shortest-path distance between any two sites) is $D = 2^{t+1}-1$. {\it Degree distribution:} The probability $P(k)$ of finding a node with degree $k$ is zero except for $k = 2^m+1$ for some integer $m\ge 1$, where $P(k) = 3 \cdot 4^{-m}$. The scale-free exponent $\gamma$ is calculated from the cumulative distribution $P_\text{cum}(k) \equiv \sum_{k^\prime=k}^\infty P(k^\prime) \sim k^{1-\gamma}$. For large $k$ we find $P_\text{cum}(k) \approx k^{-2}$, so $\gamma = 3$. {\it Community structure:} We can define several levels of hierarchical modular organization, labeled by integer $n$: at the lowest level ($n=0$) we have clusters of solid bonds (shown with white background in Fig.~\ref{fig:1}(b)), with the dashed bonds acting as inter-community links; at the next level $(n=1)$ we can group together those communities which correspond to a single solid-bond cluster at the $(t-1)$th construction step (dark gray background in the figure); the $n=2$ level communities are outlined in light gray. In general, for a lattice after $t$ construction steps, a community at the $n$th level of the hierarchy evolved from a single solid-bond cluster at step $t-n$. {\it Fractal scaling:} Adapting the analysis in Refs.~\cite{Song1, Song2}, we can characterize the fractal topology of the network through two exponents $d_B$, $d_k$, defined as follows. At the $n$th level, all communities have the same diameter, $\ell_B = 2^{\,n+2}-2$ (for $n < t-1$). Thus at each level the communities form a ``box covering'' of the entire network with boxes of the same $\ell_B$.
The scaling of the number of boxes $N_B(\ell_B)$ required to cover the network for a given $\ell_B$ defines the fractal dimension $d_B$, namely $N_B(\ell_B)/N \sim \ell_B^{-d_B}$. In our case we have $N_B(\ell_B)/N = 4^{-n-1} \approx 4 \ell_B^{-2}$ for large $n$, yielding $d_B = 2$. Similarly the degree exponent $d_k$ of the boxes is defined through $k_B(\ell_B)/k_\text{hub} \sim \ell_B^{-d_k}$, where $k_B(\ell_B)$ is the number of outgoing links from the box as a whole, and $k_\text{hub}$ the degree of the most connected node inside the box. For boxes with large $k_\text{hub}$ we get a scaling $k_B(\ell_B)/k_\text{hub} \approx 2 \ell_B^{-1}$, giving $d_k = 1$. As with all the real-world fractal networks examined in Refs.~\cite{Song1, Song2}, the scale-invariance of the probability distribution is related to the fractal scaling of the network through the exponent relation $\gamma = 1+d_B/d_k$, which is satisfied for $d_B =2$, $d_k = 1$, and $\gamma=3$. {\it Modularity:} The strength of community structure---the extent to which nodes inside communities are more tightly knit than an equivalent random network model---is quantified through the modularity~\cite{NewmanGirvan} $Q = \sum_s \left[l_s/N_b -(d_s/2N_b)^2 \right]$, where the sum runs over all communities, and $l_s$, $d_s$ are the total number of bonds and total sum of node degrees for the $s$th community. In our case each level $n$ in the hierarchy describes a different partition of the network into communities, and we find the corresponding modularity $Q = 1-4^{-n-1}$. Thus $Q$ increases from $3/4$ at $n=0$ to the maximum possible value 1 as $n \to \infty$, showing that the modular structure becomes ever more pronounced as we go to higher levels. {\it Distribution of shortest-paths:} We define $N_\ell$ as the total number of site pairs $(i,j)$ whose shortest-path distance along the lattice $\ell_{ij} = \ell$. The distance $\ell$ can take on values between 1 and $D$. 
$N_\ell$ has a non-trivial dependence on $\ell$, but satisfies the scaling form $N_\ell = 2^{3t} f_t(\ell/D)$, where the function $f_t(\ell/D)$ approaches 0 for $\ell$ close to $1$ or $D$, and $f_t(\ell/D) \sim \text{O}(10^{-1})$ for $1\ll \ell \ll D$. The average shortest-path length $\bar\ell =\frac{2}{N(N-1)} \sum_{\ell=1}^D \ell N_\ell \sim D \sim N^{1/2}$, so the network is not small-world. \section{Ising Model on the Network} \subsection{Renormalization-group transformation} Let us now turn to the Hamiltonian for our system, \begin{equation}\label{eq:1} -\beta {\cal H}=J\sum_{\langle i j \rangle_s} s_i s_j+ K \sum_{\langle i j \rangle_d} s_i s_j + H \sum_{i} s_i\,, \end{equation} where $s_i = \pm 1$, $J, K > 0$, and $\langle i j \rangle_s$, $\langle i j \rangle_d$ denote sums over nearest-neighbor pairs on the solid and dashed bonds respectively. The ratio of inter- to intra-community coupling is parametrized by $K/J$. The RG transformation is the reverse of the construction step: the two center sites in every cluster like the one on the right of Fig.~\ref{fig:1}(a) are decimated, giving an effective interaction between the two remaining sites. The renormalized Hamiltonian $-\beta {\cal H}^\prime$ has the same form as Eq.~\eqref{eq:1}, but with interaction constants $J^\prime, K^\prime, H^\prime$. Two additional terms also appear: a magnetic field counted along the solid bonds, $H_B^\prime \sum_{\langle i j \rangle_s} (s_i +s_j)$, and an additive constant per solid bond $G^\prime$. 
The renormalized interaction constants are given by: \begin{equation}\label{eq:2} \begin{split} J^\prime &= (1/4)\ln\left(R_{1}R_{2}/R_{3}^2\right),\quad K^\prime=K,\quad H^\prime=H,\\ H_B^\prime&= (1/4)\ln\left(R_{1}/R_{2}\right),\quad G^\prime = (1/4)\ln\left(R_{1}R_{2}R_{3}^2\right), \end{split} \end{equation} where: \begin{equation}\label{eq:3} \begin{split} R_{1} &= w^{-4}xy^{-2}+2x^{-1}z^4+w^4xy^2z^8,\\ R_{2} &= w^{-4}xy^{2}+2x^{-1} z^{-4}+w^4xy^{-2}z^{-8},\\ R_{3} &= x^{-1}w^{-4}+x^{-1}w^4+xy^{-2}z^{-4}+xy^{2}z^{4}, \end{split} \end{equation} and $w = e^J$, $x=e^K$, $y=e^H$, $z=e^{H_B}$. Under renormalization a nonzero site magnetic field $H$ induces a bond magnetic field $H_B$, while $H^\prime = H$ since the site field at the edge sites in each cluster is unaffected by the decimation of the center sites~\cite{BOP}. This transformation is exact, preserving the partition function $Z^\prime = Z$, and we iterate it to obtain the RG flows, yielding the global phase diagram of the system. Thermodynamic densities, corresponding to averages of terms in the Hamiltonian, transform under RG according to a conjugate recursion relation~\cite{BOP}. Iterating this along the flow trajectories until a fixed point is reached, we can directly calculate the magnetization $M = \frac{1}{N} \sum_i \langle s_i \rangle$, internal energy per site $U = \frac{1}{N}\langle \cal H \rangle$, and their derivatives $\chi = \frac{\partial M}{\partial H}$, $C = \frac{\partial U}{\partial T}$. \begin{figure} \includegraphics*{flows.eps} \caption{Renormalization-group flows in the closed subspace $H=H_B=0$ for three cases of the inter-community coupling strength $K/J$: (a) $K/J = 0.75$, below the threshold value $(K/J)_m = \ln\left(\frac{1}{11} (43+24\sqrt{3})\right)/\ln(2+\sqrt{3})\approx 1.549$; (b) $K/J = (K/J)_m$; (c) $K/J = 3 > (K/J)_m$. 
In each case, the diagonal straight line represents the initial condition $J=J_0 = 1/T$, the thin vertical lines with arrows are sample flows, and the thick gray lines represent fixed points, with dark gray corresponding to stable fixed points, and light gray corresponding to unstable fixed points. The point where the light and dark gray curves meet is marginally stable. The dashed line marks the critical $J_c$ separating the disordered phase, flowing to the phase sink $J^\ast =0$, and the algebraically ordered phase, flowing to a line of finite temperature fixed points $J^\ast(J_0)$ (the dark gray curve).} \label{flows} \end{figure} We are also interested in the pair correlation $G_{ij} = \langle s_i s_j \rangle - \langle s_i \rangle \langle s_j \rangle$ for arbitrary sites $i$, $j$ in the network. Since our lattice is highly inhomogeneous, $G_{ij}$ is not a simple function of $\ell_{ij}$. However, following the analysis in Ref.~\cite{Dorogov1}, we can define an average correlation $G(\ell) = \frac{1}{N_\ell}\sum_{\{(i,j) : \ell_{ij} = \ell\}} G_{ij}$, where the sum is over all pairs $(i,j)$ satisfying $\ell_{ij} = \ell$. While $G(\ell)$ cannot be directly calculated from the RG flows, we can determine its long-distance scaling properties. Moreover, the average correlation for a certain subset of pairs in the lattice can be explicitly calculated: at the $n$th level of the hierarchy, let site $i$ be a hub of a community, and $j$ be a site at the very edge of the same community, separated by a distance $\ell_n=2^{n+1}-1$. Denote the average of $G_{ij}$ restricted to this subset as $G_\text{hub}(\ell_n)$. After $n$ RG steps, such pairs $(i,j)$ become nearest-neighbors along a solid bond, and thus we can obtain their thermodynamic average through the conjugate recursion relation~\cite{BOP}. The number of such pairs is $2^{2t-2n-1}$. 
For $1 \ll \ell_n \ll D$, compared to the overall number of pairs with the same separation, $N_{\ell_n} \sim 2^{3t}$, the subset forms a vanishingly small fraction of the total in the $t \to \infty$ limit. \begin{figure} \includegraphics*{phases.eps} \caption{(a) Phase diagram. The black and gray curves represent infinite- and second-order transitions respectively. (b,c) Critical exponents. The dotted line marks $(K/J)_m$.}\label{phase} \end{figure} \subsection{Phase diagram and critical properties} Fig.~\ref{flows} depicts various cases for the renormalization-group flows, and the corresponding phase diagram in terms of temperature $T=1/J$ versus $K/J$ at $H=0$ is shown in Fig.~\ref{phase}(a). The two phases, both with $M=0$, are (1) a disordered phase where pair correlations decay exponentially, $G(\ell) \sim \exp(-\ell/\xi)$ with a finite correlation length $\xi$; (2) a phase with algebraic order, $\xi = \infty$, characterized by power-law decay of correlations, $G(\ell) \sim \ell^{-\eta(T,K/J)}$. Since this latter phase flows under RG to a line of finite-temperature fixed points (the dark gray curves in Fig.~\ref{flows}), a different fixed point for every value of $T$ and $K/J$, we have a varying exponent $\eta(T,K/J)$. Fig.~\ref{phase}(b) plots $\eta$ for $K/J = 0.1$ and $1$, and we note that $\eta \to 0$ for $T \to 0$, as the system asymptotically approaches true long range order at $T=0$. Strengthening inter-community coupling has a similar effect, with $\eta$ decreasing for larger $K/J$. For $K/J$ below a threshold value $(K/J)_m \equiv \ln\left(\frac{1}{11} (43+24\sqrt{3})\right)/\ln(2+\sqrt{3}) \approx 1.549$, the phase transition is infinite order: as $T \to T_c^+$ we have exponential singularities of the Berezinskii-Kosterlitz-Thouless~\cite{Berezinskii,KosterlitzThouless} (BKT) form, just like in the $XY$ model. 
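These flows can be reproduced directly from Eqs.~\eqref{eq:2}--\eqref{eq:3}: for $H = H_B = 0$ (i.e. $y = z = 1$) we have $R_1 = R_2$, so $H_B^\prime = 0$ and the recursion closes on the $(J,K)$ plane with $K^\prime = K$. The following Python sketch (an illustration added here, not part of the original analysis; the sample couplings are arbitrary) iterates this map and classifies a starting point by whether $J$ flows to the disordered sink $J^\ast = 0$:

```python
import math

def rg_step(J, K):
    """One exact RG step in the closed subspace H = H_B = 0.

    Specializes Eqs. (2)-(3) with y = z = 1, where R1 = R2, so that
    J' = (1/2) ln(R1/R3) and K' = K.
    """
    w4, x = math.exp(4.0 * J), math.exp(K)
    R1 = x / w4 + 2.0 / x + x * w4
    R3 = (1.0 / w4 + w4) / x + 2.0 * x
    return 0.5 * math.log(R1 / R3)

def flow(J0, K, steps=200):
    """Iterate the RG map; the renormalized J tells the phase apart:
    J -> 0 signals the disordered phase, a finite limit the ordered one."""
    J = J0
    for _ in range(steps):
        J = rg_step(J, K)
    return J

# weak couplings (high T) flow to the sink J* = 0; strong couplings
# (low T) flow to a finite-coupling fixed point
print(flow(0.01, 0.01), flow(5.0, 5.0))
```

Scanning the initial condition $J_0 = 1/T$ at fixed ratio $K/J$ in this way locates the critical line of Fig.~\ref{phase}(a).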
Near the transition, the correlation length behaves as $\xi \sim e^{A/\sqrt{t}}$ and the singular part of the specific heat as $C_\text{sing} \sim e^{-B/\sqrt{t}}$, where $t \equiv (T-T_c)/T_c$ and the constants $A,B>0$. It is interesting to note that for $K/J =1$, our Hamiltonian can be mapped by a duality transformation to the Ising model on the small-world hierarchical lattice of Ref.~\cite{HinczewskiBerker} (the $p=1$ lattice). On this dual network a similar BKT transition occurs, though with the algebraic order in the high temperature phase (much as the Villain version of the $XY$ model is dual to a discrete Gaussian model describing roughening, with the low and high-temperature properties reversed \cite{ChuiWeeks}). BKT singularities have also been observed in Ising and Potts systems on an inhomogeneous growing network~\cite{Bauer,Khajeh}. For $K/J > (K/J)_m$, on the other hand, the phase transition is second-order: $\xi \sim t^{-\nu}$, $C_\text{sing} \sim t^{-\alpha}$ for exponents $\alpha$, $\nu$, plotted as a function of $K/J$ in Fig.~\ref{phase}(c). Thus with increasing $K/J$ the transition looks more and more like an ordinary second-order Ising transition: there is a critical $T_c$ below which we have something very close to long-range order, since $\eta$ is nearly zero. \begin{figure} \includegraphics*{cdenscor.eps} \caption{(a) Magnetic susceptibility $\chi$ at $K/J = 1$ for $T = 2,\,2.5 > T_c$. (b) $\chi$ at $K/J=1$ for $T=0.75,\,0.25 < T_c$. (c) Hub correlation length $\xi_\text{hub}$ and (d) specific heat $C$, both at $K/J = 1,\,2.5$. (e) Hub correlation function $G_\text{hub}(\ell_n)$, where $\ell_n = 2^{n+1}-1$, at $K/J = 1$ for $T = 0.90,\,0.95,\,0.98 < T_c$.}\label{dens} \end{figure} \subsection{Griffiths singularities} The disordered phase in our network differs in one important aspect from a conventional paramagnetic phase: $M \sim H(1-\ln H)$ for small $H$, leading to a divergence in the susceptibility, $\chi \sim -\ln H$. We see this in Fig.~\ref{dens}(a) for $T = 2$, $2.5$, and $K/J = 1$. 
As mentioned above, the mechanism for this nonanalyticity at $H=0$ is similar to the one behind the Griffiths singularity in random ferromagnets. We can understand it through the distribution of Yang-Lee zeros~\cite{YangLee} of the partition function in the complex magnetic field plane. Introducing the variable $z = e^{-2H}$, the Yang-Lee theorem states that the zeros of $Z$ lie on the unit circle $z=e^{i\theta}$ in the complex $z$ plane. If $g(\theta)$ is the density of these zeros (a continuous distribution in the thermodynamic limit), then for a regular ferromagnet above $T_c$ we have $g(\theta) = 0$ for a finite range of $\theta$ near the real axis $\theta=0$, so that $Z$ is analytic at $H=0$. However, in our case $g(\theta)$ ``pinches'' the real axis: $g(0)=0$ but $g(\theta \ne 0) > 0$. Since $g(\theta)$ is related to the magnetization through $g(\theta) = \frac{1}{2\pi} \lim_{r \to 1^-} \text{Re}\, M(z=re^{i\theta})$, we can deduce from the observed singularity that $g(\theta)\sim \theta$ for small $\theta$. The dominant contributions to this $g(\theta)$ come from large communities, centered at hubs with degree $k=2^m+1$ for $m \gg 1$, despite their small probability $P(k) =3\cdot 4^{-m}$. We can see these contributions explicitly for the $K/J = 0$ case, adapting arguments used to derive scaling forms for $g(\theta)$ in disordered ferromagnets~\cite{BrayHuifang,Chan}. The system in this case is a disjoint set of solid-bond clusters, with the probability of a randomly chosen site being part of a cluster of size $N_m = 2^{m-1} +1$ given by $P_m = 3(2+2^m)/2^{2m+1}$ (for $m\ge 2$). The average magnetization per site of such a cluster is easily calculated analytically, and takes the following approximate form for small $H$, \begin{equation}\label{eq:4} M_m \approx \tanh(H f_m(J))\,, \end{equation} where \begin{equation}\label{eq:5} \begin{split} &f_m(J)\\ &= 1+\frac{N_m-1}{N_m}\tanh(2J)\left(2+(N_m-2)\tanh(2J)\right)\,. 
\end{split} \end{equation} The $H f_m(J)$ term in Eq.~\eqref{eq:4} can be interpreted as the effective field felt by the cluster, with the function $f_m(J)$ varying between $1$ at $J=0$ and $N_m$ at $J=\infty$. In the large $m$ limit $f_m(J) \to N_m \tanh^2(2J)$ for all $J>0$. To find $g_m(\theta)$, the cluster's contribution to the overall $g(\theta)$, we plug in a small complex magnetic field $H = \frac{1}{2}(\epsilon-i\theta)$, corresponding to $z = (1-\epsilon)e^{i\theta}$, and take the real part of the resulting magnetization: $g_m(\theta) = \frac{1}{2\pi}\lim_{\epsilon\to 0} \text{Re}\,M_m(H=\frac{1}{2}(\epsilon-i\theta))$. This gives \begin{equation}\label{eq:6} g_m(\theta) = \lim_{\epsilon\to 0} \frac{1}{2\pi} \frac{\sinh(\epsilon f_m(J))}{\cos(\theta f_m(J))+\cosh(\epsilon f_m(J))}\,. \end{equation} This expression is dominated by high, narrow peaks at $\theta = (2n+1)\pi/f_m(J)$, $n=0,1,\ldots$, and we can write it as a sum of delta functions in the small $\epsilon$ limit, \begin{equation}\label{eq:7} \begin{split} g_m(\theta) &\approx \lim_{\epsilon\to 0} \frac{1}{\pi} \sum_{n=0}^{\infty} \frac{\epsilon f_m(J)}{f_m^2(J)\left(\theta-\frac{(2n+1)\pi}{ f_m(J)}\right)^2+\epsilon^2 f_m^2(J)}\\ &= \sum_{n=0}^{\infty} \frac{1}{f_m(J)}\, \delta\left(\theta-\frac{(2n+1)\pi}{f_m(J)}\right)\,. \end{split} \end{equation} Thus the total $g(\theta)$ for the system is \begin{equation}\label{eq:8} g(\theta) = \sum_{m=2}^{\infty} \sum_{n=0}^{\infty} \frac{P_m}{f_m(J)}\, \delta\left(\theta-\frac{(2n+1)\pi}{f_m(J)}\right)\,. \end{equation} For small $\theta$, the nonzero contributions to $g(\theta)$ come from $m$ values where $f_m(J) = (2n+1)\pi/\theta$ for some $n$. Since $f_m(J) \approx N_m \tanh^2(2J)$ for large $m$, these are the contributions of clusters with large size $N_m \propto 1/\theta$, with a corresponding probability $P_m \approx 3/(4N_m)$. 
Eq.~\eqref{eq:8} becomes \begin{equation}\label{eq:9} g(\theta) \approx \sum_{m,n} \frac{3\theta^2 \tanh^2(2J)}{4(2n+1)^2 \pi^2} \, \delta\left(\theta-\frac{(2n+1)\pi}{N_m \tanh^2(2J)}\right)\,. \end{equation} As $m\to \infty$ the delta function peaks become densely spaced, and from Eq.~\eqref{eq:9} it is evident for small $\theta$ that $g(\theta)$ scales like $g(b \theta) = b g(\theta)$ for any constant $b$, consistent with the observation of $g(\theta) \sim \theta$ deduced from the singularity in $M$. Thus we see this behavior is directly related to the presence of large communities around highly connected hubs, which have a scale-free distribution $P_m \sim N_m^{-1}$. In comparison, for bond-diluted ferromagnets below the percolation threshold large connected clusters are exponentially rare, the resulting $g(\theta) \sim e^{-A(T)/|\theta|}$, and the Griffiths singularity is much weaker, leading to a finite $\chi$ at $H=0$~\cite{Harris}. Turning now to the algebraic phase for $T < T_c$, here $M$ and $\chi$ behave as if at a critical point: $M \sim H^{1/\delta(T,K/J)}$, $\chi \sim H^{1/\delta(T,K/J)-1}$ as $H \to 0$. Fig.~\ref{dens}(b) shows $\chi$ for $K/J =1$, $T=0.75$, $0.25$, and Fig.~\ref{phase}(b) plots the exponent $\delta(T,K/J)$ for $K/J = 0.1$ and $1$. The corresponding scaling of the density of zeros is $g(\theta) \sim \theta^{1/\delta(T,K/J)}$ near $\theta=0$. \vspace{1em} \subsection{Pair correlations} Finally, we consider the behavior of the hub correlation function $G_\text{hub}(\ell_n)$. For $T>T_c$, we find an exponential decay with $\ell_n$, which we characterize by a correlation length $\xi_\text{hub}$. The divergence of $\xi_\text{hub}$ as $T \to T_c^+$ is shown in Fig.~\ref{dens}(c) for $K/J = 1$, $2.5$. Like the overall pair correlation length $\xi$, $\xi_\text{hub}$ diverges with a BKT form for $K/J<(K/J)_m$ and as a power law for $K/J>(K/J)_m$. 
The onset of the rapid increase in $\xi_\text{hub}$ coincides with the position of the peak in the specific heat $C$, plotted in Fig.~\ref{dens}(d). Just like in the $XY$ model~\cite{BerkerNelson}, $C$ is smooth at $T_c$ for all $K/J$, and the peak occurs at $T>T_c$, corresponding to the onset of short-range order in the system. For $T<T_c$, $G_\text{hub}(\ell_n)$ has a surprising behavior: as seen in Fig.~\ref{dens}(e), it approaches a nonzero limit as $\ell_n \to \infty$, a signature of long-range order. However, since $G_\text{hub}(\ell_n)$ describes only a subset of pairs, a vanishingly small fraction of the total for large $\ell_n$, the long-range ordering of these pairs is compatible with $M$ being zero. The presence of such long correlations in the algebraic phase, and the overall slow power-law decay of $G(\ell)$ with $\ell$, is remarkable given that for ``fat-tailed'' scale-free networks (i.e. with $\gamma \le 3$) pair correlations longer than nearest-neighbors are typically suppressed: one can prove that $G(\ell>1) = 0$ at $H=0$ in the thermodynamic limit, if $\chi(H=0)$ is finite~\cite{Dorogov1}. Since in our case $\chi(H=0) = \infty$ at all $T$, the proof does not apply, and we see that this fractal modular lattice is an important exception to the general expectation of weakened pair correlations on networks~\cite{Dorogov1}. \section{Conclusions} In conclusion, we have introduced a hierarchical lattice network with the modular structure and fractal scaling characteristic of a wide array of real-world networks. The Ising model on this lattice---solved through an exact RG transformation---exhibits an interesting transition. A disordered phase with Griffiths singularities gives way at low temperatures to algebraic order: the system behaves as if at criticality for a broad range of parameters, and we find power-law decay of pair correlations, unexpected for this type of scale-free network. 
The thermodynamic phenomena observed here are not confined to one particular network. In fact, we can consider the much larger class of hierarchical lattices that form scale-free networks on which the Ising model exhibits a standard order-disorder transition: these include fractal lattices on which Migdal-Kadanoff recursion relations are exact~\cite{Migdal,Kadanoff}, related hybrid lattices~\cite{Erbas}, and their duals~\cite{HinczewskiBerker2}. In all these cases we can modify the connected graph which defines the lattice construction step as follows: replace a subset of the bonds in the graph with dashed bonds (which remain unaltered as we iterate the construction) in such a way that the graph would break into two or more disjoint pieces if the dashed bonds were cut. The Ising model on the resulting hierarchical lattice will no longer flow under renormalization to an ordered fixed point at low temperatures, but rather to a continuous line of fixed points, yielding an algebraically ordered phase. And the power-law distribution of highly connected hubs in such networks will lead to Griffiths singularities in the disordered phase. Conversely the duals of such networks---which have infinite fractal dimension and show small-world scaling---have algebraic order at high temperatures. Recent studies have highlighted the diversity of structural properties in families of hierarchical lattice networks~\cite{Rozenfeld,Zhang}---the ability to tune degree exponents, fractal dimensionality, and other topological aspects of these networks by varying the defining graph. The structural richness of these networks is manifested through unusual phase transitions and critical phenomena, already apparent even in a simple system like the Ising model. The use of renormalization-group methods to characterize cooperative behavior on this broad class of networks, both in the pure case and in the presence of bond randomness~\cite{HinczewskiBerker}, will be the subject of future work. 
I thank A.N. Berker, T. Garel, and H. Orland for useful discussions.
\section{Introduction} Bohmian mechanics is a quantum theory based on a primitive ontology of particles and two precise mathematical equations defining their dynamics. These are the Schrödinger equation \begin{equation}\label{eq:schroedinger} i\hbar \partial_t \psi_t = H\psi_t, \end{equation} for the wave function, and the guiding equation \begin{align} \label{eq:BM-vel} \dot{X}_k = v^\psi_{k,t}(X) := \frac{\hbar}{m_k}\mathrm{Im} \frac{\nabla_k\psi_t(X)}{\psi_t(X)}, \end{align} in which the wave function enters to determine a velocity field for $N$ particles with positions $X=(X_1,\ldots,X_N)\in \mathbb R^{3N}$. On the fundamental level, there is only one wave function, the universal wave function, guiding the motion of all the particles together. In many relevant situations, however, subsystems allow for an autonomous description in terms of an \emph{effective wave function} determined by the universal wave function and the actual positions of particles outside the subsystem. It can then be shown that Born's rule, applied to effective wave functions, describes the \emph{typical} distribution of particle positions in an ensemble of identically prepared subsystems \citep[ch. 2]{durr.etal2013a}. With this \emph{quantum equilibrium hypothesis}, the Bohmian theory reproduces the statistical predictions of standard quantum mechanics (whenever the latter are well-defined). It does so by making correct statistical predictions about the outcome of measurement experiments as recorded in the spatial configuration of whatever plays the role of a ``measurement device'' \citep[ch. 3]{durr.etal2013a}. While Bohmians generally insist that the empirical content of the theory is exhausted by its predictions about particle motions, critics have questioned the empirical status of the particles, usually advocating for a priority of the wave function when it comes to relating the theory to observation (e.g. \cite{zeh1999, bedard1999, brown.wallace2005, gao2019a}). 
On this basis, it has even been argued that Bohmian mechanics doesn't solve the quantum measurement problem \citep{stone1994, gao2019}, or that it solves the measurement problem only by being a Many-Worlds theory in denial \citep{deutsch1996}. The misleading terminology of ``hidden variables'' has probably done its part to stir the debate about just how hidden the Bohmian particles actually are. I will argue that these criticisms are based on misconceptions of the Bohmian theory and the role of particles vis-a-vis wave functions in it. To the extent that valid questions have been raised -- in particular about the empirical accessibility of particle positions -- they are questions that can be answered. To this end, I will first provide a brief review of the Bohmian description of the measurement process. Section 3 will clarify the status of particles and wave functions in Bohmian mechanics and address various worries about the empirical (in)accessibility of particle positions. In Section 4, I will (reluctantly) address the issue of conscious experience and why, assuming a functionalist theory of mind, mental states would be realized by the particles rather than the wave function. I end with a short ``dialogue'' in Section 5, trying to put the discussion into a broader perspective. \section{The measurement process in Bohmian mechanics} A prototypical measurement in Bohmian mechanics is an interaction between a system $S$ and a measurement device $D$ resulting in one of several macroscopically discernible configurations of $D$ (``pointer positions'') which are correlated with certain possible quantum states of $S$. Schematically, the interaction between the measured system and measurement device is such that, under the Schrödinger evolution, \begin{equation} \label{Pfeil1} \varphi_i\Phi_0\stackrel{\mbox{\footnotesize Schrödinger evolution}}{\longrightarrow}\varphi_i\Phi_i\,\,, \end{equation} where the wave function $\Phi_0$ is concentrated on pointer configurations corresponding to the ``ready state'' of the measurement device, and $\Phi_i$ are concentrated on configurations indicating a particular measurement result, e.g., by a pointer pointing to a particular value on a scale, a point-like region of a detector screen being darkened, a detector clicking or not clicking, etc. The Schrödinger time evolution is linear, so that a superposition \[\varphi=c_1\varphi_1+c_2\varphi_2,\qquad c_1,c_2\in\mathbb{C},\qquad |c_1|^2+|c_2|^2=1,\] leads to \begin{equation} \label{Pfeil2} \varphi\Phi_0=(c_1\varphi_1+c_2\varphi_2)\Phi_0 \stackrel{\mbox{\footnotesize Schrödinger evolution}}{\longrightarrow} c_1\varphi_1\Phi_1+c_2\varphi_2\Phi_2. \end{equation} At this point, standard quantum mechanics is hit by the measurement problem \citep{maudlin1995e}. In Bohmian mechanics, however, the system is described not only by the wave function but also by the actual spatial configuration $(X,Y) \in \mathbb R^k \times \mathbb R^m$ of measured system and measurement device, given by the positions of their constituent particles. It thus has a well-defined configuration at all times, regardless of whether or not its wave function is in a superposition. \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{WF_overlap2.pdf} \caption{Sketch of the pointer wave functions on configuration space.}\label{fig:pointerstates} \end{center} \end{figure} For illustrative purposes, we assume that $\Phi_1$ is concentrated on a region $\mathrm{L}\subset \mathbb R^m$ of the configuration space of $D$ corresponding to pointer-configurations pointing to the left, while $\Phi_2$ is concentrated on a region $\mathrm{R}\subset \mathbb R^m$ corresponding to pointer-configurations pointing to the right. Obviously, the two regions are disjoint, i.e. $\mathrm{L}\cap \mathrm{R}=\emptyset$. 
By assumption, $\Phi_1$ and $\Phi_2$ are well localized in the respective regions (otherwise, the measurement device is no good), i.e., almost zero outside (see Fig. 1). In particular, we have \begin{subequations} \begin{align}\label{L1} \int_{\mathrm{L}} |\Phi_1|^2 \; \mathrm{d}^my \approx 1, \;\;\; \int_{\mathrm{L}} |\Phi_2|^2 \; \mathrm{d}^my \approx 0\\\label{R0} \int_{\mathrm{R}} |\Phi_1|^2 \; \mathrm{d}^my \approx 0, \;\;\; \int_{\mathrm{R}} |\Phi_2|^2 \; \mathrm{d}^my \approx 1. \end{align} \end{subequations} Now, according to Bohmian mechanics, the probability of the pointer \emph{actually} pointing to the left is: \begin{equation}\begin{split}\label{pointerintegral} \mathbb P(Y \in \mathrm{L}) &= \int_{\mathbb R^k \times \mathrm{L}}|c_1\varphi_1\Phi_1+ c_2\varphi_2\Phi_2|^2 \, \mathrm{d}^kx\,\mathrm{d}^my \\ & = |c_1|^2 \int_{\mathbb R^k \times \mathrm{L}}|\varphi_1\Phi_1|^2\mathrm{d}^kx \,\mathrm{d}^my\\& + |c_2|^2\int_{\mathbb R^k \times \mathrm{L}} |\varphi_2\Phi_2|^2\mathrm{d}^kx \,\mathrm{d}^my\\ &+2 \,\mathrm{Re} \Bigl( c_1c_2\int_{\mathbb R^k \times \mathrm{L}}(\varphi_1\Phi_1)^* \varphi_2\Phi_2 \mathrm{d}^kx \,\mathrm{d}^my \Bigr) \approx|c_1|^2.\end{split} \end{equation} The final approximation follows from eq. \eqref{L1} (together with the Cauchy-Schwarz inequality $\left\lvert \int_{L} \Phi_1^* \Phi_2 \right\rvert \leq \sqrt{\int_{L} |\Phi_1|^2}\sqrt{\int_{L} |\Phi_2|^2}$ ). Similarly, the probability of the pointer pointing to the right is $\mathbb P(Y \in \mathrm{R})\approx |c_2|^2$. If $\varphi_1$ and $\varphi_2$ are eigenstates of some quantum observable, $|c_1|^2$ and $|c_2|^2$ are the statistical predictions of standard quantum mechanics for an ideal measurement. (The better the pointer states $\Phi_1$ and $\Phi_2$ are localized in disjoint regions of configuration space, the closer the measurement is to ``ideal''. 
) Moreover, after the measurement (assuming it was not destructive), the measured system $S$ will be guided by the wave function $\varphi_1(x)\Phi_1(Y) + \varphi_2(x)\Phi_2(Y)$. If the pointer actually points left (let's say), i.e. $Y \in \mathrm{L}$, we have $\Phi_2(Y) \approx 0$ and hence (after normalization) the effective wave function $\varphi_1$ describing the system $S$ at the end of the measurement. In this way -- that depends crucially on actual particle positions -- Bohmian mechanics vindicates the postulate of textbook quantum mechanics that a measurement collapses the wave function of the measured system such that the previous outcome will be reproduced by a repeated measurement. It does not, however, vindicate the (bad) idea that the state $\varphi_1$ or $\varphi_2$ corresponds to some pre-existing property of the system (``observable value'') that the measurement merely reveals (cf. \cite{lazarovici.etal2018}). Bohmian particles have a position and nothing else, while the physical content of the wave function is understood through its role for the dynamical and statistical description of the particles. \section{The epistemic status of particles} Despite this central role of point particles in Bohmian mechanics -- or maybe because of it -- there has been a lot of debate about their empirical status. Some authors have suggested that Bohmian mechanics includes -- or should include -- a postulate stating that measurement results are instantiated in particle positions, or that observations ``supervene'' on particle positions (rather than the wave function), or something like that (see e.g. \cite{naaman-marom.etal2012}). Such a postulate is neither helpful nor necessary, as I hope to clarify with this paper. In fact, Bohmians generally insist (as did John \citet[ch. 23]{bell2004}) that it is a bad idea to include postulates about ``observation'' or ``measurements'' in any physical theory since those are much too vague and physically complex notions. 
Other authors suggest that ``measurement results'' in Bohmian mechanics correspond first and foremost to certain wave functions, while the role of the particle configuration is merely to ``pick out'' one part of a (decoherent) superposition as the actual result. In particular, \cite{brown.wallace2005} claim to identify such a ``Result Assumption'' in the second part of David Bohm's 1952 paper.\footnote{\cite{bohm1952a} writes: ``[T]he packet entered by the apparatus variable $y$ determines the actual result of the measurement, which the observer will obtain when he looks at the apparatus.'' (p. 182)} I lack the historical competence to provide a thorough exegesis of Bohm's original work. I believe that Brown and Wallace are reading too much into an innocuous statement, but can't rule out the possibility that Bohm had not yet appreciated the implications of his theory in full. What I can unequivocally say is that such a ``Result Assumption'' plays no role in the modern understanding of Bohmian mechanics (that has been further developed by Bell, and Dürr, Goldstein, Zanghì, among others). Indeed, it would be a rather unproductive assumption to make since it leaves open the critical question, how and why and in what sense a particular wave function is supposed to ``correspond to a measurement result'' -- or any concrete physical fact at all. Unsurprisingly, though, this $\psi$-centric reading has resonated in particular with modern Everettians who are committed to the view that objects and events in physical space (like measurement devices indicating a measurement result) can be recovered by some sort of functional analysis in terms of internal degrees of freedom of the wave function or quantum state. This, however, is \emph{not} how the Bohmian theory relates to the physical world, and there are legitimate questions as to whether the procedure can succeed in general (see e.g. 
\cite{monton2006, maudlin2010}; I will express some of my own concerns in the course of this paper). What Bohmian mechanics makes is an ontological commitment to particles. They are the local beables \citep[ch. 7]{bell2004} or primitive ontology \citep{allori.etal2014}, what the theory postulates as the basic constituents of matter. The role of the wave function is first and foremost to determine the motion of particles and also (though this is a theorem rather than an additional postulate) to describe their statistical distributions. All our analyses of the theory are then consistent with the particles forming stable configurations that move and behave, qualitatively and quantitatively, like the tables, cats, measurement devices, etc. that we observe in the world. This is why the theory is empirically adequate. In particular, the way in which the particle ontology solves the measurement problem is not just by picking out certain parts or branches of the wave function as guiding but by releasing the wave function from the undue burden of representing matter in the first place. A point that Bohmians \emph{do} repeatedly and emphatically insist on, is that making correct predictions about the spatio-temporal configuration of matter -- including pointer positions, display readings, or whatever else is used to ``record'' the outcome of ``measurements'' -- is sufficient for the empirical adequacy of a physical theory (cf. \citet[p. 166]{bell2004}). But this is a claim about physics in general, not an additional postulate about Bohmian measurements in particular. It is unfortunate since potentially misleading that some authors (e.g. \cite{gao2019}) mistake it for the latter. As a nod to the neo-Everettians (and other wave function monists), it is worth pointing out that Bohmians are also ``macro-object functionalists'' (\cite{lewis2007}) in the sense that functionalist arguments are relevant to locating macroscopic objects in the particle trajectories. 
However, while I understand how things moving and interacting in physical space can be functionalized in terms of other things moving and interacting in physical space, it is unintelligible to me how things moving and interacting in physical space could be functionalized in terms of degrees of freedom of the wave function which (no matter how you want to think about it) are not things moving and interacting in physical space. I will return to this issue in Section 4. \subsection{Position Measurements} Some sceptics now say that this is all well and good, Bohmian mechanics may predict that particles can form cats and tables and measurement devices that have a definite configuration at all times, but there is no good reason to believe that when we \emph{look} where a table is or whether the pointer points left or right, we will see them in the position that the theory predicts for the particles. The intuition behind this worry seems to be that observations are physical interactions and that these interactions are first and foremost described by the Schrödinger equation for wave functions which makes no reference to Bohmian particle positions. Hence, it may seem like observations are determined by the wave function after all, while the particles are somehow epiphenomenal. This reasoning is not correct, but since an observation is indeed a physical interaction, the question here is ultimately a physical one, so let's see what the theory actually predicts. We recall the measurement procedure described in Section 2 with the final wave function of system and apparatus given by the right-hand-side of eq. \eqref{Pfeil2}. Now we go one step further and consider a ``measurement of the pointer position'' by another system $C$ (we assume that the measurement device $D$ was perfectly isolated up to this point, so there is no environmental decoherence). 
We may think of an ``observer'' looking at the measurement device, resulting, ultimately, in a certain particle configuration of her brain, though I prefer a camera or some other system under no suspicion of consciousness (we will return to the issue of conscious experience in Section 4). In any case, the spatial resolution of such an observation can very well be finer than the spread of the ``pointer states'' $\Phi_i$ (thus corresponding to a Schrödinger evolution $ \Phi_i \longrightarrow \sum_j \Phi_{ij} \Psi_{j}, $ where $\sum_j \Phi_{ij} = \Phi_{i}$ and the $\Psi_{j}$ are the ``record states'' of $C$.) However, we shall consider the simplest case in which the measurement interaction leads to a final wave function of the form \begin{equation} c_1\varphi_1\Phi_1\Psi_1+c_2\varphi_2\Phi_2\Psi_2, \end{equation} where $\Psi_1$ is concentrated on a region $\mathcal{L}$ of the configuration space of $C$ corresponding to the camera recording a pointer pointing left, and $\Psi_2$ is concentrated on a region $\mathcal{R}$ corresponding to the camera recording a pointer pointing right. So what is the probability that the pointer actually points to the left, i.e. $Y \in \mathrm{L}$, while the camera records a pointer pointing right, i.e. $Z \in \mathcal{R}$? We find \begin{equation}\label{pointerintegral2} \mathbb P(Y \in \mathrm{L}, Z \in \mathcal{R}) = \int_{\mathbb R^k \times \mathrm{L}\times \mathcal{R}} |c_1\varphi_1\Phi_1\Psi_1+ c_2\varphi_2\Phi_2\Psi_2|^2 \, \mathrm{d}^kx\, \mathrm{d}^my\, \mathrm{d}^nz \approx 0, \end{equation} since $\Phi_2$ is zero (or nearly so) on $\mathrm{L}$, while $\Psi_1$ is zero (or nearly so) on $\mathcal{R}$, hence both $\Phi_1\Psi_1$ and $\Phi_2\Psi_2$ are just about zero on $\mathrm{L}\times \mathcal{R}$. Simply put: if you look where the pointer is, you will typically see the pointer where it is. 
\begin{figure}[ht] \begin{center}\label{fig:confspace} \includegraphics[width=\textwidth]{Aufspaltung3.pdf} \caption{Sketch of position measurement in configuration space. The dot indicates the actual configuration of the system.} \end{center} \end{figure} How does this result square with the argument that particle positions do not matter because interactions are described by the wave function and its Schrödinger evolution? Well, as I said, the argument is not correct (see also \cite{maudlin1995d}). It neglects the fact that in the interaction between the systems $D$ and $C$, the particle configuration of $D$ is essential to determining which part of the wave function guides the configuration of $C$. It is instructive to consider an intermediate stage of the measurement interaction \begin{equation} (\Phi_L + \Phi_R)\Psi_0 \longrightarrow \Phi_L\Psi_{\vartriangleleft} + \Phi_R\Psi_{\vartriangleright} \longrightarrow \Phi_1\Psi_1+ \Phi_2\Psi_2 \end{equation} in which the wave packets $\Psi_{\vartriangleleft}$ and $\Psi_{\vartriangleright}$ are just beginning to separate in the configuration space of $C$ and propagate towards the regions $\mathcal{L}$ and $\mathcal{R}$, respectively. Note that in the full configuration space of $D+C$, however, the entangled wave function $\Phi_L\Psi_{\vartriangleleft} + \Phi_R\Psi_{\vartriangleright}$ is already well-separated (decohered) along the $y$-coordinates (Fig. 2). Now, according to the guiding equation \eqref{eq:BM-vel}, the velocity of the $Z$-variables is \begin{equation} \dot{Z} \propto \mathrm{Im} \frac{\Phi_L(Y)\nabla_z\Psi_{\vartriangleleft}(Z) + \Phi_R(Y)\nabla_z\Psi_{\vartriangleright}(Z)}{\Phi_L(Y)\Psi_{\vartriangleleft}(Z) + \Phi_R(Y)\Psi_{\vartriangleright}(Z)}. \end{equation} Hence, if the pointer is actually left, i.e.
$Y \in \mathrm{L}$, we have $\Phi_R(Y) \approx 0$ and thus $\dot{Z} \approx \mathrm{Im} \frac{\nabla_z\Psi_{\vartriangleleft}(Z)}{\Psi_{\vartriangleleft}(Z)}$, so that the configuration $Z$ is effectively guided by the wave packet $\Psi_{\vartriangleleft}$ that moves towards $\mathcal{L}$ (i.e. towards configurations in which the photograph shows the pointer pointing left). Analogously, if the pointer is actually right, i.e. $Y \in \mathrm{R}$, the configuration $Z$ is effectively guided by the wave packet $\Psi_{\vartriangleright}$ that moves towards $\mathcal{R}$ (i.e. towards configurations in which the photograph shows the pointer pointing to the right). Hence, the idea that the particles are causally inert, an ``idle wheel'', is clearly wrong. Indeed, it is misleading to say that interactions in Bohmian mechanics are described only by the wave function and the Schrödinger equation; the wave function rather mediates interactions between particles via the guiding law \eqref{eq:BM-vel}. \subsection{Atypical outcomes} If we return to the probability estimate, eq. \eqref{pointerintegral}, and suppose that the wave packets $\Phi_i$ or $\Psi_i$ have long ``tails'', $\mathbb P(Y \in \mathrm{L}, Z \in \mathcal{R})$ may indeed not be exactly zero but only nearly so (as indicated by the $\approx$ sign). Hence, there would be a very small, yet non-zero probability that the pointer configuration points to the left (at least for a short period of time), while the camera -- or ``observer'' -- sees a pointer pointing to the right. Realistically, this probability will be so small as to be practically negligible, but the atypical outcome is still \emph{possible} according to the theory. Would this mean that the Bohmian particle configuration $Y$ does not correspond to the ``real'' pointer position?
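The effective-guidance computation can also be checked in a one-dimensional caricature. In the following sketch, all packets and parameters are illustrative assumptions (not taken from the text): the pointer configuration $Y$ lies deep in the support of $\Phi_L$, and the velocity of $Z$ computed from the full entangled wave function via the guiding formula agrees, up to terms of order $\Phi_R(Y)$, with the velocity generated by $\Psi_{\vartriangleleft}$ alone.

```python
import numpy as np

# One-dimensional caricature of effective guidance (all packets and
# parameters are illustrative assumptions, not taken from the text).
# With the pointer configuration Y deep in the support of Phi_L, the
# velocity of Z from the full entangled wave function agrees with the
# velocity generated by the single packet Psi_left.

def phi_L(y): return np.exp(-0.5 * (y + 5) ** 2)   # pointer-left packet
def phi_R(y): return np.exp(-0.5 * (y - 5) ** 2)   # pointer-right packet

def psi_left(z):   # record packet drifting towards region L (k = -2)
    return np.exp(-0.5 * z ** 2) * np.exp(-2j * z)

def psi_right(z):  # record packet drifting towards region R (k = +2)
    return np.exp(-0.5 * z ** 2) * np.exp(+2j * z)

def velocity(wave, z, eps=1e-6):
    # guiding formula v ~ Im(grad psi / psi), gradient by central difference
    grad = (wave(z + eps) - wave(z - eps)) / (2 * eps)
    return np.imag(grad / wave(z))

Y, Z = -5.0, 0.3   # pointer actually "left"; some camera configuration

full = lambda zz: phi_L(Y) * psi_left(zz) + phi_R(Y) * psi_right(zz)
v_full = velocity(full, Z)
v_single = velocity(psi_left, Z)

# phi_R(Y) is of order e^{-50}, so the two velocities agree numerically
print(v_full, v_single)
```

Here $\Phi_R(Y) \sim e^{-50}$, so the contribution of $\Psi_{\vartriangleright}$ to the velocity field at the actual configuration is entirely negligible, which is exactly the point of the argument.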
No, it means precisely what the theory says, namely that there is an extremely small, yet non-zero probability that the pointer points left, while the camera records a pointer pointing right. And this shouldn't be all that surprising upon reflection. Also according to electrodynamics, it is possible, yet extremely unlikely, that I see the moon to my right while it is actually to my left -- because what I see is a very special, random fluctuation in the electromagnetic field. It is also possible, yet extremely unlikely, that I hold a thermometer (or my finger) in hot water but register a very low temperature because all the fast particles happen to stay away from it. \emph{Atypicality} can always undermine the reliability of observations; consequently, any inference from empirical evidence has to rely on the assumption that the evidence has not been produced by an atypical or very-low-probability event. This is an important insight about physics in general, not a mystery of Bohmian mechanics or quantum mechanics in particular. \subsection{``Position measurements'' that do not measure positions} There are also special measurement procedures in which the relevant ``record states'' $\Psi_1$ and $\Psi_2$ in eq. \eqref{pointerintegral} would have a big overlap in the configuration space of $C$. These include, in particular, so-called \emph{weak measurements} but also interactions that lead, for instance, to a spin-flip or the excitation of an atom, so that $\Psi_1$ and $\Psi_2$ are orthogonal in Hilbert space but not separated in configuration space. (This cannot be directly observed but the ``read out'' that manifests in particle configurations can be delayed.) From the same equation, it is evident that such procedures will not reliably reveal the actual particle positions (\cite{aharonov.vaidman1996, naaman-marom.etal2012}). 
There are even interferometer experiments, in which the naive reading of a detector is systematically wrong about the path of a particle, in which a spin flip (let's say) is always produced by a nonlocal effect rather than a Bohmian trajectory passing nearby (nevertheless, the measurement statistics are always correctly predicted by Bohmian mechanics). This has given rise to the catchy accusation that Bohmian mechanics predicts ``surrealistic trajectories'' \citep{englert.etal2014}. In practice, decoherence prevents such situations for macroscopic systems, but as \cite{gisin2018} rightly points out, there is nothing in the Bohmian theory that makes it \emph{in principle} impossible to perform such an experiment with elephants. This is supposed to sound bad. However, stars are even bigger than elephants and General Relativity tells us that they are not always where we see them (literally). As Einstein reminded the young \citet[p. 80]{heisenberg2012}, it is always the physical theory that has to tell us what can be measured and how, i.e., which physical events are correlated in a way that allows us to infer one from the other. Bohmian mechanics tells us that certain measurement procedures (which are much less trivial than just ``looking'') are not reliable ways to detect the position of a particle or an elephant. Of course, concluding from this that we cannot trust observations of Bohmian particles in general, is to commit a similar mistake as the American president when he says that ``you literally can't see'' the F-35 stealth fighter. \cite{gisin2018} summarizes the situation correctly by saying that not all measurements which are called ``position measurements'' in standard quantum mechanics are actually position measurements in Bohmian mechanics.\footnote{Another instructive example for this fact was already provided in \citet[sec. 7.5]{durr.etal2004}.} Again, this is probably meant to sound bad (for Bohmian mechanics). 
But what, in fact, is the justification for calling these (or any) experimental procedures ``position measurements'' in standard quantum mechanics? Is it merely because their statistics can be described by some sort of ``position operator''? This is not a physical account of why and how the detector events in question should be systematically correlated with the position of anything. Orthodox quantum mechanics is unable to provide such an account. In fact, it doesn't even contain localized objects with definite positions, leading to the more basic question of what ``position measurements'' are supposed to measure in the first place. \subsection{Absolute Uncertainty} An unfortunate source of confusion about the empirical status of particle positions in Bohmian mechanics is the theorem of \emph{absolute uncertainty} \citep[ch. 2]{durr.etal2013a}. This theorem states that if the effective wave function of a subsystem $S$ is $\varphi$, an external observer cannot have more information about the particle configuration of that system than provided by the $\lvert \varphi \rvert^2$-distribution. (``Information'' here just refers to a correlation between the configuration of $S$ and the configuration of some other system -- e.g. a brain -- that constitutes a ``record''.) \citet[p. 757]{lewis2007} then objects that \begin{quote} ``this can't be exactly right; the wavefunction, after all, doesn't determine a unique result for a measurement. So Bohmians note that since an observer can know which wavepacket contains the particles, the lower bound on the accuracy with which the particle configuration can be known is actually the squared amplitude of the occupied wave packet.''\end{quote} \noindent The theorem is exactly right (it's a theorem, after all). What Lewis seems to forget is that in order to know the actual measurement result, an observer has to look at (interact with) the measurement apparatus.
This will effectively collapse the apparatus wave function into an ``occupied'' wave packet consistent with that measurement result and the observer's knowledge of it.\footnote{For another version of this misunderstanding, see \citet[footnote 1]{gao2019}.} To counter further misunderstandings, here are some things the theorem \emph{doesn't} imply: \begin{enumerate}[i.] \item Absolute uncertainty doesn't prevent us from determining particle positions to \emph{arbitrary} precision (again, keeping in mind that whatever procedure we use to localize the particle positions can also localize their wave function). Note that while one usually states the reverse implication, we could just as well say that our knowledge of the particle positions puts a limit on the spread of their wave function. To measure a trajectory is, evidently, just to measure the position at different times, though one then has to keep in mind that since the measurement procedure can change (effectively collapse) the effective wave function, it can also significantly change the trajectory, in particular for microscopic systems. \item Absolute uncertainty doesn't prevent us from inferring additional information about past trajectories or particle positions. For instance, in the double slit experiment (assuming a suitably symmetric setup) we know on theoretical grounds that particles hitting the screen above/below the symmetry axis have passed through the upper/lower slit (because Bohmian trajectories cannot cross). \item Absolute uncertainty sets a limit on our knowledge of a system's particle configuration in terms of its wave function. It does not say that our knowledge of a system is limited \emph{to} its wave function. Indeed, what we can know about wave functions is an entirely different question. It seems evident to me that our knowledge \emph{of} the wave function is usually much more limited -- and certainly much more indirect -- than our knowledge of particle positions.
In fact, to the extent that we can measure the wave function (by so-called ``protective measurements'', see \cite{aharonov.vaidman1993}), we infer it from position measurements. \end{enumerate} \noindent For all these reasons, attempts to use absolute uncertainty in an argument for the empirical priority of wave functions over particles are thoroughly misguided. \section{Measurements and conscious experience} All that said, some authors insist that Bohmian mechanics runs into problems when the description of the measurement process is supposed to end not with the pointer of a measurement device (or maybe a photograph of the measurement device) but the brain and conscious experience of an observer (e.g. \cite{gao2019a}, see \cite{oldofredi2019} for a good discussion). A priori, there are at least two reasons to be suspicious of such claims: \begin{enumerate} \item Most of the authors making them seem to misunderstand Bohmian mechanics already as applied to measurement devices. \item From the point of view of the physical theory, there is no essential difference between a measurement device and a brain (or whatever physical system is supposed to be the supervenience base of conscious experience). The particle configuration of a brain records an observation in the same sense as the particle configuration of a measurement device or a photographic film does. Everything else falls under the mind-body problem, about which, I believe, quantum physics has nothing new to say (cf. \cite{loewer2003}). \end{enumerate} \noindent Of course, there is in general more to a ``record'' than a static particle configuration. It is also relevant how the system in question evolves and interacts, and this is determined by the wave function. Thus, to the extent that there is a legitimate debate here, it comes down to the following question (cf. 
\cite{lewis2007}): \begin{quote}If some functionalist theory of the mind is true, what makes it the case that mental states are functionally realized by the particles rather than the wave function which is also part of the Bohmian theory? \end{quote} This objection is particularly popular among Everettians, who use it to argue that Bohmian mechanics is a Many-Worlds theory in denial (see, in particular, \cite{deutsch1996, brown.wallace2005}). Bohmian mechanics agrees, after all, that the wave function of the universe never collapses, thus admitting all the branches that make up the Everettian multiverse. There are a few observations I can make in response: \begin{enumerate}[a)] \item I don't know if any functionalist theory of the mind is true (and I wouldn't want to make my understanding of quantum mechanics contingent on it). \item To apply functionalism, it must be clear what the basic objects and properties are in terms of which the non-basic objects and properties are functionalized. In the Bohmian theory, the basic terms are particle positions, while the wave function is itself understood through its ``functional'' role for the motion of particles. \item There are good arguments for \emph{substrate independence} in the philosophy of mind, in particular the ``fading/dancing qualia'' arguments of \cite{chalmers1995}, but I don't see how they would apply to particles and the wave function, even if the wave function were another physical entity (which I don't believe it is). It may be possible to gradually replace a biological brain by a silicon brain while maintaining the functional organization, but I don't know what it would even mean to replace parts of a particle brain by wave functions.
\item In more detail, the objection against taking particles as the physical correlates of conscious experience is that ``If the functionalist assumption is correct, for consciousness to supervene on the Bohmian particles but not the wave function, the Bohmian particles must have some functional property that the wave function do not share. But the functional behaviour of the Bohmian particles is arguably identical to that of the branch of the wave function in which they reside.'' \cite[p. 306]{gao2019a} The last assertion is also arguably false. For instance, Bohmian mechanics allows for the possibility that the universal wave function is stationary while all the change in the world comes from particle motions. Changing in time versus not changing in time is clearly a significant functional difference. I'm not committing to a stationary wave function, here; my point is that it cannot be \emph{a priori} true that anything which can be functionalized in terms of particles can also be functionalized in terms of the wave function, even under a generous interpretation of functionalism. \item On a more basic note: particles move relative to one another. The wave packets guiding their motion (to the extent that they are even separable) don't. They ``live'' in different dimensions of configuration space and hence do not even stand in a distance relation to one another. Anything that allows for a functional definition in terms of matter in motion (this arguably includes brains, though the critical question is, of course, whether it includes ``minds'') can, in principle, be realized by particles. It is not clear at all that it can also be realized by degrees of freedom in the wave function.\footnote{What some neo-Everettians seem to establish is nothing more than a mapping between patterns in the wave function and trajectories in physical space. This is not even a mathematical isomorphism, let alone a functional one. 
I find it remarkable how the philosophical discussion turned to the question whether the empirical content of Bohmian mechanics is really that of Everettian quantum mechanics when it is not clear if Everettian quantum mechanics has any empirical content at all.} \item Another version of the objection against particles as the physical correlates of conscious experience is that a conscious agent would then have precise knowledge of the particle configuration of her brain, which leads to worries about faster-than-light signaling as well as to the question, how the brain measures its own particle configuration \cite{stone1994}. To be honest, I don't even see how this objection gets off the ground. Knowledge realized in (or supervenient on) brain configurations is not knowledge \emph{about} brain configurations. \end{enumerate} \noindent In the upshot, to say that Bohmian mechanics cannot account for conscious experience (to the extent that it is physical) is to say that particles moving in accordance with the Bohmian laws cannot possibly be a ``brain''. As far as I can tell, this claim has no basis in physics, neuroscience, or anywhere else. On the other hand, the claim that ``brains'' would have to be located in the undulating wave function rather than moving particles is based on a variety of physical and metaphysical assumptions that are questionable, at best. I don't believe that physics can tell us why brain states are correlated with mental states but I believe that physics \emph{must} tell us what brains are made of. And the answer of Bohmian mechanics is clearly and unequivocally: particles.
\section{Epilogue} When all is said and done, I suspect that some readers will still insist on the question:\\ \noindent \emph{Suppose that Bohmian mechanics is true, how do I know that the tree in front of me is a collection of particles rather than a pattern in the wave function?}\\ In response, I could insist on a particular metaphysical interpretation of the wave function and say that it is not physical stuff but rather a nomological object (see e.g. \cite{esfeld.etal2014}). I believe that this response is correct but doubt that it would satisfy the questioner. Thus, if I may be more blunt, I would say: if you even ask this question, you still have some physical theory in mind that is not Bohmian mechanics. I suppose that when you first studied classical Hamiltonian mechanics, you didn't wonder why, according to that theory, a tree is a configuration of particles rather than a pattern in the Hamiltonian flow on phase space. Physics has never been about locating trees in an abstract mathematical formalism; only the confusions about quantum mechanics lead to this business of ``interpretation''. Instead, the scientific enterprise departs from our ``manifest image of the world'' \citep{sellars1962}, our observation of trees, tables, cats, etc., and the question, what these objects are made of on the most fundamental level. Once we have a hypothesis about the basic entities and the laws describing them, we are in the business of locating trees (and cats, and measurement devices, etc.) in the scientific image of the theory to see if it matches the world that we experience. \\ \noindent\emph{But if the world -- including you -- was just patterns in the wave function rather than configurations of particles, your experience would be the same. }\\ I doubt that this is true, and the people who claim it is, have, again, another theory in mind than Bohmian mechanics.
I agree that \emph{if} trees were patterns in the wave function, Bohmian mechanics would not be the correct theory of the world. However, what some physicists and philosophers have tried to argue is that even if Bohmian mechanics were true, the tree in front of you would most likely be a pattern in the wave function rather than a collection of particles. And these arguments don't hold water; they are question-begging at best and usually based on misconceptions of the physical theory. \\ \noindent\emph{I feel like you're still avoiding the real issue, so let me rephrase it: How does it follow FROM THE EQUATIONS of Bohmian mechanics that the tree in front of me is a configuration of particles rather than a pattern in the wave function? }\\ Nothing physical follows from mathematics alone. This is why the primitive ontology -- the stuff that trees are made of (or maybe instantiated in) -- is a basic and indispensable part of any fundamental physical theory. A theory with a clear primitive ontology can be wrong about what matter is, but it cannot be wrong about what it says that matter is. \\ \noindent \textbf{Acknowledgements:} I am grateful to Andrea Oldofredi for helpful comments and to Shan Gao for an inspiring discussion. I gratefully acknowledge funding by the Swiss National Science Foundation (SNSF) Doc.Mobility Fellowship P1LAP1\_184150. \bibliographystyle{apalike}
\section{Introduction} We've started a theory of homotopy enrichment with the notion of co-Segal category (see \cite{COSEC1}). The basic idea is to replace the composition operation `$\mathcal{C}(A,B) \otimes \mathcal{C}(B,C) \to \mathcal{C}(A,C)$' by configurations of the form: \[ \xy (-15,0)*+{\mathcal{C}(A,B) \otimes \mathcal{C}(B,C)}="X"; (30,0)*+{\mathcal{C}(A,B,C)}="Y"; (30,18)*+{\mathcal{C}(A,C)}="E"; {\ar@{->}^-{\varphi}"X";"Y"}; {\ar@{->}^-{}_{\wr}"E";"Y"}; {\ar@{.>}^-{}"X";"E"}; \endxy \] where the vertical map is a weak equivalence.\\ Such a structure is defined as a lax functor $\mathcal{C}: \sxop \to \mathscr{M}$ satisfying a homotopy condition (the vertical maps being weak equivalences). Here $\sxop$ is a strict $2$-category built out of a set $X$ and $\mathscr{M}$ is a symmetric monoidal model category (viewed as a $2$-category with a single object). The set $X$ is the set of objects of $\mathcal{C}$.\\ The philosophy of co-Segal categories is to reverse the Segal situation, but this is not the only difference. Indeed, Segal categories are defined by simplicial diagrams satisfying some homotopy conditions (Segal conditions), whereas the definition of co-Segal categories mixes simplicial structure and homotopy conditions together with algebraic data (the map $\varphi$ above and its cousins). And it seems that the presence of algebraic data creates some \emph{obstruction} to having a nice homotopical understanding of these structures. For example, as far as the author knows, we cannot guarantee that the category of $\mathscr{M}$-valued lax functors inherits the left properness of $\mathscr{M}$. \\ Strict $\mathscr{M}$-categories correspond to co-Segal categories that are \emph{purely algebraic}, in the sense that the simplicial structure and homotopy conditions are trivial: everything is given by identity morphisms.
\\ In this paper we investigate the \emph{strictification problem} for co-Segal $\mathscr{M}$-categories for a (symmetric) monoidal model category $\mathscr{M}$. So morally we try to find an analogue of Bergner's strictification theorem for Segal categories (see \cite{Bergner_rigid}). We have the following theorem: \begin{thmsans}[\ref{quasi-strict}] Every excellent co-Segal $\mathscr{M}$-category is equivalent to an $\mathscr{M}$-category with the same objects and having a strict composition and weak identity morphisms. \end{thmsans} As one can observe, this is not a full strictification theorem since it concerns only the categories we've called \emph{excellent} (Definition \ref{excellent-lax-diag}). We don't know if the theorem holds for all co-Segal categories. Even though we don't know examples of non-excellent co-Segal categories, there are some reasons from the theory of \emph{triangulated categories} that suggest that not all co-Segal categories admit a strict model. In fact, it was pointed out to the author that F. Muro has examples of triangulated categories that don't have a \emph{dg-enhancement} (see also \cite{MSS}). But it is plausible that such a triangulated category can be enhanced by a co-Segal (dg-)category which, a posteriori, shouldn't be equivalent to a strict one.\\ Our theorem goes in the direction of \emph{Simpson's conjecture} which says that ``higher categories are equivalent to ones that admit a strictly associative composition but weak identities'' (see \cite{Simpson_weak_unit}). A particular case of the conjecture has been proved by Joyal and Kock (see \cite{Joyal_Kock_wu}). To prove the theorem we simply use the fact that a weak equivalence between cofibrant Reedy diagrams induces a weak equivalence on the colimits. And being \emph{excellent} ensures that we are in this situation up to weak equivalence.\\ We give a weaker version of the previous result in Theorem \ref{weak-strict}.
Unfortunately, even this weaker version does not induce a Quillen equivalence between arbitrary co-Segal categories and strict categories. Finally, we would like to remind the reader that the co-Segal categories considered here have homotopy units, and that the previous theorem gives a strictification of the composition but not of the units. We will address the strictification of homotopy units in a different work. \section{Excellent lax diagrams} \subsection{Preliminaries} \begin{warn} In this paper all the set theoretical size issues have been left aside\footnote{We can work with universes $\mathbb{U} \subsetneq \mathbb{V} \subsetneq \cdots$ }. Some of the material provided here consists of well-known facts, and we make no claim of inventing or introducing them. Unless otherwise specified, when we say `lax functor' we will mean what are called \emph{normal} or \emph{normalized lax functors}. These are lax functors $\mathcal{F}$ such that the maps `$\Id \to \mathcal{F}(\Id)$' are identities and all the laxity maps $\mathcal{F}(\Id) \otimes \mathcal{F}(f) \to \mathcal{F}( \Id \otimes f)$ are natural isomorphisms. \end{warn} In the following $\mathcal{C}$ is a locally Reedy $2$-category (henceforth $\lr$-category) which is simple in the sense of \cite{COSEC1}. This means that each hom-category $\mathcal{C}(A,B)$ has a Reedy structure together with a degree that is compatible with compositions. Consider $\overleftarrow{\mathcal{C}}$, the $2$-category obtained by keeping only the \underline{inverse} category of each $\mathcal{C}(A,B)$. \begin{df} Say that $\mathcal{C}$ is an \textbf{inverse divisible} locally Reedy $2$-category if every composition functor: $$ \overleftarrow{\mathcal{C}}(A,B) \times \overleftarrow{\mathcal{C}}(B,C) \to \overleftarrow{\mathcal{C}}(A,C) $$ is a Grothendieck fibration.
\end{df} For a monoidal category $\mathscr{M}=(\underline{M}, \otimes, I)$, we will denote by $\Lax(\mathcal{C},\mathscr{M})_n$ the category of normal lax functors and icons, and we set $\kc=\prod_{A,B}\Hom[\mathcal{C}(A,B), \underline{M}]$. \\ We have a forgetful functor: $\Ub: \Lax(\mathcal{C},\mathscr{M})_n \to \kc$ that admits a left adjoint if $\mathscr{M}$ is cocomplete (see \cite{COSEC1})\footnote{This holds for arbitrary $2$-categories $\mathcal{C}$, not only for $\lr$ ones}.\\ Let $\mathcal{F}: \mathcal{C} \to \mathscr{M}$ be a lax diagram in a complete monoidal category. Given a $1$-morphism $z\in \mathcal{C}(A,B)$, one has the corresponding notions of: \begin{enumerate} \item lax-latching object of $\mathcal{F}$ at $z$: $\laxlatch(\mathcal{F},z)$; \item lax-matching object $\laxmatch(\mathcal{F},z)=\match(\mathcal{F}_{AB},z)$; \item and the classical latching object $\latch(\mathcal{F}_{AB},z)$. \end{enumerate} \begin{rmk}\label{rmk-reedy-map} We have canonical maps: \begin{align*} \laxlatch(\mathcal{F},z) \to \mathcal{F}(z),\\ \laxmatch(\mathcal{F},z) \to \mathcal{F}(z),\\ \latch(\mathcal{F}_{AB},z) \to \mathcal{F}(z). \end{align*} and one important map: \begin{align*} \delta_z:\latch(\mathcal{F}_{AB},z) \to \laxlatch(\mathcal{F},z). \end{align*} We have a factorization of the map $\latch(\mathcal{F}_{AB},z) \to \mathcal{F}(z)$ as: \begin{align*} \latch(\mathcal{F}_{AB},z) \xrightarrow{\delta_z} \laxlatch(\mathcal{F},z) \to \mathcal{F}(z). \end{align*} \end{rmk} And if $\mathscr{M}$ is a monoidal model category then we can define the corresponding notions of Reedy cofibrations and Reedy fibrations. Denote by $\kc_{\text{-Reedy}}$ the product Reedy model structure on $\kc=\prod_{A,B}\Hom[\mathcal{C}(A,B), \underline{M}]$. Similarly we will denote by $\kc_{\text{-proj}}$ the product projective model structure.
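For orientation, one may recall the classical latching and matching objects in the standard Reedy conventions (a sketch only; the lax analogues enlarge the indexing categories, and we refer to \cite{COSEC1} for their precise definitions):

```latex
\[
\latch(\mathcal{F}_{AB},z)
  \;=\; \operatorname*{colim}_{\left(\overrightarrow{\mathcal{C}}(A,B)\,\downarrow\, z\right)\setminus\{\mathrm{id}_z\}} \mathcal{F}_{AB},
\qquad
\match(\mathcal{F}_{AB},z)
  \;=\; \lim_{\left(z\,\downarrow\,\overleftarrow{\mathcal{C}}(A,B)\right)\setminus\{\mathrm{id}_z\}} \mathcal{F}_{AB}.
\]
```

That is, the latching object is a colimit over the punctured slice of the direct subcategory and the matching object a limit over the punctured coslice of the inverse subcategory, which is what makes the canonical maps $\latch(\mathcal{F}_{AB},z) \to \mathcal{F}(z)$ and $\match(\mathcal{F}_{AB},z) \leftarrow \mathcal{F}(z)$ exist in the first place.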
\\ The advantage of having such an $\lr$-category is that we can use `Reedy techniques' and establish the following: \begin{thm} Let $\mathscr{M}$ be a monoidal model category and $\mathcal{C}$ be an $\lr$-category which is inverse divisible. Then we have: \begin{enumerate} \item there exists a unique model structure, called the Reedy model structure, on the category $\Lax(\mathcal{C},\mathscr{M})_n$ of normal lax functors such that $$ \Ub : \Lax(\mathcal{C},\mathscr{M})_n \to \kc_{\text{-Reedy}} $$ is a right Quillen functor; \item if $\mathcal{C}$ is totally direct, i.e.\ $\mathcal{C}=\overrightarrow{\mathcal{C}}$, then we have a `projective' Quillen adjunction: $$ \Ub : \Lax(\mathcal{C},\mathscr{M})_n \to \kc_{\text{-proj}};$$ \item if all objects of $\mathscr{M}$ are cofibrant and $\mathscr{M}$ is cofibrantly generated, then for \underline{any} $\mathcal{C}$ we also have a projective Quillen adjunction: $$ \Ub : \Lax(\mathcal{C},\mathscr{M})_n \to \kc_{\text{-proj}} $$ between cofibrantly generated model categories. \end{enumerate} \end{thm} \begin{proof} Assertion $(1)$ is the dual statement of \cite[Theorem 7.1]{Colax_Reedy}. Assertion $(2)$ is a corollary of Assertion $(1)$ combined with the fact that for direct Reedy categories, the projective and Reedy model structures are the same. Assertion $(3)$ can be found in a more general context in \cite{COSEC1}. \end{proof} \begin{rmk} To prove Assertion $(3)$ one uses a transfer lemma of Schwede-Shipley \cite{Sch-Sh-Algebra-module} as exposed in \cite{COSEC1}. \end{rmk} \subsection{Excellent lax diagrams} From now on we will work with the Reedy model structure. \begin{df}\label{excellent-lax-diag} A lax diagram $\mathcal{F} \in \Lax(\mathcal{C},\mathscr{M})_n$ is $\Ub$-cofibrant if $\Ub(\mathcal{F})$ is cofibrant in $\prod_{A,B}\Hom[\mathcal{C}(A,B), \underline{M}]$.
A lax diagram $\mathcal{F}$ is \textbf{excellent} if there is a weak equivalence $\mathcal{F} \xrightarrow{\sim} \mathcal{G}$ where $\mathcal{G}$ is a $\Ub$-cofibrant lax diagram. \end{df} Recall that for $\mathscr{M}=( \sset, \times, 1)$, the cofibrations are precisely the monomorphisms. A direct consequence of Remark \ref{rmk-reedy-map} is that: \begin{prop} \begin{enumerate} \item For an arbitrary $\mathscr{M}$, a cofibrant lax diagram $\mathcal{F} \in \Lax(\mathcal{C},\mathscr{M})_n$ in the Reedy structure is excellent if for every $z$ the canonical map $$ \delta_z:\latch(\mathcal{F}_{AB},z) \to \laxlatch(\mathcal{F},z)$$ is a cofibration. \item For $\mathscr{M}=( \sset, \times, 1)$, a cofibrant lax diagram $\mathcal{F} \in \Lax(\mathcal{C},\mathscr{M})_n$ in the Reedy structure is excellent \textbf{if and only if} for every $z$, the map $\delta_z$ is a cofibration. \end{enumerate} \end{prop} \begin{proof} Indeed, since $\mathcal{F}$ is cofibrant in the lax-Reedy structure, the following canonical map is a cofibration: $$\laxlatch(\mathcal{F},z) \to \mathcal{F}(z).$$ Therefore if, in addition, the map $\delta_z:\latch(\mathcal{F}_{AB},z) \to \laxlatch(\mathcal{F},z)$ is also a cofibration, then so is the composite: $$ \latch(\mathcal{F}_{AB},z) \to \laxlatch(\mathcal{F},z) \to \mathcal{F}(z).$$ Thus $\mathcal{F}_{AB}$ is Reedy cofibrant for all $(A,B) \in \Ob(\mathcal{C})^2$ and $\mathcal{F}$ is excellent.\\ Assertion $(2)$ is elementary. Indeed if the composite $$\latch(\mathcal{F}_{AB},z) \to \laxlatch(\mathcal{F},z) \to \mathcal{F}(z)$$ is a monomorphism, then so is $$\latch(\mathcal{F}_{AB},z) \to \laxlatch(\mathcal{F},z).$$ \end{proof} \begin{df} A morphism $\sigma : \mathcal{F} \to \mathcal{G}$ in $ \Lax(\mathcal{C},\mathscr{M})_n $ is a $\Ub$-cofibration if $\Ub(\sigma)$ is a cofibration in $\kc$. \end{df} It follows immediately that $\Ub$-cofibrations are closed under composition and retracts.
Denote by $\Gamma: \kc \to \Lax(\mathcal{C},\mathscr{M})_n $ the left adjoint of $\Ub$. \begin{prop} Let $\mathscr{M}$ be a model category such that all objects are cofibrant. With respect to the projective model structure, if for any generating cofibration $\sigma \in \kc$, $\Gamma(\sigma)$ is an $\Ub$-cofibration, then every cofibration in $ \Lax(\mathcal{C},\mathscr{M})_n $ is also a $\Ub$-cofibration. \end{prop} \begin{proof}[Sketch of proof] Denote by $\mathbf{I}$ the generating set of cofibrations in $\kc$. By definition of the model structure on $ \Lax(\mathcal{C},\mathscr{M})_n$, the set $\Gamma(\mathbf{I})$ constitutes a set of generating cofibrations in $ \Lax(\mathcal{C},\mathscr{M})_n$ (see \cite{COSEC1}). Then a cofibration in $ \Lax(\mathcal{C},\mathscr{M})_n$ is just a relative $\Gamma(\mathbf{I})$-cell complex. Therefore it is enough to show that in a pushout square \[ \xy (0,18)*+{\Gamma \mathcal{A}}="W"; (0,0)*+{\Gamma \mathcal{B}}="X"; (30,0)*+{\mathcal{F} \cup^{\Gamma \mathcal{A}}\Gamma \mathcal{B}}="Y"; (30,18)*+{\mathcal{F}}="E"; {\ar@{->}^-{j}"X";"Y"}; {\ar@{->}^-{\alpha}"W";"X"}; {\ar@{->}^-{}"W";"E"}; {\ar@{->}^-{}"E";"Y"}; \endxy \] where $\alpha \in \mathbf{I}$, the map $\mathcal{F} \to \mathcal{F} \cup^{\Gamma \mathcal{A}}\Gamma \mathcal{B} $ is also a $\Ub$-cofibration. To calculate that pushout, one starts by taking the pushout between the underlying diagrams in $\kc$. And as in any model category, projective cofibrations are closed under pushout; it follows that the first canonical map is also a projective cofibration. This map modifies $\mathcal{F}$ and the whole trick is to build the pushout out of that modification. The hypothesis `all objects are cofibrant' is used to guarantee that cofibrations are closed under tensor product. In the end the map $\mathcal{F} \to \mathcal{F} \cup^{\Gamma \mathcal{A}}\Gamma \mathcal{B} $ is a transfinite composition of projective cofibrations and therefore is a projective cofibration.
We refer the reader to \cite{COSEC1} for the details on that pushout. \end{proof} \begin{nota} \begin{enumerate} \item For each pair $(A,B) \in \Ob(\mathcal{C})^2$, let $p_{AB}$ be the projection functor: $$p_{AB}: \kc \to \Hom(\mathcal{C}(A,B), \underline{M}).$$ \item $p_{AB}$ has a left adjoint $\delta_{AB}$ which is the `Dirac mass' (see \cite{COSEC1}). \item Let $\mathbf{I}_{AB}$ (resp. $\Ja_{AB}$) be a set of generating cofibrations (resp. trivial cofibrations) for $\Hom(\mathcal{C}(A,B), \underline{M})$. \end{enumerate} \end{nota} By lifting properties and adjunction one clearly has: \begin{lem}\label{lem-generation} \begin{enumerate} \item The sets $$ \coprod_{(A,B)} \{ \delta_{AB}(\alpha); \alpha \in \mathbf{I}_{AB} \}$$ $$ \coprod_{(A,B)} \{ \delta_{AB}(\alpha); \alpha \in \Ja_{AB} \} $$ constitute sets of generating cofibrations (resp. trivial cofibrations) of $\kc$. \item Similarly the two sets: $$ \coprod_{(A,B)} \{ \Gamma[\delta_{AB}(\alpha)]; \alpha \in \mathbf{I}_{AB} \}$$ $$ \coprod_{(A,B)} \{ \Gamma[\delta_{AB}(\alpha)]; \alpha \in \Ja_{AB} \} $$ are generating sets of cofibrations (resp. trivial cofibrations) for $ \Lax(\mathcal{C},\mathscr{M})_n$. \end{enumerate} \end{lem} \subsection{Excellent co-Segal precategories} From now on we take $\mathcal{C}= \sxop$.
Recall that $\sxop$ is entirely direct (like $\Depiop$), so the Reedy and projective model structures on both $\kc$ and $\Lax(\mathcal{C},\mathscr{M})_n$ are the same.\\ In this case we can explicitly write a formula for the left adjoint $\Gamma$.\\ For $\mathcal{G} \in \prod_{A,B} \Hom[\sx(A,B)^{\text{op}}, \underline{M}]$, $\Gamma \mathcal{G}$ is given by the formula: $$\Gamma\mathcal{G} (z)= \mathcal{G}(z) \sqcup (\coprod_{(s_1,..., s_l); \otimes(s_i)=z ; s_i\neq z} \mathcal{G}(s_1) \otimes \cdots \otimes \mathcal{G}(s_l)).$$ \begin{prop} Let $\mathscr{M}=(\underline{M}, \otimes, I)$ be a monoidal category having an initial object $0$ and such that for every $m \in M$, $0 \otimes m \cong 0$.\\ If $A \neq B$ then for every $\mathcal{G} \in \Hom[\sx(A,B)^{\text{op}}, \underline{M}]$ we have: $$\Ub (\Gamma \delta_{AB}\mathcal{G})\cong \delta_{AB}(\mathcal{G}). $$ \end{prop} \begin{proof} Indeed we have $\delta_{AB}(\mathcal{G})(s)= 0$ if $s\notin \mathcal{C}(A,B)$. Therefore if we have an $l$-tuple $(s_1,..., s_l)$ of composable morphisms such that the composite is $z$ and $s_i\neq z$, then if $A\neq B$, necessarily there is at least one $s_i \notin \mathcal{C}(A,B)$. It follows that the only summand in $\Gamma(\delta_{AB}(\mathcal{G}))(z)$ that is different from $0$ is $\mathcal{G}(z)$ and the proposition follows. \end{proof} \begin{cor} If $A \neq B$ then for any cofibration $\alpha$ of $\Hom[\sx(A,B)^{\text{op}}, \underline{M}]$, $\Gamma(\delta_{AB}(\alpha))$ is a $\Ub$-cofibration. \end{cor} \subsubsection{Obstruction of Excellence} From the previous corollary together with Lemma \ref{lem-generation}, it is clear that if for every $A \in \Ob(\mathcal{C})$ and any $\alpha \in \mathbf{I}_{AA}$, $\Gamma(\delta_{AA}(\alpha))$ is a $\Ub$-cofibration, then every cofibration is a $\Ub$-cofibration. One can observe that we no longer have $\Ub[\Gamma(\delta_{AA}(G))] \cong \delta_{AA}(G) $.
Indeed if $z=(A,A,A,...,A)$, there can be nontrivial summands that contain a tensor product in $\Gamma(\delta_{AA}(G))(z)$. \begin{rmk} It is precisely the presence of \emph{algebraic data} that \emph{kills} the `projectiveness' of cofibrations. The main reason is that in the category $\text{Arr}(\mathscr{M})$ of arrows of $\mathscr{M}$ with its projective model structure, cofibrations are not (necessarily) closed under tensor product. \end{rmk} One can establish the following. \begin{prop} If $\mathscr{M}=(\underline{M}, \otimes, I)$ is a monoidal model category such that all objects are cofibrant then: \begin{enumerate} \item In the adjunction $$\Ub: \Lax[\sxop,\mathscr{M}]_n \rightleftarrows \prod_{A,B}\Hom[\sx(A,B)^{\text{op}}, \underline{M}]: \Gamma$$ for every cofibrant object $\mathcal{G} \in \kc$, the lax diagram $\Gamma(\mathcal{G})$ is $\Ub$-cofibrant and hence excellent. It follows that for any $\mathcal{G}$, $\Gamma \mathcal{G}$ is excellent. \item Every $\mathscr{M}$-category is excellent. \end{enumerate} \end{prop} \begin{proof} Assertion $(2)$ is clear since all objects are cofibrant. In fact $\mathscr{M}$-categories correspond to the locally constant lax diagrams. And given a category with an initial object $E$, e.g.\ $\sx(A,B)^{op}$ with $E=(A,B)$, constant diagrams correspond to the essential image of the left adjoint $\Fb^{E}$ of the evaluation at $E$. And since $\Fb^{E}$ is a left Quillen functor with respect to the projective model structure, the result follows.\\ One can alternatively check this by lifting properties against all fibrations.\\ For Assertion $(1)$ we proceed as follows. Let $z$ be a $1$-morphism in $\sxop(A,B)$. Recall that $z$ is a sequence $(A,..., A_i, ...,B)$.
If $\partial_z$ denotes the classical \emph{latching category} at $z$, then the particularity of $\sxop$ is that: \begin{claim} For any $1$-morphism $z$ of $\sxop$, and any presentation $(s_1,..., s_l)$ of $z$, we have an isomorphism: $$\partial_z \cong \partial_{s_1} \times \cdots \times \partial_{s_l} $$ \end{claim} In fact we leave it to the reader to verify that this is true in any \emph{direct divisible} $\lr$-category (see \cite{Colax_Reedy}). Thanks to this isomorphism and the formula for $\Gamma$, one has that for any $\mathcal{G}$: $$\latch(\Gamma\mathcal{G},z) \cong \latch(\mathcal{G},z) \sqcup \coprod_{(s_1,..., s_l); \otimes(s_i)=z ; s_i\neq z} \latch(\mathcal{G},s_1) \otimes \cdots \otimes \latch(\mathcal{G},s_l)$$ If $\mathcal{G}$ is cofibrant then, by definition, every map $$\latch(\mathcal{G},s_i) \to \mathcal{G}(s_i) \hspace{0.5in} \text{(including $s_i=z$)}$$ is a cofibration (with cofibrant domain). And since cofibrations with cofibrant domain are closed under tensor product and coproduct, one clearly has that the canonical map: $$\latch(\Gamma\mathcal{G},z) \to (\Gamma\mathcal{G})(z) $$ is also a cofibration. \end{proof} \section*{New model structure for precategories} In the following we still work with $\mathcal{C}=\sxop$ for some set $X$. Let's write for simplicity $\msx=\Lax[\sxop,\mathscr{M}]_n$. We assume that $\mathscr{M}$ is a combinatorial monoidal model category. It can be shown that the projective (= Reedy) model structure of the previous sections is also combinatorial (see \cite{COSEC1}).\\ In this section we are going to construct another model structure on $\msx$ that will be used in the upcoming sections. We will use Smith's theorem (see for example \cite{Hov-model}).\\ Consider the following maps in $\msx$.
\begin{enumerate} \item Let $\mathbf{I}_{ex}$ be the set of maps $ \coprod_{A\neq B} \{ \Gamma[\delta_{AB}(\alpha)]; \alpha \in \mathbf{I}_{AB} \}$; \item Let $\Ja_{ex}$ be the set of maps $ \coprod_{A\neq B} \{ \Gamma[\delta_{AB}(\alpha)]; \alpha \in \Ja_{AB} \} $; \item Let $\Wa$ be the class of maps $\sigma$ such that for $A\neq B$, the component $\Ub(\sigma)_{AB}$ is a level-wise weak equivalence. We will identify $\Wa$ with the subcategory generated by these maps in $\msx$. \end{enumerate} It follows that any (old) weak equivalence in $\msxproj$ is in $\Wa$ and that $\Wa$ is closed under composition and retract. Using the fact that we already have a model structure on $\msx$ and thanks to Smith's theorem one has: \begin{thm}\label{thm-new-model} There is a cofibrantly generated model structure on $\msx$ with: \begin{enumerate} \item $\mathbf{I}_{ex}$ as the set of generating cofibrations; \item $ \Ja_{ex}$ as the set of generating trivial cofibrations; \item $\Wa$ as the subcategory of weak equivalences. \end{enumerate} The model structure is combinatorial and will be denoted by $\msxex$.\\ The identity functor $\msxproj \to \msxex$ is a right Quillen functor. \end{thm} \begin{proof} All the criteria of Smith's theorem are easily verified. Indeed since maps in $\Ja_{ex}$ are old trivial cofibrations, the pushout along a map in $\Ja_{ex}$ is an old trivial cofibration and in particular an old weak equivalence, thus in $\Wa$.\\ Finally (trivial) fibrations in $\msxproj$ are also new (trivial) fibrations since we have a smaller set of generating (trivial) cofibrations. \end{proof} \section{Locally constant lax functors and enriched categories} \subsubsection{Indiscrete or coarse category} Recall that the `object functor' $\Ob: \Cat \to \Set$ that takes a category $\mathcal{B}$ to its set of objects $\Ob(\mathcal{B})$, has a left adjoint $\disc : \Set \to \Cat$, `the discrete functor'. It turns out that this functor also has a right adjoint $ \indisc : \Set \to \Cat$.
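Schematically, these three functors assemble into an adjoint triple $$\disc \dashv \Ob \dashv \indisc,$$ so that $\disc(X)$ and $\indisc(X)$ are, respectively, the initial and terminal ways of putting a category structure on the set $X$.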
We will denote by $\overline{X}:= \indisc(X)$. By definition for any category $\mathcal{B}$ and any set $X$ we have an isomorphism of sets: $$\Hom(\mathcal{B},\overline{X}) \cong \Hom(\Ob(\mathcal{B}),X)$$ functorial in $X$ and $\mathcal{B}$, where the left-hand side is the set of functors from $\mathcal{B}$ to $\overline{X}$ while the right-hand side is the set of functions from $\Ob(\mathcal{B})$ to $X$. Below we give a brief description of $\overline{X}$. \paragraph{Brief description of $\overline{X}$} The category $\overline{X}$ is the terminal \underline{connected} groupoid having $X$ as its set of objects. There is exactly one morphism between any pair of elements: $$\overline{X}(a,b)=\Hom_{\overline{X}}(a,b):= \{(a,b)\} \cong 1.$$ The composition is the unique one: the bijection $1 \times 1 \cong 1$. Given a function $g: \Ob(\mathcal{B}) \to X$, the associated functor $\overline{g}: \mathcal{B} \to \overline{X}$ is given by the (unique) constant functions $$\overline{g}_{UV}: \mathcal{B}(U,V) \to \overline{X}(g(U),g(V))\cong 1.$$ \begin{rmk} If $X$ has two elements then $\overline{X}$ is the ``\textbf{walking-isomorphism category}'' in the sense that an isomorphism in a category $\mathcal{B}$ is the same thing as a functor $\overline{X} \to \mathcal{B}$. \end{rmk} \subsection{An adjunction lemma} In classical $1$-category theory, given an indexing category $\J$ one defines the colimit functor $$\colim: \Hom(\J,\mathscr{M}) \to \mathscr{M}$$ as the left adjoint of the constant functor $\cb_{\ast}: \mathscr{M} \to \Hom(\J,\mathscr{M})$.\\ Below we extend, locally, this fact to (normal) lax functors when $\J$ is a $2$-category. \\ \begin{df} Say that a (normal) lax functor $ \mathcal{F}: \J \to \mathscr{M}$ is \textbf{locally constant} if for every $(i,j)$ the component $\mathcal{F}_{ij}: \J(i,j) \to \mathscr{M}$ is a constant functor.
\end{df} The reader can check that such a (normal) lax functor is the same thing as a (semi) $\mathscr{M}$-category whose set of objects is $\Ob(\J)$; the hom-object between $i$ and $j$ is the value of $\mathcal{F}_{ij}$.\ \\ Denote by $\cb_{\ast}\Lax(\J,\mathscr{M}) \hookrightarrow \Lax(\J,\mathscr{M})$ the full subcategory of locally constant lax functors and transformations which are icons (see \cite{Lack_icons}). \begin{lem}\label{lax-to-cat} The inclusion functor $\cb_{\ast}\Lax(\J,\mathscr{M}) \hookrightarrow \Lax(\J,\mathscr{M})$ has a left adjoint. \end{lem} We can rephrase the above adjunction using the previous observation that we have an adjunction $\Ob \dashv \indisc$ that is also valid for $2$-categories. This means that for any $2$-category $\J$ and any (nonempty) set $X$ we have an isomorphism of \underline{sets}: $$2\text{-Func}(\J, \overline{X}) \cong \Hom[\Ob(\J),X].$$ The unit of this adjunction (when $X= \Ob(\J)$) gives a canonical $2$-functor $$\varepsilon_{\J}: \J \to \overline{\Ob(\J)}.$$ Then the lemma says essentially that the pullback functor $$\varepsilon_{\J \ast}: \Lax[\overline{\Ob(\J)}, \mathscr{M}] \to \Lax(\J,\mathscr{M})$$ has a left adjoint $$\varepsilon_{\J !}: \Lax(\J,\mathscr{M}) \to \Lax[\overline{\Ob(\J)}, \mathscr{M}]$$ when $\mathscr{M}$ is cocomplete. \begin{note} This situation is a left Kan extension for lax functors and there is a general statement for $2$-functors $\J \to \J'$ but we will not go through that here (see \cite{COSEC1}). \end{note} \begin{proof}[Sketch of proof] For a lax functor $\mathcal{F}$ one constructs the adjoint transpose by taking the colimit of each component $\mathcal{F}_{ij}$ of $\mathcal{F}$. Let $m(i,j):= \colim \mathcal{F}_{ij}$. As $\mathscr{M}$ is monoidal \underline{closed}, colimits distribute over $\otimes$.
Consider the following compatible diagram which ends at $m(i,k)$: $$\mathcal{F}_{ij}s \otimes \mathcal{F}_{jk} t \to \mathcal{F}_{ik}(s \otimes t) \to m(i,k)$$ in which $(s,t)$ runs through $\J(i,j) \times \J(j,k)$. \\ We get a unique map by universal property of the colimit: $$\varphi: m(i,j) \otimes m(j,k) \to m(i,k).$$ For the coherence axiom, one considers the compatible diagram ending at $m(i,l)$ as $(s,t,u)$ runs through $\J(i,j) \times \J(j,k) \times \J(k,l)$: $$\mathcal{F}_{ij}s \otimes \mathcal{F}_{jk} t \otimes \mathcal{F}_{kl}u \to \mathcal{F}_{il}(s \otimes t \otimes u) \to m(i,l).$$ Note that there are two ways to go from $\mathcal{F}_{ij}s \otimes \mathcal{F}_{jk} t \otimes \mathcal{F}_{kl}u$ to $ \mathcal{F}_{il}(s \otimes t \otimes u)$ and the coherence for $\mathcal{F}$ says that the two ways induce the same map. By the universal property of the colimit we get a unique map that makes everything compatible: $$\gamma_1: m(i,j) \otimes m(j,k) \otimes m(k,l) \to m(i,l).$$ On the other hand we have two other maps in $\Hom[m(i,j) \otimes m(j,k) \otimes m(k,l), m(i,l)]$: \begin{enumerate} \item $\gamma_2: m(i,j) \otimes m(j,k) \otimes m(k,l) \xrightarrow{\varphi \otimes \Id} m(i,k) \otimes m(k,l) \xrightarrow{\varphi} m(i,l)$ \item $\gamma_3: m(i,j) \otimes m(j,k) \otimes m(k,l) \xrightarrow{ \Id \otimes \varphi} m(i,j) \otimes m(j,l) \xrightarrow{\varphi} m(i,l)$ \end{enumerate} If we restrict these two maps to $\mathcal{F}_{ij}s \otimes \mathcal{F}_{jk} t \otimes \mathcal{F}_{kl}u$, we get the same compatible diagram; thus by uniqueness of the map out of the colimit we get that $\gamma_1= \gamma_2= \gamma_3$ and the coherence axiom follows. We leave it to the reader to check that the unit axiom also holds. \end{proof} We will abuse notation and write $\mcatx$ for the category of semi-$\mathscr{M}$-categories with fixed set of objects $X$. We endow $\mcatx$ with its canonical model structure (= projective).
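To summarize the construction: the adjoint transpose of a lax functor $\mathcal{F}$ is the locally constant lax functor (equivalently, the semi-$\mathscr{M}$-category) with hom-objects and composition $$m(i,j)= \colim \mathcal{F}_{ij}, \hspace{0.5in} \varphi: m(i,j) \otimes m(j,k) \to m(i,k),$$ where $\varphi$ is the map obtained above from the universal property of the colimit.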
A direct consequence of the previous lemma is that: \begin{cor} \begin{enumerate} \item We have a Quillen adjunction $$| |: \msxproj \leftrightarrows \mcatx: \iota$$ where $| |$ is left Quillen and $\iota$ is right Quillen. \item We also have a Quillen adjunction: $$| |: \msxex \leftrightarrows \mcatx: \iota$$ with $\iota$ still right Quillen. \end{enumerate} \end{cor} \begin{proof} Indeed $\mcatx$ is equivalent to $\cb_{\ast}\Lax[\sxop,\mathscr{M}]$. In both $\mcatx$ and $\msxproj$ (trivial) fibrations are level-wise and we get Assertion $(1)$. Assertion $(2)$ is obvious and follows from Theorem \ref{thm-new-model}. \end{proof} \begin{rmk} From the lemma we know that for any co-Segal pre-category $\mathcal{F}$ there is a semi-$\mathscr{M}$-category $|\mathcal{F}|$ which is the adjoint transpose of $\mathcal{F}$. The unit of the adjunction is a transformation $\sigma: \mathcal{F} \to |\mathcal{F}|$ of lax-morphisms. \end{rmk} A natural question is whether or not the canonical map $\sigma: \mathcal{F} \to |\mathcal{F}|$ is a weak equivalence. If this map is a weak equivalence then we will say that we have a (semi) strictification of $\mathcal{F}$. We treat this question in the next section. \section{Quasi-strictification} Given a co-Segal $\mathscr{M}$-category $\mathcal{F}$, a natural candidate to consider is $|\mathcal{F}|$ constructed previously. \ \\ The map $\sigma: \mathcal{F} \to |\mathcal{F}|$ will be a weak equivalence if and only if we can show that for every $(A,B)$, the canonical map $\mathcal{F}(A,...,B) \to \colim \mathcal{F}_{AB}$ is a weak equivalence. This problem can be formulated in general as follows. \begin{quest} Given a diagram $ \mathcal{F}: \J \to \mathscr{M} $ such that $\mathcal{F}(i) \to \mathcal{F}(j)$ is a weak equivalence for every morphism $i \to j$ of $\J$, is the canonical map $\mathcal{F}(i) \to \colim \mathcal{F}$ a weak equivalence?
\end{quest} The answer to that question is negative in general, as illustrated in the following example. \begin{ex} In the coequalizer hereafter, both parallel maps are weak equivalences, yet the canonical map $p: [0,1] \to S^1$ is not: \[ \xy (0,0)*++{1}="X"; (20,0)*++{[0,1]}="Y"; (40,0)*++{S^1}="Z"; {\ar@<-0.5ex>@{->}_-{0}"X";"Y"}; {\ar@<0.5ex>@{->}^-{1}"X";"Y"}; {\ar@{->}^{p}"Y";"Z"}; \endxy \] \end{ex} As we shall see in a moment there are some cases where we have an affirmative answer. More precisely we have: \begin{prop}\label{prop_weak_equiv} Let $\mathscr{M}$ be a model category and $\J$ be a Reedy category, with an initial object $e$.\\ Let $\mathcal{F}: \J \to \mathscr{M}$ be a Reedy cofibrant diagram such that for every morphism $i \to j$ of $\J$, the map $\mathcal{F}(i) \to \mathcal{F}(j)$ is a weak equivalence. Then every canonical map $\mathcal{F}(i) \to \colim\mathcal{F}$ is a weak equivalence. \end{prop} \begin{proof} By $3$-for-$2$ it is enough to show that $\mathcal{F}(e) \to \colim \mathcal{F}$ is a weak equivalence.\\ As $\J$ has an initial object, then automatically $\J$ has cofibrant constants in the sense of \cite[Def. 15.10.1]{Hirsch-model-loc}. Now if $\mathcal{F}$ is Reedy cofibrant then necessarily $\mathcal{F}(e)$ is cofibrant in $\mathscr{M}$ since the latching category of $\J$ at $e$ is empty. It follows that the constant diagram $\cb_{\ast}(\mathcal{F}(e))$ is Reedy cofibrant.\\ Now as $e$ is initial, we have a canonical natural transformation $\eta: \cb_{\ast}(\mathcal{F}(e)) \to \mathcal{F} $ which is a point-wise weak equivalence of Reedy cofibrant diagrams. Consequently taking the colimit preserves this weak equivalence, and thus $\mathcal{F}(e) \to \colim \mathcal{F}$ is a weak equivalence in $\mathscr{M}$. \end{proof} \subsubsection{Quasi-strictification} \begin{df} Say that a co-Segal category $\mathcal{C}: \sxop \to \mathscr{M}$ \textbf{has weak identities} if the semi-$\Ho(\mathscr{M})$-category $$\mathcal{C}: \sxop \to \mathscr{M} \to \Ho(\mathscr{M})$$ has identities.
\end{df} \begin{thm}\label{quasi-strict} Every \textbf{excellent} co-Segal $\mathscr{M}$-category with weak identities is \textbf{weakly co-Segal equivalent} to an $\mathscr{M}$-category with weak identities. \end{thm} The proof of the theorem is a direct application of Proposition \ref{prop_weak_equiv}. \begin{proof}[Proof of Theorem \ref{quasi-strict}] Let $\mathcal{F}: \sxop \to \mathscr{M}$ be an excellent co-Segal category with weak identities. Since having weak identities is stable under weak equivalence, we can assume that $\mathcal{F}$ is $\Ub$-cofibrant. As $\Ub(\mathcal{F})$ is projective cofibrant in $\prod_{(A,B) \in X^2} \Hom(\sx(A,B)^{op}, \mathscr{M})$, this means that each component $\mathcal{F}_{AB} \in \Hom(\sx(A,B)^{op}, \mathscr{M})_{proj}$ is projective cofibrant. Being projective cofibrant allows computing the homotopy colimit of $\mathcal{F}_{AB}$ as the usual colimit of $\mathcal{F}_{AB}$.\ \\ By Lemma \ref{lax-to-cat} we get a semi-$\mathscr{M}$-category $|\mathcal{F}|$ by declaring $|\mathcal{F}|(A,B):= \colim \mathcal{F}_{AB}$. $|\mathcal{F}|$ is a locally constant object of $\msx$ equipped with a canonical map $\sigma: \mathcal{F} \to |\mathcal{F}|$ in $\msx$. Thanks to Proposition \ref{prop_weak_equiv}, all canonical maps $\mathcal{F}(A,...,B) \to \colim \mathcal{F}_{AB}$ are weak equivalences; these maps are exactly the components of $\sigma: \mathcal{F} \to |\mathcal{F}|$, which means that $\sigma$ is a weak equivalence in $\msx$. \ \\ \ \\ $|\mathcal{F}|$ is a strict semi-category with a strict composition; it inherits the (weak) identities of $\mathcal{F}$ since $\sigma$ is a weak equivalence, and the theorem follows. \end{proof} We have an immediate consequence. \begin{cor} Let $\msx_{\Ub\text{-cof}}\hookrightarrow \msx$ be the full subcategory of $\Ub$-cofibrant co-Segal categories.
Then if all objects of $\mathscr{M}$ are cofibrant, the restricted adjunction $$| |: \msx_{\Ub\text{-cof}} \leftrightarrows \mcatx: \iota$$ induces an equivalence between the respective homotopy categories. \end{cor} \begin{proof}[Proof of the corollary] Let $\mathcal{F}$ be an $\Ub$-cofibrant co-Segal category and $\mathcal{A}$ be a category. Given a morphism $\sigma: \mathcal{F} \to \mathcal{A}$ in $\msx$, by adjunction we can factor that map as: $$ \mathcal{F} \to |\mathcal{F}| \xrightarrow{\sigma'} \mathcal{A} .$$ From the proof of the theorem we know that the canonical map $ \mathcal{F} \to |\mathcal{F}| $ is always a weak equivalence if $\mathcal{F}$ is an $\Ub$-cofibrant co-Segal category. Then by $3$-for-$2$, we get that $\sigma$ is a weak equivalence in $\msx$ if and only if $\sigma': |\mathcal{F}| \to \mathcal{A}$ is a weak equivalence in $\mcatx$. Moreover both functors $\iota$ and $||$ preserve weak equivalences and $|| \circ \iota= \Id$. The rest is just a categorical argument on localization of categories. \end{proof} The next move is to go from quasi-strictification to strictification. This is a general issue and will be discussed in full generality in a different work. The previous theorem has a weaker version using the model structure $\msxex$. \begin{thm}\label{weak-strict} Every co-Segal category in $\msxex$ is equivalent to a strict one. \end{thm} \begin{proof} Let $\mathcal{F}$ be a co-Segal category. Then up to a cofibrant replacement we can assume that $\mathcal{F}$ is cofibrant. Note that such a cofibrant replacement is only, a priori, partially co-Segal. By definition of the model structure on $\msxex$, since $\mathcal{F}$ is cofibrant, for $A \neq B$ the functor $$\mathcal{F}_{AB}: \sx(A,B)^{op} \to \underline{M}$$ is projective cofibrant and takes its values in the subcategory of weak equivalences (partial co-Segal conditions).
Then from Proposition \ref{prop_weak_equiv} we get that for $A\neq B$ all canonical maps $$\mathcal{F}(A,...B) \to |\mathcal{F}|(A,B)$$ are weak equivalences. This means that $\mathcal{F} \to |\mathcal{F}|$ is a weak equivalence in $\msxex$. \end{proof} \section{Commutative co-Segal monoids} \subsection{Preliminaries} Following Leinster \cite{Lei2}, we will denote by $\Phi$ the skeletal category of finite sets: its objects are the finite sets $n=\{ 0,...,n-1\}$ for each integer $n \geq 0$, and its morphisms are all functions. $\Phi$ has a monoidal structure given by disjoint union, which is a symmetric operation. So we have a symmetric monoidal category $(\Phi,+,0)$, where $+$ is the disjoint union and $0$ is the empty set. \ \\ Let $\Gamma$ be the category considered by Segal in \cite{Seg1}. The objects of $\Gamma$ are all finite sets, and a morphism from $S$ to $T$ is a morphism from $S$ to $P(T)$, the set of subsets of $T$.\ \\ Leinster \cite[Prop 3.1.1]{Lei2} pointed out a relationship between $\Phi$ and $\Gamma$ in the following proposition. \begin{prop} Let $\mathscr{M}= (\underline{M}, \times, 1)$ be a category with finite products. Then there is an isomorphism of categories: $$\scolax[(\Phi,+,0), (\underline{M}, \times, 1)] \cong [\Gamma^{op}, \underline{M}].$$ \end{prop} Here `$\scolax$' stands for symmetric colax monoidal functors. Following the above result and the Segal formalism, Leinster considered weak commutative algebras (or monoids) in a symmetric monoidal category $\mathscr{M}=(\underline{M},\otimes,I)$ having a subcategory of weak equivalences $\mathscr{W}$ which satisfies certain properties; in \cite{SEC1} we called the pair $(\mathscr{M},\mathscr{W})$ a \emph{base of enrichment}. The following definition is due to Leinster.
\begin{df} Let $\mathscr{M}=(\underline{M},\otimes,I)$ be a symmetric monoidal category with a subcategory $\mathscr{W}$ such that the pair $(\mathscr{M},\mathscr{W})$ is a base of enrichment.\ \\ A \textbf{homotopy commutative monoid} in $\mathscr{M}$ is a symmetric colax monoidal functor: $$ \mathcal{C}: (\Phi,+,0) \to \mathscr{M} $$ satisfying the Segal conditions: \begin{enumerate} \item for every $m,n \in \Phi$ the colaxity map $\mathcal{C}(n+m) \to \mathcal{C}(n) \otimes \mathcal{C}(m)$ is a weak equivalence; \item the map $\mathcal{C}(0) \to I$ is a weak equivalence. \end{enumerate} \end{df} \subsection{The co-Segal formalism} Colax diagrams are difficult to manipulate from a homotopical and categorical point of view. For example computing limits in the category $\scolax[(\Phi,+,0), (\underline{M}, \otimes, 1)]$ is not straightforward!\ \\ For this reason we will change colax to lax using the co-Segal formalism. Let $\phepi$ be the subcategory of $\Phi$ having the same objects but only the surjective morphisms. $\phepi$ is the `symmetric' companion of $\Depi$. It is easy to see that we have a symmetric monoidal subcategory $(\phepi,+,0) \subset (\Phi,+,0).$ We have an obvious (nonsymmetric) monoidal functor $$i: (\Depiop,+,0) \to (\phepiop,+,0).$$ \begin{df} Let $\mathscr{M}=(\underline{M},\otimes,I)$ be a symmetric monoidal category with a subcategory $\mathscr{W}$ such that the pair $(\mathscr{M},\mathscr{W})$ is a base of enrichment.\ \\ A \textbf{commutative co-Segal semi-monoid} in $\mathscr{M}$ is a normal \textbf{symmetric lax monoidal} functor: $$ \mathcal{C}: (\phepiop,+,0) \to \mathscr{M} $$ such that for every map $f:n \to m$ of $\phepi$, the structure map $$ \mathcal{C}(m) \to \mathcal{C}(n)$$ is a weak equivalence. \ \\ A \textbf{commutative co-Segal monoid} is a commutative co-Segal semi-monoid $\mathcal{C}$ such that the induced diagram $$i^{\star}\mathcal{C}: (\Depiop,+,0) \to \mathscr{M}$$ is a co-Segal monoid.
\end{df} If $\mathcal{C}$ is a commutative co-Segal monoid, then, as in the noncommutative case, the monoid structure lives on the object $\mathcal{C}(1)$. The commutative quasi-multiplication is obtained as before.\\ A direct consequence of the result of the previous section is: \begin{thm}\label{quasi-strict-com} Every commutative excellent co-Segal monoid is \textbf{weakly co-Segal equivalent} to a commutative one which is strictly associative. \end{thm}
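Let us spell out the quasi-multiplication in this commutative setting. Writing $\varphi$ for the laxity map of $\mathcal{C}$ and using the structure map associated to the surjection $2 \to 1$ of $\phepi$, which is a weak equivalence by the definition above, one gets the zigzag $$\mathcal{C}(1) \otimes \mathcal{C}(1) \xrightarrow{\varphi} \mathcal{C}(2) \xleftarrow{\sim} \mathcal{C}(1),$$ which induces a genuine commutative multiplication on $\mathcal{C}(1)$ in $\Ho(\mathscr{M})$.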
\section{Introduction} The coherent optical response, which results after resonant excitation of quantum emitters with multiple optical pulses, carries rich information about the energy structure and dynamical properties of the studied system~\cite{2DFS-Cundiff,FWM-Langbein}. Moreover, it can be used for applications in quantum memories where light-matter interaction is used to store and retrieve optical fields in the form of photon echoes (PE)~\cite{Moiseev-Memory, Lvovsky-Memory, Tittel-Memory}. In solid-state systems based on color centers and rare earth ions, significant progress has been achieved in that respect~\cite{Gisin-2011, ROSE-2011, Faraon-2017,You-zhi-2021, Moiseev-ROSE, Tittel-2021}. Yet, the search for new systems where similar or alternative approaches can be pursued on much faster time scales is of great interest~\cite{Sussman-Molecules, Sussman-Diamond, Langer-2012, Langer-2014, Kosarev-2020}. Excitons in semiconductor nanostructures can be addressed resonantly by sub-ps optical pulses on very short timescales, enabling access to exceptionally high bandwidths, but unavoidably leading to a short radiative lifetime, which imposes a limitation on the optical storage time. One of the solutions is to use the spin degrees of freedom of resident electrons in semiconductors, which makes it possible to extend the timescale of the coherent optical response by several orders of magnitude~\cite{Langer-2014,Salewski-2017}. The demonstration of this concept has been achieved for localized charged excitons in CdTe/(Cd,Mg)Te quantum well structures and donor-bound excitons in bulk ZnO crystals~\cite{FTT-review-2018}. It is based on resonant excitation of the donor-bound exciton $D^{0}X$ or negatively charged exciton (trion) $X^{-}$ with a sequence of three resonant optical pulses in the presence of a transverse magnetic field \cite{Langer-2014}.
This allows one to transfer the optical coherence of trions into the electron spin coherence of resident electrons with a significantly longer relaxation time. For realistic quantum memory protocols it is necessary to apply resonant optical pulses with an area of $\pi$, i.e. to perform robust Rabi flops. This is very difficult in semiconductor quantum wells and bulk crystals due to the strong damping of Rabi oscillations by excitation-induced dephasing~\cite{Langbein-2005,Poltavtsev-2019}. Moreover, weakly localized resident carriers hop between the localization sites, which leads to an additional loss of coherence~\cite{Kosarev-2019}. Therefore, it is advantageous to use quantum dots (QDs) with a strong localization potential, which ensures robust coherence properties~\cite{Langbein-2001,Cundiff-2016,Poltavtsev-2016,Kasprzak-2018,Kosarev-2020}. Experiments with an ensemble of QDs are challenging due to the strong inhomogeneous broadening of optical transitions. In quantum wells the optical transitions for the exciton and the trion are spectrally separated and, therefore, they can be selectively addressed by proper choice of the photon energy of excitation. This selectivity is not available in a QD ensemble, which imposes serious restrictions on the observation and subsequent application of the photon echo retrieved from resident electrons. Therefore, the demonstration of long-lived spin-dependent echoes in QDs has remained unresolved. In this work, we demonstrate that in spite of strong inhomogeneous broadening it is possible to perform a robust transfer between the optical and spin coherence and to observe long-lived spin-dependent photon echoes (LSPE) in an ensemble of charged self-assembled QDs in a moderate transverse magnetic field. Moreover, in self-assembled (In,Ga)As/GaAs QDs the Zeeman splitting of the hole is of the same order of magnitude as that of the electron. We demonstrate that the heavy-hole splitting has a strong impact on the formation of the three-pulse LSPE.
In order to understand and properly describe the dynamics of the LSPE in self-assembled QDs and its dependence on magnetic field, we develop a model that accounts for both the electron and heavy-hole Zeeman splittings. \medskip \section{Sample and experiment} The studied sample ($\#$ 14833) was grown by molecular beam epitaxy. It consists of 4 layers of n-doped (In,Ga)As QDs in a GaAs matrix, embedded in the antinodes of the standing wave electric field in the microcavity. The QDs in each layer have a density of about 10$^{10}$~cm$^{-2}$. The resident electrons were supplied to the QDs by introducing $\delta$-doping with Si donors at a distance of 64.5~nm below each QD layer. After the epitaxial growth, the sample was annealed at a temperature of 900$^\circ$C to reduce the inhomogeneous broadening of the optical transitions. The QD emission is represented by the photoluminescence (PL) spectrum in Fig.~\ref{fig1}(a), which was measured from the edge of the sample in order to avoid the impact of the cavity (blue line). The PL maximum at the photon energy of 1.4355~eV corresponds to the radiative recombination of excitons from the lowest confined energy state, while a weak shoulder at higher energies around 1.45~eV is apparently related to the emission from the first excited exciton states. The width of the PL line reflects the magnitude of inhomogeneous broadening of the optical transitions, with a full width at half maximum (FWHM) of 10~meV. The 5/2$\lambda$ microcavity is formed by 11 and 14 pairs of GaAs/AlAs layers in the top and bottom distributed Bragg reflectors, respectively, having a gradient axis in the plane of the sample along which the energy of the photonic mode can be tuned. All experiments were performed in the sample area where the photon energy of the cavity mode is in resonance with the emission peak of the QDs. The corresponding transmission spectrum with a band centered at 1.434~eV and FWHM of 1.4~meV is shown by the red line in Fig.~\ref{fig1}(a).
Using a microcavity with a quality factor $Q\sim$1000 facilitates the efficient generation of a non-linear coherent optical signal due to the significant increase of the light-matter interaction~\cite{Fras-2016,Poltavtsev-2016,Salewski-Tamm-2017}. \begin{figure*}[hbt!] \center{\includegraphics[width=14cm]{Figure1.pdf}} \caption{{\bf Schematic representation of the experimental technique and the sample.} (a) Spectra of the sample PL, transmission and the laser. Temperature $T=6$~K. The PL spectrum is shown for lateral emission from the edge of the sample in the direction parallel to its plane, i.e. along the $x$-axis. (b) Sketch of the photon echo experiment. (c) Blue line shows the transient FWM signal measured in the $\mathbf k_{\rm S} = 2 \mathbf k_{2} - \mathbf k_1$ direction for $\tau_{12}= 33.3$~ps and $\tau_{23}=100$~ps. The signal is represented by the two-pulse PE (2PE) at 67~ps and the three-pulse PE (3PE) at 167~ps. The three peaks with filled area show the temporal positions of the excitation laser pulses. Labels on top correspond to the polarization of excitation and detection in the HVVH configuration.} \label{fig1} \end{figure*} The sample is mounted in a liquid helium bath magneto-optical cryostat and cooled down to a temperature $T=2$~K unless stated otherwise. Laser pulses with a duration of 2.5~ps, emitted at a repetition rate of 75.75~MHz, were generated by a tunable mode-locked Ti:Sapphire oscillator. The spectral width of the laser pulses with FWHM of 0.5~meV is approximately three times narrower than the photonic mode of the cavity, i.e. the excitation pulses are not distorted by the cavity [see the green curve in Fig.~\ref{fig1}(a)]. The magnetic field $\mathbf{B}$ is applied parallel to the sample plane.
Photon echoes are generated by a sequence of laser pulses focused into a spot of 250 $\mu$m and entering the sample under incidence close to normal with wavevectors $\mathbf k_i$ ($i$ is the pulse number, $\mathbf k_2=\mathbf k_3$), as shown in Fig.~\ref{fig1}(b). The pulse energy of $\mathcal{P}=5$~pJ corresponds to a pulse area of about $\pi$. The resulting transient four-wave mixing (FWM) signal is detected in reflection geometry in the direction of $\mathbf k_{\rm S} = 2 \mathbf k_{2} - \mathbf k_1$ using heterodyne detection~\cite{FWM-Langbein, FTT-review-2018}. The time-resolved electric field amplitude of the FWM signal is shown in Fig.~\ref{fig1}(c) by the blue line for $\tau_{12}= 33.3$~ps and $\tau_{23}=100$~ps, where $\tau_{ij}$ is the time delay between pulses $i$ and $j$ in the sequence. Two- and three-pulse echoes are observed at times $t=2\tau_{12}$ (2PE) and $t=2\tau_{12}+\tau_{23}$ (3PE), respectively. They are well described by Gaussian peaks with a FWHM of about 10~ps, which is mainly determined by the spectral width of the excitation pulses~\cite{Kosarev-2020}. In what follows we use the magnitude of the electric field amplitude at the PE peak maximum $|P_{\rm PE}|$ to characterize the strength of the photon echo signal. In order to address various spin configurations, we use different linear polarization schemes in the excitation and detection paths. The direction of polarization is assigned with respect to the magnetic field direction, i.e. H and V polarizations are parallel and perpendicular to $\mathbf{B}$, respectively. The polarization scheme is labeled as $ABD$ or $ABCD$ for two- or three-pulse echoes. Here, the first two ($AB$) or three ($ABC$) letters indicate the linear polarizations of the optical pulses in the excitation sequence and the last letter ($D$) corresponds to the polarization direction in the detection, e.g. the data in Fig.~\ref{fig1}(c) are taken in the HVVH polarization configuration.
In the case of the two-pulse PE, we used areas of pulses 1 and 2 corresponding approximately to $\pi /2$ and $\pi$, respectively. As for the three-pulse PE experiment, we used a sequence of three $\pi /2$ pulses. \section{Photon echo from trions in QDs} In order to observe long-lived spin-dependent echoes it is necessary to address trion $X^-$ (charged exciton) complexes, which correspond to the elementary optical excitation in a charged QD. The energy spectrum in a charged QD can be well described by a four-level energy scheme with Kramers doublets in the ground and excited states at $B=0$, which are determined by the spin of the resident electron $S=1/2$ and the angular momentum of the heavy hole $J=3/2$, as shown in Fig.~\ref{fig2}(a). In contrast to the exciton in a neutral QD, this four-level scheme allows establishing optically induced long-lived spin coherence in the ground state~\cite{Salewski-2017}. \begin{figure}[hbt!] \center{\includegraphics[width=7cm]{Figure2.pdf}} \caption{ {\bf Photon echo from trions at zero magnetic field.} (a) Energy level diagram and optical transitions for the trion $X^{-}$. (b) Polar plots of the two-pulse PE amplitude in HRH and HRV polarization configurations at $t = 2 \tau_{12} = 132$ ps as a function of the polarization angle $\varphi_2$ of the second pulse. (c) Decay of the two- and three-pulse PE as functions of $2\tau_{12}$ and $\tau_{23}$. In the three-pulse PE the delay time $\tau_{12}= 33.3$~ps. The two-pulse PE$_{12}$ decays exponentially with $T_2$ = 0.45 ns (blue circles). The three-pulse PE$_{123}$ shows an exponential decay with the short time constant $T_1$ = 0.26 ns superimposed on a long-lived offset. Dashed red curves show the corresponding exponential fits. } \label{fig2} \end{figure} Although the photon energies for resonant excitation of trion and exciton ($X$) complexes are different in one and the same QD, it is not possible to perform selective excitation of only the charged QDs by a proper choice of the photon energy.
This is due to the strong degree of inhomogeneous broadening of the optical transitions in the QD ensemble, which is considerably larger than the energy difference between the $X$ and $X^-$ resonances. It is, however, possible to distinguish between exciton and trion contributions using polarimetric measurements of the photon echo signal~\cite{Cundiff-Pola-2015,Poltavtsev-Pola-2019}. Figure~\ref{fig2}(b) shows polar plots of the two-pulse PE magnitude measured at $\tau_{12}=66$~ps using HRH and HRV polarization schemes. The diagrams are obtained by rotation of the polarization direction of the second pulse (R-polarization) by the angle $\varphi_2$ with respect to the H polarization. In both polarization schemes, the signal is represented by rosettes with fourth harmonic periodicity when the angle $\varphi_2$ is scanned. Such behavior corresponds to the PE response from trions, where the PE is linearly polarized with the angle $\varphi_{\rm PE} = 2\varphi_2$ and the PE amplitude is independent of $\varphi_2$~\cite{Poltavtsev-Pola-2019}. In the case of the neutral exciton the polar plot is different because the PE signal is co-polarized with the second pulse ($\varphi_{\rm PE} = \varphi_2$) and its amplitude follows $|\cos\varphi_2|$. We note that the small increase of the PE amplitude by about 15\% in HHH as compared to HVH remains the same under rotation of the sample around the $z$-axis, which excludes an anisotropy of the dipole matrix elements in the $xy$-plane as a possible origin of the asymmetry (see the blue pattern in Fig.~\ref{fig2}(b)). The difference could arise from a weak contribution of neutral excitons, because in the HRH configuration the PE from trions gives a four-lobe pattern $\propto|\cos2\varphi_2|$ while for excitons it corresponds to a two-lobe pattern $\propto\cos^2\varphi_2$. Finally, we conclude that, independent of the polarization scheme, the main contribution to the coherent optical response at the photon energy of 1.434~eV in the studied sample is attributed to trions.
This demonstration is very important for the proper interpretation of the results because long-lived spin-dependent echoes can be observed only in charged QDs. Moreover, it has a large impact on applications in quantum memory protocols where high efficiency is required. We evaluate the optical coherence time $T_2$ and the population lifetime $T_1$ of trions in QDs from the decay of the PE amplitude of the two- and three-pulse echoes, respectively. The data measured at $B=0$ in HHH polarization are shown in Fig.~\ref{fig2}(c). In the case of the 2PE, the amplitude is scanned as a function of $2\tau_{12}$ (blue points), while for the 3PE the dependence on $\tau_{23}$ is shown (green points). The exponential fit of the two-pulse echo $|P_{\rm 2PE}| \propto \exp{(-2\tau_{12}/T_2)}$ gives $T_{\rm 2}$ = 0.45~ns, which is in agreement with previous studies of (In,Ga)As/GaAs QDs \cite{Langbein-2001,Poltavtsev-2016,Kosarev-2020}. The decay of the 3PE has a more complex structure. At short delay times, its magnitude decays exponentially with a time constant of $T_1=0.26$~ns, which we attribute to the trion lifetime $\tau_r$. However, the signal does not decay to zero and shows a small offset with a magnitude of about 5\% of the initial amplitude at long delay times $t>1$~ns. This weak signal is governed by the dynamics of the population grating in the ground state of the QD ensemble and can arise from several mechanisms, which are beyond the scope of this paper. We note that $T_{\rm 2} \approx 2 T_{\rm 1}$ indicates that the loss of optical coherence under resonant excitation of trions is governed by their radiative recombination. \section{Long-lived spin-dependent photon echo in QDs} Application of the transverse magnetic field ($\mathbf{B}||\mathbf{x}$) leads to Zeeman splitting of the Kramers doublets in the ground resident electron and optically excited trion states.
The electron spin states with spin projections $S_x=\pm1/2$ are split by $\hbar\omega_e = g_e\mu_B B$, while the trion states with angular momentum projections $J_x=\pm3/2$ are split by $\hbar\omega_h=g_h\mu_B B$. Here, $\omega_e$ and $\omega_h$ are the Larmor precession frequencies of the electron and heavy-hole spins, $g_e$ and $g_h$ are the electron and hole $g$ factors, and $\mu_B$ is the Bohr magneton. Optical transitions between all four states are allowed using light with H or V linear polarization, as shown in Fig.~\ref{fig2}(a). The energy structure can be considered as composed of two $\Lambda$ schemes sharing common ground states. The magnetic field induces an asymmetry between these two $\Lambda$ schemes, allowing one to transfer the optical coherence induced by the first optical pulse into the spin coherence by application of the second optical pulse \cite{Langer-2014, Salewski-2017}. Thus, a sequence of two linearly polarized pulses can be used to initialize a spin grating in the ground and excited states. The addressed spin components depend on the polarization of the exciting pulses. For the linearly co-polarized HH sequence the spin components along the magnetic field direction are addressed (see Eq.~35 in the supplementary material) \begin{equation} \label{eq:Spins-HH} S_x = - J_x \propto \sin \left( \frac{\omega_e-\omega_h }{2} \tau_{12} \right)\exp \left( - \frac{\tau_{12}}{T_2} \right) \cos \left( \omega_0\tau_{12} \right). \end{equation} In the case of the cross-polarized HV sequence the spin grating is produced in the plane perpendicular to the magnetic field direction (see Eqs.~36 and 37 in the supplementary material) \begin{equation} \label{eq:Spins-HV} \begin{split} S_y + iS_z= J_y - iJ_z & \propto i \exp \left( i \frac{\omega_e-\omega_h }{2} \tau_{12} \right) \\ &\times \exp \left( - \frac{\tau_{12}}{T_2} \right) \cos \left( \omega_0\tau_{12} \right).
\end{split} \end{equation} The spectral gratings appear due to the inhomogeneous broadening of the optical resonance frequencies $\omega_0$. The evolution of the spin gratings for trions and resident electrons is governed by their population and spin dynamics. The hole spin grating lifetime is limited by the trion lifetime. The electron spin grating in the ground state is responsible for the long-lived spin-dependent echo which appears if the third pulse is applied~\cite{Langer-2014}. The decay of the LSPE as a function of $\tau_{23}$ is governed by the spin dynamics of the resident electrons. The HHHH and HVVH polarization schemes give access to the longitudinal $T_{\rm 1,e}$ and transverse $T^*_{\rm 2,e}$ spin relaxation times, respectively. In the studied (In,Ga)As/GaAs QDs the value of $g_h=0.18$ is of the same order of magnitude as the electronic $g$-factor $g_e=-0.52$ \cite{Trifonov-Arxiv}. Therefore, it has to be taken into account, in contrast to previous studies where the Zeeman splitting in the trion state was neglected. In addition, it should be noted that the PE signal depends sensitively on the orientation of the crystallographic axes with respect to the magnetic field direction due to the strongly anisotropic in-plane $g$-factor of the hole in semiconductor quantum wells and QDs~\cite{Poltavtsev-PRR2020, Trifonov-Arxiv}. In our studies, the sample was oriented with the [110] crystallographic axis parallel to $\mathbf{B}$, which corresponds to the case when the H- and V-polarized optical transitions have the photon energies of $\hbar\omega_0\pm\hbar(\omega_e-\omega_h)/2$ and $\hbar\omega_0\pm\hbar(\omega_e+\omega_h)/2$, respectively. \begin{figure*}[hbt!] \includegraphics[width=14cm]{Figure3.pdf} \center \caption{{\bf Long-lived spin dependent photon echo in QDs.} (a) Amplitude of the three-pulse PE as a function of $\tau_{23}$ for $\tau_{12}=66$~ps. The data are taken in HHHH and HVVH polarization schemes at $B=$0.3~T and 0.1~T, respectively.
(b) Magnetic field dependence of the LSPE for $\tau_{12} $ = 100 ps and $\tau_{23} $ = 2.033 ns. Top and bottom curves correspond to the signal measured in HHHH and HVVH polarization schemes, respectively. Red lines present the results of the theoretical modeling using Eqs.~\ref{eq:signal-HHHH} and \ref{eq:signal-HVVH} with the following parameters: $g_e= -0.516$, $g_h=0.18 $, $T_T=\tau_r=T_1=0.26$~ns, $T_{1,e}=23$~ns, $T_{2,e}^*$ is evaluated from $T_{2,e}=4.3$~ns and $\Delta g_e = 0.004$ using Eq.~\ref{eq:T(B)} (as follows from Fig.~\ref{fig4}(a)). The signals in HHHH polarization are shifted for clarity with the dashed line corresponding to the zero signal level.} \label{fig3} \end{figure*} The three-pulse PE amplitude as a function of delay time $\tau_{23}$ and magnetic field $B$ is shown in Fig.~\ref{fig3}. In full accord with our expectations, we observe that the application of a moderate magnetic field $B<1$~T drastically changes the dynamics of the three-pulse PE. In the HHHH polarization scheme a large offset emerges which decays on a timescale significantly longer than the repetition period of the laser pulses, i.e. $T_{\rm 1,e}\gg10$~ns. The short decay with the time constant $T_1=0.26$~ns, which is also present at $B=0$, is associated with the trion lifetime. In the HVVH polarization scheme, a long-lived oscillatory signal appears which is attributed to the Larmor spin precession of the resident electrons and decays exponentially with $T^*_{\rm 2,e}$. At shorter delays, the signal behavior is more complex due to the superposition of spin-dependent signals from trions and resident electrons. Further insight can be obtained from the magnetic field dependence of the LSPE signal, which is measured at the long delay $\tau_{23}=2.033$~ns where the contribution from trions to the three-pulse PE is negligible, see Fig.~\ref{fig3}(b). The delay time $\tau_{12}$ is set to 100~ps, which is shorter than the optical coherence time $T_2$.
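To put the field scale of the measurements into perspective, the Larmor precession periods implied by the quoted $g$-factors can be estimated from $\hbar\omega = |g|\mu_B B$. The following Python sketch is our own back-of-the-envelope estimate, not code from the paper; it evaluates the electron and heavy-hole precession periods at $B=0.5$~T:

```python
# Back-of-the-envelope sketch (not from the paper): Larmor precession
# periods implied by the quoted g-factors, using hbar*omega = |g|*mu_B*B.
import math

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def larmor_omega(g, B):
    """Angular Larmor frequency (rad/s) for g-factor g at field B (tesla)."""
    return abs(g) * MU_B * B / HBAR

B = 0.5  # tesla, within the moderate-field range used in the experiment
omega_e = larmor_omega(-0.516, B)  # resident electron
omega_h = larmor_omega(0.18, B)    # heavy hole

# Precession periods in picoseconds
T_e = 2 * math.pi / omega_e * 1e12
T_h = 2 * math.pi / omega_h * 1e12
print(f"electron: {T_e:.0f} ps, hole: {T_h:.0f} ps")
```

At 0.5~T the electron precession period comes out at roughly 0.3~ns, so several precession periods fit into the delay $\tau_{23}=2.033$~ns, consistent with the fast oscillations observed in the HVVH trace.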
At zero magnetic field, the PE is absent in the HVVH polarization scheme and shows only a very weak amplitude in the HHHH configuration. An increase of the magnetic field leads to the appearance of the LSPE in both polarization configurations. For HHHH we observe a slow oscillation which is governed by the Larmor precession of both electron and hole spins during $\tau_{12}$ when the spin grating is initialized by the sequence of two pulses. In the HVVH scheme the LSPE oscillates much faster because it is mainly determined by the Larmor precession of the resident electron spins during $\tau_{23}$, which is roughly 20 times longer than $\tau_{12}$. In order to describe the experimental results quantitatively, we extended the theory from Ref.~\cite{Langer-2014} by taking into account both the electron and heavy-hole Zeeman splittings (for details see the supplementary material). We analytically solve the Lindblad equation for the $4 \times 4$ density matrix to describe the temporal evolution between the first and second pulses for $0<t<\tau_{12}$ and after the third pulse for $t>\tau_{12}+\tau_{23}$. The spin dynamics of trions and electrons in the external magnetic field for $\tau_{12}<t<\tau_{12}+\tau_{23}$ is described by the Bloch equations. The three-pulse PE amplitude in the HHHH scheme is given by \begin{equation} \label{eq:signal-HHHH} \begin{split} P_{\rm HHHH} \propto \mathrm{e}^{-\frac{2 \tau_{12}}{T_2}} \Big[ 2 & \mathrm{e}^{-\frac{\tau_{23}}{\tau_r}} \cos^2{\left(\frac{\omega_e-\omega_h}{2} \tau_{12}\right) } + \\ & \mathrm{e}^{-\frac{\tau_{23}}{T_T}}\sin^2{\left(\frac{\omega_e-\omega_h}{2} \tau_{12} \right)} + \\ & \mathrm{e}^{-\frac{\tau_{23}}{T_{1e}}}\sin^2{\left(\frac{\omega_e-\omega_h}{2} \tau_{12} \right )} \Big] \end{split} \end{equation} Here $T_T$, with $T_T^{-1}= \tau_r^{-1} + T_h^{-1}$, is the spin lifetime of the trion.
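The $\sin^2[(\omega_e-\omega_h)\tau_{12}/2]$ factors in Eq.~\ref{eq:signal-HHHH} make the HHHH signal oscillate slowly as a function of magnetic field. A short Python sketch (our own estimate using the parameter values quoted in this paper, not part of its analysis) gives the field period over which this factor repeats for $\tau_{12}=100$~ps:

```python
# Sketch (our own estimate, not from the paper): field period of the slow
# sin^2 oscillation in the HHHH signal, set by (omega_e - omega_h)*tau_12/2.
import math

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

g_e, g_h = -0.516, 0.18
tau_12 = 100e-12   # s, as in the field-dependence measurement

# sin^2 repeats when its argument advances by pi:
# |g_e - g_h| * mu_B * dB * tau_12 / (2 * hbar) = pi
dB = 2 * math.pi * HBAR / (abs(g_e - g_h) * MU_B * tau_12)
print(f"HHHH oscillation period: {dB:.2f} T")
```

The resulting period of about 1~T is consistent with the single slow oscillation of the HHHH signal over the measured field range.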
For moderate magnetic fields $B\le 1$~T we can assume that the spin relaxation time of the hole in QDs, $T_{h}$, is significantly longer than $\tau_r$ and, therefore, in our case $T_T=\tau_r$~\cite{Greilich-2006}. The first and second terms on the right-hand side correspond to the trion contribution, while the last term is due to the LSPE from the resident electrons. For HVVH polarization we obtain \begin{equation} \label{eq:signal-HVVH} \begin{split} P_{\rm HVVH} \propto \mathrm{e}^{-\frac{2 \tau_{12}}{T_2}} \big[ & \mathrm{e}^{-\frac{\tau_{23}}{T_T}} r_h \cos{(\omega_h \tau_{23}-(\omega_e-\omega_h)\tau_{12}-\phi_h)} \\ + & \mathrm{e}^{-\frac{\tau_{23}}{T^*_{2,e}}} r_e \cos{(\omega_e \tau_{23}+(\omega_e-\omega_h)\tau_{12}-\phi_e)}\big] \end{split} \end{equation} where for simplicity we introduce the following parameters: the phases $\phi_h$, $\phi_e$ and the amplitudes $r_h$ and $r_e$. The subscripts $h$ and $e$ correspond to the trion and electron contributions, which are given by the first and second terms on the right-hand side of Eq.~\ref{eq:signal-HVVH}, respectively. The parameters are given by Eqs.~55-57 in the supplementary material. They are determined by the Larmor precession frequencies $\omega_e$ and $\omega_h$, the delay time $\tau_{12}$, and the trion lifetime $\tau_r$. The $g$-factors of electrons and holes are known from previous studies~\cite{Kamenskii-2020, Trifonov-Arxiv}. Therefore, the only unknown parameter is the spin dephasing time of the resident electrons $T_{2,e}^*$. Note that if the $g$-factors of electrons and holes are unknown, they can be used as additional fitting parameters in the description below. In order to determine $T_{2,e}^*(B)$, we fit the transient signals in HVVH polarization for different magnetic fields, as exemplified by the transient at $B$ = 0.1~T in Fig.~\ref{fig3}(a). For the LSPE, when $\tau_{23}\gg\tau_r\approx T_2/2$, only the second term in Eq.~\ref{eq:signal-HVVH} remains, which simplifies the fitting procedure. Three parameters of the LSPE signal, i.e.
decay rate $1/T_{2,e}^*$, amplitude $r_e$, and phase $\phi_e$, were extracted from the fit and are plotted as blue dots in Fig.~\ref{fig4} as a function of the magnetic field. It follows from Fig.~\ref{fig4}(a) that the spin dephasing rate increases linearly with increasing $B$. Such behavior is well established in ensembles of QDs and is related to the fluctuations of the electron $g$-factor among different QDs~\cite{Greilich-2006}. It can be described as \begin{equation} \label{eq:T(B)} \hbar/T^*_{\mathrm 2,e} = \hbar/T_{\mathrm 2,e} + \Delta g_e \mu_B B , \end{equation} where $T_{\mathrm 2,e}$ is the transverse spin relaxation time and $\Delta g_e$ is the inhomogeneous broadening of the electron $g$-factor. The linear fit with this expression, shown in Fig.~\ref{fig4}(a) by the red dashed line, gives $T_{\mathrm 2,e} = 4.3$~ns and $ \Delta g_e = 4 \times 10^{-3}$. \begin{figure}[hbt!] \center{\includegraphics[width=7cm]{Figure4.pdf}} \caption{{\bf Magnetic field dependence of LSPE.} Magnetic field dependence of the main parameters (dephasing rate, phase and amplitude) of the three-pulse LSPE signal evaluated from the LSPE transients $P_{\rm HVVH}(\tau_{23})$ measured at different $B$. (a) Spin dephasing rate $\hbar/T^*_{\mathrm 2,e}$; (b) phase $\phi_e$; (c) amplitude $r_e$. Blue points correspond to the data resulting from the fit using the last term on the right hand side in Eq.~\ref{eq:signal-HVVH}. Red dashed line in (a) is a linear fit using Eq.~\ref{eq:T(B)} with $T_{2,e}=4.3$~ns and $\Delta g_e = 4\times 10^{-3}$. Red solid lines in (b) and (c) show the magnetic field dependences of $\phi_e$ and $r_e$ given by the analytic expressions from the supplementary material. } \label{fig4} \end{figure} The phase $\phi_e$ in Fig.~\ref{fig4}(b) starts from $-0.8$~rad in magnetic fields below 0.1~T and approaches zero in fields above 0.8~T. The amplitude $r_e$ in Fig.~\ref{fig4}(c) gradually rises with an increase of $B$ up to 0.4~T and remains constant at larger magnetic fields.
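As an illustration of Eq.~\ref{eq:T(B)}, the following sketch (not code from the paper; it simply evaluates the fitted parameters $T_{2,e}=4.3$~ns and $\Delta g_e=4\times10^{-3}$ quoted above) shows how strongly the ensemble dephasing time shortens with field:

```python
# Sketch evaluating the linear field dependence of the dephasing rate,
# hbar/T2* = hbar/T2e + dg_e*mu_B*B, with the fitted parameters from the text.
MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

T_2e = 4.3e-9      # transverse spin relaxation time, s
dg_e = 4e-3        # inhomogeneous broadening of the electron g-factor

def T2_star(B):
    """Ensemble spin dephasing time (s) at field B (tesla)."""
    return HBAR / (HBAR / T_2e + dg_e * MU_B * B)

for B in (0.0, 0.5, 1.0):
    print(f"B = {B:.1f} T: T2* = {T2_star(B) * 1e9:.2f} ns")
```

Already at 1~T the $g$-factor inhomogeneity reduces $T^*_{2,e}$ from 4.3~ns to below 2~ns.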
We calculate the magnetic field dependences of the amplitude and phase of the LSPE using Eqs.~56 and 57 from the supplementary material, respectively, with $g_e= - 0.516$, $g_h= 0.18$, $T_T=\tau_r= 0.26$~ns and $\tau_{12}=33.3$~ps. The resulting curves are shown by the red solid lines in Fig.~\ref{fig4} and are in excellent agreement with the experimental data. We note that in the limit of large magnetic fields, which corresponds to the condition $|\omega_e-\omega_h|\tau_{12} \gg 1$, the amplitude of the LSPE saturates ($r_e \rightarrow 1$) and the phase of the signal approaches zero ($\phi_e \rightarrow 0$), which gives the simple expression $P_{\rm HVVH}\propto \cos[\omega_e\tau_{23}+(\omega_e-\omega_h)\tau_{12}]$ for the long-lived signal at $\tau_{23} \gg \tau_r$. We emphasize that this expression takes into account the non-zero $g$-factor of the hole $g_h$, which plays an important role in the formation of the LSPE signal. After the evaluation of $T_{2,e}^*(B)$, we can reproduce the LSPE signals as functions of $\tau_{23}$ and $B$ using Eqs.~\ref{eq:signal-HHHH} and \ref{eq:signal-HVVH}, shown by the red curves in Fig.~\ref{fig3} in both HHHH and HVVH polarization configurations. Here, the longitudinal spin relaxation can be neglected because $T_{1,e}$ strongly exceeds $\tau_{23}$. Excellent agreement is obtained at all time delays and magnetic fields. We note that the small discrepancies in the HHHH polarization configuration at magnetic fields around 0 and 1~T are attributed to the presence of a weak background signal, possibly due to a population grating in the ground states, as previously discussed for the case of Fig.~\ref{fig2}(c). Importantly, the HVVH configuration, which corresponds to a fully coherent transformation between optical and spin coherence, is free from any background.
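The large-field limiting form of the long-lived HVVH signal given above can be evaluated numerically. The sketch below is our own illustration with the parameters quoted in the text (we additionally assume the exponential decay with $T^*_{2,e}$ discussed earlier); it estimates the field period of the fast oscillation seen in the HVVH trace:

```python
# Sketch (our own numerical illustration, not from the paper) of the
# large-field limit P_HVVH ~ exp(-tau_23/T2*) * cos(w_e*tau_23 + (w_e-w_h)*tau_12),
# with T2*(B) taken from the linear dephasing-rate dependence.
import math

MU_B, HBAR = 9.274e-24, 1.0546e-34  # Bohr magneton (J/T), hbar (J*s)
g_e, g_h = -0.516, 0.18
T_2e, dg_e = 4.3e-9, 4e-3

def lspe_hvvh(B, tau_12, tau_23):
    """Long-lived HVVH amplitude in the large-field limit (arbitrary units)."""
    w_e = g_e * MU_B * B / HBAR
    w_h = g_h * MU_B * B / HBAR
    T2s = HBAR / (HBAR / T_2e + dg_e * MU_B * B)
    return math.exp(-tau_23 / T2s) * math.cos(w_e * tau_23 + (w_e - w_h) * tau_12)

# Field period of the fast oscillation at tau_23 = 2.033 ns, dominated by the
# omega_e * tau_23 term: dB ~ 2*pi*hbar / (|g_e| * mu_B * tau_23)
dB = 2 * math.pi * HBAR / (abs(g_e) * MU_B * 2.033e-9)
print(f"fast-oscillation field period: {dB * 1e3:.0f} mT")
```

A period of several tens of mT explains why the HVVH signal oscillates roughly an order of magnitude faster in $B$ than the HHHH signal.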
\section{Conclusions} In conclusion, we have demonstrated that the spin degrees of freedom can be used for a substantial temporal extension of the coherent optical response in self-assembled quantum dots, which has important implications for applications of this system in quantum memory devices with high bandwidth. In particular, we show that in spite of the strong inhomogeneous broadening of the optical transitions in the ensemble of quantum dots it is possible to store and retrieve the optical coherence in the spin ensemble of resident electrons and to extend the optical coherence time by about an order of magnitude, from 0.5~ns to 4~ns. This is manifested in the emergence of long-lived spin-dependent photon echo signals under resonant excitation of trions in (In,Ga)As/GaAs quantum dots in the presence of a moderate transverse magnetic field. We have developed a theoretical model that quantitatively describes the behavior of the three-pulse photon echo in quantum dots and takes into account the spin precession of both electrons and holes. The decay of the long-lived signal is attributed to the spin dephasing of resident electrons. Therefore, the time scales can be further extended into the microsecond range by spin-echo techniques using dynamic decoupling via excitation with radio-frequency pulses. \section{Acknowledgements} The authors acknowledge financial support by the Deutsche Forschungsgemeinschaft through the International Collaborative Research Centre TRR 160 (Projects A3 and A1). A.V.T. and I.A.Y. thank the Russian Foundation for Basic Research (Project No. 19-52-12046) and the Saint Petersburg State University (Grant No. 73031758). A.L. and A.D.W. gratefully acknowledge financial support from the grants DFH/UFA CDFA05-06, DFG project 383065199, and BMBF Q.Link.X 16KIS0867.
\section{Introduction} \label{sec:intro} A Banach space $X$ is a UMD-space, if there is a constant $c\geq 1$ such that \begin{equation} \label{eq:umd} \Bigg( \intop\limits_M \bigg\| \sum_{k=1}^n \epsilon_k d_k(\xi) \bigg\|^2 d\mu(\xi) \Bigg)^{1/2} \leq c \Bigg( \intop\limits_M \bigg\| \sum_{k=1}^n d_k(\xi) \bigg\|^2 d\mu(\xi) \Bigg)^{1/2} \end{equation} for all sequences $d_1,\dots,d_n$ of $X$-valued martingale differences and all sequences $\epsilon_1,\dots,\epsilon_n$ of signs. (The letters UMD{} stand for \emph{unconditional martingale differences}.) Maurey \cite{mau74} and later Burkholder \cite{bur86} showed that this is the case if and only if \eqref{eq:umd} is satisfied for Walsh-Paley-martingales on the interval $[0,1)$ only. Throughout this article, we will only deal with those special martingales. In this setting, there are essentially three different ways of changing signs: \begin{enumerate} \item use all \emph{predictable} sequences $(\epsilon_k)$, i.~e. $\epsilon_k \;:\; [0,1)\to\{\pm1\}$ is ${\cal F}_{k-1}$-measurable, where $({\cal F}_k)$ is the filtration to which the martingale is adapted, \item use all constant sequences of signs $\epsilon_k\in\{\pm1\}$, \item use one fixed sequence of signs $\epsilon_k=(-1)^k$. \end{enumerate} For each fixed $n$ in \eqref{eq:umd}, we will define below three corresponding ideal norms. The obtained sequences of ideal norms are bounded if and only if $X$ is a UMD-space. However, also in the unbounded case we can gain some information on $X$ from the asymptotic behavior of these sequences. The main result of this paper states that this information is essentially the same in all three cases. The corresponding sequences of ideal norms are asymptotically equivalent. A similar result in the setting of general martingales was obtained by Burkholder in \cite[Lemma 2.1]{bur84}. However, to make his proof work, one has to allow the underlying filtrations for the martingales to vary.
In the natural way, all concepts extend to the setting of operators between Banach spaces. \section{Definitions and main result} \label{sec:def} For $k=1,2,\dots$ and $j=0,\pm1,\pm2,\dots$, we let \[ \Delta_k^{(j)}:=\left[\tfrac{j-1}{2^k},\tfrac j{2^k}\right) \] be the {\em dyadic intervals}. The {\em Haar functions} are given by \[ \chi_k^{(j)}(t):= \begin{cases} +2^{(k-1)/2} & \txt{if $t\in\Delta_k^{(2j-1)}$,}\\[3pt] -2^{(k-1)/2} & \txt{if $t\in\Delta_k^{(2j)}$,}\\ 0 & \txt{otherwise.} \end{cases} \] We let \[ {\mathbb D}:=\{(k,j)\;:\; k=1,2,\dots;\ j=1,\dots,2^{k-1}\} \] denote the {\em dyadic tree}. We will mainly consider finite dyadic trees \[ {\mathbb D}_m^n:=\{(k,j)\;:\; k=m,\dots,n;\ j=1,\dots,2^{k-1}\}, \] where $m\le n$. To shorten notation, we write ${\mathbb D}_k$ for the \emph{$k$-th level} ${\mathbb D}_k^k$ of ${\mathbb D}$. We denote by $L_2^X$ the Banach space of square integrable $X$-valued functions $f$ on the interval $[0,1)$ equipped with the norm \[ \|f\|_2 := \bigg( \intop\limits_0^1 \|f(t)\|^2 dt \bigg)^{1/2}. \] All results in this article could also be obtained for an arbitrary index $1 < p < \infty$ instead of $2$; the changes are straightforward. However, to avoid cumbersome notation, we decided to restrict ourselves to the case $p=2$. Given any ${\mathbb D}_1^n$-tuple $(x_k^{(j)})$, we get a \emph{Walsh-Paley-martingale} of length $n$ with mean value zero by letting \[ f_k := \sum_{(h,i)\in{\mathbb D}_1^k} \!\! x_h^{(i)} \chi_h^{(i)} \txt{for $k=1,\dots,n$.} \] Note that by the martingale properties of the sequence $(f_k)$ and since the conditional expectation operator has norm one in $L_2^X$, we have \begin{equation} \label{eq:cond_exp} \|f_k\|_2 \leq \|f_n\|_2 \end{equation} whenever $k\leq n$.
We write \[ \tsprod f{\chi_k^{(j)}} := \intop\limits_0^1 f(t) \chi_k^{(j)}(t) \, dt, \] for the \emph{Haar-Fourier coefficients} of a function $f\in L_2^X$ and call \[ \mathop{\rm spec}(f) := \{ (k,j)\in{\mathbb D} \;:\; \tsprod f{\chi_k^{(j)}} \not=0\} \] the \emph{spectrum} of the function $f$. \begin{definition} For an operator $T\;:\; X \to Y$, we denote by $\mbox{\boldmath$\mu$}_n(T)$ the least constant $c\geq 1$ such that \[ \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n} \!\! \epsilon_k^{(j)} Tx_k^{(j)} \chi_k^{(j)} \bigg\|_2 \leq c \, \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n} \!\! x_k^{(j)} \chi_k^{(j)} \bigg\|_2 \] for all ${\mathbb D}_1^n$-tuples $(x_k^{(j)})$ and all signs $\epsilon_k^{(j)}=\pm1$. \end{definition} The above definition can be modified by assuming that the signs are changed on every level simultaneously. In other words, $\epsilon_k^{(j)}=\epsilon_k=\pm 1$ should not depend on $j=1,\dots,2^{k-1}$. A still weaker concept can be introduced by using only the signs $\epsilon_k^{(j)}=(-1)^k$. The ideal norms so obtained will be denoted by $\mbox{\boldmath$\mu$}_n^\circ(T)$ and $\mbox{\boldmath$\mu$}_n^{\circ\circ}(T)$, respectively. Note that the uniform boundedness of $\mbox{\boldmath$\mu$}^\circ_n$ exactly describes the usual UMD-property \eqref{eq:umd} restricted to Walsh-Paley-martingales. Obviously, we have \[ \mbox{\boldmath$\mu$}_n^{\circ\circ}(T) \leq \mbox{\boldmath$\mu$}_n^\circ(T) \leq \mbox{\boldmath$\mu$}_n(T). \] Surprisingly, an estimate in the reverse direction also holds. \begin{theorem*} $\mbox{\boldmath$\mu$}_n(T) \leq 3 \mbox{\boldmath$\mu$}_n^{\circ\circ}(T)$. \end{theorem*} \section{Proofs} \label{sec:proofs} For $(h,i)\in{\mathbb D}$, we denote by $\phi_h^{(i)}$ the transformation of $[0,1)$ that interchanges the intervals \[ \Delta_{h+1}^{(4i-2)} \txt{and} \Delta_{h+1}^{(4i-1)}.
\] More formally, \[ \phi_h^{(i)}(t):= \begin{cases} t+\frac1{2^{h+1}} & \txt{for $t\in\Delta_{h+1}^{(4i-2)}$,} \\[3pt] t-\frac1{2^{h+1}} & \txt{for $t\in\Delta_{h+1}^{(4i-1)}$,} \\[3pt] t & \txt{otherwise.} \end{cases} \] It turns out that \[ \chi_k^{(j)}\circ \phi_h^{(i)} = \begin{cases} \chi_k^{(j)} & \txt{if $k<h$ or $k=h$, $j\not=i$,} \\[4pt] \displaystyle \frac{\chi_{h+1}^{(2i-1)}+\chi_{h+1}^{(2i)}}{\sqrt2} & \txt{if $k=h$ and $j=i$,} \\[12pt] \chi_k^{(j)} & \txt{if $k=h+1$ and $j\not=2i-1,2i$,} \\[4pt] \chi_k^{(j^*)} & \txt{if $k>h+1$,} \end{cases} \] where $j\mapsto j^*$ is a permutation of $(1,\dots,2^{k-1})$. See \cite{wen96} for a proof. The most important property for our purpose is that whenever \[ \tsprod f{\chi_{h+1}^{(2i-1)}} = \tsprod f{\chi_{h+1}^{(2i)}} = 0 \] it follows that \begin{equation} \label{eq:main_property} \tsprod{f\circ\phi_h^{(i)}}{\chi_h^{(i)}}=0 \txt{and} \tsprod {f\circ \phi_h^{(i)}}{\chi_{h+1}^{(2i-1)}} = \tsprod {f\circ \phi_h^{(i)}}{\chi_{h+1}^{(2i)}} = \frac{\tsprod f{\chi_h^{(i)}}}{\sqrt2}. \end{equation} In other words, the Haar-Fourier coefficient of a function $f$ with respect to the index $(h,i)$ is shifted up one level and distributed to the indices $(h+1,2i-1)$ and $(h+1,2i)$. The basic idea of the proof is contained in the following proposition. \begin{proposition} \label{prop:1} $\mbox{\boldmath$\mu$}_n(T) \leq \mbox{\boldmath$\mu$}_{2n}^{\circ\circ}(T)$. \end{proposition} \ifvmode\else\newline\fi\noindent\textsc{Proof:\ } For a ${\mathbb D}_1^n$-tuple $(x_k^{(j)})$ write \[ f := \sum_{(k,j)\in{\mathbb D}_1^n} \!\! x_k^{(j)} \chi_k^{(j)} \txt{and} f^\epsilon := \sum_{(k,j)\in{\mathbb D}_1^n} \!\! \epsilon_k^{(j)} x_k^{(j)} \chi_k^{(j)}. \] First, we want to find a transformation $\psi_1\;:\;[0,1)\to[0,1)$ such that the spectrum of $f\circ\psi_1$ is concentrated on the odd levels, i.e.
\[ \tsprod{f\circ\psi_1}{\chi_{2k}^{(j)}} = 0 \txt{for all $(2k,j)\in{\mathbb D}$.} \] Indeed, using the composition of all $\phi_n^{(j)}$ with $j=1,\dots,2^{n-1}$, we shift the whole level ${\mathbb D}_n$ of the spectrum of $f$ to the level ${\mathbb D}_{n+1}$. Repeating this process of `spreading' $\mathop{\rm spec}(f)$ successively on the levels $n+1,n+2,\dots,2n-2$, we move the $n$-th level of $\mathop{\rm spec}(f)$ to the level ${\mathbb D}_{2n-1}$. In a similar manner, we next move the $(n-1)$-st level to ${\mathbb D}_{2n-3}$ and so on, so that finally \[ \tsprod{f\circ\psi_1}{\chi_{2k}^{(j)}} = 0, \] as required. Treating $f^\epsilon$ in the same way, we get that \[ \tsprod{f^\epsilon\circ\psi_1}{\chi_{2k}^{(j)}} = 0 \] and \[ \tsprod{f^\epsilon\circ\psi_1}{\chi_{2k-1}^{(j)}} = \delta_{2k-1}^{(j)} \tsprod{f\circ\psi_1}{\chi_{2k-1}^{(j)}}, \] where $\delta_{2k-1}^{(j)}=\pm1$ are signs that depend on the initial signs $(\epsilon_k^{(j)})$ only. We now construct a second transformation $\psi_2$ as the composition of all those transformations $\phi_{2k-1}^{(j)}$ for which $\delta_{2k-1}^{(j)}=+1$. Since \[ \tsprod{f\circ\psi_1}{\chi_{2k}^{(2j-1)}} = \tsprod{f\circ\psi_1}{\chi_{2k}^{(2j)}} = 0, \] this moves all the plus signs to the even levels and leaves the minus signs on the odd levels. Letting $\psi:=\psi_2\circ\psi_1$, it follows that \[ \tsprod{f^\epsilon\circ\psi}{\chi_k^{(j)}} = (-1)^k \tsprod{f\circ\psi}{\chi_k^{(j)}}. \] Hence, the definition of $\mbox{\boldmath$\mu$}_{2n}^{\circ\circ}(T)$ yields \[ \| Tf^\epsilon\circ\psi\|_2 \leq \mbox{\boldmath$\mu$}_{2n}^{\circ\circ}(T) \|f\circ\psi\|_2. \] This completes the proof of Proposition \ref{prop:1}, since \[ \| Tf^\epsilon\circ\psi\|_2 = \|Tf^\epsilon\|_2 = \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n} \!\! \epsilon_k^{(j)} Tx_k^{(j)} \chi_k^{(j)} \bigg\|_2 \] and \[ \| f\circ\psi\|_2 = \|f\|_2 = \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n} \!\!
x_k^{(j)} \chi_k^{(j)} \bigg\|_2.\mbox{ $\Box$} \] Next, we show that the sequence $\mbox{\boldmath$\mu$}_n^{\circ\circ}(T)$ behaves quite regularly. \begin{proposition} \label{prop:2} $\mbox{\boldmath$\mu$}_{2n}^{\circ\circ}(T) \leq 3 \mbox{\boldmath$\mu$}_n^{\circ\circ}(T)$. \end{proposition} \ifvmode\else\newline\fi\noindent\textsc{Proof:\ } Writing ${\mathbb D}_1^{2n}$ as the union of its lower part ${\mathbb D}_1^n$ and its upper part ${\mathbb D}_{n+1}^{2n}$, we obtain \[ \bigg\| \sum_{(k,j)\in{\mathbb D}_1^{2n}} \!\! (-1)^k Tx_k^{(j)}\chi_k^{(j)} \bigg\|_2 \leq L + U, \] where \[ L := \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n} \!\! (-1)^k Tx_k^{(j)}\chi_k^{(j)} \bigg\|_2 \txt{and} U := \bigg\| \sum_{(k,j)\in{\mathbb D}_{n+1}^{2n}} \!\! (-1)^k Tx_k^{(j)}\chi_k^{(j)} \bigg\|_2. \] Obviously \[ L \leq \mbox{\boldmath$\mu$}_n^{\circ\circ}(T) \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n}\!\! x_k^{(j)}\chi_k^{(j)} \bigg\|_2, \] and by \eqref{eq:cond_exp}, we get \[ \bigg\| \sum_{(k,j)\in{\mathbb D}_1^n} \!\! x_k^{(j)}\chi_k^{(j)} \bigg\|_2 \leq \bigg\| \sum_{(k,j)\in{\mathbb D}_1^{2n}} \!\! x_k^{(j)}\chi_k^{(j)} \bigg\|_2. \] To estimate $U$, we use the `self-similarity' of the Haar functions. Write ${\mathbb D}_{n+1}^{2n}$ as the disjoint union of its subtrees \[ {\mathbb S}_i := \{(k,j) \in {\mathbb D}_{n+1}^{2n} \;:\; j=(i-1)2^{k-n-1}+1,\dots,i2^{k-n-1}\}. \] Then the map \[ (k,j) \mapsto (k',j') := (k-n, j-(i-1)2^{k-n-1}) \] defines a bijection of ${\mathbb S}_i$ and ${\mathbb D}_1^n$. Moreover, we have \begin{equation} \label{eq:haar_properties} \chi_k^{(j)}(\tfrac{t+i-1}{2^n}) = \begin{cases} 2^{n/2} \chi_{k'}^{(j')}(t) & \txt{if $(k,j)\in{\mathbb S}_i$,} \\[6pt] 0 & \txt{otherwise.} \end{cases} \end{equation} Hence for \begin{eqnarray*} U_i & := & \Bigg( \intop\limits_{\Delta_n^{(i)}} \bigg\| \sum_{(k,j)\in{\mathbb D}_{n+1}^{2n}} \!\! 
(-1)^k Tx_k^{(j)}\chi_k^{(j)}(t) \bigg\|^2 dt \Bigg)^{1/2} \\ & = & \Bigg( \frac1{2^n} \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb S}_i} \!\! (-1)^k Tx_k^{(j)}\chi_k^{(j)}(\tfrac{t+i-1}{2^n}) \bigg\|^2 dt \Bigg)^{1/2} \\ & = & \Bigg( \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb S}_i} \!\! (-1)^k Tx_k^{(j)}\chi_{k'}^{(j')}(t) \bigg\|^2 dt \Bigg)^{1/2}, \end{eqnarray*} we get \begin{equation} U_i \leq \mbox{\boldmath$\mu$}_n^{\circ\circ}(T) \Bigg( \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb S}_i}\!\! x_k^{(j)}\chi_{k'}^{(j')}(t) \bigg\|^2 dt \Bigg)^{1/2}. \label{eq:est_U.1} \end{equation} Using \eqref{eq:haar_properties} again, we obtain that \begin{equation} \label{eq:est_U.2} \Bigg( \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb S}_i}\!\! x_k^{(j)}\chi_{k'}^{(j')}(t) \bigg\|^2 dt \Bigg)^{1/2} = \Bigg(\! \intop\limits_{\Delta_n^{(i)}} \bigg\| \sum_{(k,j)\in{\mathbb D}_{n+1}^{2n}} \!\! x_k^{(j)}\chi_k^{(j)}(t) \bigg\|^2 dt \Bigg)^{1/2}\!. \end{equation} Putting \eqref{eq:est_U.1} and \eqref{eq:est_U.2} together yields \[ U=\Big(\sum_{i=1}^{2^n} U_i^2 \Big)^{1/2} \leq \mbox{\boldmath$\mu$}_n^{\circ\circ}(T) \Bigg( \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb D}_{n+1}^{2n}} \!\! x_k^{(j)}\chi_k^{(j)}(t) \bigg\|^2 dt \Bigg)^{1/2}. \] Finally, again by \eqref{eq:cond_exp} we have \[ \Bigg( \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb D}_{n+1}^{2n}} \!\! x_k^{(j)}\chi_k^{(j)}(t) \bigg\|^2 dt \Bigg)^{1/2} \leq 2 \Bigg( \intop\limits_0^1 \bigg\| \sum_{(k,j)\in{\mathbb D}_1^{2n}} \!\! x_k^{(j)}\chi_k^{(j)}(t) \bigg\|^2 dt \Bigg)^{1/2}. \] This completes the proof of Proposition \ref{prop:2}.\mbox{ $\Box$} The theorem is now an immediate consequence of Propositions \ref{prop:1} and \ref{prop:2}.
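The shift property \eqref{eq:main_property} is also easy to test numerically. The sketch below (an added illustration, with $h=i=1$ chosen for concreteness) implements $\phi_h^{(i)}$ on a dyadic grid and verifies that the coefficient of $f=\chi_h^{(i)}$ at $(h,i)$ is moved up one level and split equally between $(h+1,2i-1)$ and $(h+1,2i)$.

```python
import numpy as np

N = 2 ** 12
t = (np.arange(N) + 0.5) / N  # midpoint grid, exact for dyadic step functions

def haar(k, j, u):
    """Haar function chi_k^(j) evaluated at the points u."""
    s = 2.0 ** ((k - 1) / 2)
    return s * (np.where((u >= (2 * j - 2) / 2 ** k) & (u < (2 * j - 1) / 2 ** k), 1.0, 0.0)
                - np.where((u >= (2 * j - 1) / 2 ** k) & (u < 2 * j / 2 ** k), 1.0, 0.0))

def phi(h, i, u):
    """Interchange the intervals Delta_{h+1}^{(4i-2)} and Delta_{h+1}^{(4i-1)}."""
    d = 1.0 / 2 ** (h + 1)
    lo = (4 * i - 3) * d  # left endpoint of Delta_{h+1}^{(4i-2)}
    out = u.copy()
    out[(u >= lo) & (u < lo + d)] += d
    out[(u >= lo + d) & (u < lo + 2 * d)] -= d
    return out

inner = lambda f, g: np.mean(f * g)

h, i = 1, 1
f = haar(h, i, t)              # spec(f) = {(h,i)}; the (h+1)-level coefficients vanish
g = haar(h, i, phi(h, i, t))   # g = f composed with phi_h^(i)
print(inner(g, haar(h, i, t)))           # -> 0.0
print(inner(g, haar(h + 1, 2 * i - 1, t)))  # -> 1/sqrt(2) ~ 0.7071
print(inner(g, haar(h + 1, 2 * i, t)))      # -> 1/sqrt(2) ~ 0.7071
```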
\section{Introduction} The strong dipole-dipole interaction between Rydberg atoms has inspired many proposals for their use in quantum simulation and quantum information processing \cite{Jaksch2000,Lukin2002,Saffman2010}. Rapid experimental progress is accompanying this development \cite{Heidemann2007,Reetz-Lamour2008,Urban2009,Gaetan2009,Tauschinsky2010}. So far, the excitation has always been resonant, which results in dynamics of the internal states that are much faster than the external motion of the atoms. This regime is referred to as the frozen Rydberg gas. A trapping potential is only needed in order to prepare and detect the atoms. During the internal dynamics, the trapping fields are often switched off (dipole trap) or do not play a major role (magnetic trap). Recently, interest has increasingly focused on dressed Rydberg states \cite{Pupillo2010,Henkel2010,Honer2010,Johnson2010}. The coupling to the Rydberg state is off-resonant such that the ground state only acquires a small admixture of the Rydberg state. This helps reduce the energy scale of the dipole-dipole interaction to the same order of magnitude as the interaction between two ground state atoms and the typical kinetic energy of atoms in an ultracold sample. As the Rydberg-dressed states allow for experiments on a much longer time-scale, the frozen Rydberg gas assumption no longer holds and effects arising from the trapping potential will be much more pronounced. It is therefore necessary to understand the role of the trapping potential and its influence on the energy levels and the lifetime of the Rydberg states. The physical properties of the ground state of an atom can differ substantially from those of its Rydberg states. The magnetic moment does not depend on the principal quantum number $n$ but only on the angular momentum and the spin of the electrons.
Typical Rydberg states that are used in current experiments ({\it ns}, {\it np}, {\it nd} states) therefore have a magnetic moment which is comparable to that of the ground state, and no drastic effect is expected. Optical trapping fields instead couple to the dynamic polarizability of the atom, and this can lead to significantly different light shifts for the Rydberg state compared to the ground state: the shift can be larger, smaller or can even change its sign. The light shift of Rydberg levels has been measured, for example, for Xe atoms in an atomic beam \cite{OBrian1994} and for ultracold Rb atoms in an optical lattice \cite{Younge2010}. Theoretical calculations have been performed, for example, on the light shift for low-lying (n$<$9) states of rubidium \cite{Safronova2004} and approximate results for high-lying Rydberg states can be found in Ref.\,\cite{Delone2000}. As the Rydberg states are close to the ionization threshold, the light field usually has enough energy to ionize the atom. Photoionization is therefore a possible interaction mechanism that will cause additional losses. The magnitude of both effects depends strongly on the wavelength and the intensity of the light field as well as on the principal quantum number $n$ of the Rydberg state. Here, we report on the measurement of the AC-Stark shift of the 14$D_{5/2}$ Rydberg state of $^{87}$Rb in an optical dipole trap generated by a CO$_2$-laser. The atoms are initially prepared as a thermal cloud at microkelvin temperature. The experiment is performed in steady state by continuously exciting the atoms to the Rydberg state and looking at the production rate of rubidium ions upon photoionization by the CO$_2$-laser. We compare the observed spectra with a model that includes the AC-Stark shift and the finite lifetime against photoionization of the Rydberg state. For the chosen parameters, both effects are very strong and easily visible in the experiment.
\section{Experimental setup} The experiments are carried out on an apparatus for the production of ultracold quantum gases. Starting from a magneto-optical trap (MOT), we load $4\times 10^6$ rubidium atoms into a single-beam optical dipole trap generated by a CO$_2$-laser with a waist of $30\,\mu$m. The initial power of the CO$_2$-laser is 10\,W, corresponding to a trap depth of $500\,\mu$K and a laser intensity of $7\times10^5$ W/cm$^2$ in the trap center. The atoms are prepared in the $|5S_{1/2},F=1\rangle$ hyperfine ground state and are equally distributed among all three Zeeman sublevels. We then ramp down the intensity of the CO$_2$-laser for evaporative cooling. After 6\,s we end up with a Bose-Einstein condensate of $10^5$ atoms at a final laser power of 50\,mW. In order to produce a thermal cloud, we can stop the cooling ramp at any intensity in between. The temperature of the cloud is defined by the laser power and can be measured by standard absorption imaging. The cloud is cigar-shaped, with a spatial extension of about $10\,\mu m \times 100\,\mu m$, slightly decreasing with temperature. After preparation, we keep the optical dipole trap at a constant power and switch on two additional light fields. The first light field is resonant with the $|5S_{1/2},F=2\rangle \leftrightarrow |5P_{3/2},F'=3\rangle$ transition of rubidium, which is also used for cooling and imaging the atoms. We refer to this laser as the ``imaging laser'' ($35\,\mu$W power, 5.4\,mm beam waist). As the atoms are initially in the $|5S_{1/2},F=1\rangle$ ground state, the imaging laser causes a weak off-resonant (6.8\,GHz detuning) optical pumping of atoms into the $|5S_{1/2},F=2\rangle$ ground state. The pumping rate is about 0.1\,s$^{-1}$. The second light field couples the $|5P_{3/2},F'=3\rangle$ excited state to the Rydberg state $|14D_{5/2}\rangle$ \cite{hyperfine}.
It has a wavelength of 495\,nm and is generated via frequency doubling of a seeded diode laser in a periodically poled waveguide crystal. In the following, we refer to this laser as the ``Rydberg laser'' (1\,mW power, $250\,\mu$m beam waist). The extension of both laser beams is larger than that of the atomic cloud, so the fields can be considered homogeneous across it. The CO$_2$-laser, which provides the trapping potential, completes the three-photon ionization scheme. The relevant energy levels as well as the geometry of the laser beams are shown in Fig.\,1. \begin{figure}[htbp] \label{fig1} \centering \includegraphics[width=14cm]{Fig1_Energy_levels.pdf} \caption{Three-photon ionization scheme. (a) Relevant energy levels of rubidium. The imaging laser off-resonantly pumps the atoms from the $|5S_{1/2},F=1\rangle$ ground state to the $|5S_{1/2},F=2\rangle$ ground state. Subsequently, the atoms are ionized via the intermediate $|5P_{3/2},F'=3\rangle$ and the $|14D_{5/2}\rangle$ state. (b) Geometry of the three laser beams. The different directions are due to geometrical constraints of the vacuum system and have no special meaning. All laser beams are linearly polarized. The CO$_2$-laser provides both the trapping potential and the final ionization step. The created ions are extracted with a small electric field (5\,V/cm) and counted by a channeltron detector.} \end{figure} {\bf Choice of the ${\bf 14D_{5/2}}$ state:} The experimental setup is part of a scanning electron microscope, which has been adapted for the imaging and manipulation of ultracold atoms. The detection principle relies on electron impact ionization of the atoms and subsequent ion detection \cite{Gericke2008,Wuertz2009}. The probability for electron impact ionization is less than 40\,\%. This currently limits the detection efficiency of this method. The dominant competing scattering channel is electron impact excitation.
Most of these collisions lead to the excitation of the $5P_{1/2}$ and $5P_{3/2}$ states, as they have the largest dipole matrix elements. After excitation, the atom eventually decays to the $|5S_{1/2},F=1\rangle$ or $|5S_{1/2},F=2\rangle$ ground state. As the atoms are initially in the $|5S_{1/2},F=1\rangle$ ground state, atoms that decay to the $|5S_{1/2},F=2\rangle$ ground state can be ionized by the three-photon ionization scheme described above, thus enhancing the overall detection efficiency. The cross-section for photoionization of a Rydberg atom strongly depends on the binding energy \cite{Potvliege2006}. For the CO$_2$-laser wavelength, the {\it n}=13 state of rubidium is the lowest bound state that can be ionized. As the required wavelength of 498\,nm is inconvenient to generate, we have decided to study photoionization via the $14D_{5/2}$ state at a wavelength of 495\,nm. For such low-lying Rydberg states, one expects a very short lifetime against photoionization \cite{Potvliege2006}, resulting in a fast ionization scheme. \section{Photoionization spectroscopy} The signal that we use for the spectroscopy is the number of produced ions. We have recorded photoionization spectra for four different powers of the CO$_2$-laser. For each spectrum, the frequency of the Rydberg laser is varied while the frequencies of the imaging laser and the CO$_2$-laser are kept constant. For each setting of the Rydberg laser frequency we have performed one experimental run. We have chosen a total exposure time of 1\,s and have recorded the total number of detected ions during this time. Typically, a few thousand ions are detected, which is much less than the total number of atoms in the trap. Saturation effects can therefore be neglected. The total time of flight of an ion to the detector is 18\,$\mu$s (already after 1\,$\mu$s the ion has left the cloud) and the highest observed production rate of ions is 4000\,s$^{-1}$.
Therefore, there is almost never more than one ion or Rydberg atom inside the cloud at the same time, and effects related to space charge, cold plasma formation, avalanche ionization or Rydberg blockade can be neglected. The spectra are shown in Fig.\,2. Each spectrum has two pronounced features. One rather sharp peak occurs close to resonance, while a second peak with varying width is shifted with increasing intensity of the CO$_2$-laser. This second peak is due to the AC-Stark shift of the Rydberg state $14D_{5/2}$. \begin{figure}[htbp] \label{fig2} \centering \includegraphics[width=10cm]{Fig2_spectra.pdf} \caption{Recorded photoionization spectra for different powers of the CO$_2$-laser. The laser power and the temperature of the thermal cloud are indicated in each graph. For zero detuning, the Rydberg laser is resonant with the $|5P_{3/2},F'=3\rangle \rightarrow |14D_{5/2}\rangle$ transition.} \end{figure} We start our analysis by first discussing the AC-Stark shifts induced by the three involved lasers. The light shifts of the ground state $5S_{1/2}$ and of the intermediate state $5P_{3/2}$ due to the CO$_2$-laser are well known \cite{Friebel1998}. The ground state is shifted towards negative energies as the laser is red detuned with respect to all possible transitions. It is this shift that provides the trapping potential for the atoms. For the highest power of the CO$_2$-laser (1\,W) it amounts to $- h \times 1.1$\,MHz in the trap center. The shift of the $5P_{3/2}$ state is about two times as large and also negative. Compared to the size of the observed features in the spectra, both shifts can be neglected. We also note that the imaging laser is resonant for all four spectra, as both light shifts are smaller than the natural linewidth of the intermediate $5P_{3/2}$ state. The intensity of the imaging laser ($7 \times 10^{-5}$\,W/cm$^2$ in the beam center) ensures a small Rabi frequency of about $2\pi\times1$\,MHz.
The Rydberg laser (1\,W/cm$^2$ in the beam center) couples the excited state to the Rydberg state with a Rabi frequency of $2\pi\times2.6$\,MHz. As the light shift cannot exceed the Rabi frequency, the imaging laser and the Rydberg laser do not induce considerable light shifts. The only remaining shift is that of the $14D_{5/2}$ state, induced by the CO$_2$-laser. \begin{figure}[htbp] \label{fig3} \centering \includegraphics[width=10cm]{Fig3_VvonE.pdf} \caption{(a) Trapping potential along the direction of gravity (not to scale). The saddle point defines the maximal potential energy $U_{\mathrm{max}}$, which we consider in our model. (b) Trap volume for a given potential energy, calculated for the spectrum with 1\,W power in the CO$_2$-laser. Due to the convex shape of the wings of the optical dipole trap, the volume becomes very large when the potential energy approaches $U_{\mathrm{max}}$.} \end{figure} The simplest way to model this shift is the ponderomotive potential of a free electron in the laser field. The potential energy corresponds to the average kinetic energy of an electron oscillating in the light field. It is given by \begin{equation} \label{eq2} h\nu_{\mathrm{ls}}=\frac{e^2I}{2m_e\epsilon_0 c(2\pi\nu_{\mathrm L})^2}\,\,, \end{equation} where $\nu_{\mathrm{ls}}$ denotes the resulting lightshift, $e$ is the electron charge, $I$ is the intensity of the light field at the position of the electron, $m_e$ is the electron mass, $c$ is the speed of light, $\epsilon_0$ is the vacuum permittivity, and $\nu_{\mathrm L}$ is the frequency of the laser. We have expressed the ponderomotive potential directly in terms of the lightshift. As the sign of the ponderomotive potential is always positive, the level is shifted upwards in energy, and the light field constitutes a repulsive potential for the atom.
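For orientation, Eq.\,\ref{eq2} can be evaluated with the numbers quoted in the text. In the sketch below (an added illustration), the central intensity at 1\,W of CO$_2$-laser power is assumed to be $7\times10^4$\,W/cm$^2$, i.e.\ one tenth of the quoted value at 10\,W.

```python
import numpy as np

# CODATA constants (SI units)
e    = 1.602176634e-19   # elementary charge [C]
m_e  = 9.1093837015e-31  # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
c    = 2.99792458e8      # speed of light [m/s]
h    = 6.62607015e-34    # Planck constant [J s]

def ponderomotive_shift(intensity, wavelength):
    """Ponderomotive light shift nu_ls [Hz] for intensity [W/m^2] and wavelength [m]."""
    nu_L = c / wavelength
    return e ** 2 * intensity / (2 * m_e * eps0 * c * (2 * np.pi * nu_L) ** 2 * h)

# Assumed central intensity at 1 W of CO2-laser power:
# 7e4 W/cm^2 = 7e8 W/m^2; CO2-laser wavelength 10.6 um.
shift = ponderomotive_shift(7e8, 10.6e-6)
print(f"{shift / 1e6:.0f} MHz")  # about 180 MHz, of the order of the observed ~170 MHz
```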
Compared to a dipole trap with the same trap depth operating at frequencies in the visible or near-infrared spectral range, the ponderomotive potential in the CO$_2$-laser dipole trap is a factor of 1000 larger \cite{factor1000}. \begin{figure}[htbp] \label{fig4} \centering \includegraphics[width=10cm]{Fig4_noa.pdf} \caption{Number of atoms with a given lightshift. The red points are the experimental data (see also Fig.\,2). The blue line is the calculated fraction of atoms with a given lightshift (normalized to the experimental data). The peak on the left corresponds to atoms close to the edge of the dipole trap, where the density is small but the available trap volume is large. The peak on the right stems from atoms in the trap center, where the density is large but the trap volume is small. While there is already qualitative agreement, the shape of the peak on the right shows a significant deviation from the data.} \end{figure} Next, we have to take into account that in a dipole trap the lightshift of the Rydberg state depends on the position of the atom within the trap. For a thermal cloud, the density of the atoms is determined by the Boltzmann distribution \begin{equation} n(\vec{r})\propto e^{-U(\vec{r})/k_{\mathrm B}T}, \end{equation} where $U(\vec{r})$ is the trapping potential, which is directly given by the intensity profile of the dipole trap laser: $U(\vec{r})\propto -I(\vec{r})$. Thus, according to Eq.\,\ref{eq2}, the lightshift of an atom is determined by its potential energy. The number of atoms with a certain lightshift is then identical to the number of atoms with a certain potential energy. This number is given by multiplying the density $n(\vec{r})$ with the available trap volume at this potential energy. We discretize the trap volume into potential energy shells of width $\Delta U$.
The volume of each energy shell is then given by $\Delta V(U) =dV(U)/dU\times \Delta U$, where $V(U)$ is the integrated trap volume with a potential energy smaller than $U$. The calculation of $\Delta V(U)$ has to be done numerically. As an example, $\Delta V(U)$ for the spectrum with 1\,W power in the CO$_2$-laser is shown in Fig.\,\ref{fig3}b. Due to the asymptotic behaviour of the trapping potential, $\Delta V(U)$ diverges if the potential energy equals the trap depth. However, due to gravity, the symmetry is distorted and a saddle point of the potential along the direction of gravity emerges (see Fig.\,3a). We take the potential of the saddle point as a cutoff potential energy $U_{\mathrm {max}}$ for our calculation. In order to calculate the density, one has to know the temperature of the cloud. In our experiment, the temperature changes during the exposure, as evaporation continues after the ramp has been stopped. After 1\,s, the temperature is about 30\,\% lower than at the beginning. We take the average of the initial and final temperature as the effective temperature for the measurement. The number of atoms with a certain lightshift is then readily calculated and plotted in Fig.\,4. It is clearly visible that a two-peak structure emerges, arising from the competition between the Boltzmann distribution and the large number of available states at the edges of the dipole trap. While the overall shape of the spectra is already reproduced, there is not yet full quantitative agreement. The reason is that the finite lifetime of the Rydberg state causes an additional broadening. The dominant contribution stems from photoionization. The lifetime of the Rydberg state depends on the photoionization cross-section $\sigma$ and the intensity of the CO$_2$-laser and is given by \begin{equation} \tau_{\mathrm{ion}} = \frac{h\nu_L}{I(\nu_{\mathrm {ls}})\sigma}\,\,.
\end{equation} As the intensity is connected via Eq.\,\ref{eq2} to the lightshift $\nu_{\mathrm{ls}}$, the lifetime depends on the lightshift. The photoionization cross-section $\sigma$ of low-lying Rydberg states has been measured in Ref.\,\cite{Gabbanini2006}. For the $16D$ state, it amounts to 39\,Mb. Following the trend of the data, we estimate a cross-section for the $14D_{5/2}$ state between 45 and 50\,Mb. This corresponds to a lifetime in the trap center between 10 and 100\,ns for the four different spectra. Assuming a Lorentzian profile with width $\delta \nu (\nu_{\mathrm{ls}})$ for the total broadening, we write \begin{equation} \delta \nu (\nu_{\mathrm{ls}})= \delta \nu_{\mathrm{ion}}(\nu_{\mathrm{ls}}) + \delta \nu_{\mathrm{laser}} + \delta \nu_{\mathrm {natural}}\,\,, \end{equation} where $\delta \nu_{\mathrm{ion}} (\nu_{\mathrm{ls}})=(2\pi\tau_{\mathrm{ion}})^{-1}$ is the contribution from photoionization, $\delta \nu_{\mathrm{laser}}$ denotes a constant broadening due to the finite bandwidth of the lasers (1\,MHz each), and $\delta \nu_{\mathrm{natural}}$ denotes the natural linewidth of the Rydberg state (70\,kHz). \begin{figure}[htbp] \label{fig5} \centering \includegraphics[width=10cm]{Fig5_results.pdf} \caption{Comparison with theory. The experimental data (red points, same data as in Fig.\,2) are shown together with the theoretical model as outlined in the text. The model has been normalized to the height of the shifted peak.} \end{figure} As all involved timescales (Rabi frequencies, lifetime and ionization rate) are much faster than the motion of the atoms in the trap, we further assume that the atoms are ionized right at the position where they are pumped into the $|5S_{1/2},F=2\rangle$ ground state. This allows us to ignore the external dynamics of the atoms and to consider only a static density distribution.
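As a rough consistency check (an added illustration with assumed numbers), the photoionization lifetime and the corresponding Lorentzian width can be estimated from the formula above, taking $\sigma=48$\,Mb and again assuming a central intensity of $7\times10^4$\,W/cm$^2$ at 1\,W of CO$_2$-laser power.

```python
import math

h = 6.62607015e-34  # Planck constant [J s]
c = 2.99792458e8    # speed of light [m/s]

def tau_ion(intensity, sigma, wavelength=10.6e-6):
    """Photoionization lifetime tau_ion = h*nu_L / (I*sigma), all in SI units."""
    nu_L = c / wavelength
    return h * nu_L / (intensity * sigma)

sigma = 48e-22  # 48 Mb; 1 Mb = 1e6 barn = 1e-22 m^2
I = 7e8         # assumed central intensity [W/m^2] at 1 W of CO2-laser power
tau = tau_ion(I, sigma)
dnu = 1 / (2 * math.pi * tau)  # photoionization contribution to the linewidth [Hz]
# nanosecond-scale lifetime and a width of a few tens of MHz
print(f"tau = {tau * 1e9:.1f} ns, width = {dnu / 1e6:.0f} MHz")
```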
The final lineshape is then given by a convolution of the atom number distribution as shown in Fig.\,4 with the Lorentzian profile for the broadening. In Fig.\,5 we show the result of the convolution together with the experimental data for a cross-section of 48\,Mb. The agreement is good, especially for a high power in the CO$_2$-laser. With the same parameters we can recover the shape of all spectra. Only the height and width of the left peak show stronger deviations. This is not surprising since the shape of the peak is very sensitive to the density at the edges of the dipole trap. As the evaporation is a dynamical process, there might be atoms in the trap that have a higher potential energy than $U_{\mathrm {max}}$ and therefore lead to a broadening of the unshifted peak. Moreover, the density at the trap edge is exponentially sensitive to the temperature, which changes during the measurement. However, the position of the shifted peak is well described for all data sets. This is important as this peak contains the information about the lightshift and the photoionization cross section. Spontaneous decay (2.2\,$\mu$s lifetime) and transitions induced by black body radiation (10\,$\mu$s lifetime) can redistribute population from the $14D_{5/2}$ state to neighboring states. As the ionization process takes place on a timescale which is at least 20 times faster, it is sufficient to restrict the analysis to the $14D_{5/2}$ state. Also, ionization due to black body radiation does not play a significant role as it amounts to only a fraction of the rate for black body induced transitions \cite{Glukhov2010}. Electric fields are another possible source of line broadening and line shifts. Our measurement principle requires a small electric field (5\,V/cm), which is continuously applied during the experiment.
While such a field can significantly shift high-lying Rydberg states, its influence on the $14D_{5/2}$ state is less than 1\,MHz, which is below the resolution of our spectroscopy technique. We conclude the discussion by a detailed analysis of the validity of the ponderomotive potential. The assumption of a free electron for the $14D_{5/2}$ state is certainly questionable as the binding energy corresponds to 70\,\% of the photon energy and resonance effects might occur. A quantum-mechanical calculation is therefore necessary for a verification. This is most conveniently done by writing the interaction of the electron with the radiation field in terms of the vector potential \cite{vectorpotential} \begin{equation} H_\mathrm{int}(t)=\frac{e^2}{2m}{\bf A}(t)^2+\frac{e}{m}{\bf A}(t){\bf p}, \end{equation} with ${\bf p}$ being the electron momentum and ${\bf A}(t)=-{\bf E_0}/\omega_{\mathrm L}\cos(\omega_{\mathrm L} t)$, where ${\bf E_0}$ is the electric field vector of the light field. The first term directly gives the ponderomotive potential in first order perturbation theory after time averaging over one oscillation period, $\Delta E_1=e^2E_0^2/(4m\omega_{\mathrm L}^2)$. It shifts all states in the same way. The second term can then be regarded as a correction to the ponderomotive potential. In second order perturbation theory one can write \begin{equation} \label{correction} \Delta E_2 =\frac{e^2E_0^2}{4m\omega_{\mathrm L}^2}\times\frac{1}{\hbar}\sum_k\frac{|\left\langle k|z|i\right\rangle|^22m\omega_{ik}^3}{\omega_{ik}^2-\omega_{\mathrm L}^2}. \end{equation} Here, we have set the linear polarization of the light field along the $z$-axis and have replaced the matrix elements according to $\left\langle k|{\bf p}|i\right\rangle=im\omega_{ik}\left\langle k|{\bf r}|i\right\rangle$, with $\hbar\omega_{ik}=E_i-E_k$, $E_i$ and $E_k$ being the energies of the initial state $|i\rangle$ and the intermediate states $|k\rangle$. 
The first factor in Eq.\,\ref{correction} is again the ponderomotive potential, and the second factor is a dimensionless correction factor. The $14D_{5/2}$ state is coupled to all $nP_{3/2}$, $nF_{5/2}$ and $nF_{7/2}$ states, and we have included in the calculation all intermediate states from $n=5$ to $n=120$. Note that 90 percent of the lightshift originates from the states up to $n=40$. The wave functions have been generated with the help of the Numerov method and the quantum defects have been taken from Ref.\,\cite{Lorenzen1983}. The calculation has been performed for $|m|=1/2, 3/2$, and $5/2$, where $m$ is the projection of the total angular momentum on the electric field vector of the CO$_2$-laser. For all three Zeeman sub-states the correction factor $c_m$ to the ponderomotive potential is less than 10 percent. We find $c_{1/2}=0.07$, $c_{3/2}=0.05$, and $c_{5/2}=0.02$. In the experiment we populate a mixture of all three sublevels. In Fig.\,6 we show the level shift arising from the coupling to the $nP_{3/2}$ states for $|m|=1/2$. It is clearly visible that a peak-like structure appears around $n=11$, where the CO$_2$-laser is close to resonance. However, the detuning is still large enough to ensure a ponderomotive potential. Note that the contributions from the various states partially cancel. \begin{figure}[htbp] \label{fig6} \centering \includegraphics[width=10cm]{Fig6.pdf} \caption{Contribution to the light shift from the intermediate $nP_{3/2}$ states for $|m|=1/2$, see Eq.\,\ref{correction}. The light shift is given in units of the ponderomotive potential.} \end{figure} \begin{figure}[htbp] \label{fig7} \centering \includegraphics[width=9cm]{Fig7.pdf} \caption{Comparison of the 1\,W spectrum with different strengths of the lightshift. The dotted (dashed) line corresponds to a lightshift with 10\,\% less (10\,\% more) strength.} \end{figure} The model presented above has (apart from the normalization constant) no free parameter.
In order to test for a possible deviation from the ponderomotive potential we can artificially tune its strength with an additional factor $\eta$ and repeat the evaluation for different values of $\eta$. This is shown in Fig.\,7 for $\eta=0.9$ and $1.1$. As one can see, a deviation of 10\,\% already leads to a disagreement with the observed spectra. This is in accordance with the detailed calculation and we can conclude that the AC-Stark shift of the $14D_{5/2}$ state in a CO$_2$-laser dipole trap is given by the ponderomotive potential of a free electron. A similar result has been obtained for low-lying Rydberg states of xenon ($n=10,\dots,15$), which were also found to be in good agreement with a ponderomotive potential \cite{OBrian1994}. \section{Summary and Outlook} We have measured the AC-Stark shift of the $14D_{5/2}$ state of rubidium in a CO$_2$-laser dipole trap. We find that the lightshift is given by the ponderomotive potential of a free electron in the light field. The ponderomotive potential is always repulsive and is independent of the principal quantum number $n$. All higher-lying Rydberg states are shifted in the same way, provided that no near-resonant coupling to lower-lying states occurs. For our settings we observe a light shift of up to 170\,MHz. This can be used, for instance, for new schemes of evaporative cooling, as the excitation of the atoms to the Rydberg state can be made spatially selective. We also extract the photoionization cross-section from our data, which we find to be compatible with previous measurements. The observed short lifetime of the Rydberg state of less than 100\,ns, even for a shallow trapping potential, sets a limitation for the use of low-lying Rydberg states in combination with a CO$_2$-laser dipole trap. However, for higher quantum numbers, the ionization cross-section drastically decreases and lifetimes in the ms range are realistic \cite{Potvliege2006}.
Both effects, the light shift and the photoionization, can be significantly reduced using dipole traps in the visible or near-infrared spectral range. This will make experiments with Rydberg-dressed atoms in optical dipole traps feasible. \section*{References}
\section{Introduction} The energy relaxation time $T_1$ of superconducting qubits is affected by dielectric loss, nonequilibrium quasiparticles \cite{MartinisPRL09}, and charge or bias noise, and varies from a few nanoseconds to several microseconds, depending on qubit type, material, and device layout. Superconducting qubits are commonly based on $\Al$ thin films, and their central element, the non-linear inductor given by a Josephson tunnel junction (JJ), is formed by either overlap \cite{SteffenPRL06} or window-type geometries \cite{KlineSST09}. Qubit spectroscopy reveals coupling to stochastically distributed two-level systems (TLSs) in the tunnel oxide \cite{SimmondsPRL04,LupascuPRB09,LisenfeldPRB10,Bushev_PRB10,DeppePRB07} which provide a channel for qubit decoherence. While the physical nature of TLSs is still under debate, their number was shown to decrease with junction size and their density with higher atomic coordination number of the tunnel oxide \cite{KlineSST09,MartinisPRL05}. The number of coherent oscillations in the qubit is limited by, among other decoherence mechanisms such as nonequilibrium quasiparticles, the \emph{effective} dielectric loss tangent $\tan\delta_{\rm{eff}}$ \cite{MartinisPRL05}. The overlap geometry provides JJs with amorphous barriers and no need for isolation dielectrics, which are themselves a source of additional TLSs and dielectric losses. The window geometry is used for higher-quality, e.g. epitaxial, trilayer JJs with in-situ grown barriers. Besides complex fabrication, they have the drawback of requiring additional isolation dielectrics \cite{BaronePaterno}. The importance of keeping the total dielectric volume in qubits small to reduce the additional loss was shown in Ref.
\cite{MartinisPRL05}.\par In this paper we give an overview of our standard technology for junction fabrication, and present an alternative junction based on sputtered trilayer stacks, which provides an intrinsically cleaner tunnel oxide and is well suited for micron-sized trilayer qubit junctions. The so-called \emph{side-wall passivated JJs} provide contact to the top electrode without adding too much lossy dielectric to the circuitry, which would negatively affect the loss tangent. The trilayer isolation is achieved via an electrolytic process. These novel JJs were realized in a flux-biased phase qubit and characterized by i) current transport measurements on reference junctions and ii) spectroscopy and time-domain measurements of the qubit.\par By systematically replacing only the Josephson junction, which is central to any superconducting qubit, we aim to analyze the loss contributions of this specific element, and, ideally, develop low-loss Josephson junctions for superconducting qubits and improve our qubit performance. We found performance comparable to the current generation of overlap phase qubits. \section{Novel geometry} \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm]{Fig1SidewallAnod.eps} \caption{(Color online) Schematics of the a) overlap JJ and b) side-wall passivated JJ, offering minimal volume of passivation region. Left (right) part: before (after) the top-layer deposition. After the edge etch in the trilayer stack, the side-wall oxide is grown by anodic oxidation. The trilayer JJ has in-situ grown tunnel oxides to avoid sources of residual impurities. Patterning of the top wiring and etching below the tunnel barrier yields the tunnel junction.}\label{SidewallAnod} \end{center} \end{figure} Figure \ref{SidewallAnod} depicts the patterning process for our standard overlap (a) and trilayer junctions (b). Our standard process has an oxide layer grown on an ion-mill-cleaned aluminum edge, which was previously chlorine etched.
The top wiring is then etched back below the oxide layer using argon with $\sim 10\%$ chlorine mixture. For the trilayer process, the in-situ sputtered $\Al$-$\Al\O_x$-$\Al$ trilayer has a thermally grown tunnel oxide barrier, formed for 10 min at $140\:\rm{mTorr}$ at room temperature. After deposition of the trilayer stack an edge is etched. The bottom electrode of the trilayer stack is isolated from the top electrode wiring by a self-aligned, nanometer-thin dielectric layer, grown for $\Al$ (or other suitable electrode metals such as $\Nb$) by anodic oxidation \cite{Kroger81SNAP}. The metallic aluminum serves as a partly submerged anode in a liquid electrolytic mixture of $156\;\rm{g}$ ammonium pentaborate, $1120\;\rm{ml}$ ethylene glycol and $760\;\rm{ml}$ $\H_2\O$ at room temperature. A gold-covered metal serves as the cathode, and electric contact to the anode is made outside the electrolyte. By protecting parts of the aluminum electrode with photoresist, only a well-defined area is oxidized by passing a constant current through the Al film and converting the metallic surface to its oxide form. The oxide thickness can be controlled by the voltage drop across the electrolyte. After a light ion clean and top wiring deposition, the resist is patterned to define the junction area. Finally, the trilayer is etched below the tunnel barrier, yielding Josephson junctions with a planar tunnel barrier and isolation dielectric on just one side of the tunnel area. \W{For $\Nb$ junctions a similar patterning process, without minimizing the dielectric loss contribution, was developed using anodic $\Nb$ oxide covered by $\Si\O_2$ \cite{Mueller_01}.} The in-situ grown tunnel oxide avoids sources of residual impurities such as hydrogen, hydroxide or carbon in the interface vicinity, which may remain even after ion-milling in our standard process.
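The voltage control of the anodic oxide thickness can be sketched numerically; the growth constant of roughly 1.3\,nm/V is a typical literature value for anodic aluminum oxide and is an assumption here, not a number from this work.

```python
def anodic_oxide_thickness_nm(voltage_v, nm_per_volt=1.3):
    """Anodic oxide thickness grows roughly linearly with the final cell
    voltage; ~1.3 nm/V is a typical literature value for aluminum oxide
    (an assumption for illustration, not a number from this work).
    """
    return nm_per_volt * voltage_v

# Voltage range needed for a few-nanometer side-wall oxide
for v in (2, 4, 6):
    print(v, "V ->", anodic_oxide_thickness_nm(v), "nm")
```

Under this assumed growth constant, a cell voltage of only a few volts suffices for the nanometer-thin passivation layer, which is why the process is easy to control.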
These trilayer junctions are fully compatible with our standard process using overlap patterning and no junction side-wall.\\ \subsection{Transport} Transport measurements on a $\sim 3\:\rm{\mu m^2}$ reference junction at $100\;\rm{mK}$ are shown in Fig. \ref{IVC}. The critical current $I_c$ is $1.80\:\rm{\mu A}$, with normal resistance $R_n=150\;\rm{\Omega}$ yielding $I_c R_n=270\;\rm{\mu V}$, close to the calculated Ambegaokar-Baratoff value of $I_c R_n=298\;\rm{\mu V}$ for the measured superconducting gap of $190\;\rm{\mu V}$. \W{The back bending of the voltage close to the gap voltage is attributed to self-heating inside the junction.} The retrapping current of $\approx 0.01 \cdot I_c$ indicates a very small subgap current. The current transport is consistent with tunneling, and we can exclude transport via metallic pinholes, located in the $\sim5\;\rm{nm}$ thin side-wall dielectric. As a further check, the $I_c(T)$ dependence is as expected, see inset in Fig. \ref{IVC}, with a critical temperature $T_c$ of $1.2\;\rm{K}$.\\ \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm]{Fig2IVC.eps} \caption{Current-voltage-characteristic at $100\;\rm{mK}$ and $I_c(T)$ dependence (lower inset) of a $3\;\rm{\mu m^2}$ side-wall passivated trilayer junction. Top inset: dielectric circuit elements of the junction. The tunnel oxide capacitance $C_{\rm{t}}$ is connected in parallel with the capacitor formed by the side-wall oxide $C_{\rm{sw}}$.} \label{IVC} \end{center} \end{figure} \section{Measurement} The qubit is a flux-biased phase qubit that is coupled via a tunable mutual inductance to the readout-SQUID \cite{NeeleyPRB08}. \W{The total qubit capacitance $C_{total}$, see upper inset of Fig.
\ref{Fig:2dSpectro}, is given by the tunnel oxide $C_{\rm{t}}$, the anodic side-wall oxide $C_{\rm{sw}}$ and shunt capacitor $C_{\rm{s}} \approx 1250\;\rm{fF}$ dielectric}, provided by a parallel plate capacitor with relative permittivity $\epsilon' \simeq 11.8$ made from hydrogenated amorphous silicon (a-Si:H). The measurement process follows the standard phase qubit characterization \cite{MartinisSC_PhaseQubits09}. \subsection{Spectroscopy} When operated as a qubit, spectroscopy over a range of more than $2.5\;\rm{GHz}$ revealed clean qubit resonance spectra with just two avoided level crossings of $40$--$50\;\rm{MHz}$ coupling strength (at $6.96$ and $7.32\:\rm{GHz}$, as shown in Fig. \ref{Fig:2dSpectro}). The excitation pulse length is $1\;\rm{\mu sec}$, and the qubit linewidth is about $3\:\rm{MHz}$ in the weak power limit. The qubit visibility, measured in a separate experiment, is about $86\%$, which is in the range we found for our standard phase qubits.\\ Qualitatively, the TLS number and coupling strength per qubit are lower than in other trilayer systems \cite{KlineSST09}, which have larger tunnel areas. The TLS density per qubit has roughly the same order of magnitude as in conventional overlap qubits with similar tunnel area dimensions \cite{SteffenPRL06}. \subsection{Relaxation} \begin{figure}[tb] \begin{center} \includegraphics*[width=8.6cm]{Fig32DSpectro.eps} \caption{(Color online) 2D spectroscopy of the side-wall passivated trilayer qubit at $25\;\rm{mK}$. Two avoided level crossings due to qubit-TLS coupling are observed at $6.96$ and $7.32\:\rm{GHz}$ (arrows). Top inset: Dielectric circuit schematics of the qubit. Bottom inset: Qubit relaxation measurement.}\label{Fig:2dSpectro} \end{center} \end{figure} Qubit relaxation measurements via $\pi$-pulse excitation and time-varied delay before the readout pulse were obtained when the qubit was operated outside the avoided level structures.
We measured a relaxation time $T_1$ of about $400\;\rm{nsec}$, as shown in the lower inset of Fig. \ref{Fig:2dSpectro}. This relaxation time is similar to that observed in the overlap qubits, which consistently have $300\textrm{--}500\:\rm{nsec}$ for $\sim2$-$4\;\rm{\mu m^2}$ JJ size. Apart from the change to trilayer junctions, no modification from the previous design was made. \section{Loss estimation} \begin{table}[tb] \begin{tabular}{lllllc} \hline \hline dielectric elements & &capacitance & loss $\tan\delta_i$&$\frac{C_i}{C_{total}}\tan\delta_i$&\\ & &$[\rm{fF}$] & &&\\ \hline shunt capacitor a-Si:H&$C_{\rm{s}}$ & 1250& $2\cdot10^{-5}$&$1.83\cdot10^{-5}$& \cite{oConnellAPL}\\ anodic side-wall oxide& $C_{\rm{sw}}$ & 3.2 & $<1.6\cdot10^{-3}$&$<3.7\cdot10^{-6}$& \cite{MartinisPRL05}\\ tunnel barrier& $C_{\rm{t}}$ & 116&$<1.6\cdot10^{-3}$&$<1.36\cdot10^{-4}$&\cite{MartinisPRL05}\\ \emph{measured} $\tan\delta_{m}$ &&&&$6.6\cdot 10^{-5}$&\\ \hline \hline \end{tabular}\caption{Dielectric parameters for anodic oxide, shunt capacitance, and tunnel barrier. $\tan\delta$ is given for low temperature and low power at microwave frequencies. The capacitance for the tunnel oxide $\Al\O_x$ is taken and corrected for $\Al$ electrodes from Ref. \cite{ZantAPL94} for the dimensions given in the text. For qubits the loss tangent is calculated away from TLS resonances, as the losses in small size anodic side-wall oxide and tunnel barrier are smaller than the bulk value considered for the specific loss contribution $\frac{C_i}{C_{total}}\tan\delta_i$. The \W{measured loss} $\tan \delta_{m}$ is a factor 2-3 smaller than $\tan \delta_{\rm{eff}}$, the weighted sum of all specific loss contributions. \label{Tab:LossEstimation}} \end{table} We estimate the additional dielectric loss due to the sidewall oxide. 
The \emph{effective} loss tangent of a parallel combination of capacitors is given by \[ \tan{\delta_{\rm{eff}}}= \frac{\epsilon''_{\rm{eff}}}{\epsilon'_{\rm{eff}}}= \frac{\sum\limits_{i} \epsilon''_i \frac{A_i}{d_i}} {\sum\limits_{i} \epsilon'_i \frac{A_i}{d_i}}= \frac{\sum\limits_{i}C_i \tan\delta_i}{\sum\limits_{i} C_i} \] with $\epsilon'_i$ and $\epsilon''_i$ being the real and imaginary part of the individual permittivity for capacitor $i$ with area $A_i$ and dielectric thickness $d_i$. Now, we discuss the individual loss contributions for all dielectrics. We design the qubit so that the dominant capacitance comes from the shunt capacitor made from a-Si:H, which has a relatively low loss tangent of $2 \cdot 10^{-5}$. Including the non-negligible capacitance of the tunnel junction, this gives an effective loss tangent to the qubit of $1.83 \cdot 10^{-5}$. Because the junction capacitance is about 10\% of the shunting capacitance, the effective junction loss tangent is 10 times less than the loss tangent of the junction oxide. We statistically avoid the effects of two-level systems by purposely choosing to bias the devices away from the deleterious resonances. The loss tangent of the junction is smaller than the value for bulk aluminum oxide, approximately $1.6 \cdot 10^{-3}$, and probably smaller than $5\cdot 10^{-5}$ since long energy decay times ($500\;\rm{nsec}$) have been observed for an unshunted junction when operated away from resonances \cite{MartinisPRL05}.\\ The anodic side-wall oxide contributes a small capacitance of about 3.2\,fF, which can be calculated assuming a parallel plate geometry. Here, we use the dielectric constant $\epsilon' = 9$ for aluminum oxide, assume an area given by $2\,\mu$m, the width of the overlap, multiplied by $0.1\mu$m the thickness of the base layer, and estimate the thickness of the oxide $\simeq5\,$nm as determined by the anodic process \cite{Kroger81SNAP}. 
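This weighted-sum bookkeeping can be sketched numerically with the capacitances and bulk loss tangents quoted in Table \ref{Tab:LossEstimation}; the measured loss tangent $\tan\delta_m=(T_1\,\omega_{10})^{-1}$ is included for comparison (all numbers are taken from the text, and the bulk values deliberately ignore the statistical TLS avoidance discussed here).

```python
import math

# Capacitances [fF] and bulk loss tangents from Table 1
elements = {
    "shunt a-Si:H":    (1250.0, 2e-5),
    "side-wall oxide": (3.2,    1.6e-3),
    "tunnel barrier":  (116.0,  1.6e-3),
}

c_total = sum(c for c, _ in elements.values())
for name, (c, tan_d) in elements.items():
    # specific contribution C_i/C_total * tan(delta_i)
    print(f"{name}: {c / c_total * tan_d:.2e}")

# Effective loss tangent of the parallel combination (bulk values,
# i.e. without the statistical avoidance of TLS resonances)
tan_eff = sum(c * tan_d for c, tan_d in elements.values()) / c_total
print(f"tan(delta_eff) = {tan_eff:.2e}")

# Measured loss tangent from the energy decay time, delta_m = 1/(T1 * w10)
t1, f10 = 400e-9, 6e9
delta_m = 1 / (t1 * 2 * math.pi * f10)
print(f"tan(delta_m)   = {delta_m:.2e}")
```

The shunt capacitor dominates the capacitance budget, so its small loss tangent sets the floor, while the tunnel barrier's bulk value would dominate the effective loss if TLS resonances were not avoided.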
The anodic oxide is assumed to have a bulk loss tangent of $1.6 \cdot 10^{-3}$ \cite{oConnellAPL}, which gives a net qubit loss contribution of $3.7 \cdot 10^{-6}$, about 5 times lower than for the a-Si:H capacitor. Note that we expect the loss from this capacitance to be even lower because of statistical avoidance of the TLS loss \cite{MartinisPRL05}. The small volume of the capacitor, equivalent to a $\sim 0.5\,\mu\textrm{m}^2$ volume tunnel junction, implies that most biases do not put the qubit on resonance with two-level systems in the anodic oxide. \section{Qubit lifetime and effective loss tangent} From the measured energy decay time $T_1=400\;\rm{nsec}$, we determine the loss tangent of the qubit to be $\delta_m = (T_1\;\omega_{10})^{-1} \approx 6.6\cdot 10^{-5}$, using a qubit frequency $\omega_{10}/2\pi=6\;\rm{GHz}$. This is 3-5 times larger than our estimate of the dielectric losses, as shown in Table \ref{Tab:LossEstimation}. We believe the \W{qubit} dissipation mechanism comes from other energy loss sources as well, such as non-equilibrium quasiparticles \cite{MartinisPRL09}.\\ \section{Conclusion} In conclusion, we have shown that the use of an anodic oxide, self-aligned to the junction edge, does not degrade the coherence of present phase qubits \cite{MartinisSC_PhaseQubits09}. We found performance comparable to the current generation of overlap phase qubits. The new junction geometry may provide a method to integrate submicron-sized, superior-quality junctions (lower TLS densities) grown, for example, by MBE epitaxy, to eliminate the need for shunt dielectrics. Also, our nanometer-thin, three-dimensionally conformal anodic passivation layer can be replaced by a self-aligned isolation dielectric at the side-wall, which could be used for all types of trilayer stacks. Devices were made at the UCSB Nanofabrication Facility, a part of the NSF-funded National Nanotechnology Infrastructure Network. The authors would like to thank D.
Pappas for stimulating discussions. This work was supported by IARPA under grant W911NF-04-1-0204. M.W. acknowledges support from the AvH foundation and M.M. from an Elings Postdoctoral Fellowship.
\section{Introduction} NICMOS parallel observations, taken in parallel with one of the other science instruments on HST, have provided us, for the first time, with a wealth of data at near-infrared wavelengths at HST resolution. The small background at wavelengths of 0.8$\mu$ and 1.6$\mu$ and the high angular resolution of HST make NICMOS a very efficient instrument for studying the faint galaxy population at high redshifts. The NICMOS parallel imaging and grism observations were both made with Camera 3 with a field of view $\sim$ 52$''\times$ 52$''$. The imaging data were taken with broad band filters F110W and F160W at 1.1$\mu$ (J band) and 1.6$\mu$ (H band). The grism data have a spectral resolution of 200 per pixel and cover wavelength regions from 0.8$\mu$ to 1.2$\mu$ (G096) and 1.1$\mu$ to 1.9$\mu$ (G141). We have reduced and analysed the NIC3 parallel imaging data covering $\sim$150 sq. arcminutes and the G141 grism data covering $\sim$65 sq. arcminutes. \section{Emission-line Galaxies and the H$\alpha$ Luminosity Function at $z \sim 1.3$ from the NICMOS/HST Grism Parallel Observations} The recent detections of dust-enshrouded galaxies at $z > 1$ at sub-millimeter wavelengths (Smail, Ivison \&\ Blain 1997; Hughes et al. 1998; Barger et al. 1998; Lilly et al. 1998) suggest that significant amounts of star formation activity at high redshifts may be obscured. Observations at rest-frame UV wavelengths suffer from large uncertainties in extinction corrections. Furthermore, little is known about the properties of normal galaxies in the range $ 1 < z < 2$, where neither the 4000\AA\ break nor the Ly continuum break is easily accessible. The near-IR offers one means of accessing both redshift indicators and measures of star formation within this critical redshift range. We have reduced and analysed the NIC3 parallel grism G141 data, covering $\sim$ 65 sq. arcminutes. The details of the data reduction can be found in McCarthy et al. (1999).
We found a total of 33 emission-line galaxies over an effective co-moving volume of $10^5~h_{50}^{-3}$~Mpc$^3$ for $q_0=0.5$. The implied co-moving number density of emission-line galaxies in the range $0.75 < z < 1.9$ is $3.3\times10^{-4}~h_{50}^{3}$~Mpc$^{-3}$, very similar to that of the bright Lyman break objects at $z \sim 3$. These objects have a median H$\alpha$ luminosity of $2.7 \times 10^{42}$ erg sec$^{-1}$. Most, if not all, of the emission lines detected are either H$\alpha$ or an unresolved blend of H$\alpha$+[NII]6583/6548. This identification is mostly based on the H-band apparent magnitudes, the emission-line equivalent widths and the lack of other detected lines within the G141 bandpass. The median H-band apparent magnitude of $\sim$20.5 (which corresponds to an L$^\star$ galaxy at $z \sim 1.5$) implies that the possibility of the line being [OII] or H$\beta$ is very small. The redshifts of 6 galaxies in our sample have been confirmed by detection of [OII]3727 emission in optical spectra taken with LRIS on the Keck 10m telescope (Teplitz et al. 1999; Malkan et al. 1999). The fraction of AGN contamination in our sample is around 10\%; for details, see McCarthy et al. (1999). Figure 1 shows the spectra for a subset of galaxies in our sample. \begin{figure} \plotfiddle{yanl1.eps}{6.5truecm}{0.0}{50}{45}{-160}{-100} \caption{1-D spectra of a subset of 33 H$\alpha$ emission-line galaxies. For each object we have marked our candidate emission lines with arrows below the line. We plot the entire range of the G141 grism for each spectrum even though parts of the spectrum have fallen beyond the field of view of the detector for several objects. The redshifts, assuming an H$\alpha$ identification for the line, are given for each object.} \end{figure} We compute the H$\alpha$ luminosity function (LF) based on our sample of emission-line galaxies. We have corrected for incompleteness in the original source catalog using simulations.
The final correction is significant only in the faintest luminosity bin and our main result does not sensitively depend on that. All of the detailed results are in Yan et al. (1999). Figure 2 shows our derived H$\alpha$ LF at $z=1.3$ and the local H$\alpha$ LF as measured by Gallego et al. (1995). This plot shows strong evolution in the H$\alpha$ luminosity density from $z\sim 0 $ to $z \sim 1.3$. This is no surprise given the evolution in the ultraviolet luminosity density, but our result provides an independent measure of evolution for H$\alpha$ emission alone. The LF is well fit by a Schechter function over the range $6 \times 10^{41} <$ L$(H\alpha$) $ < 2 \times 10^{43}$ erg sec$^{-1}$ with L$^{*} = 7 \times 10^{42}$ erg sec$^{-1}$ and $\phi^* = 1.7 \times 10^{-3}$ Mpc$^{-3}$ for H$_0=50$~km s$^{-1}$ Mpc$^{-1}$ and q$_0=0.5$. The integrated H$\alpha$ luminosity density at $z \sim 1.3$ (our median $z$) is $1.64\times 10^{40}$~h$_{50}$~erg s$^{-1}$ Mpc$^{-3}$, $\sim$14 times greater than the local value reported by Gallego et al. (1995). \begin{figure} \plotfiddle{yanl2.eps}{6.5truecm}{0.0}{45}{45}{-160}{-100} \caption{H$\alpha$ luminosity function at $ 0.7 < z < 1.9$. The open and filled circles are the data points from our measurements. The open circles represent the raw data and the filled circles are the points corrected for incompleteness. The incompleteness correction is only significant at the faintest luminosity bin. The open triangles show the local H$\alpha$ luminosity function by Gallego et al. (1995). The solid and dashed lines are the best fits to the data at $z \sim 1.3$ and $z \sim 0$ respectively.} \end{figure} We converted the integrated H$\alpha$ luminosity density to a star formation rate (SFR) using the relation from Kennicutt (1999): $\rm {SFR}(M_\odot yr^{-1}) = 7.9 \times 10^{-42} L(H\alpha) (erg~s^{-1}) $. This assumes Case B recombination at $T_e = 10^4$~K and a Salpeter IMF ($0.1 - 100~M_\odot$). 
This conversion factor is about 10\% smaller than the value listed in Kennicutt (1983), the difference reflecting updated evolutionary tracks. While different choices of stellar tracks introduce modest uncertainties in the conversion of UV and H$\alpha$ luminosities to star formation rates, the choice of different IMFs leads to rather large differences. To make consistent comparisons between our results and those in the literature derived from 1500\AA\ and 2800\AA\ UV continuum luminosity densities, we adopt the relation from Kennicutt (1999): $\rm {SFR}(M_\odot yr^{-1}) = 1.4 \times 10^{-28} L(1500-2800\AA) (erg~s^{-1} Hz^{-1}) $. This relation is appropriate for the Salpeter IMF used to derive the H$\alpha$ conversion factor. \begin{figure} \plotfiddle{yanl3.eps}{6.0truecm}{0.0}{45}{45}{-160}{-120} \caption{The global volume-averaged star formation rate as a function of redshift without any dust extinction correction. The open squares represent measurements of the 2800\AA\ or 1500\AA\ continuum luminosity density by Lilly et al. (1996), Connolly et al. (1997) and Steidel et al. (1998), whereas the filled squares are measurements using H$\alpha$~6563\AA\ by Gallego et al. (1995), Tresse \&\ Maddox (1996) and Glazebrook et al. (1998). Our result is shown as the filled circle.} \end{figure} In Figure 3, we plot uncorrected published measurements of the volume-averaged global star formation rate at various epochs. Our result is shown as a filled circle. The star formation rates shown in Figure 3 are calculated from the luminosity densities integrated over the entire luminosity functions, for both H$\alpha$ and the UV continuum. Lilly et al. and Connolly et al. assumed a faint-end slope of $-1.3$ for the UV continuum luminosity functions at $\rm z \le 1$. The 1500\AA\ continuum luminosity function at $z \sim 3 - 4$ measured from Lyman break galaxies by Steidel et al. (1998) has a faint-end slope of $-1.6$.
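The conversions above can be sketched numerically; the Schechter parameters $L^{*}$ and $\phi^{*}$ are the best-fit values quoted earlier, while the faint-end slope $\alpha$ is not stated in the text and the value below is only an illustrative assumption.

```python
import math

def schechter(lum, l_star=7e42, phi_star=1.7e-3, alpha=-1.3):
    """Schechter form phi(x) = phi* x^alpha exp(-x), with x = L/L*.

    L* [erg/s] and phi* [Mpc^-3] are the fit values quoted in the text;
    alpha = -1.3 is an assumed, illustrative faint-end slope.
    """
    x = lum / l_star
    return phi_star * x**alpha * math.exp(-x)

def sfr_from_halpha(l_halpha):
    """Kennicutt (1999): SFR [M_sun/yr] = 7.9e-42 * L(Halpha) [erg/s]."""
    return 7.9e-42 * l_halpha

# Exponential cutoff above L*
print(schechter(7e42), schechter(7e43))

# SFR of the median galaxy and the volume-averaged SFR density
print(f"median SFR  ~ {sfr_from_halpha(2.7e42):.0f} Msun/yr")
print(f"SFR density ~ {sfr_from_halpha(1.64e40):.2f} Msun/yr/Mpc^3")
```

Plugging the integrated H$\alpha$ luminosity density quoted earlier into the Kennicutt relation gives a volume-averaged star formation rate density of order $0.13~M_\odot\,\rm{yr}^{-1}\,\rm{Mpc}^{-3}$ at the median redshift of the sample.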
The clear trend for the longer-wavelength determinations of the star formation rate to exceed those based on UV continua is one of the pieces of evidence for significant extinction at intermediate and high redshifts. The amplitude of the extinction correction is quite uncertain. Our measurement spans $0.7 < z < 1.9$, overlapping with the Connolly et al. photometric redshift sample and allowing a direct comparison between the observed 2800\AA\ luminosity density and that inferred from H$\alpha$. Our H$\alpha$-based star formation rate is three times larger than the average of the three redshift bins measured by Connolly et al. (1997). The star formation rates derived from line or continuum luminosities depend strongly on the choice of IMF, evolutionary tracks, and stellar atmospheres that are input into a specific spectral evolution model. The relevant issue for the present discussion is the ratio of the star formation rates derived from H$\alpha$ and the 2800\AA\ continuum. This ratio differs significantly for the Scalo and Salpeter IMFs and is a function of metallicity (Glazebrook et al. 1998). Our choice of the Salpeter IMF comes close to minimizing the difference between the published UV- and our H$\alpha$-derived star formation rates. The use of a Scalo IMF and solar metallicity would increase the apparent discrepancy by a factor of $\sim 2$. The only model considered by Glazebrook et al. that further reduces the H$\alpha$/2800\AA\ star formation ratio is the Salpeter IMF with the Gunn \& Stryker (1983) spectral energy distributions, and this model still leaves us with a factor of $\sim 2$ enhancement in apparent star formation activity measured at H$\alpha$. If we attribute the entire difference to reddening, the total extinction corrections at 2800\AA\ and H$\alpha$ are large and model-dependent. The calculation is sensitive to the relative geometry of the stars, gas and dust, as well as the adopted reddening curve.
In the extreme case of a homogeneous foreground screen and a MW or LMC reddening curve, we derive A$_{2800} = 2.1$~magnitudes. In local starburst galaxies, differential extinction between the nebular gas and stellar continuum, and scattering, produce an effective reddening curve that is significantly grayer than the MW or LMC curves (Calzetti, Kinney \&\ Storchi-Bergmann 1994; 1996; Calzetti 1997). The Calzetti reddening law (Calzetti 1997) is appropriate for geometries in which the stars, gas and dust are well mixed. In this model, our estimate of the dust extinction at 2800\AA\ is one to two magnitudes larger than in the simple screen case, and is an uncomfortably large correction compared to results from other methods. \section{Extremely Red Objects (EROs)} Several groups have discovered a population of galaxies with extremely red colors, R$-$K $> 5$ or 6. However, the statistics of EROs are still very poor and the nature of these objects remains unclear. The central issue is to understand whether EROs are intrinsically red stellar systems formed at high redshifts in a monolithic collapse or highly reddened starburst galaxies at low to moderate redshifts. The detection of strong sub-mm continuum from the ERO HR~10 (Dey et al. 1999; Cimatti et al. 1998) provides conclusive evidence that some EROs, if not all, are indeed dusty starburst galaxies with star formation rates of $\rm 500-1000 M_\odot~yr^{-1}$ at moderate redshifts ($z \sim 1 -2$). We have obtained deep ground-based optical images of 27 NIC3 fields, yielding $\sim$ 20 square arcminutes. Among these fields, we have identified about a dozen EROs with R$-$H $>5$ and H brighter than 20.6. The surface density of EROs with H $<$ 20.6 and R$-$H $>5$ is roughly 0.6 per sq. arcminute. We also found some evidence that EROs are highly clustered. Among the 27 NIC3 fields, we found two clusters of EROs. Figure 4 shows a cluster of EROs in a single NIC3 field (0.75$^{''}$) where we also have K-band magnitudes.
\begin{figure} \plotfiddle{yanl4.eps}{6.5truecm}{0.0}{50}{50}{-160}{-100} \caption{A cluster of EROs with BVRH images in a single NIC3 field (0.75$''$). All four EROs have R$-$K $>6$, and the brightest two, A and D, have K $\sim 18$. The deep BVR images were taken with the BTC at CTIO.} \end{figure} \acknowledgments We thank the staff of the Space Telescope Science Institute for their efforts in making this parallel program possible. This research was supported, in part, by grants from the Space Telescope Science Institute, GO-7498.01-96A and P423101.
\section{Introduction} The recent increase in digitally available language corpora has made it possible to extend traditional linguistic tools to a vast amount of often user-generated texts. Understanding how these corpora differ from traditional texts is crucial for developing computational methods for web search, information retrieval or machine translation \cite{CrystalInternetLinguistics}. The amount of these texts enables the analysis of language on a previously unprecedented scale \cite{Altmann2016StatisticalLinguistics,Gerlach2014ScalingFrequencies,Altmann2009BeyondWords}, including the dynamics, geography and time scale of language change \cite{Goel2016,Goncalves2017TheEnglish}, social media cursing habits \cite{Wang2014CursingTwitter,Gauthier2015TextHabits,Byrne2014SweetSoccer} or dialectal variations \cite{Blodgett2016c}. From online user activity and content, it is often possible to infer different socio-economic variables at various aggregation scales. Ranging from showing correlations between the main language features on Twitter and several demographic variables \cite{Bokanyi2016}, through predicting heart-disease rates of an area based on its language use \cite{Eichstaedt2015} or relating unemployment to social media content and activity \cite{Llorente2014,bokanyi2017prediction, Pavlicek2015}, to forecasting stock market moves from search semantics \cite{Curme2014}, many studies have attempted to connect online media language and metadata to real-world outcomes. Various studies have analyzed spatial variation in the text of OSN messages and its applicability to several different questions, including user localization based on the content of their posts \cite{Cheng2010,Backstrom2008}, empirical analysis of the geographic diffusion of novel words, phrases, trends and topics of interest \cite{travelingtrends,Eisenstein2012}, and measuring public mood \cite{Mitchell2013}.
While many of the above-cited studies exploit the fact that language use or social media activity varies in space, it is hard to capture the impact of the geographic environment on the words or concepts used. There is a growing literature on how the sheer size of a settlement influences the number of patents, the GDP or the total road length, driven by universal laws \cite{Bettencourt2007}. These observations led to the establishment of the theory of urban scaling \cite{Bettencourt2010b,Alves2015c, Arcaute2015a,Cottineau2017,Bettencourt2013a,Bettencourt2013d,Gomez-Lievano2012, Gomez-Lievano2016a,Yakubo2014SuperlinearCities}, where scaling laws with city size have been observed in various measures such as economic productivity \cite{Lobo2013}, human interactions \cite{Schlapfer2014b}, urban economic diversification \cite{Strumsky2016}, election data \cite{Bokanyi2018UniversalResults}, building heights \cite{Schlapfer2015UrbanSize}, crime concentration \cite{Oliveira2017,Hanley2016a} or touristic attractiveness \cite{Bojic2016ScalingStates}. In our paper, we aim to capture the effect of city size on language use via individual urban scaling laws of words. By examining the so-called scaling exponents, we are able to connect geographical size effects to systematic variations in word use frequencies. We show that the sensitivity of words to population size is also reflected in their meaning. We also investigate how social media language and city size affect the parameters of Zipf's law \cite{Zipf1932SelectedLanguage}, and how the measured Zipf exponent differs from the literature value \cite{Zipf1932SelectedLanguage,Takahashi2018AssessingProperties}. We also show that the number of new words needed in longer texts, described by Heaps' law \cite{Altmann2016StatisticalLinguistics}, exhibits a power-law form on Twitter, indicating a decelerating growth of distinct tokens with city size.
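For intuition, the kind of exponent estimation involved in Zipf's and Heaps' laws can be sketched as below; this is an illustrative log-log least-squares fit over a token list, not the fitting procedure used in the paper.

```python
from collections import Counter
import math

def _slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def zipf_heaps(tokens):
    """Return (zipf_slope, heaps_exponent) from a log-log regression.

    Illustrative only: careful analyses use maximum-likelihood fitting
    rather than a simple regression over all ranks.
    """
    # Zipf: word frequency vs. frequency rank
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    zipf = _slope(xs, ys)
    # Heaps: vocabulary size vs. text length
    seen, curve = set(), []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        curve.append((i, len(seen)))
    xs = [math.log(n) for n, _ in curve]
    ys = [math.log(v) for _, v in curve]
    heaps = _slope(xs, ys)
    return zipf, heaps
```

On a roughly Zipfian token stream, the first slope comes out near $-1$ and the Heaps exponent falls between 0 and 1, reflecting the decelerating vocabulary growth discussed above.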
\section{Methods}

\subsection{Twitter and census data}

We use data from the online social network Twitter, which freely provides approximately 1\% of all sent messages via their streaming API. For mobile devices, users have an option to share their exact location along with the Twitter message. Therefore, some messages contain geolocation information in the form of GPS-coordinates. In this study, we analyze 456 million of these geolocated tweets collected between February 2012 and August 2014 from the area of the United States. We construct a geographically indexed database of these tweets, permitting the efficient analysis of regional features \cite{Dobos2013}. Using the Hierarchical Triangular Mesh scheme for practical geographic indexing, we assigned a US county to each tweet \cite{Szalay2007,Kondor2014}. County borders are obtained from the GADM database \cite{gadm}. Counties are then aggregated into Metropolitan and Micropolitan Statistical Areas using the county to metro area crosswalk file from \cite{CMSsCrosswalk}. Population data for the MSA areas is obtained from \cite{Bureau}. There are many ways a user can post on Twitter. Because a large fraction of the posts comes from third-party apps such as Foursquare, we filter the messages according to their URL field. We keep only messages that have either no source URL, or whose URL after the \texttt{'https://'} prefix matches one of the following SQL patterns: \texttt{'twit\%'}, \texttt{'tl.gd\%'} or \texttt{'path.com\%'}. These are most likely text messages intended for the original use of Twitter, with automated texts such as the phrase 'I'm at' or 'check-in' on Foursquare left out. For the tokenization of the Twitter messages, we use the toolkit published on \url{https://github.com/eltevo/twtoolkit}. We leave out words that are fewer than three characters long, contain numbers, or have the same consecutive character more than twice.
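These token-level filters can be sketched as follows (our own illustrative re-implementation; the toolkit linked above may differ in details):

```python
import re

def keep_token(tok: str) -> bool:
    """Word-level filters described above: drop tokens shorter than
    three characters, tokens containing digits, and tokens repeating
    the same character more than twice in a row."""
    if len(tok) < 3:
        return False
    if any(ch.isdigit() for ch in tok):
        return False
    if re.search(r'(.)\1\1', tok):  # same character three or more times in a row
        return False
    return True

tokens = ["soooo", "cool", "the", "cafe42", "ok", "nice"]
print([t for t in tokens if keep_token(t)])  # ['cool', 'the', 'nice']
```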
We also filter hashtags, characters with high unicode values, usernames and web addresses \cite{Dobos2013}.

\subsection{Urban scaling}

Most urban socioeconomic indicators follow a power-law relation within a given urban system:
\begin{equation}
Y(N)=Y_0\cdot N^\beta,
\label{eq:scaling}
\end{equation}
\noindent where $Y$ denotes a quantity (economic output, number of patents, crime rate etc.) related to the city, $Y_0$ is a multiplication factor, $N$ is the size of the city in terms of its population, and $\beta$ denotes a scaling exponent that captures the dynamics of the change of the quantity $Y$ with city population $N$. $\beta=1$ describes a linear relationship, where the quantity $Y$ is linearly proportional to the population, which is usually associated with individual human needs such as jobs, housing or water consumption. The case $\beta>1$ is called superlinear scaling, and it means that larger cities exhibit disproportionately more of the quantity $Y$ than smaller cities. This type of scaling is usually related to larger cities being disproportionately the centers of innovation and wealth. The opposite case, $\beta<1$, is called sublinear scaling, and is usually related to infrastructural quantities such as road network length, where urban agglomeration effects create more efficiency \cite{Bettencourt2013a}. Here we investigate scaling relations between urban area populations and various measures of Twitter activity and the language on Twitter. When fitting scaling relations on aggregate metrics or on the number of times a certain word appears in a metropolitan area, we always assume that the total number of tweets, or the total number of a certain word $Y_{tot}$, must be conserved in the law.
That means that we have only one parameter in our fit, the value of $\beta$, while the multiplication factor $Y_0$ is determined by $\beta$ and $Y_{tot}$ as follows:
\[\sum_{i=1}^K Y_0\cdot N_i^\beta = Y_{tot},\]
\noindent where the index $i$ denotes different cities, the total number of cities is $K$, and $N_i$ is the population of the city with index $i$. We use the 'Person Model' of Leitao et al. \cite{Leitao2016a}, where this conservation is ensured by the normalization factor, and where the assumption is that out of the total number of $Y_{tot}$ units of output that exist in the whole urban system, the probability $p(j)$ for one person $j$ to obtain one unit of output depends only on the population $N_j$ of the city where person $j$ lives as
\[p(j)=\frac{N_j^{\beta-1}}{Z(\beta)},\]
where $Z(\beta)$ is the normalization constant, i.e. $Z(\beta)=\sum_{j=1}^M N_j^{\beta-1}$, if there are altogether $M$ people in all of the cities. Formally, this model corresponds to a scaling relationship from (\ref{eq:scaling}), where $Y_0=Y_{tot}/Z(\beta)$. But it can also be interpreted as urban scaling being the consequence of the scaling of word choice probabilities for a single person, which has a power-law exponent of $\beta-1$. To assess the validity of the scaling fits for the words, we confirm nonlinear scaling if the likelihood of the model with the fitted $\beta$ sufficiently exceeds that of a model with the fixed exponent $\beta_W$ (the scaling exponent of the total number of words), that is, if the difference between the Bayesian Information Criterion (BIC) values of the two models, $\Delta BIC = BIC_{\beta=\beta_W}-BIC_{\beta\neq \beta_W}$, is sufficiently large \cite{Leitao2016a}: $\Delta BIC >6$. Otherwise, if $\Delta BIC<0$, the linear model fits the scaling better, and between the two values, the fit is inconclusive.
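The Person Model fit and the $\Delta BIC$ test can be sketched as follows; this is an illustrative grid-search MLE on synthetic data (city populations, counts and the fixed null exponent are invented for the example), not the actual implementation of \cite{Leitao2016a}:

```python
import math
import random
from collections import Counter

def log_likelihood(beta, pops, counts):
    """Multinomial log-likelihood of the Person Model: each output unit
    falls into city i with probability N_i**beta / sum_k N_k**beta
    (constant terms dropped)."""
    log_z = math.log(sum(n**beta for n in pops))
    return sum(y * (beta * math.log(n) - log_z)
               for n, y in zip(pops, counts))

def fit_beta(pops, counts, lo=0.5, hi=1.5, steps=1000):
    # crude grid-search MLE; Leitao et al. use a proper optimiser
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=lambda b: log_likelihood(b, pops, counts))

# synthetic urban system: 50 cities, true exponent beta = 1.2
random.seed(7)
pops = [int(10**random.uniform(4, 7)) for _ in range(50)]
weights = [n**1.2 for n in pops]
draws = Counter(random.choices(range(len(pops)), weights=weights, k=20000))
counts = [draws[i] for i in range(len(pops))]

beta_hat = fit_beta(pops, counts)
# Delta BIC against a null model with a fixed exponent (taken as 1.0 here);
# the fitted model pays a penalty of ln(n) for its one free parameter.
n_cities = len(pops)
bic_null = -2 * log_likelihood(1.0, pops, counts)
bic_fit = math.log(n_cities) - 2 * log_likelihood(beta_hat, pops, counts)
delta_bic = bic_null - bic_fit
print(beta_hat, delta_bic > 6)
```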
\subsection{Zipf's law}

We use the following form for Zipf's law that is proposed in \cite{FerreriCancho2001}, and that fits the probability distribution of the word frequencies apart from the very rare words:
\[p(f) = C\cdot f^{-\alpha}\mbox{, if }f>f_{min}.\]
We fit the probability distribution of the frequencies using the \texttt{powerlaw} package of Python \cite{Alstott2014}, which uses a Maximum Likelihood method based on the results of \cite{Goldstein2004,Toeplitz2015,Virkar2014}. $f_{min}$ is the frequency for which the power-law fit is the most probable with respect to the Kolmogorov-Smirnov distance \cite{Alstott2014}. A perhaps more common form of the law connects the rank of a word and its frequency:
\[f(r)=C\cdot r^{-\gamma}.\]
\noindent We use the previous form because the fitting method of \cite{Alstott2014} can only reliably tell the exponent for the tail of a distribution. In the rank-frequency case, the interesting part of the fit would be at the first few ranks, while the most common words are in the tail of the $p(f)$ distribution. The two formulations can easily be transformed into each other (see \cite{FerreriCancho2001}), which gives us
\[\alpha=\frac{1}{\gamma}+1.\]
\noindent This enables us to compare our result to several others in the literature.

\section{Results and discussion}

\subsection{Scaling of aggregate metrics}

First, we checked how three aggregate metrics change with city size: the total number of users, the total number of individual words, and the total number of tweets. Figures \ref{fig:usernum}, \ref{fig:wordcnt} and \ref{fig:tweetcnt} show the scaling relationship data on a log-log scale, and the result of the fitted model. In all cases, $\Delta BIC$ was greater than 6, which confirmed nonlinear scaling. The total counts of tweets and words both have slightly superlinear exponents around 1.02.
The deviation from the linear exponent may seem small, but it means that for a tenfold increase in city size, the measured quantity $Y$ increases by about 5\% beyond the linear expectation, which is already a significant change. The number of users, though, scales sublinearly ($\beta=0.95\pm 0.01$) with the city population.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{population_vs_num_total_users.png}
\caption{Scaling of the number of distinct users who sent a geolocated message with city population. Each point represents an MSA, the fitted line is the best MLE fit for the Person Model of \cite{Leitao2016a}.}
\label{fig:usernum}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.6\textwidth]{population_vs_wordcnt.png}
\caption{Scaling of the total number of words with city population. Each point represents an MSA, the fitted line is the best MLE fit for the Person Model of \cite{Leitao2016a}.}
\label{fig:wordcnt}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.6\textwidth]{population_vs_total_tweetcnt.png}
\caption{Scaling of the total number of geolocated messages with city population. Each point represents an MSA, the fitted line is the best MLE fit for the Person Model of \cite{Leitao2016a}.}
\label{fig:tweetcnt}
\end{center}
\end{figure}
It has been shown in \cite{Schlapfer2014b} that total communication activity in human interaction networks grows superlinearly with city size. This is in line with our findings that the total number of tweets and the total word count scale superlinearly. However, the exponents are not as big as those of the number of calls or call volumes in the previously mentioned article ($\beta\in[1.08,1.14]$), which suggests that scaling exponents obtained from a mobile communication network cannot automatically be translated to a social network such as Twitter.
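For intuition, such exponents can also be read off the log-log plots with ordinary least squares on $(\log N,\log Y)$, a common alternative to the MLE fit used here. A minimal sketch on synthetic data (all numbers illustrative) also shows that $\beta=1.02$ translates to a factor $10^{0.02}\approx 1.05$, i.e.\ roughly a 5\% excess over linearity per tenfold population increase:

```python
import math
import random

def loglog_slope(x, y):
    """OLS slope of log10(y) against log10(x): an estimate of beta."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# synthetic cities obeying Y = 0.1 * N**1.02 with multiplicative noise
random.seed(0)
pops = [10**random.uniform(4, 7) for _ in range(200)]
tweets = [0.1 * n**1.02 * 10**random.gauss(0, 0.05) for n in pops]

beta = loglog_slope(pops, tweets)
# percentage excess over a linear law for a tenfold larger city
excess = 10**(beta - 1) * 100 - 100
print(beta, excess)
```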
\subsection{Individual scaling of words}

For the 11732 words that had at least 10000 occurrences in the dataset, we fitted scaling relationships using the Person Model. The distribution of the fitted exponents is visible in Figure~\ref{fig:expdistr}. There is a most probable exponent of approximately 1.02, which corresponds roughly to the scaling exponent of the overall word count. This is the exponent that we use as an alternative model for deciding nonlinearity, because a word that has a scaling law with the same exponent as the total number of words has the same relative frequency in all urban areas. The linear and inconclusive cases calculated from $\Delta BIC$ values are located around this maximum, as shown in different colors in Figure~\ref{fig:expdistr}. In this figure, linearly and nonlinearly classified fits might appear in the same exponent bin because of the similarity of the fitted exponents combined with differences in the goodness of fit. Words with a smaller, ``sublinear'' exponent do not keep up with the overall text growth; thus, their relative frequency decreases as city size increases. Words with a greater, ``superlinear'' exponent are relatively more prevalent in the texts of bigger cities. There are slightly more words that scale sublinearly (5271, 57\% of the nonlinear words) than superlinearly (4011, 43\% of the nonlinear words). Three example fits from the three scaling regimes are shown in Figure~\ref{fig:examples}.
\begin{figure}[h!]
\begin{center}
\subfloat[Sublinear scaling]{\includegraphics[width=0.3\textwidth]{squirrels.png}}
\subfloat[Linear scaling]{\includegraphics[width=0.3\textwidth]{the.png}}
\subfloat[Superlinear scaling]{\includegraphics[width=0.3\textwidth]{artists.png}}
\caption{Three scaling relationships from the sublinear (a), linear (b), and superlinear (c) scaling regimes with the MLE fits explained in the Methods section.}
\label{fig:examples}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{exp_distr.png}
\caption{Distribution of word exponents. Statistically significant deviations from the scaling of the total number of words are marked by color codes. The peak around 1.02 marks words that have an exponent around the exponent of the total number of words. The majority of words follow a superlinear or a sublinear scaling law. Note that multiple categories can appear in one bin according to the $\Delta BIC$ of the fits.}
\label{fig:expdistr}
\end{center}
\end{figure}
\begin{table}
\begin{tabular}{p{\textwidth}}
\texttt{the you and that for this just lol like with have but get not your was all love what are when out know good now got can about one time day how they too shit want back need why she people right some see going today fuck will really her}
\end{tabular}
\caption{The top 50 words as ranked according to the $BIC$ values for a Person Model with fixed exponent $\beta=1.0207$. These are the words that correspond most to the scaling of the overall word volume, thus, they are the words that appear most homogeneously in the texts of all urban areas.}
\label{table:toplinear}
\end{table}
We sorted the words falling into the ``linear'' scaling category according to their $BIC$ values showing the goodness of fit for the fixed $\beta$ model. The first 50 words in Table~\ref{table:toplinear} according to this ranking are some of the most common words of the English language, apart from some swearwords and abbreviations (e.g. lol) that are typical of Twitter language \cite{Bokanyi2016}. These are the words that are most homogeneously present in the text of all urban areas.
\begin{table} \begin{tabular}{p{\textwidth}} sublinear:\\ \texttt{flood severe thunderstorm warning statement april lsu february bama ole unc shxt beside deer shelby kentucky ian fishing dynasty dorm freeze nigha carolina roomie walmart december january tornado gotti mountains mite wind kelsey campus exams mart roommates frat mud roads lmbo biology duke logan roommate ruzzle exam pinterest brooke bahaha slowly further mam hunting bahahaha thanking dang dwn hush softball bailey haley porch rec gates yuu november memphis marshall haven storms ncaa cody renee tanning oomf heck paige nosey casino southern muh bre lab tub truck cowboy jeep seth messy lawd layin tourney trashy puke library gah lake tweeps rae semester wreck johns bonfire studying until quit state gotcha anatomy prolly knw eagle wrk lifting flag lastnight courtney awhile tweetin bend ann abby march douche snuggle fog bracket hannah bedtime golf sittin gosh lynn whiskey nerves rain road town fixing hut whatcha drinkin driveway damnit country moore riley lyin duck}\\ \\ superlinear:\\ \texttt{hoy gracias por para feliz con cuando que siempre verdad algo donde amor ver tiempo mejor semana estas alguien bien jajaja mas del todo jajajaja vez tus ama tengo vamos porque buenos eres linda muy quiero puedo hola las mucho nada sabes mañana amo soy les tambien vas dormir buenas amigo hay madrid mis bueno gusta brunch mal jaja uno flight familia dos cara delayed landed dice casa amigos loco grande papi fin traffic tix com lounge puerto heights brazil rico deja gate madre solo pls luis plane event international bon bella oscar sin mil damm ily mon studio maria carlos lmfao italian das film thx omw peep era salon omfg van jose london sushi blocks security vip mah ilysm hookah fitness cos ariana fashion via park jenny performing pronto artists stadium kanye restaurant awk melissa market danny ale booked leo inspired connect rft fab culture artist demi blasting design} \end{tabular} \caption{The most sublinearly 
($0.54<\beta<0.93$) or superlinearly ($1.13<\beta<1.41$) scaling words out of the 5000 most frequent words with small bootstrapped error $\Delta \beta <0.1$. Sublinear words are sorted in an ascending, superlinear words in a descending order with respect to $\beta$.}
\label{table:wordlist}
\end{table}
Among the first 5000 words ranked by occurrence, the most sublinearly and superlinearly scaling words can be seen in Table~\ref{table:wordlist}. Their exponent differs significantly from that of the total word count, and their meaning can usually be linked to the exponent range qualitatively. The sublinearly scaling words mostly correspond to weather services reporting (flood 0.54, thunderstorm 0.61, wind 0.85), certain slang and swearword forms (shxt 0.81, dang 0.88, damnit 0.93), outdoor-related activities (fishing 0.82, deer 0.81, truck 0.90, hunting 0.87) and certain companies (walmart 0.83). There is a longer tail in the range of superlinearly scaling words than in the sublinear regime in Figure \ref{fig:expdistr}. This tail corresponds to Spanish words (gracias 1.41, por 1.40, para 1.39 etc.), which could not be separated from the English text, since the shortness of tweets makes automated language detection very noisy. Apart from the Spanish words, again some special slang or swearwords (deadass 1.52, thx 1.16, lmfao 1.17, omfg 1.16), flight-reporting (flight 1.25, delayed 1.24 etc.) and lifestyle-related words (fitness 1.15, fashion 1.15, restaurant 1.14, traffic 1.22) dominate this end of the distribution. Thus, when compared to the slightly nonlinear scaling of the total amount of words, not all words follow the growth homogeneously with this same exponent. Though a significant number of words remains in the linear or inconclusive range according to the statistical model test, most words are sensitive to city size and exhibit a super- or sublinear scaling.
Those that fit the linear model best correspond to a kind of `core-Twitter' vocabulary, which has a lot in common with the most common words of the English language, but also shows some Twitter-specific elements. A visible group among the most super- or sublinearly scaling words is related to the abundance or lack of the elements of urban lifestyle (e.g. deer, fitness). Thus, the imprint of the physical environment appears in a quantifiable way in the growth of word occurrences as a function of urban population. Swearwords and slang, which are quite prevalent in this type of corpus \cite{Gauthier2015TextHabits,Wang2014CursingTwitter}, appear at both ends of the spectrum, which suggests that some specific forms of swearing disappear with urbanization, while the overall share of swearing on Twitter grows with city size. The peak consisting of Spanish words at the superlinear end of the exponent distribution marks the stronger presence of the biggest non-English speaking ethnicity in bigger urban areas. This is confirmed by fitting the scaling relationship to the Hispanic or Latino population \cite{RankingCenter} of the MSA areas ($\beta=1.31\pm 0.14$, see SI), which, despite the large error, is clearly superlinear.

\subsection{Zipf's law on Twitter}

Figure~\ref{fig:zipf} shows the distribution of word counts in the overall corpus. The power-law fit gave a minimum count $f_{min}=13$, and an exponent $\alpha=1.682\pm0.001$. To check whether this law depends on city size, we fitted the same distribution for the individual cities, and according to Figure~\ref{fig:zipfcity}, the exponent gradually decreases with city size, that is, it decreases with the length of the text.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{zipf.png}
\caption{Probability distribution of word frequencies in the overall corpus and power-law fitted by the \texttt{powerlaw} package.}
\label{fig:zipf}
\end{center}
\end{figure}
That the relative frequency of some words changes with city size means that the frequency of words versus their rank, Zipf's law, can vary from metropolitan area to metropolitan area. We found that the exponent of Zipf's law depends on city size, namely that the exponent decreases as text size increases. It means that with the growth of a city, rarer words tend to appear in greater numbers. The values obtained for the Zipf exponent are in line with the theoretical bounds 1.6--2.4 of \cite{FerreriCancho2005TheLanguage}. In the communication efficiency framework \cite{FerreriCancho2005TheLanguage,FerreriCancho2003}, a decreasing $\alpha$ can be understood as decreased communication efficiency due to the increased number of different tokens, which requires more effort in the process of understanding from the reader. Using more specific words can also be a result of the 140-character limit, which was the maximum length of a tweet at the time of the data collection; the effect may be similar to that of texting \cite{Crystal2008Texting}. This suggests that the carrying medium has a huge impact on the exact values of the parameters of linguistic laws. The Zipf exponent measured in the overall corpus is also much lower than the $\alpha = 2$ corresponding to the original law \cite{Zipf1932SelectedLanguage}. We do not observe the second power-law regime either, as suggested by \cite{Montemurro2001} and \cite{FerreriCancho2001}. Because most observations so far hold only for books or corpora that contain longer texts than tweets, our results suggest that the nature of communication, in our case Twitter itself, affects the parameters of linguistic laws.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{zipf_vs_population.png}
\caption{Dependency of the Zipf exponent on city population. The exponent decreases as the number of words in a city grows.}
\label{fig:zipfcity}
\end{center}
\end{figure}

\subsection{Vocabulary size change}

\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{population_vs_vocabcnt.png}
\caption{Scaling of the total number of distinct words with city population. Each point represents an MSA, the fitted line is the best MLE fit for the Person Model of \cite{Leitao2016a}.}
\label{fig:vocabcnt}
\end{center}
\end{figure}
Figure~\ref{fig:vocabcnt} shows the vocabulary size as a function of the metropolitan area population, and the power-law fit. It shows that, in contrast to the previous aggregate metrics, the vocabulary size grows markedly sublinearly ($\beta=0.68$) with city size. This relationship can also be translated to a dependency on the total word count, which would give $\beta=0.68/1.02=0.67$, again a sublinear scaling. The decrease of the Zipf exponent for bigger cities (or bigger Twitter corpora), which suggests a relatively decreasing number of words with lower frequencies, is thus confirmed. There is evidence that, as languages grow, there is a decreasing marginal need for new words \cite{Petersen2012}. In this sense, the decelerated extension of the vocabulary in bigger cities can also be regarded as language growth.

\section{Conclusion}

In this paper, we investigated the scaling relations in citywise Twitter corpora coming from the Metropolitan and Micropolitan Statistical Areas of the United States. We observed a slightly superlinear scaling with city population for the total volume of the tweets and words created in a city.
When observing the scaling of individual words, we found that a certain core vocabulary follows the scaling relationship of the bulk text, but most words are sensitive to city size, and their frequencies either increase at a higher or a lower rate with city size than that of the total word volume. At both ends of the spectrum, the meaning of the most superlinearly or most sublinearly scaling words is representative of their exponent. We also examined the increase in the number of distinct words with city size, which has an exponent in the sublinear range. This shows that there is a decreasing amount of new words introduced in larger Twitter corpora.
\clearpage
\setlength{\parskip}{0em}
\subsubsection*{Data availability}
Owing to Twitter's policy we cannot publicly share the original dataset used in this analysis. However, aggregated results from which all calculations can be recreated are available at \url{http://bokae.web.elte.hu/papers/2018/word_scaling} and from the Dryad Digital Repository: \url{https://doi.org/10.5061/dryad.824f24t}, with the review link \url{https://datadryad.org/review?doi=doi:10.5061/dryad.824f24t}.
\subsubsection*{Competing interests}
The authors declare no competing interests.
\subsubsection*{Author contributions}
E.B. and G.V. designed the study, E.B. and D.K. analyzed the data, E.B., D.K. and G.V. synthesized the results, E.B. and D.K. wrote the manuscript. All authors gave final approval for publication and agree to be held accountable for the work performed therein.
\subsubsection*{Funding}
The authors thank the support of the National Research, Development and Innovation Office of Hungary (grant no. KH125280).
\subsubsection*{Research Ethics}
We were not required to complete an ethical assessment prior to conducting our research.
\subsubsection*{Animal Ethics}
We were not required to complete an ethical assessment prior to conducting our research.
\subsubsection*{Permission to carry out fieldwork} No permissions were required prior to conducting our research. \subsubsection*{Acknowledgements} Not applicable. \clearpage \bibliographystyle{unsrt}
\section{Introduction and Summary} The Riemann hypothesis is perhaps the best-known open problem of mathematics. It hypothesizes that all non-real zeros of Riemann's zeta function $\zeta(s)$, $s\in\mathbb{C}$, lie on the straight line $\frac12+i\mathbb{R}$, where $\zeta(s)$ is obtained from Euler's real (Dirichlet-)series \vspace{-20pt} \begin{equation}\label{zetaFCT} \zeta(s) = {\textstyle\sum\limits_{n\in\mathbb{N}}^{}}\; \frac{1}{n^s}, \quad s > 1, \end{equation} \vspace{-15pt} \noindent by analytic continuation to the complex plane; see \cite{Edwards} for a good introduction. The importance of Riemann's hypothesis derives from the fact that its truth would confirm deep putative insights into the distribution of the natural prime numbers --- a holy grail of number theory. This feat would also have applications: chiefly in encryption, but also in physics, see \cite{NHFG,SchuHu,Watkin}. It continues to fascinate the minds of professionals and amateurs alike. The latter group includes Benoit Cloitre, who has been documenting his experimental mathematical approach to number theory in general, and to the Riemann hypothesis in particular, on his homepage \cite{Cloitre}. Some years ago he pondered (``for no particular reason'')\footnote{\noindent{Private communication by B.C. on 02.2016; we took the liberty to attach $_{\mbox{\tiny{\sc{Cl}}}}$ at Cloitre's~P$(t)$.}} the deterministic infinite trigonometric product \vspace{-20pt} \begin{equation}\label{CloitrePROD} \textstyle\prod\limits_{n\in\mathbb{N}}\left[\frac23+\frac13\cos\left({t}/{n^{2}}\right)\right] =: {\mbox{P}_{\mbox{\tiny\sc{Cl}}}}(t),\quad t\in\mathbb{R}, \end{equation} \vspace{-15pt} \noindent which appears to be fluctuating chaotically about some monotone trend; see Fig.~1 and Fig.~2 below. Cloitre ``guessed'' that $\ln {\mbox{P}_{\mbox{\tiny\sc{Cl}}}}(t) \sim -C \sqrt{t}$ when $t\to\infty$ for \emph{some} constant $C>0$, which captures the trend asymptotically, and he asked us whether we can prove this. 
The proof requires only elementary undergraduate mathematics and will be given in section \ref{THM} (in fact, we prove a stronger result). But why does ${\mbox{P}_{\mbox{\tiny\sc{Cl}}}}(t)$ fluctuate apparently chaotically about its monotone trend? And what does this have to do with the Riemann hypothesis? Statistical physics offers some answers. We note (see section \ref{PROB}) that any trigonometric product \vspace{-20pt} \begin{equation}\label{genCLOITREprod} \textstyle\prod\limits_{n\in\mathbb{N}}\left[1- p +p\cos\left({t}/{n^s}\right)\right] =: {\mbox{Cl}_{p;s}^{}}(t) ,\quad t\in\mathbb{R}, \quad p\in (0,1]\ \&\ s>\frac12, \end{equation} \vspace{-15pt} \noindent is \emph{the characteristic function of a} ``\emph{random Riemann-$\zeta$ function}'' $\Omega^\zeta_{p}(s)$, i.e. ${\mbox{Cl}_{p;s}^{}}(t)\equiv \Phi^{}_{\Omega^\zeta_p(s)}(t)$, where $\Phi^{}_{\Omega}(t) := \mbox{Exp}\big(e^{it\Omega}\big)$ with ``Exp'' denoting \emph{expected value}. Here, \vspace{-20pt} \begin{equation}\label{RANDOMzetaFCT} \Omega^\zeta_{p}(s) := {\textstyle\sum\limits_{n\in\mathbb{N}}^{}} R_p^{}(n)\frac{1}{n^s}, \quad s >\tfrac12,\quad p\in(0,1], \end{equation} \vspace{-15pt} \noindent where $\{R_p^{}(n)\in\{-1,0,1\}\}^{}_{n\in\mathbb{N}}$ is a sequence of i.i.d. random coefficients, with Prob$(R_p^{}(n)=0) = 1-p$ and Prob$(R_p^{}(n)=1) = p/2=$ Prob$(R_p^{}(n)=-1)$. We draw heavily on the probabilistically themed publications by Kac \cite{Kac}, Morrison \cite{Morrison}, and Schmuland \cite{Schmuland}, in which \emph{the random harmonic series} ${\Omega^{\mbox{\tiny{harm}}}}\equiv\Omega^\zeta_{1}(1)$ is explored; in \cite{Schmuland} also the special case $\Omega^\zeta_{1}(2)$ is explored. We register that $p=1$ and $s=1$ in ${\mbox{Cl}_{p;s}^{}}(t)$ yields (cf. 
sect.5.2 in \cite{Morrison}) \vspace{-20pt} \begin{equation}\label{MorrisonPRODharmonic} \Phi^{}_{\Omega^{\mbox{\tiny{harm}}}}(t) = \textstyle\prod\limits_{n\in\mathbb{N}}^{}\cos \frac{t}{n}, \end{equation} \vspace{-12pt}\noindent while Cloitre's ${\mbox{P}_{\mbox{\tiny\sc{Cl}}}}(t)$ is the special case $p=\frac13$ and $s=2$ in ${\mbox{Cl}_{p;s}^{}}(t)$. Both $\zeta(s)$ and $-\zeta(s)$ are possible outcomes for such random Riemann-$\zeta$ functions $\Omega^\zeta_{p}(s)$, namely the extreme cases in which each $R_p^{}(n), n\in\mathbb{N}$, comes out $1$, respectively $-1$. While this is trivial, we anticipate that also $1/\zeta(s)$ is a possible outcome for $\Omega^\zeta_{p}(s)$, which is nontrivial and going to be interesting! After introducing the notion of \emph{typicality} for the random walks associated to $\Omega^\zeta_{p}(s)$ we will ask how typical $\zeta(s)$ and $1/\zeta(s)$ are. It should come as no surprise that $\zeta(s)$ is an extremely atypical outcome of a random Riemann-$\zeta$ walk, and so is $-\zeta(s)$. However, for the particular value of $p=6/\pi^2$, its reciprocal $1/\zeta(s)$ does exhibit several aspects of typicality. Intriguingly, as pointed out to us by Alex Kontorovich, \emph{if $1/\zeta(s)$ also exhibits a certain {particular} aspect of typicality, then the Riemann hypothesis is true, and false if not}! This will be extracted from \cite{Edwards} in section \ref{typical}. Which of the many aspects of typicality are exhibited by $1/\zeta(s)$ is an interesting open question which may go beyond settling the Riemann hypothesis. We will use a paradox to argue, though, that $1/\zeta(s)$ cannot possibly exhibit each and every aspect of typicality, i.e. $1/\zeta(s)$ cannot be a perfectly typical random Riemann-$\zeta$ walk. So much for the connection between Cloitre's ${\mbox{Cl}_{\hfrac13;2}^{}}(t)$ and Riemann's $\zeta$ function. 
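The identity ${\mbox{Cl}_{p;s}^{}}(t)=\Phi^{}_{\Omega^\zeta_p(s)}(t)$ can be checked numerically. Below is a sketch comparing the truncated product for $p=\frac13$, $s=2$ with a die-rolling Monte Carlo estimate of $\mbox{Exp}\big(\cos(t\,\Omega^\zeta_{1/3}(2))\big)$ (by symmetry the imaginary part vanishes); the truncation depth and sample size are illustrative choices:

```python
import math
import random

def cl_product(t, p=1/3, s=2, nmax=500):
    """Truncated product  prod_{n<=nmax} [1 - p + p*cos(t / n**s)]."""
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1 - p + p * math.cos(t / n**s)
    return prod

def omega_sample(s=2, nmax=500):
    """One die-rolling realisation of Omega_{1/3}(s): roll 1 -> +1,
    roll 6 -> -1, any other face -> 0, so that p = 1/3."""
    total = 0.0
    for n in range(1, nmax + 1):
        roll = random.randint(1, 6)
        if roll == 1:
            total += 1 / n**s
        elif roll == 6:
            total -= 1 / n**s
    return total

random.seed(42)
t = 3.0
mc = sum(math.cos(t * omega_sample()) for _ in range(3000)) / 3000
print(cl_product(t), mc)  # the two values should agree up to MC noise
```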
As for our inquiry into Cloitre's surmise that $\ln {\mbox{Cl}_{\hfrac13;2}^{}}(t) \sim -C\sqrt{t}\; \left(t\rightarrow\infty\right)$ for {some} $C>0$, curiously some well-known probability laws emerged unexpectedly. Using elementary analysis we will prove in section \ref{THM} that if $p\in(0,1]\ \&\ s> \frac12$, then there exists $K_{p;s}^{}\!>\!0$, and $F_{p;s}^{}(|t|)$ bounded by $|F_{p;s}^{}(|t|)| \leq \exp\big(K_{p;s}^{} |t|^{1/(s+1)}\big)$, so that \vspace{-20pt} \begin{equation}\label{genCLOITREtrendANDfluc} \forall\,t\in\mathbb{R}:\quad {\mbox{Cl}_{p;s}^{}}(t) = \exp\bigl(- C_{p;s}^{} \,|t|^{1/s}\bigr) F_{p;s}^{}(|t|) \end{equation} \vspace{-5pt} \noindent with \vspace{-10pt} \begin{equation}\label{generalTRENDcoeff} C_{p;s}^{} = -\frac1s\int_0^\infty\ln |{1-p+p\cos\xi}|\frac{1}{\xi^{1+1/s}}{\rm{d}}\xi; \end{equation} \vspace{-10pt} \noindent when $p\in(0,\frac12)$ the integral can be evaluated in terms of a rapidly converging series expansion. This result not only vindicates Cloitre's surmise as a corollary, we note that the factor $\exp\bigl({- C_{p;s}^{} \,|t|^{1/s}}\bigr)$ at r.h.s.(\ref{genCLOITREtrendANDfluc}) in itself is a characteristic function --- of Paul L\'evy's \emph{stable laws}; see \cite{probBOOK}. Stable laws exist for all $s\geq 1/2$, but here $s = 1/2$ is ruled out because $C_{p;1/2}=\infty$. Be that as it may, stable L\'evy laws (which have applications in physics \cite{GaroniFrankel}) were discovered by answering a completely different question \cite{probBOOK,GaroniFrankel}, and the probabilistic reason why they would feature in the analysis of the random Riemann-$\zeta$ functions is presently obscure. Lest the reader gets the wrong impression that random Riemann-$\zeta$ functions were small perturbations of L\'evy random variables, we emphasize that they are not! 
Although the ``chaotic factor'' $F_{p;s}^{}(|t|)$ in (\ref{genCLOITREtrendANDfluc}) is \emph{overwhelmed} by $\exp(-C_{p;s}^{}|t|^{1/s})$ when $|t|$ is large enough, $F_{p;s}^{}(|t|)$ is not approaching 1 and in fact responsible for relatively large chaotic fluctuations of ${\mbox{Cl}_{p;s}^{}}(t)$ about the L\'evy trend; see Fig.~1\ \&\ Fig.~2. \includegraphics[scale=0.44]{CloitrePplusTRENDsmallX.jpg} \hspace{-20pt} \includegraphics[scale=0.44]{CloitrePplusUBOUNDmoderateX.jpg} In section \ref{TandF} we will see that the ``empirically unpredictable'' behavior of ${\mbox{Cl}_{\hfrac13;2}^{}}(t)$ is reflected in a \emph{fractal-like} structured probability distribution $\varrho^{\zeta}_{1/3;s}(d\omega)$ of $\Omega^\zeta_{1/3}(s)$ obtained by Fourier transform of ${\mbox{Cl}_{\hfrac13;2}^{}}(t)$ (section \ref{PROB}). This is also illustrated in section \ref{RRZf} by random sampling of the Riemann-$\zeta$ walks. We will show, though, that $\varrho^{\zeta}_{p;s}(d\omega)$ is not supported on a true fractal. Random variables supported on a fractal are discussed in \cite{DFT}, \cite{Morrison}, and \cite{PerezSchlagSolomyak}; see our Appendix on \emph{power walks}. The remainder of our paper supplies the details of our inquiry, and we conclude with a list of open questions. \newpage \section{Random Riemann-$\zeta$ functions}\label{RRZf} The random Riemann-$\zeta$ functions $\Omega_{p}^\zeta(s)$ given in (\ref{RANDOMzetaFCT}) have random coefficients $R_p^{}(n)\in\{-1,0,1\}$ that can be generated by a two-coin tossing process. In this vein, let's write $R_p^{}(n) = \sigma(n)|R_p^{}(n)|$, where $\sigma(n)\in\{-1,1\}$ and $|R_p^{}(n)|\in\{0,1\}$. One now repeatedly tosses both, a generally loaded coin with Prob$(H)=p\in(0,1]$ (where ``$H$'' means ``head''), and a fair one, independently of each other and of all the previous tosses. 
The $n$-th toss of the generally loaded coin decides whether $|R_p^{}(n)|=0$ or $|R_p^{}(n)|=1$; let's stipulate that $|R_p^{}(n)|=1$ when $H$ shows, which happens with probability $p$, and $|R_p^{}(n)|=0$ else. The concurrent and independent toss of the fair coin decides whether $\sigma(n)=+1$ or $\sigma(n)=-1$, either outcome being equally likely. Incidentally, we remark that the $R_{1/3}^{}(n)$ can also be generated by rolling a fair die --- if the $n$-th roll shows 1, then $R_{1/3}^{}(n)=1$, if it shows 6 then $R_{1/3}^{}(n)=-1$, and $R_{1/3}^{}(n)=0$ otherwise (which is the case $2/3$ of the time, in the long run). Also, it is clear that when $p=1$ then the loaded coin is superfluous, i.e. $R_1^{}(n)\in\{-1,1\}$ is generated with a single, fair coin. This completes the explanation of the ``experimental generation'' of our random Riemann $\zeta$ functions. Now let us understand which type of objects we have defined.\vspace{-10pt} \subsection{Random Riemann-$\zeta$ walks and their kin}\label{RRZw} Evaluating a random Riemann-$\zeta$ function $\Omega_{p}^\zeta(s)$ for given $p\in(0,1]$ at any particular $s>\frac12$ turns (\ref{RANDOMzetaFCT}) into a numerical random series. Recalling that an infinite series is defined as the sequence of its partial sums, viz. \vspace{-18pt} \begin{equation}\label{RZwDEF} \Omega_{p}^\zeta(s) = \left\{{\textstyle\sum\limits_{n=1}^N} R_p^{}(n)\frac{1}{n^s}\right\}^{}_{N\in\mathbb{N}}, \end{equation} \vspace{-10pt} \noindent and interpreting $\sum_{n=1}^N R_p^{}(n)\frac{1}{n^s}$ as the position of a walker after $N$ random steps $R_p^{}(n)\frac{1}{n^s}, n=1,...,N$, we can identify such an evaluation of $\Omega_{p}^\zeta(s)$ for given $p\in(0,1]$ at a particular $s>\frac12$ with a \emph{random walk on the real $\omega$ line}. 
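The two-coin (and fair-die) recipe for generating the coefficients $R_p(n)$ described above is straightforward to simulate. The following Python sketch is our own illustration (the function name \texttt{sample\_r} is ours, not the paper's); it draws outcomes $r_p(n)$ and checks the long-run frequencies of moves and signs.

```python
import random

def sample_r(p, rng):
    """One outcome r_p(n) in {-1, 0, 1}: a loaded coin with Prob(H) = p
    decides 'move or stay'; an independent fair coin decides the sign."""
    if rng.random() < p:                       # loaded coin shows H: move
        return 1 if rng.random() < 0.5 else -1
    return 0                                   # loaded coin shows T: stay put

rng = random.Random(0)                         # fixed seed for reproducibility
draws = [sample_r(1/3, rng) for _ in range(100_000)]
freq_move = sum(1 for r in draws if r != 0) / len(draws)   # should be near p = 1/3
freq_plus = sum(1 for r in draws if r == 1) / len(draws)   # should be near p/2 = 1/6
```

With $p=\frac13$ this reproduces the fair-die description: in the long run one third of the steps are non-zero, split evenly between $+1$ and $-1$.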
If the $n$-th toss of the pair of coins comes out on ``move,'' the walker moves $1/n^s$ units in the direction determined by the fair coin; otherwise he stays put (note that such a ``non-move'' is called a ``step,'' too). Starting at $\omega=0$, he keeps carrying out these random steps ad infinitum. We call this a ``random Riemann-$\zeta$ walk,'' and its outcome (whenever it converges) is a ``random Riemann-$\zeta$ function'' evaluated at $s$. Absolute convergence is guaranteed for each and every such walk when $s>1$ (because the series (\ref{zetaFCT}) for $\zeta(s)$ converges absolutely for $s>1$), and by a famous result of Rademacher conditional convergence holds with probability 1 when $s>\frac12$, see \cite{Kac}, \cite{Morrison}, and \cite{Schmuland}. Since the harmonic series diverges logarithmically, the outcome of the random walks with $\frac12 <s\leq 1$ is distributed over the whole real line; see \cite{Schmuland} for $s=1$. To have some illustrative examples, we first pick $s=2$ and $p=\frac13$. In Fig.~3 we display (in black) the fractal tree (cf. \cite{Mandelbrot}, chpt.16; note its self-similarity) of all possible walks for $s=2$ when $p\in(0,1)$, plotted top-down to resemble a Galton board figure. (The tree is truncated after $9$ steps, for more steps would only produce a black band between the current cutoff and the finish line). Also shown (in red) is a computer-generated sample of $7$ random Riemann-$\zeta$ walks with $p=\frac13$ \&\ $s=2$. \medskip \hspace{275pt} \boxed{\textrm{{Fractal\ tree\ \&\ 7\ walks}}}\vspace{-15pt} \includegraphics[scale=0.40]{Cloitretreeoverlay9.png} \medskip We also exhibit a histogram of the endpoints of $10^5$ walks with 1000 steps (Fig.~4). \medskip \hspace{270pt} \boxed{\textrm{{Histogram\ ($s=2$, $p=\tfrac13$)}}}\vspace{-15pt} \includegraphics[scale=0.315]{cloitreplot100k.png} We next pick $s=1$ and two different choices of $p$, namely $p=\frac13$ and $p=1$.
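Endpoint data like that underlying Fig.~4 can be generated in a few lines; the sketch below is our own illustration (not the authors' code) and samples truncated random Riemann-$\zeta$ walks with $p=\frac13$, $s=2$.

```python
import random

def walk_endpoint(p, s, n_steps, rng):
    """Position sum_{n<=n_steps} r_p(n)/n^s of a truncated random Riemann-zeta walk."""
    pos = 0.0
    for n in range(1, n_steps + 1):
        if rng.random() < p:                   # loaded coin: the walker moves
            step = n ** (-s)
            pos += step if rng.random() < 0.5 else -step
    return pos

rng = random.Random(1)
endpoints = [walk_endpoint(1/3, 2, 1000, rng) for _ in range(2000)]
mean = sum(endpoints) / len(endpoints)         # near 0, by the sign symmetry
```

Every endpoint lies in $(-\zeta(2),\zeta(2))$, since $|{\rm pos}|\leq\sum_{n\geq1}1/n^2<\pi^2/6$; a histogram of \texttt{endpoints} (e.g. via matplotlib) reproduces the central-peak structure of Fig.~4.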
For $s=1$ the random Riemann-$\zeta$ walks become so-called ``random harmonic series,'' which have been studied by Kac \cite{Kac}, Morrison \cite{Morrison}, and Schmuland \cite{Schmuland} in the special case that $p=1$. When $p\neq 1$ these harmonic random walks are interesting variations on their theme. We refrain, though, from trying to display the infinitely long harmonic random walk tree, for it is difficult to illustrate it faithfully. Yet the histograms of the endpoints of $10^5$ harmonic walks with $10^3$ steps when $p=\frac13$ (Fig.~5) and $p=1$ (Fig.~6) are easily generated. \bigskip \hspace{270pt} \boxed{\textrm{{Histogram\ ($s=1$, $p=\tfrac13$)}}}\vspace{-15pt} \includegraphics[scale=0.3]{cloitre100kplotpthirdq1.png} \noindent \hspace{270pt} \boxed{\textrm{{Histogram\ ($s=1$, $p=1$)}}}\vspace{-15pt} \includegraphics[scale=0.3]{cloitre100kplotp1q1.png} \noindent Our Fig.~6 resembles the smooth theoretical PDF of the endpoints of the harmonic walk with $p=1$, displayed in Fig.~3 of \cite{Morrison} and Fig.~1 of \cite{Schmuland}, quite closely; cf. the histogram based on $5,000$ walks with 100 steps displayed in Fig.~4 of \cite{Morrison}. When $p=1$ one is always on the move, so the histogram is quite broad. Our Fig.~5 indicates that reducing $p$ (in this case to $p=1/3$) will lead to the build-up of a ``central peak.'' The peak is even more pronounced in our Fig.~4 (where $p=1/3$ and $s=2$) which reveals a rich, conceivably self-similar structure with side peaks, and side peaks to the side peaks. Our Fig.~4 also makes one wonder whether the peaks, if not fractal, could indicate that the first or second derivative of a theoretical PDF might blow up. These questions will be investigated in section \ref{THM}. But first, having introduced random Riemann-$\zeta$ functions, it is appropriate to pause and explain their relationship with the Riemann hypothesis.
\section{Typicality and the Riemann Hypothesis}\label{typical} Loosely speaking, a \emph{typical feature} of a random Riemann-$\zeta$ walk is a feature which ideally occurs ``with probability 1'' (strong typicality), or at least ``in probability'' (weak typicality); see below. A \emph{(strongly or weakly) perfectly typical random Riemann-$\zeta$ walk} is an empirical outcome of a random Riemann-$\zeta$ function evaluated at $s$ which \emph{exhibits all (either strongly or weakly) typical features}. Since coin tosses are involved, for simplicity we first look at the set of all infinitely long sequences of fair coin tosses. \subsection{Typicality for coin toss sequences}\label{typicalCOINS} We identify the events $H$ with $1$ and $T$ with $0$, and introduce the Bernoulli random variable $B\in\{0,1\}$, with Prob$(B=1)=\frac12$. Let the $B_n$, $n\in\mathbb{N}$, be independent, identically distributed copies of $B$. Then by the \emph{strong law of large numbers} (see \cite{probBOOK}) one has \begin{equation} \mbox{Prob}\left(\lim_{N\to\infty}\frac{1}{N}{\textstyle\sum\limits_{n=1}^N} B_n = \frac12\right) =1 \end{equation} whereas the \emph{weak law of large numbers} (see \cite{probBOOK}) says that for any $\epsilon>0$, \begin{equation} \lim_{N\to\infty} \mbox{Prob}\left(\left|\frac{1}{N}{\textstyle\sum\limits_{n=1}^N} B_n - \frac12\right| > \epsilon\right) =0. \end{equation} Let $b_n^{}\in\{0,1\}$ denote the outcome of the coin toss $B_n$. Then, based on either the strong or the weak law of large numbers, we say that ``$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N b_n = \frac12$'' is a \emph{strongly, or weakly, typical feature} for such an empirical sequence of outcomes $\{b_n^{}\}_{n\in\mathbb{N}}^{}$. Of course, not every empirical sequence $\{b_n^{}\}_{n\in\mathbb{N}}^{}$ does exhibit this typical feature; take, for instance, $\{b_n^{}\}_{n\in\mathbb{N}}^{} =\{1,1,1,1,...\}$.
We therefore say that $\{1,1,1,1,...\}$ is \emph{an atypical empirical sequence} for the fair coin tossing process. More generally, \emph{any} empirical sequence $\{b_n^{}\}_{n\in\mathbb{N}}^{}$ for which $\Big|\frac{1}{N}\sum_{n=1}^N b_n - \frac12\Big| > \epsilon$ occurs infinitely often is said to be an \emph{atypical empirical sequence} for this coin tossing process. Next, consider the sequence $\{b_n^{}\}_{n\in\mathbb{N}}^{} =\{1,0,1,0,1,0,...\}$. Could this be a perfectly typical sequence? Clearly, $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N b_n = \frac12$, but anyone who has ever flipped a coin a dozen times, again and again, knows that one does not ``typically'' obtain six consecutive 1-0 pairs --- here we borrow the common sense usage of ``typicality;'' indeed, on average the alternating pattern of six consecutive 1-0 pairs occurs less than once in 4,000 repetitions of a dozen coin tosses, and the likelihood of $k$ consecutive 1-0 pairs in a trial of length $2k$ decreases to zero as $k$ increases to infinity. Yet, in an infinite sequence of coin tosses, with probability 1 the pattern of six consecutive 1-0 pairs occurs infinitely often; more generally, for \emph{any} $k\in\mathbb{N}$, with probability 1 a pattern with $k$ consecutive 1s, or a pattern with $k$ consecutive 0s, as well as $k$ consecutive 1-0 pairs, all occur infinitely often. Thus \emph{recurrences of such $k$-patterns are strongly typical features} of this coin tossing process. Let's look at one more {strongly typical feature} --- a variation on this theme will turn out to be related to the Riemann hypothesis. Namely, since by either the weak or the strong law of large numbers we can informally say that when $N$ is large enough then $\sum_{n=1}^N b_n \approx \frac12 N$, i.e. $\sum_{n=1}^N (2b_n-1) \approx 0$ in a perfectly typical empirical sequence, we next ask for the typical size of the fluctuations about this theoretical mean, i.e. how large can they be, typically?
\emph{Khinchin's law of the iterated logarithm} states that for any $\epsilon > 0$, with probability 1 the event $|\sum_{n=1}^N (2B_n -1)| > (1-\epsilon) \sqrt{2N\ln\ln N}$ will occur infinitely often, while the event $|\sum_{n=1}^N (2B_n -1)| > (1 +\epsilon) \sqrt{2N\ln\ln N}$ has probability 0 of occurring infinitely often in the sequence $\{B_n\}_{n\in\mathbb{N}}^{}$. Thus, \begin{equation} \Big|\sum_{n=1}^N (2b_n -1)\Big| > (1-\epsilon) \sqrt{2N\ln\ln N} \end{equation} occurs for infinitely many $N$ in a perfectly typical empirical sequence $\{b_n^{}\}_{n\in\mathbb{N}}^{}$, and \begin{equation} \Big|\sum_{n=1}^N (2b_n -1)\Big| > (1+\epsilon) \sqrt{2N\ln\ln N} \end{equation} will happen at most finitely many times. Countless further features occur with probability 1, many of them trivially (like Prob$(\sum_{n=1}^NB_n<N+\epsilon)=1$), but many others not, and some of them are deep. This makes it plain that it is impossible, or at least extremely unlikely, that anyone will ever give an \emph{explicit} characterization of a \emph{perfectly typical empirical sequence} of coin tosses. (It is even conceivable that no such sequence exists!) By contrast, once a particular feature has been proven to occur with probability 1 (the strong version), or in probability (the weak version), it is straightforward to ask whether a given empirical sequence exhibits this particular \emph{aspect of typicality}. We are now armed to address the connection of the Riemann hypothesis with the notion of typicality of random Riemann-$\zeta$ functions. \subsection{Typicality for random Riemann-$\zeta$ functions}\label{typicalRH} We begin by listing a few typical features of random Riemann-$\zeta$ walks. Let $r_p^{}(n)\in\{-1,0,1\}$ denote the outcome of the random variable $R_p^{}(n)$, and for given $p\in(0,1]$ and $s>\frac12$ let $\omega_{p}^\zeta(s)$ denote the outcome for the random Riemann-$\zeta$ walk $\Omega_{p}^\zeta(s)$, i.e.
\begin{equation}\label{RZwOUT} \omega_{p}^\zeta(s) = \left\{{\textstyle\sum\limits_{n=1}^N} r_p^{}(n)\frac{1}{n^s}\right\}^{}_{N\in\mathbb{N}}. \end{equation} Then the fair coin tossing process of the previous subsection now yields that \begin{equation}\label{RZwTYPa} \lim_{N\to\infty}\textstyle{\frac1N\sum\limits_{n=1}^N} r_p^{}(n) =0 \end{equation} is a feature typically exhibited by an outcome $\omega_{p}^\zeta(s)$, independently of $p$ and $s$. Next, \begin{equation}\label{RZwTYPb} \lim_{N\to\infty}\textstyle{\frac1N\sum\limits_{n=1}^N} |r_p^{}(n)| = p \end{equation} is a $p$-dependent feature typically exhibited by an $\omega_{p}^\zeta(s)$, independently of $s$. Lastly, Rademacher's result mentioned above actually shows that typically \begin{equation}\label{RZwTYPc} \lim_{N\to\infty} \textstyle{\sum\limits_{n=1}^N} r_p^{}(n)\frac{1}{n^s} = \omega_{p}^\zeta(s) \end{equation} exists on the real $\omega$ line whenever $s>\frac12$. All these are strongly typical features. We now inquire into the typicality of the following outcomes of random Riemann-$\zeta$ functions with $s>\frac12$: Riemann's $\zeta$-function (\ref{zetaFCT}) itself, viz. $\zeta(s)=\sum_{n\in\mathbb{N}}1/n^s$ understood as a (not necessarily convergent) sequence of its partial sums; its reciprocal \begin{equation}\label{reciZETAfct} \frac{1}{\zeta(s)}=\textstyle{\sum\limits_{n\in\mathbb{N}}}\mu(n)\frac{1}{n^s}, \end{equation} where $\mu(n) \in\{-1,0,1\}$ is the M\"obius function (see \cite{Edwards}); and also \begin{equation}\label{ratioZETAfcts} \frac{\zeta(2s)}{\zeta(s)} =\textstyle{\sum\limits_{n\in\mathbb{N}}}\lambda(n)\frac{1}{n^s}, \end{equation} where $\lambda(n) \in\{-1,1\}$ is Liouville's $\lambda$-function (see \cite{OEIS}). All are possible outcomes of a random Riemann-$\zeta$ walk with $s>\frac12$, any\footnote{$\zeta(s)$ and $\zeta(2s)/\zeta(s)$ are possible outcomes also when $p=1$, while $1/\zeta(s)$ is not.} $p\in(0,1)$. 
In terms of the outcomes $r_p^{}(n)$ of the coin tossing process, Riemann's zeta function corresponds to $r_p^{}(n)=1$ for all $n\in\mathbb{N}$, its reciprocal to $r_p^{}(n)=\mu(n)$, and the ratio ${\zeta(2s)}/{\zeta(s)}$ to $r_p^{}(n)=\lambda(n)$. Can any of these $\omega_{p}^\zeta(s)$ be perfectly typical outcomes, at least for some $p$ values? As to $\zeta(s)$ itself, it is clear that it must be atypical, since $r_p^{}(n)=1$ for all $n\in\mathbb{N}$ manifestly violates the $p$- and $s$-independent typicality feature (\ref{RZwTYPa}). Yet $\zeta(s)$ does not necessarily violate each and every aspect of typicality! For instance, if $p=1$ then (\ref{RZwTYPb}) holds for $\zeta(s)$ (though not for any other $p\in(0,1)$). Moreover, while the sequence of its partial sums diverges to infinity when $s\in(\frac12,1]$ in violation of the typicality feature (\ref{RZwTYPc}), this feature is verified by $\zeta(s)$ if $s>1$. In any event, since $\zeta(s)$ is an \emph{extreme outcome}, it is intuitively clear that it will violate most aspects of typicality --- in this sense, we say that $\zeta(s)$ is \emph{extremely atypical} for all $p\in(0,1]$. On to its reciprocal. It is known that the \emph{Prime Number Theorem}\footnote{This is the statement that the number of primes less than $x$ is asymptotically given by $\int_2^x\frac{d\xi}{\ln\xi}$, with relative error going to zero as $x\to\infty$; see \cite{Edwards}.} is equivalent to the actual frequencies of the values $\mu(n)=1$ and $\mu(n)=-1$ being equal in the long run, so ${1}/{\zeta(s)}$ exhibits the typicality feature (\ref{RZwTYPa}). It is also known that the actual frequency of values $\mu(n)\neq 0$ equals $1/\zeta(2)\; (=6/\pi^2)$ in the long run, so ${1}/{\zeta(s)}$ also exhibits the typicality feature (\ref{RZwTYPb}) if $p=1/\zeta(2)$ (though clearly not for any other $p$ value). Furthermore, ${1}/{\zeta(s)}$ satisfies the typicality feature (\ref{RZwTYPc}) for all $s>\frac12$. 
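The long-run frequency facts just quoted for $\mu(n)$ are easy to check numerically. The sketch below is our own illustration (the linear smallest-prime-factor sieve is a standard algorithm, not taken from this paper); it computes $\mu(n)$ for $n\leq 10^5$, compares the empirical frequencies with $1/\zeta(2)=6/\pi^2$, and also records the running sums $\sum_{n\leq N'}\mu(n)$, which stay well below $\sqrt{N'}$ in this range.

```python
import math

def mobius_upto(N):
    """Moebius function mu(1..N) via a linear (smallest-prime-factor) sieve."""
    mu = [1] * (N + 1)
    is_comp = [False] * (N + 1)
    primes = []
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:           # p^2 divides i*p, hence mu(i*p) = 0
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 100_000
mu = mobius_upto(N)
freq_nonzero = sum(1 for n in range(1, N + 1) if mu[n] != 0) / N  # ~ 6/pi^2
plus  = sum(1 for n in range(1, N + 1) if mu[n] == 1)
minus = sum(1 for n in range(1, N + 1) if mu[n] == -1)

mertens, peak = mu[1], 0.0           # running M(N') and max of |M(N')|/sqrt(N'), N' >= 2
for n in range(2, N + 1):
    mertens += mu[n]
    peak = max(peak, abs(mertens) / math.sqrt(n))
```

The observed `freq_nonzero` is close to $6/\pi^2\approx 0.6079$, and `plus` and `minus` agree to within a tiny fraction of $N$, in line with the Prime Number Theorem equivalence mentioned above.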
Could ${1}/{\zeta(s)}$ perhaps be a {perfectly typical} random Riemann-$\zeta$ function for all $s>\frac12$ when $p=1/\zeta(2)$? Recall that this would mean that for each $s>\frac12$ the pertinent actual walk is a \emph{perfectly typical walk}, i.e. a walk which \emph{exhibits all features} of the theoretical random-walk law \emph{which occur with probability} 1 (or at least \emph{in probability}). Similarly, the Prime Number Theorem is equivalent to the actual frequencies of the values $\lambda(n)=1$ and $\lambda(n)=-1$ being equal in the long run, so also the ratio ${\zeta(2s)}/{\zeta(s)}$ exhibits the typicality feature (\ref{RZwTYPa}). Furthermore, if (and only if) $p=1$ then ${\zeta(2s)}/{\zeta(s)}$ exhibits the typicality feature (\ref{RZwTYPb}). Lastly, ${\zeta(2s)}/{\zeta(s)}$ also exhibits the typicality feature (\ref{RZwTYPc}) for all $s>\frac12$. Could also ${\zeta(2s)}/{\zeta(s)}$ perhaps be a {perfectly typical} random Riemann-$\zeta$ function for all $s>\frac12$ when $p=1$? A moment of reflection reveals that this would be \emph{truly paradoxical}: if ${1}/{\zeta(s)}$ and / or ${\zeta(2s)}/{\zeta(s)}$ are {perfectly typical} random Riemann-$\zeta$ functions for the mentioned $p$-values, then one can learn a lot about them by studying what is {typical} for random walks with those $p$-values, without ever looking at ${1}/{\zeta(s)}$ or ${\zeta(2s)}/{\zeta(s)}$. Of course, if one learns something about ${1}/{\zeta(s)}$ and / or ${\zeta(2s)}/{\zeta(s)}$, then one also learns something about $\zeta(s)$ --- but how can one learn something about an extremely atypical random Riemann-$\zeta$ function by studying what is typical for such random walks? 
The obvious way out of this dilemma is to conclude:\hfill \centerline{\emph{Neither ${1}/{\zeta(s)}$ nor ${\zeta(2s)}/{\zeta(s)}$ can be perfectly typical random Riemann-$\zeta$ functions!}} \newpage The upshot is that both ${1}/{\zeta(s)}$ and ${\zeta(2s)}/{\zeta(s)}$ \emph{must feature} {some} \emph{atypical empirical statistics}, encoded in the sequences $\{\mu(n)\}^{}_{n\in\mathbb{N}}$ and $\{\lambda(n)\}^{}_{n\in\mathbb{N}}$. Obviously these atypical features must be inherited from the correlations in the distribution of prime numbers; recall that the coin tossing process, by contrast, is correlation-free. Since the Riemann hypothesis about the location of the non-real zeros of $\zeta(s)$ is equivalent to some detailed knowledge about the distribution of and correlations amongst prime numbers, it may well be that some particular atypical empirical feature of ${1}/{\zeta(s)}$ and ${\zeta(2s)}/{\zeta(s)}$ will be equivalent to the Riemann hypothesis. Which kind of feature, if any, remains anybody's best guess --- to the best of our knowledge. Surprisingly, and indeed intriguingly, it is known that \emph{a certain typical feature} beyond the agreement of empirical and theoretical frequencies, if indeed exhibited by the $1/\zeta(s)$ walk for $p=1/\zeta(2)$, \emph{is equivalent to the Riemann hypothesis}! We are grateful to Alex Kontorovich for having pointed this out to us. Namely, let us extend the definition of the random Riemann-$\zeta$ walk $1/\zeta(s)$ to $s=0$, \emph{not} by analytic continuation, but in terms of the sequence of its partial sums: \begin{equation}\label{reciRZwATnull} \frac{1}{\zeta(0)} := \left\{\textstyle\sum\limits_{n=1}^N \mu(n)\right\}^{}_{N\in\mathbb{N}}. \end{equation} Note that for $s\leq \frac12$ the $1/\zeta(s)$ random walk may well wander off to infinity, but the rate at which this happens is crucial (recall Khinchin's law of the iterated logarithm which we mentioned in subsection \ref{typicalCOINS}).
As explained in \cite{Edwards}, chpt.12.1, Littlewood proved the equivalence: \begin{equation}\label{reciRZwATnullGROWTH} \forall\epsilon>0:\ \lim_{N\to\infty}N^{-\frac12-\epsilon}\Big|\sum_{n=1}^N\mu(n)\Big| =0\quad \leftrightarrow\quad \mbox{The\ Riemann\ hypothesis\ is\ true}. \end{equation} And as explained in \cite{Edwards}, chpt.12.3, Denjoy noted that if one assumes that the $\pm1$ values of $\mu(n)$ are distributed as if they were generated by fair and independent coin flips, then the central limit theorem implies that $\lim_{N\to\infty}N^{-\frac12-\epsilon}|\sum_{n=1}^N\mu(n)| =0$ holds with probability 1. Of course, $\mu(n)=0$ is still determined by its formula, but the empirical frequency of $\mu(n)=0$ occurrences is $1-6/\pi^2$ in the long run, and by adopting Denjoy's reasoning one can show that for $p= 6/\pi^2$ one has that \begin{equation}\label{randomDenjoy} \forall\epsilon>0:\ \mbox{Prob}\left(\lim_{N\to\infty}N^{-\frac12-\epsilon}\Big|\sum_{n=1}^N R^{}_{6/\pi^2}(n)\Big| =0\right) =1. \end{equation} Thus l.h.s.(\ref{reciRZwATnullGROWTH}) would be a typical feature exhibited by the $1/\zeta(0)$ walk at $p= 6/\pi^2$. \medskip \section{The Characteristic Function of $\Omega^\zeta_p(s)$}\label{PROB} We now show that the infinite trigonometric products ${\mbox{Cl}_{p;s}^{}}(t)$ given in (\ref{genCLOITREprod}) are characteristic functions of the $\Omega_{p}^\zeta(s)$, i.e. ${\mbox{Cl}_{p;s}^{}}(t) = \mbox{Exp}\big(\exp\big(it\Omega_{p}^\zeta(s)\big)\big)$, where ``Exp'' is \emph{expected value} (not to be confused with the exponential function $\exp$). 
Since $\Omega_{p}^\zeta(s)$ is an infinite sum of independent random variables $R_p^{}(n)/n^s$ (see (\ref{RANDOMzetaFCT})), $\exp\!\big(it\Omega_{p}^\zeta(s)\big)$ is an infinite product of independent random variables $\exp\!\big(itR_p^{}(n)/n^s\big)$, and by a well-known theorem in probability theory, expected values of products of independent random variables are products of their individual expected values. And so we have (temporarily ignoring issues of convergence) \begin{eqnarray}\label{charFUNC} \mbox{Exp}\Big(\exp\big(it\Omega_{p}^{\zeta}(s)\big)\Big) \!\!&=&\!\! \mbox{Exp}\Big(\prod_{n\in\mathbb{N}}\exp\big(itR_{p}^{}(n)\tfrac{1}{n^s}\big)\Big) = \prod_{n\in\mathbb{N}}\mbox{Exp}\Big(\exp\big(itR_{p}^{}(n)\tfrac{1}{n^s}\big)\Big)\cr \!\!&=&\!\! \prod_{n\in\mathbb{N}} \Big(\tfrac{1}{2}p e^{-i\hfrac{t}{n^s}} + (1-p) + \tfrac{1}{2}p e^{i\hfrac{t}{n^s}}\Big) \cr \!\!&=&\!\! \prod_{n\in\mathbb{N}} \Big(1-p + p \cos\big(\tfrac{t}{n^s}\big)\Big) \equiv {\mbox{Cl}_{p;s}^{}}(t),\vspace{-15pt} \end{eqnarray} where we have used Euler's formula to rewrite $\tfrac12\big(e^{i\hfrac{t}{n^s}} +e^{-i\hfrac{t}{n^s}}\big)=\cos\big(\hfrac{t}{n^s}\big)$. That was straightforward. Next we explain the relationship between the characteristic functions ${\mbox{Cl}_{p;s}^{}}(t)$ of $\Omega^\zeta_p(s)$ and the probability distribution $\varrho^{\zeta}_{p;s}(d\omega)$ of the endpoints of these random Riemann-$\zeta$ walks on the $\omega$ line. Formally this is accomplished by realizing that ${\mbox{Cl}_{p;s}^{}}(t)$ is the inverse Fourier transform of $\varrho^{\zeta}_{p;s}(d\omega)$, viz. \begin{eqnarray}\label{charFUNCisFOURIERinverseOFdistribution} \mbox{Exp}\Big(\exp\big(it\Omega_{p}^{\zeta}(s)\big)\Big) = \int_{\mathbb{R}} e^{it\omega}\varrho^{\zeta}_{p;s}(d\omega). \end{eqnarray} Therefore we obtain $\varrho^{\zeta}_{p;s}(d\omega)$ by taking the Fourier transform of ${\mbox{Cl}_{p;s}^{}}(t)$.
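The computation (\ref{charFUNC}) can be cross-checked numerically: a truncated product $\prod_{n\leq n_{\max}}\big(1-p+p\cos(t/n^s)\big)$ should agree with a Monte Carlo estimate of $\mbox{Exp}\big(\cos(t\,\Omega_p^\zeta(s))\big)$ (the imaginary part vanishes by the sign symmetry of the walk). The sketch below is our own illustration; the truncation levels and sample sizes are ad hoc choices.

```python
import math, random

def cl_product(p, s, t, n_max=2000):
    """Truncated product approximation to the characteristic function Cl_{p;s}(t)."""
    prod = 1.0
    for n in range(1, n_max + 1):
        prod *= 1 - p + p * math.cos(t / n ** s)
    return prod

def cl_monte_carlo(p, s, t, n_steps=200, n_samples=5000, seed=2):
    """Estimate Exp(exp(i t Omega)) by sampling truncated walks;
    by the +/- symmetry only the real part Exp(cos(t Omega)) survives."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        pos = 0.0
        for n in range(1, n_steps + 1):
            if rng.random() < p:
                pos += n ** (-s) if rng.random() < 0.5 else -n ** (-s)
        total += math.cos(t * pos)
    return total / n_samples

t = 3.0
product_value = cl_product(1/3, 2, t)
mc_estimate = cl_monte_carlo(1/3, 2, t)
```

For $s=2$ the neglected tail factors differ from 1 by $O(t^2/n^4)$, so modest truncation already suffices; the residual discrepancy is dominated by Monte Carlo noise of order $1/\sqrt{n_{\rm samples}}$.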
As recalled in \cite{Morrison}, the Fourier transform of a product equals the \emph{convolution product} (``$*$'', see below) of the Fourier transforms of its factors, and so we find \begin{eqnarray}\label{RZdistribution} \varrho^{\zeta}_{p;s}(d\omega) \!\!&=&\!\! \Big(*\prod_{n\in\mathbb{N}} \frac{1}{2\pi}\int_\mathbb{R} e^{-i\omega t} \Big[\tfrac{1}{2}p e^{-i\hfrac{t}{n^s}} + (1-p) + \tfrac{1}{2}p e^{i\hfrac{t}{n^s}}\Big]dt\Big) (d\omega)\cr \!\!&=&\!\! \Big(*\prod_{n\in\mathbb{N}} \Big[\tfrac{1}{2}p\delta_{-\frac{1}{n^s}}^{} + (1-p)\delta_{0}^{} + \tfrac{1}{2}p \delta_{\frac{1}{n^s}}^{}\Big]\Big)(d\omega); \end{eqnarray} here, ``$*\prod$'' denotes repeated convolution (cf. \cite{Morrison}), and $\delta_{\omega_k}^{}$ is a Dirac measure.\footnote{If $I\subset\mathbb{R}$ is any closed interval, then $\int_I \delta_{\omega_k}^{} (d\omega)= 1$ if $\omega_k^{}\in I$ and $\int_I \delta_{\omega_k^{}}^{} (d\omega)= 0$ if $\omega_k^{}\not\in I$.} This distribution looks intimidating, but it only conveys what we know already! Namely, formally (\ref{RZdistribution}) is the limit $N\to\infty$ of the $N$-fold partial convolution products\footnote{We temporarily suppress the suffix ``$\zeta$'' so as not to overload the notation.} \begin{eqnarray}\label{NstepsDELTAconcise} \varrho^{(N)}_{p;s}(d\omega) := \Big(*\prod_{n=1}^N \Big[\tfrac{1}{2}p\delta_{-\frac{1}{n^s}}^{} + (1-p)\delta_{0}^{} + \tfrac{1}{2}p \delta_{\frac{1}{n^s}}^{}\Big]\Big)(d\omega). \end{eqnarray} Now recall that the {convolution} product, which for two integrable functions $f$ and $g$ is defined by $(f*g)(\omega) = \int f(\omega')g(\omega-\omega')d\omega'$, extends to delta measures where it acts as follows: $\delta_a^{}*\delta_b^{} = \delta_{a+b}^{}$ (see \cite{Morrison}). 
Therefore, by multiplying out the convolution product at r.h.s.(\ref{NstepsDELTAconcise}), using the distributivity of ``$*$'' one finds that $\varrho^{(N)}_{p;s}(d\omega)$ is a weighted sum of point measures at the possible outcomes \vspace{-20pt} \begin{equation}\label{RZwNout} \omega_{p}^{(N)}(s) : = \sum_{n=1}^N r_p^{}(n)\frac{1}{n^s} \in {\cal{L}}_{p}^{(N)}(s),\quad r_p^{}(n) \in \{-1,0,1\}, \vspace{-5pt} \end{equation} of the random walk truncated after $N$ steps, \vspace{-20pt} \begin{equation}\label{RZwN} \Omega_{p}^{(N)}(s) := \sum_{n=1}^N R_p^{}(n)\frac{1}{n^s}. \end{equation} \vspace{-15pt}\noindent The set of locations ${\cal{L}}_{p}^{(N)}(s)\subset\mathbb{R}$ is finite, and generically\footnote{It may in principle happen for certain discrete values of $s$ (but not of $p$) that different $N$-step paths lead to the same outcome $\omega_{p}^{(N)}(s)$. However, since $s>\frac12$ is a continuous parameter, this degenerate situation is not generic. Note though that it may well happen that we humans ``inadvertently'' pick precisely those non-generic cases, for instance if degeneracy occurs when $s\in\mathbb{N}$!} consists of $3^N$ distinct real points if $p\in(0,1)$, and of $2^N$ distinct real points if $p=1$. Thus, $\varrho^{(N)}_{p;s}(d\omega)$ becomes \begin{equation}\label{NstepsDELTA} \varrho^{(N)}_{p;s}(d\omega) = \sum_{{\cal{L}}_{p}^{(N)}(s)} \mbox{P}\big(\omega_p^{(N)}(s)\big)\delta_{\omega_p^{(N)}(s)}^{} (d\omega); \end{equation} \vspace{-10pt}\noindent the sum runs over all ${\omega_p^{(N)}(s)\in {\cal{L}}_{p}^{(N)}(s)}$, and $\mbox{P}\big(\omega_p^{(N)}(s)\big):=\mbox{Prob}\big( \Omega_{p}^{(N)}(s)=\omega_p^{(N)}(s)\big)$.
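The finite-$N$ measures (\ref{NstepsDELTA}) can be built exactly on a computer by representing each measure as a dictionary of point masses and implementing $\delta_a^{}*\delta_b^{}=\delta_{a+b}^{}$ directly. The sketch below is our own illustration; note that for integer $s$ the degeneracies mentioned in the footnote can occur, so the number of support points may fall below $3^N$.

```python
from collections import defaultdict

def step_measure(p, s, n):
    """Three-point law of the n-th step R_p(n)/n^s."""
    h = n ** (-s)
    return {-h: p / 2, 0.0: 1 - p, h: p / 2}

def convolve(mu1, mu2):
    """Convolution of finitely supported measures: delta_a * delta_b = delta_{a+b}."""
    out = defaultdict(float)
    for a, wa in mu1.items():
        for b, wb in mu2.items():
            out[round(a + b, 12)] += wa * wb   # round keys to merge float duplicates
    return dict(out)

def rho_N(p, s, N):
    """N-fold partial convolution product rho^{(N)}_{p;s}."""
    mu = {0.0: 1.0}
    for n in range(1, N + 1):
        mu = convolve(mu, step_measure(p, s, n))
    return mu

p, s, N = 1/3, 2, 6
rho = rho_N(p, s, N)
total_mass = sum(rho.values())
top = max(rho)          # the all-(+1) endpoint sum_{n<=N} 1/n^2, of mass (p/2)^N
```

The extreme right endpoint is reached by exactly one path (all moves to the right), so its mass is $(p/2)^N$, while the total mass remains 1 under each convolution.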
These probabilities $\mbox{P}\big(\omega_p^{(N)}(s)\big)$ are readily computed from the tree diagram in Fig.~3, or by inspecting (\ref{RZwNout}): if in order to reach $\omega_p^{(N)}(s)$ you need to move $m\leq N$ times (each move going left or right with equal probability), then $\mbox{P}\big(\omega_p^{(N)}(s)\big) = (p/2)^m (1-p)^{N-m}$, independently of $s$. Note that there are $2^m \genfrac{(}{)}{0pt}{}{N}{m}$ possible outcomes for an $N$-step walk with $m\leq N$ moves, and indeed $\sum_{m=0}^N 2^m \genfrac{(}{)}{0pt}{}{N}{m}(p/2)^m (1-p)^{N-m} = (1-p+p)^N=1$. Let's look at two examples. After $1$ step with $p\in(0,1)$ there are 3 possible positions, and the distribution (\ref{NstepsDELTAconcise}) with $N=1$ reads \begin{equation}\label{ONEstepsDELTA} \varrho^{(1)}_{p;s}(d\omega) = \Big(\tfrac{1}{2}p \delta_{-1}^{}+ (1-p) \delta_0^{} + \tfrac{1}{2}p \delta_{1}^{}\Big)(d\omega) . \end{equation} After $2$ steps with $p\in(0,1)$ we have 9 possible positions, and (\ref{NstepsDELTAconcise}) with $N=2$ reads \begin{eqnarray}\label{TWOstepsDELTA} \varrho^{(2)}_{p;s}(d\omega) =& \Big(\Big[\tfrac12 p \delta_{-1}^{} + (1-p)\delta_0^{} +\tfrac12 p\delta_{1}^{}\Big]*\Big[\tfrac{1}{2}p\delta_{-\frac{1}{2^s}}^{} + (1-p)\delta_{0}^{} + \tfrac{1}{2}p \delta_{\frac{1}{2^s}}^{}\Big]\Big)(d\omega)\cr = &\Big( \tfrac{1}{4}p^2 \delta_{-1-\frac{1}{2^s}}^{} + \tfrac{1}{2}p(1-p)\delta_{-1+0}^{} + \tfrac{1}{4}p^2 \delta_{-1+\frac{1}{2^s}}^{} \Big)(d\omega) + \cr & \Big(\tfrac{1}{2}p(1-p) \delta_{0-\frac{1}{2^s}}^{} +(1-p)^2 \delta_{0+0}^{} + \tfrac{1}{2}p(1-p) \delta_{0+\frac{1}{2^s}}^{}\Big)(d\omega) +\cr & \Big( \tfrac{1}{4}p^2\delta_{1-\frac{1}{2^s}}^{} +\tfrac{1}{2}p(1-p)\delta_{1+0}^{} + \tfrac{1}{4}p^2 \delta_{1+\frac{1}{2^s}}^{} \Big)(d\omega), \end{eqnarray} which is precisely (\ref{NstepsDELTA}) with $N=2$; we have facilitated the comparison by writing all two-step walks which lead to the locations of the point masses explicitly, including the ``non-moves.'' Similarly one can compute
the $N$-th partial convolution product, although this soon gets cumbersome and does not illuminate the process any further. The theory of convergence of probability measures (e.g. ref.[1] in \cite{Schmuland}) shows that the sequence of partial products (\ref{NstepsDELTAconcise}) does converge to a probability measure (\ref{RZdistribution}) if $s>\frac12$. Unfortunately, the expression (\ref{RZdistribution}) does not readily give up its secrets. In particular, each measure (\ref{NstepsDELTAconcise}) is singular with respect to (w.r.t.) Lebesgue measure $d\omega$, so could it be that the $N\to\infty$ limit (\ref{RZdistribution}) is singular as well --- e.g., supported by a fractal? And if not, when $\varrho^{\zeta}_{p;s}(d\omega)$ is absolutely continuous w.r.t. $d\omega$, is its PDF perhaps not differentiable at its peaks, as hinted at by Fig.~4? The answers to these questions will be extracted from ${\mbox{Cl}_{p;s}^{}}(t)$ in the next section. \vspace{-27pt} \section{The Main Theorem}\label{THM} \vspace{-10pt} In this section we use elementary calculus techniques to prove the following result: \begin{theo}\label{THMgenCLOITREtrend} Let $p\in(0,1]\ \&\ s> \frac12$. Then \vspace{-2pt} \begin{equation}\label{genClTHM} \forall\,t\in\mathbb{R}:\quad {\mathrm{Cl}}_{p;s}^{}(t) = \exp\left({- C_{p;s}^{} \,|t|^{1/s}}\right) F_{p;s}^{}(|t|), \end{equation} where $|F_{p;s}^{}(|t|)| \leq \exp(K_{p;s}^{} |t|^{1/(s+1)})$ for some constant $K_{p;s}^{}>0$, and where \begin{equation}\label{genCLOITREcoeffC} C_{p;s}^{} := -\frac1s\int_0^\infty\ln|{1-p+p\cos\xi}|\frac{1}{\xi^{1+1/s}}{\rm{d}}\xi. 
\end{equation} \vspace{-5pt}\noindent Moreover, when $p\in(0,\frac12)\ \&\ s> \frac12$ the stronger bound $|\ln F_{p;s}^{}(|t|)| \leq K_{p;s}^{} |t|^{1/(s+1)}$ holds; furthermore, we then have $C_{p;s}^{} = A_{s}^{} B_{p;s}^{}$, with \vspace{-5pt} \begin{equation}\label{genCLOITREcoeffA} A_{s}^{} := \int_0^\infty {\sin\xi}\frac{1}{\xi^{1/s}}{\rm{d}}\xi = \textstyle\Gamma\left(1-\frac1s\right)\cos\left(\frac{\pi}{2s}\right) \end{equation} (where it is understood that $A_1^{}=\lim_{s\to1}\Gamma\bigl(1-\frac1s\bigr)\cos\bigl(\frac{\pi}{2s}\bigr)\ [=\frac{\pi}{2}]$), and \begin{equation}\label{genCLOITREcoeffB} B_{p;s}^{} := \sum_{n=0}^\infty(-1)^n\left(\frac{p}{1-p}\right)^{n+1}\frac{1}{2^n}\sum_{k=0}^{\ceil{\frac{n-1}{2}}} \begin{pmatrix} n\cr k \end{pmatrix} \frac{({1+n-2k})^{1/s}\hspace{-5pt}}{{1+n-k}\;}\qquad. \end{equation} \end{theo} \newpage \begin{rema} Recalling that $\ln|z| = \Re{e} \ln z$ for $z\in\mathbb{C}$, we conclude from (\ref{genCLOITREcoeffC}) that \begin{equation}\label{genCLOITREcoeffCrealPART} C_{p;s}^{} = -\frac1s\Re{e}\int_0^\infty\ln({1-p+p\cos\xi})\frac{1}{\xi^{1+1/s}}{\rm{d}}\xi,\quad p\in(0,1]\ \&\ s>\frac12, \end{equation} where the integral at r.h.s.(\ref{genCLOITREcoeffCrealPART}) is understood as analytic continuation from $p\in(0,\frac12)$ (when $\ln(1-p+p\cos\xi)\in\mathbb{R}$) to $p\in(\frac12,1]$ (when $\ln(1-p+p\cos\xi)\in\mathbb{C}$). Larry Glasser and Norm Frankel (personal communications, Dec. 2016) have informed us that this analytically continued $C_{p;s}^{}$, denoted $\widetilde{C}_{p;s}^{}$, has been calculated in \cite{Glasser} to be \begin{equation}\label{CpsLARRY} \widetilde{C}_{p;s}= -2\textstyle\Gamma\left(1-\frac1s\right)\cos\left(\frac{\pi}{2s}\right) {\mathrm{Li}}_{1-\frac1s}\!\left(\sqrt{q^2-1}-q\right), \quad q= (1-p)/p; \end{equation} here $\mathrm{Li}_{a}\!(z)$ is a polylogarithm.
They also remarked that for $p=\frac12$ and $s=1$ one has \begin{equation}\label{genCLOITREcoeffCpvSPECIAL} C_{\frac12;1}^{} = \mbox{\sc{pv}}\!\int_0^\infty\!\frac{\sin\xi}{1+\cos\xi}\frac{1}{\xi}{\rm{d}}\xi = \mbox{\sc{pv}}\!\int_0^\infty\frac{\tan\xi}{\xi}{\rm{d}}\xi =C_{1;1}^{}, \end{equation} with $C_{1;1}^{}=\frac\pi2$; see \cite{BG} (here ``{\sc{pv}}'' means \emph{principal value}). So presumably \begin{equation}\label{genCLOITREcoeffCpvGENERAL} C_{p;s}^{}\! =\! \mbox{\sc{pv}}\!\int_0^\infty\!\!\!\frac{p\sin\xi}{1-p+p\cos\xi}\frac{1}{\xi^{1/s}}{\rm{d}}\xi \end{equation} for $p\in[\frac12,1]$ and $s>\frac12$. Note that $p\mapsto C_{p;s}$ has a derivative singularity at $p=\frac12$. \end{rema} \begin{rema} Identity (\ref{genCLOITREcoeffCpvSPECIAL}) is a special case of the identity $C_{1;s}^{} = 2^{-1+1/s}C_{1/2;s}^{}$; $s>\frac12$. Indeed, a half-angle identity and an obvious substitution of variables yields \vspace{-15pt} \begin{eqnarray}\label{Chalfs} \int_0^\infty\!\!\ln(\tfrac12+\tfrac12\cos\xi)\frac{1}{\xi^{1+1/s}}{\rm{d}}\xi =\!\! \int_0^\infty\!\!\ln(\cos^2\tfrac{\xi}{2})\frac{1}{\xi^{1+1/s}}{\rm{d}}\xi = 2^{1-\frac1s}\!\!\int_0^\infty\!\!\ln|\cos {\xi}|\frac{1}{\xi^{1+1/s}}{\rm{d}}\xi. \end{eqnarray} \vspace{-5pt}\noindent But l.h.s.(\ref{Chalfs})=$-sC_{\hfrac12;s}^{}$ while r.h.s.(\ref{Chalfs})=$-s2^{1-\frac1s}C_{1;s}^{}$. \end{rema} \begin{rema} It is also easy to show that \emph{$\mbox{Cl}_{1/2;s}^{}(t) = \mbox{Cl}_{1;s}^2(t/2)$}. \hfill End of Remarks. \end{rema} Cloitre's surmise follows from Theorem \ref{THMgenCLOITREtrend}. 
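The identity of Remark 3, $\mbox{Cl}_{1/2;s}^{}(t)=\mbox{Cl}_{1;s}^2(t/2)$, amounts to the half-angle formula $\tfrac12+\tfrac12\cos\theta=\cos^2\tfrac\theta2$ applied factorwise, and it is easy to confirm numerically on truncated products (an illustrative sketch of ours; the truncation level is ad hoc):

```python
import math

def cl_truncated(p, s, t, n_max=500):
    """Truncated product approximation to Cl_{p;s}(t)."""
    prod = 1.0
    for n in range(1, n_max + 1):
        prod *= 1 - p + p * math.cos(t / n ** s)
    return prod

t, s = 2.5, 1.3
lhs = cl_truncated(0.5, s, t)             # Cl_{1/2;s}(t)
rhs = cl_truncated(1.0, s, t / 2) ** 2    # Cl_{1;s}(t/2)^2
```

Since the identity holds factor by factor, the two truncated products agree to floating-point accuracy for any common truncation level.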
Indeed, we have the stronger result: \begin{coro} For $p=1/3$ and $s=2$, Theorem \ref{THMgenCLOITREtrend} reduces to \begin{equation}\label{CLOITREasymp} \forall\,t\in\mathbb{R}:\quad {\mathrm{P}}_{\!\mbox{\tiny\sc{Cl}}} (t)= e^{- C \,\sqrt{|t|} +\varepsilon(|t|)}, \end{equation} \vspace{-15pt} \noindent with correction term bounded as $|\varepsilon(|t|)|\leq K |t|^{1/3}$ for some $K>0$, and with \vspace{-5pt} \begin{equation}\label{CLOITREcoeffC} C= \int_\mathbb{R}\frac{\sin\xi^2}{2+\cos \xi^2}{\rm{d}}\xi = \sqrt{\frac\pi2}\sum_{n=0}^\infty(-1)^n\frac{1}{2^{2n+1}}\sum_{k=0}^{\ceil{\frac{n-1}{2}}} \begin{pmatrix} n\cr k \end{pmatrix} \frac{\sqrt{1+n -2k}}{1+n-k}; \end{equation} numerically, $C = 0.319905585... \sqrt{\pi}\approx 0.32\sqrt{\pi}$. \end{coro} \begin{rema} By a simple change of variables, and the fact that $\xi^2$ is even, we have \begin{equation} C = \int_{\mathbb{R}}\frac{\sin\xi^2}{2+\cos\xi^2}{\rm{d}}\xi = 2 \int_0^\infty\frac{\sin\xi^2}{2+\cos\xi^2}{\rm{d}}\xi = \int_0^\infty\frac{\sin\xi}{2+\cos\xi}\frac{1}{\xi^{1/2}}{\rm{d}}\xi = C_{\frac13;2}^{} \end{equation} \end{rema} Theorem \ref{THMgenCLOITREtrend} implies that $\Omega_{p}^\zeta(s)$ is a very regular random variable. Namely, ${\mbox{Cl}_{p;s}^{}}(t)\in C^\infty$, and by (\ref{genClTHM}) the integral of $|t|^m{\mbox{Cl}_{p;s}^{}}(t)$ exists for any $m\in\{0,1,2,...\}$, so ${\mbox{Cl}_{p;s}^{}}(t)$ is a Schwartz function, and so the Fourier transform of ${\mbox{Cl}_{p;s}^{}}(t)$ is a Schwartz function; see chpt. IX in \cite{ReedSimonI}. Also, ${\mbox{Cl}_{p;s}^{}}(0)=1$, so its Fourier transform has integral 1. We already know that its Fourier transform is positive. Thus we have \begin{coro}\label{regularity} The random variable ${\Omega_{p}^\zeta(s)}^{}$ defined by its characteristic function (\ref{genCLOITREprod}) with $p\in(0,1]$ and $s>\frac12$ converges with probability 1. 
It is continuous, having a probability density $f_{\Omega_{p}^\zeta(s)}^{}(\omega)$ which is a (generally not real analytic) Schwartz function. \end{coro} Corollary \ref{regularity} settles our questions concerning the distribution of $\Omega_{1/3}^{\zeta}(2)$. Despite the seemingly self-similar structure hinted at in Fig.~4, $\Omega_{1/3}^{\zeta}(2)$ cannot be supported on a fractal subset of the $\omega$ line. Also, despite the appearance of singular peaks hinted at in Fig.~4, the PDF of $\Omega_{1/3}^{\zeta}(2)$ is infinitely often continuously differentiable. It remains to prove Theorem~\ref{THMgenCLOITREtrend}. To offer some guidance we explain our \textit{strategy}. \textbf{First of all}, we prove the stronger part of Theorem~\ref{THMgenCLOITREtrend} concerning $p\in(0,\frac12)$. We then have ${\mbox{Cl}_{p;s}^{}}(t)>0$, so we can take its logarithm and obtain the infinite series \vspace{-5pt} \begin{equation}\label{logCloitrePRODgeneralized} \ln {\mbox{Cl}_{p;s}^{}}(t) = \textstyle{\sum\limits_{n\in\mathbb{N}}} \ln \left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right],\ t\in\mathbb{R}, \end{equation} with $\textstyle p\in(0,\frac12)\ \&\ s> \frac12$. We then follow the proof of Theorem~1 of \cite{KieJSPforBenJerryMichael} which establishes that if $s>1$, then for all $t\in\mathbb{R}$ one has $\sum_{n\in\mathbb{N}}\sin\left(\textstyle n^{-s}_{_{}}t\right) = \alpha_s^{}{\mbox{sign}}(t)|t|^{1/s} + \varepsilon(|t|)$, with $\alpha_s^{} = \Gamma\bigl(1-\frac1s\bigr)\sin\bigl(\frac{\pi}{2s}\bigr)$ and $|\varepsilon(|t|)| \leq K_s^{} |t|^{1/(s+1)}$ for some $K_s^{}>0$. Note that by the reflection symmetry about $t=0$ of ${\mbox{Cl}_{p;s}^{}}(t)$ it suffices to consider $t>0$. Yet one needs to distinguish $0\leq t\leq t_s^{}$ and $t\geq t_s^{}$ for some $t_s^{}>0$. The \emph{near side} $0\leq t\leq t_s^{}$ will be estimated with the help of a Maclaurin expansion and turn out to be subdominant. 
The \emph{far side} $t\geq t_s^{}$ will be handled by splitting the infinite series into two parts, \begin{equation} \textstyle{\sum\limits_{n\in\mathbb{N}}}(\cdots_n) =\label{eq:SdefSsplitSplusONEroot} \textstyle{\sum\limits_{n=1}^{N_s(t/\tau)}} (\cdots_n) + \textstyle{\sum\limits_{n=N_s(t/\tau)+1}^{\infty} }(\cdots_n), \end{equation} where $(\cdots_n) = \ln \left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right]$, and where $N_s(t/\tau): = \lceil{(st/\tau)^{1/(s+1)}}\rceil$, with $\tau <\pi/2$. The first (finite) sum in (\ref{eq:SdefSsplitSplusONEroot}) will be shown to yield only a subdominant error bound. The second (infinite) sum in (\ref{eq:SdefSsplitSplusONEroot}) will be interpreted as a Riemann sum approximation to an integral over the real line, the trend function, plus a subdominant error bound. We now outline this argument. Since $\tau < \pi/2$, when $t$ gets large any two consecutive arguments $t/n^s$ and $t/(n+1)^s$ of the cosine functions will come to lie within one quarter period of cosine whenever $n > \lceil{(st/\tau)^{1/(s+1)}}\rceil$. Moreover, with increasing $n$, for fixed $t/\tau$, the consecutive arguments $t/n^s$ and $t/(n+1)^s$ will be more and more closely spaced. And so, when $\tau$ is sufficiently small, with increasing $t$ the part of the sum of $\ln{\mbox{Cl}_{p;s}^{}}(t)$ with $n > N_s(t/\tau)$ becomes an increasingly better Riemann sum approximation to $$ \int_{ N_s(t/\tau)+1}^\infty \ln \left[1- p +p\cos\left(\textstyle \nu^{-s}_{_{}}t\right)\right] {\rm{d}}\nu. $$ More precisely, using the variable substitution $\nu^{-s}t=\xi$, we have (informally) \begin{equation} \hspace{-25pt} {\textstyle\sum\limits_{n= N_s(t/\tau)+1}^{\infty}}\hspace{-10pt}\ln\left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right] \approx \label{eq:SdefSsplitSplusONErootINTapprox} t^{1/s} {\textstyle\frac{1}{s}}\displaystyle \int_0^{t/( N_s(t/\tau)+1)^s} \hspace{-30pt}\ln\left[1- p +p\cos \xi \right]\frac{1}{\xi^{1+1/s}} {\rm{d}}\xi. 
\end{equation} Since $s>1/2$, the upper limit of integration at r.h.s.(\ref{eq:SdefSsplitSplusONErootINTapprox}) goes~to~$\infty$ like $K t^{1/(s+1)}$ when $t\to\infty$. The limiting integral is an improper Riemann integral which converges absolutely for all $s>1/2$, yielding \begin{equation} {\textstyle\frac1s}\int_0^\infty \ln\left[1- p +p\cos \xi \right]\frac{1}{\xi^{1+1/s}} {\rm{d}}\xi \equiv \label{eq:alphaSint} - C_{p;s}^{}. \end{equation} This heuristic argument will be made rigorous by supplying the subdominant error bounds, using only senior level undergraduate mathematics. The integral (\ref{eq:alphaSint}) will be evaluated with the help of a rapidly converging geometric series expansion and a recursion which involves the Catalan numbers. \textbf{Secondly}, we consider the regime $p\in[\tfrac12,1]$. In this case ${\mbox{Cl}_{p;s}^{}}(t)$ has zeros at \begin{equation}\label{genClPRODzeros} t_{n,j,\pm}(p;s) = n^s \big[(2j-1)\pi \pm \arccos \big(-\tfrac{1-p}{p}\big)\big]\, \quad j,n \in\mathbb{N}, \end{equation} and when $p\in (\tfrac12,1]$ then ${\mbox{Cl}_{p;s}^{}}(t)$ changes sign at these zeros. So now we take the logarithm of $|{\mbox{Cl}_{p;s}^{}}(t)|$ and study the resulting infinite series of logarithms. This series is the monotone lower limit of a regularized family of series, viz. \begin{equation}\label{logCloitrePRODgeneralizedREG} \forall\; t\in\mathbb{R}:\quad \ln |{\mbox{Cl}_{p;s}^{}}(t)| = \lim_{\epsilon\downarrow 0} {\textstyle\sum\limits_{n\in\mathbb{N}}} \ln \max \left\{\epsilon,\, \left|1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right|\right\}. \end{equation} For any $\epsilon >0$, the regularized series at the right-hand side can be controlled essentially verbatim as in our proof of the regime $p\in(0,\tfrac12)$. The limit ${\epsilon\downarrow 0}$ is then established with the help of the integrability of $\ln |t|$ over any bounded neighborhood of zero, plus the summability of $1/n^{1+1/s}$ when $1/s>0$.
This ends the outline of our strategy. \newpage \noindent {\textit{Proof of Theorem~\ref{THMgenCLOITREtrend}}:} Let $p\in(0,\frac12)$. If $t_s^{}>0$ is sufficiently small, then for the \emph{near side} $0\leq t\leq t_s^{}$ we have the Maclaurin expansion $\ln{\mbox{Cl}_{p;s}^{}}(t)= -\frac{1}{2}p\,\zeta(2s)t^2 +O(t^4)$. It follows that $|\ln{\mbox{Cl}_{p;s}^{}}(t) + C_{p;s}^{}t^{1/s}|\leq K t^{1/(s+1)}$ for some $K>0$ when $0\leq t \leq t_s^{}$. Here and in all estimates below, $K$ is a generic positive constant which may depend on $p,s,\tau,t_s^{}$. As for the \emph{far side} $t\geq t_s$, the first sum at r.h.s.(\ref{eq:SdefSsplitSplusONEroot}) is estimated by \vspace{-10pt} \begin{equation} \Big|{\textstyle\sum\limits_{n=1}^{{N_s(t/\tau)}}} \ln\left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right] \Big| \; \leq \; \label{eq:firstSsumEST} |\ln(1-2p)| \lceil{(st/\tau)^{1/(s+1)}}\rceil \; \leq \; K {t^{1/(s+1)}}, \end{equation} \vspace{-10pt}\noindent where we used the triangle inequality and $$ \big|\ln\left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right]\big|\leq |\ln(1-2p)|. $$ For the second sum at r.h.s.(\ref{eq:SdefSsplitSplusONEroot}) we find, for some $\nu_n\in[n,n+1]$, \begin{eqnarray} \textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty}\!\!\!\!\!\!\!\! \ln\left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right] -\!\!\displaystyle\int_{N_s(t/\tau)+1}^\infty\!\!\!\!\!\!\!\!\ln\left[1- p +p\cos\left(\textstyle \nu^{-s}_{_{}}t\right)\right] d\nu &\!\!\!=\!\!\!& \label{eq:secondSsumESTa}\qquad \\ \textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty}\!\! \Big(\ln\left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right] - \!\! \displaystyle\int_n^{n+1}\!\!\!\!\!\!\!\! 
\ln\left[1- p +p\cos\left(\textstyle \nu^{-s}_{_{}}t\right)\right] d\nu \Big) &\!\!\!=\!\!\!& \label{eq:secondSsumESTb}\qquad \\ \textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty} \Big(\ln\left[1- p +p\cos\left(\textstyle n^{-s}_{_{}}t\right)\right] - \ln\left[1- p +p\cos\left(\textstyle \nu^{-s}_{n}t\right)\right]\Big) &\!\!\!=\!\!\!& \label{eq:secondSsumESTc}\qquad \\ {\textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty}} \displaystyle \int_{t/\nu_n^{s}}^{t/n^{s}} \frac{p\sin(\xi)}{1-p+p\cos(\xi)}{\rm{d}}\xi; \end{eqnarray} here, (\ref{eq:secondSsumESTa}) is obviously true, whereas (\ref{eq:secondSsumESTb}) expresses the mean value theorem for some $\nu_n\in [n,n+1]$, and (\ref{eq:secondSsumESTc}) holds by the fundamental theorem of calculus. Now taking absolute values, we estimate \begin{eqnarray} \Big|{\textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty}} \displaystyle \int_{t/\nu_n^{s}}^{t/n^{s}} \frac{p\sin(\xi)}{1-p+p\cos(\xi)}{\rm{d}}\xi\Big| &\!\!\!\leq\!\!\!& \label{eq:secondSsumESTd}\qquad \\ \frac{p}{1-2p}{\textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty}} \displaystyle \int_{t/\nu_n^{s}}^{t/n^{s}} \big| \sin\xi \big| {\rm{d}}\xi &\!\!\!\leq\!\!\!& \label{eq:secondSsumESTe}\qquad \\ \frac{p}{1-2p}\textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty} t\big(\frac{1}{n^{s}}- \frac{1}{\nu_n^{s}}\big) &\!\!\!\leq\!\!\!& \label{eq:secondSsumESTf}\qquad \\ \frac{p}{1-2p} \textstyle\sum\limits_{n={N_s(t/\tau)+1}}^{\infty} t\big(\frac{1}{n^{s}}- \frac{1}{(n+1)^{s}}\big) &\!\!\!=\!\!\!& \label{eq:secondSsumESTg}\qquad \\ \frac{p}{1-2p} t\lceil{(st/\tau)^{1/(s+1)}}+1\rceil^{-s} &\!\!\!\leq\!\!\!& \label{eq:secondSsumESTh}\qquad \vspace{-10pt}\\ K t^{\frac{1}{s+1}} ;&&\notag \end{eqnarray} \vspace{-5pt}\noindent inequality (\ref{eq:secondSsumESTd}) holds by the triangle inequality in concert with $\cos\xi\geq -1$, (\ref{eq:secondSsumESTe}) holds since $|\sin \xi|\leq 1$, followed by elementary integration, while (\ref{eq:secondSsumESTf}) is due to the monotonic decrease 
of $\nu\mapsto \nu^{-s}$ for $s>1/2$, with $\nu_n\in [n,n+1]$; equality (\ref{eq:secondSsumESTg}) holds because the sum at l.h.s.(\ref{eq:secondSsumESTg}) is telescoping; lastly, inequality (\ref{eq:secondSsumESTh}) holds because $\ceil{x}$ differs from $x$ by at most 1, and for large $x$ the $+1$ in its argument becomes negligible. For the integral in (\ref{eq:secondSsumESTa}) the variable substitution $\nu^{-s}t=\xi$ yields \begin{eqnarray} t^{1/s} {\textstyle\frac{1}{s}} \displaystyle\int_0^{t/(N_s(t/\tau)+1)^s}\hspace{-56pt} \ln\left[1- p +p\cos \xi \right]\frac{{\rm{d}}\xi}{\xi^{1+1/s}} = \label{eq:SdefSasympINTrewrite} t^{1/s}\displaystyle\Biggl[-C_{p;s}^{} - {\textstyle\frac{1}{s}} \int_{t/(N_s(t/\tau)+1)^s}^\infty \hspace{-50pt} \ln\left[1- p +p\cos \xi \right]\frac{{\rm{d}}\xi}{\xi^{1+1/s}} \Biggr]\!. \end{eqnarray} Using again the estimate $| \ln\left[1- p +p\cos \xi \right]|\leq -\ln(1-2p)$, we find (for $t\geq 1$): \begin{eqnarray} t^{1/s} \Biggr|\int_{t/(N_s(t/\tau)+1)^s}^\infty \hspace{-30pt} \ln\left[1- p +p\cos \xi \right]\frac{1}{s\xi^{1+1/s}} {\rm{d}}\xi\Biggr| &\!\!\! \leq &\!\!\! \label{eq:SdefSasympINTrewriteESTa} |\ln(1-2p)| \lceil{(st/\tau)^{1/(s+1)}+1}\rceil\ \\ & \!\!\!\leq \!\!\! & K t^{1/(s+1)}. \end{eqnarray} This completes the proof of (\ref{genClTHM}) with $|\ln F_{p;s}^{}(|t|)| \leq K_{p;s}^{} |t|^{1/(s+1)}$. It remains to prove (\ref{genCLOITREcoeffC}), (\ref{genCLOITREcoeffA}), (\ref{genCLOITREcoeffB}). Integration by parts yields \begin{equation} C_{p;s}^{} \equiv -{\textstyle\frac1s}\int_0^\infty \ln\left[1- p +p\cos \xi \right]\frac{1}{\xi^{1+1/s}} {\rm{d}}\xi = \label{eq:alphaSintPARTIAL} \int_0^\infty \frac{p \sin\xi}{1- p +p\cos \xi}\frac{1}{\xi^{1/s}} {\rm{d}}\xi, \end{equation} where the integral at r.h.s.(\ref{eq:alphaSintPARTIAL}) converges absolutely when $s\in(1/2,1)$, but only conditionally when $s\geq 1$. 
With the help of the geometric series r.h.s.(\ref{eq:alphaSintPARTIAL}) becomes \begin{equation} \frac{p}{1- p} \int_0^\infty \frac{ \sin\xi}{1 + \frac{p}{1- p}\cos \xi}\frac{1}{\xi^{1/s}} {\rm{d}}\xi = \label{Cpq} \sum_{n=0}^\infty(-1)^n\left(\frac{p}{1-p}\right)^{n+1}\!\! \int_0^\infty {\sin\xi}\cos^n \xi \frac{1}{\xi^{1/s}} {\rm{d}}\xi; \end{equation} the exchange of summation and integration is justified for $s\in (1/2,1)$ by Fubini's theorem, and for $s\geq 1$ by a more careful limiting argument involving the definition of the conditional convergent integrals as limit $R\to\infty$ of integrals from $0$ to $R$. Repeatedly using the trigonometric identity $2\sin(\alpha)\cos(\beta) = \sin(\alpha+\beta)-\sin(\alpha-\beta)$, eventually followed by a simple rescaling of the integration variable, now yields \begin{equation}\label{trigINTexpand} \!\!\!\!\int_0^\infty\!\!\!\!\! {\sin\xi}\cos^n \xi \frac{1}{\xi^{1/s}} {\rm{d}}\xi = \int_0^\infty \!\!\!\!\!{\sin\xi} \frac{1}{\xi^{1/s}} {\rm{d}}\xi \; \frac{1}{2^n}\!\!{\textstyle\sum\limits_{k=0}^{\ceil{\frac{n-1}{2}}}}\!\! \left[\begin{pmatrix} n\cr k \end{pmatrix} -\begin{pmatrix} n\cr k-1 \end{pmatrix}\right]{({1+n -2k})^{\frac1s-1}}, \end{equation} where it is understood that when $k=0$ one has ${\big(\; {}^{\;n}{}_{\hspace{-10pt}-1}\big)}=0$. The integral at r.h.s.(\ref{trigINTexpand}) is $A_{s}^{}$ given in (\ref{genCLOITREcoeffA}). Inserting (\ref{trigINTexpand}) into (\ref{Cpq}) and pulling out $A_{s}^{}$ yields r.h.s.(\ref{genCLOITREcoeffC}) with \begin{equation}\label{genCLOITREcoeffBagain} B_{p;s}^{} := \sum_{n=0}^\infty(-1)^n\left(\frac{p}{1-p}\right)^{n+1}\frac{1}{2^n}{\textstyle\sum\limits_{k=0}^{\ceil{\frac{n-1}{2}}}} \left[\begin{pmatrix} n\cr k \end{pmatrix} -\begin{pmatrix} n\cr k-1 \end{pmatrix}\right]{({1+n -2k})^{\frac1s-1}}, \end{equation} and a simple manipulation of r.h.s.(\ref{genCLOITREcoeffBagain}) now yields (\ref{genCLOITREcoeffB}). 
This proves the part of Theorem~\ref{THMgenCLOITREtrend} with $p\in(0,\frac12)$. Now let $p\in[\frac12,1]$. With minor and obvious modifications of our proof for the regime $p\in(0,\frac12)$ one finds that for $\epsilon>0$ there are $G_{p;s}^{(\epsilon)}(|t|)>0$, $K_{p;s}^{(\epsilon)}>0$, such that \begin{equation}\label{genClTHMeps} \forall\,t\in\mathbb{R}:\quad \prod_{n\in\mathbb{N}} \max\left\{\epsilon,\, \left|1-p+p\cos\left({n^{-s}}{t}\right)\right| \right\}= \exp\left({- C_{p;s}^{(\epsilon)} \,|t|^{1/s}}\right) G_{p;s}^{(\epsilon)}(|t|), \end{equation} with $|\ln G_{p;s}^{(\epsilon)}(|t|)| \leq K_{p;s}^{(\epsilon)} |t|^{1/(s+1)}$, and where \begin{equation}\label{genCLOITREcoeffCeps} C_{p;s}^{(\epsilon)} := -\frac1s\int_0^\infty \ln \max\{\epsilon,\,|{1-p+p\cos\xi}|\} \frac{1}{\xi^{1+1/s}}{\rm{d}}\xi. \end{equation} Clearly, $\forall\,t\in\mathbb{R}:\;\lim_{\epsilon\downarrow 0}\mbox{l.h.s.}(\ref{genClTHMeps})= \prod_{n\in\mathbb{N}} \left|1-p+p\cos\left({n^{-s}}{t}\right)\right| = |{\mathrm{Cl}}_{p;s}^{}(t)|$. Next, $\lim_{\epsilon\downarrow 0}C_{p;s}^{(\epsilon)} =-\frac1s\int_0^\infty \ln |{1-p+p\cos\xi}| \frac{1}{\xi^{1+1/s}}{\rm{d}}\xi<\infty$ as a doubly improper Riemann integral. Indeed, the singularities, one at each zero of $1-p+p\cos\left(\xi\right)$, are all improper Riemann integrable because $-\int_0^\delta \ln \xi \rm{d}\xi = -\delta\ln\delta +\delta \downarrow 0$ as $\delta\downarrow 0$. There are countably many singularities, located at \begin{equation}\label{genClPRODzerosXI} \xi_{j,\pm} = (2j-1)\pi \pm \arccos \big(-\tfrac{1-p}{p}\big)\, \quad j\in\mathbb{N}, \end{equation} and so the absolute contribution from a $\delta$-neighborhood of the singularity at $\xi_{j,\pm}$, with $\delta\downarrow 0$ when $\epsilon\downarrow 0$, can be dominated by $c(-\delta\ln\delta +\delta)/ \xi_{j,\pm}^{1+1/s}$ for some positive constant $c$ which is independent of $j$ and the $\pm$ index. 
Since $1/j^{1+1/s}$ is summable over $\mathbb{N}$ when $1/s>0$, the absolute difference between $C_{p;s}$ and $C_{p;s}^{(\epsilon)}$ is dominated by $\tilde{c} (-\delta\ln\delta +\delta)$, which vanishes as $\delta\downarrow 0$ when $\epsilon\downarrow 0$. The convergence of l.h.s.(\ref{genClTHMeps}) and of the first factor at r.h.s.(\ref{genClTHMeps}) now implies that also the second factor at r.h.s.(\ref{genClTHMeps}) converges when $\epsilon\downarrow 0$. This does not yet establish an upper bound on some $|F_{p;s}^{}(|t|)|$, defined below, as claimed in the theorem; indeed, the bound $|\ln G_{p;s}^{(\epsilon)}(|t|)| \leq K_{p;s}^{(\epsilon)} |t|^{1/(s+1)}$ and the fact that $G_{p;s}^{(\epsilon)}(|t|)$ has zeros in the limit $\epsilon\to0$ mean that the constants $K_{p;s}^{(\epsilon)}\to\infty$ as $\epsilon\to0$. However, our theorem only requires an {upper bound} on $\ln G_{p;s}^{(\epsilon)}(|t|)$, \emph{not} on $|\ln G_{p;s}^{(\epsilon)}(|t|)|$, as ${\epsilon\to0}$. To prove such a bound is straightforward. Since l.h.s.(\ref{genClTHMeps}) only introduces a lower cutoff on $|{\mbox{Cl}_{p;s}^{}}(t)|$, at $\epsilon$, it follows by inspection of the estimates in the proof of the $p\in(0,\frac12)$ part of our Theorem that we do have the upper bound $\ln G_{p;s}^{(\epsilon)}(|t|) \leq K_{p;s}^{} |t|^{1/(s+1)}$ \emph{uniformly} in $\epsilon$, and this proves that $G_{p;s}^{}(|t|) \leq \exp(K_{p;s}^{} |t|^{1/(s+1)})$ in the limit $\epsilon\downarrow 0$. Finally, we set $F_{p;s}(|t|) := G_{p;s}^{}(|t|)\; {\mbox{sign}}\,{\mathrm{Cl}}_{p;s}^{}(t)$, and the entirely elementary proof of Theorem~\ref{THMgenCLOITREtrend} is complete. \hfill QED \newpage \section{L\'evy Trends and Fluctuations}\label{TandF} In this section we display the PDFs for a small selection of random Riemann-$\zeta$ walks $\Omega_{p}^{\zeta}(s)$, obtained by numerical Fourier transform of their characteristic functions ${\mbox{Cl}_{p;s}^{}}(t)$.
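Both ingredients are elementary to tabulate. The following is a minimal numerical sketch (in Python; the truncation orders are arbitrary choices of ours, not those used to produce the figures, which were made with MAPLE): it evaluates the trend constant $C_{p;s}^{}=A_s^{}B_{p;s}^{}$ from the series of Theorem~\ref{THMgenCLOITREtrend} and the truncated product ${\mbox{Cl}_{p;s}^{}}(t)$, for Cloitre's values $p=\frac13$, $s=2$.

```python
import math

def C_series(p, s, nmax=200):
    # Trend constant C_{p;s} = A_s * B_{p;s}; valid for p in (0,1/2), s > 1/2.
    A = math.gamma(1.0 - 1.0 / s) * math.cos(math.pi / (2.0 * s))
    q = p / (1.0 - p)
    B = 0.0
    for n in range(nmax + 1):
        # inner sum over k = 0, ..., ceil((n-1)/2)
        inner = sum(math.comb(n, k) * (1 + n - 2 * k) ** (1.0 / s) / (1 + n - k)
                    for k in range(math.ceil((n - 1) / 2) + 1))
        B += (-1) ** n * q ** (n + 1) / 2 ** n * inner
    return A * B

def Cl(p, s, t, nmax=20000):
    # Truncation of the infinite product defining the characteristic function.
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - p + p * math.cos(t / n ** s)
    return prod

C = C_series(1 / 3, 2)          # Cloitre's constant C_{1/3;2}
print(C / math.sqrt(math.pi))   # 0.319905585..., as quoted in the Corollary
# Crude trend check: for large |t| this quotient is of the order of C,
# up to the fluctuation term bounded by K |t|^{1/3} in the exponent.
print(-math.log(Cl(1 / 3, 2, 1e4)) / 1e4 ** 0.5)
```

The alternating double series for $B_{p;s}^{}$ converges geometrically, so a couple of hundred terms already exhaust double precision.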
We compare them with the Fourier transform of their trend functions $\exp\left(- C_{p;s}^{} \,|t|^{1/s}\right)$, which are known as L\'evy-stable distributions with \emph{stability parameter} $\alpha=1/s$, \emph{skewness parameter} $\beta=0$, \emph{scale parameter} $c=C_{p;s}^{s}$, and \emph{median} $\mu=0$; see \cite{probBOOK}. The comparison will highlight the importance of the fluctuating factors $F_{p;s}(|t|)$ in the characteristic functions ${\mbox{Cl}_{p;s}^{}}(t)$. The first figure shows the PDF $f_{\Omega_{p}^\zeta(s)}(\omega)$ for Cloitre's parameter values $p=1/3$ and $s=2$, together with the pertinent L\'evy PDF (here $\mathcal{C}$ and $\mathcal{S}$ are Fresnel integrals) \begin{equation}\label{LevyHALFetc} f_{\Omega_{\hfrac12;0;C^{2};0}^{\mbox{\tiny{\sc{L\'evy}}}}}\!(\omega) = 2\pi u^3 \Bigl(\sin\!\left(\tfrac{\pi}{2}u^2\right)\!\!\left[\tfrac12-\mathcal{S}\!\left(u\right)\right] +\cos\!\left(\tfrac{\pi}{2}u^2\right)\!\! \left[\tfrac12-\mathcal{C}\!\left(u\right)\right]\Bigr), \end{equation} \noindent where $u = C/\sqrt{2\pi|\omega|}$ and $C=C_{1/3;2}$; cf. the histogram Fig.~4. \vspace{5pt} \includegraphics[scale=0.65]{CloitrePDFandTRENDrev.jpg} \vspace{5pt} \noindent Fig.~7 reveals that the stable distribution (\ref{LevyHALFetc}) obtained by Fourier transform of the L\'evy trend factor $\exp({- C\surd|t|})$, which captures the ``large scale'' behavior of ${\mbox{P}_{\mbox{\tiny\sc{Cl}}}}(t)$ asymptotically exactly but misses all of its ``small scale'' details (recall Fig.~1 and Fig.~2), only very crudely resembles the distribution obtained by the Fourier transform of ${\mbox{P}_{\mbox{\tiny\sc{Cl}}}}(t)$. Also, we recall that the random variable $\Omega_{1/3}^\zeta(2)$ takes its values in the interval $[-\zeta(2),\zeta(2)]$, so $f_{\Omega_{1/3}^\zeta(2)}(\omega)$ vanishes identically outside this interval. By contrast, L\'evy-stable PDFs are ``heavy-tailed'' (except when $\alpha=2$, i.e.
$s=1/2$, which is excluded here); in particular, it follows from (\ref{LevyHALFetc}) (see also \cite{probBOOK}) that \begin{equation}\label{LevyHALFetcASYMP} f_{\Omega_{\frac12;0;C;0}^{\mbox{\tiny{\sc{L\'evy}}}}}\!(\omega) \sim \tfrac12 C_{\frac13;2}^{2} {\sqrt{\pi}}|\omega|^{-3/2} \quad (\omega\to\infty). \end{equation} Next we turn to the borderline case $s=1$, which is particularly interesting. When $p\neq 1$ this random walk is a generalization of the harmonic random walk ($p=1$) studied by Kac \cite{Kac}, Morrison \cite{Morrison}, and Schmuland \cite{Schmuland}. Furthermore, the ``trend factor'' of the characteristic function for ${\Omega_{p}^\zeta(1)}^{}$ becomes $e^{-C_{p;1}^{}|t|}$: the characteristic function of a Cauchy random variable with ``theoretical spread'' $C_{p;1}^{}$ (which is explicitly computable; see below). The next Figure displays the PDF $f_{\Omega_{1/3}^\zeta(1)}(\omega)$ for the harmonic random walk with $p=1/3$, together with the Cauchy distribution of theoretical spread $C_{1/3;1}^{}$ about~$0$; cf. the histogram in Fig.~5. \smallskip \includegraphics[scale=0.55]{HarmonicPDFandTRENDrev.jpg} The discrepancy between the PDF $f_{\Omega_{1/3}^\zeta(1)}(\omega)$ for the harmonic random walk with $p=\frac13$ and the Cauchy distribution of theoretical spread $C_{1/3;1}^{}$ about~$0$ visible in Fig.~8 is not quite as flagrant as the corresponding discrepancy in Fig.~7. Not so outside the shown interval, though: the Cauchy distribution is heavy-tailed, while $f_{\Omega_{1/3}^\zeta(1)}(\omega)$, because it is a Schwartz function, has moments of all orders. This can also be shown by adaptation of the estimates for $f_{\Omega_{1/2}^\zeta(1)}(\omega)$ given by Schmuland \cite{Schmuland}. We also vindicate our claim that one can compute $C_{p;1}^{}$ explicitly.
First of all, \begin{equation}\label{genCLOITREcoeffWHENsISoneBa} \sum_{k=0}^{\ceil{\hfrac{(n-1)}{2}}} \begin{pmatrix} n\cr k \end{pmatrix} \frac{1+n-2k}{{1+n-k}\;} = \begin{pmatrix} n\cr \floor{\hfrac{n}{2}}\end{pmatrix}, \end{equation} which is A001405 in Sloane's OEIS. Now $\frac{1}{2^0}\genfrac{(}{)}{0pt}{}{0}{\floor{\hfrac{0}{2}}} = 1$ while $\frac{1}{2^n}\genfrac{(}{)}{0pt}{}{n}{\floor{\hfrac{n}{2}}}= \frac{1}{2^{n-1}}\genfrac{(}{)}{0pt}{}{n-1}{\floor{\hfrac{n-1}{2}}}$ when $n=2m$ with $m\in\mathbb{N}$, and using that $\sum_{m=0}^\infty \! \frac{1}{2^{2m}} \genfrac{(}{)}{0pt}{}{2m}{m} x^{2m} = \frac{1}{\sqrt{1-x^2}}$ we compute \vspace{-15pt} \begin{eqnarray}\label{genCLOITREcoeffWHENsISoneBb} B_{p;1}^{} =\! \sum_{n=0}^\infty\left(-1\right)^n\!\left(\!\frac{p}{1-p}\!\right)^{\!\!n+1}\frac{1}{2^n} \begin{pmatrix} n\cr \floor{\hfrac{n}{2}}\end{pmatrix} = 1 - \sqrt{1-2p}\quad \mbox{for}\quad p\in(0,\tfrac12); \end{eqnarray} \vspace{-10pt}\noindent so with $A_1^{}=\frac{\pi}{2}$ we obtain $C_{p;1}^{} = A_1^{} B_{p;1}^{}$ in closed form, displayed in Fig.~9. Note that its $p$-derivative blows up as $p\nearrow\frac12$. \smallskip \includegraphics[scale=0.5]{CpONE.jpg} \section{Open Problems}\label{FIN} The following problems seem to be particularly worthy of further pursuit.\vspace{-15pt} \subsection{Why L\'evy trends?}\label{LevySECRETS}\vspace{-5pt} What is the probabilistic reason for the occurrence of the symmetric L\'evy $\frac1s$-stable distributions associated with the trend factors? We recall that $X$ is a \emph{L\'evy-stable} random variable if and only if $X= c_1 X_1 +c_2X_2$, where $X_1$ and $X_2$ are i.i.d. copies of $X$ and $c_1$ and $c_2$ are suitable positive constants; see also \cite{GaroniFrankel}.
Where is this ``L\'evy stability'' hiding in the random Riemann-$\zeta$ walks?\vspace{-10pt} \subsection{Does the singularity at $p=\tfrac12$ have statistical significance?}\label{singularSECRETS}\vspace{-5pt} The derivative singularity of $p\mapsto C_{p;s}$ at $p=\frac12$ is inherited from the derivative singularity of the absolute value function. Is this a consequence of our method of representing ${\mbox{Cl}_{p;s}^{}}(t)$, or does this have some statistical physics meaning for the family of random walks? Something akin to a ``percolation threshold''? In the random walks with $p<\frac12$ one more often stays put than moving to another position, while for $p>\frac12$ the opposite is true. Does this entail a singular change in the statistical random walk behavior, or is this only a peculiar singularity in the trend function? \subsection{Are there ``perfectly typical'' random Riemann-$\zeta$ walks?}\label{typicalSECRETS}\vspace{-5pt} If the intersection of all typical subsets of the set of random Riemann-$\zeta$ walks for given $p\in(0,1]$ and $s>0$ is non-empty, then the answer is ``Yes!'' --- in that case it would be very interesting to exhibit a perfectly typical walk explicitly, if at all possible. It is also conceivable that the intersection set is empty.\vspace{-10pt} \subsection{Complex random Riemann-$\zeta$ walks}\label{complexSECRETS}\vspace{-5pt} What happens if one extends $\Omega_{p}^\zeta(s)$ to complex $s$? The Riemann hypothesis implies for $\zeta(s)$ itself that its extremal walks with Im$(s)\neq 0$ converge to the origin if and only if $\Re{e}(s)=\hfrac12$ and Im$(s)$ is the imaginary part of a nontrivial zero of $\zeta(s)$. Does $\Re{e}(s)=\hfrac12$ play a special role also for the random Riemann-$\zeta$ walks? 
\vfill \section*{Acknowledgement}\vspace{-10pt} We truly thank: Benoit Cloitre for posing his problem; Alex Kontorovich for his enlightening explanations of the Riemann hypothesis; Norm Frankel and Larry Glasser for their interest in and helpful feedback on $C_{p;s}^{}$; Neil Sloane for OEIS and for his comments; Doron Zeilberger for noting that the combinatorics in our evaluation of $C_{p;s}^{}$ produces Catalan numbers. We also thank the referees for constructive comments. Some symbolic manipulations were obtained with MAPLE, as were the figures. \vfill \newpage \section*{Appendix on Power Walks} If instead of a step size which decreases by the power law $n\mapsto n^{-s}$ one uses an exponentially decreasing step size $n\mapsto s^{-n}$ with $s>1$, the outcome is a ``random geometric series'' (a sum over powers of $1/s$ with random coefficients $R_p^{}(n)\in\{-1,0,1\}$), \vspace{-15pt} \begin{equation}\label{RpowSERIES} \Omega_{p}^{\mbox{\tiny{pow}}}(s) := {\sum_{n\in\mathbb{N}}^{}} R_p^{}(n)\frac{1}{s^n}, \quad s >1,\quad p\in(0,1]; \end{equation} \vspace{-5pt} \noindent the pertinent walks are called ``geometric walks.'' With more general random coefficients one simply speaks of ``random power series'' and their ``power walks.'' All these random variables $\Omega_{p}^{\mbox{\tiny{pow}}}(s)$ have characteristic functions with infinite trigonometric product representations obtainable from our (\ref{charFUNC}) by replacing ${}^\zeta\to {}^{\mbox{\tiny{pow}}}$ and $n^{-s}\to s^{-n}$. Some of these can be evaluated in terms of elementary functions. We register a few special cases, beginning with three geometric walks and ending with a countable family of more general (but simple) power walks. 
\smallskip \noindent (i) Setting $p=1$ and $s=2$ gives the characteristic function (see formula (1) of \cite{Morrison}) \vspace{-15pt} \begin{equation}\label{EULERprod} \Phi^{}_{\Omega_{1}^{\mbox{\tiny{pow}}}(2)}(t) = \prod_{n\in\mathbb{N}} \cos\left(\frac{t}{2^n}\right) \equiv \frac{\sin t}{t}, \end{equation} \vspace{-5pt} \noindent an infinite product\footnote{By substituting $\pi/2$ for $t$ and repeatedly using a trigonometric angle-halving identity one arrives at Vi\`ete's infinite product for $2/\pi$, allegedly the first infinite product ever proposed.} representation of the sinc function derived by Euler algebraically by exploiting the trigonometric angle-doubling formulas (see \cite{Morrison}). Recall that sinc$(t)=\int_{-1}^1\frac12 e^{it\omega}d\omega$ is the (inverse) Fourier transform of the PDF $f_{\Omega^{\mbox{\tiny{unif}}}}(\omega)$ of the uniform random variable $\Omega^{\mbox{\tiny{unif}}}$ on $[-1,1]$, i.e. $f_{\Omega^{\mbox{\tiny{unif}}}}(\omega) = \frac12$ if $\omega \in [-1,1]$, and $f_{\Omega^{\mbox{\tiny{unif}}}}(\omega) =0$ otherwise. Indeed, $\Omega_{1}^{\mbox{\tiny{pow}}}(2)$ is a random walk representation of $\Omega^{\mbox{\tiny{unif}}}$ equivalent to the binary representation of $[0,1]$: recalling that any real number $x\in[0,1]$ has a binary representation\footnote{Those representations are not unique and one needs to consider their equivalence classes to identify them uniquely with their real outcome on $[0,1]$, cf. \cite{Kac}.} $x = 0.b_1b_2b_3...\equiv \sum_{n\in\mathbb{N}} b_n /2^n$ with $b_n\in\{0,1\}$, and noting that if $x\in[0,1]$ then $\omega:=2x-1\in[-1,1]$, it follows that any real number $\omega\in[-1,1]$ has a binary representation $\omega = \sum_{n\in\mathbb{N}} r_2^{}(n) /2^n$ with $r^{}_2(n)\in\{-1,1\}$. It is manifest that any such representation of $\omega$ is an outcome of $\Omega_{1}^{\mbox{\tiny{pow}}}(2)$.
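Euler's product (\ref{EULERprod}) is also easy to confirm numerically; a quick sketch (Python; the truncation depth is an arbitrary choice of ours):

```python
import math

def euler_product(t, depth=60):
    # Truncation of prod_{n>=1} cos(t / 2^n); the omitted tail factors
    # differ from 1 by O(t^2 / 4^depth), which is negligible here.
    prod = 1.0
    for n in range(1, depth + 1):
        prod *= math.cos(t / 2 ** n)
    return prod

for t in (0.5, 1.0, 2.5, 10.0):
    assert abs(euler_product(t) - math.sin(t) / t) < 1e-12
```

Consistent with the footnote, `euler_product(math.pi / 2)` reproduces Vi\`ete's value $\mathrm{sinc}(\pi/2)=2/\pi$.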
\smallskip \noindent (ii) $\Omega_{1}^{\mbox{\tiny{pow}}}(3)$ is the random variable for which the characteristic function \vspace{-15pt} \begin{equation}\label{MorrisonCANTORprod} \Phi^{}_{\Omega_{1}^{\mbox{\tiny{pow}}}(3)}(t) = \prod_{n\in\mathbb{N}} \cos\left(\frac{t}{3^n}\right) =:\Phi^{}_{\Omega^{\mbox{\tiny\sc{Cantor}}}}(t/2) \end{equation} \vspace{-5pt} \noindent is a trigonometric product discussed in \cite{Morrison}. Morrison explains that $\Phi^{}_{\Omega^{\mbox{\tiny\sc{Cantor}}}}(t)$ is the characteristic function of a random variable ${\Omega^{\mbox{\tiny\sc{Cantor}}}}$ that is uniformly distributed over the Cantor set constructed from $[-1,1]$ by removing middle thirds ad infinitum. For uniform distributions on other Cantor sets, see \cite{DFT}. We remark that this is a nice example of a random walk whose endpoints are distributed by a singular distribution, in the sense that the Cantor set obtained from $[-1,1]$ has Lebesgue measure $0$. As pointed out to us by an anonymous referee, the distribution of $\Omega_{1}^{\mbox{\tiny{pow}}}(s)$ is singular and concentrated on some Cantor set \emph{for all} $s>2$, while for $1 < s < 2$ the story is more complicated: Solomyak \cite{Solomyak} proved that the distribution of $\Omega_{1}^{\mbox{\tiny{pow}}}(s)$ is absolutely continuous (i.e., it is equivalent to a PDF, an integrable function) \emph{for almost every} $s \in (1, 2)$; see also \cite{PerezSolomyak}. However, the distribution of $\Omega_{1}^{\mbox{\tiny{pow}}}(s)$ is not absolutely continuous for all $s\in(1,2)$ --- in 1939 Erd\H{o}s found values of $s\in(1,2)$ for which the distribution is singular; these are still the only ones known. See \cite{PerezSchlagSolomyak} for further reading. 
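The digit structure behind this singularity can be made concrete: shifting an outcome $\sum_{n}r^{}(n)/3^n$ with $r^{}(n)\in\{-1,1\}$ by $\frac12=\sum_n 3^{-n}$ produces ternary digits $r^{}(n)+1\in\{0,2\}$, i.e.\ a point of the middle-thirds Cantor set of $[0,1]$. A small exact-arithmetic sketch (Python; the sample depth and seed are arbitrary choices of ours):

```python
import random

random.seed(1)
N = 48                                     # truncation depth of the walk
signs = [random.choice((-1, 1)) for _ in range(N)]

# Exact integer numerator of (sum_n signs[n-1]/3^n + 1/2) * 3^N,
# with 1/2 truncated to sum_{n=1}^{N} 3^(-n) = (3^N - 1)/2.
num = sum(r * 3 ** (N - i - 1) for i, r in enumerate(signs)) + (3 ** N - 1) // 2

digits = []                                # base-3 digits, least significant first
for _ in range(N):
    num, d = divmod(num, 3)
    digits.append(d)

assert all(d in (0, 2) for d in digits)    # the shifted walk lands in the Cantor set
```

The integer arithmetic avoids floating-point digit extraction, so the membership check is exact.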
\smallskip \noindent (iii) Setting $p=\frac23$ and $s=3$ yields \vspace{-15pt} \begin{equation}\label{MorrisonPROD} \Phi^{}_{\Omega_{2/3}^{\mbox{\tiny{pow}}}(3)}(t) = \prod_{n\in\mathbb{N}}\left[\frac13 +\frac23\cos\left(\frac{t}{3^{n}}\right)\right] \equiv \frac{\sin (t/2)}{t/2} , \end{equation} \vspace{-5pt} \noindent which becomes formula (9) of \cite{Morrison} under the rescaling $t\mapsto 2t$ (see also Exercise 3 on page 11 of \cite{Kac}.) Recalling our discussion of example (i), we conclude that (\ref{MorrisonPROD}) is the characteristic function of the uniform random variable on the interval $[-\frac12,\frac12]$, expressed as a random walk equivalent to the ternary representation of the real numbers in $[0,1]$, shifted to the left by $-\frac12$. \noindent (iv) The sinc representations (\ref{EULERprod}) and (\ref{MorrisonPROD}) (after rescaling $t\mapsto 2t$) are merely the first two members of a countable family of infinite trigonometric product representations of $\sin t / t$ derived by Kent Morrison \cite{Morrison}, and given by \begin{equation}\label{MorrisonPRODgeneral} \frac{\sin t}{t} = \prod_{n\in\mathbb{N}}\sum_{m=1-s}^{s-1}\frac{1-(-1)^{s+m}\hspace{-15pt}}{2s}\hspace{15pt} \cos\left(\frac{m}{s^n}t\right),\; 1<s\in\mathbb{N}; \end{equation} $s$ even in (\ref{MorrisonPRODgeneral}) is formula (12) in \cite{Morrison}, $s$ odd in (\ref{MorrisonPRODgeneral}) is formula (13) in \cite{Morrison}. These representations of the characteristic function of the uniform random variable over $[-1,1]$ are obtained by considering random walks that enter with equal likelihood into any one of $s$ branches which ``$s$-furkate'' off of every vertex of a symmetric tree centered at 0, equivalent to the usual ``$s$-ary'' representation of the real numbers in $[0,1]$ (shifted to the left by $-\frac12$ and scaled up by a factor 2). When $s>3$ these are no longer random geometric series, but still simple random power series. \noindent
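Representation (\ref{MorrisonPRODgeneral}) lends itself to the same kind of numerical check; a sketch (Python; the truncation depth is our own choice), here for $s=2$ and $s=3$, where the factors reduce to those of (\ref{EULERprod}) and of (\ref{MorrisonPROD}) (after the rescaling $t\mapsto 2t$), respectively:

```python
import math

def morrison_factor(s, x):
    # One factor: sum_{m=1-s}^{s-1} (1 - (-1)^(s+m)) / (2s) * cos(m*x).
    return sum((1 - (-1) ** (s + m)) / (2 * s) * math.cos(m * x)
               for m in range(1 - s, s))

def morrison_product(t, s, depth=40):
    # Truncation of the infinite product over n of morrison_factor(s, t/s^n).
    prod = 1.0
    for n in range(1, depth + 1):
        prod *= morrison_factor(s, t / s ** n)
    return prod

for s in (2, 3):
    for t in (0.7, 2.5, 9.0):
        assert abs(morrison_product(t, s) - math.sin(t) / t) < 1e-10
```

For $s=2$ each factor collapses to $\cos(t/2^n)$, recovering (\ref{EULERprod}); for $s=3$ it collapses to $\frac13+\frac23\cos(2t/3^n)$, recovering (\ref{MorrisonPROD}) at the rescaled argument.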
\section{Introduction} The discovery made by Riess and Perlmutter and respective collaborators \cite{Riess:1998cb,Perlmutter:1998np} that the universe where we live is expanding at an accelerating rate probably represents the greatest challenge of the century that Nature has provided to theoretical physicists (see, for example, \cite{Padmanabhan:2006ag}). Among the large number of models and physical mechanisms proposed in order to explain the accelerating era of the universe, the so-called $f(R)$-theories of gravity (see, for example, \cite{review}) are of some interest. Loosely speaking, they represent alternative theories of gravity where, in the Einstein-Hilbert action \begin{equation} I_{EH} = \frac{1}{16\pi G} \int dx \sqrt{-g} R \nonumber \ee the Ricci scalar is replaced by some appropriately chosen function $f(R)$ (or by other higher order invariants like the Gauss-Bonnet invariant, as in \cite{sasaki,fGB,Mota,GB,cognola06}). From this perspective, the present acceleration era is a manifestation of the new, more involved, geometry of the universe. This represents a conceptual difference from models where the acceleration is driven instead by the presence of exotic fields (quintessence, phantom field) which dominate the matter content of spacetime. We should point out that the idea of introducing a correction to the Einstein-Hilbert action in the form of $f(R) = R + R^2$ was proposed a long time ago by Starobinsky \cite{Starobinsky:1980te} in order to solve many of the problems left open by the so-called Hot Universe Scenario\footnote{This in turn had the consequence of introducing an accelerating expansion in the primordial universe, so that the Starobinsky model can be considered as the first inflationary model.}. Nowadays, $f(R)$-theories of gravity are understood as toy models without any claim of definiteness, that is, useful playgrounds where new physics, possibly related to observable events of our universe, can appear. 
Interest in them grew in cosmology with the appearance of the papers \cite{turner,capo}. From a mathematical point of view, $f(R)$-theories of gravity, polynomial in the scalar curvature and of course with degree higher than one (as the Starobinsky model), are known to be asymptotically free \cite{Fradkin:1981iu} and renormalizable \cite{Stelle:1976gc,Utiyama:1962sn,Fradkin:1974df}. However, the introduction of higher derivatives of the metric seems to lead to ghosts, states with negative norm which are believed to spoil any QFT of physical interest. A recent paper by Hawking \& Hertog \cite{Hawking:2001yt} has revitalized the interest in such kind of theories since, as shown by the authors, starting from a positive definite action, one can give meaning to the Euclidean path integral as a set of rules for calculating probabilities for observations. With reference to the specific model of a fourth order Lagrangian, namely that of the Pais-Uhlenbeck oscillator (PU) \cite{Pais:1950za}, it is shown that, paying the price of losing unitarity, one can never produce or observe negative norm states. This result demonstrates the soundness of the model and, in our opinion, justifies the attention it has recently received in \cite{Andrzejewski:2009bn}. It is clear that the first step toward the comprehension of the PU quantum mechanical oscillator is represented by the proper evaluation of its propagator. Both \cite{Hawking:2001yt,Andrzejewski:2009bn} have performed such a calculation. Our results, modulo a normalization constant, are in agreement with the ones presented in \cite{Andrzejewski:2009bn}, even though different techniques have been implemented. The aim of the present paper is to evaluate explicitly, step by step, the propagator of the PU oscillator. 
\section{Path Integral Representation of PU Oscillator Propagator} Let us consider the one-dimensional PU Lagrangian \cite{Pais:1950za}, $t \in \mathbb{R}$, \begin{equation} L_{PU} = \frac{1}{2} \left(\frac{d q}{dt}\right)^2 - V(q) - \frac{\alpha^2}{2} \left(\frac{d^2 q}{dt^2}\right)^2\;, \qquad \mbox{with}\qquad m, \alpha >0 \;.\label{pu lagrangian} \ee Its Euclidean version, obtained by Wick rotating the time $t$, i.e. $t\rightarrow -i\tau$, is \begin{equation} - L_E = \frac{1}{2} \left(\frac{d q}{d\tau}\right)^2 + V(q) + \frac{\alpha^2}{2} \left(\frac{d^2 q}{d\tau^2}\right)^2\;\label{euclidean lagrangian}. \ee For brevity, we shall denote with an overdot the derivative with respect to $\tau$. \\ Setting $\hbar =1$, we can formally write the propagator as \begin{equation} \int_{\mathcal{A}} \mbox{D} q \, e^{i \int_0^T dt L_{PU}} \quad\stackrel{Wick}{\longrightarrow}\quad Z_T (\mathcal{A}):=\int_{\mathcal{A}} \mbox{D} q\, e^{- I_E [q]} \label{propagator} \ee with the Euclidean action given by \begin{equation} I_E [q] = \int_0^T d\tau \left(\frac{1}{2} \dot q^2 + V(q) + \frac{\alpha^2}{2} \ddot q^2 \right)\; \label{euclidean action}. \ee The Euclidean action turns out to be positive definite as long as $V(q)$ is non-negative, and the propagator (\ref{propagator}), as explained in \cite{Hawking:2001yt}, can be used, at least in the Gaussian approximation, to extract probabilities for physical observations. Here, $\mbox{D} q$ represents the formal functional measure and $\mathcal{A}$ the boundary conditions necessary to give a meaning to a formal path integral. \\ As is well known, in the usual second order theory, it would be sufficient to specify $q$ on the initial and final time slice in order to make the propagator well defined. However, the present theory is of order higher than two, so that extra boundary conditions are expected to be involved in the definition of (\ref{propagator}). 
As already proposed in \cite{kleinert}, and fully explained in \cite{Hawking:2001yt}, the right choice is provided by \begin{equation} \mathcal{A}: \qquad \qquad \qquad q(0) = q_0 \;, \qquad q(T)=q_T \;, \qquad\qquad \dot q(0)= \dot q_0\;, \qquad \dot q(T) = \dot q_T \label{A1}\;, \ee since any condition on $\ddot q$ would otherwise make the action infinite. Having established the boundary conditions, the propagator (\ref{propagator}) can be re-written in the more evocative form \begin{equation} Z_T (\mathcal{A}) = \langle q_T, \dot q_T; \tau=T \,|\, q_0, \dot q_0; \tau=0 \rangle. \label{propagator2} \ee Now, one may formally proceed by splitting $q$ into a \textquotedblleft classical'' part, $q_{cl}$, and a quantum fluctuation $\hat q$, i.e. \begin{equation} q(\tau) := q_{cl}(\tau) + \hat q(\tau) \label{splitting}\,. \ee $q_{cl}$ solves the classical EOM obtained by $\delta I_E =0$ with boundary conditions (\ref{A1}). From (\ref{splitting}) and (\ref{A1}), it turns out that quantum fluctuations have to satisfy the following boundary conditions, \begin{equation} \hat{\mathcal{A}}: \hspace{4.5cm} \hat q(0) =0=\hat q(T) \qquad \&\qquad \dot {\hat q}(0)=0=\dot {\hat q}(T) \;.\label{qf bc1} \ee The Euclidean action (\ref{euclidean action}) becomes: \begin{eqnarray} I_E [q_{cl} + \hat q] &=& I_E [q_{cl}] + I_E [q_{cl},\hat q] + \int_0^T d\tau \,(\dot{\hat q} \dot q_{cl} + \hat q\, V'(q_{cl}) + \alpha^2\ddot{\hat q} \ddot q_{cl} ) \nonumber \\ &\stackrel{PI}{=}& I_E [q_{cl}] + I_E [q_{cl},\hat q] + \int_0^T d\tau \,\hat q \left[\alpha^2 \frac{d^4 q_{cl}}{d \tau^4} - \frac{d^2 q_{cl}}{d\tau^2} + V'(q_{cl})\right] + \nonumber \\ & & \hspace{2.4cm} + \left(\dot q_{cl} \hat q + \alpha^2 \ddot q_{cl} \dot{\hat q} - \alpha^2 \dddot q_{cl} \hat q\right)\Big\vert_0^T\,, \label{wait} \ea where \begin{equation} I_E [q_{cl},\hat q]=\int_0^T d\tau \left(\frac{1}{2} \dot{\hat q}^2 + \frac{\alpha^2}{2} \ddot{\hat q}^2 + W(q_{cl},\hat q) \right)\;. 
\label{euclidean action testa} \ee with \begin{equation} W(q_{cl}, \hat q)=V(q_{cl}+\hat q)-\hat q V'(q_{cl})- V(q_{cl})\,. \label{w} \ee Notice that, by extremizing the action, we get the EOM for the classical solution, namely \begin{equation} \mbox{EOM:} \hspace{4.5cm} \alpha^2 \frac{d^4 q_{cl}}{d \tau^4} - \frac{d^2 q_{cl}}{d\tau^2} + V'(q_{cl}) =0 \label{EOM}\,. \ee In (\ref{wait}) the integral vanishes on-shell, while the boundary term vanishes upon using (\ref{qf bc1}). To go on, we make the usual Gaussian approximation or, equivalently, we may restrict ourselves to quadratic potentials of the type $V(q)=\frac{m^2}{2}q^2$. Thus, $W(q_{cl}, \hat q) = W (\hat q)= \frac{m^2}{2} \hat q^2$, with $m^2$ constant. In light of this, after the splitting (\ref{splitting}) the Euclidean action neatly separates into \begin{eqnarray} I_E[q] &=& I_E[q_{cl}] + I_E [\hat q] \nonumber \\ &\stackrel{(\ref{qf bc1})}{=}&I_E[q_{cl}] + \frac{1}{2}\int_0^T d\tau \;\hat q(\tau) \left[\alpha^2 \frac{d^4}{d \tau^4} - \frac{d^2}{d\tau^2} +m^2\right] \hat q(\tau) \, \label{step} \ea and the classical action has been explicitly evaluated in \cite{Hawking:2001yt}.\\ The propagator assumes the form of a Gaussian integral in the quantum fluctuation variables: \begin{eqnarray} & &\langle q_T, \dot q_T; \tau=T \,|\, q_0, \dot q_0; \tau=0 \rangle \stackrel{(\ref{step})}{=} \exp -I_E[q_{cl}] \times \nonumber \\ & & \hspace{3.5cm}\times \int_{\mathcal{A}} \mbox{D} \hat q \,\exp -\frac{1}{2}\int_0^T d\tau \;\hat q(\tau) \left[\alpha^2 \frac{d^4}{d \tau^4} - \frac{d^2}{d\tau^2} +m^2\right] \hat q(\tau) \label{gaussian} \ea \\ \\ As a consequence, one has to give a meaning to the formal Gaussian path integral. 
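As a numerical illustration (our addition, for the quadratic potential $V=\frac{m^2}{2}q^2$), the cross term in (\ref{wait}) vanishes once $q_{cl}$ is on-shell and $\hat q$ obeys (\ref{qf bc1}). The following Python sketch takes $q_{cl}(\tau)=\cosh(\lambda_1\tau)$, a solution of the linear EOM, and the polynomial fluctuation $\hat q(\tau)=\tau^2(T-\tau)^2$, and checks the cancellation by Simpson quadrature; the specific parameter values are illustrative:

```python
import math

a, m, T = 1.0, 0.3, 1.5                      # illustrative PU parameters, 2*a*m < 1
l1 = math.sqrt((1 - math.sqrt(1 - 4*a**2*m**2)) / (2*a**2))
assert abs(a**2*l1**4 - l1**2 + m**2) < 1e-12   # l1 solves the characteristic eq.

q_cl   = lambda t: math.cosh(l1*t)           # solves a^2 q'''' - q'' + m^2 q = 0
dq_cl  = lambda t: l1*math.sinh(l1*t)
d2q_cl = lambda t: l1**2*math.cosh(l1*t)

qh   = lambda t: t**2*(T - t)**2             # fluctuation obeying (qf bc1)
dqh  = lambda t: 2*T**2*t - 6*T*t**2 + 4*t**3
d2qh = lambda t: 2*T**2 - 12*T*t + 12*t**2

# integrand of the cross term in (wait), with V'(q) = m^2 q
cross = lambda t: dqh(t)*dq_cl(t) + m**2*qh(t)*q_cl(t) + a**2*d2qh(t)*d2q_cl(t)

def simpson(F, n=4000):
    h = T/n
    return (h/3)*(F(0.0) + F(T)
                  + 4*sum(F((2*k - 1)*h) for k in range(1, n//2 + 1))
                  + 2*sum(F(2*k*h) for k in range(1, n//2)))

assert abs(simpson(cross)) < 1e-8            # cross term vanishes on-shell
```

The same check fails, as it should, if $\lambda_1$ is replaced by a value that does not solve the characteristic equation.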
To this aim, let us denote \begin{equation} K_\mathcal{X} := \alpha^2 \frac{d^4}{d \tau^4} - \frac{d^2}{d\tau^2} + m^2 \;,\label{K} \ee as the fourth-order differential operator in $L_2(0,T)$, defined in the dense domain $D(K) := \{f, K_\mathcal{X} f \in L_2 (0,T) \;|\; \mathcal{X} (f) = 0 \}$ with suitable boundary conditions $\mathcal{X}$.\\ Let us try to determine $\mathcal{X}$ such that $K$ is a self-adjoint operator in $L_2(0,T)$. To this purpose, suppose that $f$, with derivatives $f^{(n)}$ ($n=0,\dots,4$) in $L_2(0,T)$, belongs to $D(K)$: \begin{eqnarray} (g,K f) &:=& \int_0^T d\tau \bar g(\tau) \left[\alpha^2 \frac{d^4}{d \tau^4} - \frac{d^2}{d\tau^2} +m^2\right] f(\tau) \nonumber \\ &=& \int_0^T d\tau \left[\alpha^2 \frac{d^4}{d \tau^4} - \frac{d^2}{d\tau^2} +m^2\right] \bar g(\tau) f(\tau) + \nonumber\\ & & + \dot{\bar g} f - \bar g \dot f + \alpha^2 (\bar g \dddot f - \dot{\bar g} \ddot f + \ddot{\bar g} \dot f - \dddot{\bar g} f) \Big\vert_0^T \nonumber \\ &=& (Kg,f) + B(0,T) . \ea $K$ is certainly symmetric if and only if \begin{equation} B(0,T) = 0 . \ee This condition is realized, among others, by functions $f$, $g$ with $f^{(n)}(\tau)$, $g^{(n)}(\tau)$ ($n=0, \dots,4$) absolutely continuous in $[0,T]$ s.t. \begin{equation} f(0)=0=f(T) \quad \& \quad \dot f(0)=0=\dot f(T) \;;\quad \quad g(0)=0=g(T) \quad \& \quad \dot g(0)=0=\dot g(T) \label{BC1} \ee or \begin{equation} f(0)=0=f(T) \quad \& \quad \ddot f(0)=0=\ddot f(T) \;; \quad\quad g(0)=0=g(T) \quad \& \quad \ddot g(0)=0=\ddot g(T) \label{BC2} \;. \ee Thus, we may invoke the general von Neumann--Krein method to find all the self-adjoint extensions of a symmetric operator. In our case, one has to find the $L_2(0,T)$ solutions to the equation \begin{equation} K^\dag u_{\pm}(\tau)=\pm i u_{\pm}(\tau)\,. \ee Since $(0,T)$ is compact, it turns out that the defect indices are $(n_+,n_-)=(4,4)$, meaning that all self-adjoint extensions are parametrized by $4 \times 4$ unitary matrices. 
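The symmetry relation $(g,Kf)=(Kg,f)$ under (\ref{BC1}) can be verified exactly on polynomial test functions, since all the integrals are then rational. The sketch below (our addition; the interval length and coefficients are illustrative) represents polynomials by coefficient lists, applies $K$, and compares the two inner products with exact rational arithmetic:

```python
from fractions import Fraction as F

T = F(2)                        # interval length (illustrative)
a2, m2 = F(1, 4), F(9, 100)     # alpha^2 and m^2 (illustrative)

def deriv(p):                   # polynomial derivative, p = [c0, c1, ...]
    return [k*c for k, c in enumerate(p)][1:]

def mul(p, q):
    r = [F(0)]*(len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci*cj
    return r

def integral(p):                # exact integral of p over [0, T]
    return sum(c*T**(k + 1)/(k + 1) for k, c in enumerate(p))

def K(p):                       # K p = a^2 p'''' - p'' + m^2 p
    d2 = deriv(deriv(p))
    d4 = deriv(deriv(d2))
    d4 = d4 + [F(0)]*(len(p) - len(d4))
    d2 = d2 + [F(0)]*(len(p) - len(d2))
    return [a2*x4 - x2 + m2*x0 for x4, x2, x0 in zip(d4, d2, p)]

# f = t^2 (T-t)^2 and g = t^2 (T-t)^3 vanish, together with their first
# derivatives, at 0 and T, i.e. they satisfy (BC1)
f = [F(0), F(0), T**2, -2*T, F(1)]
g = [F(0), F(0), T**3, -3*T**2, 3*T, F(-1)]

assert integral(mul(g, K(f))) == integral(mul(K(g), f))   # (g, Kf) = (Kg, f)
```

The equality holds exactly because every boundary term in $B(0,T)$ contains either $f$, $\dot f$, $g$, or $\dot g$ evaluated at the endpoints.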
For our purposes, we simply observe that the physical boundary conditions (\ref{BC1}), corresponding to (\ref{A1}) - (\ref{qf bc1}), can be represented by a unitary matrix, so that the operator $K$ is self-adjoint. This is also confirmed by the fact that the associated spectral problem is well defined, even though the equation which implicitly defines the eigenvalues is highly transcendental. In fact, for example, even the spectrum of the simplest fourth-order operator $K^{(0)}:= \alpha^2\frac{d^4}{d\tau^4}$ with boundary conditions (\ref{BC1}) is analytically inaccessible, due to the impossibility of solving the transcendental equation\footnote{Denoting by $\lambda_n$ the $n$-th eigenvalue of $K^{(0)}$, we set $z_n:=\lambda_n^{1/4}/\sqrt{\alpha}$.} $\cos(T z_n) = \mbox{sech}(T z_n )$. \vspace{0.4cm}\\ On the other hand, it is also easy to show that the boundary conditions (\ref{BC2}) also define a self-adjoint extension. In this case, however, the spectral problem is much easier to handle. In fact, let us denote by $\bar K$ \begin{equation} \bar K = L_0 + m^2 + \alpha^2 L_0^2 \;, \qquad \mbox{with (\ref{BC2}) as BC}\,, \ee where $L_0= - \frac{d^2}{d \tau^2}$. The eigenfunctions are $\sin\left(\frac{n\pi\tau}{T}\right)$ and the spectrum is \begin{equation} \sigma(\bar K):= \{\bar\lambda_n := \left(\frac{\pi n}{T}\right)^2 + m^2 + \alpha^2 \left(\frac{\pi n}{T}\right)^4, \, n\in \mathbb{N}\}\;.\label{unphys spectrum} \ee The problem is that the boundary conditions (\ref{BC2}) are ``unphysical'', in the sense that they do not enter the expression of our propagator. Nevertheless, as we will see, they will play an important role. We conclude this Section with the final form of the PU propagator, obtained performing the Gaussian integral (\ref{gaussian}): \begin{equation} \langle q_T, \dot q_T; \tau=T \,|\, q_0, \dot q_0; \tau=0 \rangle = \sqrt{\frac{2\pi}{\mbox{Det}\,K_{\mathcal{A}}}} \,\exp \left(-I_E[q_{cl}]\right) \;. 
\label{evaluation} \ee \section{Regularization of Functional Determinants} The goal is to give a rigorous meaning to the formal functional determinant which appears in (\ref{evaluation}). This is also called the prefactor. In \cite{Hawking:2001yt}, the authors have performed the calculation of the prefactor under the (implicit) assumption that the Van Vleck--Pauli method is valid even for the higher-order dynamical system. This is not obvious, since the Pauli theorem is based on Hamilton-Jacobi theory for ordinary second order Lagrangians. In our case, the Hamiltonian formalism is quite different from the ordinary one, and is known as the Ostrogradsky formulation. A second attempt has been recently proposed in \cite{Andrzejewski:2009bn}, but we postpone our comments on it. \vspace{0.6cm}\\ In our approach, we shall regularize the functional determinant by zeta-function regularization \cite{z1,z2,dowk76-13-3224,eli94,byts96}. A regularization is necessary since functional determinants are formally divergent. We recall that a simple regularization for the functional determinant associated with an elliptic operator $L$ may be chosen as \begin{equation} \ln\, \mbox{Det}\, L (\varepsilon)=-\int_0^\infty dt\ \frac{t^{\varepsilon-1}}{\Gamma(1+\varepsilon)} \Tr e^{-tL/\mu^2} =-\frac{1}{\varepsilon}\zeta\left(\varepsilon\,\Big\vert \frac{L}{\mu^2}\right)\,, \label{bb} \eeq where the zeta-function is defined by means of the Mellin-like transform \begin{equation} \zeta(s|L)=\frac{1}{\Gamma(s)}\int_0^\ii dt\ t^{s-1} \Tr e^{-tL}\qquad \& \qquad \zeta\left(s\, \Big\vert \frac{L}{\mu^2}\right) =\mu^{2s}\,\zeta(s|L)\:. \label{mt}\eeq For a differential operator of order $Q$ in $D$ dimensions, the integral is convergent in the region $\mbox{Re}\, s> \frac{D}{Q}$, where the function $\zeta(s|L)$ is analytic. It is possible to show that $\zeta(s|L)$ can be analytically continued in the whole complex plane and it is regular at $s=0$. 
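The Mellin-transform definition (\ref{mt}) can be tested numerically in the convergence region on a simple case (our addition; the operator and parameters are illustrative). For $L_0=-d^2/d\tau^2$ on $(0,\pi)$ with Dirichlet conditions the eigenvalues are $n^2$, so $\zeta(2|L_0)=\sum_n n^{-4}=\pi^4/90$. The sketch substitutes $t=u^2$ so that the integrand is regular at the origin:

```python
import math

def heat_trace(t):
    """theta(t) = sum_{n>=1} exp(-t n^2), the heat trace of L0 = -d^2/dtau^2
    on (0, pi) with Dirichlet conditions (eigenvalues n^2)."""
    n_max = int(math.sqrt(42.0 / t)) + 1   # remaining terms are < 1e-18
    return sum(math.exp(-t*n*n) for n in range(1, n_max + 1))

def zeta_via_mellin(s, u_max=8.0, n=4000):
    """zeta(s|L0) from the Mellin transform (mt), after t = u^2 so the
    integrand 2 u^(2s-1) theta(u^2) is smooth at u = 0."""
    def F(u):
        return 0.0 if u == 0.0 else 2.0*u**(2*s - 1)*heat_trace(u*u)
    h = u_max/n
    simpson = (h/3)*(F(0.0) + F(u_max)
                     + 4*sum(F((2*k - 1)*h) for k in range(1, n//2 + 1))
                     + 2*sum(F(2*k*h) for k in range(1, n//2)))
    return simpson/math.gamma(s)

# spectral zeta at s = 2: sum 1/n^4 = pi^4/90
assert abs(zeta_via_mellin(2.0) - math.pi**4/90) < 1e-6
```

The same routine reproduces $\zeta(3|L_0)=\pi^6/945$, confirming the transform in the region $\mbox{Re}\,s>\frac{D}{Q}=\frac12$.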
Thus, by expanding (\ref{bb}) in a Taylor series, we obtain \begin{equation} \ln\, \mbox{Det}\,L(\varepsilon)=-\frac{1}{\varepsilon} \zeta\left(0\, \Big\vert\frac{L}{\mu^2}\right) -\zeta'\left(0\, \Big\vert\frac{L}{\mu^2}\right) + \mathcal{O}(\varepsilon) \label{dow33} \eeq and the regularised functional determinant associated with $L$ can be defined by taking the finite part in the limit $\varepsilon \to 0$, that is \begin{equation} \ln\,\mbox{Det}\, L = -\zeta'\left(0\, \Big\vert\frac{L}{\mu^2}\right) = -\zeta'(0|L) -\ln\mu^2\:\zeta(0|L)\,, \eeq leading to the usual zeta-function regularisation prescription \cite{z1,z2}. The divergences are governed by the computable coefficient $\zeta(0|L)$, which does not depend on the arbitrary scale parameter $\mu$.\\ \\ Within this direct zeta-function regularization, in order to calculate $\mbox{Det} K_{\mathcal{A}}$, we need to know the spectrum explicitly. As already stressed, one often does not know the spectrum explicitly. Thus, we are forced to make use of a powerful theorem proved by Forman in \cite{Forman:1987}, which is a generalization of the Gelfand--Yaglom \cite{GY} and Levit--Smilansky \cite{LS} theorems. Adapted to the case at hand, we may state it in the following way\footnote{What we report here is a simplified version of \underline{Proposition 3.9} in \cite{Forman:1987}.}: \vspace{0.6cm} \begin{flushleft} \textbf{Theorem} (Forman, 1987): Let $K$, $\bar K$ be operators defined in $[0,T]$ of the form \begin{eqnarray} K &=& P_0(\tau) \frac{d^n }{d\tau^n} + \mathcal{O}\left(\frac{d^{n-2} }{d\tau^{n-2}}\right) \nonumber\\ \bar K &=& P_0(\tau) \frac{d^n }{d\tau^n} + \mathcal{O} \left(\frac{d^{n-2} }{d\tau^{n-2}}\right) . \ea Let $\mathcal{A}$ be any boundary condition represented by matrices $(M,N)$, s.t. \begin{equation} M \left(\begin{array}{c} h(0) \\ \cdots \\ h^{(n-1)}(0) \end{array}\right) + N \left(\begin{array}{c} h(T) \\ \cdots \\ h^{(n-1)}(T) \end{array}\right) = 0 \;. 
\ee Then, \begin{equation} \frac{\mbox{Det} K_\mathcal{A}}{\mbox{Det} \bar K_\mathcal{\bar A}} = \frac{\det (M + N Y_{K}(T))}{\det (\bar M + \bar N Y_{\bar K}(T))} \;.\label{forman} \ee where $(\bar M,\bar N)$ are matrices defining other boundary conditions smoothly related to $(M,N)$. Above, for any $h$ such that $K h =0$, $Y_K(\tau)$ acts as \begin{equation} \left(\begin{array}{c} h(\tau) \\ \cdots \\ h^{(n-1)}(\tau) \end{array}\right) = Y_K(\tau) \left(\begin{array}{c} h(0) \\ \cdots \\ h^{(n-1)}(0) \end{array}\right) \;. \label{Y} \ee \end{flushleft} \vspace{0.6cm} A solution of this equation is given by \begin{equation} Y_K(\tau) = \left(\begin{array}{cccc} u_1(\tau) & u_2(\tau) & u_3(\tau) & u_4(\tau)\\ \dot u_1(\tau) & \dot u_2(\tau) & \dot u_3(\tau) & \dot u_4(\tau) \\ \ddot u_1(\tau) & \ddot u_2(\tau) & \ddot u_3(\tau) & \ddot u_4(\tau) \\ \dddot u_1(\tau) & \dddot u_2(\tau) & \dddot u_3(\tau) & \dddot u_4(\tau) \end{array}\right) \;, \ee where $K u_j=0$ for $j=1,\dots, 4$, satisfying the initial conditions \begin{eqnarray} u_1(0)&=&1 \;, \qquad u_j(0)=0 \;,\quad j\neq 1 \;, \nonumber \\ \dot u_2(0)&=&1 \;, \qquad \dot u_j(0)=0 \;, \quad j\neq 2 \;,\nonumber \\ \ddot u_3(0)&=&1 \;, \qquad \ddot u_j(0)=0 \;, \quad j\neq 3\;,\nonumber \\ \dddot u_4(0)&=&1 \;, \qquad \dddot u_j(0)=0 \;, \quad j\neq 4 \label{u BC} . \ea \vspace{0.4cm}\\ The role of $K$ which appears in the Theorem is played by the operator (\ref{K}). The boundary conditions (\ref{qf bc1}) can be put in correspondence with the matrices\footnote{The choice is highly non-unique, of course. The final result, however, seems not to depend on this large freedom. Cf. \cite{Forman:1987}.} \begin{equation} M= \left(\begin{array}{cc} \mathbb{I}_{(2)} & 0 \\ 0 & 0 \end{array}\right) \;, \qquad N= \left(\begin{array}{cc} 0 & 0 \\ \mathbb{I}_{(2)} & 0 \end{array}\right) .\label{MN} \ee Finally, in our case, the matrix $Y_{K}(T)$ can be easily computed. 
The general solution of $K u=0$ reads \begin{equation} u_j(\tau) = A_j \sinh(\lambda_1 \tau) + B_j \cosh(\lambda_1\tau) + C_j \sinh(\lambda_2\tau) + D_j \cosh(\lambda_2\tau) \,,\label{fundamental soln} \ee with \begin{equation} \lambda_{1,2} = \sqrt{\frac{1\mp \sqrt{1-4\alpha^2 m^2}}{2\alpha^2}} \label{lambda} \;. \ee In the oscillatory regime, that is $2\alpha m< 1$, the roots $\lambda_{1,2}$ are real, as can be checked by expanding the double radicals above.\\ Imposing the initial conditions (\ref{u BC}), one has \begin{eqnarray} u_1(\tau) &=& \frac{\lambda_2^2}{\lambda_2^2-\lambda_1^2} \cosh(\lambda_1\tau) - \frac{\lambda_1^2}{\lambda_2^2-\lambda_1^2} \cosh(\lambda_2\tau) \;,\nonumber \\ u_2(\tau) &=& \frac{\lambda_2^2}{\lambda_1(\lambda_2^2-\lambda_1^2)} \sinh(\lambda_1\tau) - \frac{\lambda_1^2}{\lambda_2(\lambda_2^2-\lambda_1^2)} \sinh(\lambda_2\tau) \;, \nonumber\\ u_3(\tau) &=& - \frac{1}{\lambda_2^2-\lambda_1^2} \cosh(\lambda_1\tau) + \frac{1}{\lambda_2^2-\lambda_1^2} \cosh(\lambda_2\tau) \nonumber \\ u_4(\tau) &=& - \frac{1}{\lambda_1(\lambda_2^2-\lambda_1^2)} \sinh(\lambda_1\tau) + \frac{1}{\lambda_2(\lambda_2^2-\lambda_1^2)} \sinh(\lambda_2\tau) . \label{u} \ea Therefore, the right hand side numerator of (\ref{forman}) is \begin{eqnarray} & & \det (M + N Y_K(T)) \stackrel{(\ref{MN})}{=} u_3(T) \dot u_4(T) - u_4(T) \dot u_3(T) \hspace{6cm} \nonumber \\ & & \hspace{1cm} \stackrel{(\ref{u})}{=} \frac{1}{(\lambda_2^2-\lambda_1^2)^2} \left(2-2\cosh(\lambda_1 T) \cosh(\lambda_2 T) + \frac{\lambda_1^2+\lambda_2^2}{\lambda_1\lambda_2} \sinh(\lambda_1 T) \sinh(\lambda_2 T)\right) \nonumber \\ & & \hspace{1cm}\stackrel{(\ref{lambda})}{=} \frac{\alpha^3}{m} \left[\frac{1}{1+2\alpha m} \sinh^2\left(\frac{\sqrt{1+2\alpha m} T}{2\alpha}\right) - \frac{1}{1-2\alpha m} \sinh^2\left(\frac{\sqrt{1-2\alpha m} T}{2\alpha}\right) \right] . 
\label{num} \ea In order to complete the computation of the propagator, we need to find the fourth-order differential operator which will play the role of $\bar K$ in Forman's theorem. The requirement on the candidate is that we must be able to compute its functional determinant in an independent way. Loosely speaking, there are two ways to compute the functional determinant of an operator, that is: (i) taking the product of its eigenvalues (if you know the spectrum); (ii) using some \textquotedblleft smart'' mathematical theorem. We have already decided to follow the latter to compute $\mbox{Det} \,K$, something which forces us to take the former way for $\mbox{Det} \,\bar K$. \\ \\ As we have seen, Forman's theorem lets us play with the boundary conditions both of $K$ and $\bar K$. This suggests choosing $\bar K$ formally equal to $ K$, with the unphysical boundary conditions (\ref{BC2}). We recall that in this case we know the spectrum, given in (\ref{unphys spectrum}).\\ A quick way to compute the functional determinant $\mbox{Det}\, \bar K$ may be the following. Zeta-function regularized determinants suffer, in general, from the so-called multiplicative anomaly \cite{eliz98-194-613,byts98-38-1075,eliz98u-413}, namely \begin{equation} \mbox{Det} (\,A \,B)=\mbox{Det} \,A \;\mbox{Det} \,B\ e^{a(A,B)}\,. \ee The quantity $a(A,B)$ is called the multiplicative anomaly. In our case, by noticing that \begin{equation} \bar K = \left(\alpha L_0 + \frac{1 - \sqrt{1-4\alpha^2 m^2}}{2\alpha}\right)\left(\alpha L_0 + \frac{1 + \sqrt{1-4\alpha^2 m^2}}{2\alpha}\right) \equiv \bar K_- \bar K_+\;, \ee and observing that, since $\bar K$ is an ordinary fourth order differential operator, the multiplicative anomaly vanishes (cf. \cite{eliz98u-413}), we have \begin{equation} \mbox{Det}\,\bar K = (\mbox{Det}\,\bar K_-)(\mbox{Det}\,\bar K_+). \ee In this way, $\bar K_\pm$ are just second order differential operators obtained by shifting $L_0$ by a constant term. 
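The algebra leading to (\ref{num}), as well as the factorization $\bar K = \bar K_- \bar K_+$, can be checked numerically (our addition; the parameter values are illustrative). The Python sketch below builds $u_3$, $u_4$ from (\ref{u}), verifies the initial conditions (\ref{u BC}), compares the Wronskian-type determinant with the closed form (\ref{num}), and confirms that the quartic symbol of $\bar K$ splits into the two quadratic symbols of $\bar K_\mp$:

```python
import math

a, m, T = 1.0, 0.3, 2.0   # illustrative parameters with 2*a*m < 1
disc = math.sqrt(1 - 4*a**2*m**2)
l1 = math.sqrt((1 - disc)/(2*a**2))
l2 = math.sqrt((1 + disc)/(2*a**2))

# lambda_{1,2} are roots of the characteristic polynomial a^2 L^4 - L^2 + m^2
for L in (l1, l2):
    assert abs(a**2*L**4 - L**2 + m**2) < 1e-12

D = l2**2 - l1**2
u3   = lambda t: (-math.cosh(l1*t) + math.cosh(l2*t))/D
u4   = lambda t: (-math.sinh(l1*t)/l1 + math.sinh(l2*t)/l2)/D
du3  = lambda t: (-l1*math.sinh(l1*t) + l2*math.sinh(l2*t))/D
du4  = lambda t: (-math.cosh(l1*t) + math.cosh(l2*t))/D
d2u3 = lambda t: (-l1**2*math.cosh(l1*t) + l2**2*math.cosh(l2*t))/D

# initial conditions (u BC): u3, u4 and first derivatives vanish at 0, u3''(0)=1
assert abs(u3(0.0)) < 1e-12 and abs(u4(0.0)) < 1e-12
assert abs(du3(0.0)) < 1e-12 and abs(du4(0.0)) < 1e-12
assert abs(d2u3(0.0) - 1.0) < 1e-12

# det(M + N Y_K(T)) = u3(T) u4'(T) - u4(T) u3'(T), compared with (num)
lhs = u3(T)*du4(T) - u4(T)*du3(T)
xp = math.sqrt(1 + 2*a*m)*T/(2*a)
xm = math.sqrt(1 - 2*a*m)*T/(2*a)
rhs = (a**3/m)*(math.sinh(xp)**2/(1 + 2*a*m) - math.sinh(xm)**2/(1 - 2*a*m))
assert abs(lhs - rhs) < 1e-10*abs(rhs)

# factorization bar-K = bar-K_- bar-K_+: the quartic symbol
# a^2 x^2 + x + m^2 in x (eigenvalue of L0) splits into two linear factors
cm_, cp_ = (1 - disc)/(2*a), (1 + disc)/(2*a)
for x in (0.1, 1.0, 7.3, 42.0):
    assert abs((a*x + cm_)*(a*x + cp_) - (a**2*x**2 + x + m**2)) < 1e-9
```

Note that $\dot u_4 = u_3$, which shortens the evaluation of the determinant.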
The calculation of $(\mbox{Det}\,\bar K_{\pm})$ is standard and we are not going to repeat it here. The result is \begin{equation} \mbox{Det}\,\bar K = \frac{\alpha}{m T^2} \left[\sinh^2\left(\frac{\sqrt{1+2\alpha m} T}{2\alpha}\right) - \sinh^2\left(\frac{\sqrt{1-2\alpha m} T}{2\alpha}\right) \right] \label{Kbarra} \;. \ee We may confirm this result (and, in turn, the absence of the multiplicative anomaly) by evaluating directly the determinant of $\bar K$. This target can be accomplished by implementing a standard trick which rests on the observation that the derivative with respect to $m^2$ of $\ln (\mbox{Det}\,\bar K)$ is a well defined quantity, namely \begin{eqnarray} \frac{d}{d(m^2)} \ln (\mbox{Det}\,\bar K) &=& \frac{d}{d(m^2)} \tr (\ln \bar K)\nonumber\\ &=& \frac{d}{d(m^2)} \sum_{n=1}^\infty \ln(\bar\lambda_n) \nonumber \\ &=& \sum_{n=1}^\infty \frac{1}{\bar\lambda_n} < \infty \label{s} \ea Thus, the problem has been reduced to computing the convergent series \begin{eqnarray} \mathcal S &\equiv& \sum_{n=1}^\infty \frac{1}{\bar\lambda_n} \nonumber \\ &=&\sum_{n=1}^\infty \frac{1}{\left(\frac{\pi n}{T}\right)^2 + m^2 + \alpha^2 \left(\frac{\pi n}{T}\right)^4} \nonumber \\ &=& \frac{T^2}{\sqrt{1-4\alpha^2 m^2}} \left(\sum_{n=1}^\infty \frac{1}{(\pi n)^2 + z_-^2} -\sum_{n=1}^\infty\frac{1}{(\pi n)^2 + z_+^2}\right) \;, \ea where for simplicity we have introduced the quantities \begin{equation} z_\pm := T \sqrt{\frac{1\pm \sqrt{1-4\alpha^2 m^2}}{2\alpha^2}} = \frac{T}{2\alpha} \left(\sqrt{1+2\alpha m} \pm \sqrt{1-2\alpha m}\right). \ee In terms of these new quantities, \begin{equation} d(m^2) = \mp \frac{\sqrt{1-4\alpha^2 m^2}}{T^2} (2z_\pm) dz_\pm. \ee Recalling the Mittag-Leffler expansion of the $\coth$ function, i.e. 
\begin{equation} \coth z = \frac{1}{z} + 2z \sum_{n=1}^\infty \frac{1}{(\pi n)^2 + z^2}, \ee it turns out that \begin{eqnarray} \ln (\mbox{Det}\,\bar K) &=& \int dz_- \left(\coth(z_-)- \frac{1}{z_-}\right) + \int dz_+ \left(\coth(z_+)- \frac{1}{z_+}\right) \nonumber \\ &=& \ln \left(\frac{\sinh(z_+)\sinh(z_-)}{z_+z_-}\right)\,, \ea so that the final result coincides indeed with (\ref{Kbarra}). \vspace{0.6cm}\\ Now that we know $\mbox{Det}\,\bar K$ from independent considerations, we can close the circle just computing the denominator of (\ref{forman}). Since $K$ and $\bar K$ are formally the same, the $Y$ matrix does not change. What changes are the boundary conditions which now are represented, for example, by \begin{equation} \bar M= \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \;, \qquad \bar N= \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) .\label{MNbar} \ee The right hand side denominator of (\ref{forman}) becomes \begin{eqnarray} & & \det (\bar M + \bar N Y_{\bar K}(T)) = u_2(T)\ddot u_4(T) - u_4(T) \ddot u_2(T) \hspace{6.5cm}\nonumber\\ & &\hspace{1.4cm} \stackrel{(\ref{u})}{=} \frac{1}{\lambda_1\lambda_2} \sinh(\lambda_1 T)\sinh(\lambda_2 T) \nonumber \\ & & \hspace{1.4cm}\stackrel{(\ref{lambda})}{=} \frac{\alpha}{m} \cdot \left[\sinh^2\left(\frac{\sqrt{1+2\alpha m} T}{2\alpha}\right) - \sinh^2\left(\frac{\sqrt{1-2\alpha m} T}{2\alpha}\right) \right] \label{den}. \ea Inserting (\ref{den}), (\ref{num}) and (\ref{Kbarra}) into (\ref{forman}), the functional determinant of the $K$ operator with boundary conditions (\ref{A1}) - (\ref{qf bc1}) is \begin{eqnarray} \mbox{Det}\,K_{\mathcal{A}} &=& \frac{\alpha^3}{m T^2} \left[\frac{1}{1+2\alpha m} \sinh^2\left(\frac{\sqrt{1+2\alpha m} T}{2\alpha}\right) + \right. \nonumber \\ & & \left. 
\hspace{4.5cm} - \frac{1}{1-2\alpha m} \sinh^2\left(\frac{\sqrt{1-2\alpha m} T}{2\alpha}\right) \right]. \label{denK} \ea This is the main result of the paper. A dimensional analysis tells us that, in natural units, $[\alpha] = [T] =(\mbox{mass})^{-1}$, so that $[\mbox{Det} \,K] = (\mbox{mass})^{-2}$. Up to a minor adjustment of the numeric factor, (\ref{denK}) is the prefactor presented in \cite{Andrzejewski:2009bn}, equation (9).\\ It is straightforward to check that, in the small $T$ limit, \begin{equation} \mbox{Det}\,K_{\mathcal{A}} \approx \frac{T^2}{12} + \frac{T^4}{180\alpha^2} + \mathcal{O}(T^6) \;,\qquad T\ll1\;, \ee while in the large $T$ limit, \begin{eqnarray} \mbox{Det}\,K_{\mathcal{A}} &=& \frac{\alpha}{m} \left[\frac{1}{1+2\alpha m} \,\frac{\exp\left(\sqrt{1+2\alpha m} \frac{T}{\alpha}\right)}{(2T/\alpha)^2} \,+\right. \nonumber \\ & & \left. \hspace{3.1cm} - \frac{1}{1-2\alpha m} \,\frac{\exp\left(\sqrt{1-2\alpha m} \frac{T}{\alpha}\right)}{(2T/\alpha)^2} \right] \,, \qquad T\gg 1\;, \ea which shows that the ground state probability amplitude, being proportional to $\mbox{Det}\,K_{\mathcal A} ^{-1/2}$, is indeed exponentially suppressed, making the Euclidean theory well defined.\\ Finally, notice that the vanishing ``mass'' term case is \begin{equation} \mbox{Det}\,K_{\mathcal{A}}\; \stackrel{m\rightarrow 0}{\longrightarrow} \;\frac{\alpha^4}{T^2} \left[2\left(1-\cosh\left(\frac{T}{\alpha}\right)\right) + \frac{T}{\alpha} \sinh\left(\frac{T}{\alpha}\right)\right] =: \mbox{Det}\, K^{(1)}_{\mathcal{A}}\;,\label{denK1} \ee where we have defined the operator $K^{(1)}_{\mathcal{A}}:= \alpha^2 \frac{d^4}{d\tau^4} - \frac{d^2}{d\tau^2}$ with boundary conditions (\ref{A1}) - (\ref{qf bc1}). 
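These limits, together with the eigenvalue-sum identity (\ref{s}), admit a direct numerical test (our addition; all parameter values are illustrative). The sketch below compares a finite-difference derivative of $\ln \mbox{Det}\,\bar K$ from (\ref{Kbarra}) with a truncation of the series over the spectrum (\ref{unphys spectrum}), then checks the small-$T$ expansion and the $m\to 0$ limit of (\ref{denK}):

```python
import math

a, T, m = 0.6, 1.5, 0.4     # illustrative parameters, 2*a*m < 1

def log_det_bar(m2):
    """log Det bar-K = log( sinh(z+) sinh(z-) / (z+ z-) ), eq. (Kbarra)."""
    mm = math.sqrt(m2)
    zp = (T/(2*a))*(math.sqrt(1 + 2*a*mm) + math.sqrt(1 - 2*a*mm))
    zm = (T/(2*a))*(math.sqrt(1 + 2*a*mm) - math.sqrt(1 - 2*a*mm))
    return math.log(math.sinh(zp)*math.sinh(zm)/(zp*zm))

def eigen_sum(m2, N=20000):
    """Truncated sum of 1/lambda_n over the spectrum (unphys spectrum)."""
    return sum(1.0/((math.pi*n/T)**2 + m2 + a**2*(math.pi*n/T)**4)
               for n in range(1, N + 1))

h = 1e-5                    # finite-difference step in m^2
fd = (log_det_bar(m**2 + h) - log_det_bar(m**2 - h))/(2*h)
assert abs(fd - eigen_sum(m**2)) < 1e-7          # checks identity (s)

def det_K(a_, m_, T_):
    """Closed form (denK) for Det K with the physical boundary conditions."""
    xp = math.sqrt(1 + 2*a_*m_)*T_/(2*a_)
    xm = math.sqrt(1 - 2*a_*m_)*T_/(2*a_)
    return (a_**3/(m_*T_**2))*(math.sinh(xp)**2/(1 + 2*a_*m_)
                               - math.sinh(xm)**2/(1 - 2*a_*m_))

# small-T expansion (here with a = 1): Det K ~ T^2/12 + T^4/(180 a^2)
for TT in (0.05, 0.1):
    approx = TT**2/12 + TT**4/180
    assert abs(det_K(1.0, 0.3, TT) - approx) < 1e-6*approx

# m -> 0: (denK) reduces to (denK1) for K^(1)
aa, TT = 1.0, 1.3
det_K1 = (aa**4/TT**2)*(2*(1 - math.cosh(TT/aa)) + (TT/aa)*math.sinh(TT/aa))
assert abs(det_K(aa, 1e-6, TT) - det_K1) < 1e-4*det_K1
```

The series converges like $n^{-4}$, so a truncation at $N=2\times10^4$ terms is far below the tolerance used.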
Given (\ref{denK1}), one can evaluate, for example, the determinant of the most elementary fourth-order differential operator of physical interest, namely $K^{(0)}_{\mathcal{A}}:= \alpha^2 \frac{d^4}{d\tau^4}$ in $\tau \in [0,T]$ (an operator whose spectrum, as said above, is inaccessible to us), which is \begin{equation} \mbox{Det}\,K^{(0)}_{\mathcal{A}} = \frac{T^2}{12} \label{detK0}\,. \ee We leave the details of the computation to the interested reader. \section{Conclusions} To summarize, we have devoted the paper to the computation of the quantum mechanical Euclidean propagator of the one-dimensional PU oscillator. As we have seen, the PU Lagrangian gives rise to a fourth order differential operator whose functional determinant, $\mbox{Det}\,K_{\mathcal{A}}$, has represented the main difficulty along the way. As shown by Hawking \& Hertog, the correct boundary conditions $\mathcal{A}$ we have to impose in order to give meaning to the Euclidean propagator are provided by equation (\ref{A1}). By making use of Forman's theorem, we have been able to evaluate $\mbox{Det} \,K_{\mathcal{A}}$, so that the main result of the paper can be stated in the following way, \begin{equation} \langle q_T, \dot q_T; \tau=T \,|\, q_0, \dot q_0; \tau=0 \rangle = \sqrt{\frac{2\pi}{\mbox{Det}\,K_{\mathcal{A}}}} \,\exp \left(-I_E[q_{cl}]\right) \;, \ee where \begin{eqnarray} \mbox{Det}\,K_{\mathcal{A}} &=& \frac{\alpha^3}{m T^2} \left[\frac{1}{1+2\alpha m} \sinh^2\left(\frac{\sqrt{1+2\alpha m} T}{2\alpha}\right) + \right. \nonumber \\ & & \left. \hspace{4.5cm} - \frac{1}{1-2\alpha m} \sinh^2\left(\frac{\sqrt{1-2\alpha m} T}{2\alpha}\right) \right] \ea and the classical Euclidean action can be found in \cite{Hawking:2001yt}, equations (A5)-(A6). \vspace{0.4cm}\\ As a byproduct, we have shown how Forman's theorem can be usefully implemented to give other functional determinants of potential interest. In this regard, Forman's theorem proves to be an extraordinarily powerful instrument. 
The generalization to higher-order quadratic Lagrangians $L(q, q^{(1)},\cdots, q^{(r)})$ is immediate, even though the computations of course become much more involved. \\ We conclude with the observation that the method described in this paper may also be useful in the so-called Ho\v{r}ava-Lifshitz non-relativistic renormalizable theory of gravity \cite{hora}, where higher spatial derivatives indeed appear in the Lagrangian of the theory, as well as in inflationary cosmology \cite{tim}.
\section{Introduction} Problem D21 in Guy's book \cite{Guy} asks whether there are any triangles whose area is rational and whose sides and medians have rational lengths. The question of rational medians was already considered by Euler \cite{Euler}, who parametrized triangles whose medians are rational, but without imposing the other conditions. To this day Guy's problem D21 remains open. Buchholz and Rathbun \cite{Buchholz-Rathbun, Buchholz-Rathbun2} parametrized families with two rational medians using elliptic curves. Other authors have worked with the elliptic curves that appear from this problem \cite{Dujella-Peral,Dujella-Peral2, Ismail}. More generally, Heron triangles (triangles with rational side lengths and rational area) have been extensively studied by various authors \cite{Sastry, Kramer, Goins, Bremner-Heron, vanLuijk, Luca, Stanica, Beardon, Halbeisen}. More general cevians were studied in \cite{Buchholz-thesis,Laflamme-Lalin}. In \cite{Hartshorne-vanLuijk} Hartshorne and van Luijk introduced the idea of studying rationality of lengths in hyperbolic triangles. The second and third named authors followed this idea and studied various problems related to finding rational cevians in hyperbolic triangles \cite{LalinMila}. It should be noted that a slightly different notion of rationality for hyperbolic triangles was considered by Brody and Schettler in \cite{Brody-Schettler}. In this work we study some analogous problems for spherical triangles. To do this, we need to define the idea of rationality in this context. A spherical triangle is a triangle on the surface of the unit sphere whose sides are given by arcs in great circles, i.e., it is determined by the intersections of three planes passing through the center of the sphere with the surface of the sphere. We will focus on {\em proper} triangles, for which the sides $a, b, c$ and the angles $\alpha, \beta, \gamma$ are all smaller than $\pi$. 
Thus, in a proper spherical triangle, we have \[\pi < \alpha + \beta+ \gamma <3\pi\] and \[0<a +b+c<2\pi.\] The Gauss--Bonnet theorem implies that the area of such spherical triangle is given by \begin{equation}\label{eq:GB} A=\alpha+\beta+\gamma -\pi. \end{equation} Following a convention analogous to what was adopted in \cite{Hartshorne-vanLuijk,LalinMila}, we will call an angle $\omega$, the area $A$, or a length $x$ rational if and only if the sines and cosines of these quantities are rational, or equivalently, if $e^{i\omega}, e^{iA},$ or $e^{ix} \in \mathbb{Q}(i)$. In particular, notice that if $\alpha, \beta$, and $\gamma$ are rational, equation \eqref{eq:GB} implies that so is the area $A$. Recall that $e^{ix} \in \mathbb{Q}(i)$ if and only if $\cos(x) = \frac{1 - t^2}{1 + t^2}$ and $\sin(x) = \frac{2t}{1+t^2}$ for some $t \in \mathbb{Q}$. Indeed, we have \begin{equation}\label{eq:rational} e^{ix}=\frac{i-t}{i+t}\in \mathbb{Q}(i) \Longleftrightarrow t=\frac{\sin(x)}{1+\cos(x)}\in \mathbb{Q} \Longleftrightarrow (\cos(x),\sin(x))=\left(\frac{1-t^2}{1+t^2},\frac{2t}{1+t^2}\right). \end{equation} By abuse of terminology we will call $t$ the \emph{rational side} (resp.\ \emph{rational angle}) of a spherical triangle if its side (resp.\ angle) is $x$. In sum, a spherical triangle with area $A$, angles $\alpha, \beta, \gamma$ and sides $a, b, c$ is a {\it spherical Heron triangle} or {\it spherical rational triangle} if \[e^{ia}, e^{ib}, e^{ic}, e^{i\alpha}, e^{i\beta}, e^{i\gamma} \in \mathbb{Q}(i),\] and this implies that $e^{iA} \in \mathbb{Q}(i)$ as well. 
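As a quick numerical aside (an illustration only, not part of the argument), the equivalences in \eqref{eq:rational} can be checked with exact rational arithmetic, for instance in Python:

```python
from fractions import Fraction

def cos_sin_from_t(t):
    # (cos x, sin x) for the rational parameter t = sin(x) / (1 + cos(x))
    return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

def t_from_cos_sin(c, s):
    # inverse map, defined whenever cos(x) != -1
    return s / (1 + c)

for t in (Fraction(1, 2), Fraction(3), Fraction(-2, 7)):
    c, s = cos_sin_from_t(t)
    assert c**2 + s**2 == 1           # (cos x, sin x) lies on the unit circle
    assert t_from_cos_sin(c, s) == t  # the two maps are mutually inverse
```

For example, $t=\tfrac{1}{2}$ gives $(\cos(x),\sin(x))=(\tfrac{3}{5},\tfrac{4}{5})$, so the corresponding side or angle is rational in the above sense.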
\begin{figure} \centering
\newcommand{\InterSec}[3]{%
\path[name intersections={of=#1 and #2, by=#3, sort by=#1,total=\t}] \pgfextra{\xdef\InterNb{\t}};
}
\begin{tikzpicture}
\pgfmathsetmacro\R{2}
\draw (0,0,0) circle (\R);
\foreach \angle[count=\n from 1] in {-10,225,110} {
\begin{scope}[rotate=\angle]
\path[draw,dashed,opacity=0.3,name path global=d\n] (2,0) arc [start angle=0, end angle=180, x radius=2cm, y radius=1cm] ;
\path[draw,name path global=s\n] (-2,0) arc [start angle=180, end angle=360, x radius=2cm, y radius=1cm] ;
\end{scope}
}
\InterSec{s1}{s2}{I3} ;
\InterSec{s1}{s3}{I2} ;
\InterSec{s3}{s2}{I1} ;
% \fill[fill=black,opacity=0.2] (I1) to [bend left=19] (I2) to [bend left=21] (I3) to [bend left=20] (I1);
\InterSec{d1}{d2}{J3} ;
\InterSec{d1}{d3}{J2} ;
\InterSec{d3}{d2}{J1} ;
\end{tikzpicture}
\caption{A spherical triangle.}
\label{fig:my_label2}
\end{figure}
One of the goals of this article is to compare the situation in the spherical and hyperbolic worlds. In this sense, some of our results will be analogous to the ones in \cite{LalinMila}. First we treat the generation of spherical Heron triangles. If we fix two sides, we obtain the following result. \begin{thm} \label{th:sides} For all but finitely many choices of rational sides with parameters $v$ and $w$, there are infinitely many spherical triangles such that the third side and the angles are rational. \end{thm} This result is completely analogous to \cite[Theorem 3]{LalinMila}. It is achieved by parametrizing such triangles with points in the elliptic curve \[y^2 = x (x - (v + v^{-1})^2) (x - (w + w^{-1})^2)\] and showing that for most values of $v,w\in \mathbb{Q}$, this elliptic curve has positive rank. Another approach, which follows naturally from extending the congruent number problem and the techniques of \cite{Goins}, is to fix an angle and the area.
While this was achieved for the hyperbolic case in \cite[Theorem 1, Corollary 2]{LalinMila}, we encounter a difficulty here, as we are not able to construct the corresponding elliptic curve over $\mathbb{Q}$. Instead, we consider some particular cases. For the spherical congruent number problem, we obtain \begin{thm}\label{thm:congruent} For all rational areas $m \neq 1$ there are infinitely many area $m$ right spherical triangles with rational angles and sides. Thus, the spherical congruent number problem has a positive solution. \end{thm} This is achieved by working with the elliptic curve \[y^2 = x (x -2 m (m^2 + 1)) (x - 4 m (m^2 + 1)).\] It is also known that the congruent number problem has a positive solution in hyperbolic space \cite{LalinMila}. Thus, the Euclidean plane is very special from this point of view. We also consider the case of an isosceles triangle in this context, and likewise obtain infinitely many isosceles Heron triangles with prescribed area and repeated angle, for most choices of the parameters. A surprising result in the spherical setting is that problem D21 has a positive solution. \begin{thm} \label{thm:equilateral} There exists a unique rational equilateral spherical Heron triangle whose sides have length $\frac{\pi}{2}$ and whose angles measure $\frac{\pi}{2}$. The medians of this triangle measure $\frac{\pi}{2}$ and are, therefore, rational. \end{thm} This is contrary to the Euclidean and hyperbolic settings, where such equilateral triangles do not exist. More precisely, this is the first setting in which a positive solution has been found for problem D21. We also explore and find positive results for the existence of triangles with rational sides and one rational median, isosceles triangles with rational sides and two rational medians, and certain existence results involving a rational area bisector. The results are analogous to what is known for the hyperbolic case.
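As a sanity check on Theorem~\ref{thm:congruent} (a numerical aside, not part of the proof), the point $P(m)=((m^2+1)(m+1)^2,(m^2+1)^2(m^2-1))$ of \cite{LalinMila}, which drives the argument in Section~\ref{sec:angles}, can be verified to lie on this curve with exact rational arithmetic; a short Python sketch:

```python
from fractions import Fraction

def on_curve(m, x, y):
    # the curve y^2 = x (x - 2m(m^2 + 1)) (x - 4m(m^2 + 1))
    return y**2 == x * (x - 2*m*(m**2 + 1)) * (x - 4*m*(m**2 + 1))

def P(m):
    # the point of infinite order from [LalinMila] (for m outside {-1, 0, 1})
    return (m**2 + 1) * (m + 1)**2, (m**2 + 1)**2 * (m**2 - 1)

for m in (Fraction(2), Fraction(1, 3), Fraction(-5, 7)):
    assert on_curve(m, *P(m))
```

Indeed, at $x=(m^2+1)(m+1)^2$ one has $x-2m(m^2+1)=(m^2+1)^2$ and $x-4m(m^2+1)=(m^2+1)(m-1)^2$, so the product of the three factors is $\big((m^2+1)^2(m^2-1)\big)^2$.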
Finally, we embark on a detailed study of isosceles triangles with meridians and equators as sides. For these particular triangles, one has a guaranteed rational median/bisector/height, and the goal is to determine when the other two cevians are rational. We obtain a positive result with infinitely many solutions for the heights, while the medians and bisectors reduce to only one solution, given by the equilateral triangle from problem D21. We also consider the area bisector and obtain a negative result in this case. The problem in this case depends on a non-trivial argument (originally due to Flynn and Wetherell \cite{FlynnWetherell}) for finding all the rational points of a bielliptic curve of genus 2. The main geometric tools we will use are the following basic results of spherical trigonometry. A basic reference is \cite{Todhunter}. Consider a spherical triangle with area $A$, angles $\alpha, \beta,$ and $\gamma$ and side lengths $a,b,$ and $c$, where $a$ (resp.\ $b, c$) is opposite to $\alpha$ (resp.\ $\beta, \gamma$). The spherical law of cosines says \begin{equation}\label{eq:cosines} \cos(c)= \cos(a) \cos(b) + \sin(a) \sin(b) \cos(\gamma), \end{equation} and similarly for $\cos(a), \cos(b)$. The dual of \eqref{eq:cosines} is the supplemental law of cosines, which says \begin{equation}\label{eq:supplementalcosines} \cos(\gamma)=-\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)\cos(c), \end{equation} and analogously for $\cos(\alpha), \cos(\beta)$. The spherical law of sines gives \begin{equation}\label{eq:sines} \frac{\sin(\alpha)}{\sin(a)}=\frac{\sin(\beta)}{\sin(b)}=\frac{\sin(\gamma)}{\sin(c)}. \end{equation} This paper is organized as follows. Section~\ref{sec:angles} and Section~\ref{sec:sides} cover the parametrization of spherical Heron triangles in terms of angles and sides, respectively. Section~\ref{sec:equilateral} is focused on medians in the simple case of equilateral triangles and includes the proof of Theorem~\ref{thm:equilateral}.
Section~\ref{sec:medians} includes the parametrization of spherical triangles with rational side lengths and one rational median, while Section~\ref{sec:area-bisectors} is devoted to the dual computation of the parametrization of spherical Heron triangles with one rational area bisector. Isosceles triangles with meridians and equator as sides are considered in Section~\ref{sec:isosceles}. We close the paper with Section~\ref{sec:further}, where we discuss variations of the definition of rationality that could lead to future directions of research. \section{Spherical Heron triangles - Angle parametrization}\label{sec:angles} In this section we give a parametrization of spherical Heron triangles in terms of angles and area. We consider a triangle with angles $\alpha, \beta, \gamma \in (0,\pi)$ that are \emph{rational} (as defined in the introduction). Since the area is given by equation \eqref{eq:GB}, it is also rational. The supplemental spherical law of cosines \eqref{eq:supplementalcosines} implies that the cosines of the sides are also rational, and it remains to check that the sines of the sides are rational. The spherical law of sines \eqref{eq:sines} implies that \[\sin(a)\sin(\beta)\sin(\gamma)=\sin(b)\sin(\alpha)\sin(\gamma)=\sin(c)\sin(\alpha)\sin(\beta).\] We call this common quantity $\Delta_1$; observe that it is rational if and only if the sines of all the sides are rational. Squaring the supplemental spherical law of cosines \eqref{eq:supplementalcosines}, we get \[\sin^2(\alpha)\sin^2(\beta)(1-\sin^2(c))=(\cos(\gamma)+\cos(\alpha)\cos(\beta))^2.\] This leads to \begin{equation}\label{eq:delta1}\Delta_1^2= \sin^2(\alpha)\sin^2(\beta)-(\cos(\gamma)+\cos(\alpha)\cos(\beta))^2 \in \mathbb{Q}. \end{equation} We remark that this expression is very similar to \cite[Eq. (6)]{LalinMila}, except that there is a sign difference on the right-hand side. From this point we can follow the treatment from \cite{LalinMila}.
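Identity \eqref{eq:delta1}, like the spherical laws \eqref{eq:cosines}--\eqref{eq:sines} it rests on, is easy to sanity-check numerically. The following Python sketch (purely illustrative) builds a spherical triangle from three unit vectors, computes its sides and angles, and verifies the law of cosines, the law of sines, and \eqref{eq:delta1} up to floating-point error.

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def arc(u, v):
    # spherical distance between two unit vectors
    return math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v)))))

def vertex_angle(P, Q, R):
    # angle of the triangle at P, between the arcs PQ and PR
    dq = sum(p * q for p, q in zip(P, Q))
    dr = sum(p * r for p, r in zip(P, R))
    tq = unit(tuple(q - dq * p for p, q in zip(P, Q)))  # tangent toward Q
    tr = unit(tuple(r - dr * p for p, r in zip(P, R)))  # tangent toward R
    return arc(tq, tr)

# an arbitrary proper spherical triangle
P, Q, R = unit((1, .2, .1)), unit((.1, 1, .3)), unit((.2, .1, 1))
a, b, c = arc(Q, R), arc(P, R), arc(P, Q)                       # sides
al, be, ga = (vertex_angle(P, Q, R), vertex_angle(Q, P, R),
              vertex_angle(R, P, Q))                            # opposite angles

# spherical law of cosines
assert math.isclose(math.cos(c),
    math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(ga))
# spherical law of sines: the three ratios agree
assert math.isclose(math.sin(al)/math.sin(a), math.sin(be)/math.sin(b))
assert math.isclose(math.sin(be)/math.sin(b), math.sin(ga)/math.sin(c))
# Delta_1 = sin(a) sin(beta) sin(gamma) and the identity for Delta_1^2
d1 = math.sin(a) * math.sin(be) * math.sin(ga)
assert math.isclose(d1**2,
    math.sin(al)**2*math.sin(be)**2
    - (math.cos(ga) + math.cos(al)*math.cos(be))**2)
```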
Using trigonometric identities, we can rewrite this as a symmetric expression in $\alpha, \beta, \gamma$: \begin{align*}2\Delta_1^2=& -\cos(-\alpha+\beta+\gamma)-\cos(\alpha-\beta+\gamma)-\cos(\alpha+\beta-\gamma)\\&-\cos(\alpha+\beta+\gamma)-\cos(2\alpha)-\cos(2\beta)-\cos(2\gamma)-1. \end{align*} Substituting for $\gamma=A+\pi-\alpha-\beta$, expanding the cosines, and writing $c_A=\cos(A), s_A=\sin(A)$, etc., we have \begin{align}\label{eq:c_Aandfriends} 2 \Delta_1^2 =& -(c_A^2 - s_A^2)\big[(c_\alpha c_\beta - s_\alpha s_\beta)^2 - (c_\alpha s_\beta + c_\beta s_\alpha)^2\big] \\ \nonumber & - 4 c_A s_A\big[c_\alpha s_\alpha (c_\beta^2 - s_\beta^2) + c_\beta s_\beta(c_\alpha^2 - s_\alpha^2)\big] \\ \nonumber & + c_A\big[(c_\alpha c_\beta - s_\alpha s_\beta)^2 - (c_\alpha s_\beta + c_\beta s_\alpha)^2 + 2c_\alpha^2 + 2c_\beta^2 - 1\big] \\ \nonumber & + 4 s_A(c_\alpha s_\alpha c_\beta^2 + c_\beta s_\beta c_\alpha^2) - 2 c_\alpha^2 - 2 c_\beta^2 + 1. \end{align} Since we wish to express this in terms of rational angles, we set \[t=\frac{\sin(\alpha)}{1+\cos(\alpha)}, \quad u=\frac{\sin(\beta)}{1+\cos(\beta)}, \quad m=\frac{\sin(A)}{1+\cos(A)},\] and $w = (m^2+1)(u^2 + 1)(t^2 + 1)\Delta_1$; equation \eqref{eq:c_Aandfriends} then becomes \begin{align} \label{eq:w-t-angles} w^2 = & -4 m (mu^2 - m + 2u) (mt^2 +2t - m) \big[(mu^2 - m + 2u)t^2 \\& + (-4mu + 2u^2 - 2)t - mu^2 + m - 2u\big]. \nonumber \end{align} Here we differ from the situation of \cite[Eq. 8]{LalinMila}, where we were able to find a change of variables $\{t,w\}\rightarrow \{x,y\}$ turning the equation into a Weierstrass form. In this case, having the opposite sign on the right-hand side of \eqref{eq:w-t-angles} creates an obstruction to finding a general solution to the equation that is defined over $\mathbb{Q}(u,m)$. By twisting $w$ by $i$, one can actually recover the change of variables leading to \cite[Eq.
9]{LalinMila}, which in this case will not be defined over $\mathbb{Q}(u,m)$, but over $\mathbb{Q}(i)(u,m)$. In \cite[Lemma 2.1]{LalinMila}, a point of infinite order $P$ over $\mathbb{Q}(u,m)$ was found, but this will only lead to a point over $\mathbb{Q}(i)(u,m)$ after twisting. In our case, we will not be able to conclude that our problem on the spherical side has infinitely many solutions. Instead, we proceed to examine two particular cases of interest: $u=1$ (a right triangle) and $u=t$ (an isosceles triangle). \subsection{The case $u=1$} By setting $u=1$, equation \eqref{eq:w-t-angles} becomes \[w^2 = -16 m (mt^2 +2t - m) (t^2 -2m t- 1)\] with a solution $(t,w)=(1,8m)$. By applying the change of variables \begin{align*} y=&\frac{m((1+2m-m^2)(4mt^3-12mt-w)+(1-2m-m^2)(12mt^2-4m+wt))}{(t-1)^3},\\ x=& \frac{m(4(m^2t^2-(m-1)^2t+1)+w)}{(t-1)^2}, \end{align*} we obtain the Weierstrass form \begin{equation} \label{eq:congruent} E_{u=1}: y^2 = x (x -2 m (m^2 + 1)) (x - 4 m (m^2 + 1)). \end{equation} We remark that \eqref{eq:congruent} appeared in \cite{LalinMila}. In fact, it was proven that $E(\mathbb{C}(m))$ is a $K3$-surface of rank 2, and that $P(m)=((m^2+1)(m+1)^2,(m^2+1)^2(m^2-1))$ and $Q(m)=(2m(m+1)^2,4im^2(m^2-1))$ are two independent points of infinite order. We claim that for every rational value of $m \notin \{-1,0,1\}$, the point $P(m)$ has infinite order on $E_{u=1}$. Indeed, Mazur's Theorem (see \cite{Mazur77, Mazur78}) implies that the torsion group of a rational elliptic curve has order at most 16. By looking at the points on $E_{u=1}$ of the form $\pm k P + \ell (0,0)$ for $k \in \{1,2,3,4\}, \ell \in \{0,1\}$, we see that we generically get 16 different points. Thus for each value of $m$, either one of these points is non-torsion (from which it follows easily that $P(m)$ has infinite order), or they are all torsion.
In the latter case, together with $(0,0)$ we have 17 points, so two points of this list must coincide, and it is easily verified by looking at the equations for these points that this is only possible if $m \in \{-1,0,1\}$. Finally, observe that the conditions (e.g., sum of angles $> \pi$, etc.) for a set of parameters $(\alpha, \beta, \gamma, A)$ to give rise to an actual spherical triangle translate into \emph{open conditions} (i.e., involving strict inequalities) on the variables $t,u,m$, which in turn also translate into open conditions on the variables $x,y,m$. Now by a theorem of Poincaré--Hurwitz (see \cite[Satz 11, p.\ 78]{Skolem}) the points $E_{u=1}(\mathbb{Q})$ form a dense subset of $E_{u=1}(\mathbb{R})$ as long as $E_{u=1}(\mathbb{Q})$ is infinite and intersects both connected components of $E_{u=1}(\mathbb{R})$. Since the three torsion points having $y=0$ are rational (and lie across both connected components of $E_{u=1}(\mathbb{R})$), and since, unless $m\in\{-1,0,1\}$, the point $P(m)$ is a rational point of infinite order, we have proven the following: \begin{thm}[Theorem~\ref{thm:congruent} in the introduction] \label{th:thm4} For every positive rational $m \neq 1$, the congruent number problem has a solution in the spherical context. More precisely: for all rational areas $m \neq 1$ there are infinitely many area $m$ right spherical triangles with rational angles and sides. \end{thm} Note that this type of argument will be used later in the text to deduce the existence of infinitely many triangles with given properties from elliptic curves having positive rank. Observe that in the case $m=1$, the elliptic curve has rank zero, and using the change of variables (when it is defined) it is possible to show that the only possible solution is with $t=1$. This corresponds to a triangle having area $\frac \pi 2$ (since $m=1$) and two angles also equal to $\frac \pi 2$ (since $u=t=1$).
Thus by \eqref{eq:GB}, the third angle is also $\frac \pi 2$, and the only rational triangle with area and one angle equal to $\frac \pi 2$ is the unique rational equilateral triangle, with all sides, angles and area equal to $\frac \pi 2$. \subsection{The case $u=t$} By setting $u=t$, and $w=w_1(mt^2+2t-m)$, equation \eqref{eq:w-t-angles} becomes \[ w_1^2=-4m(mt^4+4t^3-6mt^2-4t+m) \] with particular solution $(t,w_1) = (1,4m)$. We apply Cassels' algorithm \cite[p. 37]{Cassels} and find the change of variables \begin{align*} y =& \frac{m}{2(t-1)^3} (-2m(m+1)t^3 +6m (m-1)t^2 + 6m (m+1)t \\ &+2m(1-m) +(mt+ m - t+ 1) w_1), \\ x =& \frac{m (4mt - 2t^2 +2+ w_1)}{2(t-1)^2}, \end{align*} that leads to the Weierstrass form \begin{equation} E_{u=t}:y^2=x(x^2-m^2(1+m^2)).\end{equation} \begin{lem} The rank of the rational elliptic surface $E_{u=t}(\mathbb{C}(m))$ is 2. Its torsion group is isomorphic to $\mathbb{Z}/2\mathbb{Z}$. The points \[P(m)=\big(-m^2,m^2\big),\quad Q(m)=\big( m(im-1) , (i +1)m^2(im-1)\big),\] are generators of the free subgroup. \end{lem} \begin{proof} Note that $E_{u=t}$ is a rational elliptic surface with discriminant $\mathrm{disc}(E_{u=t}) = 64m^6(m^2+1)^3$. By Tate's algorithm \cite[IV.9]{Silverman-advanced}, $E_{u=t}$ has singularities at $m=0$ of type $I_0^*$, and $m=\pm i$ of type $III$. By the Shioda--Tate formula \cite[Corollary 6.7]{SS-book}, the rank of the N\'eron--Severi group is given by \begin{equation}\label{eq:ST}\rho(E)=\mathrm{rk}(E(\mathbb{C}(m)))+2+\sum_{\nu} (m_\nu-1).\end{equation} In our case, we obtain \[\rho(E_{u=t})=\mathrm{rk}(E_{u=t}(\mathbb{C}(m)))+2+(5-1)+2\cdot(2-1)=\mathrm{rk}(E_{u=t}(\mathbb{C}(m)))+8.\] Since $\rho(E_{u=t})=10$ for rational elliptic surfaces, we conclude that $\mathrm{rk}(E_{u=t}(\mathbb{C}(m)))=2$.
By \cite[Table 4.5]{MirandaPersson}, since the rank is $R=2$ and the Euler characteristic is $\chi=1$, we conclude that the torsion is either $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$, but it is very clear that the only point of order 2 is $(0,0)$, and therefore the torsion is $\mathbb{Z}/2\mathbb{Z}$. Now, if one only wants to show that $P(m)$ and $Q(m)$ are independent points of infinite order, the easiest way is to specialize at $m=1$ and verify (using Sage for instance) that the corresponding curve has rank 2 over $\mathbb{Q}(i)$ and admits those points as generators of the free part. (In fact, checking that $P(m)$ is of infinite order is even easier: at $m=1$, we get the point $P=(-1,1)$ on the curve $y^2=x(x^2-2)$. Since torsion injects into specialization and $2P=(\frac{9}{4}, -\frac{21}{8})$ has non-integral coordinates, one concludes that $P$ cannot be torsion due to the Nagell--Lutz theorem.) However, proving that they are actually generators is more involved. We can do this by computing the height pairing of the Mordell--Weil group on the elliptic surface $E_{u=t}$. In order to do this we need to find the height pairing of both points. By formulas (6.14) and (6.15) in \cite{SS-book}, \begin{align} \langle P, Q\rangle =& \chi +(P.O)+(Q.O)-(P.Q)-\sum_\nu \mathrm{contr}_\nu(P,Q), \label{eq:hPR}\\ h(P):=\langle P, P\rangle =& 2\chi +2(P.O)-\sum_\nu \mathrm{contr}_\nu(P), \label{eq:hP} \end{align} and similarly for $Q$. In the above formulas, $(P.Q)$ represents the intersection multiplicity of $P$ and $Q$, and $\mathrm{contr}_\nu(P,Q)$ represents certain correction terms given by the local contribution from the fiber at $\nu$ (see \cite[Definition 6.23]{SS-book}). We look at \cite[Table 6.1]{SS-book}. For the singularity at $m=0$ of type $I_0^*$, we get $\mathrm{contr}_0=1$ unless the point intersects $\Theta_0$ in the fiber.
We have that $P(0)=Q(0)=(0,0)$ (the singular point), so they do not intersect $\Theta_0$ and therefore $\mathrm{contr}_0(P)=\mathrm{contr}_0(Q)=1$. We also have that $\mathrm{contr}_0(P,Q)=1/2$ since they do not intersect the same component. For the singularities $\pm i$, of type $III$, we have that $P(i)=P(-i)=(-1,1)\not = (0,0)$ (the singular point is again $(0,0)$) so we get $\mathrm{contr}_{\pm i}(P)=0$ since it intersects $\Theta_0$. We have that $Q(i)=(-2i,2+2i)\not = (0,0)$, so that $\mathrm{contr}_{i}(Q)=0$, but $Q(-i)= (0,0)$, so that $\mathrm{contr}_{-i}(Q)=1/2$. Finally we have $\mathrm{contr}_{\pm i}(P,Q)=0$ since they intersect different components. We also have that $P\cdot O=Q\cdot O=0$, since the coordinates are polynomials, and $P\cdot Q=0$ since the points do not intersect the same component at $(0,0)$, which is the only possible point where $P=Q$. Since $\chi=1$, we obtain from \eqref{eq:hP} that $h(P)=2\cdot 1 +2\cdot 0 -1-2\cdot0=1$, $h(Q)=2\cdot 1 +2\cdot 0 -1-0-1/2=1/2$ and from \eqref{eq:hPR}, $\langle P, Q\rangle =1+0+0-0-1/2-2\cdot 0=1/2$. On the one hand, we can compute the determinant of the Gram matrix associated to the height pairing of $P$ and $Q$. This gives \begin{equation}\label{eq:disc}\left|\begin{array}{cc} 1 & 1/2 \\ 1/2 & 1/2 \end{array}\right|=\frac{1}{4}. \end{equation} On the other hand, by the Determinant formula \cite[Corollary 6.39]{SS-book}, we have \begin{equation}\label{eq:det-formula} |\mathrm{disc}\, \mathrm{NS}(E_{u=t})| =\frac{|\mathrm{disc}\, \mathrm{Triv}(E_{u=t}) \cdot \mathrm{disc}\, \mathrm{MWL}(E_{u=t})|}{|E_{u=t}(\mathbb{C}(m))_\mathrm{tor}|^2}, \end{equation} where $\mathrm{MWL}(E_{u=t})$ is the Mordell--Weil lattice and $ \mathrm{Triv}(E_{u=t})$ is the trivial lattice.
By \cite[Definition 7.3]{Shioda-MW}, \begin{equation}\label{eq:shiodaMW} \mathrm{disc}\, \mathrm{Triv}(E_{u=t}) = \prod_{\nu} m_\nu^{(1)}, \end{equation} where $m_\nu^{(1)}$ is the number of simple components of the corresponding singular fiber. We have $m_\nu^{(1)}=2$ if $\nu$ is of type $III$ and $m_\nu^{(1)}=4$ if $\nu$ is of type $I_0^*$. We thus get \[\mathrm{disc}\, \mathrm{Triv}(E_{u=t}) = 16.\] Since $\mathrm{disc}\, \mathrm{NS}(E_{u=t})=-1$ (as the N\'eron--Severi lattice of a rational elliptic surface is unimodular) and $|E_{u=t}(\mathbb{C}(m))_\mathrm{tor}|=2$, equation \eqref{eq:det-formula} becomes \begin{equation}\label{eq:discMWL} |\mathrm{disc}\, \mathrm{MWL}(E_{u=t})|=\frac{1}{4}. \end{equation} In conclusion, we have obtained the same value as in \eqref{eq:disc}, this proves that the points $P,Q$ are generators for the free part of $E_{u=t}(\mathbb{C}(m))$. \end{proof} Using arguments similar to those in the proof of Theorem~\ref{th:thm4}, one gets: \begin{thm}[Theorem~\ref{th:sides} in the introduction] For all but finitely many combinations of rational area $m$ and rational angle $u$ there are infinitely many isosceles spherical triangles with area $m$ and the repeated angle $u$ such that the third angle and the sides are rational. \end{thm} \section{Spherical Heron triangles - Side length parametrization} \label{sec:sides} In this section we parametrize spherical Heron triangles given by their side lengths. Let $a,b,c$ denote the side lengths of a spherical triangle, and assume that they are rational (as defined in the introduction, i.e., $e^{ia}, e^{ib}, e^{ic} \in \mathbb{Q}(i)$). Let $\alpha$ (resp.\ $\beta, \gamma$) be the angles opposing the sides of length $a$ (resp.\ $b,c$). By the spherical law of cosines \eqref{eq:cosines} the cosines of the angles are also rational, and it remains to check that the sines of the angles are rational. 
The spherical law of sines \eqref{eq:sines} implies that \[ \sin(\alpha) \sin(b) \sin(c) = \sin(\beta) \sin(a) \sin(c) = \sin(\gamma) \sin(a) \sin(b). \] Call this quantity $\Delta_2$; it is rational if and only if the sines of all the angles are rational. As in Section~\ref{sec:angles}, we square the spherical law of cosines \eqref{eq:cosines} to get \[ \sin(a)^2 \sin(b)^2(1 - \sin(\gamma)^2) = ( \cos(a)\cos(b)- \cos(c))^2. \] Hence \begin{equation}\label{eq:delta2} \Delta_2^2 = \sin(a)^2 \sin(b)^2 - (\cos(a) \cos(b) - \cos(c))^2 \in \mathbb{Q}. \end{equation} Applying the change of variables \[u=\frac{\sin(a)}{1+\cos(a)}, \quad v=\frac{\sin(b)}{1+\cos(b)}, \quad w=\frac{\sin(c)}{1+\cos(c)},\] we get the equation \[ D^2 = (-uvw + u + v + w) (uvw - u + v + w) (uvw + u - v + w) (uvw + u + v - w), \] where $D = \frac 1 2 (u^2 + 1)(w^2 + 1) (v^2 + 1) \Delta_2$. This has a solution $(u,D) = (\frac{v+w}{1-vw}, 0)$. Applying \cite[p. 37]{Cassels}, we find the change of variables \begin{align*} y=&\frac{(v+v^{-1})(w+w^{-1})(v+w)(vw-1)D}{vw(uvw-u+v+w)^2},\\ x=&-\frac{(v+v^{-1})(w+w^{-1})(uvw-u-v-w)}{uvw-u+v+w}, \end{align*} that yields the Weierstrass form \begin{equation} \label{eq:Weierstrass-sides} E_{v,w}: y^2 = x (x - (v + v^{-1})^2) (x - (w + w^{-1})^2). \end{equation} We note the similarity of \eqref{eq:Weierstrass-sides} with \cite[Eq. 12]{LalinMila}. Indeed, both curves are isomorphic: we can go from one to the other by the change $(x,y)\rightarrow (-x,iy)$, $(v,w)\rightarrow (iv,iw)$. Applying this to \cite[Lemma 3.1]{LalinMila} we immediately obtain the following result. \begin{lem}\label{lem:rank-computations-sides} Let $E_v$ denote the $K3$-surface over $\mathbb{C}(w)$ resulting from fixing the parameter $v$. Its rank satisfies \[ 1 \leq \mathrm{rk}(E_v(\mathbb{C}(w))) \leq 2.
\] In addition, the torsion group of $E_v$ is isomorphic to $\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$, generated by \[S_0(v,w)=\big((v+v^{-1})(w+w^{-1}),i(v+v^{-1})(w+w^{-1})(v^{-1}-w^{-1})(vw-1)\big)\] and \[S_1(v,w)=\big((v+v^{-1})^2, 0\big).\] Finally, the point \[ R(v,w) = \big(vw(v+v^{-1})(w+w^{-1}),(v+v^{-1})(w+w^{-1})(v^2w^2-1)\big) \] has infinite order on $E_v$. \end{lem} Finally, an argument as in Theorem~\ref{th:thm4} gives: \begin{thm} For all but finitely many choices of rational sides with parameters $v$ and $w$, there are infinitely many spherical triangles such that the third side and the angles are rational. \end{thm} \section{Equilateral triangles} \label{sec:equilateral} The goal of this section is to explore the existence of equilateral spherical Heron triangles. In fact, we prove: \begin{thm}[Theorem~\ref{thm:equilateral} in the introduction] There exists a unique rational equilateral spherical Heron triangle given by $a=b=c=\frac{\pi}{2}$ and $\alpha=\beta=\gamma=\frac{\pi}{2}$. \end{thm} We remark that for the triangle described in Theorem~\ref{thm:equilateral} the medians have the same lengths as the sides and thus they are rational. Therefore, this provides a {\em positive answer to problem D21 in the spherical world}. \begin{proof} For this we go back to equation \eqref{eq:delta1}, where we set $\alpha=\beta=\gamma$: \[\Delta_1^2=1-3\cos^2(\alpha)-2\cos^3(\alpha)=(1-2\cos(\alpha))(\cos(\alpha)+1)^2.\] Setting $u=\frac{\Delta_1}{\cos(\alpha)+1}$, the above equation can be rewritten as $u^2=1-2\cos(\alpha)$.
Thus the solutions to the original equation are parametrized by \begin{equation}\label{eq:deltacos}\cos(\alpha)=\frac{1-u^2}{2} \mbox{ and } \Delta_1=\frac{u(3-u^2)}{2}.\end{equation} Squaring the first equation of \eqref{eq:deltacos}, writing $4\cos(\alpha)^2=4-4\sin^2(\alpha)$, and setting $v=2\sin(\alpha)$, we obtain \[v^2=-u^4+2u^2+3.\] Making the change of variables \begin{equation*} y= \frac{2v+3-u^3+u^2+u}{(u-1)^3},\qquad x= \frac{v+2}{(u-1)^2}, \end{equation*} we get \[E: y^2=x(x^2-x+1),\] and this curve has rank 0. It is not hard to see that \[E(\mathbb{Q})=\{O,(0,0),(1,\pm 1)\}\cong \mathbb{Z}/4\mathbb{Z}.\] These points only yield solutions of the form $\sin(\alpha)=\pm 1$, $\cos(\alpha)=0$, thus leading to a triangle whose angles and sides are all equal to $\frac{\pi}{2}$. \end{proof} If we relax the condition that the angles be rational, we find another surprising result. \begin{prop} The only equilateral triangle that has rational sides and rational medians is the one that satisfies $a=b=c=\frac{\pi}{2}$ and $\alpha=\beta=\gamma=\frac{\pi}{2}$. \end{prop} \begin{proof} Consider an equilateral spherical triangle of side length $a$ and angles $\alpha$. Let $m$ denote the length of the median, and consider the half triangle defined by one median. This triangle has angles $\alpha, \frac{\alpha}{2}, \frac \pi 2$ and sides $a, \frac a 2, m$. Assume the length $a$ is rational, i.e., that $e^{ia} \in \mathbb{Q}(i)$. By the Pythagorean theorem (a particular case of the law of cosines \eqref{eq:cosines}), \begin{equation}\label{eq:Pyt}\cos(m) \cos\left(\frac a 2\right) = \cos(a).\end{equation} We immediately see that the above equation has solutions when $a=\pi, \frac{\pi}{2}$, when both sides of \eqref{eq:Pyt} equal zero. However, notice that $a=\pi$ is not a valid solution. Otherwise, we remark that $\cos(m) \in \mathbb{Q}$ if and only if $p = \cos(\frac a 2) \in \mathbb{Q}$. Let $t = \sin(m)$.
Squaring \eqref{eq:Pyt}, we get the following equation for $t$: \[ (1 - t^2) p^2 = (2 p^2 -1)^2 \quad \text{i.e.} \quad s^2 = -4 p^4 + 5 p^2 - 1, \] writing $s = pt$. Changing variables \[ s=\frac{6y}{x^2},\quad p=\frac{x-6}{x}, \quad y=\frac{6s}{(p-1)^2},\quad x=-\frac{6}{p-1},\] we get the following elliptic curve: \[ y^2 = x^3-19x^2+96x-144=(x-3)(x-4)(x-12). \] This elliptic curve has rank $0$ and \[ E(\mathbb{Q})=\{O, (3,0), (4,0), (12,0)\} \cong \mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}. \] Taking $x=3,4,12$ gives $p=-1,-\frac{1}{2}, \frac{1}{2}$, and $a=\pi, \frac{2\pi}{3}, \frac{\pi}{3}$ respectively. Since $0<a<\frac{2\pi}{3}$, we can only take $a=\frac{\pi}{3}$. However, this gives \[\cos(m)=\frac{1}{2\cos\left(\frac{\pi}{6}\right)},\] which is irrational. \end{proof} Similarly, relaxing the condition that the sides be rational, we get: \begin{prop} The only equilateral triangle that has rational angles and rational medians is the one that satisfies $a=b=c=\frac{\pi}{2}$ and $\alpha=\beta=\gamma=\frac{\pi}{2}$. \end{prop} \begin{proof} The proof of this result proceeds in the same vein as the previous proposition. In this case the starting point is the Pythagorean theorem as a particular case of the supplemental law of cosines \eqref{eq:supplementalcosines}: \begin{equation} \label{eq:Pyt2} \cos(m) \sin\left(\frac{\alpha}{2}\right)=\cos(\alpha). \end{equation} We immediately see the solution $\alpha=\frac{\pi}{2}$ with $m=\frac{\pi}{2}$. Notice that in general $\frac{\pi}{3}<\alpha<\pi$, and therefore $\sin(\frac{\alpha}{2})\not = 0$. We remark that $\cos(m) \in \mathbb{Q}$ if and only if $p = \sin(\frac{\alpha}{2}) \in \mathbb{Q}$. Let $t = \sin(m)$. Squaring \eqref{eq:Pyt2}, we get the following equation for $t$: \[ (1 - t^2) p^2 = (1-2 p^2)^2, \] writing $s = pt$. This reduces to the same elliptic curve as in the previous result: \[ y^2 = x^3-19x^2+96x-144=(x-3)(x-4)(x-12).
\] Taking $x=3,4,12$ gives $p=-1,-\frac{1}{2}, \frac{1}{2}$, and the only values with $\frac{\pi}{3}<\alpha<\pi$ are $\alpha= \frac{\pi}{6}, \frac{5\pi}{6}$. However, the sine function evaluated at these angles is not rational. \end{proof} \section{Rational medians} \label{sec:medians} The goal of this section is to study spherical triangles with one rational median. We consider a spherical triangle with sides $a,b,c$ and opposite angles $\alpha, \beta, \gamma$ as before. Let $m$ denote the median at the angle $\alpha$, cutting the side $a$ into two equal parts. Denote by $\theta$ the angle at the intersection of $m$ and $a$ on the side of $\beta$ (the one on the side of $\gamma$ is $\pi-\theta$). Applying the law of cosines \eqref{eq:cosines} to both triangles, we have \begin{align*} \cos(b)=&\cos(m)\cos(a/2)+\sin(m)\sin(a/2)\cos(\pi-\theta),\\ \cos(c)=&\cos(m)\cos(a/2)+\sin(m)\sin(a/2)\cos(\theta). \end{align*} Combining both equations, we obtain \begin{equation} \label{eq:med} 2\cos(m) \cos(a/2)=\cos(b)+\cos(c). \end{equation} We assume that $a,b,c$ are rational, i.e., $e^{ia}, e^{ib}, e^{ic} \in \mathbb{Q}(i)$. Then for $\cos(m)$ to be rational it is necessary and sufficient that $\cos(a/2)$ be rational. Since $a$ is already rational, this is equivalent to $a/2$ being rational. We need in addition that $\sin(m)$ be rational. For this, we square equation \eqref{eq:med} and obtain that \begin{equation}\label{eq:sinm} 4\cos^2(a/2)-(\cos(b)+\cos(c))^2= 4\sin^2(m)\cos^2(a/2). \end{equation} We remark that the right-hand side of \eqref{eq:sinm} should be the square of a rational number.
Let \[w=\frac{\sin(a/2)}{1+\cos(a/2)}, \quad u=\frac{\sin(b)}{1+\cos(b)}, \quad v=\frac{\sin(c)}{1+\cos(c)}.\] After simplification, we must solve \[(1-w^2)^2(1+u^2)^2(1+v^2)^2-(1+w^2)^2(1-u^2v^2)^2=t^2.\] By applying the change of variables \begin{align*} y=&\frac{4(u^2+1)^2(w^2-1)}{(uv-1)^3}(2u^2v^3w^4+v^3w^4+u^5v^2w^4+3u^3v^2w^4+uv^2w^4+u^4vw^4+3u^2vw^4\\&+vw^4+u^5w^4+2u^3w^4-4u^4v^3w^2-4u^2v^3w^2-2v^3w^2-2u^5v^2w^2-2u^3v^2w^2-2uv^2w^2\\&-2u^4vw^2+tu^2vw^2-2u^2vw^2+tvw^2-2vw^2-2u^5w^2+tu^3w^2-4u^3w^2+tuw^2-4uw^2\\&+2u^2v^3+v^3+u^5v^2+3u^3v^2+uv^2+u^4v-tu^2v+3u^2v-tv+v+u^5-tu^3+2u^3\!-tu),\\ x=&\frac{2(u^2+1)^2(w^2-1)(u^2v^2w^2+v^2w^2+u^2w^2+w^2-u^2v^2-v^2-u^2+t-1)}{(uv-1)^2}, \end{align*} we get the Weierstrass form \begin{align}\label{eq:weierstrassmedian} E_{u,w}:y^2=&x(x^2-4(u^4w^4+3u^2w^4+w^4-2u^4w^2-2u^2w^2-2w^2+u^4+3u^2+1)x\nonumber\\&+4(u^2+1)^4(w-1)^2(w+1)^2(w^2+1)^2). \end{align} Thus, we obtain the following result. \begin{thm} A spherical triangle with rational side $b$ (with parameter $u$) and rational half-side $a/2$ (with parameter $w$) has a rational median (intersecting the side $a$) if and only if it corresponds (using the above change of variables) to a rational point on the elliptic curve $E_{u,w}$. \end{thm} Again in this case we can be more specific about the arithmetic structure of $E_{u,w}$. \begin{lem}\label{lem:rank-computations-medians} Let $E_u$ (resp.\ $E_w$) denote the $K3$-surface over $\mathbb{C}(w)$ (resp.\ $\mathbb{C}(u)$) resulting from fixing the parameter $u$ (resp.\ $w$). The rank of $E_u(\mathbb{C}(w))$ satisfies \[ 2 \leq \mathrm{rk}(E_u(\mathbb{C}(w))) \leq 6, \] while the rank of $E_w(\mathbb{C}(u))$ satisfies \[ 2 \leq \mathrm{rk}(E_w(\mathbb{C}(u))) \leq 4. \] In addition, the torsion group is isomorphic to $\mathbb{Z}/2\mathbb{Z}$, generated by $(0,0)$.
Finally, the points \[P(u,w)=\big((u^2+1)^2(w^2+1)^2,(u^2-1)(u^2+1)^2(w^2+1)^3\big) \] and \[Q(u,w)=\big(4u^2(w^2+1)^2,4u(u^4-1)(w^2-1)(w^2+1)^2\big)\] have infinite order on $E_{u,w}$ and are independent. \end{lem} \begin{proof} First notice that the discriminant of $E_{u,w}$ is given by \begin{align*}\mathrm{disc}=&4096(u^2+1)^8(w-1)^4(w+1)^4(w^2+1)^4(w^2-2u^2-1)(u^2w^2-u^2-2)\\ &\times (u^2w^2+2w^2-u^2)(2u^2w^2+w^2-1). \end{align*} Consider first $E_u(\mathbb{C}(w))$. We have singularities at $w=\pm 1, \pm i$ of type $I_4$, and $w= \pm \sqrt{2u^2+1},$ $\pm\frac{\sqrt{u^2+2}}{u}, \pm \frac{u}{\sqrt{2+u^2}}, \pm\frac{1}{ \sqrt{2u^2+1}}$ of type $I_1$. Applying the Shioda--Tate formula \eqref{eq:ST}, \[\rho(E_u)=\mathrm{rk}(E_{u}(\mathbb{C}(w)))+2+4\cdot(4-1)=\mathrm{rk}(E_{u}(\mathbb{C}(w)))+14,\] and since $\rho(E_u)\leq 20$ for $K3$-surfaces, we can bound the rank by 6. For $E_w(\mathbb{C}(u))$, we have singularities at $u=\pm i$ of type $I_8$ as well as at the roots of the other polynomials of type $I_1$. The Shioda--Tate formula \eqref{eq:ST} gives \[\rho(E_w)=\mathrm{rk}(E_{w}(\mathbb{C}(u)))+2+2\cdot(8-1)=\mathrm{rk}(E_{w}(\mathbb{C}(u)))+16,\] and since $\rho(E_w)\leq 20$, we can bound the rank by 4. The lower bound for the rank will follow from the fact that $P(u,w)$ and $Q(u,w)$ are independent points of infinite order. This can be deduced directly from specializing at $u=2$ and $w=2$. Indeed, for these values, we obtain the Weierstrass form $y^2=x(x^2-1300x+562500)$, $P=(625, 9375)$, $Q=(400, 9000)$. Notice that $2P=(\frac{3025}{36}, -\frac{1343375}{216})$ and $2Q=(\frac{648025}{1296}, -\frac{420552125}{46656})$, which have non-integral coordinates, showing that $P$ and $Q$ are of infinite order. Moreover, the Mordell--Weil group has rank 2 with generators of the free part given by $A = (50, 5000)$ and $B = (1250, 25000)$, and one verifies directly that $P = A - B$ and $Q=2B$.
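The computation on the specialized curve $y^2=x(x^2-1300x+562500)$ can be reproduced with exact rational arithmetic; the sketch below (an illustrative check using the standard doubling formula on $y^2=x^3+a_2x^2+a_4x$) verifies that $P$ and $Q$ lie on the curve and that $2P$, $2Q$ have non-integral coordinates, so that by the Nagell--Lutz theorem neither point is torsion.

```python
from fractions import Fraction as F

# E_{u,w} specialized at u = w = 2: y^2 = x^3 + a2*x^2 + a4*x.
a2, a4 = F(-1300), F(562500)

def on_curve(P):
    x, y = P
    return y * y == x**3 + a2 * x**2 + a4 * x

def double(P):
    # Standard point doubling on y^2 = x^3 + a2*x^2 + a4*x.
    x, y = P
    lam = (3 * x**2 + 2 * a2 * x + a4) / (2 * y)
    x2 = lam**2 - a2 - 2 * x
    return (x2, lam * (x - x2) - y)

P, Q = (F(625), F(9375)), (F(400), F(9000))
assert on_curve(P) and on_curve(Q)

P2, Q2 = double(P), double(Q)
assert P2 == (F(3025, 36), F(-1343375, 216))
assert Q2 == (F(648025, 1296), F(-420552125, 46656))
# Non-integral coordinates: by Nagell-Lutz, neither point is torsion.
assert P2[0].denominator > 1 and Q2[0].denominator > 1
```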
Thus these points are of infinite order and independent at this specialization, and since any relation of dependence or finite order would automatically descend to the specialization, we conclude that these points are also independent and of infinite order over $\mathbb{C}(u,w)$. Finally, by \cite[Table 4.5]{MirandaPersson}, since the rank is $R\geq 2$ and the Euler characteristic is $\chi=1$, we conclude that the torsion is either $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$, but one can immediately see that the only point of order 2 is $(0,0)$, and therefore the torsion is $\mathbb{Z}/2\mathbb{Z}$. \end{proof} \subsection{The case $a=b$} Here we set $a=b$ in the previous discussion. The goal is to obtain two (equal) rational medians and three rational sides in an isosceles triangle. This is equivalent to imposing $u=\frac{2w}{1-w^2}$ in \eqref{eq:weierstrassmedian}. \begin{align*} E_{w}:y_0^2=&x_0\left(x_0^2 -\frac{4(w^2+1)^2(w^4+6w^2+1)}{(w^2-1)^2}x_0+\frac{4(w^2+1)^{10}}{(w^2-1)^6}\right). \end{align*} Making the change $x_0=\frac{(w^2+1)^2}{(w^2-1)^4}x$, $y_0=\frac{(w^2+1)^3}{(w^2-1)^6}y$, we obtain \begin{align}\label{eq:weierstrassmedianisosceles} E_{w}:y^2=&x\left(x^2 -4(w^4+6w^2+1)(w^2-1)^2x+4(w^2+1)^{6}(w^2-1)^2\right). \end{align} Looking at the degree of the coefficients, we conclude that $\chi=4$. We find two points of infinite order \[P(w)=\big((w^2+1)^4,(w^2+1)^4(w^2-2w-1)(w^2+2w-1)\big) \] and \[T(w)=\big(2(w-1)^2(w^2+1)^3,16w^2(w-1)^2(w^2+1)^3\big).\] One can check that $P$ and $T$ have infinite order by evaluating at $w=2$. This gives the curve $y^2=x(x^2-1476x+562500)$ and $P=(625, -4375)$, $T=(250, 8000)$. We have $2P=(\frac{75625}{196}, \frac{20301875}{2744})$ and $2T=(\frac{15625}{16}, -\frac{546875}{64})$, which have non-integral coordinates, showing that $P$ and $T$ are points of infinite order. Indeed, we find that the curve has rank 2 and a set of generators for the free part is given by $P$ and $T$.
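The same kind of exact-arithmetic check applies to the specialization at $w=2$; again the doubling formula below is the standard one on $y^2=x^3+a_2x^2+a_4x$, and the non-integral doubles rule out torsion by Nagell--Lutz.

```python
from fractions import Fraction as F

a2, a4 = F(-1476), F(562500)  # E_w of (eq:weierstrassmedianisosceles) at w = 2

def double(P):
    x, y = P
    lam = (3 * x**2 + 2 * a2 * x + a4) / (2 * y)
    x2 = lam**2 - a2 - 2 * x
    return (x2, lam * (x - x2) - y)

P, T = (F(625), F(-4375)), (F(250), F(8000))
for x, y in (P, T):
    assert y * y == x**3 + a2 * x**2 + a4 * x  # both points lie on the curve

P2, T2 = double(P), double(T)
assert P2 == (F(75625, 196), F(20301875, 2744))
assert T2 == (F(15625, 16), F(-546875, 64))
# Non-integral doubles, so P and T have infinite order (Nagell-Lutz).
```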
As in Theorem~\ref{th:thm4}, we conclude: \begin{thm} For all but finitely many values of $w$, there are infinitely many isosceles triangles with rational sides, two of which correspond to $w$, and two rational (symmetric) medians. \end{thm} \section{Area bisector}\label{sec:area-bisectors} This section considers the \emph{area bisector}, the geodesic segment from one vertex to the opposite side that separates the triangle into two triangles of equal area. For the area bisector to be rational, we will demand that its length be rational, and also that the half-area of the triangle be rational. Consider a spherical triangle with sides $a,b,c$ having opposite angles $\alpha, \beta, \gamma$. Let $m$ denote the area bisector at angle $\alpha$, cutting $\alpha$ into $\alpha_1$ and $\alpha-\alpha_1$. Denote by $\theta$ the angle at the intersection of $m$ and $a$, on the side of $\alpha_1$, which we assume to be the side of $\beta$. Thus we have two triangles: one with angles $\alpha_1, \beta, \theta$ and one with $\alpha - \alpha_1, \gamma, \pi-\theta$. By the supplemental law of cosines \eqref{eq:supplementalcosines} we have \[ \sin(\alpha_1) \sin(\beta)\cos(c) = \cos(\theta) + \cos(\alpha_1)\cos(\beta). \] Combining this with the definition of the area bisector \[ 2( \alpha_1 + \theta + \beta - \pi) = A \quad \text{i.e.}\quad \theta = \pi + \frac{A}{2} - \alpha_1 - \beta, \] we get \begin{equation}\label{eq:acosthm} \sin(\alpha_1) \sin(\beta)\cos(c) = - \cos\left(\frac{A}{2} - \alpha_1 - \beta\right) + \cos(\alpha_1)\cos(\beta). \end{equation} Using trigonometric identities, we get \[ \tan(\alpha_1) = \frac{ \cos(\beta) -\cos(A/2)\cos(\beta) - \sin(A/2)\sin(\beta) }% {\cos(\beta)\sin(A/2) - \cos(A/2)\sin(\beta) + \cos(c)\sin(\beta)}.
\] Using the supplemental law of cosines \eqref{eq:supplementalcosines} again: \[ \sin(\alpha) \sin(\beta)\cos(c) = \cos(\gamma) + \cos(\alpha)\cos(\beta), \] we get \[ \tan(\alpha_1) = \frac{(\cos(A/2) - 1)\cos(\beta)\sin(\alpha) + \sin(A/2)\sin(\alpha)\sin(\beta)}{\cos(A/2)\sin(\alpha)\sin(\beta) - (\sin(A/2)\sin(\alpha) + \cos(\alpha))\cos(\beta) - \cos(\gamma)}. \] Now using $\cos(\gamma) = -\cos(A - \alpha - \beta)$, and expanding the trigonometric identities we get \begin{align*} \tan(\alpha_1) =& -\big((\cos(A/2) - 1)\cos(\beta)\sin(\alpha) + \sin(A/2)\sin(\alpha)\sin(\beta)\big) \big/ \\ & \big((2\cos(\alpha)\sin(A/2)^2 - (2\cos(A/2) - 1)\sin(A/2)\sin(\alpha))\cos(\beta) \\ &- (2\cos(A/2)\cos(\alpha)\sin(A/2) + (2\sin(A/2)^2 + \cos(A/2) - 1)\sin(\alpha))\sin(\beta)\big). \end{align*} Hence the tangent of $\alpha_1$ is always rational if $\alpha$, $\beta$, and $A/2$ are rational (i.e.\ if the sines and cosines of these quantities are rational). Thus, for $\alpha_1$ to be a rational angle, we must ask that $\frac{1}{\cos(\alpha_1)} \in \mathbb{Q}$. Therefore, we need that \[ w^2 = 1 + \tan(\alpha_1)^2 \] for some $w \in \mathbb{Q}$. Applying the change of variables \[n=\frac{\sin(A/2)}{1+\cos(A/2)}, \quad u=\frac{\sin(\beta)}{1+\cos(\beta)}, \quad t=\frac{\sin(\alpha)}{1+\cos(\alpha)},\] and clearing a square (substituting $w = s^2w$), we get \begin{align*} w^2 = & 4 (n - u)^2 (nu + 1)^2 t^4 + 4 (n - u) (nu + 1) (-2n^3u + 3n^2u^2 - 3n^2 + 6nu - u^2 + 1) t^3 \\ & + \big(n^6u^4 + 2n^6u^2 - 8n^5u^3 + 11n^4u^4 + n^6 + 8n^5u - 50n^4u^2 + 64n^3u^3 \\ & - 13n^2u^4 + 11n^4 - 64n^3u + 86n^2u^2 - 24nu^3 + u^4 - 13n^2 + 24nu - 6u^2 + 1\big)t^2 \\ & + 4 (-n + u) (nu + 1) (-2n^3u + 3n^2u^2 - 3n^2 + 6nu - u^2 + 1)t + 4 (-n + u)^2 (nu + 1)^2, \end{align*} which has the rational point $(t,w)=(0, 2(-n+u)(nu+1))$. We remark that this equation is the same as in \cite[Section 6]{LalinMila} after making the change of variables $u\rightarrow -u$, $t\rightarrow -t$.
Thus we get \begin{align} \label{eq:area-bisector} E_{n,u}: y^2=&(x-(n^2+1)^2(nu^2+2u-n)^2)(x^2-(n^2+1)(n^4u^4-8n^2u^4-u^4+16n^3u^3\\ \nonumber & -16nu^3-6n^4u^2+32n^2u^2- 10u^2-16n^3u+16nu+n^4-8n^2-1)x\\ \nonumber & -(n^2+1)^2(nu^2+2u-n)^2(3n^2u^2-u^2-2n^3u+6nu-3n^2+1)^2). \end{align} We therefore have the following result. \begin{thm} A spherical Heron triangle with rational half-area with parameter $n$ and rational angle with parameter $u$ has one rational area bisector if and only if it corresponds to a rational point of $E_{n,u}$. \end{thm} The analogue of \cite[Lemma 6.1]{LalinMila} gives us some information about the arithmetic structure of the $K3$-surface $E_n$, and in particular, that it has a point of infinite order. \begin{lem} \label{lem:rank-computations-area-bisectors} The rank of the $K3$-surface $E_n$ satisfies \[ 1 \leq \mathrm{rk}(E_n(\mathbb{C}(u))) \leq 4. \] Moreover, $E_n$ has a torsion point of order 2 given by $\left((n^2+1)^2(nu^2+2u-n)^2,0\right)$. The point \[ Q(n,u)=\Big( 0,(n^2+1)^2(nu^2+2u-n)^2(3n^2u^2-u^2-2n^3u+6nu-3n^2+1)\Big) \] is of infinite order. \end{lem} \section{Isosceles triangle with meridians and equator as sides} \label{sec:isosceles} \begin{figure}[h] \includegraphics{fig2.pdf} \caption{Schematic picture of the triangles under consideration.} \label{fig:my_label} \end{figure} In this section we consider a special family of spherical triangles. Namely, we consider isosceles triangles with two half-meridians and a piece of the equator as sides. We let $a$ denote the length of the side lying on the equator. The other two sides have length $\pi/2$. The angles are then $\alpha, \pi/2,\pi/2$. The median/bisector/height corresponding to $a$ is also $\pi/2$, and is, therefore, rational. Notice that the law of sines \eqref{eq:sines} gives $\sin(\alpha)=\sin(a)$ while the law of cosines \eqref{eq:cosines} gives $\cos(a)=\cos(\alpha)$. Thus $a$ and $\alpha$ are rational simultaneously. Assume they are.
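These facts can be confirmed numerically (an illustration only) by realizing the triangle with $A$ at the north pole and $B$, $C$ on the equator; the value of $a$ below is an arbitrary choice.

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dist(u, v):  # spherical distance between unit vectors
    return math.acos(sum(x * y for x, y in zip(u, v)))

a = 1.1  # any 0 < a < pi
A = (0.0, 0.0, 1.0)                   # apex at the north pole
B = (1.0, 0.0, 0.0)                   # on the equator
C = (math.cos(a), math.sin(a), 0.0)   # on the equator

assert abs(dist(A, B) - math.pi / 2) < 1e-12  # side c = pi/2
assert abs(dist(A, C) - math.pi / 2) < 1e-12  # side b = pi/2
assert abs(dist(B, C) - a) < 1e-12            # side a

# The tangents at A toward B and C are B and C themselves (both orthogonal
# to A), so the angle alpha at A equals the angle between B and C, i.e. a.
alpha = dist(B, C)
assert abs(math.cos(alpha) - math.cos(a)) < 1e-12

# The cevian from A to any point of the side a (e.g. its midpoint) is pi/2.
M = unit((B[0] + C[0], B[1] + C[1], 0.0))
assert abs(dist(A, M) - math.pi / 2) < 1e-12
```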
Our goal is to study when the other two cevians are rational. Thus consider a cevian $d$ from $B$ to $b$, intersecting the side $b$ at angle $\theta$ on the side of the vertex $A$, and dividing the angle $\beta$ at $B$ into two angles $\beta_1$ on the side of the vertex $A$ and $\pi/2-\beta_1$ on the side of $a$ (see Figure \ref{fig:my_label}). \subsection{Median} If the other two cevians are medians of length $m$, then they divide the corresponding opposite side into two geodesics of length $\pi/4$. But the law of cosines \eqref{eq:cosines} gives $\cos(m)=\cos(\pi/4)\cos(a)$. Since $\cos(\pi/4)$ is irrational, so is $m$, unless $\cos(a)=0$. But this is only possible when $a=\alpha=\pi/2$, and this leads to the equilateral triangle that appears as the sole solution of Theorem \ref{thm:equilateral}. \subsection{Height} If the other two cevians are heights of length $h$, then $\theta=\pi/2$, and the triangle containing the sides $a$ and $h$ must be isosceles since it has two angles of $\pi/2$. Thus $h=a$ and any triangle with $a$ rational gives a solution. \subsection{Bisector} If the other two cevians are bisectors of length $\flat$, then $\beta_1=\pi/4$. By the law of sines \eqref{eq:sines}, $\frac{\sin(\theta)}{\sin(\pi/2)}=\frac{\sin(\alpha)}{\sin(\flat)}$. From this \begin{equation}\label{eq:sinbisector}\sin(\theta)\sin(\flat)=\sin(\alpha). \end{equation} The supplemental law of cosines \eqref{eq:supplementalcosines} gives \begin{align*}\cos(\alpha)=&-\cos(\theta)\cos(\pi/4)+\sin(\theta)\sin(\pi/4)\cos(\flat),\\ \cos(\pi/2)=&-\cos(\pi-\theta)\cos(\pi/4)+\sin(\pi-\theta)\sin(\pi/4)\cos(\flat). \end{align*} Adding the above equations, \[ \sin(\theta)\cos(\flat)=\frac{\cos(\alpha)}{2\sin(\pi/4)}.\] By combining with equation \eqref{eq:sinbisector} we obtain \[\tan(\flat)=\tan(\alpha)2\sin(\pi/4).\] Since $\tan(\alpha)$ is rational, and $\sin(\pi/4)$ is not, we must have $\tan(\alpha)=0,$ and therefore $\alpha=\pi/2$. 
This leads, once again, to the equilateral triangle that appears as the sole solution of Theorem \ref{thm:equilateral}. \subsection{Area bisector} If the other two cevians are area bisectors of length $v$, the areas of the half-triangles are $\alpha+\beta_1+\theta-\pi$ and $\pi-\beta_1-\theta$. Setting these two areas equal, we obtain \[\pi =\alpha/2+\theta+\beta_1.\] By the supplemental law of cosines \eqref{eq:supplementalcosines}, \[\sin(\beta_1)\sin(\alpha)\cos(\pi/2)=\cos(\theta)+\cos(\beta_1) \cos(\alpha).\] Writing $\cos(\theta)=-\cos(\alpha/2+\beta_1)$, \[0=-\cos(\alpha/2)\cos(\beta_1)+\sin(\alpha/2)\sin(\beta_1)+\cos(\beta_1)\cos(\alpha).\] This gives \begin{equation}\label{eq:tanbeta} \tan(\beta_1)=\frac{\cos(\alpha/2)-\cos(\alpha)}{\sin(\alpha/2)}. \end{equation} Since $\alpha+\theta+\beta_1-\pi$ is half the area of the triangle, it must be rational, and therefore, $\theta+\beta_1$ is rational, and since $\alpha/2+\theta+\beta_1=\pi$, we conclude that $\alpha/2$ is rational. By the law of sines \eqref{eq:sines} we have \[\frac{\sin(v)}{\sin(\alpha)}=\frac{\sin(\pi/2)}{\sin(\theta)}=\frac{1}{\sin(\theta)}.\] Therefore, we need that $\sin(\theta)$ be rational. By the supplemental law of cosines \eqref{eq:supplementalcosines}, \begin{align} \cos(\alpha)=&-\cos(\theta)\cos(\beta_1)+\sin(\theta)\sin(\beta_1)\cos(v)\label{eq1}\\ \cos(\pi/2)=&-\cos(\pi-\theta)\cos(\pi/2-\beta_1)+\sin(\pi-\theta)\sin(\pi/2-\beta_1)\cos(v)\nonumber\\ =&\cos(\theta)\sin(\beta_1)+\sin(\theta)\cos(\beta_1)\cos(v).\label{eq2} \end{align} Multiplying \eqref{eq1} by $\sin(\beta_1)$, \eqref{eq2} by $\cos(\beta_1)$, and adding, we get \[\cos(\alpha) \sin(\beta_1)=\sin(\theta)\cos(v).\] From this, we see that $\sin(\beta_1)$ must be rational. Since $\tan(\beta_1)$ must be rational by \eqref{eq:tanbeta}, then $\cos(\beta_1)$ is also rational.
We have then \begin{equation}\label{eq:betaw} \tan(\beta_1)^2+1=w^2. \end{equation} Setting \[n=\frac{\sin(\alpha/2)}{1+\cos(\alpha/2)}\] in \eqref{eq:tanbeta}, combining in \eqref{eq:betaw}, and substituting $w(n^2+1)\rightarrow w$, we get \[w^2=n^6 - 5n^4 + 11n^2 + 1.\] We will need a lemma. \begin{lem} \label{lem:gen2} The only rational points on the genus 2 curve \[C:Y^2=X^6-5X^4+11X^2+1\] are $(0,\pm 1)$ and the two points at infinity. \end{lem} Note that the points $X=0$ correspond to $n=0$ and yield a degenerate case with $\alpha=0$. Thus, assuming the lemma, we see that there are no such triangles with rational area bisectors. \begin{proof}[Proof of Lemma~\ref{lem:gen2}] We follow the method due to Flynn and Wetherell \cite{FlynnWetherell}. Notice that $C$ is a bielliptic curve of genus 2. $C$ covers two elliptic curves: \[E^a:Y^2=x^3-5x^2+11x+1,\] \[E^b:Y^2=x^3+11x^2-5x+1,\] with the maps $(X,Y)\mapsto (X^2,Y)$ and $(X,Y)\mapsto (1/X^2,Y/X^3)$. Both $E^a(\mathbb{Q})=\langle (3,4)\rangle$ and $E^b(\mathbb{Q})=\langle (-1,4)\rangle$ have rank 1, and $E^a\times E^b$ is isogenous to the Jacobian $J$ of $C$. Since the Jacobian has rank 2, the more standard methods for finding rational points, such as Chabauty's theorem, cannot be applied. Our goal is to apply Lemma 1.1(a) from \cite{FlynnWetherell} to the curve $E^a$. Let \[F^a(x)=x^3-5x^2+11x+1.\] $F^a(x)$ is an irreducible polynomial over $\mathbb{Q}$. Let $\omega$ be a root of $F^a(x)$. First we do the 2-descent and find that $E^a(\mathbb{Q})/2E^a(\mathbb{Q})=\{O,(3,4)\}$. Then \cite[Lemma 1.1(a)]{FlynnWetherell} asserts that if $(X,Y) \in C(\mathbb{Q})$, then $x=X^2$ satisfies one of the following two equations. \begin{align*} E_1^a:&\; y^2=x(x^2+(\omega-5)x+\omega^2-5\omega+11),\\ E_2^a:&\; y^2=(3-\omega)x(x^2+(\omega-5)x+\omega^2-5\omega+11).
\end{align*} We remark that $E_1^a$ has rank 0 and torsion isomorphic to $\mathbb{Z}/4\mathbb{Z}$ generated by \[\left(\frac{\omega^2}{4} - \frac{3\omega}{2} + \frac{13}{4}, \frac{\omega^2}{4} - \frac{3\omega}{2} + \frac{17}{4}\right).\] Thus, the only affine points from $C(\mathbb{Q})$ arising from $E_1^a$ are $(0,\pm 1)$. We now consider $E_2^a(\mathbb{Q}(\omega))$. A standard descent argument shows that the rank is 1, with two generators: $(0,0)$ of order 2 and \[P_0=\left(1,-\frac{\omega^2}{2}+3\omega-\frac{9}{2}\right)\] of infinite order. We need to check that there are no extra points with rational $x$-coordinate. For this, we apply the argument from Section 2 in \cite{FlynnWetherell}, and reduce modulo $5$. (Remark that the prime $5$ satisfies the technical conditions required by \cite[Eq. (2.13)]{FlynnWetherell}.) Let us denote by $\;\widetilde{}\;$ the reduction modulo $5$. We see that $\widetilde{P_0}$ has order $28$ in $\widetilde{E_2^a}(\mathbb{F}_5(\widetilde{\omega}))$. Therefore, any point $P$ of $E_2^a(\mathbb{Q}(\omega))$ can be written uniquely as $P=S+nQ_0$, for $n \in \mathbb{Z}$, $Q_0=28P_0$ and $S$ a point in the set \[\{kP_0, kP_0+(0,0): k\in \mathbb{Z}, -14< k<14 \}.\] In the above set the only points that have rational $x$-coordinate when reduced to $\widetilde{E_2^a}(\mathbb{F}_5(\widetilde{\omega}))$ are those in \[M:=\{O, (0,0), \pm P_0, \pm 10P_0, \pm 4P_0+(0,0), \pm 13P_0+(0,0)\}.\] Of those, $O, (0,0)$, and $\pm P_0$ have actual rational $x$-coordinate when viewed in $E_2^a(\mathbb{Q}(\omega))$. Next we will check that if a point of the form $S+nQ_0$ with $S\in M$ has rational $x$-coordinate, then necessarily $n=0$. We work modulo $5^5$ as in \cite[Example 3.1]{FlynnWetherell}. Eventually we want to compute the $x$ coordinate of $nQ_0$ for $n$ an arbitrary integer. To do this efficiently, it is convenient to work on the formal group of the elliptic curve. 
Thus, we compute the $z$-coordinate of $Q_0$, where $z=-x/y$: \[5(343 \omega^2 + 534 \omega +379) \pmod{5^5}.\] In order to multiply by $n$, we will combine the logarithm and the exponential. Therefore our next step is to find the $\log$ of the $z$-coordinate of $Q_0$ (\cite[Eq. (2.9)]{FlynnWetherell}): \[5(18\omega^2 +534\omega + 429) \pmod{5^5}.\] Now we substitute $n\log(z)$ into the exponential and find the $z$-coordinate of $nQ_0$ (\cite[Eq. (2.10)]{FlynnWetherell}): \begin{equation} \label{eq:nQ} 5(18\omega^2+534\omega+429)n+ 5^3(18 \omega^2+5\omega+18)n^3+5^4(4\omega^2+4\omega+1)n^5 \pmod{5^5}. \end{equation} Finally we compute $1/x$ (\cite[Eq. (2.6)]{FlynnWetherell}): \[5^2\cdot 49 n^2\omega^2+ 5^2\cdot 61 n^2 \omega +5^2(75n^4+97n^2) \pmod{5^5}.\] In order to have a rational point of the form $nQ_0$, the coefficients of $\omega^2$ and $\omega$ must be $0$ in $\mathbb{Z}_5$. Thus, we must have $5^2\cdot 49 n^2=0$ in $\mathbb{Z}_5$. This has a double root at $n=0$, and Strassman's Theorem implies that the total number of roots cannot exceed 2. Hence, we conclude that $n=0$ is the only possible solution. One must then do the same procedure for $S+nQ_0$ for each of the elements $S\in M$. To work with $(0,0)+nQ_0$, we replace the coordinates of $(0,0)$ and the value of equation \eqref{eq:nQ} in \cite[Eq. (2.8)]{FlynnWetherell}. This gives \[5^2(97\omega^2+91+6\omega)n^2+ 5^4\cdot (3\omega^2+3)n^4\pmod{5^5}\] for the $z$-coordinate of $(0,0)+nQ_0$. We compute $1/x$ to get \[5^4\cdot 4 n^4\omega^2 + 5^4 n^4\omega + 5^4 \cdot 2n^4 \pmod{5^5}\] and conclude that $n=0$ as before.
For $P_0+nQ_0$, we obtain \begin{align*} &1+5(231\omega^2+337 \omega+405)n +5^2(116\omega^2+30\omega+104)n^2 +5^3(14\omega^2+21\omega+22)n^3 \\ &+5^4(4\omega^2+3\omega+3)n^4+5^4(\omega^2+4\omega+1)n^5 \pmod{5^5} \end{align*} for the $z$-coordinate, and \begin{align*}&(5^4\cdot 3n^5 + 5^4\cdot 2n^4 + 5^3 \cdot 13n^3 + 5^2\cdot 71n^2 + 5\cdot 221n + 971)\omega^2 \\&+ (5^4 n^4 + 5^3\cdot 7n^3 + 5^2\cdot 124n^2 + 5\cdot 174n + 2028)\omega \\& + (5^4 n^5 + 5^4\cdot 4 n^4 + 5^3\cdot 8n^3 + 5^4n^2 + 5\cdot 197n + 2358)\pmod{5^5} \end{align*} for $1/x$. Since $5\nmid 971$, the coefficient of $\omega^2$ cannot be $0$ in $\mathbb{Z}_5$. For $10P_0+nQ_0$, we obtain \begin{align*} &(2780\omega^2+ 1980\omega+ 1584)+5(546\omega^2+157 \omega+476 )n+5^2 (112\omega^2+88\omega +100)n^2 \\&+5^3(5\omega^2 +17\omega +8)n^3+5^4(\omega^2+2\omega)n^4 + 5^4(4\omega^2+2\omega+4)n^5 \pmod{5^5} \end{align*} for the $z$-coordinate, and \begin{align*}&(5^4 n^5 + 5^4\cdot 4 n^4 +5^3\cdot 13n^3 + 5^2 \cdot 42 n^2 + 5\cdot 551 n + 2971)\omega^2\\& + (5^4 \cdot 3 n^5+5^4 n^4 + 5^4\cdot 3n^3 + 5^4\cdot 3n^2 + 5\cdot 489n + 573)\omega\\&+ (5^4\cdot 4 n^5 + 5^4\cdot n^4 + 5^4\cdot 3 n^3 + 5^2\cdot 72n^2 + 5\cdot 503n + 2058)\pmod{5^5}\end{align*} for $1/x$. Since $5 \nmid 2971$, the coefficient of $\omega^2$ cannot be $0$ in $\mathbb{Z}_5$. For $4P_0+(0,0)+nQ_0$, we have \begin{align*} &(2740\omega^2+1325\omega+2769)+5(389\omega^2+558 \omega+499)n+5^2(12\omega^2+98\omega+40)n^2\\&+ 5^3(20\omega^2+3\omega+17)n^3+5^4(\omega^2+2\omega)n^4+5^4(\omega^2+3\omega+1)n^5 \pmod{5^5}\end{align*} for the $z$-coordinate, and \begin{align*} &(5^4 \cdot 4 n^5 + 5^4 \cdot 4 n^4 +5^3\cdot 22 n^3+ 5^2\cdot 107n^2 + 5\cdot 259 n +2356)\omega^2 \\&+ (5^4\cdot 2 n^5 +5^4 n^4 + 5^4\cdot 4n^3 + 5\cdot 136 n + 2788)\omega\\& + (5^4 n^5 + 5^4 n^4 + 5^2\cdot 47 n^2 + 5\cdot 607n + 313)\pmod{5^5} \end{align*} for $1/x$. Since $5\nmid 2356$, the coefficient of $\omega^2$ cannot be $0$ in $\mathbb{Z}_5$. 
For $13P_0+(0,0)+nQ_0$, we have \begin{align*} &(2585\omega^2+1595\omega+1951)+5(149\omega^2+388\omega+390)n+5^2(111\omega^2+110\omega+39)n^2\\&+5^3(21\omega^2+9\omega+13)n^3+5^4(4\omega^2+3\omega+3)n^4 +5^4(4\omega^2+\omega+4)n^5 \pmod{5^5} \end{align*} for the $z$-coordinate, and \begin{align*} &(5^4\cdot 2 n^5 + 5^4\cdot 2 n^4 + 5^3\cdot 12 n^3+ 5^2\cdot 36 n^2 + 5\cdot 359 n + 2456) \omega^2\\& + (5^4 n^4 + 5^4\cdot 3 n^3 + 5^2\cdot 84 n^2 + 5\cdot 596 n + 3118)\omega \\&+ (5^4\cdot 4 n^5 + 5^4\cdot 4 n^4 + 5^3\cdot 17 n^3 + 5^3\cdot 18 n^2 + 5\cdot 403n + 803) \pmod{5^5}, \end{align*} for $1/x$. Since $5\nmid 2456$, the coefficient of $\omega^2$ cannot be $0$ in $\mathbb{Z}_5$. Finally, remark that we do not have to consider the points of the form $-P_0+nQ_0$, $-10P_0+nQ_0$, $-4P_0+(0,0)+nQ_0$, and $-13P_0+(0,0)+nQ_0$ separately, since these points can be obtained by multiplying the previous cases by $-1$. Thus, we conclude that $n=0$. We examine the rational $x$-coordinates of the $S \in M$, and conclude that the only possibilities for points having rational $x$-coordinates are $0$ and $1$, coming from $(0,0)$ and $\pm P_0$. It is immediate to see that $X=1$ does not lead to points in $C(\mathbb{Q})$, and therefore the only possible solution is $X=0$, leading to a degenerate triangle as discussed before. \end{proof} \section{Further research}\label{sec:further} There are many topics for further research based on the current work. First, one could consider different versions of ``rationality'' for triangles. One natural way would be to relax the condition that all trigonometric functions of the sides and angles/area be rational, and to call a length/angle rational if, say, its tangent is rational (compare \cite{Goins} and \cite{LalinMila}). Another way would be to call a spherical length rational if the length of the straight segment (inside the sphere) joining its two endpoints is rational.
If $a$ denotes the spherical length, it is not hard to see that this corresponds to $\sin(a/2)$ being rational. Yet another definition of rationality would be that lengths/angles be rational multiples of $\pi$. It is easy to see that the isosceles triangle with apex at the north pole and bottom side on the equator of length $\frac p q \pi$ (see Figure~\ref{fig:my_label}) has all its sides, angles, and area rational in this sense. It would be interesting to know if there exist triangles having this property that do not come from this construction. It is also interesting to consider conditions that prevent a spherical triangle from having multiple rational cevians (heights, medians, area bisectors, etc.). This point of view is, to some extent, opposite to the investigation in Section \ref{sec:isosceles}. For example, one can prove that if a triangle is isosceles with the angle between the identical sides equal to $\frac{\pi}{2}$, then the medians cannot be rational simultaneously. It is natural then to wonder which of these assumptions can be lifted. Another possible direction for further research is the construction of high-rank elliptic curves as in \cite{Dujella-Peral,IzadiNabardi}. More specifically, the authors of \cite{IzadiNabardi} used Heron's formula to derive elliptic curves with high ranks. As there is an analog of Heron's formula in the spherical world, namely L'Huilier's formula, it would be interesting to try to construct elliptic curves with high ranks by adapting the method of \cite{IzadiNabardi}. \bibliographystyle{amsalpha}
\section{Introduction}\label{sec:Introduction} Network coding is a powerful tool for effective data transmission in a network modelled as a directed acyclic multigraph with several sources and sinks. In \cite{AhlsCai00}, it was proved that the information flow of the network may be improved if the intermediate nodes are able to combine the received inputs instead of simply routing them. Random network coding was introduced in \cite{HoMeKo06}, and an algebraic approach to it was presented in \cite{KoetKschi08}. In that work, the authors propose transmitting information by using vector subspaces of $\bbF_q^n$ and define \emph{subspace codes} as a class of codes well suited for error correction. In case all the codewords in a subspace code have the same dimension, it is said to be a \emph{constant dimension code}. The seminal paper \cite{KoetKschi08} has lately led to many lines of research on subspace codes, aimed either at constructing subspace codes of the largest possible size for a fixed minimum distance or at finding algebraic constructions of subspace codes with good parameters (see \cite{TrautRosen18} and references therein). In \cite{TrautManRos2010}, Trautmann \emph{et al.} introduced the concept of \emph{orbit codes} as subspace codes obtained from the action of subgroups of the general linear group $\mathrm{GL}(n,q)$ on the set of subspaces of $\bbF_q^n$. When the acting group is cyclic, we speak about \emph{cyclic orbit codes}. This family of codes has attracted a lot of interest due to the simplicity of their algebraic structure and to the existence of efficient encoding/decoding algorithms. We refer the reader to \cite{BenEtGaRa16, ChenLi18, EtVar11, GLLe2021, GLMoTro2015, OtOz17, RothRaTa18, TrautManBraunRos2013, TrautManRos2010, ZhaoTang2019} for some of the more recent papers.
Taking into account that $\bbF_q^n$ and the field extension $\bbF_{q^n}$ are isomorphic as $\bbF_q$-vector spaces, in \cite{GLMoTro2015}, the authors consider subspace codes as collections of $\bbF_q$-vector subspaces of $\bbF_{q^n}$ and study orbit codes arising from the natural action of the multiplicative subgroups of $\bbF_{q^n}^\ast$ (all of which are cyclic groups) on $\bbF_q$-vector spaces. Given a generating subspace $\mathcal{U}$ of the cyclic orbit code $\mathrm{Orb}(\mathcal{U})$, their main tool is the \emph{best friend} of $\mathcal{U}$, that is, the largest subfield of $\bbF_{q^n}$ over which $\mathcal{U}$ is a vector space. This concept is closely related to the stabilizer of $\mathcal{U}$, especially when the acting group is $\bbF_{q^n}^\ast$. The best friend allows the authors to give relevant information about the cardinality, distance, and other features of cyclic orbit codes. \emph{Flag codes} were introduced in \cite{LiebNebeVaz18} as a generalization of constant dimension codes in network coding. In a flag code of constant type, codewords are given by sequences of nested subspaces (flags) of prescribed dimensions. In that paper, the action of $\mathrm{GL}(n,q)$ is naturally extended from subspaces to flags and several constructions of \emph{orbit flag codes} are provided. In \cite{CasoPlanar, CasoNoPlanar}, flag codes attaining the maximum possible distance (\emph{optimum distance flag codes}) are characterized and obtained without regard to their possible orbital structure, whereas in \cite{OrbitODFC} an orbital construction of them is proposed. In this work we follow the approach of Gluesing-Luerssen \emph{et al.} in \cite{GLMoTro2015}. Inspired by their ideas, we consider flags on $\bbF_{q^n}$ given by nested $\bbF_q$-subspaces of the field $\bbF_{q^n}$ and focus on \emph{cyclic orbit flag codes} constructed as orbits of subgroups of $\bbF_{q^n}^\ast$.
We generalize the concept of the best friend of a subspace to the flags framework by defining the \emph{best friend} of a flag as the largest subfield of $\bbF_{q^n}$ over which every subspace in the flag is a vector space. As in the constant dimension code scenario, the knowledge of the best friend of a generating flag allows us to easily determine the size of the cyclic orbit code as well as to give estimates for its distance. In particular, we pay special attention to two specific families of cyclic orbit flag codes attaining the extreme possible values of the distance. We introduce first the concept of \emph{Galois cyclic flag codes} as the cyclic orbit codes generated by sequences of nested subfields of $\bbF_{q^n}$. Despite the fact that these codes have the minimum possible distance (for a fixed best friend), they present a nice structure of nested spreads compatible with the action of $\bbF_{q^n}^*$. Moreover, if one considers the subcodes of Galois cyclic flag codes that keep the cyclic orbital structure, we can improve their distance in a controlled manner and even reach the maximum possible one. Along the way, we also determine which dimensions in the type vector of a general generating flag are compatible with attaining the maximum distance, having a fixed best friend and being orbits under the action of subgroups of $\bbF_{q^n}^\ast$. In other words, we study \emph{optimum distance cyclic orbit flag codes} and their orbital cyclic subcodes. The text is organized as follows. In Section \ref{sec:Preliminaries}, the reader can find the general background on subspace codes. Particular care is devoted to the study of cyclic orbit (subspace) codes developed in \cite{GLMoTro2015}. In Section \ref{sec: cyclic orbit flag codes}, cyclic orbit flag codes are introduced. We also generalize the notions of stabilizer subfield and best friend to the flag codes setting by exhibiting the relationship between these two concepts and the corresponding ones for subspace codes.
In Section \ref{sec: prescribed BF}, the cardinality and bounds for the distance of a cyclic orbit flag code with a given best friend are provided. We finish by introducing Galois cyclic flag codes and optimum distance cyclic flag codes with a prescribed best friend. We study their parameters and properties as well as those of the respective subcodes also coming from the action of subgroups of $\bbF_{q^n}^*$. \section{Preliminaries}\label{sec:Preliminaries} Let $\bbF_q$ be the finite field of $q$ elements, where $q$ is a prime power. For any natural number $n\geq 1$, $\bbF_q^n$ represents the $n$-dimensional vector space over $\bbF_q$. Given $1\leq k <n$, the \emph{Grassmannian} $\mathcal{G}_q(k, n)$ is the set of $k$-dimensional subspaces of $\bbF_q^n$ and we write $\mathcal{P}_q(n)$ to denote the \emph{projective geometry} of $\bbF_q^n$, that is, the set of all the subspaces of $\bbF_q^n$. The set $\mathcal{P}_q(n)$ can be considered as a metric space with the \emph{subspace distance} (see \cite{KoetKschi08}) defined as \begin{equation}\label{def: subspace distance} d_S(\mathcal{U}, \mathcal{V})= \dim(\mathcal{U}+\mathcal{V})-\dim(\mathcal{U}\cap\mathcal{V}). \end{equation} A \emph{subspace code} $\mathcal{C}$ of length $n$ is a nonempty subset of $\mathcal{P}_q(n)$ and its \emph{minimum subspace distance} is defined as $$ d_S(\mathcal{C})=\min\{ d_S(\mathcal{U}, \mathcal{V}) \ | \ \mathcal{U}, \mathcal{V} \in \mathcal{C}, \ \mathcal{U} \neq \mathcal{V} \}. $$ A subspace code in which every codeword has the same dimension, say $k$, is called a \emph{constant dimension code} of dimension $k$ and length $n$ (see \cite{TrautRosen18} and references therein). The subspace distance between two subspaces $\mathcal{U}$ and $\mathcal{V}$ of dimension $k$ is given by $$ d_S(\mathcal{U}, \mathcal{V})= 2(k-\dim(\mathcal{U}\cap\mathcal{V})).
$$ Consequently, the minimum distance of a constant dimension code of dimension $k$ is upper bounded by \begin{equation}\label{eq: bound subspace distance} d_S(\mathcal{C})\leq \left\lbrace \begin{array}{lll} 2k & \text{if} & 2k\leq n, \\ 2(n-k) & \text{if} & 2k > n. \end{array} \right. \end{equation} These bounds for the distance are attained by constant dimension codes in which every pair of codewords intersects in a subspace of the minimum possible dimension. For dimensions $k$ up to $\lfloor\frac{n}{2}\rfloor$, constant dimension codes attaining the previous bound are known as \emph{partial spread codes} and their cardinality is, at most, $\lfloor \frac{q^n-1}{q^k-1}\rfloor.$ In contrast, a constant dimension code that attains the bound in (\ref{eq: bound subspace distance}) and has dimension $k > \lfloor \frac{n}{2}\rfloor$ cannot contain more than $\lfloor \frac{q^n-1}{q^{n-k}-1}\rfloor$ elements. A \emph{spread code} in $\mathcal{G}_q(k, n)$, or just a \emph{$k$-spread}, is a partition of $\bbF_q^n$ into $k$-dimensional subspaces. In other words, a spread is a partial spread that covers $\bbF_q^n$. Spreads are classical objects coming from finite geometry, and it is well known that $k$-spreads exist if, and only if, $k$ divides $n$ (see \cite{Segre64}). As a consequence, the size of every $k$-spread is $\frac{q^n-1}{q^k-1}.$ For further information related to spread codes in the network coding framework, we refer the reader to \cite{GoManRo12, MangaGorlaRosen08, MangaTraut14, TrautRosen18}. There are constant dimension codes that can be obtained as orbits of the action of subgroups of the general linear group $\mathrm{GL}(n, q)$ on the Grassmannian of the corresponding dimension. In this case, we speak about \emph{orbit codes}, which were introduced for the first time in \cite{TrautManRos2010}.
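For a concrete illustration (ours, not taken from the cited references), the subspace distance can be computed from generating sets by Gaussian elimination, using $d_S(\mathcal{U},\mathcal{V})=2\dim(\mathcal{U}+\mathcal{V})-\dim(\mathcal{U})-\dim(\mathcal{V})$. The toy implementation below works over $\bbF_2$, encoding row vectors as bitmasks.

```python
def rank_gf2(rows):
    """Rank over F_2 of a list of row vectors encoded as integers."""
    pivots = {}  # highest set bit -> reduced row
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h not in pivots:
                pivots[h] = r
                break
            r ^= pivots[h]
    return len(pivots)

def subspace_distance(U, V):
    # d_S(U, V) = 2 dim(U + V) - dim(U) - dim(V)
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)

# Two planes in F_2^4 meeting in a line: distance 2(2 - 1) = 2.
U = [0b1000, 0b0100]
V = [0b1000, 0b0010]
assert subspace_distance(U, V) == 2

# Two planes meeting trivially attain the bound 2k = 4.
W = [0b0010, 0b0001]
assert subspace_distance(U, W) == 4
```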
Given a $k$-dimensional subspace $\mathcal{U}$ of $\bbF_{q^n}$ and a subgroup $G$ of $\mathrm{GL}(n, q)$, the orbit of $\mathcal{U}$ under the action of $G$ is the constant dimension code given by $\mathrm{Orb}_G(\mathcal{U}) = \{ \mathcal{U} \cdot A \ | \ A\in G\},$ where $\mathcal{U}\cdot A = \rsp (UA)$ for any full-rank generator matrix $U$ of $\mathcal{U}$. The \emph{stabilizer} of $\mathcal{U}$ under the action of $G$ is the subgroup $\mathrm{Stab}_G(\mathcal{U})= \{ A\in G \ | \ \mathcal{U}\cdot A= \mathcal{U} \}.$ Clearly, $$ | \mathrm{Orb}_G(\mathcal{U})| = \frac{|G|}{|\mathrm{Stab}_G(\mathcal{U})|} $$ and its minimum distance is given by $$ d_S( \mathrm{Orb}_G(\mathcal{U}))= \min \{ d_S(\mathcal{U}, \mathcal{U}\cdot A) \ | \ A \in G \setminus\mathrm{Stab}_G(\mathcal{U}) \}. $$ If the group $G$ is cyclic, the code $\mathrm{Orb}_G(\mathcal{U})$ is called a \emph{cyclic orbit code}. This special family of orbit codes was widely studied in \cite{GLMoTro2015, ManTrautRos11, RosTraut2013, TrautManBraunRos2013}. In particular, using the fact that $\bbF_q^n$ and $\bbF_{q^n}$ are isomorphic as $\bbF_q$-vector spaces, Trautmann \textit{et al.} provide in \cite{TrautManBraunRos2013} the following construction of a $k$-spread as a cyclic orbit code. Take a divisor $k$ of $n$ and let $\alpha$ denote a primitive element of $\bbF_{q^n}$, i.e., a generator of the multiplicative group $\bbF_{q^n}^\ast.$ If we put $c=\frac{q^n-1}{q^k-1},$ then it is clear that $\langle \alpha^c\rangle$ is the unique subgroup of order $q^k-1$ of $\bbF_{q^n}^\ast$ and that $\langle\alpha^c\rangle\cup\{0\}=\bbF_{q^k}.$ As proved in \cite[Th.
31]{TrautManBraunRos2013}, the stabilizer of $\bbF_{q^k}$ under the action of the cyclic group $\langle\alpha\rangle$ is precisely the subgroup $\langle\alpha^c\rangle$ and the orbit \begin{equation}\label{def: spread Anna-Lena} \mathcal{S} = \mathrm{Orb}_{\langle \alpha \rangle}(\bbF_{q^k})= \{ \bbF_{q^k}\alpha^i \ | \ i=0, \ldots, c-1 \} \end{equation} is a $k$-spread of $\bbF_{q^n}$. In \cite{GLMoTro2015}, Gluesing-Luerssen \textit{et al.} generalize the construction in (\ref{def: spread Anna-Lena}) for any $\beta \in \mathbb{F}_{q^n}^*$ by introducing the concept of $\beta$-\emph{cyclic orbit code generated by a subspace $\mathcal{U}$} of $\mathbb{F}_{q^n}$ and study these codes by specifying the largest subfield over which the subspace $\mathcal{U}$ is a vector space. Let us recall some definitions and results from that work that we will use throughout this paper. Consider any nonzero element $\beta$ in the finite field $\bbF_{q^n}$ and the natural multiplicative action of the group $\langle \beta \rangle$ on $\bbF_q$-vector subspaces of $\bbF_{q^n}.$ Orbits of this action are called $\beta$-\emph{cyclic orbit codes}. To be precise, if $1\leq k<n$ and $\mathcal{U}\subset \bbF_{q^n}$ is a $k$-dimensional subspace over $\bbF_q$, the $\beta$-\emph{cyclic orbit code generated by $\mathcal{U}$} is the constant dimension code in the Grassmannian $\mathcal{G}_q(k, n)$ given by $$ \mathrm{Orb}_{\beta}(\mathcal{U})= \{\mathcal{U} \beta^i \ | \ 0\leq i \leq |\beta|-1\}, $$ where $|\beta|$ denotes the \emph{multiplicative order} of $\beta$ (for further information on these orbits, see \cite{Drudge2002}).
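The spread construction in (\ref{def: spread Anna-Lena}) can be replayed by hand in a toy case. In the Python sketch below (our own illustration; the modulus $x^4+x+1$ and the integer encoding of field elements are ad hoc choices) we model $\bbF_{2^4}=\bbF_2[x]/(x^4+x+1)$ and compute the orbit of the subfield $\bbF_{2^2}=\langle\alpha^5\rangle\cup\{0\}$ under a primitive element $\alpha$, recovering a $2$-spread with $c=(2^4-1)/(2^2-1)=5$ planes.

```python
MOD, DEG = 0b10011, 4              # x^4 + x + 1, irreducible over F_2

def gf_mul(a, b):
    """Carry-less multiplication of F_{2^4} elements encoded as ints 0..15."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << DEG):
            a ^= MOD
        b >>= 1
    return r

alpha = 0b0010                                 # the class of x, primitive here
powers = [1]
for _ in range(14):
    powers.append(gf_mul(powers[-1], alpha))   # alpha^0, ..., alpha^14

subfield = frozenset({0, 1, powers[5], powers[10]})   # F_{2^2} inside F_{2^4}
orbit = {frozenset(gf_mul(u, g) for u in subfield) for g in powers}

# The orbit is a 2-spread: 5 subspaces whose nonzero parts partition F_{2^4}^*.
assert len(orbit) == 5
assert set().union(*(v - {0} for v in orbit)) == set(range(1, 16))
```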
The \emph{stabilizer} of the subspace $\mathcal{U}$ under the action of $\langle \beta \rangle$ is the cyclic subgroup defined as $\mathrm{Stab}_{\beta}(\mathcal{U})= \{\beta^i \in \langle\beta\rangle \ | \ \mathcal{U}\beta^i= \mathcal{U} \}$ and the \emph{stabilizer subfield} $\mathrm{Stab}^{+}_{\beta}(\mathcal{U})$ of $\mathcal{U}$ (with respect to $\beta$) is the smallest subfield of $\bbF_{q^n}$ containing both $\bbF_q$ and $\mathrm{Stab}_{\beta}(\mathcal{U}).$ \begin{remark} When the acting group is $\bbF_{q^n}^\ast$, following the notation in \cite{GLMoTro2015}, we simply write $\mathrm{Orb}(\mathcal{U})$ and call it the \emph{cyclic orbit code generated by $\mathcal{U}$}. In this situation, we also remove the subscript $\beta$ and write $\mathrm{Stab}(\mathcal{U})$ and $\mathrm{Stab}^+(\mathcal{U})$ to denote the stabilizer and the stabilizer subfield of $\mathcal{U}$, respectively. \end{remark} Concerning the cardinality of a $\beta$-cyclic orbit code, there exists a nice relationship between $|\mathrm{Orb}_{\beta}(\mathcal{U})|$ and the dimension of the generating subspace $\mathcal{U}$. More precisely, in \cite[Prop. 3.7]{GLMoTro2015}, the authors showed that, if $\mathcal{U}$ is a $k$-dimensional subspace of $\mathbb{F}_{q^n}$, then \begin{equation}\label{eq:cardinal} |\beta^{q^k-1}|=\frac{|\beta|}{\gcd(|\beta|,q^k-1)} \textrm{ divides } | \mathrm{Orb}_{\beta}(\mathcal{U})|. \end{equation} Moreover, the equality $| \mathrm{Orb}_{\beta}(\mathcal{U})|= \frac{|\beta|}{q^k-1}$ holds if, and only if, $\mathcal{U}$ is a vector space over $\bbF_{q^k}$. In particular, if $1\in\mathcal{U}$ and $k$ is a divisor of $n$, the code $\mathrm{Orb}(\mathcal{U})$ is a $k$-spread if, and only if, $\mathcal{U}=\bbF_{q^k}$. Therefore, the spread defined in (\ref{def: spread Anna-Lena}) arises as the cyclic orbit code $\mathrm{Orb}(\bbF_{q^k})$ in this context.
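The divisibility in (\ref{eq:cardinal}) involves only orders in the cyclic group $\bbF_{q^n}^\ast$, so a quick numerical check is possible. The sketch below (purely illustrative; the parameters are ours) uses the standard identity $|\beta^e|=|\beta|/\gcd(|\beta|,e)$.

```python
from math import gcd

def order_of_power(d, e):
    """Order of beta^e in a cyclic group, given |beta| = d."""
    return d // gcd(d, e)

# q = 2, n = 6, beta primitive, so |beta| = 2^6 - 1 = 63, and k = 3:
d, k = 2**6 - 1, 3
lower = order_of_power(d, 2**k - 1)   # |beta^{q^k-1}| = 63 / gcd(63, 7) = 9
# For U = F_{2^3} the equality case holds: |Orb(U)| = |beta|/(q^k - 1) = 9,
# which is exactly the size of the 3-spread of F_{2^6}.
assert lower == d // (2**k - 1) == 9
```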
A subfield $\bbF_{q^m}$ of $\bbF_{q^n}$ is said to be a \emph{friend} of a subspace $\mathcal{U} \subset \bbF_{q^n}$ if $\mathcal{U}$ is an $\bbF_{q^m}$-vector space. In that case, if $t$ is the dimension of $\mathcal{U}$ as an $\bbF_{q^m}$-vector space, we have that $\dim_{\bbF_q}(\mathcal{U}) = mt$. Moreover, if $\{u_1, \ldots, u_t\}\subseteq \mathcal{U}$ is a basis of $\mathcal{U}$ over $\bbF_{q^m},$ then it holds $$ \mathcal{U} = \bbF_{q^m}u_1 \oplus \cdots \oplus \bbF_{q^m}u_t. $$ Note that every subspace $\mathcal{U}$ is a vector space over $\mathrm{Stab}^{+}_{\beta}(\mathcal{U})$. In other words, the stabilizer subfield is a friend of $\mathcal{U}$. The largest friend of $\mathcal{U}$ is called its \emph{best friend} (see \cite{GLMoTro2015}). The concepts of stabilizer subfield and best friend of a subspace coincide in the following situation, in which, moreover, knowing the best friend of $\mathcal{U}$ directly provides the cardinality of the cyclic orbit code as well as a lower bound for its distance. \begin{proposition}\label{prop: stab+ es best friend}(\cite[Prop. 3.3, 3.12, 3.13 and 4.1]{GLMoTro2015}) If $\mathcal{U}$ is a subspace of $\bbF_{q^n}$, then its stabilizer subfield satisfies $$ \mathrm{Stab}^+(\mathcal{U})= \mathrm{Stab}(\mathcal{U}) \cup \{0 \} $$ and it contains every friend of $\mathcal{U}$. As a consequence, the field $\mathrm{Stab}^+(\mathcal{U})$ is the best friend of the subspace $\mathcal{U}$. In particular, if $\mathrm{Stab}^+(\mathcal{U})=\bbF_{q^m}$, then $$ |\mathrm{Orb}(\mathcal{U})|=\frac{q^n-1}{q^m-1}. $$ Moreover, the value $2m$ divides the distance between every pair of subspaces in $\mathrm{Orb}(\mathcal{U})$ and, hence, we have that $d_S(\mathrm{Orb}(\mathcal{U}))\geq 2m.$ Besides, if $1\in \mathcal{U}$, we have the inclusion $\mathrm{Stab}^+(\mathcal{U})\subseteq \mathcal{U}$.
\end{proposition} \section{Cyclic orbit flag codes}\label{sec: cyclic orbit flag codes} In classical linear algebra, a flag variety on the field extension $\bbF_{q^n}$ is a homogeneous space that generalizes the Grassmann variety and whose points are flags. The use of flags in network coding was proposed for the first time in \cite{LiebNebeVaz18}. We start this section by recalling some basic background on flag codes. Next, we will focus on the family of flag codes that are orbits under the action of a cyclic group on the flag variety. Finally, we introduce the concepts of stabilizer subfield and best friend of a flag, following the ideas in \cite{GLMoTro2015}, in order to deepen our understanding of the structure and properties of the family of cyclic orbit flag codes. \subsection{Flag codes} \begin{definition} A {\em flag} $\mathcal{F}=(\mathcal{F}_1,\ldots, \mathcal{F}_r)$ on $\mathbb{F}_{q^n}$ is a sequence of nested $\bbF_q$-vector subspaces of $\mathbb{F}_{q^n}$, i.e., such that $$ \{0\}\subsetneq \mathcal{F}_1 \subsetneq \cdots \subsetneq \mathcal{F}_r \subsetneq \mathbb{F}_{q^n}. $$ The subspace $\mathcal{F}_i$ is said to be the {\em $i$-th subspace} of $\mathcal{F}$. The {\em type} of $\mathcal{F}$ is the vector $(\dim(\mathcal{F}_1), \dots, \dim(\mathcal{F}_r))$. In case the type vector is $(1, 2, \ldots, n-1),$ we say that ${\mathcal{F}}$ is a {\em full flag}. \end{definition} The \emph{flag variety} $\mathcal{F}_q((t_1,\dots,t_r),n)$ is the set of flags of type $(t_1, \dots, t_r)$ on $\mathbb{F}_{q^n}$. This variety can naturally be equipped with a metric by extending the subspace distance defined in (\ref{def: subspace distance}). Given two flags $\mathcal{F}=(\mathcal{F}_1,\ldots, \mathcal{F}_r)$ and $\mathcal{F}'=(\mathcal{F}'_1,\ldots, \mathcal{F}'_r)$ in $\mathcal{F}_q( (t_1, \ldots, t_r),n)$, their \emph{flag distance} is $$ d_f(\mathcal{F},\mathcal{F}')= \sum_{i=1}^r d_S(\mathcal{F}_i, \mathcal{F}'_i).
$$ \begin{definition} A \emph{flag code} of type $(t_1, \dots, t_r)$ on $\bbF_{q^n}$ is a nonempty subset $\mathcal{C}\subseteq \mathcal{F}_q((t_1, \dots, t_r), n)$. Its {\em minimum distance} is given by $$ d_f(\mathcal{C})=\min\{d_f(\mathcal{F},\mathcal{F}')\ |\ \mathcal{F},\mathcal{F}'\in\mathcal{C}, \ \mathcal{F}\neq \mathcal{F}'\} $$ and, in case $|\mathcal{C}|=1$, we put $d_f(\mathcal{C})=0.$ \end{definition} For each dimension $t_i$ in the type vector of a flag code $\mathcal{C}$, we can associate to it the constant dimension code in the Grassmannian $\mathcal{G}_q(t_i, n)$ consisting of the set of the $i$-th subspaces of flags in $\mathcal{C}$. This set is called the \emph{$i$-projected code} of $\mathcal{C}$ and we denote it by $\mathcal{C}_i$. It is clear that $\vert \mathcal{C}_i\vert \leq \vert \mathcal{C} \vert$ for every $i=1, \dots, r$. In case $|\mathcal{C}_1|=\dots=|\mathcal{C}_r|=|\mathcal{C}|$, we say that $\mathcal{C}$ is \emph{disjoint}. As shown in \cite{CasoPlanar}, the property of being disjoint is necessary in order to have flag codes that achieve the maximum possible flag distance. For type $(t_1, \dots, t_r),$ that maximum distance is \begin{equation}\label{eq: dist max flags} 2 \left( \sum_{t_i \leq \lfloor \frac{n}{2}\rfloor} t_i + \sum_{t_i > \lfloor \frac{n}{2}\rfloor} (n-t_i) \right) \end{equation} and flag codes attaining it are called \emph{optimum distance flag codes}. In \cite{CasoPlanar, CasoNoPlanar} the reader can find constructions of this class of codes as well as the following characterization of them. \begin{theorem}\cite[Th. 3.11]{CasoPlanar}\label{theo: caracterización ODFC} A flag code is an optimum distance flag code if, and only if, it is disjoint and every projected code attains the maximum possible distance for its dimension. \end{theorem} As in the case of subspace codes, one can build families of flag codes through the action of a group.
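The bound (\ref{eq: dist max flags}) is a finite sum and can be evaluated mechanically; the following sketch (our own illustration, with type vectors chosen for the example) does so for two small cases.

```python
def max_flag_distance(dims, n):
    """Evaluate 2*( sum_{t <= n/2} t + sum_{t > n/2} (n - t) ) for a type vector."""
    return 2 * sum(t if t <= n // 2 else n - t for t in dims)

# Full flag on F_{q^4}: type (1, 2, 3) gives 2*(1 + 2 + (4 - 3)) = 8.
assert max_flag_distance((1, 2, 3), 4) == 8
# Type (2, 4) on F_{q^6}: 2*(2 + (6 - 4)) = 8.
assert max_flag_distance((2, 4), 6) == 8
```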
This approach already appears in \cite{LiebNebeVaz18}, where the authors generalize the action of $\mathrm{GL}(n, q)$ on subspaces of $\bbF_q^n$ to flags and provide several constructions of flag codes as orbits of the action of specific upper unitriangular matrix groups on the full flag variety. In the next section, following the ideas developed in \cite{GLMoTro2015} for subspace codes, we introduce the concept of \emph{cyclic orbit flag code} as the orbit of the multiplicative action of subgroups of $\bbF_{q^n}^\ast$ on flags on $\bbF_{q^n}$. \subsection{Cyclic orbit flag codes} Given a nonzero element $\beta$ in the field $\bbF_{q^n}$, we can extend the natural action of the cyclic group $\langle \beta \rangle$ on $\bbF_q$-subspaces of $\bbF_{q^n}$ to flags on $\bbF_{q^n}$ as follows. If $\mathcal{F}=(\mathcal{F}_1,\dots, \mathcal{F}_r)$ is a flag of type $(t_1, \ldots, t_r)$ on $\bbF_{q^n}$, we define the flag $\mathcal{F} \beta $ as $$ \mathcal{F}\beta = (\mathcal{F}_1\beta, \ldots, \mathcal{F}_r\beta). $$ The set \begin{equation}\label{def: cyclic orbit flag code} \mathrm{Orb}_{\beta}(\mathcal{F}) = \{ \mathcal{F} \beta^j \ | \ 0\leq j \leq |\beta|-1 \} \end{equation} is called the $\beta$-\emph{cyclic orbit flag code} generated by $\mathcal{F}.$ The \emph{stabilizer} of the flag $\mathcal{F}$ (w.r.t. $\beta$) is the subgroup of $\langle \beta \rangle$ given by $$ \mathrm{Stab}_{\beta}(\mathcal{F})= \{\beta^j \in \langle\beta\rangle \ | \ \mathcal{F}\beta^j= \mathcal{F} \}. $$ When the acting group is $\bbF_{q^n}^\ast$, we do not specify it and simply write $\mathrm{Orb}(\mathcal{F})$ to denote the \emph{cyclic orbit flag code generated by $\mathcal{F}$.} We also drop the subscript in $\mathrm{Stab}(\mathcal{F})$. Observe that every $\mathrm{Orb}_{\beta}(\mathcal{F})$ is a subcode of $\mathrm{Orb}(\mathcal{F})$. Furthermore, it holds $$ \mathrm{Stab}_{\beta}(\mathcal{F})=\langle\beta\rangle \cap \mathrm{Stab}(\mathcal{F}).
$$ As in the subspace codes framework, the orbital structure simplifies the computation of the code parameters: the cardinality of the flag code in (\ref{def: cyclic orbit flag code}) is given by \begin{equation}\label{cardinality cyclic orbit flag code} | \mathrm{Orb}_{\beta}(\mathcal{F})| = \dfrac{|\beta|}{|\mathrm{Stab}_{\beta}(\mathcal{F})|} = \dfrac{|\beta|}{|\langle\beta\rangle \cap \mathrm{Stab}(\mathcal{F})|} \end{equation} and its minimum distance can be computed as $$ d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= \min\{ d_f(\mathcal{F}, \mathcal{F}\beta^j) \ | \ \beta^j \notin \mathrm{Stab}_{\beta}(\mathcal{F}) \}. $$ \begin{remark} Notice that the projected codes associated to $\mathrm{Orb}_{\beta}(\mathcal{F})$ are $\beta$-cyclic orbit (subspace) codes as well. More precisely, for every $1\leq i \leq r$, we have $$ (\mathrm{Orb}_{\beta}(\mathcal{F}))_i = \mathrm{Orb}_{\beta}(\mathcal{F}_i). $$ Moreover, as for any other group action, there is a clear relationship between the stabilizer of the flag $\mathcal{F}$ and those of its subspaces: \begin{equation}\label{eq: estabilizador flag} \mathrm{Stab}_{\beta}(\mathcal{F})=\bigcap_{i=1}^r \mathrm{Stab}_{\beta}(\mathcal{F}_i). \end{equation} \end{remark} This equality leads to a direct link between the cardinality of a $\beta$-cyclic orbit flag code, the cardinalities of its projected codes, and the dimensions in the type vector of the generating flag. \begin{proposition}\label{prop: cardinal proyectado divide} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag of type $(t_1, \ldots, t_r)$ on $\bbF_{q^n}$ and $\beta \in \bbF_{q^n}^*.$ Then $|\mathrm{Orb}_{\beta}(\mathcal{F}_i)|$ divides $|\mathrm{Orb}_{\beta}(\mathcal{F})|$, for $1\leq i\leq r$. In particular, $$ \mathrm{lcm}\left\lbrace |\beta^{q^{t_i}-1}| \ | \ 1\leq i \leq r \right\rbrace \ \text{divides} \ |\mathrm{Orb}_{\beta}(\mathcal{F})|.
$$ \end{proposition} \begin{proof} Recall that $|\mathrm{Orb}_{\beta}(\mathcal{F})| = \frac{|\beta|}{|\mathrm{Stab}_{\beta}(\mathcal{F})|}$ and $|\mathrm{Orb}_{\beta}(\mathcal{F}_i)| = \frac{|\beta|}{|\mathrm{Stab}_{\beta}(\mathcal{F}_i)|}$, for eve\-ry $1 \leq i\leq r$. Moreover, by means of (\ref{eq: estabilizador flag}), we have that $|\mathrm{Stab}_{\beta}(\mathcal{F})|$ divides $|\mathrm{Stab}_{\beta}(\mathcal{F}_i)|$ for every value of $i$. Hence, the cardinality of $\mathrm{Orb}_{\beta}(\mathcal{F}_i)$ must divide $|\mathrm{Orb}_{\beta}(\mathcal{F})|,$ for $1 \leq i\leq r$. The last part of the statement follows directly from this fact along with (\ref{eq:cardinal}). \end{proof} \subsection{Stabilizer subfield and best friend of a flag code} The following definition extends the concept of stabilizer subfield of a subspace defined in \cite{GLMoTro2015} to the flag codes setting. \begin{definition} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag of type $(t_1, \ldots, t_r)$ on the field $\bbF_{q^n}$ and $\beta\in\bbF_{q^n}^\ast$. We define the \emph{stabilizer subfield} of the flag $\mathcal{F}$ (w.r.t. $\beta$) as the smallest subfield $\mathrm{Stab}^{+}_{\beta}(\mathcal{F})$ of $\bbF_{q^n}$ containing both $\bbF_q$ and $\mathrm{Stab}_{\beta}(\mathcal{F})$. \end{definition} As before, if $\beta$ is a primitive element of $\bbF_{q^n}$, we just write $\mathrm{Stab}^+(\mathcal{F})$. In this case, the stabilizer subfield of a flag admits the following nice description: \begin{proposition}\label{prop: stab+ flag} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag on $\bbF_{q^n}$. It holds $$ \mathrm{Stab}^+(\mathcal{F})= \mathrm{Stab}(\mathcal{F})\cup \{0\}=\bigcap_{i=1}^r\mathrm{Stab}^+(\mathcal{F}_i) $$ and every $i$-th subspace $\mathcal{F}_i$ of the flag $\mathcal{F}$ is a vector space over $\mathrm{Stab}^+(\mathcal{F})$. 
Moreover, if $1\in\mathcal{F}_1$, the stabilizer subfield $\mathrm{Stab}^+(\mathcal{F})$ is contained in every subspace of $\mathcal{F}$. \end{proposition} \begin{proof} By application of Proposition \ref{prop: stab+ es best friend}, one has that $\mathrm{Stab}^+(\mathcal{F}_i)=\mathrm{Stab}(\mathcal{F}_i)\cup\{0\}$ for every $1\leq i\leq r$. Now, by means of (\ref{eq: estabilizador flag}), we conclude that \begin{equation}\label{stab+ de un flag como intersección} \begin{array}{ccccc} \mathrm{Stab}(\mathcal{F}) \cup \{0\} & = & \left(\bigcap_{i=1}^r \mathrm{Stab}(\mathcal{F}_i)\right) \cup\{0\} & & \\ & = & \bigcap_{i=1}^r \left( \mathrm{Stab}(\mathcal{F}_i) \cup\{0\} \right)& = & \bigcap_{i=1}^r\mathrm{Stab}^+(\mathcal{F}_i). \end{array} \end{equation} This proves that $\mathrm{Stab}(\mathcal{F}) \cup \{0\}$ is a field and hence it is the stabilizer subfield of the flag $\mathcal{F}$. Moreover, it is a subfield of every $\mathrm{Stab}^+(\mathcal{F}_i)$. Hence, it is clear that the subspace $\mathcal{F}_i$ is a vector space over $\mathrm{Stab}^+(\mathcal{F})$. Besides, if $1\in \mathcal{F}_1$, by using Proposition \ref{prop: stab+ es best friend}, we obtain $$ \mathrm{Stab}^+(\mathcal{F})\subseteq \mathrm{Stab}^+(\mathcal{F}_1)\subseteq \mathcal{F}_1 \subset \mathcal{F}_2 \subset \dots \subset \mathcal{F}_r. $$ \end{proof} Notice that the condition $1\in \mathcal{F}_1$ in Proposition \ref{prop: stab+ flag} is by no means restrictive when the acting group is $\bbF_{q^n}^\ast$. In fact, we can always find a generating flag fulfilling this property. It suffices to see that, given an arbitrary flag $\mathcal{F}$, for every nonzero element $\beta\in \mathcal{F}_1$, the flag $\mathcal{F}\beta^{-1}$ clearly satisfies the required condition. Moreover, since $\beta$ is an element in the field $\bbF_{q^n}^\ast$, both flags $\mathcal{F}$ and $\mathcal{F}\beta^{-1}$ generate the same cyclic orbit flag code $\mathrm{Orb}(\mathcal{F})=\mathrm{Orb}(\mathcal{F}\beta^{-1})$.
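Since subgroups of the cyclic group $\bbF_{q^n}^\ast$ are determined by their orders, formula (\ref{cardinality cyclic orbit flag code}) reduces to integer arithmetic once the order of $\mathrm{Stab}(\mathcal{F})$ is known: subgroups of orders $a$ and $b$ intersect in the unique subgroup of order $\gcd(a,b)$. The sketch below is our own illustration with assumed parameters.

```python
from math import gcd

def beta_orbit_size(q, n, beta_exp, stab_order):
    """|Orb_beta(F)| = |beta| / |<beta> meet Stab(F)| for beta = alpha^beta_exp,
    alpha primitive in F_{q^n}; subgroup orders intersect via gcd."""
    N = q**n - 1
    beta_order = N // gcd(N, beta_exp)
    return beta_order // gcd(beta_order, stab_order)

# q = 2, n = 6 and a flag whose stabilizer is F_{2^2}^* (order 3):
assert beta_orbit_size(2, 6, 1, 3) == 21   # full cyclic orbit: 63/3
assert beta_orbit_size(2, 6, 9, 3) == 7    # |alpha^9| = 7 is coprime to 3
```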
\begin{remark}\label{rem: vector space over stabsfbeta} Clearly, if $\beta \in \bbF_{q^n}^*,$ it holds $\mathrm{Stab}_{\beta}(\mathcal{F}) \subseteq \mathrm{Stab}(\mathcal{F})$ and, hence, $ \mathrm{Stab}^{+}_{\beta}(\mathcal{F}) \subseteq \mathrm{Stab}^+(\mathcal{F})$. As a consequence, every $\mathcal{F}_i$ is a vector space over the field $\mathrm{Stab}^{+}_{\beta}(\mathcal{F})$ as well as over all its subfields. Moreover, if $1 \in \mathcal{F}_1,$ then $\mathrm{Stab}^{+}_{\beta}(\mathcal{F}) \subseteq \mathcal{F}_i$ for $1\leq i\leq r$. \end{remark} \noindent As it occurs for constant dimension codes, the inclusion $ \mathrm{Stab}^{+}_{\beta}(\mathcal{F}) \subseteq \mathrm{Stab}^+(\mathcal{F})$ may be strict. Let us provide an example with a flag of length two inspired by \cite[Example 3.6]{GLMoTro2015}. \begin{example}\label{ex: inclusion estricta stabsfbeta} Consider the flag $\mathcal{F}=(\bbF_{3^2} , \bbF_{3^4})$ on the field $\bbF_{3^8}$ and let $\alpha$ be a primitive element of $\bbF_{3^8}$. Observe that $\mathrm{Stab}^+(\bbF_{3^2})=\bbF_{3^2}$ and $\mathrm{Stab}^+(\bbF_{3^4})=\bbF_{3^4}$. Hence, by Proposition \ref{prop: stab+ flag}, it follows that $\mathrm{Stab}^+(\mathcal{F})=\bbF_{3^2}\cap \bbF_{3^4} = \bbF_{3^2}.$ Let us now choose $\beta= \alpha^{1312}$, which has multiplicative order $5$. Observe that $\mathrm{Stab}_{\beta}(\mathcal{F})\subseteq \langle\beta\rangle$ and also $\mathrm{Stab}_{\beta}(\mathcal{F})\subseteq \mathrm{Stab}^{+}_{\beta}(\mathcal{F})^\ast \subseteq \mathrm{Stab}^+(\mathcal{F})^\ast = \bbF_{3^2}^\ast$. As the orders of $\langle\beta\rangle$ and $\bbF_{3^2}^\ast$ are coprime, we have that $\mathrm{Stab}_{\beta}(\mathcal{F})= \{1\}$. This implies that $\mathrm{Stab}^{+}_{\beta}(\mathcal{F})=\bbF_3$. \end{example} There are remarkable connections between the cardinality of a $\beta$-cyclic orbit flag code and the generating flag when one has a divisor of $n$ among the dimensions of the type vector.
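The arithmetic in Example \ref{ex: inclusion estricta stabsfbeta} can be double-checked with a few lines of Python (an illustrative verification only):

```python
from math import gcd

N = 3**8 - 1                       # |F_{3^8}^*| = 6560
beta_order = N // gcd(N, 1312)     # multiplicative order of alpha^1312
assert beta_order == 5
# |beta| = 5 is coprime to |F_{3^2}^*| = 3^2 - 1 = 8, so the intersection
# of <beta> with F_{3^2}^* is trivial and Stab_beta(F) = {1}.
assert gcd(beta_order, 3**2 - 1) == 1
```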
\begin{proposition}\label{prop: cardinal beta-cíclico} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag of type $(t_1, \ldots, t_r)$ on $\bbF_{q^n}$. Assume that $m$ is a divisor of $n$ such that $m=t_i$ for some $i\in \{1,\ldots, r\}$ and consider the subfield $\bbF_{q^m}$ of $\bbF_{q^n}$. Take an element $\beta \in \bbF_{q^n}^\ast$ such that $\bbF_{q^m}^\ast \subseteq \langle\beta\rangle$. Then: \begin{enumerate} \item The value $\frac{|\beta|}{q^m-1}$ divides $|\mathrm{Orb}_{\beta}(\mathcal{F})|.$ \label{item1} \item We have $|\mathrm{Orb}_{\beta}(\mathcal{F})|= \frac{|\beta|}{q^m-1}$ if, and only if, each subspace $\mathcal{F}_j$ is a vector space over $\bbF_{q^m}.$ In particular, $t_1=m$. \label{item2} \end{enumerate} \end{proposition} \begin{proof} As $\bbF_{q^m}^\ast\subseteq \langle\beta\rangle$, we have that $q^m-1$ must divide $|\beta|$. This implies that $|\beta^{q^{t_i}-1}|=|\beta^{q^m-1}|=\frac{|\beta|}{q^m-1}$ and (\ref{item1}) follows directly from Proposition \ref{prop: cardinal proyectado divide}. To prove (\ref{item2}), observe that $|\mathrm{Orb}_{\beta}(\mathcal{F})|= \frac{|\beta|}{q^m-1}$ holds if, and only if, $\mathrm{Stab}_{\beta}(\mathcal{F})$ is a subgroup of order $q^m-1$ of $\langle\beta\rangle$. By the uniqueness of subgroups of a cyclic group, it follows that $\mathrm{Stab}_{\beta}(\mathcal{F})=\bbF_{q^m}^\ast$. Hence, the field $\mathrm{Stab}^{+}_{\beta}(\mathcal{F})=\bbF_{q^m}$ is a subfield of $\mathrm{Stab}^+(\mathcal{F})$ and, by means of Remark \ref{rem: vector space over stabsfbeta}, every subspace $\mathcal{F}_j$ has the structure of an $\bbF_{q^m}$-vector space. In particular, no dimension smaller than $m$ can appear in the type vector, i.e., $t_1=m$. Conversely, assume that every $\mathcal{F}_j$ is a vector space over $\bbF_{q^m}$ for $j\in \{1, \ldots, r\}$. In particular, $\mathcal{F}_1=\bbF_{q^m}\gamma$ for some $\gamma\in \bbF_{q^n}^*$.
As a consequence, multiplication by elements in $\bbF_{q^m}^\ast\subseteq \langle\beta\rangle$ is closed on every subspace $\mathcal{F}_j$. Hence, we have $\bbF_{q^m}^\ast \subseteq \mathrm{Stab}_{\beta}(\mathcal{F}_j)$ for $1\leq j\leq r$ and, by means of (\ref{eq: estabilizador flag}), it holds $\bbF_{q^m}^\ast\subseteq \mathrm{Stab}_{\beta}(\mathcal{F})$. On the other hand, notice that $\mathrm{Stab}_{\beta}(\mathcal{F})\subseteq \mathrm{Stab}_{\beta}(\mathcal{F}_1) = \bbF_{q^m}^\ast$. Thus, it follows that $\mathrm{Stab}_{\beta}(\mathcal{F})=\bbF_{q^m}^\ast$ and $|\mathrm{Orb}_{\beta}(\mathcal{F})|=\frac{|\beta|}{q^m-1},$ as we wanted to prove. \end{proof} The second statement in Proposition \ref{prop: cardinal beta-cíclico} turns out to be especially interesting in the case of cyclic orbit codes, that is, when the acting group is $\bbF_{q^n}^\ast$. \begin{corollary}\label{cor: cardinal cíclico} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag of type $(t_1, \ldots, t_r)$ on $\bbF_{q^n}$. Assume that $m$ is a divisor of $n$ such that $m=t_i$ for some $i\in \{1,\ldots, r\}$. If $|\mathrm{Orb}(\mathcal{F})|= \frac{q^n-1}{q^m-1}$, then $m=t_1$ and the constant dimension code $\mathrm{Orb}(\mathcal{F}_1)$ is the $m$-spread $\mathrm{Orb}(\bbF_{q^m})$. Moreover, the value $m$ divides $t_j$, for $j\in \{1, \ldots, r\}$. \end{corollary} \begin{proof} By means of Proposition \ref{prop: cardinal beta-cíclico}, it is clear that the first dimension in the type vector is $t_1=m$ and it divides every $t_i$. Moreover, $\mathcal{F}_1$ must be a one-dimensional vector space over $\bbF_{q^m}$, that is, it is of the form $\mathcal{F}_1=\bbF_{q^m}\gamma$ for some $\gamma\in \bbF_{q^n}^*$. As a result, the first projected code $\mathrm{Orb}(\mathcal{F}_1)=\mathrm{Orb}(\bbF_{q^m})$ is the $m$-spread defined in (\ref{def: spread Anna-Lena}).
\end{proof} \begin{remark} In the conditions of the previous corollary, if we require the subspace $\mathcal{F}_1$ to contain the element $1\in \bbF_{q^n}$, not only do we obtain that $\mathrm{Orb}(\mathcal{F}_1)=\mathrm{Orb}(\bbF_{q^m})$ but also the equality $\mathcal{F}_1=\bbF_{q^m}$. \end{remark} In view of Propositions \ref{prop: stab+ flag} and \ref{prop: cardinal beta-cíclico}, it also makes sense to extend the concept of best friend introduced in \cite{GLMoTro2015} to flags. \begin{definition} Consider a flag $\mathcal{F}$ on $\bbF_{q^n}$. A subfield $\bbF_{q^m}$ of $\bbF_{q^n}$ is said to be a \emph{friend} of the flag $\mathcal{F}$ if all its subspaces are $\bbF_{q^m}$-vector spaces. In other words, a subfield of $\bbF_{q^n}$ is a friend of the flag $\mathcal{F}$ if it is a friend of all its subspaces. The largest friend of the flag $\mathcal{F}$ is called its \emph{best friend}. \end{definition} The next result states a necessary condition on the type vector of flags having a given subfield of $\bbF_{q^n}$ as a friend. The proof is straightforward. \begin{lemma}\label{lem: BF divides dimensions} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag of type $(t_1, \ldots, t_r)$ on $\bbF_{q^n}$. If $\bbF_{q^m}$ is a friend of $\mathcal{F}$, then $m$ divides $\gcd(t_1, \ldots, t_r, n).$ \end{lemma} \begin{remark} It follows that the best friend of a flag of type $(t_1, \ldots, t_r)$ with $\gcd(t_1, \ldots, t_r, n)=1$, in particular a full flag, is the ground field $\bbF_q$. \end{remark} Beyond conditions on the type vector, we can always characterize the best friend of an arbitrary flag in terms of the ones of its subspaces. To do so, we generalize Proposition \ref{prop: stab+ es best friend} to the flag codes scenario. \begin{proposition}\label{prop: stab+ es el best friend del flag} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag on $\bbF_{q^n}$.
Then $\mathrm{Stab}^+(\mathcal{F})$ is the best friend of the flag $\mathcal{F}$ and it contains any other friend $\bbF_{q^m}$ of $\mathcal{F}$. Moreover, if $1\in \mathcal{F}_1$, then we have that $\bbF_{q^m} \subseteq \mathrm{Stab}^+(\mathcal{F}) \subseteq \mathcal{F}_1$. \end{proposition} \begin{proof} Let us prove that $\mathrm{Stab}^+(\mathcal{F})$ is the largest friend of $\mathcal{F}$, i.e., its best friend. To do so, assume that a subfield $\bbF_{q^m}$ of $\bbF_{q^n}$ is a friend of the flag $\mathcal{F}$. By definition of friend of a flag, we know that multiplication by elements in $\bbF_{q^m}$ is closed in every subspace $\mathcal{F}_i$ of the flag. As a consequence, $\bbF_{q^m}^\ast$ is a subgroup of $\mathrm{Stab}(\mathcal{F})$ and we can conclude that $\bbF_{q^m}$ is contained in $\mathrm{Stab}(\mathcal{F})\cup\{0\} = \mathrm{Stab}^+(\mathcal{F})$. This proves that the stabilizer subfield of $\mathcal{F}$ is its best friend. Finally, by using the condition $1\in \mathcal{F}_1$ together with Proposition \ref{prop: stab+ flag}, we obtain the inclusion $$ \bbF_{q^m} \subseteq \mathrm{Stab}^+(\mathcal{F}) \subseteq \mathrm{Stab}^+(\mathcal{F}_1) \subseteq \mathcal{F}_1. $$ \end{proof} \begin{remark}\label{rem: BF=interseccion BF_i y 1 in F_1} Observe that all flags in the code $\mathrm{Orb}(\mathcal{F})$ have the same best friend. In particular, since $\mathrm{Orb}_{\beta}(\mathcal{F})\subseteq\mathrm{Orb}(\mathcal{F})$, the flags in a $\beta$-cyclic orbit flag code all have the same best friend for every $\beta\in\bbF_{q^n}^\ast$. Hence, we say that $\mathrm{Stab}^+(\mathcal{F})$ is the best friend of every $\mathrm{Orb}_{\beta}(\mathcal{F})$. \end{remark} As stated in the proof of Proposition \ref{prop: stab+ flag} (see equation (\ref{stab+ de un flag como intersección})), the stabilizer subfield of the cyclic flag code $\mathrm{Orb}(\mathcal{F})$ can be computed as the intersection of the ones of its projected codes.
Combining this with Proposition \ref{prop: stab+ es el best friend del flag}, we obtain the next result. \begin{corollary}\label{cor: BF flag is the intersection} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag on $\bbF_{q^n}$. Then its best friend is the intersection of the ones of its subspaces. Moreover, if $1\in\mathcal{F}_1$, every friend of the flag $\mathcal{F}$ is contained in $\mathcal{F}_1$. \end{corollary} It is clear that the best friend of a flag is a subfield of the ones of its subspaces. However, while the subspaces in a flag are nested, their respective best friends might not form a sequence of nested subfields, as we can see in the following example. \begin{example}\label{ex:no nested best friends} Take $q$ a prime power and the flag of type $(2,3)$ on $\bbF_{q^4}$ given by $\mathcal{F}=(\bbF_{q^2}, \bbF_{q^2}+\bbF_{q}\alpha)$, where $\alpha$ denotes a primitive element of $\bbF_{q^4}$. In this case, the best friend of $\mathcal{F}_1$ is precisely $\bbF_{q^2}$ whereas, since $\gcd(3,4)=1$, the best friend of $\mathcal{F}_2$ is the ground field $\bbF_q$. \end{example} As it happens in the subspace codes setting, knowing the best friend of a cyclic orbit flag code gives relevant information about the code parameters, as we will see below. \section{Cyclic orbit flag codes with fixed best friend}\label{sec: prescribed BF} This section is devoted to the study of cyclic orbit flag codes on $\bbF_{q^n}$ generated by flags with the subfield $\bbF_{q^m}$ as their best friend. From now on, the integer $m$ will denote a divisor of $n$. Let us first see how the close relationship between the best friend of a flag and its stabilizer allows us to compute the size of the generated cyclic or $\beta$-cyclic orbit flag code. The next result follows from (\ref{cardinality cyclic orbit flag code}) and Proposition \ref{prop: stab+ es el best friend del flag}.
\begin{proposition} \label{prop: cardinality and best friend} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag on $\bbF_{q^n}$ and $\beta \in \bbF_{q^n}^*$. Assume that $\bbF_{q^m}$ is the best friend of $\mathcal{F}$. Then $$ |\mathrm{Orb}_{\beta}(\mathcal{F})|=\frac{|\beta|}{|\langle\beta\rangle \cap \bbF_{q^m}^\ast|}. $$ In particular, if $\beta$ is a primitive element of $\bbF_{q^n}$, it holds $|\mathrm{Orb}(\mathcal{F})|=\frac{q^n-1}{q^m-1}$. \end{proposition} \begin{remark} It is well known that any orbit coming from the action of a group can be partitioned into a set of orbits when we restrict the action to a subgroup. These orbits may have different cardinality in general. However, the cardinality of the code $\mathrm{Orb}_{\beta}(\mathcal{F})$ just depends on $|\beta|$ and the best friend of $\mathcal{F}$. Moreover, since all the flags in $\mathrm{Orb}(\mathcal{F})$ have the same best friend, we have that $|\mathrm{Orb}_{\beta}(\mathcal{F}')|=|\mathrm{Orb}_{\beta}(\mathcal{F})|$ for every $\mathcal{F}'\in\mathrm{Orb}(\mathcal{F})$. We conclude that, for any $\beta \in \bbF_{q^n}^*$, the code $\mathrm{Orb}(\mathcal{F})$ can be partitioned into a set of $\beta$-cyclic subcodes, all of them with the same cardinality. \end{remark} Proposition \ref{prop: cardinality and best friend} leads to a characterization of $\beta$-cyclic orbit flag codes whose size coincides with the order of the acting group. \begin{corollary} Let $\mathcal{F}$ be a flag on $\bbF_{q^n}$ with $\bbF_{q^m}$ as its best friend and consider $\beta\in\bbF_{q^n}^\ast$. Then $|\mathrm{Orb}_{\beta}(\mathcal{F})|=|\beta|$ if, and only if, $|\beta|$ and $q^m-1$ are coprime. In particular, this equality always holds if $q=2$ and $m=1$. \end{corollary} Having the subfield $\bbF_{q^m}$ as best friend yields a condition on the type vector of a flag, as well as a description of the structure of all the flags in its $\beta$-cyclic orbit flag code in terms of $\bbF_{q^m}$.
Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag of type $(t_1, \ldots, t_r)$ on $\bbF_{q^n}$ with $\bbF_{q^m}$ as its best friend. Hence, $\bbF_{q^m}$ must be a friend of all its subspaces and $m$ divides every dimension in the type vector. Consequently, we can write $t_i=m s_i$ for $i=1, \ldots, r$, where $1\leq s_1 < s_2 < \dots < s_r < s = \frac{n}{m}$. On the other hand, the nested structure of the flag $\mathcal{F}$ allows us to find linearly independent elements $a_1, \ldots, a_{s_r} \in \bbF_{q^{n}}$ (over $\bbF_{q^m}$) such that, for every $1\leq i\leq r$, we have $$ \mathcal{F}_i = \bigoplus_{j=1}^{s_i} \bbF_{q^m} a_{j}. $$ In particular, observe that if $m$ is a dimension in the type vector, then $s_1=1$ and the cyclic orbit code $\mathrm{Orb}(\mathcal{F}_1)$ is the $m$-spread of $\bbF_{q^n}$ described in (\ref{def: spread Anna-Lena}). Moreover, if $1\in\mathcal{F}_1$, this subspace must be the subfield $\bbF_{q^m}$. Concerning the distance of $\beta$-cyclic orbit flag codes, as in the constant dimension codes framework, we can also deduce some estimates from the knowledge of the best friend. \begin{proposition}\label{prop: distance bounds BF} Let $\mathcal{F}$ be a flag of type $(ms_1, \ldots, ms_r)$ on $\bbF_{q^n}$ with the subfield $\bbF_{q^m}$ as its best friend and take $\beta\in\bbF_{q^n}^\ast.$ Then $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))=0$ if, and only if, $\beta\in\bbF_{q^m}^\ast$. Otherwise, $2m$ divides $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))$ and it holds \begin{equation}\label{eq: distance bounds} 2m \leq d_f(\mathrm{Orb}_{\beta}(\mathcal{F})) \leq 2m \left( \sum_{s_i \leq \lfloor \frac{s}{2}\rfloor} s_i + \sum_{s_i > \lfloor \frac{s}{2}\rfloor} (s-s_i) \right). \end{equation} \end{proposition} \begin{proof} Assume that $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))=0$ or, equivalently, that $\mathrm{Orb}_{\beta}(\mathcal{F})=\{\mathcal{F}\}$.
This happens if, and only if, $\beta$ stabilizes the flag $\mathcal{F}$, i.e., if $\beta\in\mathrm{Stab}(\mathcal{F})=\bbF_{q^m}^\ast$. Take now $\beta\in\bbF_{q^n}^\ast\setminus\bbF_{q^m}^\ast$. By the definition of best friend of the flag $\mathcal{F}$, it follows that $\bbF_{q^m}$ is a friend of every subspace $\mathcal{F}_i$. This implies that, for every $1\leq i\leq r$, subspaces in $\mathrm{Orb}_{\beta}(\mathcal{F}_i)$ are vector spaces over $\bbF_{q^m}$. Take a flag $\mathcal{F}'$ in $\mathrm{Orb}_{\beta}(\mathcal{F})\setminus\{\mathcal{F}\}.$ Since, for every $1\leq i \leq r,$ both $\mathcal{F}_i$ and $\mathcal{F}'_i$, and hence $\mathcal{F}_i\cap\mathcal{F}_i'$, are vector spaces over $\bbF_{q^{m}}$, the value $m$ divides their dimensions (over $\bbF_q$). Taking into account that $d_S(\mathcal{F}_i, \mathcal{F}_i') = 2(\dim(\mathcal{F}_i)- \dim(\mathcal{F}_i\cap\mathcal{F}'_i))$, we conclude that $2m$ divides $d_S(\mathcal{F}_i, \mathcal{F}'_i)$ for every $1\leq i\leq r$. Consequently, the value $2m$ also divides $d_f(\mathcal{F}, \mathcal{F}') = \sum_{i=1}^r d_S(\mathcal{F}_i, \mathcal{F}'_i),$ for every choice of $\mathcal{F}'\in \mathrm{Orb}_{\beta}(\mathcal{F})\setminus\{\mathcal{F}\}$. In particular, $2m$ divides $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))$ and it is a lower bound for it. At the same time, if we consider the general upper bound for the distance of flag codes of type $(ms_1, \dots, ms_r)$ on $\bbF_{q^n}$ given in (\ref{eq: dist max flags}), taking into account that $n=ms$, we obtain the result. \end{proof} \begin{remark} Notice that for every $\beta\in\bbF_{q^n}^\ast$, it holds $\mathrm{Orb}_{\beta}(\mathcal{F}) \subseteq \mathrm{Orb}(\mathcal{F})$. Then it follows that $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))\geq d_f(\mathrm{Orb}(\mathcal{F}))$, except for $\beta\in\mathrm{Stab}(\mathcal{F})=\bbF_{q^m}^\ast$. However, not every $\beta$ allows us to improve the distance with respect to the one of $\mathrm{Orb}(\mathcal{F})$.
We can appreciate this fact in the next example. \end{remark} \begin{example} Take $q$ a prime power and $\alpha$ a primitive element of $\bbF_{q^{6}}$. Consider $\mathcal{F}$ a flag of type $(1,4)$ on $\bbF_{q^{6}}$ with subspaces $$ \mathcal{F}_1=\bbF_{q} \ \text{and} \ \mathcal{F}_2=\bbF_{q^{2}}+\bbF_{q^{2}}\alpha. $$ Notice that, since $\gcd(1,4,6)=1$, by application of Lemma \ref{lem: BF divides dimensions}, $\bbF_q$ is the best friend of $\mathcal{F}.$ Clearly, it is the best friend of $\mathcal{F}_1$ as well. Concerning $\mathcal{F}_2$, observe that $\bbF_{q^{2}}$ is one of its friends. Hence, its best friend is a subfield of $\bbF_{q^{6}}$ containing $\bbF_{q^{2}}$. We conclude that $\bbF_{q^2}$ is the best friend of $\mathcal{F}_2$. The cyclic orbit flag code $\mathrm{Orb}(\mathcal{F})$ contains exactly $\frac{q^6-1}{q-1}$ flags and we have $d_f(\mathrm{Orb}(\mathcal{F}))=2$. It suffices to see that, for every $\beta\in\bbF_{q^2}^\ast\setminus\bbF_q^\ast \subset \bbF_{q^6}^\ast$, it holds $\mathcal{F}_2=\mathcal{F}_2\beta$ and $$ d_f(\mathcal{F}, \mathcal{F}\beta)= d_S(\mathcal{F}_1, \mathcal{F}_1\beta)=2. $$ Observe that this is the minimum possible distance once the best friend $\bbF_q$ is fixed. Now, if we consider the subgroup $\langle\gamma\rangle=\bbF_{q^2}^\ast$, the subcode $\mathrm{Orb}_\gamma(\mathcal{F})$ has cardinality $\frac{q^2-1}{q-1}=q+1$ and the same argument as above gives that $d_f(\mathrm{Orb}_\gamma(\mathcal{F}))=2$. In this case, $\mathrm{Orb}_\gamma(\mathcal{F})$ does not have a better distance than $\mathrm{Orb}(\mathcal{F})$. Take now $\delta\in\bbF_{q^6}^\ast$ a generator of $\bbF_{q^3}^\ast$; then the $\delta$-cyclic flag code generated by $\mathcal{F}$ contains $\frac{q^3-1}{q-1}=q^2+q+1$ flags. To compute its distance, observe that $$ \mathrm{Stab}_\delta(\mathcal{F}_2)= \langle\delta\rangle \cap \bbF_{q^2}^\ast = \bbF_{q^3}^\ast \cap \bbF_{q^2}^\ast =\bbF_q^\ast = \mathrm{Stab}_\delta(\mathcal{F}_1) = \mathrm{Stab}_\delta(\mathcal{F}).
$$ Hence, for every $\delta^i\notin\mathrm{Stab}_\delta(\mathcal{F})$ it holds $\mathcal{F}_j\neq\mathcal{F}_j\delta^i$, for $j=1,2$. On the one hand, we have $d_S(\bbF_q, \bbF_q\delta^i)=2.$ On the other hand, as $\bbF_{q^2}$ is the best friend of $\mathcal{F}_2$, the value $d_S(\mathcal{F}_2, \mathcal{F}_2\delta^i)$ is a multiple of $4$. Since the maximum possible distance between $4$-dimensional subspaces of $\bbF_{q^6}$ is precisely $2(6-4)=4$, it follows that $d_S(\mathcal{F}_2, \mathcal{F}_2\delta^i)=4$. As a result, $d_f(\mathcal{F}, \mathcal{F}\delta^i)=6$ for all $\delta^i\in\langle\delta\rangle\setminus \mathrm{Stab}_\delta(\mathcal{F})$ and we conclude that $$d_f(\mathrm{Orb}_\delta(\mathcal{F}))=6 >2=d_f(\mathrm{Orb}(\mathcal{F})).$$ \end{example} \begin{remark} Observe that the upper bound for the distance given in (\ref{eq: distance bounds}) coincides with the general bound for the flag distance given in (\ref{eq: dist max flags}). However, in Subsection \ref{subsec:optimum distance cyclic codes}, we will see that, in our scenario, not every type vector is compatible with attaining this upper bound. On the other hand, the lower bound for the distance of a $\beta$-cyclic flag code having $\bbF_{q^m}$ as its best friend obtained in (\ref{eq: distance bounds}) coincides with the one given in Proposition \ref{prop: stab+ es best friend} for cyclic (subspace) codes having the same best friend. The previous example shows that this lower bound can also be attained by $\beta$-cyclic orbit flag codes of length at least two. Let us see another situation where the generating flag has a special form. \end{remark} \begin{example}\label{ex: minimim distance flags} Let $\mathcal{F}=(\bbF_{3^2}, \bbF_{3^4})$ be the flag of type $(2,4)$ on $\bbF_{3^8}$ defined in Example \ref{ex: inclusion estricta stabsfbeta} and consider the cyclic orbit flag code $\mathrm{Orb}(\mathcal{F})$.
Observe that, as stated in Example \ref{ex: inclusion estricta stabsfbeta}, the best friend of the flag $\mathcal{F}$ is the subfield $\bbF_{3^2}$. Moreover, $\mathrm{Stab}(\mathcal{F})=\mathrm{Stab}(\mathcal{F}_1)=\bbF_{3^2}^\ast$ and $\mathrm{Stab}(\mathcal{F}_2)=\bbF_{3^4}^\ast$. Now, if $\alpha$ denotes a primitive element of $\bbF_{3^8}$, the power $\alpha^{82}$ is a primitive element of the subfield $\bbF_{3^4}$. Hence, $\alpha^{82}$ clearly lies in $\bbF_{3^4}^\ast\setminus \bbF_{3^2}^\ast$. As a result, the flags $\mathcal{F}$ and $\mathcal{F}\alpha^{82}$ are different codewords in $\mathrm{Orb}(\mathcal{F})$ whereas we have the subspace equality $\mathcal{F}_2=\mathcal{F}_2\alpha^{82}$. It follows that $$ d_f(\mathrm{Orb}(\mathcal{F}))\leq d_f(\mathcal{F}, \mathcal{F}\alpha^{82}) = d_S(\bbF_{3^2}, \bbF_{3^2}\alpha^{82})= 4, $$ which is the minimum possible distance between distinct subspaces of dimension one over $\bbF_{3^2}$. Hence, we conclude that $d_f(\mathrm{Orb}(\mathcal{F}))=4$. \end{example} Notice that in the previous example the two subspaces of the generating flag are nested subfields of a given finite field. This example gives rise to the definition of a family of cyclic orbit flag codes inspired by the towers of subfields of $\bbF_{q^n}$. \subsection{Galois cyclic flag codes}\label{subsec:Galois codes} Let $1\leq t_1 < \dots < t_r < n$ be a sequence of divisors of $n$ such that $t_i$ divides $t_{i+1}$, for $1\leq i \leq r-1$. \begin{definition} We define the \emph{Galois flag} of type $(t_1, \dots, t_r)$ on $\bbF_{q^n}$ as the flag given by the sequence of nested subfields $(\bbF_{q^{t_1}}, \dots, \bbF_{q^{t_r}})$. For every $\beta\in \bbF_{q^n}^\ast$, the $\beta$-cyclic orbit flag code generated by this flag is called the \textit{Galois $\beta$-cyclic flag code} of type $(t_1, \dots, t_r)$.
When $\beta$ is primitive, we just say \emph{Galois cyclic flag code.} \end{definition} \begin{remark} Notice that, for each subgroup $\langle\beta\rangle\subseteq\bbF_{q^n}^\ast$, there is just one Galois $\beta$-cyclic flag code for each type vector satisfying the condition above. In contrast, the Galois $\beta$-cyclic flag code of a fixed type can be generated by different flags consisting of sequences of subspaces, not necessarily fields. Nevertheless, if we impose the condition $1\in\mathcal{F}_1$, only the Galois flag of type $(t_1, \dots, t_r)$ can generate the Galois $\beta$-cyclic flag code of this type. Given the Galois flag $\mathcal{F}$ of type vector $(t_1, \dots, t_r)$, it is clear that its $i$-th subspace is its own best friend. Hence, contrary to what happens for general flags (see Example \ref{ex:no nested best friends}), the best friends of the Galois flag subspaces form a sequence of nested subfields. As a consequence, the first subfield $\bbF_{q^{t_1}}$ is the best friend of the Galois flag of type $(t_1, \dots, t_r)$ and, in order to construct Galois $\beta$-cyclic flag codes with the subfield $\bbF_{q^m}$ as their best friend, it suffices to consider a sequence of suitable divisors $(t_1, \dots, t_r)$ starting at $t_1=m$. \end{remark} Let us start focusing on Galois cyclic flag codes ($\beta$ primitive). According to Proposition \ref{prop: cardinality and best friend}, the cardinality of the Galois cyclic flag code of type $(t_1, \dots, t_r)$ is $c_1=(q^n-1)/(q^{t_1}-1)$ whereas its distance is $2t_1$. In particular, its $i$-projected code contains exactly $c_i=(q^n-1)/(q^{t_i}-1)$ subspaces and has subspace distance equal to $2t_i$. Although the distance of Galois cyclic flag codes is the smallest possible for cyclic orbit flag codes with a fixed best friend, the kaleidoscopic algebraic structure of nested spreads inside them is remarkable and deserves to be pointed out.
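The cardinalities above are straightforward to check numerically. The following Python sketch (an illustration with hypothetical parameters $q=2$, $n=12$ and type $(2,4)$, not part of the construction itself) computes the sizes $c_i$ of the projected codes and verifies the divisibility conditions imposed on the type vector of a Galois flag.

```python
def projected_sizes(q, n, ts):
    """Sizes c_i = (q^n - 1)/(q^{t_i} - 1) of the i-projected codes of the
    Galois cyclic flag code of type ts = (t_1, ..., t_r) on F_{q^n}."""
    assert all(n % t == 0 for t in ts), "each t_i must divide n"
    assert all(ts[i] % ts[i - 1] == 0 for i in range(1, len(ts))), "t_i | t_{i+1}"
    # q^{t_i} - 1 divides q^n - 1 whenever t_i divides n:
    assert all((q**n - 1) % (q**t - 1) == 0 for t in ts)
    return [(q**n - 1) // (q**t - 1) for t in ts]

cs = projected_sizes(2, 12, (2, 4))
assert cs == [1365, 273]   # c_1 = 4095/3, c_2 = 4095/15
assert cs[0] % cs[1] == 0  # c_2 divides c_1, in line with the nested structure
```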
\begin{theorem}\label{theo: Galois flag codes structure} Let $\mathcal{F}=(\bbF_{q^{t_1}}, \dots, \bbF_{q^{t_r}})$ be the Galois flag of type $(t_1, \ldots, t_r)$ on the field $\bbF_{q^n}$ and $\mathrm{Orb}(\mathcal{F})$ the associated Galois cyclic flag code. Let $\alpha$ and $\alpha_i$ be primitive elements of the fields $\bbF_{q^n}$ and $\bbF_{q^{t_i}}$, respectively, for $1\leq i\leq r$. Then it holds: \begin{enumerate} \item The $i$-projected code of $\mathrm{Orb}(\mathcal{F})$ is a $t_i$-spread of $\bbF_{q^n}$, for every $1\leq i\leq r$. \item The $\alpha_j$-cyclic orbit code $\mathrm{Orb}_{\alpha_j}(\bbF_{q^{t_i}}\alpha^l)$ is a $t_i$-spread of the subspace $\bbF_{q^{t_j}}\alpha^l$, for every $i<j\leq r$ and $0\leq l \leq c_j-1$, where $c_j=(q^n-1)/(q^{t_j}-1)$. \end{enumerate} \end{theorem} \begin{proof} Observe that, by the definition of Galois cyclic flag code, the $i$-projected code $\mathrm{Orb}(\mathcal{F}_i)= \mathrm{Orb}(\bbF_{q^{t_i}})$ is the $t_i$-spread of the field $\bbF_{q^n}$ described in (\ref{def: spread Anna-Lena}). The same argument allows us to state that, for every $i < j \leq r,$ the $\alpha_j$-cyclic orbit code $\mathrm{Orb}_{\alpha_j}(\bbF_{q^{t_i}})$ is a $t_i$-spread of $\bbF_{q^{t_j}}$ as well. Moreover, since the subspace distance is invariant under the multiplicative action of $\bbF_{q^n}^\ast = \langle\alpha\rangle$ on subspaces, we have that $\mathrm{Orb}_{\alpha_j}(\bbF_{q^{t_i}}\alpha^l)$ is also a $t_i$-spread of the vector space $\bbF_{q^{t_j}}\alpha^l$, for every $0\leq l \leq q^n-2$. Now, taking into account that $\alpha^{c_j}$ is a primitive element of $\bbF_{q^{t_j}}$, we have that $\langle\alpha_j\rangle = \langle\alpha^{c_j}\rangle = \bbF_{q^{t_j}}^\ast$ and $\bbF_{q^{t_j}}=\bbF_{q^{t_j}}\alpha^{c_j}$. This fact allows us to restrict ourselves to exponents $0\leq l\leq c_j-1$. \end{proof} \begin{remark} Note that Theorem \ref{theo: Galois flag codes structure} describes a striking gear of nested cyclic spreads.
First, every projected code of a Galois cyclic flag code is a spread. Then, every codeword in the $j$-projected code $\mathrm{Orb}(\bbF_{q^{t_j}}),$ i.e., every subspace of the form $\bbF_{q^{t_j}}\alpha^l$, is partitioned into the subspaces of the $\alpha_j$-cyclic orbit code $\mathrm{Orb}_{\alpha_j}(\bbF_{q^{t_i}}\alpha^l)$ if $i<j\leq r$. Thereby, we have that $\mathrm{Orb}_{\alpha_j}(\bbF_{q^{t_i}}\alpha^l)$ is a $t_i$-spread of $\bbF_{q^{t_j}}\alpha^l$ for every value $0 \leq l\leq c_j-1$ and also a partial spread of dimension $t_i$ of the field $\bbF_{q^n}$. Finally, the union of all these orbits $$ \dot\bigcup_{l=0}^{c_j-1} \mathrm{Orb}_{\alpha_j}(\bbF_{q^{t_i}}\alpha^l) $$ gives us back the $t_i$-spread $\mathrm{Orb}(\bbF_{q^{t_i}})=\mathrm{Orb}(\mathcal{F}_i)$. In other words, Galois cyclic flag codes provide collections of nested spreads that respect the orbital structure induced by the action of $\langle\alpha\rangle$ on flags. \end{remark} \begin{figure}[H] \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm] \clip(-2,1) rectangle (9,8); \draw [line width=0.8pt] (2,6.75)-- (5,6.25); \draw [line width=0.8pt] (5,6.25)-- (2,6.5); \draw [line width=0.8pt] (2,6.25)-- (5,6.25); \draw [line width=0.8pt] (5,6.25)-- (2.00132,5.75); \draw [line width=0.8pt] (5,6.25)-- (7.973362657377393,5.619019447339418); \draw [line width=0.8pt] (7.973362657377393,5.619019447339418)-- (5,5); \draw [line width=0.8pt] (2,5.5)-- (5,5); \draw [line width=0.8pt] (5,5)-- (2,5.25); \draw [line width=0.8pt] (5,5)-- (2,5); \draw [line width=0.8pt] (2,2.5+1.75)-- (5,2+1.75); \draw [line width=0.8pt] (2,2+1.75)-- (5,2+1.75); \draw [line width=0.8pt] (2,1.5+1.75)-- (5,2+1.75); \draw [line width=0.8pt] (5,0.75+1.75)-- (2,1.25+1.75); \draw [line width=0.8pt] (5,0.75+1.75)-- (2,1+1.75); \draw [line width=0.8pt] (5,0.75+1.75)-- (2,0.75+1.75); \draw [line width=0.8pt] (5,0.75+1.75)-- (2,0.25+1.75); \draw [line width=0.8pt] (5,2+1.75)-- (8,1.4005718048604943+1.75); \draw 
[line width=0.8pt] (8,1.4005718048604943+1.75)-- (5,0.75+1.75); \draw (4.85,6.1) node[anchor=north west] {$\vdots$}; \draw (4.85,3.65) node[anchor=north west] {$\vdots$}; \draw (7.709433808911543,4.911482843761254) node[anchor=north west] {$\vdots$}; \draw (1.825,5.25) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.825,6.5) node[anchor=north west] {{\tiny $\vdots$}}; \draw [line width=0.8pt] (2,4.5)-- (5,5); \draw [line width=0.8pt] (2,2.25+1.75)-- (5,2+1.75); \draw (1.825,2.25+1.75) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.825,2.75) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.1927775368697477,6.5) node[anchor=north west] {$\alpha_2$}; \draw (0.5760592776859816,5.95) node[anchor=north west] {$\alpha_3$}; \draw (-0.6573772406815505,4.75) node[anchor=north west] {$\alpha$}; \draw [shift={(1.7,6.25)},line width=0.4pt] plot[domain=1.5707963267948966:4.71238898038469,variable=\t]({1*0.5*cos(\t r)+0*0.5*sin(\t r)},{0*0.5*cos(\t r)+1*0.5*sin(\t r)}); \draw (1.8506103466657648,7.625043184169825) node[anchor=north west] {$t_1$}; \draw (4.831415266053967,7.090554026210561) node[anchor=north west] {$t_2$}; \draw (7.791662910136044,6.494393042332921) node[anchor=north west] {$t_3$}; \draw [shift={(1.7,5.625)},line width=0.4pt] plot[domain=1.5707963267948966:4.71238898038469,variable=\t]({1*1.125*cos(\t r)+0*1.125*sin(\t r)},{0*1.125*cos(\t r)+1*1.125*sin(\t r)}); \draw [shift={(1.7,4.375)},line width=0.4pt] plot[domain=1.5707963267948966:4.71238898038469,variable=\t]({1*2.375*cos(\t r)+0*2.375*sin(\t r)},{0*2.375*cos(\t r)+1*2.375*sin(\t r)}); \draw [line width=0.4pt] (1.6,5.79)-- (1.7,5.75)-- (1.6,5.73); \draw [line width=0.4pt] (1.6,4.54)-- (1.7,4.5)-- (1.6,4.48); \draw [line width=0.4pt] (1.6,2.04)-- (1.7,2)-- (1.6,1.98); \begin{scriptsize} \draw [fill=white] (5,2+1.75) circle (1.5pt); \draw [fill=red] (2,6.75) circle (1.5pt); \draw [fill=white] (2,6.25) circle (1.5pt); \draw [fill=white] (2,5.75) circle (1.5pt); \draw [fill=red] (5,6.25) circle 
(1.5pt); \draw [fill=white] (2,6.5) circle (1.5pt); \draw [fill=red] (7.973362657377393,5.619019447339418) circle (1.5pt); \draw [fill=white] (5,5) circle (1.5pt); \draw [fill=white] (2,5) circle (1.5pt); \draw [fill=white] (2,5.5) circle (1.5pt); \draw [fill=white] (2,5.25) circle (1.5pt); \draw [fill=white] (2,2.5+1.75) circle (1.5pt); \draw [fill=white] (2,2+1.75) circle (1.5pt); \draw [fill=white] (2,1.5+1.75) circle (1.5pt); \draw [fill=white] (2,1.25+1.75) circle (1.5pt); \draw [fill=white] (2,1+1.75) circle (1.5pt); \draw [fill=white] (2,0.75+1.75) circle (1.5pt); \draw [fill=white] (2,0.25+1.75) circle (1.5pt); \draw [fill=white] (5,0.75+1.75) circle (1.5pt); \draw [fill=white] (8,1.4005718048604943+1.75) circle (1.5pt); \draw [fill=white] (2,4.5) circle (1.5pt); \draw [fill=white] (2,2.25+1.75) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \caption{Nested spread structure of a Galois cyclic flag code}\label{fig1} \end{figure} \noindent The previous figure represents the structure of the Galois cyclic flag code of a given type $(t_1, t_2, t_3)$. Vertices are subspaces, (directed) edges denote inclusions (from left to right) and flags are given by directed paths in the graph. Each column in the graph is a projected code and, by Theorem \ref{theo: Galois flag codes structure}, all of them are spreads of $\bbF_{q^n}$ of the corresponding dimensions. In addition, every subspace in the graph is partitioned into the set of its left adjacent vertices. On the other hand, the Galois flag $\mathcal{F}=(\bbF_{q^{t_1}}, \bbF_{q^{t_2}}, \bbF_{q^{t_3}})$ is represented by the sequence of red vertices. Since $\mathrm{Stab}(\mathcal{F})=\bbF_{q^{t_1}}^\ast= \langle \alpha_1\rangle$, the code $\mathrm{Orb}_{\alpha_1}(\mathcal{F})$ consists of the single element $\mathcal{F}.$ In contrast, for $i=2, 3$, the code $\mathrm{Orb}_{\alpha_i}(\mathcal{F})$ is given by the set of flags in the graph marked by the round arrow labeled with $\alpha_i$.
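The partition counts behind this nested spread structure can be sanity-checked numerically: every codeword of the $j$-projected code splits into $(q^{t_j}-1)/(q^{t_i}-1)$ subspaces of dimension $t_i$, and the $c_j$ disjoint pieces reassemble the whole $t_i$-spread. A small Python sketch (illustrative only, with hypothetical parameters) verifies this bookkeeping:

```python
def spread_size(q, big, small):
    """Number of subspaces in a spread of F_q-dimension `small` of an
    ambient F_q-space of dimension `big` (requires small | big)."""
    assert big % small == 0
    return (q**big - 1) // (q**small - 1)

q, n, ti, tj = 2, 12, 2, 4      # a Galois flag of type (2, 4) on F_{2^12}
c_i = spread_size(q, n, ti)     # size of the full t_i-spread of F_{q^n}
c_j = spread_size(q, n, tj)     # size of the full t_j-spread of F_{q^n}
inner = spread_size(q, tj, ti)  # t_i-spread of each copy of F_{q^{t_j}}
# the c_j disjoint inner spreads recover the whole t_i-spread:
assert c_j * inner == c_i       # 273 * 5 == 1365
```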
\vspace{0.25cm} Take now an element $\beta\in \bbF_{q^n}^\ast$. Let $\mathcal{F}$ be the Galois flag of type $(t_1, \dots, t_r)$ on $\bbF_{q^n}$ and consider the Galois $\beta$-cyclic flag code $\mathrm{Orb}_{\beta}(\mathcal{F})$. Since $\bbF_{q^{t_1}}$ is the best friend of $\mathcal{F}$, it follows that $\mathrm{Stab}_{\beta}(\mathcal{F})=\langle\beta\rangle\cap\bbF_{q^{t_1}}^\ast$. Moreover, for every value of $1\leq i\leq r$, it holds $\mathrm{Stab}_{\beta}(\mathcal{F}_i)=\langle\beta\rangle\cap\bbF_{q^{t_i}}^\ast.$ As a result, we have the following sequence of nested subgroups of $\langle\beta\rangle$ \begin{equation}\label{eq: Galois beta cyclic stab} \mathrm{Stab}_{\beta}(\mathcal{F}) = \mathrm{Stab}_{\beta}(\mathcal{F}_1) \subseteq \mathrm{Stab}_{\beta}(\mathcal{F}_2) \subseteq \dots \subseteq \mathrm{Stab}_{\beta}(\mathcal{F}_r) \subseteq \langle\beta\rangle. \end{equation} By means of Proposition \ref{prop: cardinality and best friend}, the cardinality of $\mathrm{Orb}_{\beta}(\mathcal{F})$ and the one of its $i$-projected code, for every $1\leq i\leq r$, are respectively $$ |\mathrm{Orb}_{\beta}(\mathcal{F})|= \frac{|\beta|}{|\langle\beta\rangle\cap\bbF_{q^{t_1}}^\ast|} \ \text{and} \ |\mathrm{Orb}_{\beta}(\mathcal{F}_i)|= \frac{|\beta|}{|\langle\beta\rangle\cap\bbF_{q^{t_i}}^\ast|} . $$ Furthermore, from Theorem \ref{theo: Galois flag codes structure}, and taking into account that $\mathrm{Orb}_{\beta}(\mathcal{F})\subseteq\mathrm{Orb}(\mathcal{F})$, we can derive the following result for the projected codes of a Galois $\beta$-cyclic flag code. \begin{corollary}\label{theo: Galois beta cyclic flag codes structure} Let $\mathcal{F}=(\bbF_{q^{t_1}}, \dots, \bbF_{q^{t_r}})$ be the Galois flag of type $(t_1, \ldots, t_r)$ on the field $\bbF_{q^n}$ and take a nonzero element $\beta\in \bbF_{q^n}$. 
For each $1\leq i\leq r,$ we write $\beta_i$ to denote a generator of the cyclic subgroup $\langle\beta_i\rangle=\mathrm{Stab}_{\beta}(\bbF_{q^{t_i}})=\langle\beta\rangle\cap\bbF_{q^{t_i}}^\ast.$ Then the following statements hold: \begin{enumerate} \item The projected code $\mathrm{Orb}_{\beta}(\mathcal{F}_i)$ is a partial spread of dimension $t_i$ of $\bbF_{q^n}$. \item The $\beta_j$-cyclic orbit code $\mathrm{Orb}_{\beta_j}(\bbF_{q^{t_i}}\beta^l)$ is a partial spread of dimension $t_i$ of the subspace $\bbF_{q^{t_j}}\beta^l$, for every $i<j\leq r$ and $0\leq l \leq |\beta|/|\beta_j|-1$. \end{enumerate} \end{corollary} Concerning the distance of Galois $\beta$-cyclic flag codes, since they are subcodes of the Galois cyclic flag code of the same type, their distance might be better than $2t_1,$ apart from the case of the trivial subcode consisting just of the Galois flag, which has distance equal to zero. Actually, it is possible to determine the exact distance of a Galois $\beta$-cyclic flag code by checking the relationship between the subgroup $\langle \beta \rangle$ and the subfields $\bbF_{q^{t_i}}$. Conversely, if we choose a permitted distance, we can find a suitable subgroup (possibly not unique) to build a Galois $\beta$-cyclic flag code attaining such a distance. We state the precise conditions in the following result: \begin{theorem}\label{theo: distance Galois beta cyclic} Let $\mathcal{F}$ be the Galois flag of type $(t_1, \dots, t_r)$ and consider an element $\beta\in \bbF_{q^n}^\ast$. Then $d_f(\mathrm{Orb}_{\beta}(\mathcal{F})) \in \{ 0, 2t_1, 2(t_1+t_2),\dots, 2(t_1+t_2+\dots+t_r)\}$. Moreover, \begin{enumerate} \item $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 0$ if, and only if, $\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\mathrm{Stab}_{\beta}(\mathcal{F}_r)=\langle\beta\rangle$.
\label{theo: distance Galois beta cyclic item 1} \item $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 2\sum_{i=1}^r t_i$ if, and only if, $\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\mathrm{Stab}_{\beta}(\mathcal{F}_r)\neq\langle\beta\rangle$. \label{theo: distance Galois beta cyclic item 2} \item $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 2\sum_{i=1}^{j-1} t_i$ if, and only if, $\mathrm{Stab}_{\beta}(\mathcal{F}_1)\neq\mathrm{Stab}_{\beta}(\mathcal{F}_r)$ and $j\in\{2, \dots, r\}$ is the minimum index such that $\mathrm{Stab}_{\beta}(\mathcal{F}_1) \subsetneq \mathrm{Stab}_{\beta}(\mathcal{F}_j).$ \label{theo: distance Galois beta cyclic item 3} \end{enumerate} \end{theorem} \begin{proof} Recall that for every choice of $\beta$, the projected codes of $\mathrm{Orb}_{\beta}(\mathcal{F})$ are partial spreads. As a result, for every $0\leq l\leq |\beta|-1$, we have that $d_S(\mathcal{F}_j, \mathcal{F}_j\beta^l)\in\{0, 2t_j\}$. Moreover, $d_S(\mathcal{F}_j, \mathcal{F}_j\beta^l)=0$ holds if, and only if, $\beta^l\in\mathrm{Stab}_{\beta}(\mathcal{F}_j)$. In this case, since $\mathrm{Stab}_{\beta}(\mathcal{F}_j)\subseteq \dots \subseteq \mathrm{Stab}_{\beta}(\mathcal{F}_r)$ by (\ref{eq: Galois beta cyclic stab}), we have $d_S(\mathcal{F}_i, \mathcal{F}_i\beta^l)=0,$ for every $j\leq i\leq r$. Hence, distances between flags in $\mathrm{Orb}_{\beta}(\mathcal{F})$ belong to the set $\{0, 2t_1, 2(t_1+t_2),\dots, 2(t_1+t_2+\dots+t_r)\}.$ Let us see that all of them can be reached, by showing (\ref{theo: distance Galois beta cyclic item 1}), (\ref{theo: distance Galois beta cyclic item 2}) and (\ref{theo: distance Galois beta cyclic item 3}). \begin{enumerate} \item As proved in Proposition \ref{prop: distance bounds BF}, we have $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))=0$ if, and only if, $\beta\in\bbF_{q^{t_1}}^\ast=\mathrm{Stab}(\mathcal{F})$ or, by using (\ref{eq: estabilizador flag}), $\beta\in\mathrm{Stab}(\mathcal{F}_i)$ for all $1\leq i\leq r$. 
Since $\mathrm{Stab}_{\beta}(\mathcal{F}_i)=\langle\beta\rangle\cap\mathrm{Stab}(\mathcal{F}_i)$ is always a subgroup of $\langle\beta\rangle$, the previous condition is equivalent to $\mathrm{Stab}_{\beta}(\mathcal{F}_i)=\langle\beta\rangle$, for every $1\leq i\leq r$. Hence, by (\ref{eq: Galois beta cyclic stab}), we just need to check the equality $\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\mathrm{Stab}_{\beta}(\mathcal{F}_r)=\langle\beta\rangle.$ \end{enumerate} In the remaining cases, $\mathrm{Stab}_{\beta}(\mathcal{F})$ must be a proper subgroup of $\langle\beta\rangle$. \begin{enumerate} \item[(2)] Assume now that $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 2\sum_{i=1}^r t_i$. Hence, for every $\beta^l\in\langle\beta\rangle\setminus\mathrm{Stab}_{\beta}(\mathcal{F})$, it must hold $d_S(\mathcal{F}_i,\mathcal{F}_i\beta^l)= 2t_i$, for all $1\leq i\leq r$. This happens if, and only if, $\beta^l\notin\mathrm{Stab}_{\beta}(\mathcal{F}_i)$ for every $1\leq i\leq r$. As a consequence, $\mathrm{Stab}_{\beta}(\mathcal{F}_i)\subseteq\mathrm{Stab}_{\beta}(\mathcal{F})$. Since the reverse inclusion always holds by (\ref{eq: estabilizador flag}), we conclude that $\mathrm{Stab}_{\beta}(\mathcal{F})=\mathrm{Stab}_{\beta}(\mathcal{F}_i)$ for every $1\leq i\leq r$. Again, since these stabilizer subgroups are nested, this condition is equivalent to $\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\mathrm{Stab}_{\beta}(\mathcal{F}_r).$ \item[(3)] Consider the case $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 2\sum_{i=1}^{j-1} t_i$ for some $2\leq j\leq r$. In other words, there exists some $\beta^l\in\langle\beta\rangle\setminus\mathrm{Stab}_{\beta}(\mathcal{F})$ such that $$ d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= d_f(\mathcal{F}, \mathcal{F}\beta^l)=2\sum_{i=1}^{j-1} t_i. $$ This happens if, and only if, $$ d_S(\mathcal{F}_i, \mathcal{F}_i\beta^l)=\left\lbrace \begin{array}{ccl} 2t_i &\text{if} & 1\leq i\leq j-1,\\ 0 &\text{if} & j\leq i\leq r, \end{array} \right.
$$ or equivalently, if $\beta^l\in\langle\beta\rangle\setminus\mathrm{Stab}_{\beta}(\mathcal{F}_i)$ for $1\leq i\leq j-1,$ and $\beta^l\in\mathrm{Stab}_{\beta}(\mathcal{F}_i)$ for $j\leq i\leq r$. Hence, we conclude $$\mathrm{Stab}_{\beta}(\mathcal{F})=\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\dots=\mathrm{Stab}_{\beta}(\mathcal{F}_{j-1})\subsetneq\mathrm{Stab}_{\beta}(\mathcal{F}_j).$$ \end{enumerate} \end{proof} Graphically, Galois $\beta$-cyclic flag codes can be represented as subgraphs of the graph in Figure \ref{fig1}. In the next picture, flags in a Galois $\beta$-cyclic flag code are marked with black lines. In contrast, directed paths containing dotted edges represent flags in $\mathrm{Orb}(\mathcal{F})\setminus\mathrm{Orb}_{\beta}(\mathcal{F})$. The index $j$ in Theorem \ref{theo: distance Galois beta cyclic} states that no flags in the code share subspaces of dimensions $t_i$, for $1\leq i\leq j-1,$ whereas there exist different flags having the same $j$-th subspace. On the left, we show an example of a Galois $\beta$-cyclic flag code with distance $2t_1$ ($j=2$). On the right, the corresponding index and distance are $j=3$ and $2(t_1+t_2)$, respectively.
\begin{figure}[H]\label{fig2} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.75cm,y=1cm] \clip(-2,1.75) rectangle (15,7); \draw [line width=0.8pt] (2-2,6.75)-- (5-2,6.25); \draw [line width=0.8pt] (5-2,6.25)-- (2-2,6.5); \draw [line width=0.8pt, dotted] (2-2,6.25)-- (5-2,6.25); \draw [line width=0.8pt, dotted] (5-2,6.25)-- (2-2,5.75); \draw [line width=0.8pt] (5-2,6.25)-- (8-2,5.619019447339418); \draw [line width=0.8pt] (8-2,5.619019447339418)-- (5-2,5); \draw [line width=0.8pt] (2-2,5.5)-- (5-2,5); \draw [line width=0.8pt] (5-2,5)-- (2-2,5.25); \draw [line width=0.8pt, dotted] (5-2,5)-- (2-2,5); \draw [line width=0.8pt] (2-2,2.5+1.75)-- (5-2,2+1.75); \draw [line width=0.8pt, dotted] (2-2,2+1.75)-- (5-2,2+1.75); \draw [line width=0.8pt, dotted] (2-2,1.5+1.75)-- (5-2,2+1.75); \draw [line width=0.8pt] (5-2,0.75+1.75)-- (2-2,1.25+1.75); \draw [line width=0.8pt] (5-2,0.75+1.75)-- (2-2,1+1.75); \draw [line width=0.8pt, dotted] (5-2,0.75+1.75)-- (2-2,0.75+1.75); \draw [line width=0.8pt, dotted] (5-2,0.75+1.75)-- (2-2,0.25+1.75); \draw [line width=0.8pt] (5-2,2+1.75)-- (8-2,1.4005718048604943+1.75); \draw [line width=0.8pt] (8-2,1.4005718048604943+1.75)-- (5-2,0.75+1.75); \draw (4.85-2,6.1) node[anchor=north west] {$\vdots$}; \draw (4.85-2,3.65) node[anchor=north west] {$\vdots$}; \draw (7.709433808911543-2,4.911482843761254) node[anchor=north west] {$\vdots$}; \draw (1.825-2,5.25) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.825-2,6.5) node[anchor=north west] {{\tiny $\vdots$}}; \draw [line width=0.8pt, dotted] (2-2,4.5)-- (5-2,5); \draw [line width=0.8pt] (2-2,2.25+1.75)-- (5-2,2+1.75); \draw (1.825-2,2.25+1.75) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.825-2,2.75) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.8506103466657648-2,7.625043184169825) node[anchor=north west] {$t_1$}; \draw (4.831415266053967-2,7.090554026210561) node[anchor=north west] {$t_2$}; \draw (7.791662910136044-2,6.494393042332921) node[anchor=north 
west] {$t_3$}; \begin{scriptsize} \draw [fill=white] (5-2,2+1.75) circle (1.5pt); \draw [fill=red] (2-2,6.75) circle (1.5pt); \draw [fill=white] (2-2,6.25) circle (1.5pt); \draw [fill=white] (2-2,5.75) circle (1.5pt); \draw [fill=red] (5-2,6.25) circle (1.5pt); \draw [fill=white] (2-2,6.5) circle (1.5pt); \draw [fill=red] (8-2,5.619019447339418) circle (1.5pt); \draw [fill=white] (5-2,5) circle (1.5pt); \draw [fill=white] (2-2,5) circle (1.5pt); \draw [fill=white] (2-2,5.5) circle (1.5pt); \draw [fill=white] (2-2,5.25) circle (1.5pt); \draw [fill=white] (2-2,2.5+1.75) circle (1.5pt); \draw [fill=white] (2-2,2+1.75) circle (1.5pt); \draw [fill=white] (2-2,1.5+1.75) circle (1.5pt); \draw [fill=white] (2-2,1.25+1.75) circle (1.5pt); \draw [fill=white] (2-2,1+1.75) circle (1.5pt); \draw [fill=white] (2-2,0.75+1.75) circle (1.5pt); \draw [fill=white] (2-2,0.25+1.75) circle (1.5pt); \draw [fill=white] (5-2,0.75+1.75) circle (1.5pt); \draw [fill=white] (8-2,1.4005718048604943+1.75) circle (1.5pt); \draw [fill=white] (2-2,4.5) circle (1.5pt); \draw [fill=white] (2-2,2.25+1.75) circle (1.5pt); \end{scriptsize} \draw [line width=0.8pt] (2+6,6.75)-- (5+6,6.25); \draw [line width=0.8pt, dotted] (5+6,6.25)-- (2+6,6.5); \draw [line width=0.8pt, dotted] (2+6,6.25)-- (5+6,6.25); \draw [line width=0.8pt, dotted] (5+6,6.25)-- (2+6,5.75); \draw [line width=0.8pt] (5+6,6.25)-- (8+6,5.619019447339418); \draw [line width=0.8pt] (8+6,5.619019447339418)-- (5+6,5); \draw [line width=0.8pt] (2+6,5.5)-- (5+6,5); \draw [line width=0.8pt, dotted] (5+6,5)-- (2+6,5.25); \draw [line width=0.8pt, dotted] (5+6,5)-- (2+6,5); \draw [line width=0.8pt] (2+6,2.5+1.75)-- (5+6,2+1.75); \draw [line width=0.8pt, dotted] (2+6,2+1.75)-- (5+6,2+1.75); \draw [line width=0.8pt, dotted] (2+6,1.5+1.75)-- (5+6,2+1.75); \draw [line width=0.8pt] (5+6,0.75+1.75)-- (2+6,1.25+1.75); \draw [line width=0.8pt, dotted] (5+6,0.75+1.75)-- (2+6,1+1.75); \draw [line width=0.8pt, dotted] (5+6,0.75+1.75)-- (2+6,0.75+1.75); \draw 
[line width=0.8pt, dotted] (5+6,0.75+1.75)-- (2+6,0.25+1.75); \draw [line width=0.8pt] (5+6,2+1.75)-- (8+6,1.4005718048604943+1.75); \draw [line width=0.8pt] (8+6,1.4005718048604943+1.75)-- (5+6,0.75+1.75); \draw (4.85+6,6.1) node[anchor=north west] {$\vdots$}; \draw (4.85+6,3.65) node[anchor=north west] {$\vdots$}; \draw (7.709433808911543+6,4.911482843761254) node[anchor=north west] {$\vdots$}; \draw (1.825+6,5.25) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.825+6,6.5) node[anchor=north west] {{\tiny $\vdots$}}; \draw [line width=0.8pt, dotted] (2+6,4.5)-- (5+6,5); \draw [line width=0.8pt, dotted] (2+6,2.25+1.75)-- (5+6,2+1.75); \draw (1.825+6,2.25+1.75) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.825+6,2.75) node[anchor=north west] {{\tiny $\vdots$}}; \draw (1.8506103466657648+6,7.625043184169825) node[anchor=north west] {$t_1$}; \draw (4.831415266053967+6,7.090554026210561) node[anchor=north west] {$t_2$}; \draw (7.791662910136044+6,6.494393042332921) node[anchor=north west] {$t_3$}; \begin{scriptsize} \draw [fill=white] (5+6,2+1.75) circle (1.5pt); \draw [fill=red] (2+6,6.75) circle (1.5pt); \draw [fill=white] (2+6,6.25) circle (1.5pt); \draw [fill=white] (2+6,5.75) circle (1.5pt); \draw [fill=red] (5+6,6.25) circle (1.5pt); \draw [fill=white] (2+6,6.5) circle (1.5pt); \draw [fill=red] (8+6,5.619019447339418) circle (1.5pt); \draw [fill=white] (5+6,5) circle (1.5pt); \draw [fill=white] (2+6,5) circle (1.5pt); \draw [fill=white] (2+6,5.5) circle (1.5pt); \draw [fill=white] (2+6,5.25) circle (1.5pt); \draw [fill=white] (2+6,2.5+1.75) circle (1.5pt); \draw [fill=white] (2+6,2+1.75) circle (1.5pt); \draw [fill=white] (2+6,1.5+1.75) circle (1.5pt); \draw [fill=white] (2+6,1.25+1.75) circle (1.5pt); \draw [fill=white] (2+6,1+1.75) circle (1.5pt); \draw [fill=white] (2+6,0.75+1.75) circle (1.5pt); \draw [fill=white] (2+6,0.25+1.75) circle (1.5pt); \draw [fill=white] (5+6,0.75+1.75) circle (1.5pt); \draw [fill=white] (8+6,1.4005718048604943+1.75) 
circle (1.5pt); \draw [fill=white] (2+6,4.5) circle (1.5pt); \draw [fill=white] (2+6,2.25+1.75) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \caption{Two different Galois $\beta$-cyclic flag codes of type $(t_1, t_2, t_3)$.} \end{figure} Observe that Theorem \ref{theo: distance Galois beta cyclic} allows us to provide specific constructions of Galois $\beta$-cyclic flag codes with a prescribed distance just by choosing a suitable element $\beta\in\bbF_{q^n}^\ast$. Moreover, since $\bbF_{q^n}^\ast=\langle\alpha\rangle$, where $\alpha$ is a primitive element of $\bbF_{q^n}$, we can restate the above conditions on the stabilizers (w.r.t. $\beta$) in terms of suitable powers of $\alpha$ as follows. Given $\beta\in\bbF_{q^n}^\ast$, we can write $|\beta|=(q^n-1)/l$ for some divisor $l$ of $q^n-1$. Hence, by the uniqueness of subgroups of a given order of the cyclic group $\bbF_{q^n}^\ast$, it is clear that $\langle\beta\rangle=\langle\alpha^l\rangle$. In particular, if $c_i=(q^n-1)/(q^{t_i}-1)$, we have that $\bbF_{q^{t_i}}^\ast=\langle\alpha^{c_i}\rangle$, for every $1\leq i\leq r$. As a consequence, it holds $\mathrm{Stab}_{\beta}(\bbF_{q^{t_i}})=\langle\beta\rangle\cap\bbF_{q^{t_i}}^\ast= \langle\alpha^l\rangle\cap \langle\alpha^{c_i}\rangle = \langle\alpha^{l_i}\rangle,$ where $l_i=\mathrm{lcm}(l, c_i)$. Moreover, since each $c_{i+1}$ divides $c_i$, the exponent $l_{i+1}$ divides $l_i$ for every $1\leq i\leq r-1$, and the sequence of nested stabilizers given in (\ref{eq: Galois beta cyclic stab}) becomes $$ \langle\alpha^{l_1}\rangle\subseteq \langle\alpha^{l_2} \rangle \subseteq \dots \subseteq \langle\alpha^{l_r}\rangle \subseteq \langle\alpha^l\rangle. $$ Now, since $l, c_1, \dots, c_r$ divide $q^n-1$, every exponent $l_i$ divides $q^n-1$ as well. Hence, the order of each stabilizer is $|\mathrm{Stab}_{\beta}(\bbF_{q^{t_i}})|=|\alpha^{l_i}|= \frac{q^n-1}{l_i}$, for every $1\leq i\leq r$.
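Since every quantity above reduces to elementary arithmetic with divisors of $q^n-1$, the computation can be checked mechanically. The following Python sketch (our own verification aid, not part of the paper; all names are ours) evaluates the stabilizer orders, the cardinality, and the distance given by Theorem \ref{theo: distance Galois beta cyclic} for the Galois flag of type $(2,4,8)$ on $\bbF_{2^{16}}$:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

q, n = 2, 16
dims = (2, 4, 8)                        # type vector (t_1, t_2, t_3)
N = q**n - 1                            # order of the cyclic group F_{q^n}^*
c = [N // (q**t - 1) for t in dims]     # F_{q^{t_i}}^* = <alpha^{c_i}>

def parameters(l):
    """For <beta> = <alpha^l>: stabilizer orders, |Orb_beta(F)| and d_f."""
    li = [lcm(l, ci) for ci in c]       # Stab_beta(F_{q^{t_i}}) = <alpha^{l_i}>
    stab_orders = [N // x for x in li]
    # the flag is stabilized by the smallest stabilizer <alpha^{l_1}>,
    # so |Orb_beta(F)| = lcm(l, c_1) / l
    card = li[0] // l
    # distance via the case distinction of the theorem
    if li[0] == li[-1]:
        dist = 0 if li[0] == l else 2 * sum(dims)
    else:
        j = next(i for i in range(len(li)) if li[i] != li[0])
        dist = 2 * sum(dims[:j])
    return stab_orders, card, dist

print(parameters(5))   # beta = alpha^5:  ([3, 3, 51], 4369, 12)
print(parameters(1))   # beta = alpha:    ([3, 15, 255], 21845, 4)
```

For instance, `parameters(15)` returns `([1, 1, 17], 4369, 12)`, matching the corresponding row of the table in the example below.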
We can reformulate Theorem \ref{theo: distance Galois beta cyclic} as follows: \begin{theorem} Let $\mathcal{F}$ be the Galois flag of type $(t_1, \dots, t_r)$ and consider $\beta\in \bbF_{q^n}^\ast$ such that $\langle\beta\rangle=\langle\alpha^l\rangle$ for some divisor $l$ of $q^n-1$. It holds: \begin{enumerate} \item $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 0$ if, and only if, $l_1=l_r=l$. \item $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 2\sum_{i=1}^r t_i$ if, and only if, $l_1=l_r\neq l$. \item $d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))= 2\sum_{i=1}^{j-1} t_i$ if, and only if, $l_1\neq l_r$ and $2\leq j\leq r$ is the minimum index such that $l_1\neq l_j$. \end{enumerate} \end{theorem} \begin{example} Let $\mathcal{F}$ be the Galois flag of type $(2,4,8)$ on $\bbF_{2^{16}}$ and let $\alpha$ be a primitive element of $\bbF_{2^{16}}.$ The following table shows the parameters of all possible Galois $\beta$-cyclic flag codes of this type. The sizes of the stabilizer subgroups (w.r.t. $\beta$) of the fields $\bbF_{2^2}$, $\bbF_{2^4}$ and $\bbF_{2^8}$ are given, together with the cardinality and distance (just denoted by $d_\beta$) of $\mathrm{Orb}_{\beta}(\mathcal{F}).$ \begin{table}[H] \centering \begin{small} \begin{tabular}{ccccccc} \hline $\beta$ & $|\beta|$ & $|\mathrm{Stab}_{\beta}(\bbF_{2^{2}})|$ & $|\mathrm{Stab}_{\beta}(\bbF_{2^{4}})|$ & $|\mathrm{Stab}_{\beta}(\bbF_{2^{8}})|$ & $|\mathrm{Orb}_{\beta}(\mathcal{F})|$ & $d_\beta$ \\ \hline $\alpha$ & 65535 & 3 & 15 & 255 & 21845 & 4 \\ $\alpha^3$ & 21845 & 1 & 5 & 85 & 21845 & 4 \\ $\alpha^5$ & 13107 & 3 & 3 & 51 & 4369 & 12 \\ $\alpha^{15}$ & 4369 & 1 & 1 & 17 & 4369 & 12 \\ $\alpha^{17}$ & 3855 & 3 & 15 & 15 & 1285 & 4 \\ $\alpha^{51}$ & 1285 & 1 & 5 & 5 & 1285 & 4 \\ $\alpha^{85}$ & 771 & 3 & 3 & 3 & 257 & 28 \\ $\alpha^{255}$ & 257 & 1 & 1 & 1 & 257 & 28 \\ $\alpha^{257}$ & 255 & 3 & 15 & 255 & 85 & 4 \\ $\alpha^{771}$ & 85 & 1 & 5 & 85 & 85 & 4 \\ $\alpha^{1285}$ & 51 & 3 & 3 & 51 & 17 & 12 \\ $\alpha^{3855}$ &
17 & 1 & 1 & 17 & 17 & 12 \\ $\alpha^{4369}$ & 15 & 3 & 15 & 15 & 5 & 4 \\ $\alpha^{13107}$ & 5 & 1 & 5 & 5 & 5 & 4 \\ $\alpha^{21845}$ & 3 & 3 & 3 & 3 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ \hline \end{tabular} \end{small} \caption{Parameters of all Galois $\beta$-cyclic flag codes of type $(2,4,8)$ on $\bbF_{2^{16}}$.} \end{table} Clearly, different subgroups of $\bbF_{q^n}^\ast$ can provide the same code. For instance, the subgroup $\langle\alpha^3\rangle$ gives the Galois cyclic flag code $\mathrm{Orb}(\mathcal{F})$. We also have $\mathrm{Orb}_{\alpha^{5}}(\mathcal{F})=\mathrm{Orb}_{\alpha^{15}}(\mathcal{F})$ and $\mathrm{Orb}_{\alpha^{85}}(\mathcal{F})=\mathrm{Orb}_{\alpha^{255}}(\mathcal{F})$, among other possibilities. \end{example} \begin{remark} As proved in the previous theorem, the Galois $\beta$-cyclic flag code of type $(t_1, \dots, t_r)$ attains the maximum possible distance for its type if, and only if, it holds \begin{equation}\label{eq: condition Galois ODFC} \mathrm{Stab}_{\beta}(\mathcal{F}_1)= \mathrm{Stab}_{\beta}(\mathcal{F}_r) \subsetneq \langle\beta\rangle . \end{equation} In other words, if condition (\ref{eq: condition Galois ODFC}) is satisfied, we can build an optimum distance flag code with $\bbF_{q^{t_1}}$ as its best friend. This fact drives us to investigate cyclic orbit flag codes with the maximum possible distance and a fixed best friend when the generating flag is not necessarily a Galois flag. \end{remark} \subsection{Optimum distance cyclic orbit flag codes}\label{subsec:optimum distance cyclic codes} This subsection is devoted to the study of flag codes on $\bbF_{q^n}$ that reach the maximum distance and are also $\beta$-cyclic orbit flag codes with a prescribed best friend $\bbF_{q^m}$. To tackle this problem, we first have to take into account that optimum distance flag codes must, in particular, be disjoint, as proved in \cite{CasoPlanar}.
Recall that a flag code $\mathcal{C}$ of type $(t_1, \ldots, t_r)$ is said to be \emph{disjoint} if $|\mathcal{C}|=|\mathcal{C}_1|= \dots= |\mathcal{C}_r|.$ In our specific context, we have that a $\beta$-cyclic flag code $\mathrm{Orb}_{\beta}(\mathcal{F})$ is disjoint if, and only if, $$ \frac{|\beta|}{|\mathrm{Stab}_{\beta}(\mathcal{F})|}= \frac{|\beta|}{|\mathrm{Stab}_{\beta}(\mathcal{F}_1)|}= \cdots= \frac{|\beta|}{|\mathrm{Stab}_{\beta}(\mathcal{F}_r)|} $$ or, equivalently, if all the stabilizers $\mathrm{Stab}_{\beta}(\mathcal{F}), \mathrm{Stab}_{\beta}(\mathcal{F}_1), \ldots, \mathrm{Stab}_{\beta}(\mathcal{F}_r)$ have the same order. In fact, by the uniqueness of subgroups of a cyclic group, all these stabilizers must coincide. Moreover, by using (\ref{eq: estabilizador flag}), we have the next result: \begin{proposition}\label{prop: disjunto iff estabilizadores coinciden} The following statements are equivalent: \begin{enumerate} \item $\mathrm{Orb}_{\beta}(\mathcal{F})$ is a disjoint flag code, \label{condición 1} \item $\mathrm{Stab}_{\beta}(\mathcal{F})=\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\dots= \mathrm{Stab}_{\beta}(\mathcal{F}_r)$,\label{condición 2} \item $\mathrm{Stab}_{\beta}(\mathcal{F}_1)=\dots= \mathrm{Stab}_{\beta}(\mathcal{F}_r)$.\label{condición 3} \end{enumerate} \end{proposition} In light of Propositions \ref{prop: stab+ flag} and \ref{prop: stab+ es el best friend del flag}, the best friend of a flag $\mathcal{F}$ can be computed as $\mathrm{Stab}^+(\mathcal{F})=\mathrm{Stab}(\mathcal{F})\cup\{0\}$. Similarly, the best friends of its subspaces are given by $\mathrm{Stab}^+(\mathcal{F}_i)=\mathrm{Stab}(\mathcal{F}_i)\cup\{0\}$. The next result leads directly to a characterization of disjoint $\beta$-cyclic orbit flag codes in terms of $\beta$ and the best friends of the generating flag and its subspaces.
\begin{proposition}\label{prop: disjunto best friend} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag on $\bbF_{q^n}$ with $\bbF_{q^m}$ as its best friend and take $\beta\in\bbF_{q^n}^\ast$. If $\bbF_{q^{m_i}}$ denotes the best friend of $\mathcal{F}_i$, then the $\beta$-cyclic orbit code $\mathrm{Orb}_{\beta}(\mathcal{F})$ is disjoint if, and only if, $$ \langle\beta\rangle\cap\bbF_{q^m}^\ast= \langle\beta\rangle\cap\bbF_{q^{m_1}}^\ast= \dots = \langle\beta\rangle\cap\bbF_{q^{m_r}}^\ast. $$ In particular, the cyclic orbit flag code $\mathrm{Orb}(\mathcal{F})$ is disjoint if, and only if, all the subspaces in the flag have the field $\bbF_{q^m}$ as their best friend. \end{proposition} \begin{proof} By means of Proposition \ref{prop: disjunto iff estabilizadores coinciden}, the code $\mathrm{Orb}_{\beta}(\mathcal{F})$ is disjoint if, and only if, for every $1\leq i\leq r$, it holds $\mathrm{Stab}_{\beta}(\mathcal{F}_i)=\mathrm{Stab}_{\beta}(\mathcal{F})$. Since $\mathrm{Stab}_{\beta}(\mathcal{F}_i)=\langle\beta\rangle\cap\bbF_{q^{m_i}}^\ast$, for every $1\leq i\leq r$, and $\mathrm{Stab}_{\beta}(\mathcal{F})=\langle\beta\rangle\cap\bbF_{q^m}^\ast$, the result follows. In the particular case where $\beta$ is primitive, it must hold that $\mathrm{Stab}(\mathcal{F}_i)=\bbF_{q^m}^\ast$, i.e., the best friend of each $\mathcal{F}_i$ coincides with that of $\mathcal{F}$. \end{proof} Observe that it is possible to give a tighter lower bound for the distance of disjoint $\beta$-cyclic orbit flag codes with $\bbF_{q^m}$ as best friend. In order to avoid codes with distance equal to zero, throughout the rest of the section we only consider elements $\beta\in\bbF_{q^n}^\ast\setminus\bbF_{q^m}^\ast$. \begin{proposition}\label{prop: distancia disjunto best friend} Let $\mathcal{F}=(\mathcal{F}_1, \ldots, \mathcal{F}_r)$ be a flag on $\bbF_{q^n}$ with the subfield $\bbF_{q^m}$ as its best friend and $\beta\in\bbF_{q^n}^\ast$.
If the code $\mathrm{Orb}_{\beta}(\mathcal{F})$ is disjoint, then $2mr \leq d_f(\mathrm{Orb}_{\beta}(\mathcal{F}))$. \end{proposition} \begin{proof} Let $\mathcal{F}'$ be a flag in $\mathrm{Orb}_{\beta}(\mathcal{F})$ with $\mathcal{F}'\neq \mathcal{F}$. As $\mathrm{Orb}_{\beta}(\mathcal{F})$ is a disjoint flag code, we have that $\mathcal{F}_i\neq \mathcal{F}'_i$ for every $1\leq i\leq r$. Hence, by means of Proposition \ref{prop: stab+ es best friend}, for every $1\leq i \leq r$, we have that $d_S(\mathcal{F}_i, \mathcal{F}'_i) \geq 2m$. We conclude that $d_f(\mathcal{F}, \mathcal{F}')\geq 2mr$, for every $\mathcal{F}'\in \mathrm{Orb}_{\beta}(\mathcal{F})\setminus\{\mathcal{F}\}$, and the result holds. \end{proof} As shown in Proposition \ref{prop: cardinality and best friend}, the cardinality of a $\beta$-cyclic flag code $\mathrm{Orb}_{\beta}(\mathcal{F})$ with $\bbF_{q^m}$ as its best friend is completely determined. Moreover, we know that $\langle\beta\rangle=\langle \alpha^l\rangle$, for the divisor $l$ of $q^n-1$ such that $|\beta|=\frac{q^n-1}{l}$. Similarly, $\bbF_{q^m}^\ast=\langle\alpha^{\frac{q^n-1}{q^m-1}}\rangle$. In addition, it holds $$ \mathrm{Stab}_{\beta}(\mathcal{F}) = \langle\alpha^{\mathrm{lcm}(l, \frac{q^n-1}{q^m-1})}\rangle \ \text{and} \ |\mathrm{Stab}_{\beta}(\mathcal{F})|=\frac{q^n-1}{\mathrm{lcm}(l, \frac{q^n-1}{q^m-1})}. $$ As a result, \begin{equation}\label{eq: cardinality beta-cyclic power of alpha} |\mathrm{Orb}_{\beta}(\mathcal{F})| = \frac{\mathrm{lcm} \left( l, \frac{q^n-1}{q^m-1} \right)}{l} . \end{equation} Using this notation, the next result follows. \begin{theorem}\label{theo: type vector ODFC orbital} Let $\mathcal{F}$ be a flag on $\bbF_{q^n}$ with best friend $\bbF_{q^m}$. Take $\beta\in\bbF_{q^n}^\ast$ and write $\langle\beta\rangle=\langle\alpha^l\rangle$ with $l$ a divisor of $q^n-1$.
If $\mathrm{Orb}_{\beta}(\mathcal{F})$ is an optimum distance flag code and $t$ is a dimension in the type vector of $\mathcal{F}$, then $m$ divides $t$ and it must hold $$ \frac{\mathrm{lcm}(l, \frac{q^n-1}{q^m-1})}{l} \leq \left\lbrace \begin{array}{cll} \left\lfloor\frac{q^n-1}{q^t-1}\right\rfloor & \text{if} & 2t\leq n, \\ \left\lfloor\frac{q^n-1}{q^{n-t}-1}\right\rfloor & \text{if} & 2t > n. \end{array} \right. $$ \end{theorem} \begin{proof} Consider a flag $\mathcal{F}$ on $\bbF_{q^n}$ with the subfield $\bbF_{q^m}$ as its best friend and assume that the code $\mathrm{Orb}_{\beta}(\mathcal{F})$ is an optimum distance flag code. Hence, by application of Lemma \ref{lem: BF divides dimensions}, $m$ must divide every dimension in the type vector. Moreover, by means of Theorem \ref{theo: caracterización ODFC}, all the projected codes attain the maximum possible distance for their dimension and $\mathrm{Orb}_{\beta}(\mathcal{F})$ is disjoint. In other words, the cardinality of every projected code coincides with $|\mathrm{Orb}_{\beta}(\mathcal{F})|$. In particular, this value has to satisfy the bounds for the cardinality of constant dimension codes of maximum distance given in Section \ref{sec:Preliminaries} for dimensions in the type vector. As a result, if $t$ is a dimension in the type vector, it must hold: \begin{enumerate} \item If $2t\leq n$, then $|\mathrm{Orb}_{\beta}(\mathcal{F})| \leq \left\lfloor\frac{q^n-1}{q^t-1}\right\rfloor$ and \item if $2t>n$, then $|\mathrm{Orb}_{\beta}(\mathcal{F})| \leq \left\lfloor\frac{q^n-1}{q^{n-t}-1}\right\rfloor$. \end{enumerate} Moreover, assuming $\langle\beta\rangle=\langle\alpha^l\rangle$ for some divisor $l$ of $q^n-1$, by using (\ref{eq: cardinality beta-cyclic power of alpha}), the result holds. \end{proof} \begin{remark} Observe that a dimension $t$ satisfies the necessary condition provided in Theorem \ref{theo: type vector ODFC orbital} if, and only if, the dimension $n-t$ does it as well. 
This is due to the fact that the upper bounds for the cardinality of constant dimension codes with maximum distance of dimensions $t$ and $n-t$ of $\bbF_{q^n}$ coincide. Moreover, these upper bounds decrease as dimensions get closer to $n/2$. Hence, central dimensions are allowed for a smaller number of elements $\beta\in\bbF_{q^n}^\ast$ than the other ones. In contrast, extreme dimensions, that is, $m$ and $n-m$, are allowed for every subgroup of $\bbF_{q^n}^\ast$. In fact, when the acting group is $\bbF_{q^n}^\ast$, we can derive the following corollary. \end{remark} \begin{corollary}\label{cor: type odfc cyclic} Assume that the cyclic orbit code $\mathrm{Orb}(\mathcal{F})$ is an optimum distance flag code on $\bbF_{q^n}$ with the subfield $\bbF_{q^m}$ as its best friend. Then one of the following statements holds: \begin{enumerate} \item $\mathrm{Orb}(\mathcal{F})$ is a constant dimension code of dimension either $m$ or $n-m$. \item $\mathrm{Orb}(\mathcal{F})$ has type vector $(m, n-m)$. \end{enumerate} In any of these cases, the code $\mathrm{Orb}(\mathcal{F})$ has the largest possible size, that is, $\frac{q^n-1}{q^m-1}$. \end{corollary} \begin{proof} This result follows by application of Theorem \ref{theo: caracterización ODFC} when $\beta$ is a primitive element of $\bbF_{q^n}^\ast$. In this case, the cardinality of every projected code is $\frac{q^n-1}{q^{m}-1}$. Moreover, if $t$ is a dimension in the type vector, it has to be a multiple of $m$. Observe that both $m$ and $n-m$ satisfy the necessary condition given in Theorem \ref{theo: type vector ODFC orbital}. On the other hand, this condition is violated by any other multiple of $m$. Hence, only dimensions $m$ or $n-m$ could appear in the type vector of $\mathcal{F}$. As a result, optimum distance cyclic orbit flag codes with $\bbF_{q^m}$ as their best friend could only be constructed for type vectors equal to $(m)$, $(n-m)$ or $(m, n-m)$.
For these three type vectors, the cardinality of $\mathrm{Orb}(\mathcal{F})$, which is also $\frac{q^n-1}{q^{m}-1},$ coincides with the largest possible size of constant dimension codes with maximum distance for both dimensions $m$ and $n-m$. Hence, it is the best size for optimum distance flag codes with any of these type vectors. \end{proof} Apart from the case where the type vector is $(m, n-m)$, we see that optimum distance cyclic orbit flag codes with $\bbF_{q^m}$ as their best friend are actually cyclic orbit (subspace) codes of dimension either $m$ or $n-m$. If the dimension is $m$, the code $\mathrm{Orb}(\mathcal{F})$ is, in addition, the $m$-spread $\mathrm{Orb}(\bbF_{q^m})$ of $\bbF_{q^n}$. From Theorem \ref{theo: type vector ODFC orbital} and Corollary \ref{cor: type odfc cyclic}, one can deduce that not every type vector is compatible with attaining the maximum possible distance once we have fixed the best friend of the generating flag of a $\beta$-cyclic orbit flag code. The following examples exhibit this fact. \begin{example} Let $\mathcal{F}$ be a flag on $\bbF_{2^{12}}$ with the subfield $\bbF_{2^2}$ as its best friend. This condition implies that the dimensions in the type vector of $\mathcal{F}$ must be even integers. Notice that $|\bbF_{2^{12}}^\ast|=2^{12}-1=4095= 273 \cdot 15$ and $\langle\alpha^{15}\rangle$ is the only subgroup of $\bbF_{2^{12}}^\ast$ of order $273$. On the other hand, we have $\bbF_{2^2}^\ast=\langle\alpha^{1365}\rangle$. Since $\mathrm{lcm}(15, 1365)=1365$, we have $|\mathrm{Orb}_{\beta}(\mathcal{F})|=\frac{1365}{15} = 91,$ for every $\beta$ with $\langle\beta\rangle=\langle\alpha^{15}\rangle$. Now, assume that $\mathrm{Orb}_{\beta}(\mathcal{F})$ is an optimum distance flag code.
If we compare its size with the upper bounds for the cardinality of constant dimension codes of $\bbF_{2^{12}}$ with maximum distance, we conclude that the dimension $6$ cannot appear in the type vector of $\mathrm{Orb}_{\beta}(\mathcal{F})$ since $\frac{2^{12}-1}{2^6-1}=65 < 91.$ In contrast, dimensions $2, 4, 8$ and $10$ satisfy the necessary condition given in Theorem \ref{theo: type vector ODFC orbital}. \end{example} \begin{example} Consider a flag $\mathcal{F}$ on $\bbF_{q^n}$ with the subfield $\bbF_{q^m}$ as its best friend and let $\alpha$ denote a primitive element of $\bbF_{q^n}$. The tables below illustrate which dimensions may appear in the type vector of the optimum distance $\beta$-cyclic orbit flag code generated by $\mathcal{F}$ for different choices of $\beta$ and specific values of $q, n$ and $m$. \begin{table}[H] \centering \begin{small} \begin{tabular}{cccccc} \hline $\beta$ & $|\beta|$ & $\langle\beta\rangle\cap\bbF_{q^m}^\ast$ & $|\mathrm{Orb}_{\beta}(\mathcal{F})|$ & Allowed dimensions & Max.
distance \\ \hline $\alpha$ & 6560 & $\bbF_{3}^\ast$ & 3280 & 1, 7 & 4 \\ $\alpha^2$ & 3280 & $\bbF_{3}^\ast$ & 1640 & 1, 7 & 4 \\ $\alpha^4$ & 1640 & $\bbF_{3}^\ast$ & 820 & 1, 2, 6, 7 &12 \\ $\alpha^5$ & 1312 & $\bbF_{3}^\ast$ & 656 & 1, 2, 6, 7 &12 \\ $\alpha^8$ & 820 & $\bbF_{3}^\ast$ & 410 & 1, 2, 6, 7 &12 \\ $\alpha^{10}$ & 656 & $\bbF_{3}^\ast$ & 328 & 1, 2, 6, 7 &12 \\ $\alpha^{16}$ & 410 & $\bbF_{3}^\ast$ & 205 & 1, 2, 3, 5, 6, 7 &24 \\ $\alpha^{20}$ & 328 & $\bbF_{3}^\ast$ & 164 & 1, 2, 3, 5, 6, 7 &24 \\ $\alpha^{32}$ & 205 & $\{1\}$ & 205 & 1, 2, 3, 5, 6, 7 &24 \\ $\alpha^{40}$ & 164 & $\bbF_{3}^\ast$ & 82 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{41}$ & 160 & $\bbF_{3}^\ast$ & 80 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{80}$ & 82 & $\bbF_{3}^\ast$ & 41 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{82}$ & 80 & $\bbF_{3}^\ast$ & 40 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{160}$ & 41 & $\{1\}$ & 41 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{164}$ & 40 & $\bbF_{3}^\ast$ & 20 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{205}$ & 32 & $\bbF_{3}^\ast$ & 16 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{328}$ & 20 & $\bbF_{3}^\ast$ & 10 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{410}$ & 16 & $\bbF_{3}^\ast$ & 8 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{656}$ & 10 & $\bbF_{3}^\ast$ & 5 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{820}$ & 8 & $\bbF_{3}^\ast$ & 4 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{1312}$ & 5 & $\{1\}$ & 5 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{1640}$ & 4 & $\bbF_{3}^\ast$ & 2 & 1, 2, 3, 4, 5, 6, 7 &32 \\ $\alpha^{3280}$ & 2 & $\bbF_{3}^\ast$ & 1 & 1, 2, 3, 4, 5, 6, 7 & 0 \\ $1$ & 1 & $\{1\}$ & 1 & 1, 2, 3, 4, 5, 6, 7 & 0 \\ \hline \end{tabular} \end{small} \caption{Values for $q=3$, $n=8$, $m=1$ and all subgroups of $\bbF_{3^8}^\ast$.}\label{table: q=3, n=8, m=1} \end{table} As it occurs when considering Galois $\beta$-cyclic flag codes, in these tables we can see that different subgroups of $\bbF_{q^n}^\ast$ (hence, subgroups with different order) can provide the same $\beta$-cyclic orbit flag code. 
Furthermore, there are different subgroups providing in turn different orbits but sharing the set of allowed dimensions and, as a consequence, also sharing the maximum possible value for the distance. For instance, in Table \ref{table: q=2, n=12, m=2}, both subgroups $\langle\alpha^{3}\rangle$ and $\langle\alpha^{9}\rangle$ give us the same orbit. On the other hand, the orbits under the action of $\langle\alpha^{5}\rangle$ and $\langle\alpha^{7}\rangle$ have different cardinality (thus, they are different codes) but their sets of allowed dimensions are equal. \begin{table}[H] \centering \begin{small} \begin{tabular}{cccccc} \hline $\beta$ & $|\beta|$ & $\langle\beta\rangle\cap\bbF_{q^m}^\ast$ & $|\mathrm{Orb}_{\beta}(\mathcal{F})|$ & Allowed dimensions & Max. distance \\ \hline $\alpha$ & 4095 & $\bbF_{2^2}^\ast$ & 1365 & 2, 10 & 8 \\ $\alpha^3$ & 1365 & $\bbF_{2^2}^\ast$ & 455 & 2, 10 & 8 \\ $\alpha^5$ & 819 & $\bbF_{2^2}^\ast$ & 273 & 2, 4, 8, 10 & 24 \\ $\alpha^{7}$ & 585 & $\bbF_{2^2}^\ast$ & 195 & 2, 4, 8, 10 & 24 \\ $\alpha^{9}$ & 455 & $\{1\}$ & 455 & 2, 10 & 8 \\ $\alpha^{13}$ & 315 & $\bbF_{2^2}^\ast$ & 105 & 2, 4, 8, 10 & 24 \\ $\alpha^{15}$ & 273 & $\bbF_{2^2}^\ast$ & 91 & 2, 4, 8, 10 & 24 \\ $\alpha^{21}$ & 195 & $\bbF_{2^2}^\ast$ & 65 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{35}$ & 117 & $\bbF_{2^2}^\ast$ & 39 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{39}$ & 105 & $\bbF_{2^2}^\ast$ & 35 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{45}$ & 91 & $\{1\}$ & 91 & 2, 4, 8, 10 & 24 \\ $\alpha^{63}$ & 65 & $\{1\}$ & 65 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{65}$ & 63 & $\bbF_{2^2}^\ast$ & 21 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{91}$ & 45 & $\bbF_{2^2}^\ast$ & 15 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{105}$ & 39 & $\bbF_{2^2}^\ast$ & 13 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{117}$ & 35 & $\{1\}$ & 35 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{195}$ & 21 & $\bbF_{2^2}^\ast$ & 7 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{273}$ & 15 & $\bbF_{2^2}^\ast$ & 5 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{315}$ & 13 & $\{1\}$ & 13 & 2, 4, 6, 
8, 10 & 36 \\ $\alpha^{455}$ & 9 & $\bbF_{2^2}^\ast$ & 3 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{585}$ & 7 & $\{1\}$ & 7 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{819}$ & 5 & $\{1\}$ & 5 & 2, 4, 6, 8, 10 & 36 \\ $\alpha^{1365}$ & 3 & $\bbF_{2^2}^\ast$ & 1 & 2, 4, 6, 8, 10 & 0 \\ $1$ & 1 & $\{1\}$ & 1 & 2, 4, 6, 8, 10 & 0 \\ \hline \end{tabular} \end{small} \caption{Values for $q=2$, $n=12$, $m=2$ and all subgroups of $\bbF_{2^{12}}^\ast$.}\label{table: q=2, n=12, m=2} \end{table} \end{example} \begin{remark} Observe that Theorem \ref{theo: type vector ODFC orbital} and Corollary \ref{cor: type odfc cyclic} give us necessary conditions on the type vector for the existence of optimum distance $\beta$-cyclic orbit flag codes, but the problem of constructing them remains open. In Subsection \ref{subsec:Galois codes} we have characterized optimum distance Galois $\beta$-cyclic flag codes and built them by providing a suitable subgroup $\langle\beta\rangle$ of $\bbF_{q^n}^\ast$. Recall that in that case, the allowed dimensions correspond to the divisors appearing in the type vector of the generating Galois flag. Looking at Table \ref{table: q=2, n=12, m=2}, for instance, we can obtain optimum distance Galois $\beta$-cyclic flag codes of types $(2,4)$ and $(2,6)$. \end{remark} Apart from optimum distance cyclic flag codes of Galois type, as far as we know, there are only two constructions of optimum distance flag codes given by the action of a cyclic subgroup of $\bbF_{q^n}^\ast$. One of them can be found in \cite[Prop. 2.5]{Kurz20}, where the author, for every prime power $q$, provides a cyclic orbit full flag code on $\bbF_{q^3}$ (hence, of type $(1,2)$) with maximum distance as a matching code obtained from the action of $\bbF_{q^3}^\ast$. The same argument allows us to build optimum distance cyclic orbit flag codes with best friend $\bbF_q$ of type $(1, n-1)$ as matching codes for every $n\geq 3$.
On the other hand, in \cite{OrbitODFC}, the authors present a construction of an optimum distance orbit full flag code on $\bbF_{q^{2k}}$ arising from the action of a subgroup of $\mathrm{GL}(2k, q)$ that is a cyclic group generated by the companion matrix of a primitive polynomial of degree $2k$ in $\bbF_q[x]$. Observe that this action can be naturally translated into our scenario by identifying such a companion matrix with a primitive element of $\bbF_{q^{2k}},$ as was pointed out in \cite[Lemma 21]{TrautManBraunRos2013}. \section{Conclusions and future work} We have introduced the concept of cyclic orbit flag code as a generalization of cyclic orbit (subspace) codes to the setting of flag codes. Following the viewpoint of \cite{GLMoTro2015}, we analyze the structure and properties of this family of codes by defining the best friend of a flag. This approach allows us to easily compute the cardinality of the code and to provide bounds for its distance. In particular, we explore families of codes attaining the extreme possible values of the distance. For the minimum one, we introduce the family of Galois cyclic flag codes, whose elements present a rich structure of nested spreads compatible with the action of $\bbF_{q^n}^\ast$ on flags. We also study the subcodes of Galois cyclic flag codes that are themselves cyclic orbit codes, the Galois $\beta$-cyclic flag codes, and show that, by choosing a suitable $\beta$, we can improve the distance of such codes, even attaining the maximum possible one. On the other hand, concerning optimum distance flag codes with a fixed best friend, we have provided a necessary condition on the type vector of orbit flag codes that attain the maximum possible distance and arise also from the action of subgroups of $\bbF_{q^n}^\ast$.
In future work we plan to provide further constructions of $\beta$-cyclic orbit flag codes, as well as to study conditions and properties of cyclic orbit codes with a prescribed distance, not necessarily the maximum one. Although the study of unions of cyclic and $\beta$-cyclic orbit flag codes has not been addressed in this paper, it would also be interesting to tackle this problem. In addition, we would like to exploit the structure of cyclic orbit flag codes in order to derive efficient decoding algorithms, taking advantage of those already designed for cyclic orbit (subspace) codes in \cite{TrautManBraunRos2013}.
\section{Introduction} $~~~$Let $\mathbb{L}$ be a lattice in the Euclidean space $\mathbb{R}^n.$ By the reduction theory of quadratic forms introduced by Korkine and Zolotareff \cite{KZ}, a Cartesian coordinate system may be chosen in $\mathbb{R}^n$ in such a way that $\mathbb{L}$ has a basis of the form $$~(A_1,0,0,\cdots,0),~(a_{2,1},A_2,0,\cdots,0),\cdots,(a_{n,1},a_{n,2},\cdots,a_{n,n-1},A_n),$$ where $A_1,A_2,$ $\cdots,A_n$ are all positive and further for each $i=1,2,\cdots,n$ any two points of the lattice in $\mathbb{R}^{n-i+1}$ with basis $$(A_i,0,...,0), (a_{i+1,i},A_{i+1},0,\cdots,0), \cdots,(a_{n,i},a_{n,i+1}, \cdots,a_{n,n-1},A_n)$$ are at a distance at least $A_i$ apart. Here we shall be considering the following conjecture of Woods:\\ \noindent {\bf Conjecture (Woods).} If $A_1A_2\cdots A_n = 1$ and $A_i\leq A_1$ for each $i$, then any closed sphere in $\mathbb{R}^n$ of radius $\sqrt{n}/2$ contains a point of $\mathbb{L}.$\vspace{2mm} \noindent This conjecture is known to be true for $n \leq 9$. Woods \cite{W1, W2, W3} proved it for $4\leq n\leq 6$. Hans-Gill et al. \cite{HRS7, HRS8} proved it for $n=7$ and $n=8$. In a previous paper, the authors \cite{KR1} proved it for $n=9$. In \cite{KR2}, the authors obtained estimates related to Woods' Conjecture for $10 \le n \le 33$. In particular, we obtained the weaker result for $n=10$ that if the hypothesis of Woods' Conjecture holds, then any closed sphere in $\mathbb{R}^{10}$ of radius $\frac{\sqrt{10.3}}{2}$ contains a point of $\mathbb{L}.$ In 2017, Regev et al. \cite{RSW} showed that Woods' Conjecture is not always true: it is false for $n\ge 30$. It will be of great interest to find the largest (smallest) value of $n$, $10 \le n \le 29$, for which Woods' Conjecture is true (false).
In this direction, we find that Woods' Conjecture is true for $n = 10.$ \vspace{2mm} \noindent Woods \cite{W2,W3} showed that his conjecture implies the following conjecture:\vspace{3mm} \noindent {\bf Conjecture I.} If $\wedge$ is a lattice of determinant 1 and there is a sphere $|X|<R$ which contains no point of $\wedge$ other than $O$ and has $n$ linearly independent points of $\wedge$ on its boundary, then $\wedge$ is a covering lattice for the closed sphere of radius $\sqrt{n/4}.$ Equivalently, every closed sphere of radius $\sqrt{n/4}$ lying in $\mathbb{R}^n$ contains a point of $\wedge$.\vspace{2mm} It is well known that, together with the result of McMullen \cite{Mc}, truth of Conjecture I for a fixed $n$ would imply the following long-standing classical conjecture attributed to Minkowski on the product of $n$ non-homogeneous linear forms in $n$ variables:\\ {\noindent \bf Conjecture (Minkowski).} Let $L_i = a_{i1}x_1+\cdots + a_{in}x_n,$ $1\leq i\leq n,$ be $n$ real linear forms in $n$ variables $x_1,\cdots,x_n$ having determinant $\Delta = \det{(a_{ij})}\neq 0.$ For any given real numbers $c_1,\cdots,c_n$ there exist integers $x_1,\cdots,x_n$ such that \begin{equation*}|(L_1+c_1)\cdots(L_n+c_n)|\leq |\Delta|/2^n.\end{equation*} Minkowski's Conjecture is known to be true for $n \leq 9.$ For a more detailed history of {\mm}'s Conjecture and related results, see Gruber \cite{PG}, Gruber and Lekkerkerker \cite{GL}, Bambah et al. \cite{BDH} and Hans-Gill et al. \cite{HRS7}. While answering a question of Shapira and Weiss \cite{SW}, the authors along with Hans-Gill \cite{KHR} have given another proof of Minkowski's Conjecture for $n \le 7$.\vspace{1mm} \noindent In this paper we shall prove \begin{theorem} Woods' Conjecture is true for $n = 10$.\end{theorem} Therefore Conjecture I, and hence Minkowski's Conjecture, is proved for $n=10$.
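Minkowski's bound can be explored numerically for small $n$. The following brute-force Python sketch (our own illustration, not part of the paper; the $n=2$ case of the conjecture is classical) searches a box of integer points for a witness of the bound $|\Delta|/2^n$ with $n=2$:

```python
from itertools import product

def minkowski_witness(a, c, search=10):
    """For n = 2: brute-force integers (x1, x2) minimizing
    |(L1 + c1)(L2 + c2)|, where L_i = a[i][0]*x1 + a[i][1]*x2,
    and return the minimum together with the bound |det|/2^2."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    bound = abs(det) / 4
    best_val, best_pt = None, None
    for x1, x2 in product(range(-search, search + 1), repeat=2):
        L1 = a[0][0] * x1 + a[0][1] * x2 + c[0]
        L2 = a[1][0] * x1 + a[1][1] * x2 + c[1]
        val = abs(L1 * L2)
        if best_val is None or val < best_val:
            best_val, best_pt = val, (x1, x2)
    return best_val, best_pt, bound

val, pt, bound = minkowski_witness([[1.0, 0.3], [0.2, 1.0]], [0.5, 0.5])
print(val <= bound)  # True: a witness exists, as the n = 2 case guarantees
```

For instance, with the forms and shifts above, the point $(x_1,x_2)=(-1,0)$ already gives $|(L_1+c_1)(L_2+c_2)| = 0.15 \le |\Delta|/4 = 0.235$.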
One notes that falsehood of Woods' Conjecture for $n \ge 30$ does not mean that Minkowski's Conjecture is also false for those $n$.\vspace{2mm} We use the notations and method of proof of Hans-Gill et al. \cite{HRS7, HRS8}. In this method, we frequently need to maximize or minimize functions of several variables. For $n=7$ and $8$, Hans-Gill et al. \cite{HRS7, HRS8} did all the calculations by hand using calculus only. While proving it for $n=9$ (see \cite{KR1}), we reduced the number of variables one by one using calculus, replacing each variable with a value at which the function could attain its optimum. This way we obtained functions in at most 3 variables. Finally we arrived at a conclusion by plotting 2- or 3-dimensional graphs in the software Mathematica. Because of the heavy calculation work, the proof became very lengthy. The detailed proof of Woods' Conjecture for $n=9$, consisting of 132 pages, can be seen at arXiv:1410.5743v1[math.NT]. With one more dimension, i.e.\ for $n=10$, the same process is extremely difficult to carry through without using some computational package. Here we use the optimization tools of the software Mathematica (non-linear global optimization) to prove Woods' Conjecture for $n=10$. All these optimization computations were initially verified with the package Lingo. The authors are very grateful to Lindo Systems for providing access to it free of charge. \section{Preliminary Lemmas} For a unit sphere $S_{n}$ with center $O$ in $\mathbb{R}^{n}$, let $\Delta(S_{n})$ be the \emph{critical determinant} of $S_{n}$, defined as \vspace{-2mm}$$\Delta(S_{n}) = \inf\{d(\Lambda):\Lambda~\mbox{has no non-zero point in the interior of}~S_{n}\},$$ where $d(\Lambda)$ denotes the determinant of the lattice $\Lambda$. \par Let $\mathbb{L}$ be a lattice in $\mathbb{R}^{n}$ reduced in the sense of Korkine and Zolotareff and let $A_1, A_2,\ldots,A_n$ be defined as in Section 1. We state below some preliminary lemmas. 
Lemmas 1 and 2 are due to Woods \cite{W1}, Lemma 3 is due to Korkine and Zolotareff \cite{KZ} and Lemma 4 is due to Pendavingh and Van Zwam \cite{PV}. In Lemma 5, the cases $n=2$ and $3$ are classical results of Lagrange and Gauss; $n=4$ and $5$ are due to Korkine and Zolotareff \cite{KZ}, while $n=6, 7$ and $8$ are due to Blichfeldt \cite{Bh}.\vspace{2mm}\\ \begin{lemma} \emph{If $2\Delta(S_{n+1})A_{1}^{n}\geq d(\mathbb{L})$, then any closed sphere of radius $$R=A_{1}\{ 1-(A_{1}^{n} \Delta (S_{n+1})/ d(\mathbb{L}))^{2}\}^{1/2}$$ in $\mathbb{R}^{n}$ contains a point of $\mathbb{L}$.}\end{lemma} \begin{lemma} \emph{For a fixed integer $i$ with $1\leqslant i\leqslant n-1$, denote by $\mathbb{L}_{1}$ the lattice in $\mathbb{R}^{i}$ with the reduced basis $$(A_1,0,0,\ldots,0),(a_{2,1},A_2,0,\ldots,0),\ldots,(a_{i,1},a_{i,2},\ldots,a_{i,i-1},A_i) $$ and denote by $\mathbb{L}_{2}$ the lattice in $\mathbb{R}^{n-i}$ with the reduced basis $$(A_{i+1},0,0,\ldots,0),(a_{i+2,i+1},A_{i+2},0,\ldots,0),\ldots,(a_{n,i+1},a_{n,i+2},\ldots,a_{n,n-1},A_n). $$ If any sphere in $\mathbb{R}^{i}$ of radius $r_{1}$ contains a point of $\mathbb{L}_{1}$ and if any sphere in $\mathbb{R}^{n-i}$ of radius $r_{2}$ contains a point of $\mathbb{L}_{2}$, then any sphere in $\mathbb{R}^{n}$ of radius $(r_{1}^{2}+r_{2}^{2})^{1/2}$ contains a point of $\mathbb{L}$.}\end{lemma} \begin{lemma} \emph{For all relevant $i$, $A_{i+1}^2\geq\frac{3}{4}A_i^2$ and $A_{i+2}^2\geq\frac{2}{3}A_i^2$.}\end{lemma} \begin{lemma} \emph{For all relevant $i$, $A_{i+4}^2\geq 0.46873A_i^2$.}\end{lemma} \begin{lemma} \emph{$\Delta(S_n) = 1/\sqrt{2},~ 1/2,~ 1/(2\sqrt{2}),~ \sqrt{3}/8,~ 1/8$ and $1/16$ for $n=3,~4,~5,~6,~7$ and $8$ respectively.}\end{lemma} \section{Plan of the Proof}\label{Plan} As remarked earlier, we use the notation and approach of Hans-Gill et al. \cite{HRS7} and that of \cite{HRS8}. We include some of the details given there for the convenience of the reader. 
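Before turning to the details, the way Lemmas 1, 2 and 5 fit together can be illustrated numerically. The Python sketch below uses toy values of our own choosing (not data from the proof): Lemma 1 yields a covering radius for each part of a partition, and Lemma 2 combines the parts by adding the squared radii.

```python
import math

# Critical determinants Delta(S_n) from Lemma 5, indexed by dimension n.
DELTA = {3: 1 / math.sqrt(2), 4: 1 / 2, 5: 1 / (2 * math.sqrt(2)),
         6: math.sqrt(3) / 8, 7: 1 / 8, 8: 1 / 16}

def lemma1_radius(A1, n, d):
    """Lemma 1: if 2*Delta(S_{n+1})*A1**n >= d, every closed sphere of
    radius A1*sqrt(1 - (A1**n * Delta(S_{n+1}) / d)**2) in R^n contains
    a lattice point (A1 = first minimum, d = determinant of the part)."""
    delta = DELTA[n + 1]
    assert 2 * delta * A1**n >= d, "hypothesis of Lemma 1 fails"
    return A1 * math.sqrt(1 - (A1**n * delta / d) ** 2)

# Lemma 2: the radii of the parts combine as sqrt(r1**2 + r2**2).
r1 = lemma1_radius(1.1, 2, 1.1 * 0.9)   # a 2-dimensional part (toy numbers)
r2 = lemma1_radius(1.0, 3, 0.95)        # a 3-dimensional part (toy numbers)
total = math.sqrt(r1**2 + r2**2)
assert total < math.sqrt(5) / 2  # beats the Woods bound sqrt(n)/2 for n = 5
```

For these toy values the combined radius is about $1.015$, below $\sqrt{5}/2 \approx 1.118$; it is comparisons of exactly this kind that produce the conditional inequalities used below.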
We assume that Woods' Conjecture is false for $n=10$ and derive a contradiction. Let $\mathbb{L}$ be a lattice satisfying the hypothesis of the conjecture for $n=10$, i.e. $A_{1}A_{2}\cdots A_{10}=1$~\mbox{and}~ $A_{i}\leqslant A_{1}$ for each $i$. Suppose that there exists a closed sphere of radius $\sqrt{10}/2$ in $\mathbb{R}^{10}$ that contains no point of $\mathbb{L}$. Write $A=A_1^2,~ B=A_2^2,~ C=A_3^2,\ldots, J=A_{10}^2$. So we have $ABCDEFGHIJ=1$.\vspace{2mm}\\ $~~~$ If $(\lambda_{1},\lambda_{2},\cdots,\lambda_{s})$ is an ordered partition of $n$, then the conditional inequality arising from it, by using Lemmas 1 and 2, is also denoted by $(\lambda_{1},\lambda_{2},\cdots,\lambda_{s})$. If the conditions in an inequality $(\lambda_{1},\lambda_{2},\cdots,\lambda_{s})$ are satisfied, then we say that $(\lambda_{1},\lambda_{2},\cdots,\lambda_{s})$ holds.\vspace{2mm}\\ For example, the inequality (1, 1, 1, 1, 1, 1, 1, 1, 2) results in the conditional inequality: \begin{equation}\label{2} {\rm if~~} 2I\geq J~~ {\rm then~~~} A+B+C+D+E+F+G+H+4I-\frac{2I^2}{J}>10. \end{equation} Since $4I-2I^2/J\leq 2J$, the second inequality in (\ref{2}) gives \begin{equation}\label{3} A+B+C+D+E+F+G+H+2J>10. \end{equation} One may remark here that the condition $2I\geq J$ is necessary only if we want to use inequality (\ref{2}), but it is not necessary if we want to use the weaker inequality (\ref{3}). This is so because if $2I< J$, using the partition $(1,1)$ in place of $(2)$ for the relevant part, we get the upper bound $I+J$, which is clearly less than $2J$. We shall call inequalities of type (\ref{3}) \emph{weak} inequalities and inequalities of type (\ref{2}) \emph{strong} inequalities.\vspace{1mm} \\ $~~~$Sometimes, instead of Lemma 1, we are able to use the fact that Woods' Conjecture is true for dimensions less than or equal to $9$. The use of this is indicated by putting $^{*}$ on the corresponding part of the partition. 
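The passage from a strong inequality to the corresponding weak one rests on the elementary bound $4I - 2I^2/J \le 2J$, which is equivalent to $2(J-I)^2/J \ge 0$. A quick numerical sanity check in Python of both this bound and the $2I < J$ branch (illustrative only):

```python
import random

random.seed(0)
# 4*I - 2*I**2/J <= 2*J for all I, J > 0, since
# 2*J - 4*I + 2*I**2/J = 2*(J - I)**2/J >= 0; and whenever 2*I < J the
# partition (1, 1) bound I + J is sharper than 2*J.
for _ in range(100000):
    I = random.uniform(1e-3, 3.0)
    J = random.uniform(1e-3, 3.0)
    assert 4*I - 2*I**2/J <= 2*J + 1e-9
    if 2*I < J:
        assert I + J < 2*J
```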
For example, the inequality $(6^{*},4)$ is \begin{equation} {\rm if~~} G^4ABCDEF\geq 2~~ {\rm then~~~} 6(ABCDEF)^{\frac{1}{6}}+4G-\frac{1}{2}G^{5}ABCDEF > 10,\end{equation} the hypothesis of the conjecture in $6$ variables being satisfied.\vspace{2mm}\\ $~~~~$We observe that the inequalities of the type $(3,1,1,\cdots,1)$, $(2,1,1,\cdots,1)$, $(1,2,1,\cdots,1)$, $(1,2,2,1,\cdots,1)$, $(3,2,1,\cdots,1)$ etc. always hold. \vspace{2mm}\\ $~~~~~$We can assume $A>1$, because if $A\leq 1$, we must have $A=B=C=D=E=F=G=H=I=J=1.$ In this case Woods' Conjecture can be seen to be true using the inequality $(1,1,1,1,1,1,1,1,1,1)$. Also it is known that $A\leq \gamma_{10}<2.2636302$ (see \cite{CE, CS}), where $\gamma_n$ denotes Hermite's constant for minima of positive definite quadratic forms.\vspace{2mm}\\ $~~~~~$Each of $B,C,\ldots,J$ can either be $>1$ or $\leqslant 1$. This gives rise to $2^9=512$ cases. Case 1, where each of $B,C,D,\cdots,J > 1$, does not arise as $ABCDEFGHIJ=1$. In Section \ref{Easy}, the 353 easy cases have been considered under Propositions 1--4. The remaining 158 cases need much more intricate analysis of the available inequalities. Out of these 158 cases, 151 cases are somewhat less difficult and have been dealt with in Section \ref{Difficult}. The remaining 7 cases are very difficult to solve, as the ranges of the variables have to be divided into many sub-intervals. So these cases have been dealt with separately in Section \ref{Most Difficult}. For the cases in Sections \ref{Difficult} and \ref{Most Difficult}, we have used the Optimization tool of the software Mathematica (non-linear global optimization) for the optimization of the functions in 10 variables $A, B, C, D, E, F, G, H, I, J$ under a number of constraints.\vspace{1mm}\\ $~~~~~$We write all the 512 cases in lexicographical order. For the 353 easy cases, the inequalities used to get a contradiction and the propositions where they are dealt with are listed in Table I. 
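The bookkeeping behind these $2^9 = 512$ cases can be mirrored in a few lines of Python (an illustration only; the strings `>` and `<=` stand for the comparisons with $1$ used in Table I, with $A > 1$ fixed throughout):

```python
from itertools import product

# A > 1 is fixed; each of B, C, ..., J is '>' (meaning > 1) or '<='
# (meaning <= 1).  Listing '>' before '<=' reproduces the
# lexicographical order of the cases in Table I.
cases = [(">",) + rest for rest in product((">", "<="), repeat=9)]

assert len(cases) == 512
assert cases[0] == (">",) * 10             # Case 1: impossible, ABCDEFGHIJ = 1
assert cases[1] == (">",) * 9 + ("<=",)    # Case 2: handled via (8*, 2)
assert cases[511] == (">",) + ("<=",) * 9  # Case 512: the last case
```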
For 158 difficult cases either the proposition where these are dealt with or the main inequalities used to get a contradiction are also listed in Table I.\\ \textbf{Remark:} In many cases there are alternative ways to get a contradiction. We have chosen to describe the method which we find convenient. {\footnotesize$${\rm \bf Table~~~I}$$ \begin{tabular}{lcccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 1&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$-$&ABCDEFGHIJ=1\\ 2&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$1$&$(8^*,2)$\\ 3&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$ &$>$&3(i)&$(1,1,1,1,1,1,1,2,1)$ \\ 4&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 5&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$ &$>$&3(i)&$(1,1,1,1,1,1,2,1,1)$\\ 6&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 7&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$ &$>$&3(v)&$(1,1,1,1,1,1,3,1)$ \\ 8&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$ &$\leq$&6&$-$ \\ 9&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&3(i)&$(1,1,1,1,1,2,1,1,1)$ \\ 10&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 11&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$ &$>$&3(ii)&$(1,1,1,1,2,2,1)$ \\ 12&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 13&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$ &$>$&3(v)&$(1,1,1,1,1,3,1,1)$ \\ 14&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$ \\ 15&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$ &$>$&$6$ &$-$ \\ 16&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$ &$\leq$&$7$ &$-$ \\ 17&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$ &$>$&3(i)&$(1,1,1,1,2,1,1,1,1)$ \\ 18&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$ &$\leq$&1&$(8^*,2)$ \\ 19&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$ &$>$&3(ii)&$(1,1,1,1,2,1,2,1)$ \\ 20&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 21&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$ &$>$&3(ii)&$(1,1,1,1,2,2,1,1)$ \\ 22&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$ \\ 
23&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$ &$>$&3(ix)& $(1,1,1,1,2,3,1)$ \\ 24&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$ &$\leq$&-& $(2,2,2,3,1)$ \\ 25&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$ &$>$&3(v)& $(1,1,1,1,3,1,1,1)$\\ 26&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$ &$\leq$&1& $(8^*,2)$\\ 27&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$ &$>$&3(xi)& $(3,1,3,2,1)$ \\ 28&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$ &$\leq$&1& $(7^*,3)$ \\ 29&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$ &$>$&6&$ -$\\ 30&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$ &$\leq$&1&$ (8^*,2)$\\ 31&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$ &$>$&7& $-$\\ 32&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$ &$\leq$&7& $-$\\ 33&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$ &$>$&3(i)&$(1,1,1,2,1,1,1,1,1)$ \\ 34&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$ &$\leq$&1&$(8^*,2)$ \\ 35&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$ &$>$&3(ii)& $(1,1,1,2,1,1,2,1)$ \\ 36&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$ &$\leq$&1& $(7^*,3)$ \\ 37&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$ &$>$&3(ii)&$(1,1,1,2,1,2,1,1)$ \\ 38&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$ \\ 39&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$ &$>$&3(xi)&$(3,2,1,3,1)$ \\ 40&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$ &$\leq$&-&$(1,2,2,1,3,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 41&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$ &$>$&3(ii)&$(1,1,1,2,2,1,1,1)$\\ 42&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$ &$\leq$&1&$(8^*,2)$\\ 43&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$ &$>$&3(iii)& $(1,1,1,2,2,2,1)$ \\ 44&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$ &$\leq$&1& $(7^*,3)$ \\ 45&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$ &$>$&3(xi)&$(3,2,3,1,1)$ \\ 46&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$ \\ 47&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$ &$>$&-& $(1,2,2,3,1,1)$\\ 
48&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$ &$\leq$&6& $-$\\ 49&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$ &$>$&3(v)&$(1,1,1,3,1,1,1,1)$ \\ 50&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$ &$\leq$&1&$(8^*,2)$ \\ 51&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$ &$>$&3(xi)&$(3,3,1,2,1)$\\ 52&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$\\ 53&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$ &$>$&3(xi)&$(3,3,2,1,1)$\\ 54&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 55&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(vii)&$(3,3,3,1)$\\ 56&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,3,3,1)$\\ 57&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$ &$>$&-&$(1,2,3,1,2,1)$\\ 58&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$ &$\leq$&1&$(8^*,2)$\\ 59&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$ &$>$&-&$(1,2,3,1,2,1)$ \\ 60&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 61&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ &$>$& 7&$-$\\ 62&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 63&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&7&$-$\\ 64&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&7&$-$\\ 65 &$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&3(i)&$(1,1,2,1,1,1,1,1,1)$\\ 66 &$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 67&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&3(ii)&$(1,1,2,1,1,1,2,1)$ \\ 68&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 69&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&3(ii)&$(1,1,2,1,1,2,1,1)$\\ 70&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 71&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&3(ix)&$(2,2,1,1,3,1)$\\ 72&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(2,2,1,1,3,1)$\\ 73&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&3(ii)&$(1,1,2,1,2,1,1,1)$\\ 74&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 
75&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&3(iii)&$(1,1,2,1,2,2,1)$\\ 76&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 77&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&3(ix)&$(2,2,1,3,1,1)$\\ 78&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 79&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&-&$(2,2,1,3,1,1)$\\ 80&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(2,2,1,3,1,1)$\\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 81&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&3(ii)&$(1,1,2,2,1,1,1,1)$\\ 82&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 83&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&3(iii)&$(1,1,2,2,1,2,1)$\\ 84&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 85&$>$ &$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&3(iii)&$(1,1,2,2,2,1,1)$\\ 86&$>$ &$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 87&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(x)&$(2,2,2,3,1)$\\ 88&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&-&$(2,2,2,3,1)$\\ 89&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&3(ix)&$(2,2,3,1,1,1)$\\ 90&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 91&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&3(x)&$(2,2,3,2,1)$\\ 92&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 93&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&-& $(2,2,3,1,1,1)$\\ 94&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1& $(8^*,2)$\\ 95&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&6&$-$\\ 96&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(2,2,3,1,1,1)$\\ 97&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&3(v)&$(1,1,3,1,1,1,1,1)$\\ 98&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 99&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&3(ix)&$(2,3,1,1,2,1)$\\ 
100&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 101&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&3(ix)&$(2,3,1,2,1,1)$\\ 102&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 103&$>$ &$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&3(xi)&$(2,3,1,3,1)$ \\ 104&$>$ &$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(2,3,1,3,1)$ \\ 105&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&3(ix)& $(2,3,2,1,1,1)$\\ 106&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&1& $(8^*,2)$\\ 107&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&3(x)&$(2,3,2,2,1)$\\ 108&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 109&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&3(xi)&$(2,3,3,1,1)$\\ 110&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 111&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&-&$(2,2,1,2,1,1,1)$\\ 112&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(2,2,1,2,1,1,1)$\\ 113&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&-&$(2,2,2,1,1,1,1)$\\ 114&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 115&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&-&$(2,2,2,1,2,1)$\\ 116&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 117&$>$ &$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&-&$(2,2,2,2,1,1)$\\ 118&$>$ &$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 119&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&-&$(2,2,2,2,1,1)$\\ 120&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&-&$(2,2,2,2,1,1)$\\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 121&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&6&$-$\\ 122&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 123&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&6&$-$\\ 
124&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 125&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ &$>$&7&$-$\\ 126&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 127&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&6&$-$\\ 128&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(1,4,1,\cdots,1),(1,2,1,\cdots,1)$\\ 129&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&3(i)&$(1,2,1,1,1,1,1,1,1)$ \\ 130&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 131&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&3(ii)&$(1,2,1,1,1,1,2,1)$ \\ 132&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 133&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&3(ii)&$(1,2,1,1,1,2,1,1)$ \\ 134&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 135&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&3(vi)&$(3,1,1,1,3,1)$ \\ 136&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,1,1,1,3,1)$ \\ 137&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&3(ii)&$(1,2,1,1,2,1,1,1)$ \\ 138&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 139&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&3(iii)&$(1,2,1,1,2,2,1)$ \\ 140&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 141&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&3(vi)&$(3,1,1,3,1,1)$ \\ 142&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 143&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&-&$(1,2,1,1,3,1,1)$ \\ 144&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(1,2,1,1,3,1,1)$ \\ 145&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&3(ii)&$(1,2,1,2,1,1,1,1)$ \\ 146&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 147&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&3(iii)&$(1,2,1,2,1,2,1)$ \\ 148&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 149&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&3(iii)&$(1,2,1,2,2,1,1)$ \\ 
150&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 151&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(xi)&$(3,1,2,3,1)$\\ 152&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,1,2,3,1)$\\ 153&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&3(vi)&$(3,1,3,1,1,1)$\\ 154&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 155&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&3(ix)&$(3,1,3,2,1)$ \\ 156&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 157&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&-&$(1,2,1,3,1,1,1)$ \\ 158&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 159&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&-&$(1,2,1,3,1,1,1)$ \\ 160&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(1,2,1,3,1,1,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 161&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&3(ii)&$(1,2,1,2,1,1,1,1)$ \\ 162&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 163&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&3(iii)&$(1,2,2,1,1,2,1)$ \\ 164&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 165&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&3(iii)&$(1,2,2,1,2,1,1)$ \\ 166&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 167&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&3(ix)&$(3,2,1,3,1)$\\ 168&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,2,1,3,1)$\\ 169&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&3(iii)&$(1,2,2,2,1,1,1)$ \\ 170&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 171&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&3(iv)&$(1,2,2,2,2,1)$\\ 172&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 173&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&3(ix)&$(3,2,3,1,1)$ \\ 
174&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 175&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&-&$(1,2,2,3,1,1)$ \\ 176&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(1,2,2,3,1,1)$ \\ 177&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&3(vi)&$(3,3,1,1,1,1)$ \\ 178&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 179&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&3(ix)&$(3,3,1,2,1)$ \\ 180&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 181&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&3(ix)&$(3,3,2,1,1)$\\ 182&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 183&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(vii)&$(3,3,3,1)$ \\ 184&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,3,3,1)$ \\ 185&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&-&$(1,2,3,1,1,1,1)$ \\ 186&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 187&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&-&$(1,2,3,1,2,1)$ \\ 188&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 189&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&6&$-$\\ 190&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 191&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&-&$(3,3,1,1,1,1)$ \\ 192&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(1,2,3,1,1,1,1)$ \\ 193&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&3(v)&$(1,3,1,1,1,1,1,1)$ \\ 194&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 195&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&-&$(1,3,1,1,1,2,1)$ \\ 196&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 197&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&-&$(1,3,1,1,2,1,1)$\\ 198&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 199&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&-&$(1,3,1,1,3,1)$ \\ 
200&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,1,1,1,2,1,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 201&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&-&$(1,3,1,2,1,1,1)$ \\ 202&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 203&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&-&$(1,3,1,2,2,1)$\\ 204&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 205&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&-&$(1,3,1,3,1,1)$ \\ 206&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 207&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&-&$(1,2,1,1,2,1,1,1)$ \\ 208&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(3,1,1,3,1,1)$ \\ 209&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&-&$(1,3,2,1,1,1,1)$ \\ 210&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 211&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&-&$(1,3,2,1,2,1)$\\ 212&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 213&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&-&$(1,3,2,2,1,1)$\\ 214&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 215&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&-&$(1,3,2,3,1)$\\ 216&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&4&$(3,1,\cdots,1),(1,2,1,2,2,1,1)$\\ 217&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&-&$(1,3,3,1,1,1)$ \\ 218&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 219&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&-&$(1,3,3,2,1)$ \\ 220&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 221&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&-&$(1,3,3,1,1,1)$ \\ 222&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 
223&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&-&$(1,2,1,2,2,1,1)$ \\ 224&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&4&$(3,1,\cdots,1),(1,2,1,2,1,1,1,1)$ \\ 225&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&-&$(1,2,2,1,1,1,1,1)$ \\ 226&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 227&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&-&$(1,3,1,1,2,1,1)$ \\ 228&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 229&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&-&$(1,3,1,1,2,1,1)$ \\ 230&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 231&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&-&$(1,2,1,1,1,2,1,1)$\\ 232&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(1,2,1,1,1,4),(1,2,1,1,1,2,1,1)$\\ 233&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&-&$(1,2,1,1,2,1,1,1)$ \\ 234&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 235&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&-&$(1,2,1,1,2,2,1)$ \\ 236&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 237&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&-&$(2,1,1,1,2,1,1,1)$ \\ 238&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 239&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&-&$(2,1,1,1,2,1,1,1)$ \\ 240&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&4&$(3,1,\cdots,1)(1,2,1,1,2,1,1,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 241&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&-&$(1,2,2,1,1,1,1),(4,1,\cdots,1)$ \\ 242&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 243&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&-&$(1,2,1,\cdots,1)$ \\ 244&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 
245&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&-&$(1,2,1,\cdots,1),(4,1,\cdots,1)$ \\ 246&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 247&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&-&$(3,1,1,1,2,1,1)$ \\ 248&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&-&$(3,1,1,1,2,1,1),(3,1,\cdots,1)$ \\ 249&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&6&$-$\\ 250&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 251&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&-&$(2,2,2,1,1,1,1)$ \\ 252&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 253&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&6&$-$ \\ 254&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 255&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&-&$(3,1,\cdots,1)$ \\ 256&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&-&$(3,1,\cdots,1)$ \\ 257&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&3(i)&$(2,1,1,1,1,1,1,1,1)$\\ 258&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 259&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$ &$>$&3(ii)&$(2,1,1,1,1,1,2,1)$ \\ 260&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 261&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$ &$>$&3(ii)&$(2,1,1,1,1,2,1,1)$\\ 262&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 263&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$ &$>$&3(viii)&$(2,1,1,1,1,3,1)$ \\ 264&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$ &$\leq$&-&$(2,1,1,1,1,3,1)$ \\ 265&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&3(ii)&$(2,1,1,1,2,1,1,1)$ \\ 266&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 267&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$ &$>$&3(iii)&$(2,1,1,1,2,2,1)$ \\ 268&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 269&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$ &$>$&3(viii)&$(2,1,1,1,3,1,1)$ \\ 
270&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$ \\ 271&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$ &$>$&-&(2,1,1,1,3,1,1)\\ 272&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$ &$\leq$&5(ii)&$(2,1,\cdots,1),(2,1,1,1,3,1,1)$ \\ 273&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$ &$>$&3(ii)&$(2,1,1,2,1,1,1,1)$ \\ 274&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$ &$\leq$&1&$(8^*,2)$ \\ 275&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$ &$>$&3(iii)&$(2,1,1,2,1,2,1)$ \\ 276&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 277&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$ &$>$&3(iii)&(2,1,1,2,2,1,1) \\ 278&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$ &$\leq$&1& $(8^*,2)$\\ 279&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$ &$>$&3(ix)& $(2,1,1,2,3,1)$ \\ 280&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$ &$\leq$&5(ii)& $(2,1,\cdots,1),(2,1,1,2,3,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 281&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$ &$>$&3(viii)& $(2,1,1,3,1,1,1)$\\ 282&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$ &$\leq$&1& $(8^*,2)$\\ 283&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$ &$>$&3(ix)& $(2,1,1,3,2,1)$ \\ 284&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$ &$\leq$&1& $(7^*,3)$ \\ 285&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$ &$>$&-&(2,1,1,3,1,1,1)\\ 286&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 287&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$ &$>$&5(ii)& $(2,1,\cdots,1),(2,1,1,3,1,1,1)$\\ 288&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$ &$\leq$&5(i)& $(2,1,\cdots,1),(2,1,1,3,1,1,1)$\\ 289&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$ &$>$&3(ii)&$(2,1,2,1,1,1,1,1)$ \\ 290&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$ &$\leq$&1&$(8^*,2)$ \\ 291&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$ &$>$&3(iii)& $(2,1,2,1,1,2,1)$ \\ 292&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$ &$\leq$&1& 
$(7^*,3)$ \\ 293&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$ &$>$&3(iii)&$(2,1,2,1,2,1,1)$\\ 294&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 295&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$ &$>$&3(ix)&$(2,1,2,1,3,1)$ \\ 296&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$ &$\leq$&5(ii)&$(2,1,\cdots,1),(2,1,2,1,3,1)$ \\ 297&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$ &$>$&3(iii)&$(2,1,2,2,1,1,1)$\\ 298&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$ &$\leq$&1&$(8^*,2)$\\ 299&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$ &$>$&3(iv)& $(2,1,2,2,2,1)$ \\ 300&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$ &$\leq$&1& $(7^*,3)$ \\ 301&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$ &$>$&3(ix)&$(2,1,2,3,1,1)$ \\ 302&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$ \\ 303&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$ &$>$&5(ii)& $(2,1,\cdots,1),(2,1,2,3,1,1)$\\ 304&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$ &$\leq$&5(i)&$(2,1,\cdots,1),(2,1,2,3,1,1)$\\ 305&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$ &$>$&3(viii)&$(2,1,3,1,1,1,1)$ \\ 306&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$ &$\leq$&1&$(8^*,2)$ \\ 307&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$ &$>$&3(ix)&$(2,1,3,1,2,1)$\\ 308&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$\\ 309&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$ &$>$&3(ix)&$(2,1,3,2,1,1)$\\ 310&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 311&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(xi)&$(2,1,3,3,1)$\\ 312&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(2,1,3,3,1)$\\ 313&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$ &$>$&-&$(2,1,3,1,1,1,1)$\\ 314&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$ &$\leq$&1&$(8^*,2)$\\ 315&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$ &$>$&5(ii)&$(2,1,\cdots,1),(2,1,3,1,2,1)$ \\ 316&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$ &$\leq$&1&$(7^*,3)$ \\ 
317&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ & $>$&5(ii)&$(2,1,\cdots,1),(2,1,3,1,1,1,1)$\\ 318&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ & $\leq$&1&$(8^*,2)$\\ 319&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(2,1,3,1,1,1,1)$\\ 320&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 321&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&3(ii)&$(2,2,1,1,1,1,1,1)$\\ 322&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 323&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&3(iii)&$(2,2,1,1,1,2,1)$ \\ 324&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 325&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&3(iii)&$(2,2,1,1,2,1,1)$\\ 326&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 327&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&3(ix)&$(2,2,1,1,3,1)$\\ 328&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&5(ii)&$(2,1,\cdots,1),(2,2,1,1,3,1)$\\ 329&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&3(iii)&$(2,2,1,2,1,1,1)$\\ 330&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 331&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&3(iv)&$(2,2,1,2,2,1)$\\ 332&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 333&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&-&$(2,2,1,3,1,1)$\\ 334&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 335&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&5(ii)&$(2,1,\cdots,1),(2,2,1,3,1,1)$\\ 336&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(2,2,1,3,1,1)$\\ 337&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&3(iii)&$(2,2,2,1,1,1,1)$\\ 338&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 
339&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&3(iv)&$(2,2,2,1,2,1)$\\ 340&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 341&$>$ &$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&3(iv)&$(2,2,2,2,1,1)$\\ 342&$>$ &$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 343&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(x)&$(2,2,2,3,1)$\\ 344&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(2,2,2,3,1)$\\ 345&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&3(ix)&$(2,2,3,1,1,1)$\\ 346&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 347&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&3(x)&$(2,2,3,2,1)$\\ 348&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 349&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&5(ii)& $(2,1,\cdots,1),(2,2,2,1,1,1,1)$\\ 350&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1& $(8^*,2)$\\ 351&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(2,2,2,1,1,1,1)$\\ 352&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ 353&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&3(viii)&$(2,3,1,1,1,1,1)$\\ 354&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 355&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&3(ix)&$(2,3,1,1,2,1)$\\ 356&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 357&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&3(ix)&$(2,3,1,2,1,1)$\\ 358&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 359&$>$ &$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&3(xi)&$(2,3,1,3,1)$ \\ 360&$>$ &$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(2,3,1,3,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 
361&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&3(ix)& $(2,3,2,1,1,1)$\\ 362&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&1& $(8^*,2)$\\ 363&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&3(x)&$(2,3,2,2,1)$\\ 364&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 365&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&3(xi)&$(2,3,3,1,1)$\\ 366&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 367&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(2,3,3,1,1)$\\ 368&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ 369&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&-&$(2,3,1,1,1,1,1)$\\ 370&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 371&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&5(ii)&$(2,1,\cdots,1),(2,3,1,1,2,1)$\\ 372&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 373&$>$ &$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(2,3,1,2,1,1)$\\ 374&$>$ &$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 375&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(2,3,1,1,1,1,1)$\\ 376&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ 377&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(2,3,1,1,1,1,1)$\\ 378&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$,\\ 379&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(2,3,1,1,1,1,1)$\\ 380&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 381&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ &$>$&5(i)&$(2,1,\cdots,1),(2,3,1,1,1,1,1)$\\ 382&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$ &$\leq$&1&$(8^*,2)$\\ 383&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&-&$(2,1,1,1,1,1,1,1,1)$\\ 
384&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ 385&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$>$&3(v)&$(3,1,1,1,1,1,1,1)$ \\ 386&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 387&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$>$&3(viii)&$(3,1,1,1,1,2,1)$ \\ 388&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 389&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$>$&3(viii)&$(3,1,1,1,2,1,1)$ \\ 390&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 391&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&3(vi)&$(3,1,1,1,3,1)$ \\ 392&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&5(ii)&$(2,1,\cdots,1),(3,1,1,1,3,1)$ \\ 393&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$>$&3(viii)&$(3,1,1,2,1,1,1)$ \\ 394&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 395&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&3(ix)&$(3,1,1,2,2,1)$ \\ 396&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 397&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&3(vi)&$(3,1,1,3,1,1)$ \\ 398&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 399&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,1,3,1,1)$ \\ 400&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(3,1,1,3,1,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 401&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$>$&3(viii)&$(3,1,2,1,1,1,1)$ \\ 402&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 403&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&3(ix)&$(3,1,2,1,2,1)$ \\ 404&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 405&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&3(ix)&$(3,1,2,2,1,1)$ \\ 406&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 
407&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(xi)&$(3,1,2,3,1)$\\ 408&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(3,1,2,3,1)$\\ 409&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&3(vi)&$(3,1,3,1,1,1)$\\ 410&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 411&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&3(xi)&$(3,1,3,2,1)$ \\ 412&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 413&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,3,1,1,1)$ \\ 414&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 415&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(2,1,1,3,1,1,1)$ \\ 416&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&(2,1,1,1,1,1,1,1,1) \\ 417&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$>$&3(viii)&$(3,2,1,1,1,1,1)$ \\ 418&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 419&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&3(ix)&$(3,2,1,1,2,1)$ \\ 420&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 421&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&3(ix)&$(3,2,1,2,1,1)$ \\ 422&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 423&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&3(xi)&$(3,2,1,3,1)$\\ 424&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&-&$(3,2,1,3,1)$\\ 425&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&3(ix)&$(3,2,2,1,1,1)$ \\ 426&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 427&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&3(x)&$(3,2,2,2,1)$\\ 428&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 429&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&3(xi)&(3,2,3,1,1) \\ 430&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 
431&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(3,2,3,1,1)$ \\ 432&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 433&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&3(vi)&$(3,3,1,1,1,1)$ \\ 434&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 435&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&3(xi)&$(3,3,1,2,1)$ \\ 436&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 437&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&3(xi)&$(3,3,2,1,1)$\\ 438&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 439&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&3(vii)&$(3,3,3,1)$ \\ 440&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 441&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(3,3,1,1,1,1)$ \\ 442&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 443&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&-&$(3,2,1,1,2,1)$ \\ 444&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 445&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&-&$(2,1,2,1,1,1,1,1),(4,1,\cdots,1)$\\ 446&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 447&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 448&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 449&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$>$&-&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 450&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 451&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$>$&-&$(3,1,1,1,1,2,1)$ \\ 452&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 
453&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$>$&-&$(3,1,1,1,2,1,1)$\\ 454&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 455&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,1,1,2,1,1)$ \\ 456&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&$\leq$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 457&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$>$&-&$(3,1,1,2,1,1,1)$ \\ 458&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 459&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,1,2,1,1,1)$\\ 460&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 461&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,1,2,1,1,1)$ \\ 462&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 463&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 464&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 465&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$>$&-&$(3,1,2,1,1,1,1)$\\ 466&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 467&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,2,1,2,1)$\\ 468&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$\\ 469&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,2,2,1,1)$\\ 470&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$\\ 471&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$\\ 472&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ 473&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$>$&5(ii)&$(2,1,\cdots,1),(3,1,3,1,1,1)$ \\ 474&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 
475&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 476&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 477&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 478&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 479&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 480&$>$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ \end{tabular} \newpage \begin{tabular}{ccccccccccccc} Case & A & B & C & D & E & F & G & H & I &J&Proposition& Inequalities \\ &&&&&&&&&&&&\\ 481&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$>$&-&$(3,1,\cdots,1)$ \\ 482&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 483&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$>$&-&$(3,1,1,1,1,2,1),(4,1,1,1,2,1)$ \\ 484&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 485&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$>$&-&$(3,1,1,1,2,1,1)$,$(4,1,1,2,1,1)$ \\ 486&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 487&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$\\ 488&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$\\ 489&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$>$&-&$(3,1,1,2,1,1,1)$,\\ 490&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$\\ 491&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 492&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 493&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 494&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 495&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 
496&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&$\leq$&2&(2,1,1,1,1,1,1,1,1) \\ 497&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$>$&-&$(2,1,\cdots,1)$\\ 498&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 499&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 500&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 501&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 502&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 503&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 504&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 505&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$>$&5(i)&$(2,1,\cdots,1),(4,1,\cdots,1)$ \\ 506&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&$\leq$&1&$(8^*,2)$ \\ 507&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 508&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&$\leq$&1&$(7^*,3)$ \\ 509&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 510&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&$\leq$&1&$(8^*,2)$ \\ 511&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$>$&2&$(2,1,1,1,1,1,1,1,1)$ \\ 512&$>$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&$\leq$&2&$(2,1,1,1,1,1,1,1,1)$ \\ \end{tabular}} \section{Easy Cases}\label{Easy} \begin{prop} Cases where $I > 1,~J \leq 1$ or where $H>1, ~I\leq 1, ~J \leq 1$ do not arise.\end{prop} \noindent Proof is similar to that of Propositions 1 and 2 of \cite{HRS8}.\vspace{1mm}\\ This settles $128+64$ cases.\hfill$\Box$ \begin{prop} Cases in which $B\leq1$ and at most two out of $C,D,E,F,G,H,I,J$ are greater than 1 do not arise.\end{prop} \noindent Proof is similar to that of Proposition 3(i) of 
\cite{HRS8}.\vspace{1mm}\\ This settles another 23 cases.\hfill$\Box$\\ \begin{lemma}\label{lemma 6} Let $X_{1},\cdots,X_{10}$ be positive real numbers, each $<2.2636302$ and satisfying \begin{equation}\label{1.4.1}X_{1}>1,~~~X_{1}X_{2}\cdots X_{10}=1~~~{\rm and}~~~x_i=|X_i-1|.\end{equation} Then the following hold :\vspace{2mm}\\ (i)~~~{If $X_{i}>1$ for $3\leq i\leq10$, then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_1=4X_1-\frac{2X_{1}^2}{X_2}+X_3+\cdots+X_{10}\leq10$.\vspace{2mm}\\ (ii)~~~{If $X_{i}>1$ for $i=3,5,6,7,8,9,10$, then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_2=4X_1-\frac{2X_{1}^2}{X_2}+4X_3-\frac{2X_{3}^2}{X_4}+X_5+\cdots+X_{10}\leq10$.\vspace{2mm}\\ (iii)~~~{If $X_{i}>1$ for $i=3,5,7,8,9,10$, then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_3=4X_1-\frac{2X_{1}^2}{X_2}+4X_3-\frac{2X_{3}^2}{X_4}+4X_5-\frac{2X_{5}^2}{X_6}+X_7+X_8+X_9+X_{10}\leq10$.\vspace{2mm}\\ (iv)~~~{If $X_{i}>1$ for $i=3,5,7,9,10$, then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_4=4X_1-\frac{2X_{1}^2}{X_2}+4X_3-\frac{2X_{3}^2}{X_4}+4X_5-\frac{2X_{5}^2}{X_6}+4X_7-\frac{2X_{7}^2}{X_8}+X_9+X_{10}\leq10$.\vspace{2mm}\\ (v)~~~{If $X_{i}>1$ for $i=4,5,6,7,8,9,10$, then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_5=4X_1-\frac{X_{1}^3}{X_{2}X_{3}}+X_4+X_{5}+X_{6}+X_7+X_8+X_9+X_{10}\leq10$.\vspace{2mm}\\ (vi)~~~{If $X_{i}>1$ for $i=4,7,8,9,10$ and $X_7\leq X_1$, $X_8\leq X_1$, $X_9\leq X_1$, $X_{10}\leq X_1$}\vspace{1mm}\\ \indent~~ then we have\vspace{1mm}\\ \indent~~ $\mathfrak{S}_6=4X_1-\frac{X_{1}^3}{X_{2}X_{3}}+4X_4-\frac{X_{4}^3}{X_{5}X_{6}}+X_7+X_8+X_9+X_{10}\leq10$.\vspace{2mm}\\ (vii)~~~{If $X_{i}>1$ for $i=4,7,10$ and $X_{10}\leq X_1$, then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_7=4X_1-\frac{X_{1}^3}{X_{2}X_{3}}+4X_4-\frac{X_{4}^3}{X_{5}X_{6}}+4X_7-\frac{X_{7}^3}{X_{8}X_{9}}+X_{10}\leq10$.\vspace{2mm}\\ (viii)~~~{If $X_{i}>1$ for $i=4,6,7,8,9,10$ and $X_i\leq X_{1}X_{4}$ for $i=6,7,8,9,10$,\\\indent~~ $X_5\leq X_4$, } then we have\vspace{1mm}\\ 
\indent~~ $\mathfrak{S}_8=4X_1-\frac{X_{1}^3}{X_{2}X_{3}}+4X_4-\frac{2X_{4}^2}{X_{5}}+X_6+X_7+X_8+X_9+X_{10}\leq10$.\vspace{2mm}\\ (ix)~~~{If $X_{i}>1$ for $i=3,5,8,9,10$ and $X_2\leq X_1$, $X_4\leq X_3$, $X_i\leq X_{1}X_{5}$, \\\indent~~for $i=8,9,10$,} then we have\vspace{1mm}\\ \indent~~ $\mathfrak{S}_9=4X_1-\frac{2X_{1}^2}{X_2}+4X_3-\frac{2X_{3}^2}{X_4}+4X_5-\frac{X_{5}^3}{X_{6}X_{7}}+X_8+X_9+X_{10}\leq10$.\vspace{2mm}\\ (x)~~~{If $X_{i}>1$ for $i=3,5,7,10$ and $X_2\leq X_1$, $X_4\leq X_3$, $X_6\leq X_5$, $X_{10}\leq X_{1}X_{7}$,\\\indent~~ then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_{10}=4X_1-\frac{2X_{1}^2}{X_2}+4X_3-\frac{2X_{3}^2}{X_4}+4X_5-\frac{2X_{5}^2}{X_6}+4X_7-\frac{X_{7}^3}{X_{8}X_{9}}+X_{10}\leq10$.\vspace{2mm}\\ (xi)~~~{If $X_{i}>1$ for $i=4,7,9,10$, $X_8\leq X_7$, $X_9\leq X_1X_{4}X_{7}$, $X_{10}\leq X_1X_{4}X_{7}$,\\\indent~~ then we have}\vspace{1mm}\\ \indent~~ $\mathfrak{S}_{11}=4X_1-\frac{X_{1}^3}{X_{2}X_{3}}+4X_4-\frac{X_{4}^3}{X_{5}X_{6}}+4X_7-\frac{2X_{7}^2}{X_8}+X_9+X_{10}\leq10$.\end{lemma} Proof is similar to that of Lemma 5 of \cite{HRS8}, so omitted.\hfill$\Box$\\ {\noindent \bf Remark.} The above lemma can be generalized for arbitrary $n$. \begin{prop}\label{Prop 3} The following cases do not arise.\\ $\begin{array}{ll}{\rm (i)}&(3),(5),(9),(17),(33),(65),(129),(257)\\ {\rm (ii)}&(11), (19), (21), (35), (37), (41), (67), (69), (73), (81), (131), (133), (137), (145),\\& (161), (259), (261), (265), (273), (289), (321).\\ {\rm (iii)}&(43), (75), (83), (85), (139), (147), (149), (163), (165), (169), (267), (275), (277),\\& (291), (293), (297), (323), (325), (329), (337). 
\\ {\rm (iv)}&(171), (299), (331), (339), (341).\\ {\rm (v)}&(7), (13), (25), (49), (97), (193), (385).\\ {\rm (vi)}&(135), (141), (153), (177), (391), (397), (409), (433).\\ \end{array} $ $\begin{array}{ll} {\rm (vii)}&(55), (183), (439).\\ {\rm (viii)}&(263), (269), (281), (305), (353), (387), (389), (393), (401), (417).\\ {\rm (ix)}&(23), (71), (77), (89), (99), (101), (105), (155), (167), (173), (179), (181), (279), (283),\\& (295), (301), (307), (309), (327), (345), (355), (357), (361), (395), (403), (405),\\& (419), (421), (425).\\ {\rm (x)}&(87), (91), (107), (343), (347), (363), (427).\\ {\rm (xi)}&(27), (39), (45), (51), (53), (103), (109), (151), (311), (359), (365), (407), (411),\\& (423), (429), (435), (437).\end{array} $ \end{prop} Each part of Proposition 3 follows from the corresponding part of Lemma 6, after selecting a suitable inequality. The inequality used for each case is listed in Table I.\hfill$\Box$ \begin{lemma} Let $X_{1},\cdots,X_{10}$ be positive real numbers, each $< 2.2636302$, satisfying $\eqref{1.4.1}$. Let $$ \gamma={\displaystyle\sum_{4 \leq i \leq 10\atop{X_{i} \leq 1}}}x_{i}~~~{\rm and} ~~~~\delta={\displaystyle\sum_{4 \leq i \leq 10\atop{X_{i}>1}}}x_{i}.$$ Suppose that $\gamma\leq x_{1}\leq 0.5$; then $$\mathfrak{S}_{12} = 4X_1-X_{1}^{4}X_{4}\ldots X_{10}+X_{4}+\ldots +X_{10}\leq 10.$$\end{lemma} The proof is similar to that of Lemma 6(i) of \cite{HRS8}, so it is omitted.\hfill$\Box$\\ \begin{prop} The Cases 216, 224, 240 do not arise.\end{prop} {\noindent Proof.} These cases are dealt with using the above lemma. The inequalities used for each case are listed in Table I.\hfill$\Box$ \section{Difficult Cases}\label{Difficult} In this section we consider 151 cases, which require a more intricate analysis of the available inequalities.
These cases have been resolved using the optimization tools of the software Mathematica (non-linear global optimization) to optimize functions of the 10 variables $A, B, C, D, E, F, G, H, I, J$ under a number of constraints. \par NMinimize and NMaximize implement several algorithms for finding constrained global optima. The methods are flexible enough to cope with functions that are not differentiable or continuous, and they are not easily trapped by local optima. The constraints passed to NMinimize and NMaximize may be either a list or a logical combination of equalities, inequalities, and domain specifications; equalities and inequalities may be nonlinear. The settings for AccuracyGoal and PrecisionGoal specify the number of digits to seek in both the position of the maximum and the value of the function at the maximum. NMaximize continues until either of the goals specified by AccuracyGoal or PrecisionGoal is achieved; the default setting for both is WorkingPrecision/2. Method is an option for various algorithm-intensive functions that specifies which internal methods they should use.
With the default setting Method$->$Automatic Mathematica will automatically try to pick the best method for a particular computation.\vspace{2mm}\\ The common and obvious constraints, which arise from Lemmas 3 and 4, for all cases are :\vspace{2mm}\\ $\begin{array}{lllll} 1<A<2.2636302,&B\geq(3/4)\times A,&B\leq A,&&\\ C\geq(3/4)\times B,&C\geq(2/3)\times A,&C\leq A,&&\\ D\geq(3/4)\times C,&D\geq(2/3)\times B,&D\geq(1/2)\times A,&D\leq A,&\\ E\geq(3/4)\times D,&E\geq(2/3)\times C,&E\geq(1/2)\times B,& E\geq0.46873\times A,& E\leq A,\\ F\geq(3/4)\times E,&F\geq(2/3)\times D,&F\geq(1/2)\times C,&F\geq0.46873\times B,&F\leq A,\\ G\geq(3/4)\times F,&G\geq(2/3)\times E,&G\geq(1/2)\times D,&G\geq0.46873\times C,&G\leq A,\\ H\geq(3/4)\times G,&H\geq(2/3)\times F,&H\geq(1/2)\times E,&H\geq0.46873\times D,&H\leq A,\\ I\geq(3/4)\times H,&I\geq(2/3)\times G,&I\geq(1/2)\times F ,&I\geq0.46873\times E,&I\leq A,\\ J\geq(3/4)\times I,&J\geq(2/3)\times H,&J\geq(1/2)\times G,&J\geq0.46873\times F,&J\leq A.\\ \end{array}$\vspace{2mm}\\ We also use the following 88 constraints which arise from all the possible \emph{weak} inequalities corresponding to the partitions of 10 with summands equal to 1 and 2:\vspace{2mm}\\ $\begin{array}{ll} 2B+C+D+E+F+G+H+I+J>10,& A+2C+D+E+F+G+H+I+J>10,\\ A+B+2D+E+F+G+H+I+J>10,&A+B+C+2E+F+G+H+I+J>10,\\ A+B+C+D+2F+G+H+I+J>10,& A+B+C+D+E+2G+H+I+J>10,\\ A+B+C+D+E+F+2H+I+J>10,& A+B+C+D+E+F+G+2I+J>10,\\ A+B+C+D+E+F+G+H+2J>10,&\\ 2B+2D+E+F+G+H+I+J>10 &A+2C+2E+F+G+H+I+J>10 \\A+B+2D+2F+G+H+I+J>10 &A+B+C+2E+2G+H+I+J>10 \\ A+B+C+D+2F+2H+I+J>10 &A+B+C+D+E+2G+2I+J>10 \\A+B+C+D+E+F+2H+2J>10 &2B+C+2E+F+G+H+I+J>10 \\A+2C+D+2F+G+H+I+J>10 &A+B+2D+E+2G+H+I+J>10 \\A+B+C+2E+F+2H+I+J>10 &A+B+C+D+2F+G+2I+J>10 \\A+B+C+D+E+2G+H+2J>10 &2B+C+D+2F+G+H+I+J>10 \\A+2C+D+E+2G+H+I+J>10 &A+B+2D+E+F+2H+I+J>10 \\A+B+C+2E+F+G+2I+J>10 &A+B+C+D+2F+G+H+2J>10, \\2B+C+D+E+2G+H+I+J>10 &A+2C+D+E+F+2H+I+J>10 \\A+B+2D+E+F+G+2I+J>10 &A+B+C+2E+F+G+H+2J>10 \\2B+C+D+E+F+2H+I+J>10 &A+2C+D+E+F+G+2I+J>10 
\\A+B+2D+E+F+G+H+2J>10 &2B+C+D+E+F+G+2I+J>10 \\A+2C+D+E+F+G+H+2J>10, &2B+C+D+E+F+G+H+2J>10, \\2B+2D+2F+G+H+I+J>10 &A+2C+2E+2G+H+I+J>10 \\A+B+2D+2F+2H+I+J>10 &A+B+C+2E+2G+2I+J>10 \\A+B+C+D+2F+2H+2J>10 &2B+C+2E+2G+H+I+J>10 \\A+2C+D+2F+2H+I+J>10 &A+B+2D+E+2G+2I+J>10 \\A+B+C+2E+F+2H+2J>10, &2B+2D+E+2G+H+I+J>10 \\A+2C+2E+F+2H+I+J>10 &A+B+2D+2F+G+2I+J>10 \\2B+C+D+2F+2H+I+J>10 &A+2C+D+E+2G+2I+J>10 \\A+B+2D+E+F+2H+2J>10 &2B+2D+E+F+2H+I+J>10 \\A+2C+2E+F+G+2I+J>10 &A+B+2D+2F+G+H+2J>10, \\2B+C+D+E+2G+2I+J>10 &A+2C+D+E+F+2H+2J>10, \\2B+2D+E+F+G+2I+J>10 &A+2C+2E+F+G+H+2J>10, \\2B+C+2E+F+2H+I+J>10 &A+2C+D+2F+G+2I+J>10 \\A+B+2D+E+2G+H+2J>10 &2B+C+D+2F+G+2I+J>10 \\A+2C+D+E+2G+H+2J>10, &2B+C+2E+F+G+2I+J>10 \\A+2C+D+2F+G+H+2J>10 &2B+C+D+E+F+2H+2J>10, \\2B+2D+E+F+G+H+2J>10, &2B+C+D+E+2G+H+2J>10 \end{array}$ $\begin{array}{ll} \\2B+C+D+2F+G+H+2J>10 &2B+C+2E+F+G+H+2J>10, \\A+B+C+2E+2G+H+2J>10 &2B+2D+2F+2H+I+J>10 \\A+2C+2E+2G+2I+J>10 &A+B+2D+2F+2H+2J>10, \\2B+C+2E+2G+2I+J>10 &A+2C+D+2F+2H+2J>10, \\2B+2D+E+2G+2I+J>10 &A+2C+2E+F+2H+2J>10, \\2B+2D+2F+G+2I+J>10 &A+2C+2E+2G+H+2J>10 \\2B+C+D+2F+2H+2J>10 &2B+2D+E+F+2H+2J>10, \\2B+2D+2F+G+H+2J>10, &2B+C+2E+F+2H+2J>10, \\2B+2D+E+2G+H+2J>10, &2B+C+2E+2G+H+2J>10 \\2B+2D+2F+2H+2J>10. \end{array}$\vspace{2mm}\\ Some specific constraints due to the numerical bounds on the variables arise in the individual case. In order to write all these constraints in Mathematica Notebook, we call $A=x1, B=x2, \cdots, J=x10.$ Let $\phi_l$ and $\psi_l$ for $l\geq1$, be non-linear functions in 10 variables $A, B, \cdots, J$, listed below. The functions $\phi_l$'s are associated with various conditional inequalities and the same have been written against the respective function. 
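To make the shape of these computations concrete, the following rough sketch (not part of the paper's verification, which uses Mathematica's NMaximize on the full constraint list) evaluates $\phi_1=4A-\frac{2A^2}{B}+C+D+4E-\frac{E^3}{FG}+H+I+J$ over random points satisfying only a partial subset of the constraints above: the bounds $1<A<2.2636302$, $X_i\leq A$, the product condition $AB\cdots J=1$, and the first nine weak inequalities. The function and routine names (`phi1`, `feasible`, `random_search`) are our own illustrative choices.

```python
# Illustrative sketch only: a crude random search over a PARTIAL
# feasibility region; the paper imposes the full chain inequalities of
# Lemmas 3-4 and all 88 weak inequalities, omitted here.
import math
import random

UB = 2.2636302  # common upper bound on all ten variables


def phi1(x):
    A, B, C, D, E, F, G, H, I, J = x
    return 4*A - 2*A**2/B + C + D + 4*E - E**3/(F*G) + H + I + J


def feasible(x):
    """Partial check: A > 1, X_i <= A, and the first nine weak
    inequalities (the product condition is enforced by the sampler)."""
    if not (1.0 < x[0] < UB):
        return False
    if any(not (0.0 < xi <= x[0]) for xi in x[1:]):
        return False
    s = sum(x)
    # s - x[i-1] + x[i] encodes 2B+C+...+J > 10, A+2C+D+...+J > 10, ...
    return all(s - x[i - 1] + x[i] > 10 for i in range(1, 10))


def random_search(n=20000, seed=0):
    """Return (largest phi_1 value found, number of feasible samples)."""
    rng = random.Random(seed)
    best, hits = None, 0
    for _ in range(n):
        a = rng.uniform(1.0, 1.6)
        mid = [rng.uniform(0.6, a) for _ in range(8)]
        last = 1.0 / (a * math.prod(mid))   # force the product to be 1
        x = [a] + mid + [last]
        if feasible(x):
            hits += 1
            v = phi1(x)
            if best is None or v > best:
                best = v
    return best, hits


if __name__ == "__main__":
    best, hits = random_search()
    print(f"feasible samples: {hits}, largest phi_1 found: {best}")
```

A random probe of this relaxed region only illustrates how the objective and constraints are encoded; the proofs below rest on running NMaximize over the complete constraint set, case by case.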
In each of these cases we find, using Mathematica, that the global maximum of $\phi_l$ is always less than 10 and the global minimum of $\psi_l$ is always greater than 2 under all the above-mentioned constraints, together with some specific constraints arising from the numerical bounds on the variables; this contradicts the corresponding inequality (as mentioned in Section \ref{Plan}).\\\hyperref[Fig 1]{Figure 1} shows the Mathematica Model of $\phi_3$ used in Case 8, and \hyperref[Fig 2]{Figure 2} shows the Mathematica Model of $\psi_2$ used in Case 48.\vspace{2mm}\\ $\psi_1\label{s1}=\frac{A^3}{BCD}$;~$\psi_2\label{s2}=\frac{B^3}{CDE}$;~$\psi_3\label{s3}=\frac{C^3}{DEF}$;~$\psi_4\label{s4}=\frac{D^3}{EFG}$; ~$\psi_5\label{s5}=\frac{E^3}{FGH}$;~$\psi_6\label{s6}=\frac{F^3}{GHI}$;~$\psi_7\label{s7}=\frac{G^3}{HIJ}$; $\begin{array}{ll} \phi_1\label{k1}=4A-\frac{2A^2}{B}+C+D+4E-\frac{E^3}{FG}+H+I+J ;& (2,1,1,3,1,1,1)\vspace{1mm}\\ \phi_2\label{k2}=4A-\frac{2A^2}{B}+C+D+E+4F-\frac{F^3}{GH}+I+J ;& (2,1,1,1,3,1,1)\vspace{1mm}\\ \phi_3\label{h1}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{2E^2}{F}+4G-\frac{1}{2}\frac{G^4}{HIJ} ;& (2,2,2,4)\vspace{1mm}\\ \phi_4\label{h2}=4A-\frac{A^3}{BC}+D+E+F+4G-\frac{G^3}{HI}+J ;& (3,1,1,1,3,1)\vspace{1mm}\\ \phi_5\label{h3}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+E+4F-\frac{1}{2}\frac{F^4}{GHI}+J ; &(2,2,1,4,1)\vspace{1mm}\\ \phi_6\label{h4}=4A-\frac{A^3}{BC}+D+E+4F-\frac{F^3}{GH}+I+J ;&(3,1,1,3,1,1)\vspace{1mm}\\ \phi_7\label{h5}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{1}{2}\frac{E^4}{FGH}+I+J ; &(2,2,4,1,1)\vspace{1mm}\\ \phi_8\label{h6}=4A-\frac{A^3}{BC}+D+4E-\frac{E^3}{FG}+H+I+J ;& (3,1,3,1,1,1)\vspace{1mm}\\ \phi_9\label{h7}=A+4B-\frac{2B^2}{C}+4D-\frac{2D^2}{E}+4F-\frac{1}{2}\frac{F^4}{GHI}+J ; &(1,2,2,4,1)\vspace{1mm}\\ \phi_{10}\label{h8}=A+4B-\frac{1}{2}\frac{B^4}{CDE}+4F-\frac{1}{2}\frac{F^4}{GHI}+J ;& (1,4,4,1)\vspace{1mm}\\ \phi_{11}\label{h9}=A+4B-\frac{2B^2}{C}+4D-\frac{2D^2}{E}+4F-\frac{2F^2}{G}+4H-\frac{2H^2}{I}+J ;
&(1,2,2,2,2,1)\vspace{1mm}\\ \phi_{12}\label{h10}=4A-\frac{1}{2}\frac{A^4}{BCD}+4E-\frac{2E^2}{F}+4G-\frac{2G^2}{H}+I+J ;& (4,2,2,1,1)\vspace{1mm}\\ \end{array}$ \newpage $\begin{array}{ll} \phi_{13}\label{h11}=A+4B-\frac{1}{2}\frac{B^4}{CDE}+4F-\frac{2F^2}{G}+4H-\frac{2H^2}{I}+J ; &(1,4,2,2,1)\vspace{1mm}\\ \phi_{14}\label{h12}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{2E^2}{F}+4G-\frac{2G^2}{H}+I+J ;& (2,2,2,2,1,1)\vspace{1mm}\\ \phi_{15}\label{h13}=4A-\frac{2A^{2}}{B}+4C-\frac{1}{2}\frac{C^4}{DEF}+4G-\frac{2G^2}{H}+I+J ;& (2,4,2,1,1)\vspace{1mm}\\ \phi_{16}\label{h14}=A+4B-\frac{1}{2}\frac{B^4}{CDE}+4F-\frac{2F^2}{G}+H+I+J ;& (1,4,2,1,1,1)\vspace{1mm}\\ \phi_{17}\label{h15}=4A-\frac{1}{2}\frac{A^4}{BCD}+4E-\frac{2E^2}{F}+G+H+I+J ;& (4,2,1,1,1,1)\vspace{1mm}\\ \phi_{18}\label{h16}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{2E^2}{F}+G+H+I+J ;& (2,2,2,1,1,1,1)\vspace{1mm}\\ \phi_{19}\label{h17}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{2E^2}{F}+G+4H-\frac{2H^2}{I}+J ;& (2,2,2,1,2,1)\vspace{1mm}\\ \phi_{20}\label{h18}=4A-\frac{2A^2}{B}+4C-\frac{1}{2}\frac{C^4}{DEF}+G+4H-\frac{2H^2}{I}+J ;& (2,4,1,2,1)\vspace{1mm}\\ \phi_{21}\label{h19}=A+4B-\frac{1}{2}\frac{B^4}{CDE}+F+G+H+I+J ;& (1,4,1,1,1,1)\vspace{1mm}\\ \phi_{22}\label{h20}=A+4B-\frac{2B^2}{C}+4D-\frac{D^3}{EF}+G+H+I+J ;& (1,2,3,1,1,1,1)\vspace{1mm}\\ \phi_{23}\label{h21}=4A-\frac{A^3}{BC}+D+E+F+G+H+I+J ;& (3,1,\cdots,1)\vspace{1mm}\\ \phi_{24}\label{h22}=4A-\frac{1}{2}\frac{A^4}{BCD}+E+F+G+H+I+J ;& (4,1,\cdots,1)\vspace{1mm}\\ \phi_{25}\label{h23}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{1}{2}\frac{E^4}{FGH}+4I-\frac{2I^2}{J} ;& (2,2,4,2)\vspace{1mm}\\ \phi_{26}\label{h24}=4A-\frac{2A^2}{B}+C+4D-\frac{1}{2}\frac{D^4}{EFG}+4H-\frac{2H^2}{I}+J ;& (2,1,4,2,1)\vspace{1mm}\\ \phi_{27}\label{h25}=4A-\frac{2A^2}{B}+4C-\frac{1}{2}\frac{C^4}{DEF}+4G-\frac{1}{2}\frac{G^4}{HIJ} ;& (2,4,4)\vspace{1mm}\\ \phi_{28}\label{h26}=4A-\frac{2A^2}{B}+4C-\frac{1}{2}\frac{C^4}{DEF}+4G-\frac{2G^2}{H}+4I-\frac{2I^2}{J} ;& 
(2,4,2,2)\vspace{1mm}\\ \phi_{29}\label{h27}=A+4B-\frac{2B^2}{C}+4D-\frac{1}{2}\frac{D^4}{EFG}+4H-\frac{2H^2}{I}+J ;& (1,2,4,2,1)\vspace{1mm}\\ \phi_{30}\label{h28}=4A-\frac{2A^2}{B}+4C-\frac{1}{2}\frac{C^4}{DEF}+4G-\frac{2G^2}{H}+I+J ;& (2,4,2,1,1)\vspace{1mm}\\ \phi_{31}\label{h29}=4A-\frac{1}{2}\frac{A^4}{BCD}+4E-\frac{1}{2}\frac{E^4}{FGH}+4I-\frac{2I^2}{J} ;& (4,4,2)\vspace{1mm}\\ \phi_{32}\label{h30}=4A-\frac{1}{2}\frac{A^4}{BCD}+4E-\frac{2E^2}{F}+4G-\frac{2G^2}{H}+4I-\frac{2I^2}{J} ;& (4,2,2,2)\vspace{1mm}\\ \phi_{33}\label{h31}=A+4B-\frac{2B^2}{C}+4D-\frac{1}{2}\frac{D^4}{EFG}+H+I+J ;& (1,2,4,1,1,1)\vspace{1mm}\\ \phi_{34}\label{h32}=4A-\frac{1}{2}\frac{A^4}{BCD}+4E-\frac{1}{2}\frac{E^4}{FGH}+I+J ;& (4,4,1,1)\vspace{1mm}\\ \phi_{35}\label{h33}=4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{1}{2}\frac{E^4}{FGH}+I+J ;& (2,2,4,1,1)\vspace{1mm}\\ \phi_{36}\label{h34}=4A-\frac{A^3}{BC}+4D-\frac{1}{2}\frac{D^4}{EFG}+4H-\frac{2H^2}{I}+J ;& (3,4,2,1)\vspace{1mm}\\ \phi_{37}\label{h35}=A+4B-\frac{2B^2}{C}+4D-\frac{2D^2}{E}+4F-\frac{2F^2}{G}+H+I+J ;& (1,2,2,2,1,1,1). \end{array}$ \begin{lemma} Cases in which $B\leq1$ and \\ (i) any three out of $C,D,E,F,G,H,I,J$ are greater than 1 and $A<1.196$, or \\ (ii) any four out of $C,D,E,F,G,H,I,J$ are greater than 1 and $A<1.096$,\\ do not arise.\end{lemma} The proof is similar to that of Proposition 3 of \cite{HRS8} and is omitted.\hfill$\Box$\\ \begin{prop} The following cases do not arise.\\ $\begin{array}{ll}{\rm (i)}& 288, 304, 312, 319, 336, 344, 351, 360, 367, 375, 379, 381, 400, 408, 415, 431, 456, 463, \\&471, 475, 477, 487, 491, 493, 499, 501, 505.\\ {\rm (ii)}& 272, 280, 287, 296, 303, 315, 317, 328, 335, 349, 371, 373, 377, 392, 399, 413, 441, 455,\\& 459, 461, 467, 469, 473.\end{array}$\end{prop} {\noindent\bf Proof.} We apply Lemma 8(i) to the cases in part (i) and Lemma 8(ii) to the cases in part (ii). For the sake of convenience, we illustrate Cases 288 and 272 here.
The other cases are similar, and the inequalities used for each are listed in Table 1.\\ Case 288: $A>1, B\leq1, C>1, D>1, E>1, F\leq1, G\leq1, H\leq1, I\leq1, J\leq1$\\ Using Lemma 8(i) we get $A\geq1.196$. Now, using the Optimization Tool of Mathematica, we find that $\max~\hyperref[k1]{\phi_1}\leq10$, which contradicts the corresponding inequality (2,1,1,3,1,1,1).\\ Case 272: $A>1, B\leq1, C>1, D>1, E>1, F>1, G\leq1, H\leq1, I\leq1, J\leq1$\\ Using Lemma 8(ii) we get $A\geq1.096$. Now, using the Optimization Tool of Mathematica, we find that $\max~\hyperref[k2]{\phi_2}\leq10$, which contradicts the corresponding inequality (2,1,1,1,3,1,1). \hfill$\Box$\\ Proposition 5 settles 50 of the difficult cases. Of the remaining 101 difficult cases, we discuss only 11 here, in Proposition 6. The other 90 cases are similar, and the main inequalities used to obtain a contradiction are listed in Table 1 against each case. \noindent \begin{figure}[h!] \includegraphics[width=155mm,height=155mm]{Case_8} \caption{Mathematica Model for $\phi_3$ (Case 8)} \label{Fig 1} \end{figure} \noindent \begin{figure}[h!] \includegraphics[width=155mm,height=155mm]{Case_48} \caption{Mathematica Model for $\psi_2$ (Case 48)} \label{Fig 2} \end{figure} \begin{prop} The following cases do not arise: \end{prop} {\noindent \bf Case 8} $A>1, B>1, C>1, D>1, E>1, F>1, G>1, H\leq1, I\leq1, J\leq1$.\\ Proof. First suppose $G^3/HIJ\geq2$; then the inequality (2,2,2,4) holds, but $\max~\hyperref[h1]{\phi_3}\leq10$. So $G^3/HIJ<2$, but then $\max~\hyperref[h2]{\phi_4}\leq10$, which contradicts the inequality (3,1,1,1,3,1).\vspace{3mm}\\ {\noindent \bf Case 15} $A>1, B>1, C>1, D>1, E>1, F>1, G\leq1, H\leq1, I\leq1, J>1$.\\ Proof. Suppose $F^3/GHI\geq2$; then the inequality (2,2,1,4,1) holds, but $\max~\hyperref[h3]{\phi_5}\leq10$.
So $F^3/GHI<2$, but then $\max~\hyperref[h4]{\phi_6}\leq10$, which contradicts (3,1,1,3,1,1).\vspace{1mm}\\ {\noindent \bf Case 29} $A>1, B>1, C>1, D>1, E>1, F\leq1, G\leq1, H\leq1, I>1, J>1$.\\ Proof. Suppose $E^3/FGH\geq2$; then (2,2,4,1,1) holds, but $\max~\hyperref[h5]{\phi_7}\leq10$. So $E^3/FGH<2$, but then $\max~\hyperref[h6]{\phi_8}\leq10$, which contradicts (3,1,3,1,1,1).\vspace{1mm}\\ {\noindent \bf Case 48} $A>1, B>1, C>1, D>1, E\leq1, F>1, G\leq1, H\leq1, I\leq1, J\leq1$.\\ Proof. Suppose $F^3/GHI\geq2$. If $B<1.6$, then $\max~\hyperref[h7]{\phi_9}\leq10$, which contradicts (1,2,2,4,1); and if $B\geq1.6$, then $\min~\hyperref[s2]{\psi_2}>2$, so the inequality (1,4,4,1) holds, but $\max~\hyperref[h8]{\phi_{10}}\leq10$. Thus we must have $F^3/GHI<2$, but then $\max~\hyperref[h9]{\phi_{11}}\leq10$, which contradicts the inequality (1,2,2,2,2,1).\vspace{1mm}\\ {\noindent \bf Case 95} $A>1, B>1, C>1, D\leq1, E>1, F\leq1, G\leq1, H\leq1, I\leq1, J>1$.\\ Proof. Claim (i) $A^3/BCD<2$\\ If $A^3/BCD\geq2$, then (4,2,2,1,1) holds, but $\max~\hyperref[h10]{\phi_{12}}\leq 10$.
\\ Claim (ii) $B^3/CDE<2$\\ If $B^3/CDE\geq2$, then the inequality (1,4,2,2,1) holds, but $\max~\hyperref[h11]{\phi_{13}}\leq10$.\\ Finally $\max~\hyperref[h12]{\phi_{14}}\leq10$, which contradicts the inequality (2,2,2,2,1,1).\vspace{1mm}\\ {\noindent \bf Case 121} $A>1, B>1, C>1, D\leq1, E\leq1, F\leq1, G\leq1, H>1, I>1, J>1$.\\ Proof. Claim (i) $C<1.05$\\ If $C\geq1.05$, then $\min~\hyperref[s3]{\psi_3}>2$ and so the inequality (2,4,2,1,1) holds, but $\max~\hyperref[h13]{\phi_{15}}\leq10$.\\ Claim (ii) $B<1.24$\\ If $B\geq 1.24$, then $\min~\hyperref[s2]{\psi_2}>2$, so the inequality (1,4,2,1,1,1) holds, but $\max~\hyperref[h14]{\phi_{16}}\leq10$.\\ Claim (iii) $\frac{A^3}{BCD}<2$\\ If $\frac{A^3}{BCD}\geq 2$, then (4,2,1,1,1,1) holds, but $\max~\hyperref[h15]{\phi_{17}}\leq10$.\\ Finally the inequality (2,2,2,1,1,1,1) holds, but $\max~\hyperref[h16]{\phi_{18}}\leq10$.\vspace{1mm}\\ {\noindent \bf Case 123} $ A > 1, B > 1, C > 1, D \leq 1, E \leq 1, F \leq 1, G \leq 1, H > 1, I \leq 1, J > 1$.\\ Proof. If $C\leq1.08$, then $\max~\hyperref[h17]{\phi_{19}}\leq10$, which contradicts the inequality (2,2,2,1,2,1); and if $C>1.08$, then $\min~\hyperref[s3]{\psi_3}>2$ and so (2,4,1,2,1) holds, but $\max~\hyperref[h18]{\phi_{20}}\leq10$, which gives a contradiction.\vspace{1mm}\\ {\noindent \bf Case 127} $A>1, B>1, C>1, D\leq1, E\leq1, F\leq1, G\leq1, H\leq1, I\leq1, J>1$.\\ Proof. Claim (i) $B>1.25$\\ For $B\leq1.25$, $\max~\hyperref[h16]{\phi_{18}}\leq10$, which contradicts (2,2,2,1,1,1,1).\\ Claim (ii) $C<1.3$\\ For $C\geq1.3$, $C^3/DEF>2$, so (2,4,2,1,1) holds, but $\max~\hyperref[h13]{\phi_{15}}\leq10$.\\ Claim (iii) $A<1.62$\\ For $A\geq1.62$, $A^3/BCD>2$ and $2G>H$, so (4,2,2,1,1) holds, but $\max~\hyperref[h10]{\phi_{12}}\leq 10$.\\ Finally $B^3/CDE>2$ and so (1,4,2,1,1,1) holds, but $\max~\hyperref[h14]{\phi_{16}}\leq10$.\vspace{1mm}\\ {\noindent \bf Case 189} $A>1, B>1, C\leq1, D>1, E\leq1, F\leq1, G\leq1, H\leq1, I>1, J>1$.\\ Proof.
If $A^3/BCD\geq2$, then (4,2,1,1,1,1) holds, but $\max~\hyperref[h15]{\phi_{17}}\leq10$, a contradiction.\\ If $A^3/BCD<2$, then $\max~\hyperref[h20]{\phi_{22}}\leq10$, which contradicts $(1,2,3,1,1,1,1)$.\vspace{1mm}\\ {\noindent \bf Case 249} $A>1, B>1, C\leq1, D\leq1, E\leq1, F\leq1, G\leq1, H>1, I>1, J>1$.\\ Proof. Claim (i) $A^3/BCD<2$\\ If $A^3/BCD\geq2$, then (4,2,1,1,1,1) holds, but $\max~\hyperref[h15]{\phi_{17}}\leq10$, a contradiction.\\ Claim (ii) $B^3/CDE<2$\\ If $B^3/CDE\geq2$, then (1,4,1,1,1,1) holds, but $\max~\hyperref[h19]{\phi_{21}}\leq10$.\\ Finally $\max~\hyperref[h16]{\phi_{18}}\leq10$, which contradicts (2,2,2,1,1,1,1).\vspace{1mm}\\ {\noindent \bf Case 253} $A>1, B>1, C\leq1, D\leq1, E\leq1, F\leq1, G\leq1, H\leq1, I>1, J>1$.\\ Proof. If $A^3/BCD\geq2$, then $(4,1,\cdots,1)$ holds, but $\max~\hyperref[h22]{\phi_{24}}\leq10$.\\ If $A^3/BCD<2$, then $\max~\hyperref[h16]{\phi_{18}}\leq10$, which contradicts (2,2,2,1,1,1,1).\hfill$\Box$ \section{Most Difficult Cases}\label{Most Difficult} In this section we consider the remaining 7 cases, which are the most difficult to settle. These cases have also been resolved using the Optimization Tool of Mathematica, as in the previous section. Here we use the obvious and common constraints, along with the 88 constraints which arise from all the possible \emph{strong} inequalities corresponding to the partitions of 10 with summands equal to 1 and 2, provided the condition is satisfied for the particular part; e.g.\ we use $4G-2G^2/H$ only if $2G>H$. A few of the \emph{strong} inequalities are: \\ $4A-\frac{2A^2}{B}+C+D+E+F+G+H+I+J>10$,\\ $4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+E+F+G+H+I+J>10$,\\ $A+4B-\frac{2B^2}{C}+4D-\frac{2D^2}{E}+4F-\frac{2F^2}{G}+H+I+J>10$,\\ $4A-\frac{2A^2}{B}+4C-\frac{2C^2}{D}+4E-\frac{2E^2}{F}+4G-\frac{2G^2}{H}+4I-\frac{2I^2}{J}>10$.
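The count of 88 \emph{strong} inequalities can be sanity-checked: each such inequality corresponds to an ordered partition (composition) of 10 into summands 1 and 2, and there are 89 of these; excluding the all-ones composition, which yields only the basic inequality $A+B+\cdots+J>10$, leaves 88. A short Python sketch of this count (ours, not part of the original Mathematica verification):

```python
# Enumerate ordered partitions (compositions) of 10 into parts 1 and 2,
# each of which corresponds to one candidate strong inequality.
def compositions(n, parts=(1, 2)):
    """Return all ordered lists of the given parts summing to n."""
    if n == 0:
        return [[]]
    result = []
    for p in parts:
        if p <= n:
            for tail in compositions(n - p, parts):
                result.append([p] + tail)
    return result

comps = compositions(10)
print(len(comps))  # 89 compositions in total
# Excluding the all-ones composition leaves the 88 strong inequalities.
print(sum(1 for c in comps if any(p == 2 for p in c)))  # 88
```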
\begin{prop} The following cases do not arise: \end{prop} {\noindent \bf Case 16} $A > 1, B > 1, C > 1, D > 1, E > 1, F > 1, G \leq 1, H \leq 1, I\leq 1, J \leq 1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2D>E, 2E>F, 2F>G, 2G>H, 2H>I, 2I>J$.\\ Claim (i) $F<1.03$\\ If $F\geq1.03$, then $\min~\hyperref[s6]{\psi_6}>2$, so the inequality (1,2,2,4,1) holds, but $\max~\hyperref[h7]{\phi_9}\leq10$.\\ Claim (ii) $E<1.202$\\ For $E\geq1.202$, $\min~\hyperref[s5]{\psi_5}>2$, so the inequality (2,2,4,2) holds, but $\max~\hyperref[h23]{\phi_{25}}\leq10$.\\ Claim (iii) $D<1.415$\\ For $D\geq1.415$, $\min~\hyperref[s4]{\psi_4}>2$, so the inequality (2,1,4,2,1) holds, but $\max~\hyperref[h24]{\phi_{26}}\leq10$.\\ Claim (iv) $\frac{C^3}{DEF}<2$\\ Suppose $\frac{C^3}{DEF}\geq2$. Now if $\frac{G^3}{HIJ}\geq2$, then (2,4,4) holds, but $\max~\hyperref[h25]{\phi_{27}}<10$; and if $\frac{G^3}{HIJ}<2$, then $\max~\hyperref[h26]{\phi_{28}}\leq10$, which contradicts (2,4,2,2).\\ Claim (v) $\frac{F^3}{GHI}<2$\\ If $\frac{F^3}{GHI}\geq2$, then the inequality (1,2,2,4,1) holds, but $\max~\hyperref[h7]{\phi_9}<10$.\\ Finally $\max~\hyperref[h9]{\phi_{11}}\leq10$, which contradicts (1,2,2,2,2,1).\vspace{2mm}\\ {\noindent \bf Case 31} $A>1, B>1, C>1, D>1, E>1, F\leq1, G\leq1, H\leq1, I\leq1, J>1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2D>E, 2E>F, 2F>G, 2G>H, 2H>I$.\\ Claim (i) $E<1.03$\\ For $E\geq1.03$, $\min~\hyperref[s5]{\psi_5}>2$, so the inequality (2,2,4,1,1) holds, but $\max~\hyperref[h5]{\phi_7}\leq10$.\\ Claim (ii) $D<1.202$\\ For $D\geq1.202$, $\min~\hyperref[s4]{\psi_4}>2$, so (1,2,4,2,1) holds, but $\max~\hyperref[h27]{\phi_{29}}\leq10$.\\ Claim (iii) $C<1.414$\\ For $C\geq1.414$, $\min~\hyperref[s3]{\psi_3}>2$, so (2,4,2,1,1) holds, but $\max~\hyperref[h28]{\phi_{30}}\leq10$.\\ Claim (iv) $B<1.52$\\ For $B\geq1.52$, $\min~\hyperref[s2]{\psi_2}>2$.\\ If $\frac{F^3}{GHI}\geq2$, then (1,4,4,1) holds, but $\max~\hyperref[h8]{\phi_{10}}\leq10$.
\\ If $\frac{F^3}{GHI}<2$, then $\max~\hyperref[h11]{\phi_{13}}\leq10$, which contradicts (1,4,2,2,1).\\ Claim (v) $A^3/BCD<2$\\ For $A^3/BCD\geq2$, (4,2,2,1,1) holds, but $\max~\hyperref[h10]{\phi_{12}}\leq10$.\\ Claim (vi) $B>1.37$, $G<0.77$, $H<0.66$\\ If $B\leq1.37$ or $G\geq0.77$ or $H\geq0.66$, then $\max~\hyperref[h12]{\phi_{14}}\leq10$, which contradicts (2,2,2,2,1,1).\\ Now working as in Claims (iii) and (iv) respectively, we get $C<1.37$ and $B^3/CDE<2$.\\ Finally we get $\frac{E^3}{FGH}>2$, so (2,2,4,1,1) holds, but $\max~\hyperref[h5]{\phi_{7}}\leq10$.\vspace{2mm}\\ {\noindent \bf Case 32} $A>1, B>1, C>1, D>1, E>1, F\leq1, G\leq1, H\leq1, I\leq1, J\leq1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2D>E, 2E>F, 2F>G, 2G>H, 2H>I$.\\ First take $2I\leq J$; then $\frac{E^3}{FGH}>2$ and so the inequality (2,2,4,1,1) holds, but $\max~\hyperref[h5]{\phi_{7}}\leq10$. So we must have $2I>J$, and consequently we can use $4I-2I^2/J$ instead of $2J$ wherever required.\\ Now we have the following claims:\\ Claim (i) $E<1.208$\\ For $E\geq1.208$, $\min~\hyperref[s5]{\psi_{5}}>2$ and so the inequality (2,2,4,2) holds, but $\max~\hyperref[h23]{\phi_{25}}\leq10$.\\ Claim (ii) $D<1.418$\\ If $D\geq1.418$, then $\min~\hyperref[s4]{\psi_{4}}>2$ and so (2,1,4,2,1) holds, i.e. $4A-\frac{2A^2}{B}+C+4D-\frac{1}{2}\frac{D^4}{EFG}+4H-\frac{2H^2}{I}+J>10$. Taking this inequality as an additional constraint, we see that $\max~\hyperref[h27]{\phi_{29}}\leq10$, which contradicts (1,2,4,2,1).\\ Claim (iii) $C<1.605$\\ For $C\geq 1.605$, $\min~\hyperref[s3]{\psi_{3}}>2$. Further, if $G^3/HIJ\geq2$, the inequality (2,4,4) will hold, but $\max~\hyperref[h25]{\phi_{27}}\leq10$. If $G^3/HIJ<2$, then $\max~\hyperref[h26]{\phi_{28}}\leq10$, which contradicts the inequality (2,4,2,2).\\ Claim (iv) $B^3/CDE<2$\\ Suppose $B^3/CDE\geq2$. Further, if $F^3/GHI\geq 2$, the inequality (1,4,4,1) holds, but $\max~\hyperref[h8]{\phi_{10}}\leq10$.
If $F^3/GHI<2$, then $\max~\hyperref[h11]{\phi_{13}}\leq10$, which contradicts the inequality (1,4,2,2,1).\\ Claim (v) $A^3/BCD<2$\\ Suppose $A^3/BCD\geq2$. Further, if $E^3/FGH\geq 2$, the inequality (4,4,2) holds, but $\max~\hyperref[h29]{\phi_{31}}\leq10$. If $E^3/FGH<2$, then $\max~\hyperref[h30]{\phi_{32}}\leq10$, which contradicts the inequality (4,2,2,2).\\ Claim (vi) $B>1.38$, $H<0.775$, $I<0.7$\\ If any of $B\leq 1.38$, $H\geq 0.775$, $I\geq0.7$ holds, then $\max~\hyperref[h9]{\phi_{11}}\leq10$, which contradicts the inequality (1,2,2,2,2,1).\\ Claim (vii) $G<0.847$ and $C<1.555$\\ If $G\geq0.847$, then $\min~\hyperref[s7]{\psi_{7}}>2$. Further, if $C^3/DEF\geq2$, the inequality (2,4,4) will hold, but $\max~\hyperref[h25]{\phi_{27}}\leq10$. If $C^3/DEF<2$, then $\max~\hyperref[h1]{\phi_3}\leq10$, which contradicts the inequality (2,2,2,4).\\ Further, working as in Claim (iii), we get $C<1.555$.\\ Claim (viii) $E<1.185$ and $D<1.394$\\ If $E\geq1.185$, then $E^3/FGH>2$, so the inequality (1,2,1,4,2) holds, i.e. $A+4B-2B^2/C+D+4E-(1/2)E^4/FGH+4I-2I^2/J>10$. Taking this inequality as an additional constraint, we see that $\max~\hyperref[h23]{\phi_{25}}\leq10$, which contradicts (2,2,4,2).\\ Further, working as in Claim (ii), we get $D<1.394$.\\ Claim (ix) $C\leq1.49$\\ Suppose $C>1.49$; then $\min~\hyperref[s3]{\psi_{3}}>2$. Further, if $G^3/HIJ\geq2$, the inequality (2,4,4) is contradicted. So we must have $G^3/HIJ<2$; then, working as in Claim (viii), we get $E<1.11$. Now $\max~\hyperref[h26]{\phi_{28}}\leq10$, which contradicts the inequality (2,4,2,2). So we must have $C\leq1.49$.\\ Now working as in Claims (ii), (viii) and (vi), we get $D<1.34$, $E<1.155$, $B>1.41$, $J<0.855$.\\ Claim (x) $F<0.989$\\ For $F\geq0.989$, $F^3/GHI>2$ and the inequality (1,2,2,4,1) holds, but $\max~\hyperref[h7]{\phi_{9}}\leq10$.\\ Further, working as in Claims (vii), (ii) and (viii) respectively, we get $G<0.836$, $D<1.315$, $E<1.14$.\\ Claim (xi) $C<1.44$\\ Suppose $C^3/DEF>2$.
Further, if $G^3/HIJ\geq2$, the inequality (2,4,4) is contradicted. So we must have $G^3/HIJ<2$; then, working as in Claim (viii), we get $E<1.09$, which gives a contradiction to (2,4,2,2).\\ Finally, working as in Claim (ii), we get $D^3/EFG<2$, which further gives $\min~\hyperref[s5]{\psi_{5}}>2$, but $\max~\hyperref[h23]{\phi_{25}}\leq10$, which contradicts (2,2,4,2).\vspace{2mm}\\ {\noindent \bf Case 61} $A>1, B>1, C>1, D>1, E\leq1, F\leq1, G\leq1, H\leq1, I>1, J>1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2D>E, 2E>F, 2F>G, 2G>H$.\\ Claim (i) $D<1.03$\\ For $D\geq1.03$, $\min~\hyperref[s4]{\psi_4}>2$, and so the inequality (1,2,4,1,1,1) holds, but $\max~\hyperref[h31]{\phi_{33}}\leq10$.\\ Claim (ii) $C<1.2$\\ For $C\geq1.2$, $\min~\hyperref[s3]{\psi_3}>2$ and so the inequality (2,4,2,1,1) holds, but $\max~\hyperref[h28]{\phi_{30}}\leq10$.\\ Claim (iii) $B<1.414$\\ For $B\geq1.414$, $\min~\hyperref[s2]{\psi_2}>2$ and so (1,4,2,1,1,1) holds, but $\max~\hyperref[h14]{\phi_{16}}\leq10$.\\ Claim (iv) $A^3/BCD<2$\\ For $A^3/BCD\geq2$, the inequality (4,2,2,1,1) holds, but $\max~\hyperref[h10]{\phi_{12}}\leq10$.\\ Claim (v) $G<0.66$\\ For $G\geq0.66$, $\max~\hyperref[h12]{\phi_{14}}\leq10$, which contradicts the inequality (2,2,2,2,1,1).\\ Finally $\min~\hyperref[s4]{\psi_4}>2$ and we get a contradiction working as in Claim (i).\vspace{2mm}\\ {\noindent \bf Case 63} $A>1, B>1, C>1, D>1, E\leq1, F\leq1, G\leq1, H\leq1, I\leq1, J>1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2D>E, 2E>F, 2F>G, 2G>H$.\\ If $H<0.5$, then $\min~\hyperref[s3]{\psi_3}>2$, so the inequality (2,4,2,1,1) holds, but $\max~\hyperref[h13]{\phi_{15}}\leq10$, a contradiction.\\ Thus we have $H\geq0.5$, and then $2H>I$, so we can use $4H-2H^2/I$ instead of $2I$ wherever required. Now we have the following claims:\\ Claim (i) $D<1.2$\\ If $D\geq1.2$, then $\min~\hyperref[s4]{\psi_4}>2$ and so the inequalities (2,1,4,2,1) and (3,4,2,1) hold, i.e.
$4A-2A^2/B+C+4D-(1/2)D^4/EFG+4H-2H^2/I+J>10$ and $4A-A^3/BC+4D-(1/2)D^4/EFG+4H-2H^2/I+J>10$. Taking these two inequalities as additional constraints, we find that $\max~\hyperref[h27]{\phi_{29}}\leq10$, which contradicts (1,2,4,2,1).\\ Claim (ii) $C<1.418$\\ For $C\geq1.418$, $\min~\hyperref[s3]{\psi_3}>2$, so the inequality (2,4,2,1,1) holds, but $\max~\hyperref[h13]{\phi_{15}}\leq10$.\\ Claim (iii) $B<1.605$\\ For $B\geq1.605$, $\min~\hyperref[s2]{\psi_2}>2$. If $F^3/GHI>2$, then the inequality (1,4,4,1) holds, but $\max~\hyperref[h8]{\phi_{10}}\leq10$. If $F^3/GHI\leq2$, then $\max~\hyperref[h11]{\phi_{13}}\leq10$, which contradicts (1,4,2,2,1).\\ Claim (iv) $A^3/BCD<2$\\ Suppose $A^3/BCD\geq2$.\\ If $E^3/FGH\geq2$, then (4,4,1,1) holds, but $\max~\hyperref[h32]{\phi_{34}}\leq10$.\\ If $E^3/FGH<2$, then $\max~\hyperref[h10]{\phi_{12}}\leq10$, which contradicts (4,2,2,1,1).\\ Claim (v) $G<0.78$ and $B<1.555$\\ For $G\geq0.78$, we get $\max~\hyperref[h12]{\phi_{14}}\leq10$, contradicting (2,2,2,2,1,1).\\ Further, working as in Claim (iii), we get $B<1.555$.\\ Claim (vi) $B<1.51$\\ Suppose $B>1.51$; then $\min~\hyperref[s2]{\psi_2}>2$. Further, if $F^3/GHI\geq2$, we get a contradiction to (1,4,4,1). So $F^3/GHI<2$. Now, working as in Claim (i), we get $D<1.11$.
This further implies that $\max~\hyperref[h11]{\phi_{13}}\leq10$, which contradicts (1,4,2,2,1).\\ Now working as in Claims (ii) and (i) respectively, we get $C<1.32$ and $D<1.145$.\\ Claim (vii) $F<0.837$\\ For $F\geq0.837$, $F^3/GHI>2$, so the inequality (1,2,2,4,1) holds, but $\max~\hyperref[h7]{\phi_{9}}\leq10$.\\ Claim (viii) $A>1.42$ and $B>1.39$\\ If $A\leq1.42$ or $B\leq1.39$, we get $\max~\hyperref[h12]{\phi_{14}}\leq10$, contradicting (2,2,2,2,1,1).\\ Now working as in Claims (ii), (i) and (iii) respectively, we get $C<1.307$, $D<1.133$, $B<1.49$.\\ Claim (ix) $E<0.975$\\ For $E\geq0.975$, $E^3/FGH>2$, so the inequality (2,2,4,1,1) holds, but $\max~\hyperref[h33]{\phi_{35}}\leq10$.\\ Now, working as in Claim (ii), we get a contradiction if $C^3/DEF\geq2$. So $C^3/DEF<2$. Finally we get $\min~\hyperref[s2]{\psi_2}>2$ and $\max~\hyperref[h11]{\phi_{13}}\leq10$, which contradicts (1,4,2,2,1).\vspace{2mm}\\ {\noindent \bf Case 64} $A>1, B>1, C>1, D>1, E\leq1, F\leq1, G\leq1, H\leq1, I\leq1, J\leq1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2E>F, 2F>G, 2G>H$.\\ Claim (i) $A^3/BCD<2$\\ Suppose $A^3/BCD\geq2$; then the inequality (4,2,2,1,1) holds, but $\max~\hyperref[h10]{\phi_{12}}\leq10$.\\ Claim (ii) $D<1.268$\\ If $D\geq1.268$, then (3,4,2,1) holds, but $\max~\hyperref[h34]{\phi_{36}}\leq10$.\\ Claim (iii) $C^3/DEF<2$\\ For $C^3/DEF\geq2$, (2,4,2,1,1) holds, but $\max~\hyperref[h28]{\phi_{30}}\leq10$.\\ Finally $\max~\hyperref[h12]{\phi_{14}}\leq10$, which contradicts (2,2,2,2,1,1).\vspace{2mm}\\ {\noindent \bf Case 125} $A>1, B>1, C>1, D\leq1, E\leq1, F\leq1, G\leq1, H\leq1, I>1, J>1$.\\ Proof. Here $2A>B, 2B>C, 2C>D, 2D>E, 2E>F, 2F>G$.\\ First suppose $2G\leq H$; then $\max~\hyperref[h35]{\phi_{37}}\leq10$, which contradicts (1,2,2,2,1,1,1).\\ Now suppose $2G>H$. We have the following claims:\\ Claim (i) $C<1.2$\\ If $C\geq1.2$, then $\min~\hyperref[s3]{\psi_3}>2$, so (2,4,2,1,1) holds, but $\max~\hyperref[h13]{\phi_{15}}\leq10$.\\ Claim (ii) $B<1.421$.
For $B\geq1.421$, $\min~\hyperref[s2]{\psi_2}>2$, so (1,4,2,1,1,1) holds, but $\max~\hyperref[h14]{\phi_{16}}\leq10$.\\ Claim (iii) $A^3/BCD<2$\\ For $A^3/BCD\geq2$, (4,2,1,1,1,1) holds, but $\max~\hyperref[h15]{\phi_{17}}\leq10$.\\ Claim (iv) $F<0.79$, $G<0.7$, $H<0.84$\\ If $F\geq0.79$ or $G\geq0.7$ or $H\geq0.84$, then $\max~\hyperref[h35]{\phi_{37}}\leq10$, which contradicts (1,2,2,2,1,1,1).\\ Claim (v) $E<0.85$\\ For $E\geq0.85$, $\min~\hyperref[s5]{\psi_5}>2$, so (2,2,4,1,1) holds, but $\max~\hyperref[h5]{\phi_{7}}\leq10$.\\ Finally $\min~\hyperref[s3]{\psi_3}>2$ and, as in Claim (i), we get a contradiction to the inequality (2,4,2,1,1).\hfill$\Box$\vspace{2mm}\\ This settles all the cases and hence completes the proof of Woods' Conjecture for $n=10$.
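For readers wishing to reproduce the numerical checks in another system, the objective functions are straightforward to encode. The following Python sketch (ours; the sample point is hypothetical, and the full constraint set supplied to the Mathematica optimizer is omitted) merely evaluates $\phi_1$ and $\psi_2$ at one point matching the sign pattern of Case 288; the actual verification maximizes each $\phi_l$ and minimizes each $\psi_l$ over all constraints:

```python
# phi_1 and psi_2 from the case analysis, as plain functions of the ten
# variables A..J (packed in a dict). Only single-point evaluation is shown;
# the paper's proof bounds these functions over the full constraint set.
def phi1(v):
    # phi_1 = 4A - 2A^2/B + C + D + 4E - E^3/(FG) + H + I + J
    A, B, C, D, E, F, G, H, I, J = (v[k] for k in "ABCDEFGHIJ")
    return 4*A - 2*A**2/B + C + D + 4*E - E**3/(F*G) + H + I + J

def psi2(v):
    # psi_2 = B^3/(CDE)
    return v["B"]**3 / (v["C"] * v["D"] * v["E"])

# Hypothetical point with the Case 288 sign pattern
# (A>1, B<=1, C,D,E>1, F..J<=1); values are illustrative only.
point = dict(A=1.2, B=1.0, C=1.1, D=1.1, E=1.1,
             F=1.0, G=1.0, H=0.9, I=0.9, J=0.9)
print(round(phi1(point), 6), round(psi2(point), 6))  # 9.889 0.751315
```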
\section{Introduction} Human pose estimation is a popular research topic in computer vision because of its wide potential in many applications, such as video games, gesture control, action understanding, and pose retrieval. Human pose estimation from depth images is much more mature than estimation from 2D images. Some algorithms~\cite{Shotton_2011} based on depth maps have already been used in practice. However, the majority of visual media are in 2D format, and most mobile devices are equipped with only a 2D camera. Therefore, it is very useful to estimate human pose from 2D images. 2D pose estimation from images is more difficult than estimation from depth maps due to ambiguities of appearance and self-occlusion. In general, human pose estimation approaches can be classified into two types: methods based on part-based graphical models, and methods based on regression. In the first approach, using part-based graphical models, the human body structure is embedded into the connections between nodes of the graphical model, and the pose is estimated by finding the pose configuration that best matches the observation, as measured by a score function or distribution~\cite{Felzenszwalb05, Eichner2012IJCV, Sapp2010, YiYang2011, Eichner-pami2012, Johnson2011}. One popular graphical model for human pose estimation is the pictorial structure model~\cite{Felzenszwalb05} (PSM), which uses pairwise connections between parts to form a tree. Exact inference is possible and the solution is guaranteed to be globally optimal~\cite{Felzenszwalb05}, but inference is still very expensive for real-time applications. In general, there are two definitions of parts: using joints as parts, and using limbs as parts. Using joint points as parts avoids the need to predict the orientation of parts, although appearances around joints are more ambiguous. For both definitions of parts, the appearance model is critical for learning a good PSM~\cite{DGLG13, Eichner2012IJCV}.
Simple appearance models using linear filters are not capable of capturing the parts' appearances, while complicated features are expensive to evaluate at each sliding window. Several methods have been proposed to alleviate this problem by truncating the pose space \cite{Sapp2010, Eichner2012IJCV}. On the other hand, \cite{Yi2013pami} extends the traditional PSM by allowing each body part to have multiple modes. Also, multimodal models, such as mixtures of PSMs or hierarchical PSMs \cite{Johnson2011, Eichner-pami2012, modec13,Pishchulin:2013}, have been proposed. The computational complexity increases rapidly with the number of modes. In the second approach, pose estimation is viewed as a regression task \cite{tgp:2010}. These methods train a model to learn a mapping between the feature space and the pose space. A good feature representation that encodes pose information is especially critical for these methods. Currently, these approaches can only handle small amounts of training data, since calculating a prediction requires solving an expensive optimization problem. In recent years, deep neural network architectures have achieved success in many computer vision tasks~\cite{Sun2013, Farabet2013, Alex2012}. Convolutional neural networks (CNNs) are one of the most popular architectures used in computer vision problems because of their reduced number of parameters compared to fully connected models and their intuitive structure, which allows the network to learn translation-invariant features. In addition, convolution is a ``cheap'' operation that can be computed efficiently. However, because of the large capacity (i.e., many parameters) of a deep neural network, it is hard to train a network that generalizes well with limited data. In this paper, we propose a heterogeneous multi-task framework for human pose estimation using a deep convolutional neural network.
We frame pose estimation as a regression task, while also defining several accessory tasks to guide the network to learn useful features for pose estimation. In particular, these accessory tasks are sliding-window detectors for different body parts. In our framework, the heterogeneous tasks (regression and detection) are trained simultaneously, and we show that the regression network benefits greatly from the accessory detection tasks and converges to much better local minima than the network trained with only the regression task. We also empirically show that the activation patterns of neurons in the middle layers preserve location information and are selective to localized body-part shapes. \begin{figure*}[!t] \begin{center} \includegraphics[width=\linewidth]{img/sys-diagram.png} \end{center} \vspace{-0.15in} \caption{Heterogeneous multi-task learning for pose estimation. Given an image, a human body detector is used to find the bounding box around the human. Next, a convolutional neural network (CNN) extracts shared features from the cropped image, and the shared features are the inputs to the joint point regression tasks and the body-part detection tasks. The CNN, regression, and detection tasks are learned simultaneously, resulting in a shared feature representation that is good for all tasks. } \vspace{-0.2in} \label{fig:sysdiagram} \end{figure*} \section{Related work} Multi-task learning is typically applied when multiple tasks resemble each other and training data for each task is limited \cite{Yu:2005, Yang2009, Evgeniou:2005}. We refer the reader to \cite{Yu:2005,Evgeniou:2005} for a review. In the following, we briefly compare with the previous multi-task approaches and regression networks that are most closely related to our work. In \cite{Yang2009}, a heterogeneous multi-task model is trained by encouraging the parameters for the regression task and the classification task to share the same sparsity pattern.
They found that joint-training tends to find the most useful features in the input for both tasks. Instead of sharing a sparsity pattern, our framework forces the heterogeneous tasks to share the same feature layers, which results in learning shared feature representation that is good for both tasks. In \cite{Farabet2013}, a deep convolutional network is trained for scene labeling, by defining a multi-category classification task for each pixel. Instead, we define our detection tasks over sliding windows in the image. Since we allow each window to contain multiple body parts, each detection task is essentially a binary classification task in a window. \cite{taylor2010pse} trains a deep CNN to learn a pose-sensitive embedding with nonlinear NCA (neighbourhood components analysis) regression, and predicts the location of the head and hands by finding the nearest neighbor with the learned embedding features. In contrast to \cite{taylor2010pse}, we introduce accessory tasks for learning shared ``pose features'', and output the joint locations directly from the regression network. In \cite{Sun2013}, a multi-stage system with deep convolutional networks is built for predicting facial point locations. In order to embed a structure prior of the face, they use a set of neural networks that focus on different regions of the input image. Similarly, \cite{deeppose2014} trained cascaded convolutional networks for human pose estimation. Instead of increasing the number of stages for refinement, here we explore how to improve the performance of a single regression network by introducing accessory tasks. Our multi-task strategy could also be used in conjunction with the multi-stage strategy. In \cite{Weston2008} semi-supervised learning is used to guide the network to learn an internal representation that reflects the similarity between training samples. 
The authors propose that the unsupervised network can either share layers with a supervised network, or serve as an input into the supervised network. In contrast, we design multiple classification tasks for body-part detection at different locations, while all the tasks share the same learned feature space. Finally, in order to investigate the feature representation learned by the neural network, \cite{Quoc2012} estimates the ``optimal'' input that maximizes the activation of a selected neuron, and finds that the ``optimal'' input resembles a human face. In contrast to \cite{Quoc2012}, we visualize a feature by averaging image patches that are associated with the neurons with maximum responses in an upper layer, and obtain similar results. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{img/pose-anno.pdf} \end{center} \vspace{-0.15in} \caption{(left) Joint point and body part annotation for pose estimation, and (right) the corresponding indicator map for left-upper arm detection. In the left image, the green dot indicates the joint point, and the red line is the body part. The white boxes indicate detection windows that contain the left upper arm. } \label{fig:annotation} \vspace{-0.2in} \end{figure} \section{Heterogeneous Multi-task Learning} Our heterogeneous multi-task framework consists of two types of tasks: 1) a pose regression task, where the aim is to predict the locations of human body joints in an image; 2) a set of body-part detection tasks, where the goal is to classify whether a window in the image contains a specific body part. In the following, we assume that a bounding box around the human has already been provided, e.g., using an upper body detector~\cite{ubd}. All the coordinates are with respect to the bounding box containing the human. Our framework is summarized in Figure~\ref{fig:sysdiagram}. \subsection{Joint point regression} The regression task is to predict the location of the joint points of each human body part.
The coordinates of each joint point are taken as the target values. We normalize all the coordinates by the size of the bounding box so that their values lie in the range $[0, 1]$. We use the squared error as the cost function for our regression task, \begin{align} E_{r}(\hat{J}_{i}, J_i) = \| J_i - \hat{J}_i \|_{2}^2, \end{align} where ${J}_i$ and $\hat{J}_i$ are the ground-truth and predicted positions for the $i$-th joint, respectively. \subsection{Body part detection} For the body-part detection tasks, the goal is to determine whether a given window in the image contains a specific body part. Let $P$ be the total number of body parts, and let $L$ be the number of overlapping windows inside the bounding box. For the $p$-th body part, we train $L$ classifiers, namely $C_{p,1}, ..., C_{p,L}$, to determine whether the $l$-th window contains body part $p$. Note that we train a separate classifier for each of the $L$ locations, which allows the part detector to learn a location-specific appearance for the part, as well as location-specific contextual information about other parts. For example, a lower arm in the upper corner of the bounding box is more likely to be vertical or diagonal. In our training set, the annotated body parts are represented as sticks. Hence, to train the body-part detectors, we first need to identify the windows in the training set that contain each body part. A window is considered to contain a body part if the portion of the body part inside the window is at least a particular length, relative to the total length of the part.
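As a concrete illustration of the joint-point regression target described above, the following sketch (ours; the joint location and crop size are hypothetical) normalizes a joint coordinate by the bounding-box size and evaluates the squared-error cost $E_r$:

```python
def normalize_joint(joint_xy, box_w, box_h):
    """Scale a joint coordinate by the bounding-box size so values lie in [0, 1]."""
    x, y = joint_xy
    return (x / box_w, y / box_h)

def regression_cost(J_hat, J):
    """Squared-error cost E_r(J_hat, J) = ||J - J_hat||_2^2 for one joint."""
    return sum((a - b) ** 2 for a, b in zip(J, J_hat))

# Hypothetical joint at pixel (56, 84) inside a 112x112 crop.
J = normalize_joint((56.0, 84.0), 112, 112)  # (0.5, 0.75)
print(regression_cost((0.4, 0.7), J))        # approximately 0.0125
```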
Specifically, we use the following formula to convert the stick annotation of body part $p$ into a binary label indicating its presence/absence in the $l$-th window, \begin{align} y_{p,l} = \begin{cases} 1 ,\quad \text{if} \ \mathrm{len}(window_{l} \cap stick_{p}) > \beta \cdot \mathrm{len}(stick_{p}) \\ 0 , \quad \text{otherwise}, \end{cases} \label{equ:indmap} \end{align} where $stick_{p}$ is the segment of the $p$-th body part, and $window_l \cap stick_p$ is the portion of $stick_p$ inside $window_l$. $\beta$ is a fixed threshold, which we empirically set to $\beta=0.3$ in all of our experiments. Finally, calculating the binary indicator $y_{p,l}$ for each window $l$ results in a binary indicator map for part $p$. Figure \ref{fig:annotation} shows an example of converting the upper-arm annotation into an indicator map. Note that we allow multiple body parts to appear in the same window, and also allow one body part to appear in several windows. For each detection task for part $p$ and window $l$, we minimize the cross-entropy error function, \begin{align} E_d(\hat{y}_{p, l}, y_{p,l}) = - y_{p, l} \log ( \hat{y}_{p,l}) - (1 - y_{p, l}) \log (1- \hat{y}_{p,l}), \label{equ:cost-reg} \end{align} where $y_{p, l}$ is the ground-truth label, and $\hat{y}_{p, l}$ is the corresponding detection probability from the classifier. \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\linewidth]{img/deep-pose-conv-flat.pdf} \end{center} \vspace{-0.1in} \caption{Network architecture for pose estimation: The input layer is a $112 \times 112$ RGB image. The shared CNN consists of 3 convolutional layers, each followed by a max-pooling layer. The final pooling layer is connected to separate sub-networks for the joint point regression and body part detection tasks. Each subnetwork contains three fully connected layers.
} \vspace{-0.2in} \label{fig:network-structure} \end{figure*} \subsection{Global cost function} Our global cost function is the linear combination of the regression cost function over all joints and the detection cost function over all parts and windows, summed over all training images, \begin{align} \label{equ:globalcost} \Phi = \lambda_{r} \sum_{t} \sum_{i}E_{r}(\hat{J}_{i}^{(t)}, J_i^{(t)}) + \lambda_{d}\sum_{t}\sum_{p}\sum_{l} E_{d}(\hat{y}_{p,l}^{(t)}, y_{p,l}^{(t)}), \end{align} where $\lambda_{r}$ and $\lambda_{d}$ are the weights for the regression and detection tasks, respectively, and the superscript $(t)$ indicates the index of the training image. \subsection{Network Structure} The design of our network is based on the following considerations: \begin{itemize} \item {\bf Low-level feature sharing}: We allow the detection tasks and regression task to share the same learned feature representation. This is motivated by two reasons. First, features learned for the detection tasks should also be helpful for identifying parts or joints in the regression task. Second, feature sharing reduces the number of parameters and encourages the network to generalize over a larger range of samples. \vspace{-0.2in} \item {\bf Preservation of location information}: The detection task is to determine whether a local window contains a specific body part, while the regression task is to predict the coordinates of the joint positions. Hence, the features extracted from the lower layers should not be translation invariant, i.e., the positions of the features should be preserved in the feature map. \vspace{-0.1in} \item {\bf Integration of context information}: Sometimes it is difficult to distinguish different body parts by looking only at small local windows. For example, when wearing long sleeves, the upper arm and lower arm can have very similar appearance, and hence it is hard to distinguish them by looking only at the windows containing these two parts. 
Including context information about neighboring parts can help to improve the part detector. Hence, the input for each local part detector is the whole bounding-box image (the whole human). \vspace{-0.1in} \end{itemize} Our network structure is shown in Figure~\ref{fig:network-structure}. The input is an RGB image containing a human. The first 6 hidden layers are shared by both the regression and detection tasks. In the shared layers, we only use convolutional layers and pooling layers to ensure that the activations of neurons are affected only by local patterns in the input. We also choose small filter and stride sizes to retain more location information. Each convolutional layer consists of several maps. Filter weights are shared within each map, which means the neurons within the same map are sensitive to the same patterns at different locations in the previous layer. Neurons at the same position (but belonging to different maps) will always contribute to the same unit in the next layer. A max-pooling layer is added after each convolutional layer to increase non-linearity and to integrate local information. The value of neuron $i$ in a convolutional layer or regression layer is calculated by \begin{equation} \vspace{-0.11in} v(i) = f_{act}( \sum_{j \in R_{i}}w_{i,j} v(j) ), \end{equation} where $R_{i}$ is the set of neurons from which neuron $i$ receives input, $w_{i,j}$ is the weight between neuron $i$ and neuron $j$, and $f_{act}$ is the activation function of that layer. Most of the neurons in our network are Rectified Linear Units (ReLU)~\cite{relu2010}, where $f_{act}(x) = \max(0,x)$. \cite{relu2010} showed that ReLUs are good for recognition tasks and fast to train. We use the hyperbolic tangent as the activation function in the last layer of the regression task, and the logistic function in the last layer of each detection task. 
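As a minimal illustration (a NumPy sketch, not the authors' implementation), the per-neuron computation $v(i) = f_{act}(\sum_{j \in R_i} w_{i,j} v(j))$ and the three activation functions used in the network can be written as:

```python
import numpy as np

def relu(x):
    # Hidden-layer activation: f(x) = max(0, x)
    return np.maximum(0.0, x)

def logistic(x):
    # Detection-head activation: outputs a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def neuron_value(weights, inputs, f_act):
    # v(i) = f_act( sum_j w_{i,j} v(j) ) over the receptive field R_i
    return f_act(np.dot(weights, inputs))

# Hidden neuron (ReLU) with a toy 2-input receptive field
v_hidden = neuron_value(np.array([0.5, -1.0]), np.array([2.0, 1.0]), relu)

# Regression head uses tanh; detection heads use the logistic function
v_reg = neuron_value(np.array([0.3, 0.2]), np.array([1.0, -1.0]), np.tanh)
v_det = neuron_value(np.array([0.3, 0.2]), np.array([1.0, -1.0]), logistic)
```

The weights and inputs above are arbitrary toy values chosen only to exercise each activation.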
\vspace{-0.06in} \subsection{Training} \vspace{-0.08in} We jointly train the regression and detection networks with the global cost function in (\ref{equ:globalcost}). We use back-propagation~\cite{Lecun98} to update the weights. Given a training image, predictions for both tasks are calculated, and the corresponding gradients are back-propagated through the network. For layers that feed several output branches, the gradients from those branches are summed together for the weight update. ``Dropout'' \cite{dropout2012} is also used in the first fully connected layers of the regression and detection tasks to prevent over-fitting. The dropout probability is set to 0.5 in the experiments. In each iteration, the neurons in dropout layers are randomly selected with probability 0.5 to forward their activation to the output units, and only the selected neurons participate in the back-propagation during this iteration. In the testing stage, all the neurons are used for prediction, with their activation values multiplied by 0.5 for normalization. This strategy turns out to be very effective, since without ``dropout'' our network severely overfits. We refer the reader to \cite{Alex2012} for more details about the training procedure. \section{Experiments} \vspace{-0.1in} We present experiments using our method HMLPE (heterogeneous multi-task learning for pose estimation). \vspace{-0.05in} \subsection{Training data} We collect training data from several data sets, including Buffy Stickmen~\cite{Eichner2012IJCV}, ETHZ Stickmen~\cite{Eichner2009BMVC}, Leeds Sports Pose (LSP~\cite{Johnson2011}), Synchronic Activities Stickmen (SA~\cite{Eichner-pami2012}), Frames Labeled In Cinema (FLIC~\cite{modec13}), and We Are Family (WAF)~\cite{eichner2010}. For Buffy, LSP, and FLIC we only use their respective training sets, while we use the whole ETHZ, SA, and WAF datasets for training. In total, we have collected 8427 images for training. 
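The dropout scheme described in the Training subsection (keep each neuron with probability 0.5 during training, and scale all activations by 0.5 at test time) can be sketched as follows; this is illustrative NumPy code, not the authors' implementation:

```python
import numpy as np

def dropout_forward(activations, train, p_keep=0.5, rng=None):
    """Dropout as described above: a random 0/1 mask at training time,
    deterministic scaling by p_keep at test time."""
    if train:
        rng = rng or np.random.default_rng()
        mask = rng.random(activations.shape) < p_keep
        return activations * mask       # dropped neurons output 0
    return activations * p_keep         # test-time normalization

a = np.array([1.0, 2.0, 3.0, 4.0])
test_out = dropout_forward(a, train=False)   # -> [0.5, 1.0, 1.5, 2.0]
train_out = dropout_forward(a, train=True, rng=np.random.default_rng(0))
```

The test-time scaling keeps the expected activation equal between the two modes, which is why the surviving neurons can be used unchanged at training time.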
We represent the human body with a set of joints, and use the segments between those joints to represent body parts. For data sets with only stick labels, we use the nearest end of the stick, or the average of the nearest ends, as the joint point. We define 8 joints (nose, neck, left and right shoulders, left and right elbows, and left and right wrists), and 7 body parts (head, left and right shoulders, left and right upper arms, and left and right lower arms). Since Buffy, ETHZ, SA, and WAF only provide the upper end and lower end of the head, we use the middle point as the nose position. We illustrate our parts and joints definition in Figure~\ref{fig:annotation}. Bounding boxes for the training images are generated according to the ground-truth labels. We select a bounding box for each training image that contains all the annotated body parts, and then resize the image inside the bounding box to $128 \times 128$. We then augment the dataset by randomly selecting 16 bounding boxes of size $112 \times 112$ inside the extracted human image, and apply a mirror transformation to double the training set. In total, the training set is augmented by a factor of 32. In the current experiments, images with occluded body parts are removed, although our framework could be extended to handle training poses with occlusion. \vspace{-0.02in} \subsection{Experiment setup} For our HMLPE, the pose regression task predicts 8 joint positions (16 outputs in total), and the detection task has 7 body parts. For the detection task in HMLPE, we use 64 local windows of equal size uniformly distributed in the bounding box. The window size is set to $30 \times 30$ in all experiments, which is comparable to the size of a body part found in the training set. We pre-train the network using the training data discussed in the previous section, in order to obtain an initial network. 
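The $\times 32$ augmentation described above (16 random $112 \times 112$ crops of each $128 \times 128$ human image, each crop also mirrored) can be sketched as follows. This is illustrative NumPy code, not the authors' pipeline; in the actual pipeline the joint coordinates and the left/right part labels would also have to be transformed along with each crop and mirror:

```python
import numpy as np

def augment(image, n_crops=16, crop=112, rng=None):
    """From a 128x128 human image, take n_crops random crop x crop windows
    and mirror each one, giving 2 * n_crops samples per image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    samples = []
    for _ in range(n_crops):
        y = rng.integers(0, h - crop + 1)
        x = rng.integers(0, w - crop + 1)
        patch = image[y:y + crop, x:x + crop]
        samples.append(patch)
        samples.append(patch[:, ::-1])  # horizontal mirror
    return samples

img = np.zeros((128, 128, 3))
samples = augment(img, rng=np.random.default_rng(0))  # 32 samples per image
```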
% Then, we use the initial network as the starting point for training the network using the training data of a specific dataset, either Buffy or FLIC. The initial network serves as a prior to help regularize the network. % We train and evaluate our network on a Dell T3400 with GTX 770 4G. Training the network takes 1 to 2 days, while the evaluation for 4000 images takes 5-6 seconds. % \vspace{-0.02in} \subsection{Evaluation on Buffy Set} \begin{table} \small \begin{center} \caption{PCP on Buffy test set. LL, RL, LU, and RU mean left-lower, right-lower, left-upper, and right-upper.} \label{tab:pcp-buffy} \begin{tabular}{|c|c|c|c|c|} \multicolumn{5}{c}{whole test set (276 images)} \\ \hline PCP ($\alpha = 0.5$) & LL arms & RL arms& LU arms & RU arms \\ \hline HMLPE & 55.80 & 56.88 & 90.22 & 93.12 \\ \hline RoDG-Boost\cite{Hara2013} & \multicolumn{2}{c|}{51.5} & \multicolumn{2}{c|}{92.8} \\ \hline Eichner\cite{Eichner2012IJCV} & \multicolumn{2}{c|}{50.0} & \multicolumn{2}{c|}{81.9} \\ \hline MoP \cite{Yi2013pami} $M=6$ & 51.45 & 55.43 & 82.25 & 87.68 \\ \hline MoP \cite{Yi2013pami} $M=9$ & 56.52 & 55.80 & 84.78 & 89.13 \\ \hline MoP \cite{Yi2013pami} $M=12$ & 60.87 & 59.78 & 85.87 & 88.41 \\ \hline \multicolumn{5}{c}{test subset with correct upper-body detections (267 images)}\\ \hline HMLPE & 57.68 & 58.80 & 93.26 & 96.26 \\ \hline MoP \cite{Yi2013pami} & \multicolumn{2}{c|}{57.5} & \multicolumn{2}{c|}{94.3}\\ \hline \end{tabular} \vspace{-0.3in} \end{center} \end{table} % We use the same upper body detector as \cite{Eichner2012IJCV, Hara2013}. In order to get the human bounding box, the width and height of the upper body detection windows are scaled by a fixed factor ($s_{width}=1.7$, $s_{height}=4.2$), which were empirically set according to the training set. The scaled detection window is used as the human bounding box, and the image is cropped and resized to $112 \times 112$. We use Percentage of Correct Part (PCP) to measure the accuracy of pose estimation. 
As pointed out in \cite{Hara2013}, the previous PCP evaluation measure does not compute PCP correctly. We use the evaluation tool provided by \cite{Hara2013} to calculate the corrected PCP, where an estimated body part with end points $(e_{1}, e_{2})$ is considered correct if \begin{equation} \begin{split} \| e_{1} - g_{1} \|_{2} \le \alpha \cdot L & \ \ \mathrm{and}\ \ \| e_{2} - g_{2} \|_{2} \le \alpha \cdot L \\ & \mathrm{or} \\ \| e_{2} - g_{1} \|_{2} \le \alpha \cdot L & \ \ \mathrm{and}\ \ \| e_{1} - g_{2} \|_{2} \le \alpha \cdot L \end{split} \end{equation} where $(g_{1}, g_{2})$ and $L$ are the ground-truth end points and length of the part, and $\alpha$ is the parameter for PCP. We use the standard value of $\alpha=0.5$. Table~\ref{tab:pcp-buffy} presents the PCP results for lower and upper arms (since we have different definitions of the torso and head parts, we do not show their evaluation here). On the whole Buffy test set (276 images), HMLPE achieves better results than \cite{Hara2013, Eichner2012IJCV} on the more difficult parts, the lower arms (4.8\% improvement), but gets a slightly worse result than \cite{Hara2013} on the upper arms (1.1\% lower). Evaluation on the whole Buffy test set includes errors due to mis-detection of the upper body. To investigate the pose estimation performance alone, we also present results on the subset of the Buffy test set where the upper body detector predicts the correct bounding box. In this case, HMLPE achieves slightly better results than \cite{Yi2013pami} (0.7\% better on lower arms and 0.5\% better on upper arms). We also run the code from \cite{Yi2013pami} on our full training set, using different numbers of components per part (denoted as $M=\{6,9,12\}$). The default setting of $M=6$ gets worse results than the model trained with only the Buffy training set, most likely because the full training set contains more variance in poses. 
Increasing the number of components improves the accuracy, but at an increased training cost, e.g., 4 days were needed to train the $M=9$ model. Using $M=12$, \cite{Yi2013pami} has better PCP (4\%) on the lower arms compared to HMLPE. On the other hand, HMLPE has better PCP (4.5\%) than \cite{Yi2013pami} on the upper arms. \subsection{Evaluation on FLIC Data set} Next we evaluate on the FLIC test set. We use the same torso box as \cite{modec13} with scale factors ($s_{width} = 3.5$, $s_{height} = 4.5$) set empirically from the training set. \cite{modec13} uses the following accuracy measure to evaluate performance, \begin{equation} acc_{{J}_{i}}(r) = \frac{100}{N_{sample}} \sum_{t = 1}^{N_{sample}} \pmb{1}\left(\frac{100\cdot \| {J}_{i}^{(t)} - \hat{J}_{i}^{(t)} \|_{2}}{\|{J}_{lhip}^{(t)} - {J}_{rsho}^{(t)} \|_{2}} \le r\right), \end{equation} where ${J}_{i}^{(t)}$ and $\hat{J}_{i}^{(t)}$ are the ground truth annotation and predicted position for the $i$-th joint point of test image $t$. Since \cite{modec13} compares their method with several previous approaches, and shows that their model performs the best under this criterion, we only compare with \cite{modec13}. The accuracy results are shown in Figure~\ref{fig:flic-compare}. HMLPE has better accuracy under a looser criterion (larger $r$); for $r=20$, the accuracy of HMLPE on wrists and elbows is about $6\%$ higher than MODEC. On the other hand, HMLPE has worse accuracy under a strict criterion (when $r=6$, HMLPE is about $7\%$ and $5\%$ lower than MODEC on wrists and elbows). These results suggest that HMLPE can robustly estimate the general pose, but is less accurate at estimating the exact location of each joint. Also, we have trained MODEC on our full training set, but did not observe any improvement. In addition, we measure PCP on the FLIC dataset to facilitate future comparisons (see Table~\ref{tab:pcp-flic}). 
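The corrected PCP matching rule from \cite{Hara2013} quoted above can be made concrete with a small Python helper (a hypothetical illustration, not the evaluation tool itself):

```python
import numpy as np

def part_correct(e1, e2, g1, g2, alpha=0.5):
    """Corrected PCP: an estimated part (e1, e2) matches the ground truth
    (g1, g2) if both end points lie within alpha * L of the corresponding
    ground-truth end points, in either end-point order."""
    e1, e2, g1, g2 = map(np.asarray, (e1, e2, g1, g2))
    L = np.linalg.norm(g1 - g2)              # ground-truth part length
    tol = alpha * L
    d = lambda a, b: np.linalg.norm(a - b)   # Euclidean distance
    return ((d(e1, g1) <= tol and d(e2, g2) <= tol) or
            (d(e2, g1) <= tol and d(e1, g2) <= tol))
```

The second clause of the disjunction is what distinguishes the corrected measure: a prediction whose end points are swapped relative to the annotation still counts as correct.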
\begin{figure} \begin{center} \includegraphics[width=0.9\linewidth,height=0.9\linewidth]{img/HMLPE-FLIC-cvpr2014ws-crop-embed.pdf} \end{center} \vspace{-0.1in} \caption{Test results on the FLIC data set.} \vspace{-0.10in} \label{fig:flic-compare} \end{figure} \begin{table} \begin{center} \small \caption{PCP on FLIC test set.} \label{tab:pcp-flic} \begin{tabular}{|c|c|c|c|c|} \hline PCP ($\alpha = 0.5$) & LL arm & RL arm & LU arm & RU arm \\ \hline HMLPE & 59.05 & 56.70 & 92.81 & 92.52 \\ \hline \end{tabular} \vspace{-0.3in} \end{center} \end{table} \subsection{Effect of multi-task training} Next we study the effect of multi-task training, i.e., the joint learning of the regression and detection tasks. We set different values for the weights of the regression and detection tasks. All parameters except the weights in the cost function are kept the same. We show the training and testing errors in Figure~\ref{fig:net-structure-comparision} and in Table~\ref{tab:net-structure-comparison}. Firstly, the network with only the regression task performs poorly on both the training and testing sets.\footnote{Training the network with different initializations gave similar results.} Even tiny weights on the detection tasks help to improve convergence, leading to a significant performance increase. Within a certain range, increasing the weights on the detection tasks leads to lower errors on the test set. For larger weights on the detection tasks, the performance decreases. This is reasonable since the gradient is dominated by the detection tasks in this case. These results suggest that the regression task benefits greatly from the feature representation induced by the detection tasks. The gradients from the detection tasks not only guide the network to converge to a better minimum on the training set, but also help to enhance generalization. 
Although the network needs to learn $7 \times 8 \times 8$ detectors from limited training data, sharing features among the detection and regression tasks seems to be an effective way of learning useful features for both tasks. \begin{figure*}[t] \begin{center} \small \begin{tabular}{cccc} \multicolumn{2}{c}{\quad \quad pose regression task} & \multicolumn{2}{c}{part detection tasks} \\ \parbox{0.24\textwidth}{ \quad \qquad \qquad training error} & \parbox{0.2\textwidth}{\qquad \quad test error} & \parbox{0.2\textwidth}{\quad\qquad training error} & \parbox{0.25\textwidth}{\quad\qquad test error} \\ \multicolumn{4}{l}{\includegraphics[width=0.9\linewidth, height=0.4\linewidth]{img/task-weighting.pdf}} \end{tabular} \caption{Effect of changing the weights in multi-task learning: training and test errors for (left) the pose regression task, and (right) the detection tasks. The test errors are the average costs of the regression (left) and detection (right) tasks on the Buffy and FLIC test datasets.} \vspace{-0.3in} \label{fig:net-structure-comparision} \end{center} \end{figure*} \begin{table}[tbhp] \caption{Effect of changing the weights on each task; the training and testing errors are for epoch 100.} \tabcolsep=0.07cm \small \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $\lambda_{r}/\lambda_{d}$&0 &0.5 & 1 & 2 & 4 & $10^{10}$& $\infty$ \\ \hline training error (R)& - &0.059&0.039&0.036&0.036&0.036 &0.241\\ test error (R) & - &0.149&0.131&0.132&0.137&0.163 &0.284\\ \hline training error (P)& 0.045 &0.051&0.041&0.041&0.042&0.693 & - \\ test error (P) & 0.069 &0.070&0.064&0.064&0.066&0.693 & - \\ \hline \end{tabular} \label{tab:net-structure-comparison} \end{table} \vspace{-0.1in} \section{Visualization of features} \label{sec:visualization} In this section, we investigate the features learned by the network. Since the first convolutional layer operates on the input image, the filter responses reflect which low-level patterns in the image the neurons are sensitive to. 
The learned filters are shown in Fig.~\ref{fig:vt}a, and as expected, they look like edge or gradient detectors for different orientations. For the 2nd and 3rd layers (mid- and high-level features), we use a different approach than \cite{Quoc2012}, which finds the input that maximizes one specific neuron. Instead, we use the property that our network is only locally connected in the first 6 layers. That is, the activations of some neurons in the middle layers are only affected by a sub-region of the input image. In addition, since the connectivity is regular, we can backtrack through the network to find the region of the image from which a neuron received its input. We present the backtracking algorithm in Algorithm~\ref{algo:backtrack}. Since filter weights are shared within the same feature map, neurons in the same map are ``expecting'' the same local patterns in the previous layer. Based on these properties, we consider the activation of one feature map at a time. Instead of solving an optimization problem, we select the patches in the original image that contribute to the maximum activation in one feature map. Figure~\ref{fig:backtrackingshow} shows the backtracked patches on a Buffy test image for different features in the 3rd convolutional layer. Surprisingly, we find some feature maps work like body part detectors --- the maximal activation in some maps occurs more frequently on neurons that take inputs from regions containing body parts, such as the head, shoulders, and arms. To visualize the feature of a map, we average all its corresponding backtracked patches from all training images. The average backtracked patches for each map in the 2nd and 3rd convolutional layers are shown in Figure~\ref{fig:vt}b and \ref{fig:vt}c. The average backtracked patches show clearer patterns of body parts like the head, shoulders, and upper arms. 
In particular, the visualizations of the mid-level features in Fig.~\ref{fig:vt}b look like body part detectors, such as head (feature 1), neck (feature 9), arms (feature 5), and shoulders (feature 14). Similarly, the high-level features in Fig.~\ref{fig:vt}c look like {\em localized} body parts, e.g., heads in different positions (features 2, 3, and 11), left and right shoulders (features 1 and 10), and arms (features 6, 9, and 15). There are also a few high-level features that do not correspond to specific body parts. For example, feature 8 in Fig.~\ref{fig:vt}c has two horizontal bands of color, and appears to respond to horizontal background structures, such as windows and the tops of door frames (see Fig.~\ref{fig:backtrackingshow}). This feature could be useful for identifying context information, such as the location of the top of the door relative to the top of the head. \begin{algorithm}[b!] \caption{Backtracked patches} \small \label{algo:backtrack} \begin{algorithmic} \REQUIRE $layer\_list, R = (mx,my,mx,my) $ \COMMENT{(mx,my) are the location of maximum activation} \FOR{$l$ in reversed($layer\_list$)} \STATE $R_{lx} \gets R_{lx} \cdot l.stride$ \STATE $R_{ly} \gets R_{ly} \cdot l.stride$ \STATE $R_{ux} \gets R_{ux} \cdot l.stride + l.filter\_size - 1$ \STATE $R_{uy} \gets R_{uy} \cdot l.stride + l.filter\_size -1$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{ \raisebox{0.35in}{(a)} \includegraphics[width=0.8\linewidth]{img/filter-net-10-22-conv1.png} }\\ \raisebox{0.45in}{(b)} \includegraphics[width=0.43\linewidth]{img/avgconv2.png} & \raisebox{0.45in}{(c)} \includegraphics[width=0.43\linewidth]{img/avgconv3.png} \\ \end{tabular} \caption{Visualization of low-, mid-, and high-level features in our trained network: (a) shows 32 filter weights in the first convolutional layer; visualizations of the (b) mid-level features from the second convolutional layer; and (c) high-level features 
from the third convolutional layer. } \vspace{-0.3in} \label{fig:vt} \end{center} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\linewidth]{img/img_37_backtrack.png} & \includegraphics[width=0.5\linewidth]{img/img_309_backtrack.png} \\ \end{tabular} \caption{Examples of backtracked patches in the original image. Each image patch is the backtracked patch that caused maximum activation in a feature map of the 3rd convolutional layer. The order of patches corresponds to the order of features in Fig.~\ref{fig:vt}c.} \label{fig:backtrackingshow} \vspace{-0.4in} \end{center} \end{figure*} \vspace{-0.1in} \section{Conclusion} \vspace{-0.05in} In this paper, we have proposed a heterogeneous multi-task learning framework with a deep convolutional neural network for human pose estimation. Our framework consists of two tasks: pose regression and body-part detection via sliding-window classifiers. We empirically show that jointly training pose regression with the detection tasks guides the network to learn meaningful features for pose estimation, and makes the network generalize well on testing data. Finally, we visualize the mid- and high-level features using the average of backtracked patches from the maximally responding neurons. We found that these neurons are selective for shape patterns resembling localized human body parts. In future work, we will extend our network for learning poses with occlusion, and combine our framework with unsupervised learning for pre-training the network. In addition, we would like to extend our framework for estimating human pose from video sequences, as well as other structured objects. \begin{acknowledgements} \vspace{-0.1in} This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CityU 123212, CityU 118810, and CityU 119313). \vspace{-0.1in} \end{acknowledgements} \vspace{-0.1in} {\small \bibliographystyle{ieee}
\section{Introduction} The most information-rich exoplanetary systems are those in which the companion happens to transit in front of its parent star. Transiting systems are enormously useful for enabling detailed measurements of a seemingly endless array of physical properties of extrasolar planets and their host stars (see reviews by \citealt{winn2009,winn2010}). The most basic properties that can be measured using transiting planets are the planet mass and radius, and so average density. These parameters alone allow for interesting constraints on the internal composition and structure of planets \citep{guillot05,fortney2007,rogers2010,miller2011}. In addition to these basic parameters, transiting planets enable the study of their atmospheres \citep{seager2000,charbonneau02,vidal03,seagerd2010} and thermal emission \citep{deming05,charbonneau2005,knutson2008}. They also allow measurement of planetary and stellar oblateness, rotation rate, and spin-orbit alignment \citep{seager02,spiegel2007,carter2010,rossiter1924,mclaughlin1924,winn2005,gaudi06}. Transiting planets may also be searched for associated rings and moons \citep{brown01,barnes04,tusnski2011}. Further, variations in transit timing may indicate the presence of other bodies in the system \citep{holman05,agol05,steffen05,ford06,ford07,kipping2009}. With sufficiently precise observations, one may constrain the presence of planets with masses smaller than that of the Earth \citep{agol06,carter2010}. The high scientific value of transiting planet systems motivated the first dedicated wide-field transit surveys, which by now have identified over 100 transiting systems (TrES, \citealt{alonso2004}; XO, \citealt{mccullough2006}; HATNet, \citealt{bakos2007}; SuperWASP, \citealt{cameron2007a}, QES, \citealt{alsubai2011}). 
Although there is substantial diversity in their design, strategy, and sensitivity, these surveys can be grossly characterized as having relatively small cameras with apertures of order $10$~cm, and detectors with relatively wide fields-of-view of tens of square degrees. These surveys are primarily sensitive to giant, close-in planets with radii $R_P\ga 0.5R_{\rm Jup}$ and periods of $P\la 10~{\rm d}$, orbiting relatively bright FGK stars with $V\sim 10-12$. The space-based missions CoRoT \citep{baglin2003} and Kepler \citep{borucki2010} have dramatically expanded the parameter space of transit surveys, enabling the detection of planets with sizes down to that of the Earth and below, planets with periods of several years, and planets orbiting a much broader range of host stars. Furthermore, their large target samples have allowed the detection of rare and therefore interesting planetary systems. These missions have already announced over 50 confirmed planets, and the Kepler mission has announced an additional $\sim 2300$ candidates \citep{batalha2012}, most of which are smaller than Neptune. Notable individual discoveries include the first detection of a transiting Super-Earth \citep{leger2009}, the detection of a `temperate' gas giant with a relatively long period of $ \sim 100$ days \citep{deeg2010}, the first multi-planet transiting systems \citep{steffen2010,holman2010,latham2011,lissauer2011}, the first circumbinary planets \citep{doyle2011,welsh2012}, and the detection of planets with radius of $\la R_\oplus$ \citep{muirhead2012, fressin2012}. Although Kepler and CoRoT have revolutionized our understanding of the demographics of planets, the opportunities for follow-up of the systems detected by these missions are limited. By design, both missions primarily monitor relatively faint stars with $V\ga 12$. 
Consequently, many of the follow-up observations discussed above that are generically enabled by transiting systems are not feasible for the systems detected by Kepler and CoRoT. Detailed characterization of the majority of these systems will therefore be difficult or impossible. There is thus an ongoing need to discover transiting planets orbiting the bright stars, as well as to increase the diversity of such systems. All else being equal, the brightest stars hosting transiting planets are the most valuable. Larger photon flux permits more instruments and/or facilities to be employed for follow-up, allows subtler effects to be probed, reduces statistical uncertainties, and generally allows for improved or more extensive calibration procedures that help to control systematic errors. Furthermore, brighter stars are also easier to characterize, and are more likely to have pre-existing information, such as proper motions, parallaxes, metallicities, effective temperatures, angular diameters, and broadband colors. The majority of the brightest ($V\la 8$) FGK dwarfs in the sky have been monitored using precision radial velocity surveys for many years, and as a result most of the giant planets with periods of less than a few years orbiting these stars have already been discovered (e.g., \citealt{wright2012}). A smaller subset of these stars have been monitored over a shorter time baseline with the sensitivity needed to detect Neptune- and SuperEarth-mass planets. Because of the low a priori transit probability for all but short period planets, the transiting systems constitute a very small fraction of this sample. To date, seven planets first discovered via radial velocity have subsequently been discovered to also transit; all of the host stars for these planets are brighter than $V=9$. Although there are projects that aim to increase this sample \citep{kane2009}, the overall yield is expected to be small. 
Because RV surveys generically require spectroscopic observations that are observationally expensive and must be obtained in series, it is more efficient to discover transiting planets around the much more abundant fainter stars by first searching for the photometric transit signal, and then following these up with targeted RV observations to eliminate false positives and measure the planet mass. However, in order to compensate for the rarity and low duty cycle, many stars must be monitored over a long time baseline. Photometric transit surveys that target brighter stars therefore require larger fields of view. Most of the original transit surveys had fields of view and exposure times that were optimized to detect planets orbiting stars with $V\ga 10$. Indeed, only $\sim 20$ transiting planets orbiting stars with $V\le 10$ are currently known ($\sim 40$ with $V\la 11$). Of those with $V\le 10$, $\sim 40\%$ were originally detected by RV surveys. The Kilodegree Extremely Little Telescope-North (KELT-North) transit survey \citep{KELT_SYNOPTIC} was designed to detect giant, short-period transiting planets orbiting the brightest stars that are not readily accessible to RV surveys. \citet{KELT_THEORY} determined the optimal hardware setup specifically to detect transiting planets orbiting stars with $V\sim 8-10$, and based on the specified design requirements in that paper, the KELT-North survey telescope system was constructed using off-the-shelf, high-end consumer equipment. In fact, as the current detection demonstrates, KELT has exceeded its design goals, and is sensitive to transiting systems in some favorable cases down to $V\sim 12$. In addition to the goal of filling in the magnitude gap between radial velocity and other transit surveys, the KELT-North survey also has the potential to detect fainter systems with $V\ga 10$ that are in the magnitude range of previous surveys, but were missed or overlooked for various reasons. 
The detection discussed in this paper is an example of this opportunity. Here the fact that the KELT-North survey is only now starting to vet candidates, more than eight years after the first candidates were announced by other transit surveys, can be seen as an advantage. In particular, previous surveys have established the existence of massive brown dwarf companions \citep{deleuil2008,irwin2010,bouchy2011a,johnson2011,bouchy2011b}, and have demonstrated the feasibility of detecting low-mass companions to hot, rapidly rotating stars \citep{cameron2010}. Partially in response to these results, the KELT-North survey deliberately broadened its search targets to include hot and/or rapidly-rotating stars, which were previously neglected by many transit surveys. The evolving perception of what kinds of stars constitute viable transit search targets played an interesting role in the discovery of KELT-1b, as discussed in \S\ref{sec:hat}. The KELT-North survey has been collecting data since September 2006, and has acquired a sufficient number of high-quality images to detect transit candidates. We have been systematically identifying and vetting transit candidates since February 2011, and in this paper we report our first confirmed low-mass transiting companion, which we designate KELT-1b. KELT-1b has a mass of $\sim 27~M_{\rm Jup}$, and we will therefore follow convention and refer to it as a `brown dwarf' throughout the majority of this paper. However, as we discuss in \S\ref{sec:bdd}, we are, in fact, agnostic about its true nature and therefore how it should be categorized. The outline of this paper is as follows. In order to introduce the survey and provide the appropriate context for our discovery, in \S\ref{sec:survey} we summarize the properties of the KELT-North survey and our procedure for candidate selection. 
In \S\ref{sec:observations} we review the observations of KELT-1, starting with the properties of the candidate in the KELT-North data, and then summarize the follow-up photometry, spectroscopy, and high-contrast imaging. \S\ref{sec:char} describes our analysis and characterization of the host star and its substellar companion. In \S\ref{sec:discussion} we provide a speculative discussion of the possible implications of this unique system for theories of the emplacement and tidal evolution of short-period substellar companions, models of the structure and atmosphere of brown dwarfs, and the demographics of substellar companions to stars. We briefly summarize in \S\ref{sec:summary}. \section{The KELT-North Survey}\label{sec:survey} Because this is the first paper from the KELT-North survey, we describe the survey, selection criteria, and follow-up observations and reduction methodology in some detail. Readers who are not interested in these details, but are rather primarily interested in the properties and implications of the KELT-1b system, can skip to \S\ref{sec:char}. \subsection{KELT-North Instrumentation and Survey Strategy}\label{sec:instrument} The KELT-North survey instrument consists of a collection of commercially-available equipment, chosen to meet the requirements of \cite{KELT_THEORY} and tuned to find the few brightest stars with transiting planets in the Northern sky. The optical system consists of an Apogee AP16E (4K $\times$ 4K 9$\mu$m pixels) thermo-electrically cooled CCD camera attached using a custom mounting plate to a Mamiya camera lens with an 80mm focal length and 42mm aperture (f/1.9). The resultant field of view of the detector is $26^\circ \times 26^\circ$ at roughly $23$\arcsec per pixel, allowing simultaneous observation of nearly 40,000 stars in typical high Galactic latitude fields. 
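The quoted pixel scale and field of view follow from the standard small-angle plate-scale relation, ${\rm scale} = 206265\arcsec \times ({\rm pixel~pitch})/({\rm focal~length})$. As a quick back-of-the-envelope check (the variable names are ours; the input numbers are those quoted above):

```python
import math

# Optics and detector numbers quoted in the text (assumed exact here).
focal_length_mm = 80.0   # Mamiya lens focal length
pixel_pitch_um = 9.0     # Apogee AP16E pixel size
npix = 4096              # detector is 4K x 4K

# Small-angle plate scale: arcsec/pixel = 206265 * pixel size / focal length
scale_arcsec = 206265.0 * (pixel_pitch_um * 1e-3) / focal_length_mm
fov_deg = scale_arcsec * npix / 3600.0

print(f"{scale_arcsec:.1f} arcsec/pixel, {fov_deg:.1f} deg on a side")
```

This reproduces the quoted $\sim23\arcsec$ per pixel and $\sim26^\circ$ field of view.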
The medium-format image size is markedly larger than the CCD detector (which measures $37 \times 37$mm), which greatly reduces the severity of vignetting across the large field of view. At the same time, the small aperture permits longer exposures, which improve observing efficiency (assuming fixed camera read-out time). A Kodak Wratten \#8 red-pass filter is mounted in front of the lens to further reduce the impact of atmospheric reddening (which primarily affects blue wavelengths) on our photometry. The resultant bandpass resembles a widened Johnson-Cousins $R$-band. This optical system is mounted atop a Paramount ME robotic mount from Software Bisque on a fixed pier at Winer Observatory in Sonoita, AZ (Latitude 31$^\circ$ 39\arcmin 56.08\arcsec N, Longitude 110$^\circ$ 36\arcmin 06.42\arcsec W, elevation 1515.7 meters). See \citet{KELT_SYNOPTIC} for additional details about the system hardware. The primary KELT-North transit survey consists of 13 fields centered at $+31.7^\circ$ declination, spanning all 24 hours of right ascension. Including the slight overlap between fields, the total survey area is $\approx 40\%$ of the Northern sky. Survey observations consist of 150-second exposures with a typical per-field cadence of 15-30 minutes. The KELT-North telescope has been collecting survey data in this manner since September 2006, and to date has acquired between 5000 and 9300 images per field. Given this quantity of data and the typical achieved photometric precision of $\sim 1\%$ for $V\la 11$, the KELT-North survey is able to detect short-period giant transiting planets orbiting most FGK stars with magnitudes from saturation near $V\sim 8$ down to $V\sim 12$. 
\subsection{KELT-North Pipeline}\label{sec:pipeline} Relative photometry is generated from flat-fielded images using the ISIS image subtraction package (\citealt{ISIS,alard2000}, see also \citealt{hartman2004}), in combination with point-spread function fitting using the stand-alone DAOPHOT II \citep{stetson1987,stetson1990}. Although highly effective, the image subtraction procedures are computationally intensive. To improve reduction performance, the default ISIS scripts were modified to facilitate distributed image reduction across many computers in parallel. ISIS operation in this fashion permits thorough exploration of various reduction parameters, which would be intractable if executed serially. Other elements of the ISIS reduction package have also been modified or replaced with more robust alternatives. For example, the standard ISIS source-identification routines and utilities are ill-equipped to deal with the nature and ubiquity of aberrations in KELT-North images. In response, we have replaced the ISIS `extract' utility with the popular SExtractor program \citep{SExtractor}. A more complete explanation of these modifications, along with driver scripts that implement them, is available online\footnote{http://astro.phy.vanderbilt.edu/$\sim$siverdrj/soft/is3/index.html}. \subsection{KELT-North Candidate Selection} Once we have the light curves created by {\sc ISIS} for all of the {\sc DAOPHOT}-identified point sources in the reference image, we begin a series of post-processing steps before doing the initial candidate selection. To begin, we convert the {\sc ISIS} light curves from differential flux to instrumental magnitude using the results of the {\sc DAOPHOT} photometry on the reference image. We also apply 5$\sigma$ iterative clipping to all of the light curves at this stage; this typically removes $\sim 0.6\%$ of the data points. All of the uncertainties for the converted and clipped light curves in a given field are then scaled as an ensemble. 
The scaling is chosen such that $\chi^2/{\rm dof}=1$ for the main locus of the light curves on a magnitude versus $\chi^2/{\rm dof}$ plot. Typically this scaling is around a factor of 1.2, implying that the uncertainties are somewhat underestimated. We next attempt to match all of the {\sc DAOPHOT}-identified point sources in the reference image to stars in the Tycho-2 catalog. We obtain a full-frame WCS with sub-pixel accuracy on our reference frame using Astrometry.net \citep{lang2010}. Using this solution, we match stars by taking the closest Tycho-2 entry within $45$\arcsec. This typically generates matches for 98\% of the Tycho-2 stars within each field. A successful Tycho-2 match also provides a 2MASS ID. We use the proper motions and $JHK$ apparent magnitudes from these two catalogs. With this catalog information, we next identify and exclude giant stars by means of a reduced proper motion ($H_J$) diagram \citep{gould2003}. Following the specific prescription of \citet{cameron2007b}, we place each of our matched stars on a $J$ vs. $H_J$ plot. We compute the reduced proper motion of a star as \begin{equation} H_{J} = J + 5\log(\mu/{\mathrm{mas~yr^{-1}}}) \label{eqn:rpm} \end{equation} and determine the star to be a giant if it fails to satisfy \begin{eqnarray}\nonumber H_J &>& -141.25(J-H)^4 + 473.18(J-H)^3\\ && - 583.6(J-H)^2 + 313.42(J-H) - 43.0. \label{eqn:rpmcut} \end{eqnarray} This process leaves us with anywhere from 10,000 to 30,000 catalog-matched putative dwarf stars and subgiants (hereafter dwarfs) per field, depending primarily on the location of the field relative to the Galactic plane. The dwarfs are then run through the Trend Filtering Algorithm \citep[TFA,][]{kovacs2005}\footnote{We used the versions of TFA and BLS (described later) found in the {\sc vartools} package \citep{hartman2008}.} to reduce systematic noise. 
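Equations (\ref{eqn:rpm}) and (\ref{eqn:rpmcut}) amount to a quartic boundary in the $(J-H,\,H_J)$ plane. As an illustration only (the function names and the example magnitudes are ours, not taken from the pipeline), the dwarf/giant cut can be sketched as:

```python
import math

def reduced_proper_motion_J(J, mu_mas_yr):
    """H_J = J + 5 log10(mu / mas yr^-1); Equation (1) of the text."""
    return J + 5.0 * math.log10(mu_mas_yr)

def is_probable_dwarf(J, H, mu_mas_yr):
    """Apply the Collier Cameron et al. (2007) cut of Equation (2).

    Stars above the quartic boundary in (J-H, H_J) are retained as
    putative dwarfs; stars below it are flagged as giants.
    """
    jh = J - H
    H_J = reduced_proper_motion_J(J, mu_mas_yr)
    boundary = (-141.25 * jh**4 + 473.18 * jh**3
                - 583.6 * jh**2 + 313.42 * jh - 43.0)
    return H_J > boundary
```

For example, a $J-H=0.3$ star with a large proper motion ($\mu=100$ mas yr$^{-1}$, suggesting a nearby dwarf) passes the cut, while the same colors with $\mu=2$ mas yr$^{-1}$ (consistent with a distant giant) do not.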
We select a new set of detrending stars for each light curve by taking the 150 closest stars -- outside of a 20 pixel exclusion zone centered on the star being detrended -- that are within two instrumental magnitudes of the star being detrended. KELT's Paramount ME is a German Equatorial mount, which requires a ``flip'' as it tracks stars past the meridian. Therefore, the optics and detector are rotated 180 degrees with respect to the stars between observations in the Eastern and Western hemispheres, and detector defects, optical distortions, PSF shape, flat fielding errors, etc., for a given star can be completely different. This requires us to treat observations in the East and West essentially as two separate instruments. Thus the preceding steps (magnitude conversion, error scaling, dwarf identification, TFA) are each performed separately on the East and West images of each field. After the dwarf stars in the East and West have been run through TFA, we then combine the two light curves of each target into one East+West light curve. We first match stars from the East and the West pointings by their Tycho IDs, and then determine the error-weighted scaling factor of the Western light curve needed to match the error-weighted mean instrumental magnitude of the East light curve. All of the light curves from the matched Tycho dwarf stars in a field are given an internal ID. We next search the combined East+West light curves of the dwarfs for transit-like signals using the box-fitting least squares algorithm (BLS; \citealt{kovacs2002}). We use a custom version of BLS modified to skip over integer and half integer multiples of the sidereal day to reduce the effect of spurious signals due to diurnal systematics and their aliases on the BLS statistics. 
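The alias rejection just described can be sketched as a simple mask on the trial periods; the tolerance value below is an assumption of ours, and the actual modified BLS code may implement the skip differently:

```python
SIDEREAL_DAY = 0.99726957  # sidereal day in units of solar days

def near_sidereal_alias(period_days, tol=0.01):
    """True if the trial period lies within a fractional tolerance of an
    integer or half-integer multiple of the sidereal day. Such periods
    are skipped in the BLS search to suppress diurnal systematics."""
    ratio = period_days / SIDEREAL_DAY
    nearest_half = round(ratio * 2.0) / 2.0
    if nearest_half == 0.0:
        return False
    return abs(ratio - nearest_half) / nearest_half < tol
```

A trial period of exactly one sidereal day is masked, while a period like KELT-1b's $\simeq 1.2175$ days is not, so genuine short-period signals survive the cut.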
We perform selection cuts along six of the statistics that are output by the {\sc vartools} implementation of the BLS algorithm: signal detection efficiency SDE, signal to pink noise SPN\footnote{See \citet{kovacs2002} and \citet{hartman2009}, respectively, for the definitions of SDE and SPN}, the fraction of transit points from one night $f_{1n}$, depth $\delta$, the ratio of $\Delta\chi^2$ for the best transit model to best inverse transit model $\Delta\chi^2/\Delta\chi^2_{-}$ \citep{burke2006}, and the fraction of the orbit spent in transit or duty cycle $q$. In order to determine the appropriate threshold values for these statistics, we injected realistic transit signals with a range of properties into a large sample of light curves, and then attempted to recover these using the BLS algorithm. We then determined the values of these statistics that roughly maximize the overall detection efficiency while minimizing the number of spurious detections. The final adopted values are given in Table \ref{tab:criteria}. In addition to the cuts we make on the BLS statistics, we also impose restrictions on the effective temperature and inferred density of the candidate host stars. For the temperature, we require that $\ensuremath{T_{\rm eff}}<7500$K. We calculate the stellar effective temperature of each candidate from its 2MASS $J-K$ colors. We used the Yonsei-Yale isochrones \citep{demarque2004} at 5 Gyr with solar metallicity and no alpha enhancement to create a simple polynomial fit for $\ensuremath{T_{\rm eff}}$ as a function of $J-K$: \begin{eqnarray}\nonumber \log\ensuremath{T_{\rm eff}}&=&3.94808-0.7353(J-K)\\ &&+1.0116(J-K)^2-0.8334(J-K)^3. \label{eqn:teffrho} \end{eqnarray} As we have conducted our follow-up spectroscopy, we have found that this relation generally predicts $\ensuremath{T_{\rm eff}}$ to within $\sim 100$K for $\ensuremath{T_{\rm eff}}\la 7000$K and to within $\sim 300$K for stars with $\ensuremath{T_{\rm eff}}=7000-7500$K. 
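The photometric temperature estimate of Equation (\ref{eqn:teffrho}) is a simple polynomial in $J-K$. A minimal wrapper around the published coefficients (the function name is ours):

```python
def teff_from_jk(jk):
    """Photometric T_eff estimate from 2MASS J-K, Equation (3) of the text.

    Valid for the color range of the candidate sample; based on 5 Gyr
    solar-metallicity Yonsei-Yale isochrones.
    """
    log_teff = (3.94808 - 0.7353 * jk
                + 1.0116 * jk**2 - 0.8334 * jk**3)
    return 10.0 ** log_teff
```

Taking $J-K \approx 0.245$ (the 2MASS color later quoted for KELT-1), this returns $\approx 6550$ K, consistent with the spectroscopic $6512 \pm 50$ K to within the stated $\sim 100$ K accuracy of the relation.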
\begin{deluxetable}{ll|ll} \tabletypesize{\small} \tablecaption{\sc KELT-North BLS Candidate Selection Criteria} \tablewidth{0pt} \startdata \hline \\ Signal Detection Efficiency & SDE$>$7.0 & Depth & $\delta <0.05$ \\ Signal to Pink-noise & SPN$>$7.0 & $\chi^2$ ratio & $\frac{\Delta\chi^2}{\Delta\chi_{-}^2} > 1.5$ \\ Fraction from one night & $f_{1n}<0.8$ & Duty cycle & $q< 0.1$\\ \enddata \label{tab:criteria} \end{deluxetable} We also require that the stellar density, $\rho_*$, as inferred from the BLS transit fit to the KELT-North light curve, be within 1.0 dex of the stellar density calculated for each star using its $J-K$ colors, assuming the star is on the main sequence. A large disparity in the observed versus the calculated density is indicative of a blend or of a giant that made it through the reduced proper motion cuts \citep{seager2003}. Again using the Yonsei-Yale isochrones at 5 Gyr with solar metallicity and no alpha enhancement, we made a fit for density as a function of $J-K$: \begin{eqnarray}\nonumber \log (\rho_{*,\rm{calc}}/\rho_\odot)&=&-1.00972+2.82824(J-K)\\ &&-1.19772(J-K)^2. \label{eqn:rhocalc} \end{eqnarray} We require that this value be within 1.0 dex of the stellar density we calculate from the KELT-North light curve \begin{equation} \log \rho_{*,\rm obs}=\log\left[\frac{3}{G\pi^2q^3P^2}\right], \label{eqn:rhoobs} \end{equation} where $P$ and $q$ are the orbital period and duty cycle (transit duration relative to the period) as returned by BLS. This equation assumes circular orbits and that the companion mass is much smaller than the host star mass, $M_P \ll M_*$. Also, because BLS does not attempt to fit for the ingress/egress duration, and furthermore KELT-North data typically do not resolve the ingress or egress well, we are not able to determine the transit impact parameter and thus the true stellar radius crossing time. 
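Equations (\ref{eqn:rhocalc}) and (\ref{eqn:rhoobs}) combine into a single consistency check. A minimal sketch in cgs units (the solar-density normalization and the function names are ours):

```python
import math

G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
RHO_SUN = 1.41     # mean solar density [g cm^-3] (assumed value)

def log_rho_calc(jk):
    """log10(rho_*/rho_sun) for a 5 Gyr solar-metallicity dwarf, Eq. (4)."""
    return -1.00972 + 2.82824 * jk - 1.19772 * jk**2

def log_rho_obs(period_days, q):
    """log10(rho_*/rho_sun) inferred from the BLS period and duty cycle,
    Eq. (5), assuming a circular orbit and M_P << M_*."""
    period_s = period_days * 86400.0
    rho = 3.0 / (G_CGS * math.pi**2 * q**3 * period_s**2)
    return math.log10(rho / RHO_SUN)

def passes_density_cut(jk, period_days, q):
    # Candidates must agree to within 1.0 dex.
    return abs(log_rho_obs(period_days, q) - log_rho_calc(jk)) < 1.0
```

Taking the BLS values quoted later for KELT-1 ($P\simeq 1.2175$ days, $q=0.09$) and $J-K\approx 0.245$, the two density estimates agree to well within 0.1 dex, comfortably passing the cut.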
Equation \ref{eqn:rhoobs} therefore implicitly assumes an equatorial transit, and so formally provides only an upper limit to the true stellar density. For a transit with an impact parameter of $b=0.7$, the true density is $\sim 0.5$ dex smaller than that inferred from Equation \ref{eqn:rhoobs}. All of the light curves that pass these selection criteria are designated as candidates, and a number of additional diagnostic tests are then performed on them, including Lomb-Scargle (LS, \citealt{lomb76,scargle82}) and AoV \citep{AoV,devor2005} periodograms. The results of these tests, the values of the BLS statistics, the light curves themselves, as well as a host of additional information, are all collected into a webpage for each candidate. Members of the team can then use this information to vote on the true nature of the candidate (probable planet, eclipsing binary, sinusoidal variable, spurious detection, blend or other). All candidates with at least one vote for being a probable planet are then discussed, and the most promising are then passed along for follow-up photometry, reconnaissance spectroscopy, or both. \section{Observations}\label{sec:observations} \subsection{KELT-North Photometry, Candidate Identification, and Vetting Overview} KC20C05168 emerged as a strong candidate from the analysis of the combined light curves from stars in the overlap area between fields 1 and 13. The KC20C05168 light curve contains 8185 epochs distributed over $\sim 4.2$ years, between UT October 25, 2006 and UT December 28, 2010, with a weighted RMS of $9.8$ millimagnitudes (mmag). This RMS is typical for KELT-North light curves of stars with this magnitude ($V \sim 10.7$). A strong BLS signal was found at a period of $P \simeq 1.2175$ days, with a depth of $\delta \simeq 3.8$ mmag, and detection statistics SPN=8.53, SDE=12.41, $q=0.09$, $\Delta\chi^2/\Delta\chi^2_-=2.06$, and $\log(\rho_{*,obs}/\rho_{*,cal})=-0.06$. 
The phased KELT-North light curve is shown in Figure \ref{fig:keltlc}. A significant signal also appeared in SuperWASP data \citep{butters2010} of this star at the same period. The KELT-North data exhibit some evidence for out-of-transit variability at the mmag level and show some relatively weak peaks in the LS and AoV periodograms, but we did not consider these signals to be strong enough to warrant rejection of the candidate. In addition, the depth of the photometric transit signal in the original KELT-North light curve is substantially smaller than we find in the high-precision follow-up data (see \S\ref{sec:phot}). Further analysis indicates that the out-of-transit variability and smaller depth were likely due to a minor problem with the original data. Based on the strength of the KC20C05168 signal, the estimated effective temperature of the host star of $\ensuremath{T_{\rm eff}} \sim 6500$K, and the fact that the star was sufficiently isolated in a DSS image, we submitted the candidate for reconnaissance spectroscopy with the Tillinghast Reflector Echelle Spectrograph (TRES; \citealt{furesz2008}) on the 1.5m Tillinghast Reflector at the Fred Lawrence Whipple Observatory (FLWO) on Mount Hopkins in Arizona. The first observation on UT November 9, 2011 at the predicted quadrature confirmed the $\ensuremath{T_{\rm eff}}$ estimate of the star, and also demonstrated that it was a slightly evolved dwarf with $\ensuremath{\log{g}} \sim 4$, and that it was rapidly rotating with $\ensuremath{\,{v\sin{I_*}}} \sim 55~\ensuremath{\,{\rm km~s^{-1}}}$. A second observation was obtained on UT November 11, 2011 separated by $\sim 1.9$ days, or $\sim 1.54$ in phase, from the first observation and thus sampled the opposite quadrature. The two observations exhibited a large and significant radial velocity shift of $\sim 8\ensuremath{\,{\rm km~s^{-1}}}$, consistent with a brown dwarf companion. 
Efforts to obtain photometric follow-up during the primary transit and secondary eclipse were then initiated. Concurrently, additional spectra with TRES were taken to characterize the spectroscopic orbit. In addition, we obtained adaptive optics imaging of the target to search for close companions. Finally, once we were fairly confident that the signals were due to a low-mass transiting companion, we obtained continuous spectroscopic time series with TRES during the primary transits on UT December 21, 2011 and UT January 7, 2012 for the purposes of measuring the Rossiter-McLaughlin (RM) effect. All of these observations are described in greater detail in the subsequent sections and summarized in Table \ref{tab:obs}. \begin{figure}[t] \epsscale{1.1} \plotone{keltlc.eps} \caption{ \label{fig:keltlc} The KELT-North light curve of KELT-1 phased to the BLS determined period of $P=1.2175$ days is shown in the grey points. The black points show the data binned 0.02 in phase. } \end{figure} \subsection{Previous Identification of the Photometric Candidate by HATNet}\label{sec:hat} KELT-1b was also recognized as a photometric transiting planet candidate by the HATNet project, based on observations obtained in 2006. In September 2009 the candidate HTR162-002 was forwarded to D.\ Latham's team for spectroscopic follow up. An initial observation with TRES confirmed that the target was a late F main-sequence star, as expected from its 2MASS color $J-K_s=0.245$. A synthetic template with $\ensuremath{T_{\rm eff}} = 6250$K, $\ensuremath{\log{g}}=4.0$, and assumed solar metallicity gave the best match to the observed spectrum. However, that first TRES spectrum also revealed that the star was rotating rapidly, with $\ensuremath{\,{v\sin{I_*}}}=55~\ensuremath{\,{\rm km~s^{-1}}}$. 
At that time, D.\ Latham's team routinely put aside candidates rotating more rapidly than about $\ensuremath{\,{v\sin{I_*}}}=30~\ensuremath{\,{\rm km~s^{-1}}}$, arguing that it would not be possible to determine velocities with a precision sufficient for an orbital solution for a planetary companion. HTR162-002 remained on the HATNet ``don't observe with TRES'' list until it was independently rediscovered by the KELT-North team and was forwarded as candidate KC20C05168 to D.\ Latham's team in November 2011 for spectroscopic follow up with TRES. During the intervening 26 months, there were two relevant developments in the procedures and tools used by Latham's team, both resulting from contributions by L.\ Buchhave. The first development, enabled by convenient tools in the observing website, was the practice of observing new candidates only near opposite quadratures, according to the discovery ephemeris and assuming circular orbits. The second development was a much improved procedure for deriving radial velocities for rapidly-rotating stars, initially motivated by the Kepler discovery of hot white dwarfs transiting rapidly-rotating A stars \citep{rowe2010}. As it turned out, the second observation of KC20C05168 with TRES described above was taken before the first observation was reviewed, so the candidate was not relegated to the rejected list due to its rapid rotation before the opposite quadrature was observed. When the results were reviewed after the second observation, the evidence for a significant radial velocity shift between the two observations was obvious, despite the rapid rotation, therefore suggesting that the unseen companion was probably a brown dwarf, if not a giant planet. It should also be recognized that over the 26 months since the first observation of HTR162-002, the attitude against pursuing rapidly rotating stars as possible hosts for transiting planets had gradually softened among the exoplanet community. 
An important development was the demonstration that slowly-rotating subgiants that have evolved from rapidly-rotating main-sequence A stars do occasionally show the radial-velocity signatures of orbiting planetary companions (e.g., \citealt{johnson2007}). A second insight came from the demonstration that the companion that transits the rapidly-rotating A star WASP-33 must be a planet, using Doppler imaging \citep{cameron2010}. Finally, the discovery of transiting brown dwarf companions suggested the possibility of detecting their large amplitude RV signals even when they orbit stars with large $\ensuremath{\,{v\sin{I_*}}}$ and thus poor RV precision. In the early days of wide-angle photometric surveys for transiting planets, Latham's team had established procedures for handling candidates forwarded for spectroscopic follow up by more than one team. Such duplications were fairly common, and the goal was to assign credit to the initial discovery team, which was especially important in an era when few transiting planets had been confirmed. By the time it was noticed in mid December 2011 that KC20C05168 was the same as HTR162-002, the KELT-North team already had in hand a convincing orbital solution from TRES and high-quality light curves from several sources, confirming that KELT-1b was indeed a substellar companion. \subsection{Spectroscopy from FLWO/TRES}\label{sec:flwos} A total of 81 spectra of KELT-1 were taken using the TRES spectrograph on the 1.5m Tillinghast Reflector at FLWO. These were used to determine the Keplerian parameters of the spectroscopic orbit, measure bisector variations in order to exclude false positive scenarios, measure the spectroscopic parameters of the primary, and measure the anomalous RV shift of the stellar spectral lines as the companion transits in front of the rapidly-rotating host star, i.e., the RM effect \citep{rossiter1924,mclaughlin1924}. 
The TRES spectrograph provides high resolution, fiber-fed echelle spectroscopy over a bandpass of $3900-8900\AA$ \citep{furesz2008}. The observations obtained here employed the medium fiber for a resolution of $R\sim 44,000$. The data were reduced and analyzed using the methods described in \citet{quinn2012} and \citet{buchave2010}. A subset of six spectra were combined in order to determine the spectroscopic parameters of the host star using the Spectral Parameter Classification (SPC) fitting program (Buchhave et al., in preparation). SPC cross-correlates the observed spectrum against a grid of synthetic Kurucz \citep{kurucz1979} spectra. This analysis yielded $\ensuremath{T_{\rm eff}}=6512 \pm 50$K, $\ensuremath{\log{g}} = 4.20 \pm 0.10$, [Fe/H]$=0.06 \pm 0.08$, and $\ensuremath{\,{v\sin{I_*}}}=55.2 \pm 2\ensuremath{\,{\rm km~s^{-1}}}$. These parameters were used as priors for the joint global fit to the RV, RM, and photometric data as described in \S\ref{sec:analysis}. Spectra were taken at a range of phases in order to characterize the spectroscopic orbit and search for bisector span variations indicative of a blend. One of these spectra happened to be taken during a primary transit on UT 2011-11-18, and so was not used in the analysis because it is likely to be affected by the RM effect. The RV and bisector data for the remaining 23 spectra are listed in Table \ref{tab:rvorbit}. These observations span $\sim 88$ days from UT 2011-11-09 through UT 2012-02-05. The uncertainties on the listed radial velocities have been scaled by a factor of 1.214 based on an independent fit to these data, as described in \S\ref{sec:analysis}. The scaled median RV uncertainty is $\sim 230~\ensuremath{\,{\rm m~s^{-1}}}$. The uncertainties in the bisector measurements have not been scaled. The median bisector uncertainty is $\sim 110\ensuremath{\,{\rm m~s^{-1}}}$. 
\begin{figure}[t] \epsscale{1.2} \hskip-0.5in \plotone{RVs.unphased.ps} \caption{ \label{fig:unphased} (Top panel) The black points with uncertainties show the measured RVs for KELT-1 as a function of time in $\ensuremath{\rm {BJD_{TDB}}}$. The barycentric velocity of the system, as determined from the model fit shown in red (see \S\ref{sec:analysis}), has been subtracted from the data. (Bottom panel) The residuals from the model fit. } \end{figure} Time series spectroscopy was obtained with TRES on two different nights of primary transits in order to measure the spin-orbit alignment of the companion via the RM effect. Fifteen observations were obtained on UT 2011-12-21 and forty-two observations on UT 2012-01-07. Conditions were relatively poor for the first run, resulting in a factor of $\sim 2$ larger uncertainties and incomplete coverage of the transit. We therefore decided not to include these data in our final analysis, although we confirmed that this has no effect on our final inferred parameters. The RV and bisector data for the RM run on UT 2012-01-07 are listed in Table \ref{tab:rvrm}. The RV uncertainties have been scaled by a factor of 0.731, also based on the global model fit described in \S\ref{sec:analysis}. We note that the majority of the $\chi^2$ for these data is due to a few outliers. The median scaled RV uncertainty is $\sim 160~\ensuremath{\,{\rm m~s^{-1}}}$. The bisector uncertainties were not scaled. \begin{figure}[t] \epsscale{1.2} \hskip-0.5in \plotone{RVs.phased.ps} \caption{ \label{fig:phased} The black points with uncertainties show the measured RVs for KELT-1 relative to the barycentric velocity of the system, phased to the best-fit period as determined from the model fit shown in red (see \S\ref{sec:analysis}). The phases are normally referenced to the time of periastron ($T_P$), but have been shifted such that a phase of 0.25 corresponds to the time of inferior conjunction $T_C$ or primary transit. 
RV data near this phase show deviations from the Keplerian expectation due to the RM effect, which was included in the model. (Middle panel) The residuals of the RV data from the model fit. (Bottom panel) Bisector spans as a function of phase. } \end{figure} \begin{figure}[t] \epsscale{1.2} \hskip-0.5in \plotone{RVs.rm.ps} \caption{ (Top panel) The black points with uncertainties show the measured RVs relative to the barycentric velocity of the system for KELT-1, as a function of the time since primary transit $T_C$, for data taken near $T_C$. The Keplerian RV variation as determined from the best-fit model has been removed from both the data and model. Data taken within $\sim 1.4$ hours of $T_C$ occur during primary transit, and are thus strongly affected by the RM effect. The shape of the RM signal indicates that the projected obliquity of the host star with respect to the orbit is small. (Bottom panel) The residuals of the data to the RM fit. \label{fig:RM} } \end{figure} All of the RV and bisector measurements used in the subsequent analysis are shown as a function of epoch of observation in \ensuremath{\rm {BJD_{TDB}}}\ in Figure \ref{fig:unphased}. The measurements phased to the best-fit companion period from the joint fit to photometric and RV data are shown in Figure \ref{fig:phased}, demonstrating the very high signal-to-noise ratio with which the RV signal of the companion was detected, and the good phase coverage of the orbit. A detail of the RV data near the time of primary transit with the orbital (Doppler) RV signal removed is shown in Figure \ref{fig:RM}, showing the clear detection of the RM effect and a suggestion that the orbit normal is well aligned with the projected stellar spin axis. Finally, we determined the absolute radial velocity of the system barycenter using a simple circular orbit fit to radial velocities determined from the full set of spectra, which were measured using a single order near the Mg b line. 
(Note that the relative RVs used for determining the orbit were determined using the full, multi-order analysis of the spectra.) The zero point correction to these velocities was determined using contemporaneous monitoring of five RV standard stars. The final value we obtain is $\gamma_{\rm obs}= -14.2 \pm 0.2~\ensuremath{\,{\rm km~s^{-1}}}$, where the uncertainty is dominated by the systematic uncertainties in the absolute velocities of the standard stars. This zero point, along with the global fit to the data in \S\ref{sec:analysis}, was used to place the instrumental relative radial velocities on an absolute scale. Therefore, the RVs listed in Tables \ref{tab:rvorbit} and \ref{tab:rvrm} are on an absolute scale. \begin{figure}[t] \epsscale{1.1} \plotone{phot.transits.ps} \caption{ The black points show the relative flux as a function of time from primary transit ($T_C$) for the six sets of follow-up observations of primary transits we analyze here. The data sets are labeled and summarized in Table \ref{tab:obs}. The data are normalized by the fitted out-of-transit flux, and a linear trend with airmass has been removed (see \S\ref{sec:analysis}). In addition, an arbitrary offset has been applied to each light curve for clarity. For each observation, we plot the data above and the residuals below. In all cases, the red lines show the model fit from the analysis in \S\ref{sec:analysis}. \label{fig:transits} } \end{figure} \subsection{Follow-Up Photometry}\label{sec:phot} We obtained high-precision follow-up photometry of KELT-1 in order to confirm the KC20C05168 transit signal, search for evidence of a strongly wavelength-dependent transit depth indicative of a stellar blend, and search for evidence of a secondary eclipse. Also, these data enable precision measurements of the transit depth, ingress/egress duration, and total duration, in order to determine the detailed parameters of the KELT-1 system. 
In all, we obtained coverage of nine complete and four partial primary transits, and two complete and one partial secondary eclipse, using six different telescopes. Many of these data were taken under relatively poor conditions and/or suffer from strong systematics. We therefore chose to include only a subset for the final analysis, including six of the primary transits and the three secondary eclipses. In the following subsections, we detail the observatories and data reduction methods used to obtain these data. The dates, observatories, and filters for these data sets are summarized in Table \ref{tab:obs}. The light curves for the primary transits are displayed in Figure \ref{fig:transits} and the data are listed in Tables \ref{tab:transit0} through \ref{tab:transit4}, whereas the light curves for the secondary eclipse are displayed in Figure \ref{fig:secondary} and the data are listed in Tables \ref{tab:second0} through \ref{tab:second2}. \begin{figure}[t] \epsscale{1.0} \plotone{seclc.eps} \caption{The grey points show the combined $i$ and Pan-STARRS-$Z$ relative photometry of KELT-1 as a function of the predicted time of secondary eclipse of KELT-1b ($T_S$), obtained from the observatories listed in Table \ref{tab:obs}. The data have been corrected for a linear trend with airmass and normalized by the zero point of the linear fit (see \S\ref{sec:second}). The larger circles with error bars show the binned observations. Note we do not fit to the binned data; these are shown for the purposes of illustration only. The overplotted example light curve shows the secondary eclipse depth we would expect if KELT-1b had a geometric albedo of $A_g= 0.1$ and instantaneously reradiated its incident stellar flux ($f'=2/3$). 
We would have detected this event with a confidence level of $\ga 95\%$.} \label{fig:secondary} \end{figure} \begin{figure}[t] \epsscale{1.3} \hskip-0.5in \plotone{phot.bintrans.ps} \caption{ (Top panel) The black points show the six data sets displayed in Figure \ref{fig:transits}, combined and binned in 5 minute intervals. Since these data sets were taken with different filters and have different systematics, we do {\it not} use this combined light curve for analysis, but rather show it for the purposes of illustrating the overall quality and statistical constraining power of the ensemble of follow-up light curves. The red curve shows the six transit models for each of the individual fits combined and binned in 5 minute intervals the same way as the data. (Bottom panel) The residuals of the binned light curve from the binned model in the top panel. \label{fig:bintrans} } \end{figure} \subsubsection{Peter van de Kamp Observatory (PvdKO)}\label{sec:pvdko} Data on the primary transit starting on UT 2011-12-03 were acquired with the 0.6-meter, f/7.8 Ritchey-Chr\'{e}tien telescope at the Peter van de Kamp Observatory at Swarthmore College (Swarthmore, PA, USA). The telescope is equipped with an Apogee U16M CCD with 4096$\times$4096 9-micron pixels, giving a field of view 26 arcminutes on a side. Available filters are 50mm-square Johnson-Cousins $UBVR_cI_c$ and SDSS $ugriz$, both from Omega Optical. The telescope is autoguided to minimize photometric errors from imperfect flatfielding, keeping the centroid shift of each star to within typically 3-4 pixels over the course of a night. The observations used here were obtained with the $i$ filter and used 2$\times$2 binning, giving a binned pixel scale of 0.76\arcsec\ pixel$^{-1}$. The data were reduced in IRAF using standard procedures for flat-fielding (with twilight sky flats) and dark and bias subtraction. 
Aperture photometry was performed, and then differential magnitudes for the target star were calculated using an ensemble of comparison stars in the same field, chosen to minimize the scatter in the final light curve. \subsubsection{University of Louisville Moore Observatory (ULMO)}\label{sec:ulmo} Data on the primary transits starting UT 2011-12-03 and 2011-12-31, and on the secondary eclipses starting 2011-12-02 and 2012-01-04, were obtained with the University of Louisville Moore Observatory RC24 telescope (MORC24) located near Brownsboro, Kentucky. MORC24 is a RC Optical Systems Ritchey-Chr\'{e}tien 0.6 m telescope on an equatorial fork mount. The telescope is equipped with an Apogee U16M 4096$\times$4096 pixel CCD camera which has a focal plane scale of 0$\farcs$39 pixel$^{-1}$ and a field of view (FOV) of 26$\farcm$3$\times$26$\farcm$3. The UT 2011-12-03 and 2011-12-31 data were obtained using an Astrodon Photometrics Sloan $r$ filter, while the other two sets of data were obtained using an Astrodon Photometrics Sloan $i$ filter. The MORC24 mount has excellent free-running tracking, so we did not use a separate guide camera. Instead, minor telescope pointing corrections are made after each exposure by comparing the CCD pixel coordinates of the centroid of the target star to its initial position on the CCD. KELT-1 was held to within 3-4 pixels of the starting position on the CCD throughout each observing session. Since KELT-1 is separated from its nearest detectable neighbor in DSS2 imagery by $\sim18\arcsec$, we were able to defocus the telescope to allow for longer exposures without the risk of blending from the neighbor star. An exposure time of 100~s was used for all observations, resulting in a 120~s cadence when combined with the 20~s CCD readout time. We used AstroImageJ (Collins \& Kielkopf 2012, in preparation) to calibrate the image data. The algorithm includes bias subtraction, CCD non-linearity correction, dark subtraction, and flat-field division. 
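The comparison-star ensemble selection described above (an ensemble chosen to minimize the scatter of the final light curve) can be sketched as a simple greedy search. This is an illustrative reconstruction in Python, not the actual reduction code; the array sizes, flux levels, and noise amplitudes are all assumed.

```python
import numpy as np

def diff_mags(target, comps, mask):
    # Differential magnitude of the target against the summed flux of
    # the comparison stars selected by the boolean mask.
    return -2.5 * np.log10(target / comps[:, mask].sum(axis=1))

def select_ensemble(target, comps):
    # Greedily toggle individual comparison stars off as long as doing
    # so lowers the RMS scatter of the differential light curve.
    mask = np.ones(comps.shape[1], dtype=bool)
    best = diff_mags(target, comps, mask).std()
    improved = True
    while improved:
        improved = False
        for j in np.flatnonzero(mask):
            if mask.sum() == 1:
                break
            trial = mask.copy()
            trial[j] = False
            rms = diff_mags(target, comps, trial).std()
            if rms < best:
                mask, best, improved = trial, rms, True
    return mask, best

# Synthetic frames (assumed): 200 images, 6 comparison stars, one variable.
rng = np.random.default_rng(0)
target = 1e5 * (1 + 0.002 * rng.standard_normal(200))
comps = 5e4 * (1 + 0.002 * rng.standard_normal((200, 6)))
comps[:, 2] *= 1 + 0.05 * np.sin(np.linspace(0, 20, 200))  # "bad" star
mask, rms = select_ensemble(target, comps)  # drops the variable star
```

In practice the aperture radius is optimized jointly with the ensemble, but the greedy toggle shown here captures the ``chosen to minimize the scatter'' step.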
AstroImageJ was also used to perform aperture photometry using a circular aperture. An aperture size and an ensemble of comparison stars in the same field were chosen to minimize the scatter in the final light curves. AstroImageJ provides the option to use a standard fixed radius aperture or a variable radius aperture based on the measured FWHM of the target star in each image of the series. When a star is well separated from other stars, the variable aperture option tends to reduce photometric scatter under observing conditions that result in significant changes to the PSF during the observing session. The variable aperture produced optimal results for all four MORC24 KELT-1b light curves. For the observations starting on UT 2011-12-02, cirrus clouds were present during the first half of the observations, and airmass ranged from 1.16 at the start of observations to 3.19 at the end. For the observations starting on UT 2011-12-04, skies were clear until clouds moved in about 30 minutes after ingress. The clouds cleared just prior to egress; however, sky transparency remained highly variable until about an hour after egress. Airmass ranged from 1.05 at the beginning of observations to 1.40 at the end. Although guiding was maintained through the cloud cover, data during that time have been removed. For the observations starting on UT 2011-12-31, skies were clear with minimal variations in transparency. Airmass ranged from 1.00 at the beginning of observations to 2.17 at the end. For the observations starting on UT 2012-01-04, cirrus clouds were present during the second half of the observations, and airmass ranged from 1.03 at the start of observations to 1.96 at the end. \subsubsection{Hereford Arizona Observatory (HAO)}\label{sec:hao} Data on the primary transit starting UT 2011-12-10 were obtained at the Hereford Arizona Observatory (HAO; observatory code G95 in the IAU Minor Planet Center). 
This is a private observatory in Southern Arizona consisting of a 14-inch Meade LX-200 GPS telescope equipped with a SBIG ST-10XME CCD, a focal reducer and a 10-position filter wheel with SDSS filters $ugriz$. The telescope and dome are operated via buried cables, permitting automation of observing sessions. Calibrations usually employ a master flat frame obtained during dusk prior to the observing session. The field-of-view (27$\times$18 arcminutes) is sufficient for the use of approximately two dozen stars as candidates for reference in a typical field. The observations reported here were obtained with the $i$ filter. The data were reduced and a light curve was generated as follows. An artificial star was inserted in each image before photometry readings for the purpose of monitoring smooth extinction as well as extra extinction events caused by thin clouds, dew formation, and atmospheric seeing degradations that could swell the PSF beyond the photometry aperture circle. Photometry magnitude readings were made by the program MaxIm DL and imported to a spreadsheet, where several steps of manual reduction were completed. The first was to solve for an extinction model (including a temporal extinction trend) based on the sum of all candidate reference star fluxes versus air mass. Second, subsets of reference stars were evaluated for suitability by toggling individual stars ``on and off'' in order to determine the subset that minimized the RMS scatter in the target star light curve. Finally, the light curve for the target was fitted using a model for systematic effects and a transit signature. Systematics were represented by a temporal trend and air mass curvature (AMC). The AMC is caused by the target star having a color that differs from the flux-weighted color of the reference stars. The transit parameters were depth, total duration, ingress/egress duration, and a parameter related to the stellar limb darkening. 
The solution was obtained by minimizing the $\chi^2$ of the fit. Outliers were identified using an objective rejection criterion based on deviations from the model solution. Finally, the light curve was corrected for extinction and systematic effects and scaled to the out-of-transit model flux. \subsubsection{FLWO/KeplerCam}\label{sec:flwop} Data on the primary transits on UT 2011-12-16 and 2012-01-07 were obtained with KeplerCam on the 1.2m telescope at FLWO. KeplerCam has a single 4K $\times$ 4K Fairchild CCD with a pixel scale of 0.366 arcseconds per pixel, for a total FOV of 23.1$\times$23.1 arcminutes. A full transit was observed on UT 2011-12-16 under clear conditions, using the SDSS $z$ filter with 30-second exposures. We also obtained a full transit on UT 2012-01-07, this time with the SDSS $i$ filter and 15-second exposures. Clouds came in at the end of the transit, and as a result there is some increased scatter in the out-of-transit baseline. The data were reduced using the light curve reduction pipeline outlined in \citet{carter2011}, which uses standard IDL techniques. \subsubsection{Las Cumbres Observatory Global Telescope Network (LCOGT)}\label{sec:lcogt} Data on the secondary eclipse on UT 2011-12-30 were obtained with the 2.0m Faulkes Telescope North (FTN), which is located on Haleakala on the island of Maui in Hawaii. FTN is part of the Las Cumbres Observatory Global Telescope Network\footnote{http://lcogt.net}. These observations were made using the 4K $\times$ 4K Spectral camera (Fairchild Imaging CCD486 BI) in 2$\times$2 binning mode for a faster readout, together with the PanSTARRS-$Z$ filter. As scintillation noise becomes significant ($>$1 millimag) in exposures shorter than $\sim$30\,sec for telescopes of this aperture, the exposure time was kept to 60 sec and the telescope defocused to avoid saturation of the target while ensuring sufficient signal-to-noise ratio in the comparison stars. 
These data were debiased, dark-subtracted and flat fielded by the LCOGT offline pipeline (developed by the Astrophysics Research Institute at Liverpool John Moores University), and aperture photometry was carried out using the stand-alone DAOPHOT II \citep{stetson1987,stetson1990}. Differential photometry was then computed using an ensemble of 15 comparison stars. \subsection{Keck Adaptive Optics Imaging}\label{sec:keckao} To further assess the multiplicity of KELT-1, we acquired adaptive optics images using NIRC2 (PI: Keith Matthews) at Keck on UT 2012-01-07. Our observations consist of dithered frames taken with the $K'$ ($\lambda_c=2.12~\mu{\rm m}$) and $H$ ($\lambda_c=1.65~\mu{\rm m}$) filters. We used the narrow camera setting to provide fine spatial sampling of the stellar point-spread function. The total on-source integration time was 81 seconds in each bandpass. Images were processed by replacing hot pixel values, flat-fielding, and subtracting thermal background noise. No companions were identified in individual raw frames during the observations; however, upon stacking the images we noticed a point source ($8\sigma$) to the south-east of KELT-1. Figure \ref{fig:aoimage} shows the final processed $K'$ image. Inspection of the companion location showed that its separation from the star does not change with wavelength, demonstrating that it is not a speckle. This object is too faint and close to the primary to be detected with seeing-limited images. We performed aperture photometry to estimate the relative brightness of the candidate tertiary, finding $\Delta H=5.90 \pm 0.10$ and $\Delta K'= 5.59 \pm 0.12$. An $H-K'=0.4\pm0.2$ color is consistent with spectral types M1--L0 \citep{leggett2002,kraus2007}. 
If the candidate is bound to KELT-1 and thus at the same distance of $262 \pm 14$~pc and suffers the same extinction of $A_V=0.18 \pm 0.10$ (see \S\ref{sec:hostprops}), then we estimate its absolute $H$ magnitude to be $M_H = 8.31 \pm 0.15$, corresponding to a M4-5 spectral type, consistent with its color (see, e.g., the compilation of \citealt{kirkpatrick2012}). We also measured an accurate position of the companion relative to the star by fitting a Gaussian model to each of the point-spread function cores. After correcting for distortion in the NIRC2 focal plane\footnote{http://www2.keck.hawaii.edu/inst/nirc2/forReDoc/post\_observing/dewarp/}, and adopting a plate scale value of $9.963 \pm 0.006$ mas $\mbox{pix}^{-1}$ and instrument orientation relative to the sky of $0.13^{\circ}\pm0.02^{\circ}$ \citep{ghez2008}, we find a separation of $\rho=588 \pm 1$ mas and position angle $PA=157.4^{\circ} \pm 0.2^{\circ}$ east of north. If it is bound to KELT-1, it has a projected physical separation of $\sim 154 \pm 8 ~{\rm AU}$, and a period of $\sim 1700$ years assuming a circular, face-on orbit. We used the Galactic model from \citet{dhital2010} to assess the probability that the companion is an unrelated star (i.e., a chance alignment). The model uses empirical number density distributions to simulate the surface density of stars along a given line-of-sight and thus determine the probability of finding a star within a given angular separation from KELT-1. We estimate an a priori probability of $\sim 0.05\%$ of finding a star separated by $\la 0.59$\arcsec \ from KELT-1. We therefore conclude that the companion is likely to be a bona fide, physically associated companion. With a total proper motion of $\sim 20$~mas~yr$^{-1}$, it will be possible to definitively determine whether the candidate tertiary is physically associated with KELT-1 within one year. 
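As a consistency check on the numbers above, the projected separation and an order-of-magnitude orbital period follow from elementary relations. The companion mass adopted below ($\sim 0.2~M_\odot$ for an M4--5 dwarf) is an assumed, illustrative value, not a measurement from this work.

```python
import math

rho_mas = 588.0   # measured angular separation [mas]
d_pc = 262.0      # SED-fit distance [pc]

# By the definition of the parsec, 1 arcsec at 1 pc subtends 1 AU,
# so the projected separation is s[AU] = rho[arcsec] * d[pc].
s_au = (rho_mas / 1000.0) * d_pc   # ~154 AU

# Kepler's third law in solar units: P[yr] = sqrt(a[AU]^3 / M[Msun]).
# Assumed total mass: 1.32 Msun primary + ~0.2 Msun M-dwarf companion.
m_tot = 1.32 + 0.2
p_yr = math.sqrt(s_au**3 / m_tot)  # of order 1500-1700 yr
```

The result is of the same order as the $\sim 1700$ year period quoted above for a circular, face-on orbit; the exact value depends on the assumed total mass.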
\begin{figure}[t] \epsscale{1.0} \plotone{kelt1_AO_image_kp.eps} \caption{ A Keck AO image of KELT-1 taken with NIRC2 on UT 2012-01-07 in the $K'$ filter. North is up and East is to the left. A 0.5\arcsec bar is shown for scale. A faint companion with $\Delta K' = 5.59 \pm 0.12$ located $588 \pm 1$ mas to the southeast is clearly visible. \label{fig:aoimage}} \end{figure} We note that the companion is unresolved in our follow-up primary transit photometry, and thus in principle leads to a dilution of the transit signal and a bias in the parameters we infer from a fit to the photometry described in \S\ref{sec:analysis}. However, the effect is negligible. As we discuss in the next section, we are confident that the primary is being eclipsed. Thus the fractional effect on the transit depth is of the same order as the fractional contribution of the companion flux to the total flux, which is $<1\%$. \section{Evidence Against a Blend Scenario}\label{sec:blend} One of the many challenges of photometric surveys for transiting planets is the relatively high rate of astrophysical false positives, blended eclipsing stellar binary or triple systems that can mimic some of the observable signatures of transiting low-mass companions to single stars. In the case of the KELT-North survey, one generically expects a higher rate of false positives as compared to other wide-field transit surveys such as HATNet or SuperWASP, because of the poorer image quality arising from the comparatively smaller aperture, larger pixel scale, and wider FOV. For KELT-1 in particular, the extreme properties of the companion, relatively high $\ensuremath{\,{v\sin{I_*}}}$ of the primary, and the fact that the primary is somewhat evolved, are all reminiscent of false positives that have been reported in previous surveys, e.g., \citet{mandushev2005}. In the case of KELT-1b, however, we have a number of lines of evidence that strongly disfavor a blend scenario. 
The most compelling arguments against blend scenarios arise from the spectra. First is the lack of strong evidence for bisector span variations. The lower panel of Figure \ref{fig:phased} shows the bisector variations phased to the best-fit period of the companion as determined from the joint fit to the RV and photometry data described in \S\ref{sec:analysis}. There is no evidence for bisector variations correlated with the orbital phase of the companion. The weighted RMS of the bisector spans, excluding the data taken on UT 2012-01-07, is $\sim 120~\ensuremath{\,{\rm m~s^{-1}}}$, only $\sim 30\%$ larger than would be expected based on the native uncertainties, and a factor of $\sim 30$ smaller than the RMS of the RV measurements themselves. Figure \ref{fig:RVvsBS} shows the bisector spans as a function of radial velocity relative to the system barycenter. There is no strong correlation; the correlation coefficient is only -0.17. In contrast, Figure \ref{fig:RVvsBSrm} shows data taken on the night of UT 2012-01-07, which covered the primary transit. For the subset of these data taken within 0.03 days of the transit center (approximately the middle half of the transit), there is a clear correlation between the radial velocity and the bisector variations, with a correlation coefficient of 0.68. This is expected, since the anomalous radial velocity shift from the RM effect is due to a distortion of the rotationally broadened stellar spectral lines as the planet progressively occults the light from different parts of the face of the star. Indeed, the second piece of evidence that the transit signatures are indeed due to a small companion occulting the primary star is the RM signal itself (Fig.\ \ref{fig:RM}), which has an amplitude consistent with the apparent transit depth and the spectroscopically determined $\ensuremath{\,{v\sin{I_*}}}$. 
Third, photometric observations in several different filters ($riz$) are all consistent with the primary transit having nearly the same depth, and are well-modeled by transits of a dark companion across a star with the limb darkening consistent with its spectroscopically measured $\ensuremath{T_{\rm eff}}$ and $\ensuremath{\log{g}}$ (see Section \ref{sec:analysis}). Fourth, photometric observations at the predicted time of superior conjunction reveal no evidence for a secondary eclipse at the $\la 1$ mmag level. These latter two pieces of evidence tend to exclude or strongly disfavor blend scenarios in which the observed transits are due to diluted eclipses of a much fainter and redder eclipsing binary (e.g., \citealt{odonovan2006}). Finally, our adaptive optics imaging does not reveal any sources further than $\sim 0.25$\arcsec from the primary that could be both blended with it in seeing-limited images {\it and} cause transits at the observed depth of $\sim 1\%$. The one source we do detect, the putative tertiary, has a flux ratio relative to the primary of only $\sim 0.5\%$ in the near-IR, and is likely considerably fainter in the optical, and thus is too faint to explain the observed transits. We did not perform any detailed modeling to determine the viability of specific blend scenarios. We defer here to \citet{bakos2012}, who argue that such analyses are generally unnecessary in situations in which there are no significant bisector variations, the transit ingress/egress durations are short compared to the total duration, and the radial velocity variations phase with the predicted transit ephemeris. We conclude that all of the available data are best explained as due to a Jupiter-sized, brown dwarf companion transiting a rapidly-rotating mid-F star, with little or no evidence for significant contamination from blended sources. 
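The bisector test used above amounts to asking whether the bisector spans correlate with the measured RVs. A minimal sketch of that correlation test, with synthetic data standing in for the real measurements (all amplitudes below are assumed for illustration only):

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two samples.
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

rng = np.random.default_rng(1)
rv = rng.uniform(-4.0, 4.0, 25)             # orbital RVs [km/s], assumed
bs_orbit = 0.12 * rng.standard_normal(25)   # spans: pure noise (no blend)
bs_rm = 0.05 * rv + 0.05 * rng.standard_normal(25)  # RM-like distortion

r_orbit = pearson_r(rv, bs_orbit)  # near zero, as for the orbital data
r_rm = pearson_r(rv, bs_rm)        # strongly positive, as in transit
```

A blend would imprint a phase-correlated bisector signal on the out-of-transit data as well, which is exactly what the measured coefficient of -0.17 argues against.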
Under this assumption, we proceed in the following section to analyze these data in order to determine the physical properties of the KELT-1 host star and its short-period, low-mass companion. \begin{figure}[t] \epsscale{1.0} \plotone{RVvsBS.eps} \caption{ \label{fig:RVvsBS} Bisector spans versus the RV relative to the system barycenter, excluding observations taken on the night of the primary transit on UT 2012-01-07. There is no evidence of a significant correlation between the bisector and RV variations, and the RMS of the bisector span variations is $\sim 30$ times smaller than the RMS of the RV measurements. } \end{figure} \begin{figure}[t] \epsscale{1.0} \plotone{RVvsBSrm.eps} \caption{ \label{fig:RVvsBSrm} Bisector spans versus the RV relative to the system barycenter for observations taken on the night of the primary transit on UT 2012-01-07. The points in red are the subset of those data that were taken within 0.03 days of the center of the primary transit, roughly corresponding to the middle half of the full transit duration. Note that these data are strongly correlated with the RV variations due to the RM effect. } \end{figure} \section{Characterization of the Star, Companion, and Orbit} \label{sec:char} \subsection{Properties of the Host Star}\label{sec:hostprops} Table \ref{tab:hostprops} lists various collected properties and measurements of the KELT-1 host star. Many of these have been culled from the literature, and the remainder are derived in this section. In summary, KELT-1 is a mildly evolved, solar-metallicity, mid-F star with an age of $\sim 1.5-2$ Gyr located at a distance of $\sim 260$~pc, with kinematics consistent with membership in the thin disk. 
We construct an empirical spectral energy distribution (SED) of KELT-1 using optical fluxes in the $B_T$ and $V_T$ passbands from the Tycho-2 catalog \citep{hog2000}, near-infrared (IR) fluxes in the $J$, $H$ and $K\!s$ passbands from the 2MASS Point Source Catalog \citep{skrutskie2006,cutri2003}, and near- and mid-IR fluxes in the four WISE passbands \citep{wright2010,cutri2012}. This SED is shown in Figure \ref{fig:sed}. We fit this SED to NextGen models from \citet{hauschildt1999} by fixing the values of $\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$ and [Fe/H] inferred from the global fit to the light curve and RV data as described in \S\ref{sec:analysis} and listed in Table \ref{tab:physpars}, and then finding the values of the visual extinction $A_V$ and distance $d$ that minimize $\chi^2$. We find $A_V = 0.18 \pm 0.10$ and $d=262\pm 14$~pc, with a $\chi^2 = 10.5$ for 6 degrees of freedom, indicating a reasonable fit ($P(>\chi^2)\sim 10\%$). We also performed a fit to the SED without priors, finding $\ensuremath{T_{\rm eff}}=6500 \pm 400$K, $A_V=0.20 \pm 0.15$, $\ensuremath{\log{g}}=4.25 \pm 0.75$ and [Fe/H]$=-0.5\pm 0.5$, consistent with the constrained fit. There is no evidence for an IR excess. \begin{figure} \includegraphics[scale=0.35,angle=90]{kelt1.sed.parfix.fin.ps} \caption{The red error bars indicate measurements of the flux of the KELT-1 host star in various optical and IR passbands. The vertical error bar indicates the photometric uncertainty, whereas the horizontal error bar indicates the effective width of the passband. The solid curve is the best-fit theoretical SED from the NextGen models of \citet{hauschildt1999}, assuming $\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$ and [Fe/H] fixed at the values in Table \ref{tab:physpars}, with $A_V$ and $d$ allowed to vary. The blue dots are the predicted passband-integrated fluxes of the best-fit theoretical SED. 
\label{fig:sed}} \end{figure} We note that the quoted statistical uncertainties on $A_V$ and $d$ are likely to be underestimated, because we have not accounted for the uncertainties in values of $\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$ and [Fe/H] used to derive the model SED. Furthermore, it is likely that alternate model atmospheres would predict somewhat different SEDs and thus values of the extinction and distance. In Figure \ref{fig:hr} we plot the predicted evolutionary track of KELT-1 on a theoretical HR diagram ($\ensuremath{\log{g}}$ vs.\ $\ensuremath{T_{\rm eff}}$), from the Yonsei-Yale stellar models \citep{demarque2004}. Here again we have used the values of $M_*$ and [Fe/H] derived from the global fit (\S\ref{sec:analysis} and Table \ref{tab:physpars}). We also show evolutionary tracks for masses corresponding to the $\pm 1~\sigma$ extrema in the estimated uncertainty. In order to estimate the age of the KELT-1 system, we compare these tracks to the values of $\ensuremath{T_{\rm eff}}$ and $\ensuremath{\log{g}}$ and associated uncertainties as determined from the global fit. These intersect the evolutionary track for a fairly narrow range of ages near $\sim 2$ Gyr. The agreement between the prediction from the evolutionary track at this age and the inferred temperature and surface gravity for KELT-1 is remarkably good, but perhaps not entirely surprising. The values of $\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$, [Fe/H] and $M_*$ were all determined in the global fit to the light curve and RV data in \S\ref{sec:analysis}, which uses the empirical relations between ($\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$, [Fe/H]) and ($M_*, R_*$) inferred by \citet{torres10} as priors on the fit, in order to break the well-known degeneracy between $M_*$ and $R_*$ for single-lined spectroscopic eclipsing systems. 
These empirical relations are known to reproduce the constraints between these parameters imposed by the physics of stellar evolution quite well (see, e.g., Section 8 in \citealt{torres10}). \begin{figure} \includegraphics[scale=0.35,angle=90]{kelt1.hrd.comp.ps} \caption{Theoretical HR diagram based on Yonsei-Yale stellar evolution models \citep{demarque2004}. The solid track is the isochrone for the best-fit values of the mass and metallicity of the host star from the joint fit described in \S\ref{sec:analysis}, $M_*=1.32 \pm 0.03~M_\odot$ and [Fe/H]$=0.01 \pm 0.07$. The red cross shows the best-fit $\ensuremath{T_{\rm eff}}=6513\pm 50$K and $\ensuremath{\log{g}}=4.229_{-0.020}^{+0.012}$ from the final analysis. The black cross shows the inferred $\ensuremath{T_{\rm eff}}$ and $\ensuremath{\log{g}}$ from the spectroscopic analysis alone. The blue dots represent the location of the star for various ages in Gyr. The host star is slightly evolved with a probable age of $\sim 2$ Gyr, although a similar analysis with a different stellar evolutionary model prefers a slightly younger age of $\sim 1.5-1.75$ Gyr. The two sets of dashed tracks represent the tracks for the extreme range of the 1-$\sigma$ uncertainties on $M_*$ and [Fe/H] from the final analysis (dark shaded), and from the spectroscopic constraints alone (light shaded). \label{fig:hr}} \end{figure} Based on its $\ensuremath{T_{\rm eff}}$ and $J-K$ color and the empirical table of spectral type versus color and $\ensuremath{T_{\rm eff}}$ for main-sequence stars from \citet{kenyon1995}, we infer the spectral type of KELT-1 to be F5 with an uncertainty of roughly $\pm 1$ spectral type. 
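The SED fit quality quoted in this section ($\chi^2 = 10.5$ for 6 degrees of freedom, $P(>\chi^2)\sim 10\%$) can be verified with the closed-form chi-square survival function, which exists for an even number of degrees of freedom:

```python
import math

def chi2_sf(x, dof):
    # Survival function P(>x) of the chi-square distribution; the
    # closed-form series is valid for an even number of degrees of freedom.
    assert dof % 2 == 0
    half = x / 2.0
    return math.exp(-half) * sum(half**k / math.factorial(k)
                                 for k in range(dof // 2))

# SED fit quoted above: chi^2 = 10.5 with 6 degrees of freedom
p = chi2_sf(10.5, 6)  # ~0.105, i.e. an acceptable fit at the ~10% level
```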
We determined the Galactic $U,V,W$ space velocities of the KELT-1 system using the proper motion of $(\mu_{\alpha},\mu_{\delta})=(-10.1 \pm 0.7,-9.4\pm 0.7)$~mas~yr$^{-1}$ from the NOMAD catalog \citep{zacharias2004}, the distance of $d=262\pm 14$~pc from our SED fit described above, and the barycentric radial velocity of the system as determined from the TRES observations (\S\ref{sec:flwos}) of $\gamma_{\rm obs}=-14.2 \pm 0.2~\ensuremath{\,{\rm km~s^{-1}}}$. We used a modification of the IDL routine {\tt GAL\_UVW}, which is itself based on the method of \citet{johnson1987}. We adopt the correction for the Sun's motion with respect to the Local Standard of Rest from \citet{coskunoglu2011}, and choose a right-handed coordinate system such that positive $U$ is toward the Galactic Center. We find $(U,V,W)= (19.9 \pm1.1,-9.6 \pm 0.5, -2.6 \pm 0.9)\ensuremath{\,{\rm km~s^{-1}}}$, consistent with membership in the thin disk \citep{bensby2003}. We note also that the distance of KELT-1 from the Galactic plane is $\sim 80$~pc. Finally, we use the solar evolutionary models of \citet{guenther1992}, updated with input physics from \citet{vansaders2012}, to gain some insight into the detailed structure of the host star. Fixing the mass and metallicity at the values determined from the global fit (\S\ref{sec:analysis} and Table~\ref{tab:physpars}), we evolved the model forward until the model $\ensuremath{\log{g}}$ and $\ensuremath{T_{\rm eff}}$ approximately matched the values inferred for KELT-1. We found that ages of $\sim 1.5-1.75$ Gyr best matched the available constraints; this model thus prefers a somewhat younger age than the Yonsei-Yale models (Fig.\ \ref{fig:hr}; \citealt{demarque2004}). We therefore adopt an age of $1.75 \pm 0.25$ Gyr, consistent with both estimates. 
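The tangential-velocity ingredient of the $U,V,W$ calculation above follows from the proper motion and distance alone; the constant 4.74 is the speed in km/s corresponding to 1 AU yr$^{-1}$:

```python
import math

mu_ra, mu_dec = -10.1, -9.4   # NOMAD proper motion components [mas/yr]
d_pc = 262.0                  # distance from the SED fit [pc]

mu_arcsec = math.hypot(mu_ra, mu_dec) / 1000.0  # total proper motion ["/yr]
v_tan = 4.74 * mu_arcsec * d_pc                 # tangential velocity [km/s]
# ~17 km/s; combining this with the radial velocity and projecting onto
# Galactic axes (plus the solar-motion correction) yields (U, V, W).
```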
For this range of ages, the models of \citet{guenther1992} predict a radius of the base of the convective zone of $R_{cz}=1.30\pm 0.03~R_\odot$, and a very small mass for the convective zone of $M_{cz}=[2.8\pm 0.14]\times 10^{-5}~M_\odot$, as expected given the effective temperature of $\ensuremath{T_{\rm eff}} \sim 6500$K. In addition, the moment of inertia for the star and convective zone are $C_* = [1.15\pm 0.04] \times 10^{54}~{\rm g~cm^2}$ and $C_{cz}=[3.2 \pm 0.6]\times 10^{51}~{\rm g~cm^2}$, respectively. We can also write the moment of inertia of the star as $C_*=\alpha_*M_*R_*^2$ with $\alpha_*=0.0422$ \citep{guenther1992}. We will use these to estimate the angular momenta of the star, companion and orbit in \S \ref{sec:tides}. \subsection{System Properties Derived from a Joint Fit} \label{sec:analysis} It is well known that a joint fit to high-quality RVs and transit photometry of a transiting planet system allows one to determine the mass and radius of the star and planet, as well as the semi-major axis of the orbit, in principle to very high precision, up to a perfect one-parameter degeneracy \citep{seager2003}. This degeneracy arises because the duration, depth, and shape of the primary transit, when combined with the eccentricity, longitude of periastron, and period of the planet from RV data, allow one to precisely estimate the density of the primary star $\rho_*$, but not $M_*$ or $R_*$ separately. Breaking this $M_*-R_*$ degeneracy generally requires imposing some external constraint, such as a theoretical mass-radius relation \citep{cody2002,seager2003}, or constraints from theoretical isochrones (e.g., \citealt{bakos2012}). In principle, a measurement of $\ensuremath{\log{g}}$ from a high-resolution spectrum can be used to break the degeneracy, but in practice these measurements are generally not competitive with the constraint on $\rho_*$ and often have systematic uncertainties that are comparable to the statistical uncertainties. 
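The density constraint discussed above can be made concrete: for a circular orbit, the period and the scaled semimajor axis inferred from the transit shape give $\rho_*$ directly, independent of $M_*$ and $R_*$ separately. The value of $a/R_*$ below is an assumed, representative number for this system, not the fitted value:

```python
import math

G = 6.674e-8                  # gravitational constant [cgs]
P = 1.217513 * 86400.0        # orbital period [s]
a_over_rstar = 3.6            # assumed scaled semimajor axis (illustrative)

# rho_* = (3 pi / (G P^2)) * (a/R_*)^3 for a circular orbit
rho_star = 3.0 * math.pi / (G * P**2) * a_over_rstar**3
# ~0.6 g cm^-3, roughly 0.4x the solar mean density, as expected
# for a mildly evolved mid-F star
```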
We fitted the RV and transit data using a modified version of the IDL fitting package EXOFAST \citep{eastman12}. The approach of EXOFAST to breaking the $M_*-R_*$ degeneracy is similar to the method described in, e.g., \citet{anderson2012}, but with significant differences. We will review it briefly here, but point the reader to \citet{eastman12} for more details. We fitted the RV and transit data simultaneously with a standard Keplerian and transit \citep{mandel2002} model using a modified MCMC method (described in more detail below). In addition to the standard fitting parameters, we also included $\ensuremath{T_{\rm eff}}, \ensuremath{\log{g}}$, [Fe/H] as proposal parameters. We then included priors on the measured values of $\ensuremath{T_{\rm eff}}, \ensuremath{\log{g}}$, [Fe/H] as determined from analysis of the TRES spectra and given in \S\ref{sec:flwos}. In addition, we included priors on $M_*$ and $R_*$, which are based on the empirical relations between ($\ensuremath{T_{\rm eff}}, \ensuremath{\log{g}}$, [Fe/H]) and ($M_*, R_*$) determined by \citet{torres10}. These priors essentially break the $M_*-R_*$ degeneracy, as they provide similar constraints as isochrones, i.e., they encode the mapping between the $M_*$, [Fe/H] and age of a star to its $\ensuremath{T_{\rm eff}}$ and $\ensuremath{\log{g}}$ as dictated by stellar physics. We fitted the 6 primary transits, Doppler RV, stellar parameters, and RM effect simultaneously using EXOFAST, which employs a Differential Evolution Markov Chain Monte Carlo (DE-MC) method \citep{braak06}. We converted all times to the \ensuremath{\rm {BJD_{TDB}}} \ standard \citep{eastman10}, and then at each step in the Markov Chain, we converted them to the target's barycentric coordinate system (ignoring relativistic effects). Note the final times were converted back to \ensuremath{\rm {BJD_{TDB}}} \ for ease of use. 
This transformation accurately and transparently handles the light travel time difference between the RVs and transits. First, we fitted the Doppler RV data independently to a simple Keplerian model, ignoring the RM data taken on UT 2012-01-07. At this stage, we did not include any priors on the stellar parameters, as they do not affect the RV-only fit. We scaled the uncertainties such that the probability that the \ensuremath{\chi^{\,2}} \ was larger than the value we achieved, $\ensuremath{P\left(>\ensuremath{\chi^{\,2}} \right)}$, was $0.5$, to ensure the resulting parameter uncertainties were roughly accurate. For a uniform prior in eccentricity, we found the orbit is consistent with circular, with a $3\sigma$ upper limit of $e < 0.04$. Nevertheless, in order to provide conservative estimates for the fit parameters, we allowed for a non-zero eccentricity in our final fit. However, to test the effect of this assumption, we repeated the final fit forcing $e=0$. We also investigated the possibility of a slope in the RVs, but found it to be consistent with zero, so we did not include this in the final fit. Next, we fitted each of the 6 transits individually, including a zero point $F_{0,i}$ and an airmass detrending coefficient $C_{0,i}$ for each transit $i$. The airmass detrending coefficient was significant ($>1 \sigma$) for all but one transit, so for consistency, we included it for all. After finding the best fit with AMOEBA \citep{nelder65}, we scaled the errors for each transit such that $\ensuremath{P\left(>\ensuremath{\chi^{\,2}} \right)}=0.5$. At this stage, we included the priors on the stellar parameters as described above. 
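The uncertainty rescaling used repeatedly above, forcing $P(>\chi^2)=0.5$, amounts to multiplying all error bars by a constant factor so that the fit's $\chi^2$ lands on the median of its expected distribution. A sketch with toy residuals (the numbers below are illustrative, not data from this work):

```python
import math

def chi2_sf(x, dof):
    # P(>x) for the chi-square distribution, valid for even dof.
    half = x / 2.0
    return math.exp(-half) * sum(half**k / math.factorial(k)
                                 for k in range(dof // 2))

def scale_errors(resid, sigma, dof):
    # Return the constant factor by which to inflate the per-point
    # uncertainties so that the resulting chi^2 sits at the median of
    # its distribution, i.e. P(>chi^2) = 0.5.
    chi2 = sum((r / s)**2 for r, s in zip(resid, sigma))
    lo, hi = 0.0, 10.0 * dof          # bisect for the chi^2 median
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if chi2_sf(mid, dof) > 0.5:   # sf decreases with x
            lo = mid
        else:
            hi = mid
    chi2_med = 0.5 * (lo + hi)
    return math.sqrt(chi2 / chi2_med)

# Toy data: uncertainties underestimated by a factor of roughly two
resid = [2.1, -1.8, 2.5, -2.2, 1.9, -2.4]
sigma = [1.0] * 6
f = scale_errors(resid, sigma, dof=6)  # inflation factor ~2.3
```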
Next, we performed a combined fit to all the data, including a prior on the projected stellar rotation velocity ($\ensuremath{\,{v\sin{I_*}}}=55.2 \pm 2~\ensuremath{\,{\rm km~s^{-1}}}$) from the spectra\footnote{The prior on $\ensuremath{\,{v\sin{I_*}}}$ improves the determination of the spin-orbit alignment angle $\lambda$ \citep{gaudi06}. We also performed a fit without this prior, finding results that were roughly consistent with, although less precise than, those with the prior.}, and a prior on the period from the KELT-North discovery light curve ($P=1.217513 \pm 0.000015$~days). Because it is usually systematics in the RV data that vary over long time scales (due to a combination of instrumental drift and stellar jitter) that ultimately set the error floor for the RVs, we expect the uncertainties of densely packed observations to be smaller than the RMS of all observations, but with a systematic offset relative to the rest of the orbit. Therefore, we fitted a separate zero point during the RM run, and also scaled the errors on the RVs during transit to force $\ensuremath{P\left(>\ensuremath{\chi^{\,2}} \right)} = 0.5$ for those subsets of points. \ensuremath{P\left(>\ensuremath{\chi^{\,2}} \right)} depends on the number of degrees of freedom, but it is not obvious how many degrees of freedom there are in the RM run: technically, the entire orbit and the transit affect the \ensuremath{\chi^{\,2}} of the RM (13 parameters), but when fit simultaneously with the transits, the freedom of the RM measurements to influence most of those parameters is very limited. Indeed, even \ensuremath{\,{v\sin{I_*}}} \ is constrained more by the spectroscopic prior than by the RM in this case, which means there are only two parameters (the projected spin-orbit alignment, $\lambda$, and the zero point, $\gamma$) that are truly free. 
To be conservative, we subtracted another degree of freedom to encompass all the other parameters on which the RM data has a slight influence, before scaling the errors. The RM data were modeled using the \citet{ohta05} analytic approximation with linear limb darkening. At each step in the Markov Chain, we interpolated the linear limb darkening tables of \citet{claret11} based on the chain's value for \ensuremath{\log{g}}, \ensuremath{T_{\rm eff}}, and {\rm [Fe/H]} \ to derive the linear limb-darkening coefficient, $u$. We assumed the $V$ band parameters to approximate the bandpass of TRES, though we repeated the exercise in the $B$-band with no appreciable difference in any of the final parameters. Note that we do {\it not} fit for the limb-darkening parameters, as the data are not sufficient to constrain them directly. The uncertainties in all the limb-darkening parameters provided in Table \ref{tab:fitpars} arise solely from the scatter in \ensuremath{\log{g}}, \ensuremath{T_{\rm eff}}, {\rm [Fe/H]}. We assume no error in the interpolation of the limb-darkening tables. In order to search for Transit Timing Variations (TTVs), during the combined fit, we fitted a new time of transit, $T_{C,i}$ for each of the $i$ transits. Therefore, the constraints on $T_C$ and $P$ (quoted in Tables \ref{tab:fitpars} and \ref{tab:physpars}, respectively) come from the prior imposed from the KELT-North light curve and the RV data, not the follow-up light curves. Using these times to constrain the period during the fit would artificially reduce any observed TTV signal. A separate constraint on $T_C$ and $P$ follows from fitting a linear ephemeris to the transit times, as discussed in \S \ref{sec:ttv}. It is the result from this fit that we quote as our final adopted ephemeris. The results from this global fit are summarized in Tables \ref{tab:physpars} and \ref{tab:fitpars}. 
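The per-step limb-darkening lookup can be sketched as a simple grid interpolation. The grid values below are made-up placeholders (not Claret's actual coefficients), the grid is restricted to ($\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$) at fixed [Fe/H] for brevity, and \texttt{RegularGridInterpolator} stands in for whatever interpolation scheme is used internally:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical stand-in for a slice of the Claret (2011) linear
# limb-darkening tables: u tabulated on a (Teff, logg) grid at fixed
# [Fe/H]. The numerical values below are invented for illustration.
teff_grid = np.array([6000.0, 6250.0, 6500.0, 6750.0])
logg_grid = np.array([3.5, 4.0, 4.5])
u_grid = np.array([
    [0.52, 0.50, 0.49],
    [0.50, 0.48, 0.47],
    [0.48, 0.46, 0.45],
    [0.46, 0.44, 0.43],
])

interp_u = RegularGridInterpolator((teff_grid, logg_grid), u_grid)

# At each MCMC step, look up u for the chain's current Teff and logg.
u = interp_u([[6518.0, 4.23]]).item()
```

Because $u$ is derived deterministically from the chain's stellar parameters, its quoted uncertainty inherits only the scatter in $\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$, and [Fe/H], as stated in the text.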
We also show the results for the physical parameters assuming $e=0$ in Table \ref{tab:physpars}; the differences between the fixed and free eccentricity fits are always smaller than their uncertainties, and generally negligible for most of these parameters. The values of $\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$, and [Fe/H] we infer from the global fit are in agreement with the values measured directly from the TRES spectra to within the uncertainties. Since the spectroscopic values were used as priors in the global fit, this generally indicates that there is no tension between the value of $\rho_*$ inferred from the light curve and RV data, and the spectroscopic values. The median value and uncertainty for $\ensuremath{T_{\rm eff}}$ are nearly unaffected by the global fit. While the median value for [Fe/H] has changed slightly, the uncertainty is very similar to that from the input prior. On the other hand, the uncertainty in $\ensuremath{\log{g}}$ from the global fit is a factor of $\ga 5$ smaller than the uncertainty from the spectroscopic measurement. This is not surprising, since the constraint on $\rho_*$ from the RV and light curve data provides a strong constraint on $\ensuremath{\log{g}}$ via the relations of \citet{torres10}. Following papers by the HATNet collaboration (e.g., \citealt{bakos2011,hartman2011}), we also present in Table \ref{tab:physpars} our estimates of the median values and uncertainties for a number of derived physical parameters of the system that may be of interest, including the planet equilibrium temperature assuming zero albedo and perfect redistribution $T_{eq}$, the average amount of stellar flux at the orbit of the companion $\left< F \right>$, and the Safronov number $\Theta = (a/R_p)(M_P/M_*)$ (e.g., \citealt{handsen2007}). 
In addition, in Table \ref{tab:fitpars} we quote our estimates of various fit parameters and intermediate derived quantities for the Keplerian RV fit, the primary transits, and the secondary eclipse. We note that the final uncertainties we derive for $M_*$, $R_*$, $\ensuremath{\log{g}}$ and $\rho_*$ are relatively small, $\sim 2\%-5\%$. These uncertainties are similar to those found for other transiting planet systems using methods similar to the one used here (e.g., \citealt{anderson2012}). Specifically, these methods derive physical parameters from a fit to the light curve and RV data, which simultaneously imposes the empirical constraint between ($M_*$, $R_*$) and ($\ensuremath{T_{\rm eff}}$, $\ensuremath{\log{g}}$, [Fe/H]) derived by \citet{torres10}. This constraint helps to break the $M_*-R_*$ degeneracy pointed out by \citet{seager2003} and discussed above, and ultimately dictates our final uncertainty on $M_*$ and $R_*$ (and thus $M_P$ and $R_P$); in comparison, our spectroscopic measurement of $\ensuremath{\log{g}}$ provides a much weaker constraint. Given that our results rely so heavily on the \citet{torres10} relations, it is worthwhile to ask to what extent our parameters and uncertainties might be affected should these relations be systematically in error. First, as already noted, these empirical relations are known to agree well with stellar isochrones in general \citep{torres10}, and for KELT-1 in particular (Fig.\ \ref{fig:hr}). Second, analyses using stellar isochrones rather than empirical relations produce similar uncertainties on $M_*$ and $R_*$ (e.g., \citealt{bakos2011}), suggesting that the small uncertainties we derive are not a by-product of our methodology. Finally, \citet{southworth2009} demonstrated that the results of the analysis of 14 transiting systems with several different sets of isochrones generally agree to within a few percent. 
We therefore conclude that our results are likely accurate, with systematic errors at the level of our statistical uncertainties (a few percent). \subsection{System Ephemeris and Transit Timing Variations} \label{sec:ttv} Table \ref{tab:ttimes} lists the measured transit times for each of the six modeled transits, and Figure \ref{fig:ttvs} shows the residuals of these times from a fit to a linear ephemeris. The best fit has \begin{eqnarray} \nonumber T_C(\ensuremath{\rm {BJD_{TDB}}}) &=& 2455909.292797 \pm 0.00024\\ P &=& 1.2175007 \pm 0.000018~{\rm days}, \label{eqn:ephem} \end{eqnarray} which is consistent with the ephemeris derived from the KELT-North light curve alone. The $\chi^2=44.9$ for the linear fit with 4 degrees of freedom is formally quite poor. This is mostly driven by one nominally significant ($5 \sigma$) outlier, specifically for the transit observed on UT 2011-12-16 from FLWO. We note that the faint companion to KELT-1, if indeed bound, is too distant to explain such large TTVs. We have taken great care to ensure the accuracy of our clocks and our conversion, and the fact that the residuals from different observatories roughly follow the same trend in time suggests that a catastrophic error in the observatory clock cannot be blamed. Since we fit the trend with airmass simultaneously with the transit, the potentially-large covariance between it and the transit time should be accurately reflected in the quoted uncertainties \citep{eastman12}. Nevertheless, our MCMC analysis does not adequately take into account the effect of systematic uncertainties, and in particular we do not account for correlated uncertainties \citep{pont2006,carter2009}, which could skew the transit time of a given event substantially. And, given the results from Kepler which suggest the rarity of such transit timing variations \citep{steffen2012}, we are reluctant to over-interpret this result. Nevertheless, this is an interesting target for future follow-up. 
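Fitting a linear ephemeris to the measured transit times is a weighted least-squares problem in the epoch number. A minimal sketch with made-up transit times (the measured values are those in Table \ref{tab:ttimes}); the injected few-minute outlier at one epoch mimics the kind of deviation discussed above:

```python
import numpy as np

def fit_linear_ephemeris(epochs, t_c, sigma):
    """Weighted least-squares fit of T_C(E) = T0 + P * E.

    Returns (T0, P) and the chi^2 of the fit; the residuals
    t_c - (T0 + P * epochs) are the transit timing variations.
    """
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(epochs), epochs]).T
    # Solve the weighted normal equations for (T0, P).
    cov = np.linalg.inv(A.T @ (w[:, None] * A))
    t0, p = cov @ (A.T @ (w * t_c))
    chisq = np.sum(w * (t_c - (t0 + p * epochs)) ** 2)
    return t0, p, chisq

# Hypothetical transit times (BJD_TDB) and uncertainties, in days;
# one epoch carries a deliberate ~3.6-minute outlier.
epochs = np.array([0.0, 1.0, 5.0, 9.0, 14.0, 18.0])
t_c = 2455909.2928 + 1.2175007 * epochs + np.array(
    [0.0e-3, 0.4e-3, -0.6e-3, 2.5e-3, -0.3e-3, 0.2e-3])
sigma = np.full(6, 0.5e-3)

t0, p, chisq = fit_linear_ephemeris(epochs, t_c, sigma)
dof = len(epochs) - 2   # 6 times, 2 fitted parameters
```

A single strong outlier is enough to drive $\chi^2$ well above the number of degrees of freedom, as found for the real data.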
\begin{figure}[h] \epsscale{1.2} \hskip-0.5in \plotone{final2ttv.eps} \caption{The residuals of the transit times from the best-fit (linear) ephemeris. The transit times are given in Table \ref{tab:ttimes}. } \label{fig:ttvs} \end{figure} \subsection{Secondary Eclipse Limits}\label{sec:second} \begin{figure}[h] \epsscale{1.0} \plotone{abfcontour.eps} \caption{Values of the heat redistribution parameter $f'$ and Bond albedo $A_B$ that are excluded at a given confidence level based on the data taken during the secondary eclipse shown in Figure \ref{fig:secondary}. The $f'$ parameter describes the efficiency of heat redistribution, and is 1/4 in the case of uniform redistribution, and 2/3 in the case of no redistribution. In between these two extremes $f'$ is not easily related to the amount of heat redistribution. The orange section corresponds to eclipse depths detectable at less than the $68\%$ confidence level, the light orange for 68-90\% confidence, yellow for 90-95\% confidence, and the white for $>95\%$ confidence. } \label{fig:secconst} \end{figure} We observed the predicted secondary eclipses of KELT-1b assuming $e=0$ on UT 2011-12-02, 2011-12-30, and 2012-01-04. In none of the three were we able to detect a secondary eclipse. The observations on 2011-12-02 and on 2012-01-04 were taken from the ULMO Observatory in $i$. Both nights we were able to observe through the predicted ingress and egress of the potential secondary. The two $i$ band light curves have a combined 236 data points, and show an RMS scatter of 1.56 mmag. The observations on 2011-12-30 were taken with the FTN telescope in Pan-STARRS-$Z$. In this case we were only able to begin observations half way through the predicted secondary eclipse. This $Z$-band light curve has 72 points, and an RMS scatter of 0.75 mmag. We used the system parameters derived from the joint fit (\S\ref{sec:analysis}) to fit our three observations. 
Since we did not detect a secondary eclipse, we used these fits to explore the combination of heat redistribution efficiency and Bond albedo $A_B$ that would give rise to a secondary eclipse depth that is inconsistent with our data. To do so, we calculated the secondary eclipse depths we would expect for a range of redistribution efficiencies and albedos. In calculating the expected secondary eclipse depths, we made the assumption that both the star and the planet were blackbodies. We also assumed that the planet was a grey Lambert sphere, so the geometric albedo $A_g = (2/3)A_B$, and the spherical albedo is constant as a function of wavelength. Following \citet{seager2010}, we parametrized the heat redistribution efficiency as $f'$, which is 1/4 in the case of uniform redistribution, and 2/3 in the case of no redistribution. We note that in between these two extremes $f'$ is not easily related to the amount of heat redistribution, i.e., $f'=0.45$ does not imply half of the incident stellar energy is redistributed around the planet. To test these expected secondary eclipse depths against our observations, we fit simple trapezoidal eclipse curves with the expected depths to all three datasets simultaneously. Under our assumptions, the depth, timing, shape, and duration of the secondary eclipse are all determined by the parameters derived from the global fit and our specified values of $A_B$ and $f'$. We then fit this model to our data, allowing for a normalization and a linear trend in the flux with time. We used the $\Delta\chi^2$ between the best fit eclipse model and the best no-eclipse fit, which was itself allowed a free slope and offset, to evaluate the detectability of each of the secondary eclipse depths. We used the $\chi^2$ distribution to transform these $\Delta\chi^2$ values into detection probabilities. 
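Under the stated blackbody and grey Lambert-sphere assumptions, the expected eclipse depth is the sum of a thermal term and a reflected-light term. The sketch below evaluates it at a single representative $i$-band wavelength using the global-fit parameters; the wavelength choice and the single-wavelength (rather than bandpass-integrated) evaluation are simplifications for illustration:

```python
import numpy as np

H = 6.62607015e-27   # erg s (CGS)
C = 2.99792458e10    # cm/s
KB = 1.380649e-16    # erg/K

def planck(lam_cm, temp):
    """Planck spectral radiance B_lambda in CGS units."""
    x = H * C / (lam_cm * KB * temp)
    return 2 * H * C**2 / lam_cm**5 / np.expm1(x)

def eclipse_depth(ab, fprime, teff, r_star, r_p, a, lam_cm):
    """Expected secondary eclipse depth: thermal + reflected light.

    Assumes blackbodies and a grey Lambert sphere (A_g = 2/3 A_B),
    as in the text; r_star, r_p, a must share the same units.
    """
    t_day = teff * np.sqrt(r_star / a) * (fprime * (1 - ab)) ** 0.25
    thermal = (r_p / r_star) ** 2 * planck(lam_cm, t_day) / planck(lam_cm, teff)
    reflected = (2.0 / 3.0) * ab * (r_p / a) ** 2
    return thermal + reflected

# System values from the global fit, converted to cm:
r_star = 1.462 * 6.957e10
r_p = 1.110 * 7.1492e9
a = 0.02466 * 1.49598e13
# A_B = 0.1 with instant re-radiation (f' = 2/3), near 7700 Angstroms:
depth = eclipse_depth(0.1, 2.0 / 3.0, 6518.0, r_star, r_p, a, 7.7e-5)
```

Lower $f'$ (more redistribution) cools the day side and shrinks the thermal term, which is why the contours in Figure \ref{fig:secconst} vary so strongly along the bottom of the plot.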
Figure \ref{fig:secondary} shows an example light curve against a median binned version of our data. This particular curve is the secondary eclipse we would expect if KELT-1b had $A_B=0.1$, and instantaneously re-radiated its incident stellar flux, i.e., $f'=2/3$. We would have detected this event with more than 95\% confidence. Figure \ref{fig:secconst} shows the results of our exploration of the heat redistribution versus Bond albedo parameter space. The orange section corresponds to eclipse depths detectable at less than 68\% confidence, the light orange is for depths detectable with 68-90\% confidence, light yellow is for 90-95\%, and the white contains depths that would have been detected at greater than 95\% confidence. The particular shapes of the contours on this plot come from the competing effects of reflection and blackbody emission from KELT-1b on the depth of the secondary eclipse. Along the very bottom of the figure the Bond albedo is zero, and thus there is only thermal emission. We see the strong change in eclipse depth as the amount of heat redistribution decreases, causing the temperature and eclipse depth of KELT-1b to increase. Along the top of the figure, where the Bond albedo is 0.75, the reflected starlight dominates the blackbody emission such that changing the redistribution efficiency has little effect on the eclipse depth. Slightly more than half of the allowed parameter space in Figure \ref{fig:secconst} would have caused secondary eclipses detectable in our data at greater than 90\% confidence, while almost all would have been detected at more than 68\% confidence. Since we did not see a secondary eclipse in our observations, we conclude that KELT-1b either has a non-zero albedo, or redistributes some heat away from its day side, or both. 
Formally, the scenario that is most consistent with our data is that KELT-1b has both a low Bond albedo and is very efficient at redistributing heat away from its day side; however, we are reluctant to draw any strong conclusions based on these data. \begin{figure}[h] \epsscale{2.2} \plotone{mpvs3.eps} \caption{ (Top panel) Mass versus period for the known transiting companions to main-sequence stars with companion masses in the range $1-100~M_J$. An estimate of the deuterium burning limit \citep{spiegel2011} is shown as the horizontal dotted line, while the hydrogen burning limit is shown as the horizontal dashed line. The vertical line shows the division between hot and cool stars of $\ensuremath{T_{\rm eff}} = 6250$K suggested by \citet{winnrm2010}. Brown dwarfs are shown as triangles, exoplanets as squares, and low-mass stars as asterisks. KELT-1b is shown as the large star. It is the shortest period transiting brown dwarf currently known. (Bottom panel) Mass versus host star effective temperature $\ensuremath{T_{\rm eff}}$ for the sample of transiting companions shown in the top panel. As suggested by \citet{bouchy2011b}, there is some evidence that massive ($M_P\ga 5~M_{\rm Jup}$) companions are preferentially found around hot ($\ensuremath{T_{\rm eff}} \ga 6000$K) stars, and KELT-1b follows this possible trend. Note that we exclude the BD companion to NLTT 41135 \citep{irwin2010}, and the double BD transit system 2M0535$-$05 \citep{stassun2007} in this and subsequent plots. \label{fig:mpvs}} \end{figure} \section{Discussion}\label{sec:discussion} From our global fit to the light curves and RVs, we find that KELT-1b is a low-mass companion with a measured mass $M_P=27.23_{-0.48}^{+0.50}~M_{\rm Jup}$ and radius $R_P=1.110_{-0.022}^{+0.032}~R_{\rm Jup}$. It is on a circular orbit with a semimajor axis of $a=0.02466\pm 0.00016$AU. 
The host KELT-1 is a mildly evolved mid-F star with a mass $M_*=1.324\pm0.026~M_\odot$, radius $R_*=1.462_{-0.024}^{+0.037}~R_\odot$, effective temperature $\ensuremath{T_{\rm eff}}=6518\pm 50~{\rm K}$, and a likely age of $\sim 1.5-2$Gyr. Because of its small semimajor axis and hot host, KELT-1b receives a large stellar insolation flux of $\langle F \rangle = 7.81_{-0.33}^{+0.42} \times 10^9~{\rm erg~s^{-1}~cm^{-2}}$, implying a high equilibrium temperature of $T_{eq}=2422_{-26}^{+32}~{\rm K}$ assuming zero albedo and perfect redistribution. Both the surface gravity and density of KELT-1b are substantially higher than those of its host star, and higher than we would expect for a stellar object. We find that the orbit normal of KELT-1b is well-aligned with the projected rotation axis of its host star, with a projected alignment angle of $\lambda = 2 \pm 16$ degrees. Even among the large and diverse menagerie of known transiting exoplanets and low-mass companions, KELT-1b is unique. First, it is one of only 7 unambiguous objects with masses in the range $\sim 13-80~M_{\rm Jup}$ that are known to transit stars. Among these, it has the shortest period, and orbits the brightest host star ($V=10.7$). In addition, there is potentially a stellar M dwarf companion to the primary. For all these reasons, KELT-1b is likely to be a very interesting object for further study, and we expect it will provide a benchmark system to test theories of the emplacement and evolution of short period companions, as well as the physics of tidal dissipation and irradiated atmospheres of substellar objects. We will discuss some of these ideas briefly. \subsection{Brown Dwarf or Supermassive Planet? KELT-1b and the Brown Dwarf Desert}\label{sec:bdd} Is KELT-1b a brown dwarf (BD), or is it a supermassive planet? 
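The quoted irradiation quantities follow directly from the fitted parameters via $T_{eq}=\ensuremath{T_{\rm eff}}\sqrt{R_*/2a}$ and $\langle F\rangle = \sigma \ensuremath{T_{\rm eff}}^4 (R_*/a)^2$; a quick numerical consistency check in CGS units:

```python
import math

SIGMA_SB = 5.670374419e-5   # erg s^-1 cm^-2 K^-4
R_SUN = 6.957e10            # cm
AU = 1.49598e13             # cm

teff = 6518.0               # K, from the global fit
r_star = 1.462 * R_SUN
a = 0.02466 * AU

# Zero albedo, perfect heat redistribution: T_eq = Teff * sqrt(R*/2a)
t_eq = teff * math.sqrt(r_star / (2.0 * a))
# Incident stellar flux at the orbit: <F> = sigma * Teff^4 * (R*/a)^2
flux = SIGMA_SB * teff**4 * (r_star / a) ** 2
```

Both values reproduce the quoted $T_{eq}\simeq 2422$~K and $\langle F\rangle \simeq 7.81\times 10^9~{\rm erg~s^{-1}~cm^{-2}}$ to well within their uncertainties.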
By IAU convention, brown dwarfs (BDs) are defined to have masses between the deuterium burning limit of $\sim 13~M_{\rm Jup}$ \citep{spiegel2011} and the hydrogen burning limit of $\sim 80~M_{\rm Jup}$ (e.g., \citealt{burrows1997}). Less massive objects are defined to be planets, whereas more massive objects are stars. By this definition, KELT-1b is a low-mass BD. However, it is interesting to ask whether or not KELT-1b could have plausibly formed in a protoplanetary disk, and therefore might be more appropriately considered a ``supermassive planet'' \citep{schneider2011}. More generally, it is interesting to consider what KELT-1b and systems like it may tell us about the formation mechanisms of close companions with masses in the range of $10-100M_{\rm Jup}$. One of the first results to emerge from precision Doppler searches for exoplanets was the existence of a BD desert, an apparent paucity of brown dwarf companions to FGK stars with periods less than a few years, relative to the frequency of stellar companions in the same range of periods \citep{marcy2000}. Subsequent studies uncovered planetary companions to such stars in this range of periods in abundance \citep{cumming08}, indicating that the BD desert is a local nadir in the mass function of companions to FGK stars. The simplest interpretation is that this is the gap between the largest objects that can form in a protoplanetary disk, and the smallest objects that can directly collapse or fragment to form a self-gravitating object in the vicinity of a more massive protostar. Therefore, the location of KELT-1b with respect to the minimum of the brown dwarf mass function might plausibly provide a clue to its origin. \subsubsection{Comparison Sample of Transiting Exoplanets, Brown Dwarfs, and Low-mass Stellar Companions}\label{sec:comp} In order to place the parameters of KELT-1b in context, we construct a sample of transiting exoplanets, BDs, and low-mass stellar companions to main sequence stars. 
We focus only on transiting objects, which have the advantage that both the mass and radius of the companions are precisely known\footnote{In contrast, for companions detected only via RVs, only the minimum mass is known. Of course, one can make an estimate of the posterior probability distribution of the true mass given a measured minimum mass by adopting a prior for the distribution of inclinations (e.g., \citealt{lee2011}). However, this procedure can be particularly misleading in the case of BDs: if BDs are indeed very rare, then objects with minimum mass in the BD desert are more likely to be stellar companions seen at low inclination. Anecdotally, in those cases where constraints on the inclinations can be made, companions with minimum mass near the middle of the brown dwarf desert often do turn out to be stars (e.g., \citealt{sahlmann2011,fleming2012}).}. We collect the transiting exoplanet systems from the Exoplanet Data Explorer (exoplanets.org, \citealt{wright2011}), discarding systems for which the planet mass is not measured. We supplement this list with known transiting brown dwarfs \citep{deleuil2008,johnson2011,bouchy2011a,bouchy2011b,anderson2011}. We do not include the system discovered by \citet{irwin2010}, because a radius measurement for the brown dwarf was not possible. We also do not include 2M0535$-$05 \citep{stassun2007}, because it is a young, double BD system. We add several transiting low-mass stars near the hydrogen burning limit \citep{pont2005,pont2006b,beatty2007}. We adopt the mass of XO-3b from the discovery paper \citep{johnskrull2008}, $M_P=13.1 \pm 0.4~M_{\rm Jup}$, which straddles the deuterium burning limit \citep{spiegel2011}. However, later estimates revised the mass significantly lower, to $M_P=11.8\pm 0.6~M_{\rm Jup}$ \citep{winn2008}. We will therefore categorize XO-3b as an exoplanet. 
The disadvantage of using samples culled from transit surveys is that the sample size is much smaller, and transit surveys have large and generally unquantified selection biases (e.g., \citealt{gaudi2005,fressin2009}), particularly ground-based transit surveys. We emphasize that such biases are almost certainly present in the sample we construct. We have therefore made no effort to be complete. The comparisons and suggestions we make based on this sample should not be considered definitive, but rather suggestive. Figure \ref{fig:mpvs} places KELT-1b among the demographics of known transiting companions to main sequence stars, focusing on massive exoplanets, BDs, and low-mass stars with short periods of $\la 30$ days. KELT-1b has the tenth shortest period of any transiting exoplanet or BD known. It has the seventh shortest period among giant ($M_P\ga 0.1~M_{\rm Jup}$) planets, with only WASP-19b, WASP-43b, WASP-18b, WASP-12b, OGLE-TR-56b, and HAT-P-23b having shorter periods. KELT-1b is more massive by a factor $\sim 3$ than the most massive of these, WASP-18b \citep{hellier2009}. KELT-1b has a significantly shorter period than any of the previously known transiting brown dwarfs, by a factor of $\ga 3$. KELT-1b therefore appears to be located in a heretofore relatively unpopulated region of the $M_P-P$ parameter space for transiting companions. Although the KELT-1 system is unusual, it is worth asking if there are any other known systems that bear some resemblance to it. The $M_P\simeq 18 M_{\rm Jup}$, $P\simeq 1.3$ day RV-discovered companion to the M dwarf HD 41004B \citep{zucker2003} has a minimum mass and orbit similar to those of KELT-1b; however, the host star is obviously quite different. Considering the host star properties as well, perhaps the closest analogs are WASP-18b \citep{hellier2009}, WASP-33b \citep{cameron2010}, and KOI-13b \citep{mazeh2012,mislis2012}. 
All three of these systems consist of relatively massive ($M_p\ga 3~M_{\rm Jup}$) planets in short ($\la 2$ day) orbits around hot $(\ensuremath{T_{\rm eff}} \ga 6500$~K$)$ stars. The mass of KELT-1b ($\sim 27~M_{\rm Jup}$) is close to the most arid part of the BD desert, estimated to be at a mass of $31_{-18}^{+25}~M_{\rm Jup}$ according to \citet{grether2006}. Thus, under the assumption that the BD desert reflects the difficulty of forming objects with this mass close to the parent star under {\it any} formation scenario, KELT-1b may provide an interesting case to test these various models. For disk scenarios, gravitational instability can likely form such massive objects, but likely only at much larger distances \citep{rafikov2005,dodson2009,kratter2010}. The maximum mass possible from core accretion is poorly understood, but may be as large as $\sim 40~M_{\rm Jup}$ \citep{mordasini2009}. The possibility of significant migration of KELT-1b from its birth location to its present position must also be considered, particularly given the existence of a possible stellar companion to KELT-1 (\S\ref{sec:keckao}). This possibility complicates the interpretation of the formation of KELT-1b significantly. For example, it has been suggested that brown dwarf companions are more common at larger separations \citep{metchev2009}; thus KELT-1b may have formed by collapse or fragmentation at a large separation, and subsequently migrated to its current position via the Kozai-Lidov mechanism \citep{kozai1962,lidov1962}. One clue to the origin of KELT-1b and the BD desert may be found by studying the frequency of close BD companions to stars as a function of the stellar mass or temperature. Figure \ref{fig:mpvs} shows the mass of known transiting short period companions as a function of the effective temperature of the host stars. 
As pointed out by \citet{bouchy2011b}, companions with $M_P\ga 5~M_{\rm Jup}$ appear to be preferentially found around hot stars with $\ensuremath{T_{\rm eff}} \ga 6000~{\rm K}$, and KELT-1b follows this trend. Although these hot stars are somewhat more massive, the most dramatic difference between stars hotter and cooler than 6000~K is the depth of their convection zones. This led \citet{bouchy2011b} to suggest that tides may play an important role in shaping the frequency and distribution of massive exoplanet and brown dwarf companions to old stars. Some evidence for this has been reported by \citet{winnrm2010}, who argue that hot ($\ensuremath{T_{\rm eff}} \ge 6250$K) stars with close companions preferentially have high obliquities, suggesting that if the emplacement mechanisms are similar for all stars, tidal forces must later serve to preferentially bring cool host stars into alignment. Figure \ref{fig:rmdist} shows the distribution of spin-orbit alignments for transiting planets versus the host star effective temperature. KELT-1b falls in the group of hot stars with {\it small} obliquities. Interestingly, the other massive ($\ga 5~M_{\rm Jup}$) planets are also located in this group. We discuss the possible formation and evolutionary history of KELT-1b, and the likely role of tides in this history, in more detail below. We remain agnostic about the classification of KELT-1b as a brown dwarf or supermassive planet. \begin{figure} \epsscale{1.2} \plotone{rm.eps} \caption{ The projected spin-orbit alignment angle $\lambda$ for transiting planets as measured by the RM effect, versus the effective temperature of the host star, following \citet{winnrm2010}. The grey points show exoplanets with mass $M_P< 5~M_{\rm Jup}$, whereas the black points show those with $M_P> 5 M_{\rm Jup}$. KELT-1b, shown with a star, is the first transiting brown dwarf with an RM measurement. Its orbit normal is consistent with being aligned with the projected host star spin axis. 
The dotted vertical line shows the suggested dividing line between hot and cool stars by \citet{winnrm2010}. \label{fig:rmdist}} \end{figure} \subsection{Tides, Synchronization, and Kozai Emplacement}\label{sec:tides} \begin{figure} \epsscale{2.2} \plotone{tidsyn.eps} \caption{ Dimensionless combinations of physical parameters that quantify the relative time scale for orbital tidal decay (top panel) and stellar spin-orbit synchronization (bottom panel) for different binary systems, as a function of the orbital period of the system. See \S\ref{sec:tides} for an explanation and assumptions. Brown dwarfs are shown as triangles, exoplanets as squares, and low-mass stars as asterisks. KELT-1b is shown as the large star. Among known transiting exoplanets and brown dwarfs, it has the shortest characteristic time scale for orbital decay and synchronization. \label{fig:tidsyn}} \end{figure} Given the relatively large mass and short orbital period of KELT-1b, it seems probable that tides have strongly influenced the past evolution of the system, and may continue to be affecting its evolution. The literature on the influence of tides on exoplanet systems is vast (see, e.g., \citealt{rasio1996,ogilvie2004,jackson2008,leconte2010,matsumura2010,hansen2010}, for but a few examples), and there appears to be little consensus on the correct treatment of even the most basic physics of tidal dissipation, with the primary uncertainties related to where, how, and on what time scale the tidal energy is dissipated. While we are interested in evaluating the importance of tides on the evolution of the orbit of KELT-1b and the spin of KELT-1, delving into the rich but difficult subject of tides is beyond the scope of this paper. We therefore take a somewhat heuristic approach. 
Specifically, we construct a few dimensionless quantities that likely incorporate the primary physical properties of binary systems that determine the scale of tidal evolution, but do not depend on the uncertain physics of energy dissipation. In particular, we define, \begin{eqnarray} {\cal T}_a & \equiv & \frac{M_*}{M_P} \left(\frac{a}{R_*}\right)^5,\qquad {\rm and}\\ {\cal T}_{\omega_*} & \equiv & \left(\frac{M_*}{M_P}\right)^2 \left(\frac{a}{R_*}\right)^3. \label{eqn:tfacs} \end{eqnarray} For some classes of theories of tidal dissipation and under some assumptions, ${\cal T}_a$ is proportional to the e-folding timescale for decay of the orbit, and ${\cal T}_{\omega_*}$ is proportional to the timescale for synchronization of the spin of the star with the companion orbital period. It is worthwhile to note that for transiting planet systems the combinations of parameters $M_P/M_*$ and $a/R_*$ are generally much better determined than the individual parameters. In particular, the ratio of the mass of the planet to that of the star is closely related to the RV semi-amplitude $K$, whereas $a/R_*$ is closely related to the ratio of the duration of the transit to the period \citep{winn2010}. Figure \ref{fig:tidsyn} shows ${\cal T}_a$ and ${\cal T}_{\omega_*}$ as a function of orbital period for the sample of transiting exoplanets, brown dwarfs, and low-mass stars discussed previously. KELT-1b has shorter timescales than nearly the entire sample of systems, with the exception of a few of the low-mass stars. We therefore expect tidal effects to be quite important in this system. As a specific example, under the constant time lag model \citep{hut1981,matsumura2010}, and assuming dissipation in the star, zero eccentricity, zero stellar obliquity, and a slowly rotating star, the characteristic time scale for orbital decay due to tides is $\tau_{decay} \equiv a/|{\dot a}| = (12\pi)^{-1} Q_*' {\cal T}_a P$, where $Q_*'$ is related to the dimensionless tidal quality factor. 
For KELT-1b, ${\cal T}_a \sim 3 \times 10^{4}$, and so $\tau_{decay}\sim 0.3~{\rm Gyr}$ for $Q_*'=10^8$, clearly much shorter than the age of the system. Similarly, the time scale for spinning up the star by the companion is $\tau_{synch} \equiv \omega_*/|\dot \omega_*| \propto Q_*'{\cal T}_{\omega_*} P$ \citep{matsumura2010}, and so is also expected to be short compared to the age of the system. Given the expected short synchronization time scale and the fact that the expected time scale for tidal decay is shorter than the age of the system, it is interesting to ask whether or not the system has achieved full synchronization, thus ensuring the stability of KELT-1b. The measured projected rotation velocity of the star is $\ensuremath{\,{v\sin{I_*}}}=56\pm 2~\ensuremath{\,{\rm km~s^{-1}}}$, which given the inferred stellar radius corresponds to a rotation period of $P_* = 2\pi R_*\sin I_*/\ensuremath{\,{v\sin{I_*}}} = [1.322\pm 0.053]\sin I_*$~days, which differs from the orbital period of KELT-1b by $\sim 2\sigma$ for $I_*=90^\circ$. This is suggestive that the system is indeed synchronized. The small discrepancy could either be due to a slightly underestimated uncertainty on $\ensuremath{\,{v\sin{I_*}}}$, or the host could be moderately inclined by $I_* \sim [67\pm 7]^\circ$. However, one might expect the obliquity of the star to be realigned on roughly the same time scale as the synchronization of its spin \citep{matsumura2010}. The stellar inclination can also be constrained by the precise shape of the transit light curve: lower inclinations imply higher rotation velocities, and thus increased oblateness and gravity brightening \citep{barnes2009}. Ultimately, the inclination is limited to $I_* \ga 10^\circ$ in order to avoid break up. We can also ask, given the known system parameters, if the system is theoretically expected to be able to achieve a stable synchronous state. 
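As a sanity check on the order-of-magnitude numbers quoted above, the following sketch reproduces ${\cal T}_a$, ${\cal T}_{\omega_*}$, and $\tau_{decay}$; the parameter values below (masses, $a/R_*$, period, $Q_*'$) are rounded, assumed inputs in the spirit of the text, not the fitted system parameters.

```python
import math

# Assumed, rounded inputs (NOT the paper's fitted posteriors):
M_star = 1.33                  # stellar mass [M_sun]
M_p = 27.2 / 1047.6            # companion mass [M_sun] (~27.2 M_Jup)
a_over_Rstar = 3.6             # scaled semimajor axis a/R_*
P_days = 1.217                 # orbital period [days]
Q_star = 1e8                   # stellar tidal quality factor Q_*'

# Dimensionless tidal factors from Eq. (eqn:tfacs)
T_a = (M_star / M_p) * a_over_Rstar**5          # orbital-decay factor
T_omega = (M_star / M_p)**2 * a_over_Rstar**3   # synchronization factor

# Constant-time-lag estimate: tau_decay = Q_*' * T_a * P / (12 pi)
tau_decay_gyr = Q_star * T_a * P_days / (12 * math.pi) / 365.25 / 1e9

print(f"T_a ~ {T_a:.1e}")                      # ~3e4, as quoted in the text
print(f"tau_decay ~ {tau_decay_gyr:.2f} Gyr")  # ~0.3 Gyr, << system age
```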
A system is ``Darwin stable'' \citep{darwin1879,hut1980} if its total angular momentum, \begin{equation} L_{tot} = L_{orb} + L_{\omega,*} + L_{\omega,P} \label{eqn:ltot} \end{equation} is more than the critical angular momentum of \begin{equation} L_{crit} \equiv 4\left[\frac{G^2}{27}\frac{M_*^3M_P^3}{M_*+M_P}(C_*+C_P)\right]^{1/4}, \label{lcrit} \end{equation} where $L_{orb}$ is the orbital angular momentum, $L_{\omega,*}$ is the spin angular momentum of the star, $L_{\omega,P}$ is the spin angular momentum of the planet, and $C_*=\alpha_* M_* R_*^2$ and $C_P=\alpha_P M_P R_P^2$ are the moments of inertia of the star and planet, respectively \citep{matsumura2010}. Since $C_P/C_* \sim (M_P/M_*)(R_P/R_*)^2 \sim 10^{-3}$, the contribution from the planet spin to the total angular momentum is negligible. We find $L_{tot}/L_{crit}=1.029 \pm 0.014$, marginally above the critical value for stability. In addition, we find $(L_{\omega,*}+L_{\omega,P})/L_{orb} = 0.154 \pm 0.006$, which is smaller than the maximum value of $1/3$ required for a stable equilibrium \citep{hut1980}. Curiously, if we assume the star is already tidally synchronized, we instead infer $(L_{\omega,*}+L_{\omega,P})/L_{orb}=0.167 \pm 0.004$, i.e., remarkably close to exactly one-half the critical value of 1/3. Two additional pieces of information potentially provide clues to the evolutionary history of this system: the detection of a possible tertiary (\S\ref{sec:keckao}; Fig.~\ref{fig:aoimage}), and the measurement of the RM effect (Fig.~\ref{fig:RM}), demonstrating that KELT-1 has a small projected obliquity. If the nearby companion to KELT-1 is indeed bound, it could provide a way of emplacing KELT-1b in a small orbit via the Kozai-Lidov mechanism \citep{kozai1962,lidov1962}.
If KELT-1b were originally formed much further from its host star, and on an orbit that was significantly misaligned with that of the putative tertiary, then its orbit might subsequently be driven to high eccentricity via secular perturbations from the tertiary \citep{holman1997,lithwick2011,katz2011}. If it reached sufficiently high eccentricity such that tidal effects became important at periastron, the orbit would be subsequently circularized at a relatively short period \citep{fabrycky2007,wu2007,socrates2012}. Nominally, one might expect the orbit of KELT-1b to be then left with a relatively large obliquity \citep{naoz2011}. The measured projected obliquity is $\la 16$ degrees, implying that either the current true obliquity is small, or the star is significantly inclined (i.e., $I_* \sim 0$). However, if the star is significantly inclined, then the system cannot be synchronized. Perhaps a more likely alternative is that, after emplacement by the tertiary and circularization of the orbit, the system continued to evolve under tidal forces, with KELT-1b migrating inward to its current orbit while damping the obliquity of KELT-1 and synchronizing its spin period. Clearly, detailed simulations are needed to establish whether or not this scenario has any basis in physical reality. \subsection{Comparison to Theoretical Models of Brown Dwarfs}\label{sec:models} Transiting brown dwarfs provide one of the only ways to test and calibrate models of BD structure and evolution, which are used to interpret observations of the hundreds of free floating brown dwarfs for which no direct measurement of mass and radius is possible. Given that only 5 transiting brown dwarfs with radius measurements were previously known, KELT-1b potentially provides another important test of these models. Figure \ref{fig:mr} shows the mass-radius relation for the known transiting companions to main-sequence stars with companion masses in the range $10-100~M_J$. 
Being close to the minimum in the brown dwarf desert, the mass of KELT-1b begins to fill in the dearth of known systems between $\sim 20-60~M_{\rm Jup}$. Furthermore, the formal uncertainty in its radius is only $\sim 2.5\%$, thereby allowing for a stringent test of models. In contrast, the two transiting BDs with similar masses, CoRoT-3b \citep{deleuil2008} and KOI-423b \citep{bouchy2011b}, have much larger radius uncertainties, presumably due to the relative faintness of the host stars. \begin{figure} \epsscale{1.2} \plotone{mr.eps} \caption{Radius versus mass for the known transiting companions to main-sequence stars with companion masses in the range $10-100~M_J$ that have measured radii. An estimate of the deuterium burning limit \citep{spiegel2011} is shown as the vertical dotted line, and the hydrogen burning limit is shown as the vertical dashed line. Brown dwarfs are shown as triangles, exoplanets as squares, and low-mass stars as asterisks. KELT-1b is shown as the large star. Predicted radii as a function of mass for isolated objects from the isochrones of \citet{baraffe2003} are shown for an age of 5 Gyr (dashed), 1 Gyr (dotted), and 0.5 Gyr (long dashed); the true age of the KELT-1 system is almost certainly between $1-5$ Gyr. Although stellar insolation is likely to increase the radii at fixed mass, \citet{bouchy2011a} predict that the effect is small. KELT-1b therefore has an anomalously large radius. \label{fig:mr}} \end{figure} Evolutionary models for isolated BDs generally predict that young ($\sim 0.5$ Gyr) objects in the mass range $10-100~M_{\rm Jup}$ should have radii of $\sim R_{\rm Jup}$ (see the models of \citealt{baraffe2003} in Fig.~\ref{fig:mr}). As these objects cool, however, their radii decrease, particularly for masses between 50 and 80 $M_{\rm Jup}$. After $\sim 1$~Gyr, all isolated objects with mass between 20-80~$M_{\rm Jup}$ are predicted to have radii $<R_{\rm Jup}$.
The radius we measure for KELT-1b is $R_P = 1.110_{-0.022}^{+0.032}~R_{\rm Jup}$, which, at a mass of $M_P=27.23_{-0.48}^{+0.50}~M_{\rm Jup}$, is $\sim 7~\sigma$ and $\sim 10~\sigma$ larger than the radius predicted by \citet{baraffe2003} for ages of 1 Gyr and 5 Gyr, respectively. KELT-1b is strongly irradiated, which in principle can delay its cooling and contraction. However, \citet{bouchy2011a} predict that the effect of insolation is small for brown dwarfs in this mass range, although their models were for a much more modest insolation corresponding to an equilibrium temperature of 1800~K (versus $\sim 2400$K for KELT-1b). Therefore, given the estimated $1.5-2$ Gyr age of the system, KELT-1b is likely to be significantly inflated relative to predictions. \begin{figure} \epsscale{1.2} \plotone{rprsv.eps} \caption{ Transit depth assuming no limb darkening, i.e., $(R_P/R_*)^2$, as a function of the apparent $V$ magnitude of the host star for a sample of transiting systems. Brown dwarfs are shown as triangles, exoplanets as squares, and low-mass stars as asterisks. KELT-1b is shown as the large star. All else being equal, objects in the top left provide the best targets for follow-up. KELT-1b has a similar transit depth as the other known transiting brown dwarfs, but is significantly brighter. Also labeled are some other benchmark systems. KELT-2b ($M_P \sim 1.5~M_{\rm Jup}$) is shown as a large cross (Beatty et al., in preparation). \label{fig:rprsv}} \end{figure} Using the benchmark double transiting BD 2M0535$-$05, \citet{gmc2009} explore models in which brown dwarfs have large spots, which reduce the flux from their surface, thereby decreasing their effective temperatures and increasing their radii relative to those without spots (see also \citet{bouchy2011a}). They find that these can lead to significantly inflated radii, but only for large spot filling factors of $\sim 50\%$, and for relatively young ($\sim 0.5$ Gyr) systems. 
However, a detailed spectroscopic analysis of that system by \citet{mohanty2010} and \citet{mohanty2012} shows that surface spots cannot be present with such a large filling factor, and thus favors global structural effects such as strong internal magnetic fields (e.g., \citealt{mullan2010}). Many other mechanisms have been invoked to explain the inflated radii of some giant exoplanets (see \citealt{fortney2010} for a review); however, it is not clear which, if any, of the many mechanisms that have been proposed may also be applied to inflated brown dwarfs. We would be remiss if we did not question whether we were erroneously inferring a large radius for the planet. In the past, such situations have arisen when there is a discrepancy between the constraint on the stellar density from the light curve and the constraint on the surface gravity of the star from spectroscopy (e.g., \citealt{johnskrull2008,winn2008}). In our case, we find no such tension. The parameters of the star inferred from the spectroscopic data alone are in nearly perfect agreement with the results from the global analysis of the light curve, RV data, and spectroscopic constraints. We note that allowing a non-zero eccentricity also has a negligible effect on the inferred planetary radius. Finally, we reiterate that the faint companion detected in AO imaging (\S\ref{sec:keckao}), which is unresolved in our follow-up photometry, has a negligible effect on our global fit and inferred parameters. Therefore, we believe our estimate of $R_P$ is likely robust. We conclude by noting that there is a need for predictions of the radii of brown dwarfs for a range of ages and stellar insolations, and it would be worthwhile to explore whether or not the inflation mechanisms that have been invented to explain anomalously large giant planets might work for much more massive and dense objects as well.
\subsection{Prospects for Follow Up} Figure \ref{fig:rprsv} compares the transit depth and apparent visual magnitude of the KELT-1 system ($\delta \sim 0.6\%$, $V=10.7$) to the sample of transiting systems collected in \S\ref{sec:comp} with available $V$ magnitudes. KELT-1 is not particularly bright compared to the bulk of the known transiting exoplanet hosts. However, it is significantly brighter than the hosts of all known transiting brown dwarfs; the next brightest is WASP-30 \citep{anderson2011}, which is $\sim 1.2$ magnitudes fainter. On the other hand, the depth of the KELT-1b transit is similar to that of the other known brown dwarfs. The prospects for follow-up of KELT-1b are exciting, not only because of the brightness of the host, but also because of the extreme nature of the system parameters, in particular the relatively short orbital period, relatively large stellar radius, and relatively large amount of stellar irradiation received by the planet. Following \citet{mazeh2010} and \citet{faigler2011}, we can estimate the amplitudes of ellipsoidal variations $A_{ellip}$, Doppler beaming $A_{beam}$ (see also \citealt{loeb2003}), reflected light eclipses and phase variations $A_{ref}$, and thermal light eclipses and phase variations $A_{therm}$, \begin{eqnarray} A_{beam} &=& \alpha_{beam}4\left(\frac{K}{c}\right) \sim 5.7\alpha_{beam}\times 10^{-5}\\ A_{ref} &=& \alpha_{ref} \left(\frac{R_P}{a}\right)^2\sim 4.6\alpha_{ref}\times 10^{-4}\\ A_{ellip} &=& \alpha_{ellip}\frac{M_P}{M_*} \left(\frac{R_*}{a}\right)^3 \sim 4.1\alpha_{ellip}\times 10^{-4} \\ A_{therm} &=& \alpha_{therm} \left(\frac{R_P}{R_*}\right)^2 \left(\frac{R_*}{a}\right)^{1/2} \sim 3.2 \alpha_{therm}\times 10^{-3}, \end{eqnarray} where the expression for $A_{therm}$ assumes observations in the Rayleigh-Jeans tail of both objects, and the expression for $A_{ellip}$ assumes an edge-on orbit. 
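Setting the dimensionless $\alpha$ coefficients to unity, the numerical prefactors in the expressions above can be reproduced with a few lines of arithmetic; the input values are assumed approximations of the system parameters ($K$, mass ratio, $a/R_*$), not the exact posteriors.

```python
# Order-of-magnitude photometric amplitudes with all alpha coefficients = 1.
# Inputs are assumed approximations of the KELT-1 system parameters.
c_kms = 299792.458   # speed of light [km/s]
K_kms = 4.24         # RV semi-amplitude [km/s], assumed
Rp_over_Rs = 0.078   # planet/star radius ratio (from the text)
a_over_Rs = 3.6      # scaled semimajor axis, assumed
Mp_over_Ms = 0.0195  # mass ratio ~ 27 M_Jup / 1.33 M_sun, assumed

A_beam = 4 * K_kms / c_kms
A_ref = (Rp_over_Rs / a_over_Rs) ** 2
A_ellip = Mp_over_Ms * (1.0 / a_over_Rs) ** 3
A_therm = Rp_over_Rs**2 * (1.0 / a_over_Rs) ** 0.5

for name, val in [("beam", A_beam), ("ref", A_ref),
                  ("ellip", A_ellip), ("therm", A_therm)]:
    print(f"A_{name} ~ {val:.1e}")
```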
The dimensionless constants $\alpha$ are defined in \citet{mazeh2010}, but to make contact with the secondary eclipse analysis in \S\ref{sec:second} we note that $\alpha_{ref}=A_g$ and $\alpha_{therm}=[f'(1-A_B)]^{1/4}$. All of these constants are expected to be of order unity, except for $\alpha_{ref}$, which may be quite low for strongly irradiated planets, depending on wavelength \citep{burrows2008}. Based on previous results, all of these effects with the possible exception of Doppler beaming are likely to be detectable with precision photometry (see, e.g., \citealt{cowan2012}). For ellipsoidal variations in particular, we expect $\alpha_{ellip} \sim 2$ and thus a relatively large amplitude of $A_{ellip} \sim 10^{-3}$. Furthermore, the detection of all these signals is facilitated by the short orbital period for KELT-1b. The prospects for transmission spectroscopy are probably poorer, given the relatively small planet/star radius ratio ($\sim 0.078$) and more importantly the large surface gravity for KELT-1b. For the optimistic case of $T_{eq}\simeq 2400$K assuming zero albedo and perfect redistribution, the scale height is only $H\sim kT/(\mu m_H g_P) \sim 16~{\rm km}$, and thus will only lead to changes in the transit depth of order $\sim 2H/R_P \sim 0.04\%$. \section{Summary}\label{sec:summary} We have presented the discovery of KELT-1b, the first transiting low-mass companion from the wide-field Kilodegree Extremely Little Telescope-North (KELT-North) transit survey. The host star KELT-1 is a mildly evolved, solar-metallicity, rapidly-rotating, mid-F star with an age of $\sim 1.5-2$ Gyr located at a distance of $\sim 260$~pc. The transiting companion is a low-mass brown dwarf or supermassive planet with mass $\sim 27~M_{\rm Jup}$, on a very short period, circular orbit of $P \sim 1.2$~days. 
In many ways, the KELT-1 system is quite unusual and extreme: KELT-1b receives a large amount of stellar insolation, is inflated relative to theoretical predictions, and raises strong tides on its host. The obliquity of KELT-1 is consistent with zero, and there is evidence that the spin of KELT-1 has been synchronized with the orbital period of KELT-1. Finally, there is a likely M-dwarf stellar companion to the KELT-1 system with a projected separation of $\sim 150$~AU. As the first definitively inflated transiting brown dwarf, KELT-1b demonstrates the need for models of brown dwarfs subject to a range of stellar insolations. A plausible formation scenario for this system posits that KELT-1b formed on a much wider orbit, and was driven to a smaller semimajor axis by the tertiary via the Kozai-Lidov mechanism. The system then continued to evolve under strong tidal forces, with KELT-1b migrating inward to its current orbit, while damping the obliquity of KELT-1 and synchronizing its spin period. The future evolution of the KELT-1 system may be spectacular. As KELT-1 continues to evolve and its radius increases, so will the tides raised on it by KELT-1b. Assuming KELT-1 is and remains tidally locked, as it cools it will develop a deep convective envelope, but be forced to rotate at an ever-increasing rate. In $\sim 2$ Gyr, KELT-1 will have roughly the temperature of the Sun, but with a radius of $\sim 2~\ensuremath{\,{\rm R}_\Sun}$ and a rotational velocity of $\sim 100~\ensuremath{\,{\rm km~s^{-1}}}$. At this point, KELT-1 will likely become an active RS CVn star \citep{walter1981}. Eventually, as KELT-1 reaches the base of the giant branch, it will swallow KELT-1b whole, likely resulting in a bright UV/X-ray and optical transient \citep{metzger2012}.
\acknowledgments We would like to particularly thank Bruce Gary for acquiring and reducing the data from HAO, Saurav Dhital for estimating the chance probability of a close companion, and Bence Beky and the HATNet team for giving up one night on the FLWO 1.2m on short notice. We thank Subo Dong, Jonathan Fortney, Fred Rasio and Aristotle Socrates for useful discussions. Early work on KELT-North was supported by NASA Grant NNG04GO70G. Work by B.S.G., J.D.E., and T.G.B.\ was partially supported by NSF CAREER Grant AST-1056524. E.L.N.J.\ gratefully acknowledges the support of the National Science Foundation's PREST program, which helped to establish the Peter van de Kamp Observatory through grant AST-0721386. K.A.C.\ was supported by a NASA Kentucky Space Grant Consortium Graduate Fellowship. J.A.P.\ and K.G.S.\ acknowledge support from the Vanderbilt Office of the Provost through the Vanderbilt Initiative in Data-intensive Astrophysics. K.G.S.\ and L.H.\ acknowledge the support of the National Science Foundation through PAARE grant AST-0849736 and AAG grant AST-1009810. The TRES and KeplerCam observations were obtained with partial support from the Kepler Mission through NASA Cooperative Agreement NNX11AB99A with the Smithsonian Astrophysical Observatory, D.W.L.\ PI. This work has made use of NASA's Astrophysics Data System, the Exoplanet Orbit Database at exoplanets.org, and the Extrasolar Planet Encyclopedia at exoplanet.eu \citep{schneider2011}.
\section{Introduction} \IEEEPARstart{T}{he} problem of reconstructing an unknown signal from an observation, where the observation is modeled as the filtered version of the unknown signal with a given distortion filter, is known as the deconvolution problem. When the impulse response of the distortion filter is not known and to be estimated along with the unknown signal, it evolves into the Blind Deconvolution (BD) problem. This problem setting emerges in a variety of real-life applications including, but not limited to, image deblurring \cite{LEdmund}, seismic exploration \cite{QCheng,JMendel}, digital communication \cite{GXu,EMoulines}, and biomedical signal reconstruction \cite{JGao,BCivek}. The class of BD problems is inherently ill-posed and suffers from the identifiability issues without the prior knowledge about the convolving signals \cite{SChoudhary}. This is due to the fact that many distinct signal pairs can yield the same observation. To overcome this issue, several different assumptions are made on the signal pairs in an attempt to constrain the solution space. Typical examples of these include restricted support in frequency or time domain \cite{JGao,BCivek}, availability of a sparse representation \cite{CBilen,YLi,LWang}, or existence of a generative subspace \cite{EMoulines,AAhmed}.\par Solution of BD problems has been extensively studied under both deterministic and probabilistic frameworks. Deterministic approaches typically construct a constrained optimization problem, where the goal is to minimize an objective function corresponding to a likelihood term that measures the quality of fit. The constraints on the variables are imposed either explicitly or by augmenting the objective function. Because a closed form solution is usually not available, gradient based search algorithms are employed to converge to a local minimum that minimizes the objective function. 
However, due to the structure of the constraints, the constructed optimization problem usually has a non-convex formulation, and therefore, successful recovery relies on finding a good initial estimate \cite{YLi}. Even though there exist convex formulations, which eliminate the effect of initialization, their ability to impose a variety of constraints simultaneously, e.g., frequency and time domain constraints at the same time, is limited \cite{BCivek,CBilen,LWang}. Notably, the lifting approach proposed in \cite{AAhmed}, and its extension to sparse sequences \cite{SLing}, enable imposing time/frequency constraints jointly by assuming the existence of a generative subspace. However, the number of available measurements is limited when the impulse response of the distortion filter has a bandlimited structure, which might prevent successful recovery with lifting approaches. \par Probabilistic approaches, on the other hand, provide effective alternatives by modeling the unknown quantities as random variables and producing estimates based on the posterior distribution. Appropriate prior distributions are assigned to the unknown variables in order to increase the likelihood of points in the probability space that satisfy the constraints. As with deterministic models, an analytic solution is not tractable in many cases, and the solution is instead obtained through iterative numerical methods such as Markov Chain Monte Carlo (MCMC) \cite{CRobert}. Different from gradient-based searches, MCMC methods do not rely on gradient information, which reduces the effect of initialization on the convergence behavior \cite{gilks1995markov}.
This enables probabilistic schemes to incorporate more complex constraints simultaneously without suffering from the highly non-convex structure of the posteriors shaped by these constraints.\par In this paper, we adopt the probabilistic framework and present computationally efficient Bayesian methods for the solution of the regularized BD problem, in which the desired signal is a sparse sequence and the unknown impulse response of the distortion filter is time and/or bandlimited. This problem setting is of great importance due to its applicability to a wide range of scenarios. In many cases, even if the desired signal itself is not sparse, it is possible to find a sparse representation in a suitable transform domain. Moreover, physical realizations of real-world systems are well modeled by exactly or approximately bandlimited system responses with finite impulse responses.\par Bayesian BD methods conventionally employ a Bernoulli-Gaussian (BG) prior for modeling the sparse sequences \cite{MLavielle,CSoussen}. According to the BG model, the sparse sequence is represented by a Bernoulli distributed binary latent sequence, indicating the nonzero positions, and an amplitude sequence, representing the amplitudes corresponding to those positions under the Gaussian law. The posterior distribution constructed in this manner has a discrete structure, prohibiting efficient analytical solutions and leading to iterative approaches for making inference on the posterior. Bayesian methods typically use the MCMC simulations, whose potential in the sparse BD literature was presented by the pioneering work of Cheng et al. \cite{QCheng}. The MCMC methods provide powerful iterative tools, such as Gibbs sampler, for Bayesian inference problems, even when the number of unknowns is quite high \cite{SGeman}. The Gibbs sampler performs quite well with a significantly fast mixing rate, especially if the sampled variables are independent \cite{SGeman}. 
However, the mixing performance degrades considerably when there are strong dependencies between variables \cite{SBourguignon,DGe}. \par It has been shown that the original Gibbs sampler constructed based on the BG prior for sparsity causes implicit dependencies between the consecutive sampling steps of the latent indicator sequence, which prevents the sampler from exploring the probability space efficiently, causing it to get stuck on a local optimum for a long time \cite{SBourguignon}. In order to eliminate this deficiency, more efficient sampling methods were proposed in \cite{SBourguignon} and \cite{CChi}, accounting for the statistical dependence between the neighboring variables in the indicator sequence. An alternative idea, presented in \cite{DGe}, is to use the blocked Gibbs sampler scheme, first introduced in \cite{JLiu}, which enables sampling adjacent variables jointly. Despite the considerable improvement in the convergence rate, its application is usually limited to very short blocks due to a computational complexity that increases exponentially with the block length. Another attempt, also presented in \cite{DGe}, to improve the mixing rate was to integrate the partially collapsed Gibbs (PCG) sampler \cite{DDyk}, a generalization of the blocked Gibbs samplers, into the BG model. PCG samplers explore the probability space more effectively by means of marginalization and trimming operations \cite{DDyk}. However, although the PCG sampling scheme outperforms the blocked sampler schemes in terms of the number of iterations needed for convergence, its iterations are considerably more costly \cite{DGe}.\par
Unlike in the BG model, here the sparse sequence is modeled by a zero-mean multivariate Gaussian distribution with an unknown diagonal covariance matrix, where the individual variances marginally follow an Inverse-Gamma (IG) distribution. Therefore, the marginal distribution for the sparse sequence corresponds to a multivariate $t$-distribution, whose heavy tails encourage sparsity \cite{YildirimS,MohammadA,YangL}. With this setting, the problem is transformed into a completely continuous valued domain, i.e., all unknowns including the latent variables are continuous valued. We first present a classical Gibbs sampling approach that exploits the continuous valued structure of the problem provided by the NIG prior. We then propose a PCG based sampling scheme to further enhance the convergence behavior by accounting for statistical dependency. The performance gains achieved by the proposed methods are illustrated through a variety of simulations. To the best of our knowledge, this paper presents the first application of MCMC methods based on the NIG prior to the Bayesian sparse BD problem with time/frequency domain constraints. The proposed methods achieve the state-of-the-art performance of the previously proposed blocked Gibbs samplers with the BG prior \cite{DGe} at a fraction of their computational cost.\par The paper is organized as follows. We first introduce the problem setting and define the prior distributions associated with each variable in Section \ref{problem_setting}. Then, in Section \ref{proposed_samplers}, we present the proposed MCMC methods for the solution of sparse BD problems, followed by their validation through numerical experiments in Section \ref{simulations}. We finalize the discussion with concluding remarks in Section \ref{conclusion}.
\section{Problem Statement}\label{problem_setting} We consider a real-valued time-domain observation sequence $y_n$ of length $N$, which is modeled by linear convolution of two finite sequences, a relatively shorter pulse sequence $h_n$ of length $T \ll N$, modeling the impulse response of the distortion filter, and a sparse sequence $x_n$ of length $K < N$. The output of the convolution is then corrupted by additive noise $v_n$, yielding the following measurement model \begin{equation}\label{conv_model} y_n = \sum_{k = 0}^{T-1}h_k x_{n-k} + v_n\; \text{ for } n = 0,...,N-1, \end{equation} where $x_n = 0$ for any $n \notin \{0,\hdots,K-1\}$. Here, we consider the scenario where both the pulse shape $h_n$ and the sparse sequence $x_n$ are unknown and to be estimated from the observation $y_n$. This problem is well-known as the blind deconvolution problem and usually ill-posed if there is no prior information about the pair of convolving sequences. Here, we focus on the setting where the problem is regularized by sparsity and time-frequency domain constraints, which enables the successful recovery of both sequences. \par For notational convenience, we define the vector valued variables $\vec{y},\vec{v} \in \mathbbm{R}^N$, $\vec{x} \in \mathbbm{R}^K$, and $\vec{h} \in \mathbbm{R}^T$, and rewrite (\ref{conv_model}) as \begin{equation}\label{conv_model2} \vec{y} = \vec{H}\vec{x}+\vec{v}, \end{equation} where $\vec{H} \in \mathbb{R}^{N \times K}$ is the Toeplitz matrix of $\vec{h}$. Here, the first column and row of $\vec{H}$ are given as $[h_0,h_1,...,h_{T-1},0,...,0]^T$ and $[h_0,0,...,0]$ respectively. Defining $\vec{X}\in \mathbb{R}^{N \times T}$ as the Toeplitz matrix having $[x_0,x_1,...,x_{K-1},0,\hdots,0]^T$ as its first column and $[x_0,0,...,0]$ as its first row, we can alternatively rewrite (\ref{conv_model2}) as $\vec{y} = \vec{X}\vec{h}+\vec{v}$. We will make use of both of these formulations in the rest of the paper. 
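As a quick consistency check of the two matrix formulations, the sketch below (toy sizes, with $N = K + T - 1$ assumed so that the model reduces to a full linear convolution) builds the Toeplitz matrices $\vec{H}$ and $\vec{X}$ exactly as defined above and verifies that $\vec{H}\vec{x} = \vec{X}\vec{h}$ equals the linear convolution of $h_n$ and $x_n$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 12, 8, 5            # N = K + T - 1: full linear convolution
h = rng.standard_normal(T)    # pulse sequence
x = rng.standard_normal(K)    # stand-in for the sparse sequence

def conv_toeplitz(col, n_rows, n_cols):
    """Toeplitz matrix with first column `col` (zero-padded to n_rows)
    and first row [col[0], 0, ..., 0], as defined in the text."""
    M = np.zeros((n_rows, n_cols))
    for j in range(n_cols):
        M[j:j + len(col), j] = col[:n_rows - j]
    return M

H = conv_toeplitz(h, N, K)    # N x K
X = conv_toeplitz(x, N, T)    # N x T

y1, y2, y3 = H @ x, X @ h, np.convolve(h, x)
assert np.allclose(y1, y2) and np.allclose(y1, y3)
print("H x == X h == conv(h, x)")
```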
\subsection{Prior Distributions}\label{section_prior_dist} Following the Bayesian framework, the unknowns are modeled as random variables with specific prior distributions. This section provides a complete description of the prior distributions assigned to each variable. We begin with a review of the BG distribution, which is the conventional prior used in Bayesian settings for sparse sequences. We then introduce the NIG model, which is a \textit{soft} alternative to the BG model that promotes sparsity of $\vec{x}$ and will constitute the basis of our proposed estimator in the next section. \par \subsubsection{Bernoulli-Gaussian Prior for Sparsity} The BG model introduces a latent binary sequence $\vec{s} = [s_0,s_1,\hdots,s_{K-1}]^T$ with $s_n \in \{0,1\}$ and defines the conditional distribution of $x_n$ given $s_n$ as \begin{equation}\label{conditional_prior_x_given_s} p(x_n|s_n) = \begin{cases} \delta(x_n) &\text{if} \;\; s_n = 0\\ \mathcal{N}(x_n;0,\sigma_{x}^2) &\text{if} \;\; s_n = 1 \end{cases}, \end{equation} where $\delta(\cdot)$ is the Dirac delta function and $\mathcal{N}(\cdot;\mu,\sigma^2)$ denotes the Gaussian distribution with mean $\mu$ and variance $\sigma^2$. Assuming the $s_n$ are independent and identically distributed (i.i.d.) according to a Bernoulli distribution with parameter $\pi_0 = P(s_n = 0)$, the prior for $\vec{s}$ takes the form \begin{equation}\label{prior_s} p(\vec{s}) = \prod_{n=0}^{K-1}p(s_n) = \pi_0^{K-|\mathcal{K}_1|}(1-\pi_0)^{|\mathcal{K}_1|}, \end{equation} where $\mathcal{K}_1$ denotes the set of indices giving the locations of 1's and $|\mathcal{K}_1|$ represents the cardinality of $\mathcal{K}_1$, i.e., $|\mathcal{K}_1| = \sum s_n$.
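The generative structure of the BG prior defined so far can be sketched in a few lines; the values of $\pi_0$ and $\sigma_x$ below are illustrative choices, not ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K, pi0, sigma_x = 200, 0.9, 1.0   # illustrative values

# s_n = 1 with probability 1 - pi0; x_n ~ N(0, sigma_x^2) where s_n = 1,
# and x_n = 0 exactly (the Dirac delta component) where s_n = 0
s = (rng.random(K) >= pi0).astype(int)
x = np.where(s == 1, rng.normal(0.0, sigma_x, size=K), 0.0)

print("nonzero entries:", int(s.sum()), "expected ~", int(K * (1 - pi0)))
```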
Assuming different pairs of $(x_n,s_n)$ are statistically independent, the joint distribution of $\vec{x}$ and $\vec{s}$ becomes \begin{equation}\label{joint_prior_x_and_s} p(\vec{x},\vec{s}) = p(\vec{s})\prod_{n \in \mathcal{K}_1}\mathcal{N}(x_n;0,\sigma_{x}^2)\prod_{n\in\mathcal{K}_0}\delta(x_n), \end{equation} where $\mathcal{K}_0$ is the complement of $\mathcal{K}_1$ in $\{0,1,...,K-1\}$. The parameter $\pi_0$ reflects our \textit{a priori} knowledge about the expected rate of 1's in $\vec{s}$, limiting the total number of nonzero entries in $\vec{x}$, and hence, leading to a sparse sequence. \subsubsection{The Normal-Inverse-Gamma Prior for Sparsity} Instead of introducing a latent binary sequence, which forms a discrete probability space, the sparsity can also be imposed by a diagonal covariance matrix $\vec{\Sigma}_x = \text{diag}(\vec{\sigma}_x^2)$, where $\vec{\sigma}_x^2 = [\sigma_{x_0}^2,\sigma_{x_1}^2,\hdots,\sigma_{x_{K-1}}^2]^T$. Here, $\vec{\sigma}_x^2$ consists of continuous valued unknown variances of each element in $\vec{x}$. Under a zero-mean multivariate Gaussian law with the covariance matrix $\vec{\Sigma}_x$, the conditional prior distribution of $\vec{x}$ given $\vec{\sigma}_x^2$ becomes \begin{equation}\label{conditional_prior_x_given_sigma_x} p(\vec{x}|\vec{\sigma}_x^2) = \mathcal{N}(\vec{x};\vec{0},\vec{\Sigma}_x), \end{equation} where $\mathcal{N}(\cdot;\vec{\mu},\vec{\Sigma})$ represents the multivariate Gaussian distribution with mean $\vec{\mu}$ and covariance $\vec{\Sigma}$. The idea is that an individual element $x_n$ can be made arbitrarily small by setting a sufficiently low variance $\sigma_{x_n}^2$. Therefore, the unknown variance vector $\vec{\sigma}_x^2$ is also assumed to be a random sequence and to be estimated along with $\vec{h}$ and $\vec{x}$. \par \begin{figure}[t!] 
\centering \includegraphics[width=1\linewidth]{prior_for_x.png} \caption{Marginal prior distribution of $x_n$ in logarithmic scale for different parameter values $(\alpha_x,\beta_x)$. Standard Gaussian and Cauchy distributions are included for comparison. All densities are scaled such that the maximum is 1. \label{marginal_prior_of_x}} \vspace{-5mm} \end{figure} We assign an i.i.d. IG\footnote{Please see the supplement for the explicit definition of IG distribution.} prior to $\vec{\sigma}_x^2$ with shape and scale parameters $\alpha_x$ and $\beta_x$ \begin{equation}\label{prior_sigma_x} p(\vec{\sigma}_x^2|\alpha_x,\beta_x) = \prod_{n = 0}^{K-1}\mathcal{IG}(\sigma_{x_n}^2;\alpha_x,\beta_x). \end{equation} The IG distribution is the conjugate prior for the unknown variance of the Gaussian distribution, which enables analytical calculation of the posterior. Moreover, note that the marginal distribution of any element $x_n$ becomes \begin{equation}\label{marginal_prior_x} \begin{split} p(x_n) &= \int p(x_n|\sigma_{x_n}^2)p(\sigma_{x_n}^2)d\sigma_{x_n}^2 \\ &= \dfrac{\beta_x^{\alpha_x}}{\sqrt{2\pi}\Gamma(\alpha_x)}\dfrac{\Gamma(\alpha_x + 0.5)}{(0.5x_n^2 + \beta_x)^{\alpha_x + 0.5}}, \end{split} \end{equation} which corresponds to a generalized Student's \textit{t}-distribution with $2\alpha_x$ degrees of freedom and scale $\beta_x/\alpha_x$. As shown in Fig. \ref{marginal_prior_of_x}, appropriate selection of the parameters $(\alpha_x,\beta_x)$ leads to a family of distributions that are highly concentrated around zero with significantly heavier tails than the standard Gaussian distribution, justifying the use of the NIG model for sparse sequences as an alternative to the BG model. However, as opposed to the simple interpretation of $\pi_0$ within the BG model, the effect of the parameters $\alpha_x$ and $\beta_x$ is less transparent, so suitable values are not available beforehand.
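The closed-form marginal in (\ref{marginal_prior_x}) can be checked numerically. The sketch below (with arbitrary parameter values) integrates $p(x_n|\sigma_{x_n}^2)\,p(\sigma_{x_n}^2)$ over $\sigma_{x_n}^2$ on a grid and compares the result with the generalized Student's \textit{t} expression.

```python
import math
import numpy as np

def ig_pdf(t, a, b):
    # inverse-gamma density with shape a and scale b
    return b**a / math.gamma(a) * t**(-a - 1.0) * np.exp(-b / t)

def marginal_pdf(x, a, b):
    # closed-form generalized Student's t marginal of x_n
    return (b**a * math.gamma(a + 0.5)
            / (math.sqrt(2.0 * math.pi) * math.gamma(a)
               * (0.5 * x**2 + b)**(a + 0.5)))

a_x, b_x, x0 = 3.0, 1.0, 0.7                 # arbitrary illustrative values
t = np.linspace(1e-3, 200.0, 400_000)        # grid over sigma_{x_n}^2
f = np.exp(-x0**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t) * ig_pdf(t, a_x, b_x)
numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoidal rule
closed = marginal_pdf(x0, a_x, b_x)
```

Both the integrand truncation and the grid spacing are chosen so the quadrature error is well below the comparison tolerance.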
Therefore, we construct a hierarchical Bayesian model and learn the prior distribution parameters $\alpha_x$ and $\beta_x$ from the measurement as well. Assuming no prior information on $\alpha_x$ and $\beta_x$, we assign Jeffreys prior, given by $p(\alpha_x) \propto 1/\alpha_x$ and $p(\beta_x) \propto 1/\beta_x$, which forms an improper prior exhibiting non-informative structure. Here, $\propto$ denotes proportionality. We should note that the effect of the prior distributions $p(\alpha_x)$ and $p(\beta_x)$ on the posterior will indeed be dominated by $p(\vec{x}|\vec{\sigma}_x^2)$ and $p(\vec{\sigma}_x^2|\alpha_x,\beta_x)$ as the length of the sequence $K$ increases. Hence, the constructed hierarchical model is not sensitive to the selection of prior distributions for the variables $\alpha_x$ and $\beta_x$. \subsubsection{Prior for Short Pulse Sequence} Time and frequency domain constraints for pulse sequences are widely used in the blind deconvolution framework to regularize the problem \cite{JGao,BCivek,GKail}. One common practice is to assume that the pulse sequence belongs to a known subspace, i.e., $\vec{h} = \vec{A}\vec{\gamma}$, where $\vec{A} \in \mathbbm{R}^{T \times L}$ represents a lower dimensional subspace with $L \leq T$ and $\vec{\gamma} \in \mathbbm{R}^L$ represents the unknown orientation of $\vec{h}$ in the subspace, which is to be estimated. The duration of $\vec{h}$ in the time domain is explicitly enforced by the dimension of $\vec{A}$, and frequency domain restrictions can be applied by constructing $\vec{A}$ using the first $L$ members of either the Discrete Prolate Spheroidal (DPS) sequences or the Hermite functions \cite{FHlawatsch}. Whenever there is no specific frequency domain restriction, $\vec{A}$ can be set to the identity, i.e., $\vec{A} = \vec{I}$. We should note that due to the scaling ambiguity inherent in BD problems, the scale of the pulse sequence $\vec{h}$ must be restricted.
This can be achieved by assigning an appropriate prior on $\vec{\gamma}$. Therefore, we assign a zero-mean i.i.d. Gaussian distribution with variance $\sigma_{\gamma}^2$, i.e., \begin{equation}\label{prior_gamma} p(\vec{\gamma}) = \mathcal{N}(\vec{\gamma};\vec{0},\sigma_{\gamma}^2\vec{I}). \end{equation} Here, the variance $\sigma_{\gamma}^2$ is a fixed hyperparameter to avoid scaling ambiguity. \subsubsection{Prior for Noise Variance} We assume that measurements are corrupted by additive white Gaussian noise with unknown variance $\sigma_v^2$. Assuming no prior information, similar to the other parameters, we assign Jeffreys prior, i.e., $p(\sigma_v^2) \propto 1/\sigma_v^2$. \subsection{Estimation Problem} The estimation problem consists of estimating the actual variables of interest $\vec{x}$ and $\vec{\gamma}$, along with the noise variance $\sigma_v^2$, the latent variables $\vec{\sigma}_x^2$, and the corresponding prior distribution parameters $\alpha_x$, $\beta_x$ from the given measurement $\vec{y}$. Given the prior distributions for all variables, the posterior distribution follows \begin{equation}\label{posterior} p(\vec{\theta}|\vec{y}) \propto p(\vec{y}|\vec{\theta})p(\vec{\theta}), \end{equation} where we set $\vec{\theta} = [\vec{x},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2,\alpha_x,\beta_x]$ for more compact notation. Assuming all variables are statistically independent, the prior distribution $p(\vec{\theta})$ is given by \begin{equation}\label{prior} p(\vec{\theta}) = p(\vec{x}|\vec{\sigma}_x^2)p(\vec{\sigma}_x^2|\alpha_x,\beta_x)p(\alpha_x)p(\beta_x)p(\vec{\gamma})p(\sigma_v^2), \end{equation} where the expressions for the right hand side are given in Section \ref{section_prior_dist}. 
The assumed Gaussian noise model yields the following likelihood term \begin{equation}\label{likelihood} p(\vec{y}|\vec{\theta}) \propto \bigg(\dfrac{1}{\sigma_v^2}\bigg)^{N/2}\exp\bigg(-\dfrac{\|\vec{y} - \vec{X}\vec{A}\vec{\gamma}\|^2}{2\sigma_v^2}\bigg), \end{equation} where $\|\cdot\|$ denotes the $\ell_2$ norm of a vector. Note that the likelihood term is only a function of $\vec{x}$, $\vec{\gamma}$ and $\sigma_v^2$.\par For estimation of the variables, we consider the minimum mean-square-error (MMSE) estimator, i.e., \begin{equation}\label{mmse_estimator} \vec{\theta}^{*} = E[\vec{\theta}|\vec{y}], \end{equation} which is equivalent to the expected value of the posterior distribution given in (\ref{posterior}). However, explicit calculation of (\ref{mmse_estimator}) is not possible since it requires analytically intractable integrations. This leads us to an approximate solution, which can be obtained using MCMC simulations. MCMC methods are widely used in Bayesian inference problems, including the blind deconvolution literature, when the posterior distribution is too complicated to obtain exact analytical solutions \cite{CRobert,gilks1995markov}. The first step of MCMC methods consists of generating a set of, say, $J$ random samples, denoted by $\{\vec{\theta}^{(i)}\}_{i=1}^{J}$, with $\vec{\theta}^{(i)} = [\vec{x}^{(i)},\vec{\sigma}_x^{2(i)},\vec{\gamma}^{(i)},\sigma_v^{2(i)},\alpha_x^{(i)},\beta_x^{(i)}]$ being the $i^{th}$ sample, from the posterior distribution using an appropriate sampler. Once the sampling process is completed, the actual solution of (\ref{mmse_estimator}) is approximated by the sample mean, i.e., \begin{equation}\label{MCMC_approximation} \vec{\theta}^{*} \simeq \dfrac{1}{J-J^{\prime}}\sum_{i=J^{\prime}+1}^{J}\vec{\theta}^{(i)}, \end{equation} where the first $J^{\prime}$ samples are discarded as part of the burn-in process. Here, the crucial part is to have an effective sampler, which can converge to the true target distribution quickly.
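As a toy illustration of the sample-mean approximation in (\ref{MCMC_approximation}), the sketch below uses a generic autoregressive chain (not our sampler) with a deliberately poor initialization and shows how discarding the burn-in portion removes the initialization transient from the estimate.

```python
import numpy as np

def mcmc_estimate(samples, burn_in):
    """Sample-mean approximation of the MMSE estimate after discarding
    the first `burn_in` draws (the burn-in process)."""
    return np.asarray(samples)[burn_in:].mean(axis=0)

rng = np.random.default_rng(1)
J = 5000
theta = np.empty(J)
theta[0] = 50.0                          # deliberately poor initialization
for i in range(1, J):                    # toy AR(1) chain, stationary mean 2.0
    theta[i] = 2.0 + 0.9 * (theta[i - 1] - 2.0) + rng.normal(0.0, 0.1)

est_all = mcmc_estimate(theta, burn_in=0)
est_burned = mcmc_estimate(theta, burn_in=1000)
```

With the transient discarded, `est_burned` lands much closer to the stationary mean than the average over the full chain.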
To this end, in the next section, we construct the proposed valid sampling schemes to be used within MCMC simulations. \section{Proposed Samplers for Sparse Blind Deconvolution}\label{proposed_samplers} Samplers are essential components of MCMC methods for complex target distributions. They construct Markov chains whose distribution converges to the target distribution (which corresponds to the posterior distribution (\ref{posterior}) in our case) in the long run \cite{WGilks2}. The effectiveness of the sampler is directly associated with the mixing rate, which represents how fast the stationary distribution is achieved. In this work, we employ both Gibbs and PCG sampling schemes with improved mixing rates.\par In this section, we first briefly review the idea of classical Gibbs sampling for multivariate distributions, followed by the construction of our proposed classical Gibbs sampler, which utilizes the alternative NIG model for sparsity. Then, we propose a PCG based sampler to improve upon the classical Gibbs sampler to obtain a faster mixing rate with a slightly increased computational complexity. \subsection{Gibbs Sampler} Let $\pi(\vec{\theta})$ denote the target distribution that we want to sample from, with $\vec{\theta}$ being a vector valued random variable of arbitrary length, say $M$, i.e., $\vec{\theta} = [\theta_0,\theta_1,\hdots,\theta_{M-1}]^T$. When direct sampling from $\pi(\vec{\theta})$ is not feasible, the idea of Gibbs sampling suggests that we can sample each of the scalar variables, $\theta_m$, in turn from their conditional distributions with all other variables fixed at their current values. Hence, in the $i^{th}$ iteration, $\theta_m^{(i)}$ is obtained by sampling from $p(\theta_m^{(i)}|\theta_{m^-}^{(i)},\theta_{m^+}^{(i-1)})$, where $m^-$ and $m^+$ represent the index sets $\{0,\hdots,m-1\}$ and $\{m+1,\hdots,M-1\}$, respectively. One iteration of Gibbs sampling is completed once every single variable is updated.
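A minimal, generic example of this scheme (unrelated to our model) is a Gibbs sweep over a zero-mean bivariate Gaussian with unit variances and correlation $\rho$, where each coordinate has a known Gaussian full conditional:

```python
import numpy as np

def gibbs_bivariate(rho, n_iter, rng):
    """Gibbs sampler for a zero-mean bivariate Gaussian with correlation rho:
    each coordinate is drawn in turn from its full conditional."""
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    cond_sd = np.sqrt(1.0 - rho**2)          # conditional std of each coordinate
    for i in range(n_iter):
        x[0] = rng.normal(rho * x[1], cond_sd)   # draw from p(x0 | x1)
        x[1] = rng.normal(rho * x[0], cond_sd)   # draw from p(x1 | x0)
        out[i] = x
    return out

rng = np.random.default_rng(2)
chain = gibbs_bivariate(rho=0.8, n_iter=20000, rng=rng)
emp_rho = np.corrcoef(chain[1000:].T)[0, 1]  # empirical correlation after burn-in
```

After a short burn-in, the empirical correlation of the chain matches the target's $\rho$, confirming convergence to the joint distribution.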
All variables are initialized, usually by sampling from their prior distributions, before the first iteration, which has a strong effect on the first few realizations. In order to eliminate this effect, a burn-in process is incorporated, where the realizations generated until convergence to the target distribution are discarded. \par It is also useful to note that this process is not restricted to sampling a single scalar variable at a time. An extension of the Gibbs sampler, called blocked Gibbs sampler, allows sampling blocks of variables at one step through their joint distribution conditioned on others. It helps to achieve considerably improved convergence rates compared to sampling a scalar valued variable at a time by reducing the autocorrelation between the successive samples. This is especially useful when applied to variables with strong dependencies. \begin{table} \centering \normalsize \caption{Proposed Classical Gibbs Sampler} \vspace{-2mm} \label{table1} \renewcommand\arraystretch{1.2} \begin{tabular}{|m{0.4\textwidth}|} \hline Step 1. Sample $\alpha_x$ from $p(\alpha_x|\vec{\sigma}_x^2,\beta_x)$ \\ Step 2. Sample $\beta_x$ from $p(\beta_x|\vec{\sigma}_x^2,\alpha_x)$ \\ Step 3. Sample $\vec{\sigma}_x^2$ from $p(\vec{\sigma}_x^2|\vec{x},\alpha_x,\beta_x)$ \\ Step 4. Sample $\vec{x}$ from $p(\vec{x}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2)$ \\ Step 5. Sample $\vec{\gamma}$ from $p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2)$ \\ Step 6. Sample $\sigma_v^2$ from $p(\sigma_v^2|\vec{y},\vec{x},\vec{\gamma})$ \\ \hline \end{tabular} \vspace{-5mm} \end{table} \subsection{The Proposed Classical Gibbs Sampler}\label{proposed_classical_gibbs_sampler} We begin with constructing the classical Gibbs sampler based on the selected prior distributions. Then, in the next section, we propose an alternative sampling scheme that introduces additional intermediate steps to enhance the mixing rate. 
One iteration of our classical Gibbs sampler is described in Table \ref{table1}. In each step, we sample a variable from its full conditional posterior distribution. The reason for dropping some of the variables from the conditions is not marginalization, but conditional independence. For instance, in Step 1, we have $p(\alpha_x|\vec{y},\vec{x},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2,\beta_x) = p(\alpha_x|\vec{\sigma}_x^2,\beta_x)$. Analogous situations apply to the other steps. \par This sampling scheme can be viewed as a blocked Gibbs sampler, since the variables being sampled in Steps 3, 4, and 5 are vector valued. Nevertheless, each variable is sampled exactly once using the corresponding posterior distribution conditioned on the current values of all other variables; hence, it is a valid Gibbs sampler. Due to our selection of conjugate priors, each conditional posterior distribution is analytically tractable. We now present the closed-form expressions for the sampling distributions in each step of Table \ref{table1}. The derivations are provided in the supplemental material. \subsubsection{Sampling Distributions for Step 1 and 2} The prior distribution parameters $\alpha_x$ and $\beta_x$ for the latent variable $\vec{\sigma}_x^2$ are sampled respectively from \begin{equation}\label{sampling_alpha_x} p(\alpha_x|\vec{\sigma}_x^2,\beta_x) \propto \dfrac{\beta_x^{K\alpha_x}}{\Gamma(\alpha_x)^K}\bigg(\prod_{n=0}^{K-1}\dfrac{1}{\sigma_{x_n}^2}\bigg)^{\alpha_x+1}p(\alpha_x) \end{equation} and \begin{equation}\label{sampling_beta_x} p(\beta_x|\vec{\sigma}_x^2,\alpha_x) = \mathcal{G}(\beta_x;\tilde{\alpha}_{\beta_x},\tilde{\beta}_{\beta_x}), \end{equation} where $\tilde{\alpha}_{\beta_x} = K\alpha_x$ and $\tilde{\beta}_{\beta_x} = \sum_{n=0}^{K-1}1/\sigma_{x_n}^2$. While sampling $\beta_x$ is straightforward, sampling $\alpha_x$ is not, since its sampling distribution does not have a well-known form.
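Step 2 thus reduces to a standard Gamma draw. The sketch below (with arbitrary stand-in values for $\vec{\sigma}_x^2$) treats $\tilde{\beta}_{\beta_x}$ as a rate parameter, one common convention for $\mathcal{G}(\cdot)$; note that NumPy parameterizes the Gamma distribution by a scale, i.e., the reciprocal of the rate.

```python
import numpy as np

rng = np.random.default_rng(3)
K, alpha_x = 200, 2.0
# arbitrary stand-in IG draws for the current variances sigma_{x_n}^2
sigma_x2 = 1.0 / rng.gamma(alpha_x, 1.0, size=K)

shape = K * alpha_x                  # alpha-tilde in Step 2
rate = np.sum(1.0 / sigma_x2)        # beta-tilde, interpreted as a rate
# NumPy's gamma takes (shape, scale), so pass the reciprocal of the rate
beta_x = rng.gamma(shape, 1.0 / rate, size=100_000)
```

The empirical mean of the draws matches the Gamma mean, shape over rate, which provides a quick sanity check of the parameterization.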
Nevertheless, the univariate form of (\ref{sampling_alpha_x}) allows us to draw samples efficiently by employing standard sampling approaches, such as Metropolis-Hastings or slice sampling \cite{RNeal}. \subsubsection{Sampling Distribution for Step 3} The posterior distribution of $\vec{\sigma}_x^2$ conditioned on $\vec{x}$, $\alpha_x$, and $\beta_x$ is given by \begin{equation}\label{sampling_sigma_x} p(\vec{\sigma}_x^2|\vec{x},\alpha_x,\beta_x) = \prod_{n=0}^{K-1}\mathcal{IG}(\sigma_{x_n}^2;\tilde{\alpha}_x,\tilde{\beta}_{x_n}) \end{equation} with common shape parameter $\tilde{\alpha}_x = \alpha_x + 1/2$ and individual scale parameters $\tilde{\beta}_{x_n} = x_n^2/2 + \beta_x$. Therefore, sampling $\vec{\sigma}_x^2$ can be achieved by independently sampling its elements from univariate IG distributions. \subsubsection{Sampling Distribution for Step 4} The sampling distribution for the sparse sequence $\vec{x}$ takes the form of a multivariate Gaussian distribution \begin{equation}\label{sampling_x} p(\vec{x}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2) = \mathcal{N}(\vec{x};\vec{\tilde{\mu}}_x,\vec{\tilde{\Sigma}}_x) \end{equation} with the posterior mean $\vec{\tilde{\mu}}_x$ and covariance $\vec{\tilde{\Sigma}}_x$ given by \begin{equation}\label{sampling_x_mean_and_covariance} \vec{\tilde{\mu}}_x = \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_x\vec{H}^T\vec{y}, \qquad \vec{\tilde{\Sigma}}_x = \bigg(\dfrac{1}{\sigma_v^2}\vec{H}^T\vec{H} + \vec{\Sigma}_x^{-1}\bigg)^{-1}. \end{equation} Hence, sampling $\vec{x}$ is straightforward.
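The Step 4 draw can be sketched directly from (\ref{sampling_x_mean_and_covariance}). In the snippet below, a random dense matrix stands in for the structured convolution matrix $\vec{H}$ (an assumption purely for illustration); the posterior mean is also recomputed through a single linear solve, which avoids the explicit inverse and must agree with the definition.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 60, 40
H = rng.normal(size=(N, K))              # stand-in for the convolution matrix
y = rng.normal(size=N)
sigma_v2 = 0.1
sigma_x2 = rng.uniform(0.01, 1.0, size=K)

Sigma_x_inv = np.diag(1.0 / sigma_x2)
post_cov = np.linalg.inv(H.T @ H / sigma_v2 + Sigma_x_inv)
post_mean = post_cov @ H.T @ y / sigma_v2

# identical mean via one linear solve (multiply through by sigma_v2)
mean_solve = np.linalg.solve(H.T @ H + sigma_v2 * Sigma_x_inv, H.T @ y)

# one Gaussian draw of x using a Cholesky factor of the posterior covariance
x_draw = post_mean + np.linalg.cholesky(post_cov) @ rng.normal(size=K)
```

In practice the banded structure of the true convolution matrix would be exploited, but the algebra is the same.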
\subsubsection{Sampling Distribution for Step 5} Similar to that of $\vec{x}$, the sampling distribution for $\vec{\gamma}$ is also a multivariate Gaussian distribution \begin{equation}\label{sampling_gamma} p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2) = \mathcal{N}(\vec{\gamma};\vec{\tilde{\mu}}_{\gamma},\vec{\tilde{\Sigma}}_{\gamma}), \end{equation} where the posterior mean and covariance are now given by \begin{equation}\label{sampling_gamma_mean_and_covariance} \vec{\tilde{\mu}}_{\gamma} = \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_{\gamma}\vec{B}^T\vec{y},\qquad \vec{\tilde{\Sigma}}_{\gamma} = \bigg(\dfrac{1}{\sigma_v^2}\vec{B}^T\vec{B} + \dfrac{1}{\sigma_{\gamma}^2}\vec{I}\bigg)^{-1}, \end{equation} with $\vec{B} = \vec{X}\vec{A}$, from which sampling is straightforward. \subsubsection{Sampling Distribution for Step 6} The sampling distribution for the noise variance $\sigma_v^2$ takes the form of an IG distribution given by \begin{equation}\label{sampling_sigma_v} p(\sigma_{v}^2|\vec{y},\vec{x},\vec{\gamma}) = \mathcal{IG}(\sigma_v^2;\tilde{\alpha}_{v},\tilde{\beta}_v) \end{equation} with parameters $\tilde{\alpha}_v = N/2$ and $\tilde{\beta}_v = \|\vec{y}-\vec{X}\vec{A}\vec{\gamma}\|^2/2$. \par The use of the alternative \textit{soft} NIG prior for sparsity, instead of the BG model, allows us to introduce Step 3, where the latent variable $\vec{\sigma}_x^2$ is sampled conditioned on the current value of $\vec{x}$. The equivalent step for the BG model would be sampling the binary sequence $\vec{s}$ from $p(\vec{s}|\vec{x}) \propto p(\vec{x}|\vec{s})p(\vec{s})$, which would, however, not be possible due to the deterministic dependence between $\vec{s}$ and $\vec{x}$. Therefore, $\vec{s}$ and $\vec{x}$ are usually sampled jointly. However, since $\vec{s}$ is a binary sequence of length $K$, sampling $\vec{s}$ as a whole at one step, which would require $2^K$ different probability mass calculations, is not feasible.
This forces BG models to sample a single tuple $(s_n,x_n)$ at a time, conditioned on the others. Since neighboring variables usually have strong dependence, sampling them conditioned on each other causes the sampler to be stuck at a local optimum for a long time. There is an extension proposed in \cite{DGe} that samples $M$-tuples at a time to escape from local optima faster, but it is limited to very short blocks due to the exponentially increasing computational load. Therefore, the discrete nature of the BG model imposes a substantial computational burden. By employing the alternative NIG model, the problem is transformed into a fully continuous-valued framework, which not only eliminates the exponential computational complexity but also provides easier transitions between different local optima. \subsection{The Proposed Partially Collapsed Gibbs Sampler} Two main points are worth addressing in an attempt to improve the convergence rate of the proposed classical Gibbs sampler. Firstly, note that Step 3 samples $\vec{\sigma}_x^2$ purely conditioned on $\vec{x}$, which may slow down the convergence rate due to their strong statistical dependence. We address this issue by introducing intermediate proposal steps after Step 3, in which new values for coinciding blocks of $\vec{x}$ and $\vec{\sigma}_x^2$ are proposed using their joint posterior densities. Secondly, the blind nature of the problem creates many distinct local optima corresponding to different $\vec{x}$ and $\vec{\gamma}$ pairs. Once a suboptimal\footnote{Here, by suboptimality, we refer to the configurations of $\vec{x}$ and $\vec{\gamma}$ that achieve a likelihood value as high as that of the true values of $\vec{x}$ and $\vec{\gamma}$.} configuration has been reached with a corresponding noise level $\sigma_v^2$, the sampler can get stuck in this configuration for a long time since $\vec{x}$, $\vec{\gamma}$, and $\sigma_v^2$ are all sampled conditioned on each other.
This problem is especially important when the noise variance $\sigma_v^2$ gets smaller because it results in a posterior distribution with sharper and more isolated peaks. One way to resolve this issue is to sample $\vec{x}$ and $\vec{\gamma}$ jointly, which can be realized by marginalizing either $\vec{x}$ or $\vec{\gamma}$ from the sampling distributions given in Step 6 or 4, respectively. However, both of these approaches lead to complicated sampling distributions, from which sampling is not feasible due to high dimensionality. A less effective but more feasible way is to sample either $\vec{x}$ and $\sigma_v^2$, or $\vec{\gamma}$ and $\sigma_v^2$ jointly, creating more freedom for sampling $\sigma_v^2$. This allows $\sigma_v^2$ to assume larger values more frequently. Note that as $\sigma_v^2$ gets larger, the effect of likelihood is reduced on the conditional posteriors of both $\vec{x}$ or $\vec{\gamma}$, and it becomes easier to escape from a local optimum. Since $L \ll K$, we choose to sample $\vec{\gamma}$ and $\sigma_v^2$ jointly, because it is computationally much more efficient compared to sampling $\vec{x}$ and $\sigma_v^2$ jointly.\par \begin{table} \centering \normalsize \caption{Partially Collapsed Gibbs Sampler} \vspace{-2mm} \label{table2} \renewcommand\arraystretch{1.3} \begin{tabular}{|m{0.46\textwidth}|} \hline Step 1. Sample $\alpha_x$ from $p(\alpha_x|\vec{\sigma}_x^2,\beta_x)$ \\ Step 2. Sample $\beta_x$ from $p(\beta_x|\vec{\sigma}_x^2,\alpha_x)$ \\ Step 3. Sample $\vec{\sigma}_{x}^2$ from $p(\vec{\sigma}_{x}^2|\vec{x},\alpha_x,\beta_x)$ \\ Step 4. Sample $\vec{x}_{\sim\ell_n}$ from $p(\vec{x}_{\sim\ell_n}|\vec{y},\vec{\sigma}_{x}^2,\vec{\gamma},\sigma_v^2)$ \\ Step 5. Sample $\vec{\sigma}_{x_{\ell_n}}^2$ from $p(\vec{\sigma}_{x_{\ell_n}}^2|\vec{y},\vec{x}_{\sim\ell_n},\vec{\gamma},\sigma_v^2,\alpha_x,\beta_x)$ \\ Step 6. 
Sample $\vec{x}_{\ell_n}$ from $p(\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\sigma}_{x_{\ell_n}}^2,\vec{\gamma},\sigma_v^2)$ \\ Step 7. Sample $\sigma_{v}^2$ from $p(\sigma_{v}^2|\vec{y},\vec{x})$ \\ Step 8. Sample $\vec{\gamma}$ from $p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2)$ \\ \hline \end{tabular} \vspace{-5mm} \end{table} Let us first introduce the intermediate sampling steps for blocks of $\vec{x}$ and $\vec{\sigma}_x^2$. We define $\ell_n$ as the right-hand neighborhood of length $Q$ for index $n$, i.e., $\ell_n = \{n,n+1,\hdots,n+Q-1\}$ for $n = 0,1,\hdots,K-Q$, and let ${\sim}\ell_n$ be the complement of $\ell_n$ in $\{0,\hdots,K-1\}$. We represent the variable blocks indexed by the neighborhood $\ell_n$ as $\vec{\sigma}_{x_{\ell_n}}^2 = [\sigma_{x_n}^2,\sigma_{x_{n+1}}^2,\hdots,\sigma_{x_{n+Q-1}}^2]^T$ and $\vec{x}_{\ell_n} = [x_n,x_{n+1},\hdots,x_{n+Q-1}]^T$. Similarly, $\vec{\sigma}_{x_{\sim\ell_n}}^2$ and $\vec{x}_{\sim\ell_n}$ represent the blocks of the remaining $K-Q$ variables. At each iteration of the new sampler, we propose new values for the blocks $\vec{\sigma}_{x_{\ell_n}}^2$ and $\vec{x}_{\ell_n}$ using the proposal distribution $p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\sigma}_{x_{\sim\ell_n}}^2,\vec{\gamma},\sigma_v^2)$, which is the joint posterior distribution of $\vec{\sigma}_{x_{\ell_n}}^2$ and $\vec{x}_{\ell_n}$ conditioned on all the other variables. Since this is actually a valid Gibbs sampling step, the Metropolis-Hastings acceptance probability is always 1 for the proposals. The neighborhood $\ell_n$ is updated after each iteration as follows. At the $i^{th}$ iteration, we set $n = \text{mod}(i-1,K-Q+1)$, where $\text{mod}(a,b)$ is the modulo operator returning the remainder after division of $a$ by $b$. This creates a sliding window over $\vec{\sigma}_x^2$ and $\vec{x}$ that shifts one index to the right at each iteration.
Hence, the whole sequence is scanned after every $K-Q+1$ iterations.\par For a given neighborhood $\ell_n$, sampling from the joint conditional posterior $p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\gamma},\sigma_v^2,\alpha_x,\beta_x)$ can be realized in two consecutive steps, i.e., \begin{itemize}[noitemsep,topsep=1pt] \setlength\itemsep{0.1em} \item First, sample $\vec{\sigma}_{x_{\ell_n}}^2$ from $p(\vec{\sigma}_{x_{\ell_n}}^2|\vec{y},\vec{x}_{\sim\ell_n},\vec{\gamma},\sigma_v^2,\alpha_x,\beta_x)$, \item Then, sample $\vec{x}_{\ell_n}$ from $p(\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\sigma}_{x_{\ell_n}}^2,\vec{\gamma},\sigma_v^2)$. \end{itemize} These steps can be inserted right after Step 4, as Steps 5 and 6, respectively. Note that the block $\vec{x}_{\ell_n}$ is still being sampled in Step 4, which is redundant because it would not be conditioned on in Step 5 and would be immediately replaced with the new values obtained in Step 6. Therefore, sampling $\vec{x}_{\ell_n}$ can be skipped in Step 4, which forms a new step where only $\vec{x}_{\sim\ell_n}$ is sampled from $p(\vec{x}_{\sim\ell_n}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2)$. Although $\vec{\sigma}_{x_{\ell_n}}^2$ is also re-sampled in Step 5, it is still conditioned on in Step 4, so we cannot skip sampling $\vec{\sigma}_{x_{\ell_n}}^2$ in Step 3. The first 6 steps of the resulting sampling scheme are given in Table \ref{table2}.\par In order to jointly sample $\sigma_v^2$ and $\vec{\gamma}$, we need to sample from $p(\sigma_v^2,\vec{\gamma}|\vec{y},\vec{x})$, which can also be realized in two steps as: \begin{itemize}[noitemsep,topsep=1pt] \setlength\itemsep{0.1em} \item First, sample $\sigma_v^2$ from $p(\sigma_v^2|\vec{y},\vec{x})$, \item Then, sample $\vec{\gamma}$ from $p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2)$. \end{itemize} These constitute the last two steps of the proposed sampler in Table \ref{table2}.
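The sliding-window schedule for $\ell_n$ described above is easy to verify with a short sketch (arbitrary sizes): one full scan of $K-Q+1$ iterations visits every index of the sequence while the window never runs past the end.

```python
K, Q = 12, 4                             # arbitrary illustrative sizes
num_windows = K - Q + 1
visited = set()
for i in range(1, num_windows + 1):      # one full scan of the sequence
    n = (i - 1) % num_windows            # n = mod(i - 1, K - Q + 1)
    ell_n = list(range(n, n + Q))        # right-hand neighborhood of index n
    assert ell_n[-1] <= K - 1            # the window stays inside the sequence
    visited.update(ell_n)
```

After the loop, `visited` covers all $K$ indices, confirming the coverage claim.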
\par We emphasize that the sampling distribution in Step 4 is no longer associated with the joint posterior $p(\vec{\theta}|\vec{y})$. Therefore, the proposed sampler is not a classical Gibbs sampler. Instead, it is a PCG sampler since the procedure described above is completely consistent with the \textit{marginalization} and \textit{trimming} operations described in \cite{DDyk}. PCG samplers are generalizations of the block Gibbs sampler, where some of the variables are not sampled from their full conditional posteriors. This provides more freedom for the sampler to jump from one point to another in the sampling space, which usually increases the mixing rate.\par Compared to the classical Gibbs sampler in Table \ref{table1}, the first three steps are the same, and the sampling distribution for Step 8 is already given in (\ref{sampling_gamma}). The sampling distributions for the remaining steps are presented below. Derivations are provided in the supplemental material. \subsubsection{Sampling Distribution for Step 4}\label{PCG_step_4} The sampling distribution for the $\vec{x}_{\sim \ell_n}$ block is obtained by marginalizing $\vec{x}_{\ell_n}$ from $p(\vec{x}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2)$. The resulting distribution is also a multivariate Gaussian distribution given by \begin{equation}\label{sampling_x_block} p(\vec{x}_{\sim\ell_n}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2) = \mathcal{N}(\vec{x}_{\sim\ell_n};\vec{\tilde{\mu}}_{\sim\ell_n},\vec{\tilde{\Sigma}}_{\sim\ell_n}).
\end{equation} The posterior mean $\vec{\tilde{\mu}}_{\sim\ell_n}$ and covariance $\vec{\tilde{\Sigma}}_{\sim\ell_n}$ are defined as \begin{subequations} \begin{align} \vec{\tilde{\mu}}_{\sim\ell_n} &= \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_{\sim\ell_n}\vec{H}_{\sim\ell_n}^T\vec{D}_{\ell_n}^T\vec{y},\\ \vec{\tilde{\Sigma}}_{\sim\ell_n} &= \bigg(\dfrac{1}{\sigma_v^2}\vec{H}_{\sim\ell_n}^T\vec{D}_{\ell_n}\vec{H}_{\sim\ell_n} + \vec{\Sigma}_{\sim\ell_n}^{-1}\bigg)^{-1}, \end{align} \end{subequations} where $\vec{D}_{\ell_n} = \vec{I} - \frac{1}{\sigma_v^2}\vec{H}_{\ell_n}\vec{\tilde{\Sigma}}_{\ell_n}\vec{H}_{\ell_n}^T$, and $\vec{H}_{\sim\ell_n}$ and $\vec{H}_{\ell_n}$ denote the matrices formed by the columns of $\vec{H}$ indexed by ${\sim}\ell_n$ and $\ell_n$, respectively. The covariance matrices are defined as $\vec{\Sigma}_{\sim\ell_n} = \text{diag}(\vec{\sigma}_{x_{\sim\ell_n}}^2)$, $\vec{\Sigma}_{\ell_n} = \text{diag}(\vec{\sigma}_{x_{\ell_n}}^2)$, and $\vec{\tilde{\Sigma}}_{\ell_n} = (\frac{1}{\sigma_v^2}\vec{H}_{\ell_n}^T\vec{H}_{\ell_n} + \vec{\Sigma}_{\ell_n}^{-1})^{-1}$. \subsubsection{Sampling Distribution for Step 5}\label{PCG_step_5} Similarly, the sampling distribution for the $\vec{\sigma}_{x_{\ell_n}}^2$ block is obtained by marginalizing $\vec{x}_{\ell_n}$ from $p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\gamma},\sigma_v^2,\alpha_x,\beta_x)$, which is given by \begin{equation}\label{sampling_sigma_x_block} \begin{split} p(\vec{\sigma}_{x_{\ell_n}}^2|\vec{y},&\vec{x}_{\sim\ell_n},\vec{\gamma},\sigma_v^2,\alpha_x,\beta_x) \\ &\propto \dfrac{|\vec{\tilde{\Sigma}}_{\ell_n}|^{1/2}}{|\vec{\Sigma}_{\ell_n}|^{1/2}}\exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\ell_n}^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}\vec{\tilde{\mu}}_{\ell_n}\bigg)p(\vec{\sigma}_{x_{\ell_n}}^2),
\end{split} \end{equation} where $\vec{\tilde{\mu}}_{\ell_n} = \frac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_{\ell_n}\vec{H}_{\ell_n}^T\vec{\tilde{y}}_{\sim\ell_n}$ and $\vec{\tilde{y}}_{\sim\ell_n} = \vec{y} - \vec{H}_{\sim\ell_n}\vec{x}_{\sim\ell_n}$. Direct sampling from (\ref{sampling_sigma_x_block}) is not possible, since the distribution does not have a well-known form. Therefore, we employ an MH or slice sampling step. \subsubsection{Sampling Distribution for Step 6} The sampling distribution for the $\vec{x}_{\ell_n}$ block is given by \begin{equation}\label{sampling_x_block_2} p(\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\sigma}_{x_{\ell_n}}^2,\vec{\gamma},\sigma_v^2) = \mathcal{N}(\vec{x}_{\ell_n};\vec{\tilde{\mu}}_{\ell_n},\vec{\tilde{\Sigma}}_{\ell_n}), \end{equation} with $\vec{\tilde{\mu}}_{\ell_n}$ and $\vec{\tilde{\Sigma}}_{\ell_n}$ as defined in Sections \ref{PCG_step_4} and \ref{PCG_step_5}.\par \subsubsection{Sampling Distribution for Step 7} The sampling distribution for the noise variance $\sigma_v^2$ is obtained by marginalizing $\vec{\gamma}$ from $p(\sigma_v^2,\vec{\gamma}|\vec{y},\vec{x})$, which is given by \begin{align}\label{sampling_sigma_v_2} &p(\sigma_v^2|\vec{y},\vec{x}) \notag \\ &\propto \bigg(\dfrac{1}{\sigma_v^2}\bigg)^{N/2}|\vec{\tilde{\Sigma}}_{\gamma}|^{1/2}\exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\gamma}^T\vec{\tilde{\Sigma}}_{\gamma}^{-1}\vec{\tilde{\mu}}_{\gamma} - \dfrac{1}{2\sigma_v^2} \vec{y}^T\vec{y}\bigg)p(\sigma_v^2). \end{align} Although it is not straightforward to sample from (\ref{sampling_sigma_v_2}), we can efficiently employ MH or slice sampling methods due to its univariate form. In the next section, we investigate the scaling and time-shifting ambiguities existing in BD problems and propose two intermediate sampling steps accounting for these ambiguities.
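The block expressions in Steps 4--6 can be verified numerically: marginalizing a Gaussian simply extracts the corresponding sub-vector and sub-block of $\vec{\tilde{\mu}}_x$ and $\vec{\tilde{\Sigma}}_x$, which must coincide with the $\vec{D}_{\ell_n}$-based formulas. A sketch with a random dense stand-in for the convolution matrix $\vec{H}$ (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, Q, n = 50, 30, 5, 10
ell = np.arange(n, n + Q)                      # block ell_n
rest = np.setdiff1d(np.arange(K), ell)         # ~ell_n
H = rng.normal(size=(N, K))                    # stand-in for the convolution matrix
y = rng.normal(size=N)
sigma_v2 = 0.2
sigma_x2 = rng.uniform(0.1, 1.0, size=K)

# full conditional posterior of x, as in the classical Step 4
Sig_full = np.linalg.inv(H.T @ H / sigma_v2 + np.diag(1.0 / sigma_x2))
mu_full = Sig_full @ H.T @ y / sigma_v2

# marginal of x_{~ell_n} via the D-matrix expressions of the PCG Step 4
Hl, Hr = H[:, ell], H[:, rest]
Sig_l = np.linalg.inv(Hl.T @ Hl / sigma_v2 + np.diag(1.0 / sigma_x2[ell]))
D = np.eye(N) - Hl @ Sig_l @ Hl.T / sigma_v2
Sig_rest = np.linalg.inv(Hr.T @ D @ Hr / sigma_v2 + np.diag(1.0 / sigma_x2[rest]))
mu_rest = Sig_rest @ Hr.T @ D.T @ y / sigma_v2
```

By the block-inverse (Schur complement) identity, `Sig_rest` and `mu_rest` agree with the corresponding sub-block and sub-vector of the full posterior moments.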
\subsection{Scale and Time-Shift Ambiguities} Scaling and time-shift ambiguities are inherent in blind deconvolution problems, preventing the unique recovery of the pulse shape and the sparse sequence. For a given solution pair $(\vec{x},\vec{h})$, one can produce infinitely many different solutions consisting of the scaled versions $(\alpha\vec{x},\vec{h}/\alpha)$ with $\alpha \in \mathbbm{R}$, which constitutes the scale ambiguity. The time-shifted versions $(\vec{x} \ast \delta_n,\vec{h} \ast \delta_{-n})$, where $\delta_n$ is the Kronecker delta with spike at position $n$ and $\ast$ denotes the convolution operation, constitute the time-shift ambiguity. In practice, recovery of the true parameters up to an arbitrary scale and time-shift does not cause a major problem and is usually sufficient. \subsubsection{Scale Ambiguity} In the Bayesian framework, assignment of the prior distributions can eliminate the scaling ambiguity only if the scaling of the prior distributions is fixed at an anchor point. Otherwise, the scaling transformation $\{a\vec{x},a^2\vec{\sigma}_x^2,\alpha_x,a^2\beta_x,\vec{\gamma}/a,\sigma_{\gamma}^2/a^2,\sigma_v^2\}$ with $a > 0$ leaves the likelihood invariant while scaling the posterior by $a^{L-K+4}$. This means that the posterior distribution can be increased arbitrarily by decreasing the scale $a$, indicating the nonexistence of a global optimum. However, this issue can be avoided by setting, for example, $\sigma_{\gamma}^2$ to a fixed constant. In this case, the sampler eventually converges to a fixed scale associated with the value of $\sigma_{\gamma}^2$. However, convergence might be slow since the parameters are sampled conditioned on each other. A common approach to accelerate the convergence is to introduce an intermediate Metropolis-Hastings (MH) sampling step.
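The likelihood invariance underlying the scale ambiguity is easy to check numerically: replacing $(\vec{x},\vec{h})$ by $(a\vec{x},\vec{h}/a)$ leaves the residual, and hence the Gaussian likelihood, unchanged. A small sketch with arbitrary signals:

```python
import numpy as np

rng = np.random.default_rng(6)
K, T = 40, 8
x = np.where(rng.random(K) < 0.1, rng.normal(0.0, 1.0, size=K), 0.0)  # sparse spikes
h = rng.normal(size=T)                                                # pulse shape
y = np.convolve(x, h) + rng.normal(0.0, 0.05, size=K + T - 1)         # noisy observation

a = 3.7                                    # arbitrary positive scale
res = y - np.convolve(x, h)
res_scaled = y - np.convolve(a * x, h / a)  # rescaled solution pair
```

Since linear convolution is bilinear, the scale factors cancel and the two residuals coincide up to floating-point error.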
Following this approach, once the current values of $(\vec{x},\vec{\gamma},\vec{\sigma}_x^2)$ are sampled, we first propose the new values for $\vec{x}^{*}$ and $\vec{\gamma}^{*}$, i.e., $(\vec{x}^{*},\vec{\gamma}^{*}) = (\alpha\vec{x},\vec{\gamma}/\alpha)$, by sampling the scaling factor $\alpha$ from the proposal distribution $q(\alpha) = \mathcal{N}(\alpha;0,\sigma_{\alpha}^2)$ with known variance $\sigma_{\alpha}^2$ and then propose $\vec{\sigma}_x^{2*}$ by sampling from $p(\vec{\sigma}_x^{2*}|\vec{x}^*,\alpha_x,\beta_x)$, yielding the complete proposal distribution $q(\vec{x}^{*},\vec{\gamma}^{*},\vec{\sigma}_x^{2*}|\vec{x},\vec{\gamma},\vec{\sigma}_x^2) = q(\alpha)p(\vec{\sigma}_x^{2*}|\vec{x}^*,\alpha_x,\beta_x)$. The proposed values are accepted with probability $p_{\alpha}$, which is given by \begin{equation}\label{MH_scaling_acceptance_prob} p_{\alpha} = \min\bigg\{1,\alpha^L\exp\bigg(\dfrac{\alpha^{4}-1}{2\alpha^{2}\sigma_{\alpha}^2}\bigg)\prod_{n=0}^{K-1}\bigg[\dfrac{x_n^2 + 2\beta_x}{\alpha^2x_n^2 + 2\beta_x}\bigg]^{\alpha_x + 0.5} \bigg\} \end{equation} We note that a better strategy would be adjusting the parameters $\alpha_x$ and $\beta_x$ as well by sampling new values $\alpha_x^{*}$ and $\beta_x^{*}$ from $p(\alpha_x^{*},\beta_x^{*}|\vec{\sigma}_x^{2*})$. However, achieving a closed form expression for the acceptance probability would not be possible. \subsubsection{Time-Shift Ambiguity} Unlike scaling ambiguity, time-shift ambiguity does not fully apply to our case due to the edge effects, i.e., the edges of the reconstructed observation sequence will be corrupted when shifted versions of a given solution are considered. This is because we model the observation sequence using linear convolution of finite length sequences. Therefore, solution pairs with different shifts cannot achieve exactly the same likelihood value. 
However, although large time-shift ambiguities are explicitly avoided, time-shifts of very short lengths can still correspond to a similar level of likelihood and hence need to be addressed. Since both sequences $\vec{x}$ and $\vec{h}$ are sampled conditioned on the current value of the other, jumps between different time-shift configurations rarely occur. To increase the frequency of these jumps, we employ the circular shift compensation method proposed in \cite{CLabat}. Following the scale compensation step, new values of $\vec{x}$, $\vec{\sigma}_x^2$ and $\vec{\gamma}$ are proposed using the proposal distribution $ q(\vec{x}^{*},\vec{\sigma}_x^{2*},\vec{\gamma}^{*}|\vec{x},\vec{\sigma}_x^{2},\vec{\gamma}) = q(\vec{x}^{*},\vec{\sigma}_x^{2*}|\vec{x},\vec{\sigma}_x^2)p(\vec{\gamma}^{*}|\vec{y},\vec{x}^{*},\sigma_v^2)$ where \begin{equation}\label{MH_time_shift_proposal_2} q(\vec{x}^{*},\vec{\sigma}_x^{2*}|\vec{x},\vec{\sigma}_x^2) = \begin{cases} 0.5 \; \text{if} \; (\vec{x}^*,\vec{\sigma}_x^{2*}) = (\vec{x}\circledast\delta_{-1},\vec{\sigma}_x^{2}\circledast\delta_{-1})\\ 0.5 \; \text{if} \; (\vec{x}^*,\vec{\sigma}_x^{2*}) = (\vec{x}\circledast\delta_{1},\vec{\sigma}_x^{2}\circledast\delta_{1}) \end{cases} \end{equation} and $\circledast$ denotes the circular convolution.
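Both compensation moves are straightforward to implement. The following is a minimal NumPy sketch, not the paper's code, of the log acceptance ratio in (\ref{MH_scaling_acceptance_prob}) and of the $\pm 1$ circular-shift proposal above; variable names are ours, and $|\alpha|$ is used inside the logarithm so the expression remains defined for negative proposals.

```python
import numpy as np

def log_accept_scale(alpha, x, L, sigma_alpha2, alpha_x, beta_x):
    # log of the ratio inside min{1, .} of the scale-compensation move
    val = L * np.log(abs(alpha)) + (alpha**4 - 1.0) / (2.0 * alpha**2 * sigma_alpha2)
    val += (alpha_x + 0.5) * np.sum(np.log((x**2 + 2.0 * beta_x)
                                           / (alpha**2 * x**2 + 2.0 * beta_x)))
    return val

def propose_shift(x, sigma_x2, rng):
    # circular convolution with a Kronecker delta at +/-1 == circular shift by +/-1
    n = 1 if rng.uniform() < 0.5 else -1
    return np.roll(x, n), np.roll(sigma_x2, n)
```

In a full sampler, `log_accept_scale` would be compared against the log of a uniform draw, and an accepted scale move would be followed by redrawing $\vec{\sigma}_x^2$ from its conditional, as described in the text.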
It can be shown that the MH acceptance probability $p_s$ is given by \begin{equation}\label{MH_time_shift_acceptance_prob} p_{s} = \min\bigg\{1,\dfrac{|\vec{\tilde{\Sigma}}_{\gamma^*}|}{|\vec{\tilde{\Sigma}}_{\gamma}|}\exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\gamma^*}^T\vec{\tilde{\Sigma}}_{\gamma^*}^{-1}\vec{\tilde{\mu}}_{\gamma^*} - \dfrac{1}{2}\vec{\tilde{\mu}}_{\gamma}^T\vec{\tilde{\Sigma}}_{\gamma}^{-1}\vec{\tilde{\mu}}_{\gamma}\bigg)\bigg\} \end{equation} where both $(\vec{\tilde{\mu}}_{\gamma^*},\vec{\tilde{\Sigma}}_{\gamma^*})$ and $(\vec{\tilde{\mu}}_{\gamma},\vec{\tilde{\Sigma}}_{\gamma})$ can be calculated through (\ref{sampling_gamma_mean_and_covariance}) using $\vec{x}^*$ and $\vec{x}$, respectively. \section{Simulations}\label{simulations} In this section, we present our numerical studies to assess the performance of the proposed samplers and compare our results with the classical BG deconvolution approach, in which the sparsity of $\vec{x}$ is enforced by the BG prior, as described in (\ref{conditional_prior_x_given_s})-(\ref{joint_prior_x_and_s}), instead of the NIG prior used in this work. A slightly modified version of Cheng et al.'s classical Gibbs sampler employing the BG model is given in Table \ref{table3}. Step 1 of this sampler consists of $K$ sub-update steps, where pairs of $(s_n,x_n)$ are sampled jointly from their joint posterior distribution at each sub-step. Jointly sampling $(s_n,x_n)$ is itself completed in two steps: first sampling $s_n$ from $p(s_n|\vec{y},\vec{x}_{\sim n},\vec{\gamma},\sigma_v^2)$ and then sampling $x_n$ from $p(x_n|\vec{y},\vec{x}_{\sim n},s_n,\vec{\gamma},\sigma_v^2)$. The remaining sampling steps are the same as in the proposed samplers. We also consider the $M$-tuple Gibbs sampler proposed in \cite{DGe}, with $M = 3$. It modifies Step 1 of Table \ref{table3} to sample tuples of length $3$ at a time and considerably improves the convergence rate of classical BGS.
Hence, it constitutes a stronger baseline for assessing the performance of our samplers. Throughout the simulations, we use these as our baseline samplers and refer to them as the classical Bernoulli-Gaussian Sampler (BGS) and the $3$-Tuple BGS, respectively. We refer to the proposed samplers using the Normal-Inverse-Gamma law as NIGS-1 (for the classical Gibbs sampler) and NIGS-2 (for the partially collapsed Gibbs sampler). \par \begin{table} \centering \normalsize \caption{Baseline Classical Bernoulli-Gaussian Sampler} \vspace{-2mm} \label{table3} \renewcommand\arraystretch{1.2} \begin{tabular}{|p{0.06\textwidth} p{0.35\textwidth}|} \hline Step 1. & For $n =0,1,\hdots,K-1$, sample $(s_n,x_n)$ from $p(s_n,x_n|\vec{y},\vec{x}_{\sim n},\vec{\gamma},\sigma_v^2)$\\ Step 2. & Sample $\vec{\gamma}$ from $p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2)$ \\ Step 3. & Sample $\sigma_v^2$ from $p(\sigma_v^2|\vec{y},\vec{x},\vec{\gamma})$ \\ \hline \end{tabular} \vspace{-5mm} \end{table} Due to the scale and minor time-shift ambiguities that are inherent in blind deconvolution problems, all recovery results presented in this section are given up to an arbitrary scale and shift factor. The compensation steps introduced in the previous section resolve these ambiguities only within the sampling stage, i.e., the resulting estimated sequences are not necessarily expected to match the scale and shift of the true sequences. Therefore, for any given estimation pair $(\hat{\vec{x}},\hat{\vec{h}})$, we present the corrected versions $\vec{x}^{\prime}$ and $\vec{h}^{\prime}$, given by $\vec{x}^{\prime} = (\hat{\vec{x}} \ast \delta_{-n})/a$ and $\vec{h}^{\prime} = a(\hat{\vec{h}} \ast \delta_{n})$, where the correction factors are found using \begin{equation}\label{scale_and_time_shift_correction} (a,n) = \argmin_{a\in \mathbbm{R},n\in \mathbb{Z}}\|\vec{h} - a(\hat{\vec{h}} \ast \delta_n)\|. \end{equation} \begin{figure*}[t!]
\begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.95\textwidth]{mendel_x_rec_NIGS1.png} \end{subfigure} % \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.95\textwidth]{mendel_x_rec_NIGS2.png} \end{subfigure} % \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.95\textwidth]{mendel_h_rec.png} \end{subfigure} % \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.95\textwidth]{mendel_x_rec_BGS.png} \end{subfigure} % \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.95\textwidth]{mendel_x_rec_BGS_Tuple.png} \end{subfigure} % \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.95\textwidth]{mendel_MPSRF.png} \end{subfigure} \caption{Recovery results for the sparse sequence (left and middle columns), pulse sequence (upper right), and convergence rates (lower right). \label{mendel_results}} \vspace{-9mm} \end{figure*} \subsection{A Measure of Convergence Rate} In order to empirically measure the convergence rate, we employ the iterated graphical monitoring approach proposed by Brooks and Gelman in \cite{SBrooks}. The diagnostic is based on comparing the between-chain and within-chain covariance matrices, denoted by $\vec{S}_b$ and $\vec{S}_w$, respectively, of different simulation chains that are running simultaneously. More precisely, for multivariate simulations, convergence is declared when the multivariate potential scale reduction factor (MPSRF), denoted by $\hat{R}$, becomes close to 1. A typical threshold below which the MPSRF is expected to fall is 1.2, as suggested in \cite{SBrooks}. The MPSRF is defined as \begin{equation}\label{MPSRF} \hat{R} = \dfrac{i-1}{i} + \dfrac{q+1}{q}\lambda_{\text{max}}, \end{equation} where $i$ denotes the number of MCMC iterations, $q$ is the total number of distinct simulation chains running simultaneously and $\lambda_{\text{max}}$ is the maximum eigenvalue of the matrix $\vec{S}_w^{-1}\vec{S}_b$.
The definitions of the between-chain covariance matrix $\vec{S}_b$ and the within-chain covariance matrix $\vec{S}_w$ are given as \begin{subequations} \begin{align} \vec{S}_b &= \dfrac{1}{q-1}\sum_{j=1}^{q}(\vec{\bar{\theta}}_{.j} - \vec{\bar{\theta}}_{..})(\vec{\bar{\theta}}_{.j} - \vec{\bar{\theta}}_{..})^T,\\ \vec{S}_w &= \dfrac{1}{q(i-1)}\sum_{j=1}^{q}\sum_{l=1}^{i}(\vec{\theta}_{lj} - \vec{\bar{\theta}}_{.j})(\vec{\theta}_{lj} - \vec{\bar{\theta}}_{.j})^T, \end{align} \end{subequations} where $\vec{\theta}_{lj}$ denotes the $l^{th}$ sample of the $j^{th}$ chain. Also, $\vec{\bar{\theta}}_{.j}$ and $\vec{\bar{\theta}}_{..}$ represent the local mean of the $j^{th}$ chain and the global mean over all chains, respectively. Therefore, for a given data set, we run several chains from different initial points and monitor the MPSRF value to assess convergence. \subsection{Recovery Performance on Mendel Sequence}\label{results_mendel} As an illustrative example, we present the recovery results of the proposed samplers on a given observation sequence and provide a comparison with the classical BGS described above in order to show its inefficiency. The observation sequence $y_n$ was generated based on the linear convolution model given in (\ref{conv_model}). As the sparse sequence $x_n$, we used the well-known Mendel sequence, which models a 1-D sparse reflectivity profile for seismic blind deconvolution \cite{JMendel}. The sequence is depicted with bullets in the sparse recovery plots of Fig. \ref{mendel_results}. As the short pulse sequence $h_n$, we used \begin{equation}\label{pulse_sequence} h_n = \cos\big((n-10)\pi/4\big)\exp\big(-|0.225n-2|^{1.5}\big) \end{equation} for $n = 0,1,\hdots,20$, which is the same sequence used in both \cite{QCheng} and \cite{DGe}. It is represented with a solid line in the upper-right plot of Fig. \ref{mendel_results}.
The noise variance was set as $\sigma_v^2 = 4.8\times10^{-6}$, which corresponds to a Signal-to-Noise Ratio (SNR) of 25 dB. \par The lengths of the sparse sequence $\vec{x}$ and the pulse sequence $\vec{h}$ are $K = 300$ and $T = 21$, respectively, yielding a length $N = 321$ measurement sequence $\vec{y}$. For this experiment, we did not impose any frequency domain constraints on the pulse sequence and set the subspace matrix as identity, i.e., $\vec{A} = \vec{I}$. For Step 1 of NIGS-1, and Steps 1, 5, and 7 of NIGS-2, we employed the univariate slice sampling approach. We set the window size for NIGS-2 as $Q = 10$ and used the following fixed values for the parameters of the baseline samplers: $\sigma_x^2 = 1$, $\sigma_{\gamma}^2 = 10$, and $\pi_0 = 1-|\vec{x}|_0/K$, where $|\vec{x}|_0$ denotes the number of nonzero elements in the true sequence $\vec{x}$. For all samplers, we generated $10^4$ samples and used the last $25\%$ to produce the estimates. \par The four plots in the left and middle columns of Fig. \ref{mendel_results} illustrate the sparse sequences recovered by each sampler. The recoveries obtained by the proposed samplers and 3-Tuple BGS are almost identical and perfectly match the true sequence. On the other hand, the sparse sequence recovered by the classical BGS contains inaccurate nonzero entries where the actual spikes are closely spaced. This example shows the main inefficiency of the classical BGS. Since the $(s_n,x_n)$ tuples are sampled conditioned on the current values of adjacent entries, it usually takes a large number of iterations to escape from a local optimum. Therefore, initialization plays an important role in the convergence behavior of BGS. This issue will become clearer once we present the convergence analysis.\par The recovered pulses, as shown in the upper-right plot of Fig. \ref{mendel_results}, are almost identical and perfectly match the true pulse shape for all samplers.
This is usually the case since the number of parameters to be estimated, i.e., the degrees of freedom, is much smaller for the pulse sequence.\par In order to compare the convergence rates of the samplers, we illustrate the evolution of the MPSRF corresponding to samples of $\vec{x}$ and $\vec{\gamma}$ for all samplers in the lower-right plot of Fig. \ref{mendel_results}. We simulated 10 independent chains with distinct initializations for each sampler and updated the MPSRF after every 200 iterations using the last 50\% of the generated samples. The MPSRF curves of the proposed samplers NIGS-1 and NIGS-2 consistently decrease and fall below the convergence threshold of $1.2$ after around $5000$ and $2000$ iterations, respectively. This suggests that each of the individual chains of NIGS-1 and NIGS-2 converged to the same local optimum, which is very likely to be the global one, regardless of where they are initialized in the parameter space. However, the MPSRF curve of classical BGS fails to fall below the convergence threshold within the simulated duration. This result confirms that individual chains of BGS get stuck at a local optimum for a much larger number of iterations. Note that unlike classical BGS, 3-Tuple BGS reaches the convergence threshold at around $2500$ iterations, owing to its improved convergence rate. \begin{figure*}[t!]
\centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\textwidth]{conv_exp_MPSRF_x.png} \end{subfigure} % \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\textwidth]{conv_exp_MPSRF_h.png} \end{subfigure} % \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\textwidth]{convenrgence_ratio_vs_sim_time_x.png} \end{subfigure} % \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\textwidth]{convenrgence_ratio_vs_sim_time_h.png} \end{subfigure} \caption{Ratio of the number of converged simulations to the total number of simulations as a function of MCMC iteration (top) and CPU time (bottom) for the sparse sequence $\vec{x}$ and the pulse sequence $\vec{h}$.\label{convergence_ratio}} \vspace{-5mm} \end{figure*} \subsection{Empirical Analysis of the Convergence Rates}\label{results_convergence_rates} Next, we evaluate the convergence performance of the proposed samplers over a set of randomly generated sparse sequences. The generated sequences have length $K = 300$. The nonzero positions were randomly distributed across the sequence. In order to ensure that there are no localized regions in which all elements are nonzero, we divided each sequence into multiple segments of equal length and randomly chose a nonzero index for each segment. A $6\%$ sparsity level, defined as the ratio of the number of nonzero entries to the total length of the sequence, was maintained for all generated sequences. The amplitudes of the nonzero positions were independently drawn from a univariate zero-mean Gaussian distribution with a variance of 0.5. A total of 50 distinct observations were generated by convolving each of 50 different sparse sequences with the pulse sequence given in (\ref{pulse_sequence}); these were then corrupted by additive noise such that the SNR was 20 dB.
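The generation procedure above (equal-length segments, one Gaussian-amplitude spike per segment) can be sketched as follows. This is a simplified illustration, not the paper's code: the function name and defaults are ours, and any trailing $K \bmod (\text{number of spikes})$ samples are simply left empty.

```python
import numpy as np

def random_sparse_sequence(K=300, sparsity=0.06, amp_var=0.5, rng=None):
    """Place one Gaussian-amplitude spike per equal-length segment of a
    length-K sequence, which prevents localized clusters of nonzeros."""
    rng = np.random.default_rng() if rng is None else rng
    n_spikes = round(K * sparsity)          # 18 spikes for K = 300
    seg = K // n_spikes                     # segment length
    x = np.zeros(K)
    for k in range(n_spikes):
        # random position within the k-th segment, amplitude ~ N(0, amp_var)
        x[k * seg + rng.integers(seg)] = rng.normal(0.0, np.sqrt(amp_var))
    return x
```

Convolving such a sequence with the pulse in (\ref{pulse_sequence}) and adding noise at the target SNR reproduces one observation of the experiment.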
\par For each observation, we ran all samplers 10 times with different initializations to obtain 10 independently simulated chains. The MPSRF values were calculated on these chains after every 100 MCMC iterations using the last $50\%$ of the samples. As before, we set the convergence threshold for the MPSRF as 1.2. In Fig. \ref{convergence_ratio}, we illustrate the ratio of converged sequences over all 50 sequences as a function of the MCMC iteration for all samplers. In order to investigate the convergence performance on different unknowns individually, we calculated the MPSRF curves separately for the sparse sequence $\vec{x}$ and the pulse coefficients $\vec{\gamma}$, which are represented in the upper left and right figures, respectively. \par We see in the top left figure that the ratio of converged sequences for the proposed samplers increases quite rapidly compared to classical BGS, while all three are outperformed by 3-Tuple BGS. The outstanding convergence performance of 3-Tuple BGS is mainly due to the fact that many of the configurations leading to a local optimum occur within a short neighborhood, which can be escaped rapidly even with a very small tuple size $M$. However, the computational complexity of $M$-Tuple BGS increases exponentially with the tuple size $M$, as shown in \cite{DGe}. We also observed that NIGS-2 achieves a slightly better convergence rate than NIGS-1, which indicates that the intermediate sampling steps introduced in NIGS-2 help accelerate the mixing rate. Comparing the left and right figures, we see that the convergence rate is faster for all samplers in the case of the pulse coefficients $\vec{\gamma}$. This is indeed an expected result since the degrees of freedom are considerably higher for the sparse sequence. We further note that the convergence rates of the proposed samplers are almost as fast as that of 3-Tuple BGS, and they still significantly outperform classical BGS.
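The MPSRF monitoring used throughout these experiments follows directly from (\ref{MPSRF}) and the covariance definitions above. A minimal NumPy sketch, with the (chains $\times$ iterations $\times$ parameters) array layout being our assumption:

```python
import numpy as np

def mpsrf(chains):
    """Multivariate potential scale reduction factor (Brooks-Gelman).

    chains: (q, i, d) array -- q parallel chains, i iterations, d parameters.
    Returns R_hat = (i-1)/i + (q+1)/q * lambda_max(S_w^{-1} S_b).
    """
    q, i, d = chains.shape
    local_means = chains.mean(axis=1)                       # theta_bar_.j, shape (q, d)
    grand_mean = local_means.mean(axis=0)                   # theta_bar_..
    db = local_means - grand_mean
    S_b = db.T @ db / (q - 1)                               # between-chain covariance
    dw = chains - local_means[:, None, :]
    S_w = np.einsum('qid,qie->de', dw, dw) / (q * (i - 1))  # within-chain covariance
    lam_max = np.max(np.real(np.linalg.eigvals(np.linalg.solve(S_w, S_b))))
    return (i - 1) / i + (q + 1) / q * lam_max
```

Convergence is declared once the returned value falls below the 1.2 threshold; identical chains give values below 1, while a chain stuck far from the others inflates $\vec{S}_b$ and hence $\hat{R}$.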
\subsection{Computational Complexity Analysis} The study presented above compared the convergence rates of the different algorithms in terms of the number of MCMC iterations. Next, we incorporate the computational complexities of the samplers into the analysis. In order to investigate the empirical computational complexities, we measured the processing times of an unoptimized MATLAB R2019a implementation of the samplers on an Intel Core i5-8265U processor. For the implementation of classical and 3-Tuple BGS, we used the efficient numerical method presented in \cite{DGe}. The average elapsed time for one iteration of each sampler as a function of the sparse sequence length $K$ is illustrated in the left plot of Fig. \ref{computational_complexities}. For all considered values of $K$, the number of spikes was adjusted accordingly to maintain the 6\% sparsity level. It can be observed that the cost per iteration in terms of processing time is significantly higher for 3-Tuple BGS for all values of $K$. Moreover, the difference between the computational complexities of the proposed samplers and classical BGS increases considerably for larger values of $K$. In order to better illustrate the difference in computational complexities, we also present the computational gain of all samplers relative to 3-Tuple BGS in the right plot of Fig. \ref{computational_complexities}. We define the gain as the ratio of the cost per iteration of 3-Tuple BGS to those of the other samplers. It is clear that NIGS-1 and NIGS-2 are, respectively, at least around 10 and 5 times more computationally efficient than 3-Tuple BGS. Moreover, the computational gain of the proposed samplers increases for larger values of $K$. We also observed that classical BGS is computationally more efficient than NIGS-2 for smaller values of $K$, even though its cost per iteration increases significantly for larger values of $K$.\par \begin{table}[t!]
\renewcommand\arraystretch{1.1} \centering \caption{Overall empirical successful recovery rates for the sparse sequence $\vec{x}$ (top) and the pulse sequence $\vec{h}$ (bottom). The NMSE threshold $\tau$ is varied between $0.01$ and $0.1$.\label{overall_success_rates}} \vspace{-2mm} \begin{tabularx}{\linewidth}{ C{0.2\linewidth} C{0.14\linewidth} C{0.14\linewidth} C{0.14\linewidth} C{0.14\linewidth} } \multicolumn{5}{c}{\small Sparse Sequence} \\ \toprule & $\tau = 0.01$ & $\tau = 0.04$ & $\tau = 0.07$ & $\tau = 0.1$ \\ \midrule \textbf{NIGS-1} & 0.54 & 0.69 & 0.75 & 0.79 \\ \textbf{NIGS-2} & 0.57 & 0.72 & 0.78 & 0.81 \\ \textbf{Classical BGS} & 0.36 & 0.53 & 0.62 & 0.69 \\ \textbf{3-Tuple BGS} & 0.57 & 0.73 & 0.79 & 0.82 \\ \bottomrule \end{tabularx} \medskip \begin{tabularx}{\linewidth}{ C{0.2\linewidth} C{0.14\linewidth} C{0.14\linewidth} C{0.14\linewidth} C{0.14\linewidth} } \multicolumn{5}{c}{\small Pulse Sequence} \\ \toprule & $\tau = 0.01$ & $\tau = 0.04$ & $\tau = 0.07$ & $\tau = 0.1$ \\ \midrule \textbf{NIGS-1} & 0.82 & 0.92 & 0.94 & 0.96 \\ \textbf{NIGS-2} & 0.83 & 0.93 & 0.96 & 0.97 \\ \textbf{Classical BGS} & 0.85 & 0.93 & 0.95 & 0.96 \\ \textbf{3-Tuple BGS} & 0.86 & 0.93 & 0.95 & 0.96 \\ \bottomrule \end{tabularx} \vspace{-5mm} \end{table} \begin{figure*}[t!] \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\textwidth]{cost_per_iter_vs_meas_length.png} \end{subfigure} % \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\textwidth]{computational_gain.png} \end{subfigure} \caption{Comparison of cost per iteration in seconds for sparse sequences with varying lengths (left). Computational gain of the samplers relative to 3-Tuple BGS as a function of sparse sequence length $K$ (right). \label{computational_complexities}} \vspace{-1mm} \end{figure*} \begin{figure*}[t!] 
\centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS1_x_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS2_x_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_x_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_Tuple_x_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS1_x_10.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS2_x_10.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_x_10.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_Tuple_x_10.png} \end{subfigure} \caption{Empirical successful recovery rates for the sparse sequence $\vec{x}$ at different SNR and sparsity levels for $\tau = 0.01$ and $\tau = 0.1$. The dashed white line depicts the transition boundary for 3-Tuple BGS. \label{success_rate_snr_vs_sparsity_x}} \vspace{-5mm} \end{figure*} Based on the empirically measured computational costs of the samplers, we can also compare the convergence rates in terms of the simulation time. The comparisons are provided in the bottom row of Fig. \ref{convergence_ratio} for both the sparse sequence $\vec{x}$ and the pulse coefficients $\vec{\gamma}$, using length $K = 300$ sparse sequences. The results indicate that the excessive computational cost of 3-Tuple BGS outweighs its outstanding convergence rate. Due to its lower cost per iteration, NIGS-1 achieves the fastest convergence in terms of processing time for both $\vec{x}$ and $\vec{\gamma}$. NIGS-2 has a slightly higher cost per iteration than NIGS-1, but still attains a convergence rate similar to that of 3-Tuple BGS for $\vec{x}$ and an even better one for $\vec{\gamma}$.
Due to the computational gain increasing with $K$, it is also reasonable to expect that the difference between convergence rates will be even more substantial for larger values of $K$, favoring the use of the proposed samplers in practice. \subsection{Overall Recovery Performance for Different Scenarios} In this section, we investigate the recovery performance of the proposed samplers under various SNR and sparsity levels and compare it with that of the baseline samplers. We considered 11 different SNR levels ranging between 10 dB and 30 dB with a 2 dB separation between levels. For each SNR level, we investigated 26 different scenarios, where the number of spikes is increased from 10 to 60 in increments of 2. The length of the sparse sequence for each scenario was fixed at $K=300$, yielding sparsity levels ranging from 3\% to 20\%. We created 20 different random sparse sequences for each of these 286 scenarios in the same way as described in Section \ref{results_convergence_rates}. For the pulse sequence, we used the normalized first derivative of the Gaussian pulse, given by \begin{equation} h_n = 2(n\pi f_c/f_s - 2)\exp\big(0.5-2(n \pi f_c/f_s - 2)^2\big) \end{equation} for $n = 0,1,\hdots,22$ with center frequency $f_c = 2$ GHz and sampling rate $f_s = 36$ GHz. This pulse shape constitutes a strictly short-duration sequence in the time domain, which is also nearly bandlimited in the frequency domain. Therefore, for these experiments, we constructed the columns of the pulse subspace matrix $\vec{A}$ using the first $L = 8$ DPS sequences of length $T = 23$. We ran each sampler for $10^4$ iterations and used the last 25\% of the samples for estimating the unknowns.
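For reference, the pulse above can be generated in a few lines; the function name and default arguments are ours. The continuous-time envelope $2u\,e^{0.5-2u^2}$ attains magnitude exactly 1 at $u = \pm 1/2$, which is what makes this the normalized first derivative of a Gaussian.

```python
import numpy as np

def gaussian_derivative_pulse(T=23, fc=2e9, fs=36e9):
    # h_n = 2*u_n*exp(0.5 - 2*u_n^2) with u_n = n*pi*fc/fs - 2, for n = 0,...,T-1
    n = np.arange(T)
    u = n * np.pi * fc / fs - 2.0
    return 2.0 * u * np.exp(0.5 - 2.0 * u**2)
```

On the discrete grid with these defaults, the sampled peak magnitude stays just below the continuous maximum of 1, and the endpoints are nearly zero, consistent with a strictly short-duration pulse.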
We declare a recovered sparse sequence $\vec{x}^{\prime}$ (or pulse sequence $\vec{h}^{\prime}$) successful if the Normalized Mean Squared Error (NMSE) is less than a given threshold $\tau$, i.e., \begin{equation}\label{success_condition} \dfrac{\|\vec{x} - \vec{x}^{\prime}\|^2}{\|\vec{x}\|^2} \leq \tau, \qquad \dfrac{\|\vec{h} - \vec{h}^{\prime}\|^2}{\|\vec{h}\|^2} \leq \tau. \end{equation} \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS1_h_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS2_h_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_h_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_Tuple_h_1.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS1_h_10.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{NIGS2_h_10.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_h_10.png} \end{subfigure} % \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{BGS_Tuple_h_10.png} \end{subfigure} \caption{Empirical successful recovery rates for the pulse sequence $\vec{h}$ at different SNR and sparsity levels for $\tau = 0.01$ and $\tau = 0.1$. \label{success_rate_snr_vs_sparsity_h}} \vspace{-5mm} \end{figure*} \begin{figure*}[t!] \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.9\textwidth]{success_rate_vs_SNR_x.png} \end{subfigure} % \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.9\textwidth]{success_rate_vs_SNR_h.png} \end{subfigure} \caption{Overall empirical successful recovery rates for the sparse sequence $\vec{x}$ (left) and the pulse sequence $\vec{h}$ (right) at different SNR levels. 
Solid and dashed lines correspond to the $\tau = 0.01$ and $\tau = 0.1$ cases, respectively.\label{overall_success_rates_vs_SNR}} \vspace{-5mm} \end{figure*} In Fig. \ref{success_rate_snr_vs_sparsity_x}, we illustrate the empirical successful recovery rates of the samplers for the sparse sequence $\vec{x}$ for two different values of $\tau$. We first focus on the transition boundaries, which identify the feasible regions in the sparsity/SNR plane for which successful recovery, as defined in (\ref{success_condition}), is possible. In order to better illustrate the differences, we draw the transition boundary of 3-Tuple BGS, associated with the corresponding $\tau$, as a reference in all plots. It can be observed that the feasible regions of the proposed samplers are slightly more restricted. This indicates that for a given sparsity level, the proposed samplers require a slightly higher SNR level for successful recovery. Nevertheless, the feasible regions are quite similar for all samplers. On the other hand, the rate of successful recovery within the feasible region is significantly higher for the proposed samplers than for classical BGS. This is due to the fact that classical BGS requires considerably more than $10^4$ iterations to converge to the true stationary distribution. In addition, we also observed that NIGS-2 achieves the highest overall success rate within its feasible region even though 3-Tuple BGS has a slightly larger feasible region. This is also justified by the average success rates over all 286 scenarios, given in Table \ref{overall_success_rates} for different values of $\tau$. Despite its slightly smaller feasible region, NIGS-2 achieves a success rate as high as that of 3-Tuple BGS. The results also show that NIGS-2 achieves a higher overall success rate than NIGS-1, which provides empirical evidence that the intermediate sampling steps of NIGS-2 improve the convergence rate.
Moreover, it is clear that both proposed samplers significantly outperform classical BGS. \par We present the corresponding success rates for the pulse shape $\vec{h}$ in Fig. \ref{success_rate_snr_vs_sparsity_h}. Comparing the transition plots in Figs. \ref{success_rate_snr_vs_sparsity_x} and \ref{success_rate_snr_vs_sparsity_h}, our first observation is that the feasible regions for the pulse sequence are significantly more extensive for all samplers. As can be seen from the $\tau = 0.01$ case, similar to the recovery of the sparse sequence, the feasible regions are more extensive for the baseline samplers. Another observation is that the success rates of classical BGS are significantly higher for the pulse sequence. This indicates that the main source of the inefficiency of classical BGS is different configurations of the sparse sequence rather than the pulse shape. Table \ref{overall_success_rates} shows that all samplers achieve a similar level of success for the pulse sequence except for $\tau = 0.01$, which can be explained by the smaller feasible regions of the proposed samplers. Overall, the success rates are consistently higher than for the recovery of sparse sequences, implying that it is easier to recover the pulse shape in most cases.\par Finally, we also compared the overall success rates of the samplers at different SNR levels in Fig. \ref{overall_success_rates_vs_SNR}. Considering the recovery of sparse sequences, shown in the left figure, all samplers perform quite similarly at lower SNRs, and the success rates increase as the SNR increases, except for classical BGS. Its success rate reaches a stable level at around 20 dB and then starts slightly decreasing after 24 dB. This seems counter-intuitive, but at high SNRs the peaks of the likelihood function become sharper, and it becomes overwhelmingly difficult to escape from a locally optimal configuration.
The proposed samplers and 3-Tuple BGS are not affected by this effect due to their improved ability to escape local optima. The figure illustrates the success rate curves for both $\tau = 0.01$ and $\tau = 0.1$. It can be observed that at high SNRs, the number of NIGS-2 simulations with a resulting NMSE below $\tau = 0.01$ exceeds the number of 3-Tuple BGS simulations with NMSE below either $\tau = 0.1$ or $\tau = 0.01$. This suggests that NIGS-2 is more successful at escaping local optima. However, 3-Tuple BGS outperforms all other samplers at lower SNRs. The right figure in Fig. \ref{overall_success_rates_vs_SNR} presents the same analysis for the recovery of the pulse sequence. As expected, at lower SNRs, the baseline samplers achieve higher success rates due to their more extensive feasible regions. On the other hand, they are outperformed by the proposed samplers at high SNRs because of the enhanced convergence characteristics of NIGS-1 and NIGS-2. We also note that a similar performance reduction effect can be observed at high SNRs. \section{Conclusion}\label{conclusion} In this paper, we studied the problem of sparse blind deconvolution under a Bayesian framework and presented efficient MCMC-based estimation methods for jointly recovering two unknown sequences from their noisy convolution. We derived two different hierarchical Gibbs samplers under a sparsity-enforcing NIG prior, which forms a continuous-valued alternative to the conventional BG model. While the first sampler follows a classical Gibbs sampler structure, the second one employs a PCG sampler scheme, which incorporates additional sampling steps to reduce the statistical dependence of the variables in an attempt to enhance the convergence rate. By moving the problem into a completely continuous-valued framework, we avoided the computational burdens arising from the discrete nature of the BG model.
The proposed samplers were evaluated empirically via extensive numerical simulations and compared with existing sampling schemes based on the BG model. The obtained results demonstrated the effectiveness of the proposed samplers in achieving successful recovery under various settings. Comparisons with the baseline samplers demonstrated a significant increase in the convergence rate, along with considerable computational gains. As a result, the proposed methods can be used in a variety of real-life applications involving blind deconvolution problems with sparsity and time/frequency domain constraints. \begin{comment} \begin{appendices} \section{Derivation of Sampling Distributions}\label{Appendix_A} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} In this appendix, we derive the sampling distributions given in Section \ref{proposed_classical_gibbs_sampler}. Throughout this section, we use the notation $\vec{\theta} \backslash \theta$ to denote the set of all variables except $\theta$. We use the following definitions for the Gaussian, Gamma, and Inverse-Gamma distributions: \begin{align} \mathcal{N}(x;\mu,\sigma^2) &= \dfrac{1}{\sqrt{2\pi\sigma^2}}\exp\bigg(-\dfrac{(x-\mu)^2}{2\sigma^2}\bigg)\\ \mathcal{G}(x;\alpha,\beta) &= \dfrac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha - 1}\exp\big(-\beta x\big)\\ \mathcal{IG}(x;\alpha,\beta) &= \dfrac{\beta^{\alpha}}{\Gamma(\alpha)}\bigg(\dfrac{1}{x}\bigg)^{\alpha + 1}\exp\bigg(-\dfrac{\beta}{x}\bigg) \end{align} We begin by presenting the derivations of the sampling distributions for the proposed classical Gibbs sampler, followed by the derivations for the proposed Partially Collapsed Gibbs sampler.
\subsection{Sampling Distributions for the Classical Gibbs Sampler}\label{Appendix_A1} \textbf{Step 1:} The conditional posterior distribution of $\alpha_x$ is given by \begin{equation}\label{posterior_alpha_x} p(\alpha_x|\vec{y},\vec{\theta} \backslash \alpha_x) = p(\alpha_x|\vec{\sigma}_x^2,\beta_x) \propto p(\vec{\sigma}_x^2|\alpha_x,\beta_x)p(\alpha_x). \end{equation} Inserting (\ref{prior_sigma_x}) and (\ref{prior_alpha_x_and_beta_x}) into (\ref{posterior_alpha_x}) yields \begin{align} p(\alpha_x|\vec{\sigma}_x^2,\beta_x) &\propto \prod_{n = 0}^{K-1}\dfrac{\beta_x^{\alpha_x}}{\Gamma(\alpha_x)}\bigg(\dfrac{1}{\sigma_{x_n}^2}\bigg)^{\alpha_x + 1}\exp\bigg(-\dfrac{\beta_x}{\sigma_{x_n}^2}\bigg) \dfrac{1}{\alpha_x} \notag \\ &\propto \dfrac{\beta_x^{K\alpha_x}}{\Gamma(\alpha_x)^K}\bigg(\prod_{n=0}^{K-1}\dfrac{1}{\sigma_{x_n}^2}\bigg)^{\alpha_x+1}p(\alpha_x), \end{align} which is the sampling distribution given in (\ref{sampling_alpha_x}). \par \textbf{Step 2:} Similar to Step 1, the conditional posterior for $\beta_x$ is given by \begin{equation}\label{posterior_beta_x} p(\beta_x|\vec{y},\vec{\theta} \backslash \beta_x) = p(\beta_x|\vec{\sigma}_x^2,\alpha_x) \propto p(\vec{\sigma}_x^2|\alpha_x,\beta_x)p(\beta_x). 
\end{equation} Inserting (\ref{prior_sigma_x}) and (\ref{prior_alpha_x_and_beta_x}) into (\ref{posterior_beta_x}) yields \begin{align} p(\beta_x|\vec{\sigma}_x^2,\alpha_x) &\propto \prod_{n = 0}^{K-1}\dfrac{\beta_x^{\alpha_x}}{\Gamma(\alpha_x)}\bigg(\dfrac{1}{\sigma_{x_n}^2}\bigg)^{\alpha_x + 1}\exp\bigg(-\dfrac{\beta_x}{\sigma_{x_n}^2}\bigg) \dfrac{1}{\beta_x} \notag \\ &\propto \beta_x^{K\alpha_x -1}\exp\bigg(-\beta_x \sum_{n=0}^{K-1}\dfrac{1}{\sigma_{x_n}^2}\bigg), \end{align} which is a Gamma distribution with parameters $\alpha_{\beta_x} = K\alpha_x$ and $\beta_{\beta_x} = \sum_{n=0}^{K-1}1/\sigma_{x_n}^2$ after proper normalization.\par \textbf{Step 3:} The conditional posterior of latent variable $\vec{\sigma}_x^2$ is given by \begin{equation}\label{posterior_sigma_x} p(\vec{\sigma}_x^2|\vec{y},\vec{\theta} \backslash \vec{\sigma}_x^2) = p(\vec{\sigma}_x^2|\vec{x},\alpha_x,\beta_x) \propto p(\vec{x}|\vec{\sigma}_x^2)p(\vec{\sigma}_x^2|\alpha_x,\beta_x). \end{equation} Here, inserting (\ref{conditional_prior_x_given_sigma_x}) and (\ref{prior_sigma_x}) into (\ref{posterior_sigma_x}), we get \begin{align} p(\vec{\sigma}_x^2|\vec{x},\alpha_x,\beta_x) &\propto \prod_{n = 0}^{K-1} \bigg(\dfrac{1}{\sigma_{x_n}^2}\bigg)^{\alpha_x + 3/2}\exp\bigg(-\dfrac{x_n^2/2+\beta_x}{\sigma_{x_n}^2}\bigg) \notag \\ &= \prod_{n = 0}^{K-1} \bigg(\dfrac{1}{\sigma_{x_n}^2}\bigg)^{\tilde{\alpha}_{x}+1}\exp\bigg(-\dfrac{\tilde{\beta}_{x_n}}{\sigma_{x_n}^2}\bigg), \end{align} where $\tilde{\alpha}_{x} = \alpha_x + 1/2$ and $\tilde{\beta}_{x_n} = x_n^2/2 + \beta_x$. Therefore, with proper normalization, the $\sigma_{x_n}^2$ have independent IG posteriors with common shape parameter $\tilde{\alpha}_{x}$ and individual scale parameters $\tilde{\beta}_{x_n}$ as in (\ref{sampling_sigma_x}). 
\par \textbf{Step 4:} The conditional posterior distribution for sparse sequence $\vec{x}$ takes the form \begin{equation}\label{posterior_x} p(\vec{x}|\vec{y},\vec{\theta} \backslash \vec{x}) = p(\vec{x}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2) \propto p(\vec{y}|\vec{x},\vec{\gamma},\sigma_v^2)p(\vec{x}|\vec{\sigma}_x^2). \end{equation} Inserting (\ref{likelihood}) and (\ref{conditional_prior_x_given_sigma_x}) into (\ref{posterior_x}) yields \begin{align}\label{posterior_x_derivation} &p(\vec{x}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2) \notag \\ &\propto \exp\bigg(-\dfrac{\|\vec{y} - \vec{H}\vec{x}\|^2}{2\sigma_v^2}\bigg)\exp\bigg( -\dfrac{1}{2}\vec{x}^T\vec{\Sigma}_x^{-1}\vec{x}\bigg) \notag \\ &= \exp\bigg(-\dfrac{\vec{y}^T\vec{y} - 2\vec{y}^T\vec{H}\vec{x} + \vec{x}^T\vec{H}^T\vec{H}\vec{x}}{2\sigma_v^2} -\dfrac{1}{2}\vec{x}^T\vec{\Sigma}_x^{-1}\vec{x}\bigg) \notag \\ &\propto \exp\bigg(\dfrac{1}{\sigma_v^2}\vec{y}^T\vec{H}\vec{x} - \dfrac{1}{2}\vec{x}^T\bigg(\dfrac{1}{\sigma_v^2}\vec{H}^T\vec{H} +\vec{\Sigma}_x^{-1}\bigg)\vec{x}\bigg) \notag \\ &\propto \exp\bigg(-\dfrac{1}{2}(\vec{x}-\vec{\tilde{\mu}}_x)^T\vec{\tilde{\Sigma}}_x^{-1}(\vec{x}-\vec{\tilde{\mu}}_x)\bigg), \end{align} where the posterior mean $\vec{\tilde{\mu}}_x$ and covariance $\vec{\tilde{\Sigma}}_x$ are defined as \begin{equation}\label{x_mean_cov_posterior} \vec{\tilde{\mu}}_x = \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_x\vec{H}^T\vec{y},\qquad \vec{\tilde{\Sigma}}_x = \bigg(\dfrac{1}{\sigma_v^2}\vec{H}^T\vec{H} + \vec{\Sigma}_x^{-1}\bigg)^{-1}, \end{equation} which yields a multivariate Gaussian distribution as in (\ref{sampling_x}). 
\par \textbf{Step 5:} The conditional posterior for variance $\sigma_{\gamma}^2$ of the prior distribution of pulse coefficients $\vec{\gamma}$ is given by \begin{align}\label{posterior_sigma_gamma} p(\sigma_{\gamma}^2|\vec{y},\vec{\theta}\backslash \sigma_{\gamma}^2) = p(\sigma_{\gamma}^2|\vec{\gamma}) \propto p(\vec{\gamma}|\sigma_{\gamma}^2)p(\sigma_{\gamma}^2). \end{align} Inserting (\ref{prior_gamma}) and (\ref{prior_sigma_gamma}) into (\ref{posterior_sigma_gamma}) yields \begin{align} p(\sigma_{\gamma}^2|\vec{\gamma}) &\propto \bigg(\dfrac{1}{\sigma_{\gamma}^2}\bigg)^{L/2}\exp\bigg(-\dfrac{\|\vec{\gamma}\|^2}{2\sigma_{\gamma}^2}\bigg)\dfrac{1}{\sigma_{\gamma}^2} \end{align} which is, after proper normalization, an Inverse-Gamma distribution with parameters $\tilde{\alpha}_{\sigma_{\gamma}^2} = L/2$ and $\tilde{\beta}_{\sigma_{\gamma}^2} = \|\vec{\gamma}\|^2/2$ as in (\ref{sampling_sigma_gamma}). \par \textbf{Step 6:} The full conditional posterior distribution of pulse coefficients $\vec{\gamma}$ is given by \begin{equation}\label{posterior_gamma} p(\vec{\gamma}|\vec{y},\vec{\theta} \backslash \vec{\gamma}) = p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2,\sigma_{\gamma}^2) \propto p(\vec{y}|\vec{x},\vec{\gamma},\sigma_v^2)p(\vec{\gamma}|\sigma_{\gamma}^2). 
\end{equation} Similar to Step 4, inserting (\ref{likelihood}) and (\ref{prior_gamma}) into (\ref{posterior_gamma}) yields \begin{align}\label{posterior_gamma_derivation} &p(\vec{\gamma}|\vec{y},\vec{x},\sigma_v^2,\sigma_{\gamma}^2) \notag \\ &\propto \exp\bigg(-\dfrac{\|\vec{y} - \vec{X}\vec{A}\vec{\gamma}\|^2}{2\sigma_v^2}\bigg)\exp\bigg( -\dfrac{1}{2}\vec{\gamma}^T\vec{\Sigma}_{\gamma}^{-1}\vec{\gamma}\bigg) \notag \\ &= \exp\bigg(-\dfrac{\vec{y}^T\vec{y} - 2\vec{y}^T\vec{B}\vec{\gamma} + \vec{\gamma}^T\vec{B}^T\vec{B}\vec{\gamma}}{2\sigma_v^2} -\dfrac{1}{2}\vec{\gamma}^T\vec{\Sigma}_{\gamma}^{-1}\vec{\gamma}\bigg) \notag \\ &\propto \exp\bigg(\dfrac{1}{\sigma_v^2}\vec{y}^T\vec{B}\vec{\gamma} - \dfrac{1}{2}\vec{\gamma}^T\bigg(\dfrac{1}{\sigma_v^2}\vec{B}^T\vec{B} +\vec{\Sigma}_{\gamma}^{-1}\bigg)\vec{\gamma}\bigg) \notag \\ &\propto \exp\bigg(-\dfrac{1}{2}(\vec{\gamma}-\vec{\tilde{\mu}}_{\gamma})^T\vec{\tilde{\Sigma}}_{\gamma}^{-1}(\vec{\gamma}-\vec{\tilde{\mu}}_{\gamma})\bigg), \end{align} where $\vec{B} = \vec{X}\vec{A}$ and the posterior mean $\vec{\tilde{\mu}}_{\gamma}$ and covariance $\vec{\tilde{\Sigma}}_{\gamma}$ are defined as \begin{equation} \vec{\tilde{\mu}}_{\gamma} = \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_{\gamma}\vec{B}^T\vec{y},\qquad \vec{\tilde{\Sigma}}_{\gamma} = \bigg(\dfrac{1}{\sigma_v^2}\vec{B}^T\vec{B} + \vec{\Sigma}_{\gamma}^{-1}\bigg)^{-1}, \end{equation} which yields the multivariate Gaussian distribution given in (\ref{sampling_gamma}) after proper normalization. \par \textbf{Step 7:} The conditional posterior distribution of noise variance $\sigma_v^2$ is given by \begin{equation}\label{posterior_sigma_v} p(\sigma_v^2|\vec{y},\vec{\theta} \backslash \sigma_v^2) = p(\sigma_v^2|\vec{y},\vec{x},\vec{\gamma}) \propto p(\vec{y}|\vec{x},\vec{\gamma},\sigma_v^2)p(\sigma_v^2). 
\end{equation} Similar to Step 5, inserting (\ref{likelihood}) and (\ref{prior_sigma_v}) in (\ref{posterior_sigma_v}) yields \begin{equation} p(\sigma_v^2|\vec{y},\vec{x},\vec{\gamma}) \propto \bigg(\dfrac{1}{\sigma_v^2}\bigg)^{N/2}\exp\bigg(-\dfrac{\|\vec{y}-\vec{X}\vec{A}\vec{\gamma}\|^2}{2\sigma_v^2}\bigg)p(\sigma_v^2) \end{equation} which is, after proper normalization, an Inverse-Gamma distribution with parameters $\tilde{\alpha}_v = \alpha_v + N/2$ and $\tilde{\beta}_v = \|\vec{y}-\vec{X}\vec{A}\vec{\gamma}\|^2/2 + \beta_v$ as in (\ref{sampling_sigma_v}). \subsection{Sampling Distributions for the Partially Collapsed Gibbs Sampler}\label{Appendix_A2} The sampling distributions for Steps 1, 2, 3, 7, and 9 are the same as those of the classical Gibbs sampler given in Table \ref{table1}. Therefore, we only derive the sampling distributions for Steps 4, 5, 6, and 8 in this section.\par \textbf{Step 4:} The sampling distribution for $\vec{x}_{\sim\ell_n}$ can be obtained by marginalizing $\vec{x}_{\ell_n}$ from $p(\vec{x}|\vec{y},\vec{\theta} \backslash \vec{x})$. Marginalization of a multivariate Gaussian distribution with known mean $\vec{\mu}$ and covariance $\vec{\Sigma}$ can be directly achieved by dropping the corresponding elements of $\vec{\mu}$ and $\vec{\Sigma}$. However, this requires explicit calculation of the full posterior mean and covariance given in (\ref{x_mean_cov_posterior}). Instead, we derive the exact expression for the marginal distribution by integrating out $\vec{x}_{\ell_n}$ from $p(\vec{x}|\vec{y},\vec{\theta} \backslash \vec{x})$. Let us first define $\vec{\tilde{y}}_{\sim\ell_n} = \vec{y} - \vec{H}_{\sim\ell_n}\vec{x}_{\sim\ell_n}$, where we set $\vec{H}_{\sim\ell_n}$ and $\vec{H}_{\ell_n}$ as the columns of $\vec{H}$ indexed by ${\sim}\ell_n$ and $\ell_n$, respectively. 
Let us also define the following two covariance matrices, $\vec{\Sigma}_{\sim\ell_n} = \text{diag}(\vec{\sigma}_{x_{\sim\ell_n}}^2)$ and $\vec{\Sigma}_{\ell_n} = \text{diag}(\vec{\sigma}_{x_{\ell_n}}^2)$. The posterior distribution (\ref{posterior_x}) can then be written as \begin{equation}\label{x_posterior_derivation} \begin{split} &p(\vec{x}|\vec{y},\vec{\theta} \backslash \vec{x}) \\ &\propto \exp\bigg(-\dfrac{\|\vec{\tilde{y}}_{\sim\ell_n} - \vec{H}_{\ell_n}\vec{x}_{\ell_n}\|^2}{2\sigma_v^2}\bigg)\exp\bigg( -\dfrac{1}{2}\vec{x}_{\ell_n}^T\vec{\Sigma}_{\ell_n}^{-1}\vec{x}_{\ell_n}\bigg) \\ &\qquad \qquad \times \exp\bigg(-\dfrac{1}{2}\vec{x}_{\sim\ell_n}^T\vec{\Sigma}_{\sim\ell_n}^{-1}\vec{x}_{\sim\ell_n}\bigg)\\ &= \exp\bigg(\dfrac{\vec{\tilde{y}}_{\sim\ell_n}^T\vec{H}_{\ell_n}\vec{x}_{\ell_n}}{\sigma_v^2} - \dfrac{1}{2}\vec{x}_{\ell_n}^T\bigg(\dfrac{1}{\sigma_v^2}\vec{H}_{\ell_n}^T\vec{H}_{\ell_n} + \vec{\Sigma}_{\ell_n}^{-1}\bigg)\vec{x}_{\ell_n}\bigg) \\ &\qquad \qquad \times \exp\bigg(-\dfrac{1}{2\sigma_v^2}\vec{\tilde{y}}_{\sim\ell_n}^T\vec{\tilde{y}}_{\sim\ell_n} -\dfrac{1}{2}\vec{x}_{\sim\ell_n}^T\vec{\Sigma}_{\sim\ell_n}^{-1}\vec{x}_{\sim\ell_n}\bigg) \\ &\propto \exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\ell_n}^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}\vec{\tilde{\mu}}_{\ell_n}-\dfrac{1}{2\sigma_v^2}\vec{\tilde{y}}_{\sim\ell_n}^T\vec{\tilde{y}}_{\sim\ell_n}-\dfrac{1}{2}\vec{x}_{\sim\ell_n}^T\vec{\Sigma}_{\sim\ell_n}^{-1}\vec{x}_{\sim\ell_n}\bigg) \\ &\qquad \qquad \times \exp\bigg(-\dfrac{1}{2}(\vec{x}_{\ell_n} - \vec{\tilde{\mu}}_{\ell_n})^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}(\vec{x}_{\ell_n} - \vec{\tilde{\mu}}_{\ell_n})\bigg), \end{split} \end{equation} where $\vec{\tilde{\mu}}_{\ell_n}$ and $\vec{\tilde{\Sigma}}_{\ell_n}$ are defined as \begin{subequations}\label{x_block_mu_cov_posterior} \begin{align} \vec{\tilde{\mu}}_{\ell_n} &= \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_{\ell_n}\vec{H}_{\ell_n}^T\vec{\tilde{y}}_{\sim\ell_n},\\ 
\vec{\tilde{\Sigma}}_{\ell_n} &= \bigg(\dfrac{1}{\sigma_v^2}\vec{H}_{\ell_n}^T\vec{H}_{\ell_n} + \vec{\Sigma}_{\ell_n}^{-1}\bigg)^{-1}. \end{align} \end{subequations} Integrating out $\vec{x}_{\ell_n}$ yields \begin{equation} \begin{split} &p(\vec{x}_{\sim\ell_n}|\vec{y},\vec{\theta} \backslash \vec{x}) \\ &= p(\vec{x}_{\sim\ell_n}|\vec{y},\vec{\sigma}_{x}^2,\vec{\gamma},\sigma_v^2) = \int p(\vec{x}|\vec{y},\vec{\theta} \backslash \vec{x}) d\vec{x}_{\ell_n}\\ &\propto \exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\ell_n}^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}\vec{\tilde{\mu}}_{\ell_n}-\dfrac{1}{2\sigma_v^2}\vec{\tilde{y}}_{\sim\ell_n}^T\vec{\tilde{y}}_{\sim\ell_n}-\dfrac{1}{2}\vec{x}_{\sim\ell_n}^T\vec{\Sigma}_{\sim\ell_n}^{-1}\vec{x}_{\sim\ell_n}\bigg) \\ &\propto \exp\bigg(-\dfrac{1}{2\sigma_v^2}\vec{\tilde{y}}_{\sim\ell_n}^T\vec{D}_{\ell_n}\vec{\tilde{y}}_{\sim\ell_n}-\dfrac{1}{2}\vec{x}_{\sim\ell_n}^T\vec{\Sigma}_{\sim\ell_n}^{-1}\vec{x}_{\sim\ell_n}\bigg), \end{split} \end{equation} where we set $\vec{D}_{\ell_n} = \vec{I} - \frac{1}{\sigma_v^2}\vec{H}_{\ell_n}\vec{\tilde{\Sigma}}_{\ell_n}\vec{H}_{\ell_n}^T$. 
Expanding $\vec{\tilde{y}}_{\sim\ell_n}$, we obtain \begin{equation} \begin{split} p(&\vec{x}_{\sim\ell_n}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2)\\ &\begin{multlined} \propto \exp\bigg(\dfrac{\vec{y}^T\vec{D}_{\ell_n}\vec{H}_{\sim\ell_n}\vec{x}_{\sim\ell_n}}{\sigma_v^2} \\ -\dfrac{1}{2}\vec{x}_{\sim\ell_n}^T\bigg(\dfrac{1}{\sigma_v^2}\vec{H}_{\sim\ell_n}^T\vec{D}_{\ell_n}\vec{H}_{\sim\ell_n} + \vec{\Sigma}_{\sim\ell_n}^{-1}\bigg)\vec{x}_{\sim\ell_n}\bigg) \end{multlined}\\ &\propto \exp\bigg(-\dfrac{1}{2}(\vec{x}_{\sim\ell_n} - \vec{\tilde{\mu}}_{\sim\ell_n})^T\vec{\tilde{\Sigma}}_{\sim\ell_n}^{-1}(\vec{x}_{\sim\ell_n} - \vec{\tilde{\mu}}_{\sim\ell_n})\bigg), \end{split} \end{equation} where the posterior mean $\vec{\tilde{\mu}}_{\sim\ell_n}$ and covariance $\vec{\tilde{\Sigma}}_{\sim\ell_n}$ are defined as \begin{subequations} \begin{align} \vec{\tilde{\mu}}_{\sim\ell_n} &= \dfrac{1}{\sigma_v^2}\vec{\tilde{\Sigma}}_{\sim\ell_n}\vec{H}_{\sim\ell_n}^T\vec{D}_{\ell_n}^T\vec{y},\\ \vec{\tilde{\Sigma}}_{\sim\ell_n} &= \bigg(\dfrac{1}{\sigma_v^2}\vec{H}_{\sim\ell_n}^T\vec{D}_{\ell_n}\vec{H}_{\sim\ell_n} + \vec{\Sigma}_{\sim\ell_n}^{-1}\bigg)^{-1}. \end{align} \end{subequations} After proper normalization, the resulting form of the sampling distribution becomes \begin{equation} p(\vec{x}_{\sim\ell_n}|\vec{y},\vec{\sigma}_x^2,\vec{\gamma},\sigma_v^2) = \mathcal{N}(\vec{x}_{\sim\ell_n};\vec{\tilde{\mu}}_{\sim\ell_n},\vec{\tilde{\Sigma}}_{\sim\ell_n}), \end{equation} which is the sampling distribution given in (\ref{sampling_x_block}).\par \textbf{Step 5:} Similar to Step 4, we can marginalize $\vec{x}_{\ell_n}$ from $p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{\theta} \backslash \{\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}\})$, to obtain the sampling distribution for $\vec{\sigma}_{x_{\ell_n}}^2$. 
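The block expressions derived in Step 4 can be verified numerically: the covariance $\vec{\tilde{\Sigma}}_{\sim\ell_n}$ obtained through $\vec{D}_{\ell_n}$ must coincide with the corresponding sub-block of the full conditional covariance $\vec{\tilde{\Sigma}}_x$, since marginalizing a Gaussian simply drops entries of the mean and covariance. The sketch below checks this on random data (all sizes and values are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sizes (hypothetical): K variables, a block ell of size 2 to marginalize.
N, K = 12, 8
sigma_v2 = 0.5
H = rng.normal(size=(N, K))
sigma_x2 = rng.uniform(0.5, 2.0, size=K)      # prior variances, Sigma_x = diag(sigma_x2)

ell = np.array([0, 1])                        # indices ell_n
rest = np.setdiff1d(np.arange(K), ell)        # indices ~ell_n

# Full conditional covariance of x.
prec_full = H.T @ H / sigma_v2 + np.diag(1.0 / sigma_x2)
cov_full = np.linalg.inv(prec_full)

# Block quantities: D_ell = I - H_ell Sigma_tilde_ell H_ell^T / sigma_v^2.
H_ell, H_rest = H[:, ell], H[:, rest]
cov_ell = np.linalg.inv(H_ell.T @ H_ell / sigma_v2 + np.diag(1.0 / sigma_x2[ell]))
D_ell = np.eye(N) - H_ell @ cov_ell @ H_ell.T / sigma_v2

# Marginal covariance of x_{~ell} via the derived expression ...
cov_rest = np.linalg.inv(H_rest.T @ D_ell @ H_rest / sigma_v2
                         + np.diag(1.0 / sigma_x2[rest]))

# ... must agree with simply dropping rows/columns of the full covariance.
match = np.allclose(cov_rest, cov_full[np.ix_(rest, rest)])
```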
Note that we have \begin{multline}\label{x_sigma_x_joint_posterior} p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{\theta} \backslash \{\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}\}) \\ \propto p(\vec{y}|\vec{x},\vec{\gamma},\sigma_v^2)p(\vec{x}_{\ell_n}|\vec{\sigma}_{x_{\ell_n}}^2)p(\vec{\sigma}_{x_{\ell_n}}^2|\alpha_x,\beta_x). \end{multline} Following the same steps as in (\ref{x_posterior_derivation}), we can write \begin{align}\label{x_sigma_x_joint_posterior2} &p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{\theta} \backslash \{\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}\}) \notag \\ &\propto \exp\bigg(-\dfrac{1}{2}(\vec{x}_{\ell_n} - \vec{\tilde{\mu}}_{\ell_n})^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}(\vec{x}_{\ell_n} - \vec{\tilde{\mu}}_{\ell_n})\bigg) \notag \\ &\qquad \times \exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\ell_n}^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}\vec{\tilde{\mu}}_{\ell_n}\bigg)\dfrac{1}{|\vec{\Sigma}_{\ell_n}|^{1/2}}p(\vec{\sigma}_{x_{\ell_n}}^2|\alpha_x,\beta_x), \end{align} where the posterior mean $\vec{\tilde{\mu}}_{\ell_n}$ and the covariance $\vec{\tilde{\Sigma}}_{\ell_n}$ have the same definitions given in (\ref{x_block_mu_cov_posterior}). 
Integrating out $\vec{x}_{\ell_n}$ from (\ref{x_sigma_x_joint_posterior2}) yields \begin{align}\label{sigma_x_block_posterior} p(\vec{\sigma}_{x_{\ell_n}}^2&|\vec{y},\vec{\theta} \backslash \{\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}\}) \notag \\ &= p(\vec{\sigma}_{x_{\ell_n}}^2|\vec{y},\vec{x}_{\sim\ell_n},\vec{\sigma}_{x_{\sim\ell_n}}^2,\vec{\gamma},\sigma_v^2) \notag \\ &=\int p(\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}|\vec{y},\vec{\theta} \backslash \{\vec{\sigma}_{x_{\ell_n}}^2,\vec{x}_{\ell_n}\})d\vec{x}_{\ell_n} \notag \\ &\propto \dfrac{|\vec{\tilde{\Sigma}}_{\ell_n}|^{1/2}}{|\vec{\Sigma}_{\ell_n}|^{1/2}}\exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\ell_n}^T\vec{\tilde{\Sigma}}_{\ell_n}^{-1}\vec{\tilde{\mu}}_{\ell_n}\bigg)p(\vec{\sigma}_{x_{\ell_n}}^2), \end{align} which is the sampling distribution given in (\ref{sampling_sigma_x_block}).\par \textbf{Step 6:} Following the same steps as in (\ref{posterior_x_derivation}), the sampling distribution for $\vec{x}_{\ell_n}$ is given by \begin{align} p(\vec{x}_{\ell_n}|\vec{y},\vec{\theta} \backslash \vec{x}_{\ell_n}) &= p(\vec{x}_{\ell_n}|\vec{y},\vec{x}_{\sim\ell_n},\vec{\sigma}_{x_{\ell_n}}^2,\vec{\gamma},\sigma_v^2) \notag \\ &= \mathcal{N}(\vec{x}_{\ell_n};\vec{\tilde{\mu}}_{\ell_n},\vec{\tilde{\Sigma}}_{\ell_n}), \end{align} where $\vec{\tilde{\mu}}_{\ell_n}$ and $\vec{\tilde{\Sigma}}_{\ell_n}$ are given in (\ref{x_block_mu_cov_posterior}).\par \textbf{Step 8:} The sampling distribution for $\sigma_v^2$ is obtained by marginalizing $\vec{\gamma}$ from $p(\sigma_v^2,\vec{\gamma}|\vec{y},\vec{\theta} \backslash \{\sigma_v^2,\vec{\gamma}\})$. 
Note that we can write \begin{align} p(\sigma_v^2,&\vec{\gamma}|\vec{y},\vec{\theta} \backslash \{\sigma_v^2,\vec{\gamma}\}) \notag \\ &= p(\sigma_v^2,\vec{\gamma}|\vec{y},\vec{x},\sigma_{\gamma}^2) \propto p(\vec{y}|\vec{x},\vec{\gamma},\sigma_v^2)p(\vec{\gamma}|\sigma_{\gamma}^2)p(\sigma_v^2) \notag \\ &\propto \bigg(\dfrac{1}{\sigma_v^2}\bigg)^{N/2}\exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\gamma}^T\vec{\tilde{\Sigma}}_{\gamma}^{-1}\vec{\tilde{\mu}}_{\gamma}-\dfrac{1}{2\sigma_v^2}\vec{y}^T\vec{y}\bigg)p(\sigma_v^2) \notag \\ &\qquad \times \exp\bigg(-\dfrac{1}{2}(\vec{\gamma}-\vec{\tilde{\mu}}_{\gamma})^T\vec{\tilde{\Sigma}}_{\gamma}^{-1}(\vec{\gamma}-\vec{\tilde{\mu}}_{\gamma})\bigg) \end{align} by following the same steps as in (\ref{posterior_gamma_derivation}). Integrating out $\vec{\gamma}$ yields \begin{align} &p(\sigma_v^2|\vec{y},\vec{x},\sigma_{\gamma}^2) \notag \\ &= \int p(\sigma_v^2,\vec{\gamma}|\vec{y},\vec{x},\sigma_{\gamma}^2)d\vec{\gamma} \notag \\ &\propto \bigg(\dfrac{1}{\sigma_v^2}\bigg)^{N/2}|\vec{\tilde{\Sigma}}_{\gamma}|^{1/2}\exp\bigg(\dfrac{1}{2}\vec{\tilde{\mu}}_{\gamma}^T\vec{\tilde{\Sigma}}_{\gamma}^{-1}\vec{\tilde{\mu}}_{\gamma} - \dfrac{1}{2\sigma_v^2} \vec{y}^T\vec{y}\bigg)p(\sigma_v^2), \end{align} which is the sampling distribution given in (\ref{sampling_sigma_v_2}). \end{appendices} \end{comment} \bibliographystyle{IEEEtran}
\section{Introduction} Over the past years, Popa's deformation/rigidity theory has led to a broad range of rigidity theorems for probability measure preserving (pmp) Bernoulli actions $\Gamma \actson (X,\mu) = (X_0,\mu_0)^\Gamma$, see e.g.\ \cite{Pop03,Pop04,Pop06,Ioa10,IPV10,PV21}. This includes numerous W$^*$-superrigidity results showing that both the group $\Gamma$ and its action $\Gamma \actson (X,\mu)$ can be entirely retrieved from the ambient II$_1$ factor $L^\infty(X) \rtimes \Gamma$. More recently, there has been a growing interest in \emph{nonsingular} Bernoulli actions $\Gamma \actson (X,\mu) = \prod_{g \in \Gamma} (X_0,\mu_g)$, where the base measures $\mu_g$ vary. For $\Gamma = \Z$, this provides, under the appropriate assumptions, a classical family of nonsingular ergodic transformations that have been widely studied, see e.g.\ \cite{Ham81,Kos09,BKV19}. For nonamenable groups $\Gamma$, a first systematic study of nonsingular Bernoulli actions was made in \cite{VW17}. In view of the wealth of rigidity theorems for pmp Bernoulli actions, this raises the natural problem of proving rigidity and classification theorems for the type III factors $L^\infty(X,\mu) \rtimes \Gamma$ associated with nonsingular Bernoulli actions. There is a conceptual reason why obtaining such rigidity theorems is a hard problem. It was proven in \cite[Theorem 3.1]{VW17} that if a nonamenable group $\Gamma$ admits a nonsingular Bernoulli action of type III, then $\Gamma$ must have a nonzero first $L^2$-Betti number. In the pmp setting, all superrigidity theorems for Bernoulli actions are restricted to nonamenable groups with zero first $L^2$-Betti number! It has even been conjectured that a pmp Bernoulli action satisfies cocycle superrigidity (w.r.t.\ the appropriate target groups) if and only if $\Gamma$ is nonamenable with zero first $L^2$-Betti number. 
Therefore, we only set out to prove strong rigidity theorems, providing partial classification results for natural families of type III Bernoulli crossed products $L^\infty(X,\mu) \rtimes \Gamma$. These are the first classification results for type III Bernoulli crossed products going beyond Connes $\tau$-invariant to distinguish between such type III factors. We specifically prove these results for the wide family of nonsingular Bernoulli actions of the free groups $\F_n$ that were introduced in \cite[Section 7]{VW17} and that we recall below. More generally, we consider such Bernoulli actions for arbitrary free product groups $\Gamma = \Z \ast \Lambda$ with $\Lambda$ being any nonamenable group. Recall from \cite[Section 7]{VW17} that, given any standard Borel space $Y$ and equivalent probability measures $\nu \sim \eta$ on $Y$, and given any countably infinite group $\Lambda$, we can consider the nonsingular Bernoulli action of the free product $\Gamma = \Z \ast \Lambda$ given by \begin{equation}\label{eq.my-bernoulli} \begin{split} \Gamma \actson (X,\mu) = \prod_{h \in \Gamma} (Y,\mu_h) &\;\;\text{where}\;\; \mu_h = \begin{cases} \nu &\;\text{if the last letter of $h$ belongs to $\N \subset \Z$,}\\ \eta &\;\text{otherwise,}\end{cases}\\ &\;\;\text{and}\;\; (g \cdot x)_h = x_{g^{-1} h} \; . \end{split} \end{equation} We always assume that $\nu$ and $\eta$ are not concentrated on a single atom, because otherwise $(X,\mu)$ consists of a single point. By \cite[Proposition 7.1]{VW17}, the action $\Gamma \actson (X,\mu)$ is essentially free and ergodic, of type III whenever $\nu \neq \eta$. By \cite[Section 7]{VW17}, this family of Bernoulli actions is rich: the crossed products can be of any possible type III$_\lambda$, $\lambda \in (0,1]$, and they can have basically any possible Connes $\tau$-invariant (in the sense of \cite{Con74}). 
The main goal of this paper is to prove classification results for the crossed product factors $M = L^\infty(X) \rtimes \Gamma$ given by \eqref{eq.my-bernoulli}. We associate a measure class on the real line to the factor $M$ and prove that it is an isomorphism invariant for this family of type III factors. To formulate this first main result, we introduce some notation. For every measure class $\mu$ on $\R$, we denote by $\mutil$ the measure class defined by $\mutil(\cU) = 0$ iff $\mu(-\cU) = 0$. We say that a measure class $\mu$ on $\R$ is \emph{stable} if $\delta_0 \prec \mu$, $\mutil \sim \mu$ and $\mu \ast \mu \sim \mu$. For every measure class $\mu$ on $\R$, there is a smallest stable measure class $\gamma$ such that $\mu \prec \gamma$. We denote this as $\gamma = \stc(\mu)$. This measure class $\gamma$ can be defined as the join of the measure classes $\mu^{\ast n} \ast \mutil^{\ast m}$, $n,m \geq 0$, where we use the convention that the $0$'th convolution power is $\delta_0$. Given equivalent probability measures $\nu \sim \eta$ on a standard Borel space $Y$, we consider the stable measure class \begin{equation}\label{eq.measure-class-gamma} \gamma = \stc\bigl((\log d\nu / d\eta)_*(\nu)\bigr) \; . \end{equation} \begin{letterthm}\label{thm.main} For $i = 1,2$, let $\Lambda_i$ be a nonamenable group and let $\nu_i \sim \eta_i$ be equivalent probability measures on standard Borel spaces $Y_i$. Consider the nonsingular Bernoulli actions of $\Gamma_i = \Z \ast \Lambda_i$ on $(X_i,\mu_i)$ given by \eqref{eq.my-bernoulli}. Denote by $M_i$ their crossed product von Neumann algebras and let $\gamma_i = \stc\bigl((\log d\nu_i / d\eta_i)_*(\nu_i)\bigr)$ be the stable measure class defined by \eqref{eq.measure-class-gamma}. If $M_1 \cong M_2$, then $\gamma_1 \sim \gamma_2$. 
\end{letterthm} We also analyze which conclusions can be drawn if $M_1$ merely embeds with expectation into $M_2$, meaning that there exists a faithful normal $*$-homomorphism $\pi : M_1 \to M_2$ and a faithful normal conditional expectation of $M_2$ onto $\pi(M_1)$. The following then provides large families of Bernoulli crossed products for which such embeddings with expectation do not exist. While a systematic study of embeddability between Bernoulli crossed products has been made in \cite{PV21} in the probability measure preserving type II$_1$ setting, our Theorem \ref{thm.main2} is the first such systematic nonembeddability result in the type III case. To formulate this result, we provide the following canonical class of examples of \eqref{eq.my-bernoulli}. Define the set $\cP$ of Borel probability measures on $\R$ by \begin{equation}\label{eq.setP} \cP = \bigl\{ \nu \bigm| \nu \;\;\text{is a Borel probability measure on $\R$ with}\;\; \int_\R \exp(-x) \, d\nu(x) < +\infty \bigr\} \; . \end{equation} Given $\nu \in \cP$, there is a unique probability measure $\eta$ on $\R$ given by normalizing $\exp(-x) \, d\nu(x)$. By construction, the measure $(\log d\nu / d\eta)_*(\nu)$ is a translate of $\nu$. Then, \eqref{eq.my-bernoulli} provides a nonsingular Bernoulli action for any free product $\Z \ast \Lambda$, with base space $Y = \R$. Recall that a Borel set $K \subset \R$ is called independent if every set of $n$ distinct elements of $K$ generates a free abelian subgroup of $\R$ of rank $n$. \begin{letterthm}\label{thm.main2} Let $K \subset \R$ be an independent Borel set. For $i = 1,2$, let $\Lambda_i$ be nonamenable groups, put $\Gamma_i = \Z \ast \Lambda_i$ and let $\nu_i \in \cP$ be nonatomic measures supported on $K$. Consider the associated nonsingular Bernoulli actions with crossed product von Neumann algebra $M_i$. If $M_1$ embeds with expectation into $M_2$, then $\nu_1 \prec \nu_2$. In particular, if $M_1 \cong M_2$, then $\nu_1 \sim \nu_2$. 
\end{letterthm} So, Theorem \ref{thm.main2} provides large classes of nonsingular Bernoulli crossed products that cannot be embedded with expectation one into the other. Moreover, the conclusions of Theorem \ref{thm.main2} hold for further classes of probability measures $\nu_i \in \cP$, see Corollary \ref{cor.Bernoulli-independent-set} below. In Example \ref{exam.full-with-standard-tau}, we use this result to provide mutually nonembeddable type~III Bernoulli crossed products that cannot be distinguished by invariants from modular theory, like Connes $\tau$-invariant. We next focus on solidity of nonsingular Bernoulli crossed products. Recall from \cite{Oza03} that a II$_1$ factor $M$ is called \emph{solid} if $A' \cap M$ is amenable for every diffuse von Neumann subalgebra $A \subset M$. In \cite{Oza03}, it is proven that the group von Neumann algebra $L(\Gamma)$ is solid for every word hyperbolic group $\Gamma$ and, more conceptually, for every \emph{biexact} countable group $\Gamma$ (see \cite[Chapter 15]{BO08}). An arbitrary diffuse von Neumann algebra $M$ is called solid if $A' \cap M$ is amenable for every diffuse von Neumann subalgebra $A \subset M$ that is the range of a faithful normal conditional expectation (see \cite{VV05}). One of the striking features of solid factors is that they are prime: they do not admit nontrivial tensor product decompositions. Also all their nonamenable subfactors with expectation are prime. Solidity has a counterpart in ergodic theory, as discovered in \cite{CI08}: an essentially free nonsingular action of a countable group $\Gamma$ on a standard nonatomic probability space $(X,\mu)$ is said to be a \emph{solid action} if for every subequivalence relation $\cS$ of the orbit equivalence relation $\cR(\Gamma \actson X)$, there exists a partition of $(X,\mu)$ into $\cS$-invariant Borel sets $(X_n)_{n \geq 0}$ such that $\cS|_{X_0}$ is amenable and $\cS|_{X_n}$ is ergodic for every $n \geq 1$. 
Note that $\Gamma \actson (X,\mu)$ is a solid action if and only if for every diffuse von Neumann subalgebra $A \subset L^\infty(X)$, the relative commutant $A' \cap L^\infty(X) \rtimes \Gamma$ is amenable. In \cite{CI08}, it is proven that all pmp Bernoulli actions $\Gamma \actson (X_0,\mu_0)^\Gamma$ are solid actions. It remains an open question whether all nonsingular Bernoulli actions $\Gamma \actson \prod_{h \in \Gamma} (X_0,\mu_h)$ are solid actions. In \cite{HIK20}, it is proven that this is indeed the case when $X_0 = \{0,1\}$ consists of two points and when the probability measures $(\mu_h)_{h \in \Gamma}$ have a stronger almost invariance property: for all $g \in \Gamma$, we have that $\mu_{gh} = \mu_h$ for all but finitely many $h \in \Gamma$. We prove that all nonsingular Bernoulli actions in \eqref{eq.my-bernoulli} are solid actions. Note that our family mainly consists of Bernoulli actions with a diffuse base space, thus complementing the results of \cite{HIK20}. Actually, our method to prove Theorems \ref{thm.main} and \ref{thm.main2} is a ``solidity method'' that was introduced in \cite{HSV16}. In \cite{HSV16}, it was proven that any faithful normal state $\psi$ on a free Araki-Woods factor $M$ with the property that the centralizer $M^\psi$ is nonamenable, must have a corner that is unitarily conjugate to a corner of the canonical free quasi-free state $\vphi$ on $M$. So in cases where the centralizer of the free quasi-free state $\vphi$ is a nonamenable II$_1$ factor, we can characterize $\vphi$ as the essentially unique state on $M$ having a nonamenable centralizer. As a consequence, the spectral measure class of the modular operator $\Delta_\vphi$ becomes an invariant of such von Neumann algebras $M$. 
We thus introduce the following terminology: we say that a faithful normal state $\vphi$ on a von Neumann algebra $M$ is a \emph{solid state} if every faithful normal state $\psi$ on $M$ with a nonamenable centralizer $M^\psi$ has a corner that is unitarily conjugate to a corner of $\vphi$ (see Definition \ref{def.solid-state}). In particular, if $\vphi$ is a solid state on a type III factor and $M^\vphi$ is amenable, it follows that every faithful normal state on $M$ has an amenable centralizer. The main result of \cite{HSV16} can then be reformulated as saying that the free quasi-free state on a free Araki-Woods factor is a solid state. For our nonsingular Bernoulli actions in \eqref{eq.my-bernoulli} with crossed product $M = L^\infty(X,\mu) \rtimes \Gamma$, it is in general not true that the crossed product state $\vphi_\mu$ is solid. Nevertheless, our proof of Theorems \ref{thm.main} and \ref{thm.main2} is based on carefully analyzing which states on $M$ have a nonamenable centralizer. Under extra assumptions, we do find that $\vphi_\mu$ is a solid state. Our solidity results can then be summarized as follows. \begin{letterthm}\label{thm.main-solid} Let $\Lambda$ be a nonamenable group and let $\nu \sim \eta$ be equivalent probability measures on a standard Borel space $Y$. Consider the nonsingular Bernoulli actions of $\Gamma = \Z \ast \Lambda$ on $(X,\mu)$ given by \eqref{eq.my-bernoulli}. Denote $M = L^\infty(X,\mu) \rtimes \Gamma$. \begin{enumlist} \item The nonsingular Bernoulli action $\Gamma \actson (X,\mu)$ is a solid action. \item The factor $M$ is solid relative to $L(\Lambda)$ in the sense of \cite[Definition 3.2]{Mar16}. \item If $\Lambda$ is biexact, then $M$ is solid. \item If $\Lambda$ is biexact and $(\log d\nu/d\eta)_*(\nu)$ is nonatomic, then the crossed product state $\vphi_\mu$ on $M$ is a solid state. 
\end{enumlist} \end{letterthm} In \cite[Corollary 4.5]{Oza04}, it was proven that for every pmp Bernoulli action $\Gamma \actson (X,\mu) = (X_0,\mu_0)^\Gamma$ of a biexact group $\Gamma$, the crossed product $M = L^\infty(X,\mu) \rtimes \Gamma$ is solid. It is an open problem whether the same holds for arbitrary nonsingular Bernoulli actions of biexact groups. By \cite[Theorem C]{HV12}, this problem is equivalent to the open problem whether every nonsingular Bernoulli action of a biexact group is a solid action. Since we expect that these open problems have a positive solution, it is tempting to believe that for any nonsingular Bernoulli action $\Gamma \actson (X,\mu)$ of a biexact group, the crossed product state $\vphi_\mu$ on $M = L^\infty(X,\mu) \rtimes \Gamma$ is a solid state. This is, however, not true, as we show in Example \ref{ex.not-solid}. In an attempt to give a more conceptual explanation for Theorem \ref{thm.main}, it is equally natural to try to prove the following statement: if $\Gamma \actson (X,\mu)$ is any nonsingular Bernoulli action with the property that the measure $\mu$ is $\Lambda$-invariant for a nonamenable subgroup $\Lambda < \Gamma$, then the spectral measure class of $\Delta_{\vphi_\mu}$ can be recovered as an invariant of $M$. But the same Example \ref{ex.not-solid} shows that this statement is also false. This explains why our Theorem \ref{thm.main} is restricted to the natural family of actions introduced in \eqref{eq.my-bernoulli}. We finally prove the following partial converse to Theorem \ref{thm.main}. \begin{letterprop}\label{prop.some-isomorphism} Let $\Lambda$ be a countable group and put $\Gamma = \Z \ast \Lambda$. For $i=1,2$, let $\nu_i \sim \eta_i$ be equivalent probability measures on the standard Borel spaces $Y_i$. Denote by $\Gamma \actson^{\al_i} (X_i,\mu_i)$ the associated nonsingular Bernoulli actions given by \eqref{eq.my-bernoulli}. Denote $\sigma_i = (\log d\nu_i / d\eta_i)_*(\nu_i)$. 
If $\sigma_1 = \sigma_2$ and if the maps $\log d\nu_i / d\eta_i : (Y_i,\nu_i) \to \R$ are not essentially one-to-one, then there exists a measure preserving conjugacy between the actions $\Gamma \actson^{\al_i} (X_i,\mu_i)$. In particular, the crossed product factors $M_i = L^\infty(X_i,\mu_i) \rtimes_{\al_i} \Gamma$ are isomorphic. \end{letterprop} It is clear that Proposition \ref{prop.some-isomorphism} is not an optimal result. One might for instance speculate that the assumption $\sigma_1 \sim \sigma_2$ should be sufficient to prove that the nonsingular Bernoulli actions $\al_i$ are orbit equivalent. Still, our result is nonempty: in Example \ref{ex.some-isomorphism}, we provide examples where the hypotheses of Proposition \ref{prop.some-isomorphism} are satisfied with $Y_1$ being a finite set with atomic measures and $Y_2 = [0,1]$ with two measures $\nu_2 \sim \eta_2$ that are equivalent with the Lebesgue measure. In these examples, there is no obvious conjugacy between the nonsingular Bernoulli actions given by \eqref{eq.my-bernoulli}. \section{Preliminaries} A von Neumann subalgebra $B \subset N$ is said to be \emph{with expectation} if there exists a faithful normal conditional expectation $E : N \to B$. We start by recalling Popa's theory of \emph{intertwining-by-bimodules}, as introduced in \cite[Section 2]{Pop03}. We make use of the adaptations to the semifinite and infinite setting, which reached a final version in \cite[Section 4]{HI15}. So, let $M$ be any von Neumann algebra with separable predual and let $p,q \in M$ be nonzero projections. Let $A \subset pMp$ and $B \subset qMq$ be von Neumann subalgebras with expectation. We write $A \prec_M B$ if there exist projections $r \in A$, $s \in B$, a nonzero partial isometry $v \in r M s$ and a unital normal $*$-homomorphism $\theta : r A r \to s B s$ such that $a v = v \theta(a)$ for all $a \in r A r$ and $\theta(r A r) \subset s B s$ is with expectation. 
We write $A \prec_{f,M} B$ if for every nonzero projection $e \in A' \cap pMp$, we have that $Ae \prec_M B$. When $A$ is finite, $B$ is semifinite, $E_B : qMq \to B$ is a faithful normal conditional expectation and $\Tr$ is a faithful normal semifinite trace on $B$, the following results are contained in \cite[Theorem 4.3]{HI15}. \begin{itemlist} \item $A \not\prec_M B$ if and only if there exists a sequence of unitaries $a_n \in \cU(A)$ such that $$\|E_B(x^* a_n y)\|_{2,\Tr} \to 0 \quad\text{for all $x,y \in p M q$ with $\Tr(x^*x) , \Tr(y^* y) < +\infty$.}$$ \item $A \prec_M B$ if and only if there exists an integer $n \in \N$, a finite projection $s \in M_n(\C) \ot B$, a nonzero partial isometry $v \in (\C^n \ot p M) s$ and a normal unital $*$-homomorphism $\theta : A \to s(M_n(\C) \ot B)s$ such that $a v = v \theta(a)$ for all $a \in A$. \end{itemlist} Recall that given a von Neumann subalgebra $A \subset N$, one defines $\cN_N(A) = \{u \in \cU(N) \mid u A u^* = A\}$ and one calls $\cN_N(A)\dpr$ the \emph{normalizer} of $A$ inside $N$. Note that $A' \cap N \subset \cN_N(A)\dpr$. When $A \subset N$ is with expectation, also $\cN_N(A)\dpr \subset N$ is with expectation. Assume again that $M$ is a von Neumann algebra with separable predual and that $A \subset pMp$ and $B \subset qMq$ are von Neumann subalgebras with expectation. When $e \in A' \cap pMp$ is a nonzero projection such that $Ae \prec_M B$, we can take a nonzero partial isometry $v$ as above, where $v \in rMs$ and $r \in Ae$. When $u \in \cN_{pMp}(A)$, we can replace $v$ by $uv$ and replace $\theta$ by $\theta \circ \Ad u^*$. It follows that $A \, ueu^* \prec_M B$. We conclude from this argument that there exists a unique projection $z$ in the center $\cZ(\cN_{pMp}(A)\dpr)$ of the normalizer such that $Az \prec_{f,M} B$ and $A(p-z) \not\prec_M B$. 
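The first of the two criteria above can be made concrete in the simplest tracial example. The following sketch is included only as an illustration and is not used in the sequel; it uses the standard notation $u_g$ for the canonical group unitaries and the trace preserving conditional expectation $E_B$.

```latex
% Illustrative example only; not used elsewhere in this paper.
In the tracial case, the first criterion can be verified directly. Let
$M = L(\Z \ast \Z)$ with free generators $a, b$ and put
$A = L(\langle a \rangle)$ and $B = L(\langle b \rangle)$, with trace
preserving conditional expectation $E_B : M \to B$. For $g, h \in \Z \ast \Z$,
we have
$$E_B(u_g^* \, u_{a^n} \, u_h) = \begin{cases} u_{g^{-1} a^n h} & \text{if $g^{-1} a^n h \in \langle b \rangle$,} \\ 0 & \text{otherwise.} \end{cases}$$
If $g^{-1} a^n h$ and $g^{-1} a^m h$ both lie in $\langle b \rangle$, then
$g^{-1} a^{n-m} g \in \langle b \rangle$, forcing $n = m$, since no conjugate
of a nontrivial power of $a$ is a power of $b$. So for fixed $g,h$, at most
one $n$ contributes and, by density, $\|E_B(x^* \, u_{a^n} \, y)\|_2 \to 0$
for all $x,y \in M$, so that $A \not\prec_M B$.
```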
If $M$ is a von Neumann algebra with separable predual and if $B \subset M$ is a von Neumann subalgebra with expectation, then $M$ is \emph{solid relative to} $B$ in the sense of \cite[Definition 3.2]{Mar16} if and only if every von Neumann subalgebra $Q \subset pMp$ with expectation and with diffuse center $\cZ(Q)$ satisfies at least one of the following properties: $Q$ is amenable or $Q \prec_M B$. For every von Neumann algebra $M$ with separable predual, we denote by $\core(M)$ its \emph{continuous core}, which can be concretely realized as $M \rtimes_{\si^\vphi} \R$ whenever $\vphi$ is a faithful normal state on $M$ with modular automorphism group $(\si^\vphi_t)_{t \in \R}$. We denote by $\lambda_\vphi(t)$, $t \in \R$, the canonical unitary operators in the crossed product $\core(M) = M \rtimes_{\si^\vphi} \R$, generating the von Neumann subalgebra $L_\vphi(\R) \subset \core(M)$. There is a canonical faithful normal semifinite trace $\Tr$ on $\core(M)$. Both the inclusion $M \subset \core(M)$ and the trace $\Tr$ are essentially independent of the choice of $\vphi$, since Connes cocycle derivative theorem provides a trace preserving $*$-isomorphism $\theta : M \rtimes_{\si^\vphi} \R \to M \rtimes_{\si^\om} \R$ satisfying $\theta(a) = a$ for all $a \in M$ and $\theta(\lambda_\vphi(t)) = [D\vphi:D\om]_t \, \lambda_\om(t)$. The restriction of the trace $\Tr$ to $L_\vphi(\R)$ is semifinite. The unique trace preserving conditional expectation $E_{L_\vphi(\R)} : \core(M) \to L_\vphi(\R)$ satisfies $E_{L_\vphi(\R)}(a) = \vphi(a) 1$ for all $a \in M$. Whenever $P \subset M$ is a von Neumann subalgebra and $E : M \to P$ is a faithful normal conditional expectation, we obtain a canonical trace preserving embedding $\core(P) \hookrightarrow \core(M)$, which can be concretely constructed by taking a faithful normal state $\vphi$ on $P$ and writing $\core(P) = P \rtimes_{\si^\vphi} \R \hookrightarrow M \rtimes_{\si^{\vphi \circ E}} \R = \core(M)$. 
Note that this embedding depends on the choice of $E$. In the trivial case where $P = \C 1$, we have that $E(a) = \psi(a)1$ for a uniquely determined faithful normal state $\psi$ on $M$, and the embedding corresponds to $L_\psi(\R) \subset \core(M)$. Given an action $\Gamma \actson I$ of a countable group $\Gamma$ on a countable set $I$ and given a von Neumann algebra $(P,\om)$ equipped with a faithful normal state, we consider the generalized Bernoulli action $\Gamma \actson (N,\om) = (P,\om)^I$. Here we use the notation $(P,\om)^I$ to denote the tensor product of copies of $(P,\om)$ indexed by $I$. The action $\Gamma \actson (N,\om)$ is state preserving. We get a canonical action of $\Gamma$ on the continuous core $\core(N)$ such that $$\core(N \rtimes \Gamma) = \core(N) \rtimes \Gamma \; .$$ In \cite{Pop03}, Popa introduced his fundamental malleable deformation for probability measure preserving Bernoulli actions $\Gamma \actson (X_0,\mu_0)^\Gamma$, which has been a cornerstone for deformation/\allowbreak rigidity theory. It has been extended in several directions. In \cite{Ioa06}, another malleable deformation was found, adapted to noncommutative Bernoulli actions $\Gamma \actson (P,\tau)^\Gamma$, where $(P,\tau)$ is a tracial von Neumann algebra. This can be adapted in a straightforward way to the nontracial case, i.e.\ for Bernoulli actions $\Gamma \actson (P,\om)^\Gamma$, where $\om$ is a faithful normal state on $P$ (see e.g.\ \cite[Section 5]{Mar16}). Also Popa's spectral gap rigidity for Bernoulli actions, as introduced in \cite{Pop06}, can be extended to the setting of generalized Bernoulli actions, i.e.\ for actions $\Gamma \actson (X_0,\mu_0)^I$, where $I$ is a countable set on which $\Gamma$ is acting, see \cite[Section 4]{IPV10}. Putting all these generalizations together, we right away get the following variant of \cite[Corollary 4.3]{IPV10}. \begin{theorem}\label{thm.def-rig-nc-bernoulli} Let $(P,\om)$ be an amenable von Neumann algebra with a faithful normal state.

Let $\Gamma \actson I$ be an action of a countable group $\Gamma$ on a countable set $I$. Assume that $\Stab(i)$ is amenable for every $i \in I$ and assume that there exists a $\kappa \in \N$ such that $\Stab J$ is finite whenever $J \subset I$ and $|J| \geq \kappa$. Denote, as above, $(N,\om) = (P,\om)^I$ and let $\Gamma \actson (N,\om)$ be the generalized Bernoulli action. Write $M = N \rtimes \Gamma$. Let $p \in \core(M)$ be a projection of finite trace and $A \subset p \core(M) p$ a von Neumann subalgebra such that the relative commutant $A' \cap p \core(M) p$ has no amenable direct summand. Denote by $Q = \cN_{p \core(M) p}(A)\dpr$ the normalizer of $A$. Let $z \in \cZ(Q)$ be the maximal projection such that $A z \prec_f L_\om(\R)$. Put $z' = p-z$. Then $Q z' \prec_f L_\om(\R) \vee L(\Gamma)$. \end{theorem} \begin{proof} Write $\core(L(\Gamma)) = L_\om(\R) \vee L(\Gamma)$. Replacing $A$ by $A z\dpr$ where $z\dpr$ is an arbitrary projection in $\cZ(Q) z'$, we may assume that $A \not\prec L_\om(\R)$ and we have to prove that $Q \prec \core(L(\Gamma))$. Even though $\om$ is not necessarily tracial, the tensor length deformation makes sense in this context and the proof of \cite[Corollary 4.3]{IPV10} can be copied almost verbatim. The conclusion is that at least one of the following statements holds: $Q \prec \core(N) \rtimes \Stab i$ for some $i \in I$, or $Q \prec \core(L(\Gamma))$. The von Neumann algebra $\core(N) \rtimes \Stab i$ is amenable. Since $A' \cap p \core(M) p \subset Q$, the von Neumann algebra $Q$ has no amenable direct summand. Therefore, it is impossible that $Q \prec \core(N) \rtimes \Stab i$. This concludes the proof of the theorem. \end{proof} Also the following result is an immediate noncommutative variant of known results for probability measure preserving Bernoulli actions. The method was introduced in \cite[Section 3]{Pop03} and the following version is a straightforward generalization of \cite[Lemma 4.2]{Vae07}.
The same result still holds when replacing the ad hoc construction \eqref{eq.pseudo-almost} by the quasinormalizer of $B$ inside $pMp$, but we only need this simpler version. \begin{proposition}\label{prop.control-normalizer} Make the same assumptions as in Theorem \ref{thm.def-rig-nc-bernoulli}. Let $p \in L(\Gamma)$ be a projection and $B \subset p L(\Gamma) p$ a diffuse von Neumann subalgebra. Define \begin{equation}\label{eq.pseudo-almost} D = \bigl\{u \in p M p \bigm| \exists \be \in \Aut(B) \; , \; \forall b \in B \; : \; u b = \be(b) u \bigr\}\dpr \end{equation} and note that $\cN_{pMp}(B)\dpr \subset D$. \begin{enumlist} \item If $B \not\prec_{L(\Gamma)} L(\Stab i)$ for every $i \in I$, then $D \subset p L(\Gamma) p$. \item If $rDr$ is nonamenable for every nonzero projection $r \in B' \cap p L(\Gamma) p$, then $D \subset p L(\Gamma) p$. \end{enumlist} \end{proposition} \begin{proof} Since $L(\Gamma)$ lies in the centralizer of the state $\om$ on $M$, all computations of \cite[Lemma 4.2]{Vae07} go through verbatim. So, if $B \not\prec_{L(\Gamma)} L(\Stab i)$ for every $i \in I$, it follows from \cite[Lemma 4.2]{Vae07} that $D \subset p L(\Gamma) p$. Next assume that there exists an $i \in I$ such that $B \prec_{L(\Gamma)} L(\Stab i)$. It suffices to prove that $rDr$ is amenable for some nonzero projection $r \in B' \cap p L(\Gamma) p$. By assumption, $\Stab J$ is finite whenever $J \subset I$ and $|J| \geq \kappa$. Also, $B$ is diffuse. We thus find a finite nonempty subset $J \subset I$ such that $B \prec_{L(\Gamma)} L(\Stab J)$ and $B \not\prec_{L(\Gamma)} L(\Stab(J \cup \{j\}))$ for every $j \in I \setminus J$. 
As in \cite[Remark 3.8]{Vae07}, we can take an integer $n \in \N$, a projection $q \in M_n(\C) \ot L(\Stab J)$, a nonzero partial isometry $v \in (\C^n \ot p L(\Gamma))q$ and a unital normal $*$-homomorphism $\theta : B \to q (M_n(\C) \ot L(\Gamma)) q$ such that $b v = v \theta(b)$ for all $b \in B$ and such that $\theta(B) \not\prec_{L(\Stab J)} L(\Stab(J \cup \{j\}))$ for every $j \in I \setminus J$. When $u \in pMp$ and if $\be \in \Aut(B)$ such that $u b = \be(b) u$ for all $b \in B$, it follows that $$v^* u v \, \theta(b) = \theta(\be(b)) \, v^* u v \quad\text{for all $b \in B$.}$$ By \cite[Lemma 4.2]{Vae07}, it follows that $v^* u v \in N \rtimes \Norm J$, where $\Norm J = \{g \in \Gamma \mid g \cdot J = J \}$. Writing $r = v v^*$ and $s = v^* v$, we get that $r$ is a projection in $B' \cap p L(\Gamma) p \subset D$, that $s$ is a projection in $N \rtimes \Norm J$ and that $v^* D v \subset N \rtimes \Norm J$. Since $L(\Gamma) \subset N \rtimes \Gamma$ is with expectation, also $B \subset p M p$ and thus $D \subset p M p$ are with expectation. It follows that $v^* D v$ is with expectation in $s(N \rtimes \Norm J)s$. Since $J$ is finite and nonempty, the group $\Norm J$ is amenable. We conclude that $v^* D v$ is amenable, so that $rDr$ is amenable. \end{proof} \section{Measure classes of faithful normal states}\label{sec.measure-classes} For any self-adjoint, possibly unbounded operator $T$, we denote by $\class(T)$ its spectral measure class on $\R$. Note that a Borel set $\cU \subset \R$ has measure zero for $\class(T)$ if and only if the spectral projection $1_\cU(T)$ equals $0$. Given a faithful normal state $\om$ on a von Neumann algebra $M$, we define the measure class $\class(\om) := \class(\log \Delta_\om)$, where $\Delta_\om$ is the modular operator of $\om$. Of course, $\class(\om)$ highly depends on the choice of the state $\om$ and hence, does not provide an invariant of the von Neumann algebra $M$. 
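This dependence on the state can already be seen in the most elementary infinite tensor product example. The following standard computation is included only as an illustrative sketch and is not used in the sequel.

```latex
% Illustrative computation only; not used elsewhere in this paper.
For $0 < \lambda < 1$, consider the Powers factor
$M = \bigotimes_{n \in \N} (M_2(\C), \om_\lambda)$, where $\om_\lambda$ is the
state whose density matrix has eigenvalues $(1+\lambda)^{-1}$ and
$\lambda (1+\lambda)^{-1}$. The modular operator of each tensor factor has
eigenvalues $1, \lambda, \lambda^{-1}$, so the modular operator of the product
state $\om$ has pure point spectrum $\{\lambda^k \mid k \in \Z\}$ and
$\class(\om)$ is the counting measure class on $\Z \log \lambda$. Replacing
$\om_\lambda$ in a single tensor factor by $\om_\mu$ with
$\log \mu \notin \Z \log \lambda$ yields a faithful normal state $\om'$ on the
same factor $M$ for which $\class(\om')$ is the counting measure class on
$(\Z \log \lambda) \cup (\pm \log \mu + \Z \log \lambda)$, so that
$\class(\om') \neq \class(\om)$.
```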
A key element of this paper is that certain von Neumann algebras, including many nonsingular Bernoulli crossed products, have a favorite state $\om$ that can be essentially intrinsically characterized, so that $\class(\om)$ becomes an isomorphism invariant for this family of von Neumann algebras. To establish these results, we rephrase \cite[Corollary 3.2]{HSV16} in the following way, also introducing the notation $\vphi \prec_f \om$ for faithful normal states on a von Neumann algebra. \begin{lemma}[{\cite[Corollary 3.2]{HSV16}}]\label{lem.HSV} Let $\vphi$ and $\om$ be faithful normal states on a von Neumann algebra $M$, with corresponding canonical subalgebras $L_\vphi(\R)$ and $L_\om(\R)$ of the continuous core $\core(M)$. Then the following three statements are equivalent. \begin{enumlist} \item $L_\vphi(\R) \prec_{\core(M)} L_\om(\R)$. \item There exist a nonzero partial isometry $v \in M$ and $\gamma > 0$ such that $\gamma^{it} \, [D\om:D\vphi]_t \, \si^\vphi_t(v) = v$ for all $t \in \R$. \item There exists a nonzero partial isometry $v \in M$ with $e := v^* v \in M^\vphi$, $q:= vv^* \in M^\om$ and $\vphi(e)^{-1} \vphi(v^* x v) = \om(q)^{-1} \om(x)$ for all $x \in qMq$. \end{enumlist} When these equivalent conditions hold, we write $\vphi \prec \om$. Note that by 3, we have $\vphi \prec \om$ iff $\om \prec \vphi$. Also the following three statements are equivalent. \begin{enumlist}[resume] \item $L_\vphi(\R) \prec_{f,\core(M)} L_\om(\R)$. \item For every nonzero projection $p \in M^\vphi$, there exist $\gamma > 0$ and $v \in M$ as in 2 with $v^* v \leq p$. \item For every nonzero projection $p \in M^\vphi$, there exists $v \in M$ as in 3 with $v^* v \leq p$. \end{enumlist} When these equivalent conditions hold, we write $\vphi \prec_f \om$. \end{lemma} In full generality, the relation $\vphi \prec_f \om$ is not strong enough to conclude that $\class(\vphi) \prec \class(\om)$. We nevertheless have the following partial results.
\begin{proposition}\label{prop.class} Let $\vphi$ and $\om$ be faithful normal states on a von Neumann algebra $M$. \begin{enumlist} \item If $\vphi \prec_f \om$, there exists an atomic probability measure $\rho$ on $\R$ such that $\class(\vphi) \prec \rho \ast \class(\om)$. \item If $\vphi \prec \om$ and if $M^\vphi$ is a factor, then $\class(\vphi) \prec \class(\om)$. \end{enumlist} \end{proposition} \begin{proof} 1.\ We apply point 5 of Lemma \ref{lem.HSV}. We thus find a sequence of nonzero projections $p_n \in M^\vphi$, partial isometries $v_n \in M$ and $\gamma_n > 0$ such that $\sum_n p_n = 1$, $v_n^* v_n = p_n$ and $$\gamma_n^{it} \, [D\om:D\vphi]_t \, \si^\vphi_t(v_n) = v_n \quad\text{for all $n$ and all $t \in \R$.}$$ Define $H = \ell^2(\N^2) \ot L^2(M,\om)$ and consider the unitary representation $$\theta : \R \to \cU(H) : \theta(t)(\delta_{n,m} \ot \xi) = (\gamma_n / \gamma_m)^{it} \, \delta_{n,m} \ot \Delta_\om^{it} \xi \; .$$ Then, $$V : L^2(M,\vphi) \to H : V(x) = \sum_{n,m} \gamma_m^{1/2} \, \delta_{n,m} \ot v_n x v_m^*$$ is a well defined isometry satisfying $\theta(t) V = V \Delta_\vphi^{it}$ for all $t \in \R$. Hence, $(\Delta_\vphi^{it})_{t \in \R}$ is unitarily equivalent with a subrepresentation of $\theta$. Choosing an atomic probability measure $\rho$ on $\R$ with atoms in the points $\log \gamma_n - \log \gamma_m$, it follows that $\class(\vphi) \prec \rho \ast \class(\om)$. 2.\ We apply point 2 of Lemma \ref{lem.HSV}. Take a nonzero projection $p \in M^\vphi$, a partial isometry $v \in M$ and $\gamma > 0$ such that $v^* v = p$ and $\gamma^{it} \, [D\om:D\vphi]_t \, \si^\vphi_t(v) = v$ for all $t \in \R$. Since $M^\vphi$ is a factor, we can choose partial isometries $w_n \in M^\vphi$ with $w_n w_n^* \leq p$ and $\sum_n w_n^* w_n = 1$. We can then apply the proof of the first point to the partial isometries $v w_n$, with $\gamma_n = \gamma$ for all $n$. The conclusion then becomes $\class(\vphi) \prec \class(\om)$.
\end{proof} For later purposes, we prove the following rather specific and technical variant of the second point in Proposition \ref{prop.class}. \begin{lemma}\label{lem.class} Let $N$ be a von Neumann algebra with von Neumann subalgebra $M \subset N$ and faithful normal conditional expectation $E : N \to M$. Let $\om_0$ and $\vphi_0$ be faithful normal states on $M$. Write $\om = \om_0 \circ E$ and $\vphi = \vphi_0 \circ E$. Assume that there exists a subset $\cG \subset \cU(N)$ such that $u^* \si^\om_t(u) \in M$ for all $t \in \R$, $u \in \cG$ and such that the linear span of $\cG M$ is dense in $L^2(N,\om)$. If $\vphi \prec \om$ and if $M^{\vphi_0}$ is a factor, there exists $u \in \cG$ such that $$\class(\vphi_0) \prec \class((\om \circ \Ad u)|_M) \prec \class(\om) \; .$$ \end{lemma} \begin{proof} By Lemma \ref{lem.HSV}, we find a nonzero element $v \in N$ and $\gamma > 0$ such that \begin{equation}\label{eq.my-eq1} \gamma^{it} \, \si^\om_t(v) \, [D\om:D\vphi]_t = \gamma^{it} \, [D\om:D\vphi]_t \, \si^\vphi_t(v) = v \quad\text{for all $t \in \R$.} \end{equation} Since the linear span of $\cG M$ is dense in $L^2(N,\om)$, we can choose $u \in \cG$ such that $E(u^* v) \neq 0$. Denote $a_t := u^* \si^\om_t(u) \in \cU(M)$. Define the faithful normal state $\psi = \om \circ \Ad u$ on $N$. By construction, $[D\psi : D \om]_t = a_t$. Since $a_t \in M$, we get that $\psi = \psi_0 \circ E$, where $\psi_0$ is defined as the restriction of $\psi$ to $M$. We have to prove that $\class(\vphi_0) \prec \class(\psi_0)$. For every $x \in N$ and $t \in \R$, we have \begin{align*} \si_t^\psi(x) \, [D \psi : D \vphi]_t &= u^* \si^\om_t(u x u^*) u \, [D \psi : D \vphi]_t = a_t \, \si^\om_t(x) \, [D\om : D\psi]_t \, [D \psi : D \vphi]_t \\ &= a_t \, \si_t^\om(x) \, [D \om : D \vphi]_t \; . 
\end{align*} Applying this to $x = E(u^* v)$, using \eqref{eq.my-eq1} and using that $[D \psi : D \vphi]_t = [D\psi_0 : D\vphi_0]_t \in M$ and $[D \om : D \vphi]_t = [D \om_0 : D\vphi_0]_t \in M$, we get that \begin{align*} \gamma^{it} \, \si_t^{\psi_0}(x) \, [D \psi_0 : D \vphi_0]_t & = \gamma^{it} \, a_t \, \si_t^\om(E(u^* v)) \, [D \om : D \vphi]_t = a_t \, E\bigl( \si_t^\om(u)^* \, \gamma^{it} \, \si_t^\om(v) \, [D \om : D \vphi]_t\bigr)\\ & = a_t \, E(a_t^* \, u^* v) = E(u^* v) = x \; . \end{align*} Defining the partial isometry $w \in M$ as the polar part of $x$, we still have $$\gamma^{it} \, \si_t^{\psi_0}(w) \, [D \psi_0 : D \vphi_0]_t = w \; .$$ Since $M^{\vphi_0}$ is a factor, it then follows from the second point of Proposition \ref{prop.class} that $\class(\vphi_0) \prec \class(\psi_0)$. Since $\psi = \psi_0 \circ E$, we have that $\class(\psi_0) \prec \class(\psi)$. Since $\psi = \om \circ \Ad u$, the modular operators of $\psi$ and $\om$ are unitarily equivalent, so that $\class(\psi) = \class(\om)$. \end{proof} \section{Proof of Theorems \ref{thm.main} and \ref{thm.main2}} We deduce Theorems \ref{thm.main} and \ref{thm.main2} from a more general rigidity result for certain nonsingular coinduced actions. Let $G$ be a countable amenable group and let $G \actson^\al (Z,\zeta)$ be any nonsingular action on a nontrivial standard probability space. Given any countable infinite group $\Lambda$, we consider the free product $\Gamma = G \ast \Lambda$ and the countable set $I = (G \ast \Lambda)/G$ with base element $i_0 = e G$. 
We then define $(X,\mu) = (Z,\zeta)^I$ and the nonsingular action $\Gamma \actson (X,\mu)$ by \begin{equation}\label{eq.more-general-nonsingular} \begin{split} &(g \cdot x)_{i} = x_{g^{-1} \cdot i} \;\;\text{when $g \in \Lambda$,}\\ & (g \cdot x)_{i} = x_{g^{-1} \cdot i} \;\;\text{when $i \neq i_0$ and $g \in G$, and}\quad (g \cdot x)_{i_0} = g \cdot x_{i_0}\;\;\text{when $g \in G$.} \end{split} \end{equation} So, $\Lambda$ acts as a generalized Bernoulli action on $X = Z^I$, while the action of $G$ is the diagonal product of a generalized Bernoulli action on $Z^{I \setminus \{i_0\}}$ and the given action $G \actson^\al Z$, viewed as the $i_0$-coordinate. Note that the nonsingular Bernoulli action in \eqref{eq.my-bernoulli} arises as a special case of \eqref{eq.more-general-nonsingular} by taking $G = \Z$ and $\Z \actson (Z,\zeta) = \prod_{n \in \Z} (Y,\mu_n)$ by Bernoulli shift. When $i \neq j$ are distinct elements, the stabilizer $\Stab \{i,j\}$ is trivial. It follows that $\Gamma \actson (X,\mu)$ is essentially free. The action of $\Lambda$ is measure preserving and ergodic. It follows that $\Gamma \actson (X,\mu)$ is ergodic, with the Krieger type being determined in the following way: if $\zeta$ is $G$-invariant, then $\Gamma \actson (X,\mu)$ is of type II$_1$. In all other cases, $\Gamma \actson (X,\mu)$ is of type III. We associate to $G \actson^\al (Z,\zeta)$ the stable measure class $\stc(\al)$ defined as the smallest stable measure class such that $(\log d(g \cdot \zeta)/ d\zeta)_*(\zeta) \prec \stc(\al)$ for all $g \in G$. Note that $\Gamma \actson (X,\mu)$ is of type III$_\lambda$ if and only if $\stc(\al)$ is equivalent with the counting measure on $\Z \log \lambda$. Otherwise, $\Gamma \actson (X,\mu)$ is of type III$_1$. Theorems \ref{thm.main} and \ref{thm.main2} will be deduced from the following more general result. 
\begin{theorem}\label{thm.main3} For $i \in \{1,2\}$, let $G_i$ be countable amenable groups with nonsingular actions $G_i \actson^{\al_i} (Z_i,\zeta_i)$ on nontrivial standard probability spaces. Let $\Lambda_i$ be nonamenable groups. Put $\Gamma_i = G_i \ast \Lambda_i$ and define $\Gamma_i \actson (X_i,\mu_i)$ by \eqref{eq.more-general-nonsingular}. Write $M_i = L^\infty(X_i,\mu_i) \rtimes \Gamma_i$. \begin{enumlist} \item If $M_1 \cong M_2$, then $\stc(\al_1) \sim \stc(\al_2)$. \item If $M_1$ embeds with expectation into $M_2$, there exists an atomic probability measure $\rho$ on $\R$ such that $\stc(\al_1) \prec \rho \ast \stc(\al_2)$. \end{enumlist} \end{theorem} We prove Theorem \ref{thm.main3} by combining several ingredients: we first prove how the crossed product of an action of the form \eqref{eq.more-general-nonsingular} can be embedded in a state preserving way into a generalized Bernoulli crossed product as studied in Theorem \ref{thm.def-rig-nc-bernoulli}. We then combine Theorem \ref{thm.def-rig-nc-bernoulli} with the results of Section \ref{sec.measure-classes} to reach the conclusions of Theorem \ref{thm.main3}. \begin{lemma}\label{lem.canonical-embedding} Let $\Gamma \actson (X,\mu)$ be defined by \eqref{eq.more-general-nonsingular}. Define $P = L^\infty(Z,\zeta) \rtimes G$ and denote by $\om$ the canonical crossed product state on $P$. There is a canonical state preserving embedding $\psi$ of $L^\infty(X,\mu) \rtimes \Gamma$ into the generalized Bernoulli crossed product $(P,\om)^I \rtimes \Gamma$, and there is a state preserving conditional expectation of $(P,\om)^I \rtimes \Gamma$ onto the image of $\psi$. \end{lemma} \begin{proof} We denote by $(u_g)_{g \in \Gamma}$ the canonical unitary operators in the crossed products $L^\infty(X)\rtimes \Gamma$ and $(P,\om)^I \rtimes \Gamma$. We denote by $(v_g)_{g \in G}$ the canonical unitary operators in the crossed product $L^\infty(Z) \rtimes G = P$. 
We denote by $\pi_0 : (P,\om) \to (P,\om)^I$ the canonical embedding in coordinate $i_0 \in I$. Note that $\pi_0(P)$ commutes with $(u_g)_{g \in G}$ inside $(P,\om)^I \rtimes \Gamma$. We denote by $\vphi$ the crossed product state on $L^\infty(X,\mu) \rtimes \Gamma$. We still denote by $\om$ the natural state on $(P,\om)^I \rtimes \Gamma$. We can then define the state preserving embedding $$\psi : L^\infty(X) \rtimes \Gamma \to (P,\om)^I \rtimes \Gamma$$ such that the restriction of $\psi$ to $L^\infty(X)$ is the canonical embedding of $L^\infty(X,\mu) = L^\infty((Z,\zeta)^I)$ into $(P,\om)^I$ and such that $$\psi(u_g) = u_g \;\;\text{for all $g \in \Lambda$, and}\quad \psi(u_h) = \pi_0(v_h) \, u_h \;\;\text{for all $h \in G$.}$$ By construction, $\psi \circ \si_t^\vphi = \si_t^\om \circ \psi$. There thus exists a state preserving conditional expectation of $(P,\om)^I \rtimes \Gamma$ onto the image of $\psi$. \end{proof} By Lemma \ref{lem.canonical-embedding} any embedding with expectation of a crossed product von Neumann algebra $N$ into $L^\infty(X,\mu) \rtimes \Gamma$ will induce an embedding with expectation of $N$ into $(P,\om)^I \rtimes \Gamma$. As a preparation to prove Theorem \ref{thm.main3}, we thus prove a general rigidity result for such embeddings into generalized Bernoulli crossed products $(P,\om)^I \rtimes \Gamma$. We actually prove a very general result of this kind, which is of independent interest. Let $\cR$ be a nonsingular countable equivalence relation on a standard probability space $(X,\mu)$. In the context of the discussion above, $\cR$ would be the orbit equivalence relation of a nonsingular action of the form \eqref{eq.more-general-nonsingular}, but we prove results for arbitrary equivalence relations $\cR$. We denote by $\vphi_\mu$ the canonical faithful normal state on the von Neumann algebra $L(\cR)$. We assume that the centralizer $L(\cR)^{\vphi_\mu}$ is large, in the sense that it has no amenable direct summand. 
We prove that if $L(\cR)$ embeds with expectation $E$ into a noncommutative Bernoulli crossed product $(M,\om)$ satisfying the appropriate conditions, then automatically $\vphi_\mu \circ E \prec_f \om$, using the notation of Lemma \ref{lem.HSV}. Recall that a countable nonsingular equivalence relation $\cR$ on a standard probability space $(X,\mu)$ is said to be \emph{purely infinite} if for every nonnegligible Borel set $\cU \subset X$, the restriction $\cR|_\cU$ does not admit an $\cR$-invariant probability measure that is equivalent with $\mu$. When $\cR$ is ergodic, this is equivalent to saying that $\cR$ is of type III. \begin{proposition}\label{prop.main-technical} Let $\cR$ be a countable nonsingular equivalence relation on the standard probability space $(X,\mu)$. Assume that $\cR$ is purely infinite and that the centralizer $L(\cR)^{\vphi_\mu}$ has no amenable direct summand. Let $(M,\om)$ be the noncommutative generalized Bernoulli crossed product $M = (P,\om)^I \rtimes \Gamma$, where $P$ is amenable, $\om$ is a faithful normal state on $P$, $\Gamma$ is any countable group and the action $\Gamma \actson I$ has the properties that $\Stab(i)$ is amenable for all $i \in I$ and that there exists a $\kappa \in \N$ such that $\Stab(J)$ is finite whenever $J \subset I$ satisfies $|J| \geq \kappa$. If $\pi : L(\cR) \to M$ is an embedding of $L(\cR)$ as a von Neumann subalgebra of $M$ admitting a faithful normal conditional expectation $E : M \to \pi(L(\cR))$, then $\vphi_\mu \circ \pi^{-1} \circ E \prec_f \om$. \end{proposition} \begin{proof} We write $\vphi = \vphi_\mu$ and $N = L(\cR)$. We view $N$ as a von Neumann subalgebra of $M$ with the faithful normal conditional expectation $E : M \to N$. We still denote by $\vphi$ the faithful normal state $\vphi \circ E$ on $M$. By Lemma \ref{lem.HSV}, we have to prove that $L_\vphi(\R) \prec_f L_\om(\R)$ inside the continuous core $\core(M)$.
Take an arbitrary nonzero projection $p \in L_\vphi(\R)$ of finite trace and write $A = L_\vphi(\R) p$. We prove that $A \prec_f L_\om(\R)$. Note that $N^\vphi p$ commutes with $A$ and has no amenable direct summand. Denote $Q = A' \cap p \core(M) p$ and let $z \in \cZ(Q)$ be the maximal projection such that $A z \prec_f L_\om(\R)$. Put $z' = p-z$. Assume that $z' \neq 0$. We derive a contradiction. Write $\core(L(\Gamma)) = L_\om(\R) \vee L(\Gamma)$. By Theorem \ref{thm.def-rig-nc-bernoulli}, $Q \prec \core(L(\Gamma))$. Write $B = (L^\infty(X) \vee L_\vphi(\R)) p$. Since $L^\infty(X)$ commutes with $L_\vphi(\R)$, we have that $B \subset Q$. Thus, $B \prec \core(L(\Gamma))$. We prove that $L^\infty(X) \prec_M L(\Gamma)$. Assume the contrary. Denote by $E_{L(\Gamma)} : M \to L(\Gamma)$ the unique $\om$-preserving conditional expectation. Since $L^\infty(X) \not\prec_M L(\Gamma)$, we can take a sequence of unitaries $w_n \in \cU(L^\infty(X))$ such that $E_{L(\Gamma)}(x^* w_n y) \to 0$ $*$-strongly for all $x,y \in M$. We claim that $$\bigl\|E_{\core(L(\Gamma))}(x^* w_n y)\bigr\|_{2,\Tr} \to 0 \quad\text{for all $x,y \in \core(M)$ with $\Tr(x^* x) < +\infty$ and $\Tr(y^* y) < +\infty$.}$$ By density, it suffices to prove this claim for $x = x_1 x_2$ and $y = y_1 y_2$ with $x_1,y_1 \in M$ and $x_2,y_2 \in L_\om(\R)$ with $\Tr(x_2^* x_2) < +\infty$ and $\Tr(y_2^* y_2) < +\infty$. But then, $$E_{\core(L(\Gamma))}(x^* w_n y) = x_2^* \, E_{L(\Gamma)}(x_1^* w_n y_1) \, y_2 \; ,$$ so that the claim follows. We find in particular that $\bigl\|E_{\core(L(\Gamma))}(x^* w_n p \, y)\bigr\|_{2,\Tr} \to 0$ for all $x,y \in \core(M)$. Since $w_n p$ is a sequence of unitaries in $B$, this implies that $B \not\prec \core(L(\Gamma))$, which is a contradiction. So, we have proven that $L^\infty(X) \prec_M L(\Gamma)$. 
Take projections $e \in L^\infty(X)$, $q \in L(\Gamma)$, a nonzero partial isometry $v \in e M q$ and a normal unital $*$-homomorphism $\theta : L^\infty(X) e \to q L(\Gamma) q$ such that $a v = v \theta(a)$ for all $a \in L^\infty(X) e$. Write $B_1 = L^\infty(X) e$. Since $v^* v$ commutes with $\theta(B_1)$, also the support projection of $E_{L(\Gamma)}(v^* v)$ commutes with $\theta(B_1)$. We may cut down with this projection and thus assume that the support projection of $E_{L(\Gamma)}(v^* v)$ equals $q$. Next, we may also replace $e$ by the support of the homomorphism $\theta$ and assume that $\theta$ is faithful. Define $B_2 = \theta(B_1)$ and $$D_2 = \bigl\{ u \in q M q \bigm| \exists \be \in \Aut(B_2) \; , \; \forall b \in B_2 \; : \; u b = \be(b) u \bigr\}\dpr \; .$$ Define $D_1 = \cN_{eMe}(B_1)\dpr$. Note that $e L(\cR) e \subset D_1$. Whenever $u \in \cU(e M e)$ normalizes $B_1$, we get that $v^* u v \in D_2$. Write $s = v^* v$. Thus, $s \in D_2$ and $v^* D_1 v \subset s D_2 s$. Also, $vv^* \in D_1$. Since $L(\cR)$ has no amenable direct summand and $L(\cR) \subset M$ is with expectation, we conclude that $s D_2 s$ has no amenable direct summand. Let $z_0 \in \cZ(D_2)$ be the central support of $s$ in $D_2$. Then $D_2 z_0$ has no amenable direct summand. When $r \in B_2' \cap q L(\Gamma)q \subset D_2$ is a nonzero projection, since $q$ is equal to the support projection of $E_{L(\Gamma)}(s)$, we find that $rs \neq 0$. Thus, $r z_0 \neq 0$, so that $r D_2 r$ is nonamenable. Since this holds for every choice of $r$, it follows from Proposition \ref{prop.control-normalizer} that $D_2 \subset q L(\Gamma) q$. In particular, $L(\cR) \prec_M L(\Gamma)$. It follows that $L(\cR)$ has a direct summand that is finite, contradicting our assumption that $\cR$ is purely infinite. This final contradiction shows that $z'=0$. So, $L_\vphi(\R) \prec_f L_\om(\R)$ and the proposition is proven.
\end{proof} \begin{proof}[{Proof of Theorem \ref{thm.main3}}] If $G_1 \actson (Z_1,\zeta_1)$ is measure preserving, then $\stc(\al_1) = \delta_0$ and there is nothing to prove. We may thus assume that $G_1 \actson (Z_1,\zeta_1)$ is not measure preserving. As explained above, it follows that $\Gamma_1 \actson (X_1,\mu_1)$ is ergodic and of type III. Denote by $\vphi_i$ the canonical crossed product state on $M_i$. Assume that $\pi : M_1 \to M_2$ is any embedding with expectation. By Lemma \ref{lem.canonical-embedding}, we view $M_2$ as a von Neumann subalgebra of a generalized Bernoulli crossed product $N_2 = (P_2,\om_2)^{I_2} \rtimes \Gamma_2$, where $I_2 = \Gamma_2 / G_2$ and $P_2 = L^\infty(Z_2,\zeta_2) \rtimes G_2$, with crossed product state $\om_2$ on $P_2$. We still denote by $\om_2$ the natural state on $N_2$. There is a unique faithful normal conditional expectation $E : N_2 \to M_2$ satisfying $\om_2 = \vphi_2 \circ E$. The action $\Gamma_2 \actson I_2 = \Gamma_2 / G_2$ has amenable stabilizers and has the property that $\Stab\{i,j\} = \{e\}$ when $i \neq j$. We apply Proposition \ref{prop.main-technical} to the orbit equivalence relation $\cR_1 = \cR(\Gamma_1 \actson X_1)$, which is ergodic and of type III. Note that $L(\cR_1)$ is canonically isomorphic with $M_1$ and the state $\vphi_{\mu_1}$ in Proposition \ref{prop.main-technical} is equal to the canonical state $\vphi_1$ on the crossed product $M_1 = L^\infty(X_1,\mu_1) \rtimes \Gamma_1$. The centralizer $M_1^{\vphi_1}$ contains $L^\infty(X_1,\mu_1) \rtimes \Lambda_1$, which has trivial relative commutant in $M_1$. So, $M_1^{\vphi_1}$ is a nonamenable factor. By Proposition \ref{prop.main-technical}, we find that $\vphi_1 \circ \pi^{-1} \circ E \prec_f \om_2$. 
Using Proposition \ref{prop.class}, we find an atomic probability measure $\rho$ such that $$\class(\vphi_1 \circ \pi^{-1} \circ E) \prec \rho \ast \class(\om_2) \; .$$ Since $\class(\vphi_1) \prec \class(\vphi_1 \circ \pi^{-1} \circ E)$, we get that \begin{equation}\label{eq.almost-second-statement} \class(\vphi_1) \prec \rho \ast \class(\om_2) \; . \end{equation} We now prove that $\class(\vphi_1) = \stc(\al_1)$ and $\class(\om_2) = \stc(\al_2)$. By construction, for every $g \in \Gamma_1$, the measure $g \cdot \mu_1$ is of the form $g \cdot \mu_1 = \prod_{i \in I_1} (g_i \cdot \zeta_1)$, where $I_1 = \Gamma_1 / G_1$, with $g_i \in G_1$ and with all but finitely many $g_i$ equal to $e$. Moreover, any collection of such $g_i \in G_1$ can be realized by the appropriate choice of $g \in \Gamma_1$. Since $\class(\vphi_1)$ is the join of the measure classes $(\log d(g \cdot \mu_1)/ d\mu_1)_*(\mu_1)$, $g \in \Gamma_1$, it follows that $\class(\vphi_1)$ is the join of all convolution products of $(\log d(g \cdot \zeta_1)/ d\zeta_1)_*(\zeta_1)$, $g \in G_1$. This is precisely $\stc(\al_1)$. Secondly, $\class(\om_2)$ is the join of all convolution powers of $\class(\om_2|_{P_2})$. Since $\class(\om_2|_{P_2})$ is the join of the measure classes $(\log d(g \cdot \zeta_2)/ d\zeta_2)_*(\zeta_2)$, $g \in G_2$, it follows that $\class(\om_2) = \stc(\al_2)$. The second statement of the theorem thus follows from \eqref{eq.almost-second-statement}. To prove the first statement of the theorem, assume that $\pi$ is a $*$-isomorphism between $M_1$ and $M_2$. By symmetry, it suffices to prove that $\stc(\al_1) \prec \stc(\al_2)$. We apply Lemma \ref{lem.class}. Since $P_2 = L^\infty(Z_2) \rtimes G_2$, we may view $$N_2 = (P_2,\om_2)^{I_2} \rtimes \Gamma_2 = L^\infty\bigl((Z_2,\zeta_2)^{I_2}\bigr) \rtimes \cG_2 \; ,$$ where $\cG_2 = G_2 \wr_{I_2} \Gamma_2 = G_2^{(I_2)} \rtimes \Gamma_2$ is the generalized wreath product group that acts naturally on $(Z_2,\zeta_2)^{I_2} = (X_2,\mu_2)$. 
For every $g \in \cG_2$, we have that $u_g^* \si^{\om_2}_t(u_g) \in L^\infty(X_2,\mu_2) \subset M_2 = \pi(M_1)$. Since the linear span of $u_g L^\infty(X_2,\mu_2)$, $g \in \cG_2$, is dense in $L^2(N_2,\om_2)$, we certainly have the density of the linear span of $u_g M_2$, $g \in \cG_2$. Since $\vphi_1 \circ \pi^{-1} \circ E \prec \om_2$, since $\om_2 = \vphi_2 \circ E$ and since $M_1^{\vphi_1}$ is a factor, it follows from Lemma \ref{lem.class} that $\class(\vphi_1) \prec \class(\om_2)$. We have proven above that $\class(\vphi_1) = \stc(\al_1)$ and $\class(\om_2) = \stc(\al_2)$. So the theorem is proven. \end{proof} Theorem \ref{thm.main} is an immediate consequence of Theorem \ref{thm.main3}, as we show now. \begin{proof}[{Proof of Theorem \ref{thm.main}}] Given equivalent probability measures $\nu \sim \eta$ on a standard Borel space $Y$ and given a nonamenable group $\Lambda$, the nonsingular Bernoulli action of $\Gamma = \Z \ast \Lambda$ given by \eqref{eq.my-bernoulli} is isomorphic with the action defined by \eqref{eq.more-general-nonsingular} associated with $G = \Z \actson^\al (Z,\zeta) = \prod_{n \in \Z} (Y,\mu_n)$, where $\mu_n = \nu$ if $n \in \N$ and $\mu_n = \eta$ if $n \in \Z \setminus \N$. Define $\gamma = \stc((\log d\nu / d\eta)_*(\nu))$. By Theorem \ref{thm.main3}, it suffices to prove that $\gamma \sim \stc(\al)$. By definition, $\stc(\al)$ is the smallest stable measure class satisfying $(\log d(n \cdot \zeta)/d\zeta)_*(\zeta) \prec \stc(\al)$ for all $n \in \Z$. The measure $(\log d(n \cdot \zeta)/d\zeta)_*(\zeta)$ is equivalent with the $|n|$-fold convolution power of $(\log d\nu / d\eta)_*(\nu)$ or its opposite, depending on the sign of $n$. So, $\stc(\al) \sim \stc((\log d\nu / d\eta)_*(\nu)) = \gamma$. \end{proof} To prove Theorem \ref{thm.main2}, we use the following lemma about the relation between independent Borel sets and convolution products. 
This result is essentially contained in \cite[Section 3]{LP97}, but we provide an elementary proof for convenience. As mentioned in the introduction, recall that a Borel set $K \subset \R$ is said to be independent if every finite subset $\cF \subset K$ generates a free abelian subgroup of $\R$ of rank $|\cF|$. In other words, a Borel set $K \subset \R$ is independent if and only if for every $n$-tuple of distinct elements $x_1,\ldots,x_n \in K$, the homomorphism $\Z^n \to \R : \lambda \mapsto \sum_{k=1}^n \lambda_k x_k$ is injective. Also recall that given a measure class $\mu$ on $\R$, we define $\mutil$ by $\mutil(\cU) = \mu(-\cU)$ and we denote by $\stc(\mu)$ the join of the measure classes $\mu^{\ast n} \ast {\mutil}^{\ast m}$, $n,m \geq 0$. \begin{lemma}\label{lem.indep-K} Let $K \subset \R$ be an independent Borel set. We decompose any probability measure $\eta$ on $\R$ as the sum $\eta = \eta_a + \eta_c$ of an atomic and a nonatomic measure. \begin{enumlist} \item If $\eta$ is a nonatomic probability measure on $\R$, the set $C = \{x \in \R \mid \eta(x+K) > 0\}$ is countable. For any probability measure $\eta$ on $\R$, we define the measure class $\pi_K(\eta)$ on $K$ by $\pi_K(\eta) := \bigvee_{x \in C} (\delta_{-x} \ast \eta_c)|_{K}$, where $C = \{x \in \R \mid \eta_c(x+K) > 0\}$ is the countable set associated with the nonatomic part $\eta_c$. \item Let $\mu$ be a nonatomic probability measure on $\R$ and let $\eta$ be any probability measure on $\R$. Denote by $\eta_a$ the atomic part of $\eta$. Then, $\pi_K(\eta \ast \mu) \sim \pi_K(\eta_a \ast \mu)$. In particular, if also $\eta$ is nonatomic, then $\pi_K(\eta \ast \mu) = 0$. If $\eta_a \neq 0$, we conclude that $\pi_K(\eta \ast \mu) \sim \pi_K(\mu)$. \item For every probability measure $\mu$ on $\R$ and every atomic probability measure $\rho$ on $\R$, we have that $\pi_K(\rho \ast \stc(\mu)) \sim \pi_K(\mu_c) \vee \pi_K(\mutil_c)$. \item If $x \in \R$ and $\mu$ is a probability measure with $\mu(\R \setminus K) = 0$, then $\pi_K(\delta_x \ast \mu) \sim \mu_c$ and $\pi_K(\delta_x \ast \mutil) = 0$. 
\end{enumlist} \end{lemma} \begin{proof} We first prove the following two claims. \begin{enumlist}[label=(\roman*),labelwidth=4ex,leftmargin=4.5ex] \item If $x \in \R \setminus \{0\}$, then $(x+K) \cap K$ has at most one element. \item If $x \in \R$, then $(x-K) \cap K$ has at most two elements. \end{enumlist} To prove (i), if $(x+K) \cap K$ is nonempty and $x \neq 0$, we can write $x = a-b$ with $a,b \in K$ and $a \neq b$. If $a' \in (x+K) \cap K$, then $a' = x + b'$ with $a',b' \in K$, so that $a' - b' = a - b$ with $a' \neq b'$, and the independence of $K$ forces $a' = a$. Thus, $(x+K) \cap K = \{a\}$. To prove (ii), if $(x-K) \cap K$ is nonempty, we can write $x = a+b$ with $a,b \in K$. If also $x = a' + b'$ with $a',b' \in K$, the independence of $K$ forces $\{a',b'\} = \{a,b\}$. Thus, $(x-K) \cap K = \{a,b\}$. 1.\ Fix a nonatomic probability measure $\eta$ on $\R$. If $x \neq y$, it follows from (i) that $\eta((x+K) \cap (y+K)) = 0$. Since $\eta$ is a finite measure, the set $C = \{x \in \R \mid \eta(x+K) > 0\}$ is countable. 2.\ Fix $x \in \R$. Then, $(\eta \ast \mu)(x+K) = \int_\R \mu(-y+x + K) \, d\eta(y)$. There are only countably many $y \in \R$ such that $\mu(-y+x + K) > 0$. Thus, $$(\eta_c \ast \mu)(x+K) = \int_\R \mu(-y+x + K) \, d\eta_c(y) = 0 \; .$$ This holds for every $x \in \R$, so that $\pi_K(\eta_c \ast \mu) = 0$. It follows that $\pi_K(\eta \ast \mu) \sim \pi_K(\eta_a \ast \mu)$. By definition, $\pi_K(\delta_x \ast \mu) \sim \pi_K(\mu)$ for every $x \in \R$. Then also $\pi_K(\eta_a \ast \mu) \sim \pi_K(\mu)$, whenever $\eta_a \neq 0$. 3.\ Write $\rho_1 = \rho \ast \stc(\mu_a)$. Then, $\rho \ast \stc(\mu) \sim \rho_1 \ast \stc(\mu_c)$. By 2, we know that $\pi_K(\rho_1 \ast \mu_c^{\ast n} \ast \mutil_c^{\ast m}) = 0$ when $n + m \geq 2$. Since $\rho_1$ is atomic, also $\pi_K(\rho_1 \ast \delta_0) = 0$. Therefore, $$\pi_K(\rho \ast \stc(\mu)) \sim \pi_K(\rho_1 \ast \stc(\mu_c)) \sim \pi_K(\rho_1 \ast \mu_c) \vee \pi_K(\rho_1 \ast \mutil_c) \sim \pi_K(\mu_c) \vee \pi_K(\mutil_c) \; .$$ 4.\ By definition, $\pi_K(\delta_x \ast \mu) \sim \pi_K(\delta_x \ast \mu_c)$. 
When $y \neq x$, it follows from (i) that $(\delta_x \ast \mu_c)(y + K) = 0$. Also, $\delta_x \ast \mu_c$ is supported on $x + K$. Thus, $\pi_K(\delta_x \ast \mu) \sim \delta_{-x} \ast \delta_x \ast \mu_c = \mu_c$. We also have $\pi_K(\delta_x \ast \mutil) = \pi_K(\delta_x \ast \mutil_c)$. By (ii), for every $y \in \R$, we have $$(\delta_x \ast \mutil_c)(y + K) = \mutil_c(-x + y + K) = \mu_c(x-y-K) = \mu_c(K \cap (x-y-K)) = 0 \; .$$ This holds for all $y \in \R$ and thus, $\pi_K(\delta_x \ast \mutil) = 0$. \end{proof} We can then deduce the following consequence of Theorem \ref{thm.main3}. \begin{corollary}\label{cor.Bernoulli-independent-set} For $i \in \{1,2\}$, let $\nu_i \sim \eta_i$ be equivalent probability measures on a standard Borel space $Y_i$. Let $\Lambda_i$ be nonamenable groups. Denote $\Gamma_i = \Z \ast \Lambda_i$ and consider the nonsingular Bernoulli action $\Gamma_i \actson (X_i,\mu_i)$ given by \eqref{eq.my-bernoulli}. Write $\sigma_i = (\log d\nu_i / d\eta_i)_*(\nu_i)$. Denote $M_i = L^\infty(X_i) \rtimes \Gamma_i$. Let $K \subset \R$ be an independent Borel set and use the notation $\pi_K$ introduced in Lemma \ref{lem.indep-K}. \begin{enumlist} \item If $M_1 \cong M_2$, then $\pi_K(\sigma_1) \vee \pi_K(\sigmatil_1) \sim \pi_K(\sigma_2) \vee \pi_K(\sigmatil_2)$. \item If $M_1$ admits an embedding with expectation into $M_2$, then $\pi_K(\sigma_1) \vee \pi_K(\sigmatil_1) \prec \pi_K(\sigma_2) \vee \pi_K(\sigmatil_2)$. \end{enumlist} \end{corollary} \begin{proof} Considering $G_i = \Z \actson^{\al_i} (Z_i,\zeta_i) = \prod_{n \in \Z} (Y_i,\mu_{i,n})$, where $\mu_{i,n} = \nu_i$ if $n \in \N$ and $\mu_{i,n} = \eta_i$ if $n \in \Z \setminus \N$, we have seen in the proof of Theorem \ref{thm.main} that $\stc(\al_i) \sim \stc(\sigma_i)$. If $M_1$ admits an embedding with expectation into $M_2$, Theorem \ref{thm.main3} provides an atomic probability measure $\rho$ such that $\stc(\al_1) \prec \rho \ast \stc(\al_2)$. 
Thus, $\stc(\sigma_1) \prec \rho \ast \stc(\sigma_2)$. Applying $\pi_K$ and using Lemma \ref{lem.indep-K}.3, we conclude that $\pi_K(\sigma_1) \vee \pi_K(\sigmatil_1) \prec \pi_K(\sigma_2) \vee \pi_K(\sigmatil_2)$. If $M_1 \cong M_2$, also the converse absolute continuity holds. \end{proof} \begin{proof}[{Proof of Theorem \ref{thm.main2}}] In the context of Theorem \ref{thm.main2}, the measure $\sigma_i = (\log d\nu_i / d\eta_i)_*(\nu_i)$ is a translate of $\nu_i$ and $\nu_i$ is supported on $K$. By Lemma \ref{lem.indep-K}.4, we get that $\pi_K(\sigma_i) \sim \nu_i$ and $\pi_K(\sigmatil_i) = 0$. The result thus follows from Corollary \ref{cor.Bernoulli-independent-set}. \end{proof} \begin{example}\label{exam.full-with-standard-tau} Fix a compact independent set $K \subset \R$ such that $K$ is homeomorphic to a Cantor set (see e.g.\ \cite[Theorems 5.1.4 and 5.2.2]{Rud62}). Fix a countable nonamenable group $\Lambda$ and put $\Gamma = \Z \ast \Lambda$. Put $Y = [0,1] \cup K$. Given any nonatomic probability measure $\rho$ on $K$, define the probability measure $\nu_\rho$ on $Y$ as $(\lambda + \rho)/2$, where $\lambda$ is the Lebesgue measure on $[0,1]$. Define the probability measure $\eta_\rho$ on $Y$ by normalizing $\exp(-y) \, d\nu_\rho(y)$. Consider the associated nonsingular Bernoulli action $\Gamma \actson^{\al_\rho} (X,\mu_\rho)$ given by \eqref{eq.my-bernoulli}, with crossed product factor $M_\rho = L^\infty(X,\mu_\rho) \rtimes_{\al_\rho} \Gamma$. \begin{enumlist} \item $M_\rho$ is a full factor of type III and the Connes $\tau$-invariant of $M_\rho$ is the usual topology on $\R$. When $\Lambda$ has infinite conjugacy classes and is not inner amenable, this was proven in \cite[Proposition 7.1]{VW17}, but the result holds assuming only that $\Lambda$ is nonamenable, as we show in Lemma \ref{lem.full} below. \item Let $\rho$ and $\rho'$ be nonatomic probability measures on $K$. If $M_{\rho} \cong M_{\rho'}$, then $\rho \sim \rho'$. 
If $M_{\rho}$ admits an embedding with expectation into $M_{\rho'}$, then $\rho \prec \rho'$. Both statements follow from Corollary \ref{cor.Bernoulli-independent-set}: given a nonatomic probability measure $\rho$ on $K$, we have that $(\log d\nu_\rho / d\eta_\rho)_*(\nu_\rho)$ is a translate of $\nu_\rho$. Since $\rho$ is supported on $K$ and since $\lambda(x+ K) = 0$ for every $x \in \R$, it follows from Lemma \ref{lem.indep-K} that $\pi_K(\nu_\rho) \vee \pi_K(\widetilde{\nu_\rho}) \sim \rho$. \end{enumlist} \end{example} For completeness, we include a proof for the following result, which was proven in \cite[Proposition 7.1]{VW17} under the stronger assumption that $\Lambda$ has infinite conjugacy classes and is not inner amenable. \begin{lemma}\label{lem.full} Let $\nu \sim \eta$ be equivalent probability measures on a standard Borel space $Y$. Assume that $\nu$ and $\eta$ are not concentrated on a single atom. Let $\Lambda$ be a countable nonamenable group. Write $\Gamma = \Z \ast \Lambda$ and define the nonsingular Bernoulli action $\Gamma \actson (X,\mu)$ by \eqref{eq.my-bernoulli}. Then, the factor $M = L^\infty(X,\mu) \rtimes \Gamma$ is full and the $\tau$-invariant is the weakest topology on $\R$ that makes the map $$\R \to \cU(L^\infty(Y,\nu)) : t \mapsto \Bigl(\frac{d\nu}{d\eta}\Bigr)^{it}$$ continuous, where $\cU(L^\infty(Y,\nu))$ is equipped with the strong topology. \end{lemma} \begin{proof} Denote by $\vphi$ the canonical crossed product state on $M$. Write $Q = L^\infty(X,\mu) \rtimes \Lambda$. As in the proof of \cite[Proposition 7.1]{VW17}, it suffices to show the following: if $x_n \in \cU(M)$ is a sequence of unitaries such that $x_n a - a x_n \to 0$ $*$-strongly for every $a \in Q$, then $x_n - \vphi(x_n) 1 \to 0$ $*$-strongly. Note that $Q \subset M^\vphi$. There thus exists a unique $\vphi$-preserving conditional expectation $E : M \to L(\Lambda)$. 
In the proof of \cite[Proposition 7.1]{VW17}, it is shown that $x_n - E(x_n) \to 0$ $*$-strongly. Writing $y_n = E(x_n)$, we have found a bounded sequence in $L(\Lambda)$ satisfying $y_n a - a y_n \to 0$ $*$-strongly for every $a \in Q$. For every $h \in \Lambda$, denote by $\pi_h : L^\infty(Y,\eta) \to L^\infty(X,\mu)$ the state preserving embedding as the $h$-th coordinate. Choose an element $a \in L^\infty(Y,\eta)$ such that $\int_Y |a|^2 \, d\eta = 1$ and $\int_Y a \, d\eta = 0$. Denote by $(u_g)_{g \in \Lambda}$ the canonical unitaries, so that $(y_n)_g := \vphi(y_n u_g^*)$ are the canonical Fourier coefficients. We also write $\|x\|_{\vphi} = \vphi(x^* x)^{1/2}$ for all $x \in M$. Since $\pi_e(a)$ and $\pi_h(a)$ are orthogonal in $L^2(M,\vphi)$ for all $h \neq e$, a direct computation shows that $$\|y_n \pi_e(a) - \pi_e(a) y_n\|_\vphi^2 \geq 2 \sum_{h \in \Lambda \setminus \{e\}} |(y_n)_h|^2 = 2 \, \|y_n - \vphi(y_n)1\|_\vphi^2 \; .$$ Therefore, $y_n - \vphi(y_n)1 \to 0$ $*$-strongly. Then also $x_n - \vphi(x_n)1 \to 0$ $*$-strongly. \end{proof} \section{Solidity of nonsingular Bernoulli actions, proof of Theorem~\ref{thm.main-solid}} Recall from \cite{Oza03,VV05} that a von Neumann algebra $M$ is called \emph{solid} if for every diffuse von Neumann subalgebra with expectation $A \subset M$, the relative commutant $A' \cap M$ is amenable. If $B \subset M$ is a von Neumann subalgebra with expectation, recall from \cite{Mar16} that $M$ is said to be \emph{solid relative to $B$} if the following holds: for every nonzero projection $p \in M$ and nonamenable von Neumann subalgebra $Q \subset pMp$ with expectation and with diffuse center $\cZ(Q)$, we have that $Q \prec_M B$. 
Recall from \cite{CI08} that a countable nonsingular equivalence relation $\cR$ on a standard probability space $(X,\mu)$ is called \emph{solid} if for every Borel subequivalence relation $\cS \subset \cR$, there exists a partition of $X$, up to measure zero, into $\cS$-invariant Borel subsets $(X_n)_{n \geq 0}$ such that $\cS|_{X_0}$ is amenable and $\cS|_{X_n}$ is ergodic for all $n \geq 1$. This is equivalent to saying that for every diffuse von Neumann subalgebra $A \subset L^\infty(X)$, the relative commutant $A' \cap L(\cR)$ is amenable. Finally, an essentially free nonsingular action $\Gamma \actson (X,\mu)$ is said to be a \emph{solid action} if the orbit equivalence relation $\cR(\Gamma \actson X)$ is solid. \begin{definition}\label{def.solid-state} A faithful normal state $\vphi$ on a von Neumann algebra $M$ is said to be a \emph{solid state} if for every faithful normal state $\psi$ on $M$ with nonamenable centralizer $M^\psi$, we have that $\psi \prec \vphi$. \end{definition} Theorem \ref{thm.main-solid} is a special case of the following result. \begin{theorem} \label{thm.solid-more-general} Let $G$ be a countable amenable group and let $G \actson (Z,\zeta)$ be a nonsingular action on a nontrivial standard probability space. Let $\Lambda$ be a countable nonamenable group. Define $\Gamma = G \ast \Lambda$ and let $\Gamma \actson (X,\mu)$ be defined by \eqref{eq.more-general-nonsingular}. Put $M = L^\infty(X,\mu) \rtimes \Gamma$. \begin{enumlist} \item $\Gamma \actson (X,\mu)$ is a solid action. \item The factor $M$ is solid relative to $L(\Lambda)$. \item If $\Lambda$ is biexact, then $M$ is solid. \item If $\Lambda$ is biexact and $(\log d(g \cdot \zeta)/d\zeta)_*(\zeta)$ is nonatomic for every $g \in G \setminus \{e\}$, then the crossed product state $\vphi_\mu$ on $M$ is a solid state. 
\end{enumlist} \end{theorem} We first prove the following lemma, in which we also introduce some of the notation that will be used in the proof of Theorem \ref{thm.solid-more-general}. \begin{lemma}\label{lem.solid-more-general} Make the same assumptions as in Theorem \ref{thm.solid-more-general}. Write $I = (G \ast \Lambda)/G$. Denote by $\vphi$ the crossed product state on $M$ induced by $\mu$. For every $a \in G^{(I)}$, denote by $\mu_a \sim \mu$ the probability measure on $X$ defined by $\mu_a = \prod_{i \in I} a_i \cdot \zeta$. Denote by $\vphi_a$ the corresponding crossed product state on $M$. Let $p \in \core(M)$ be a projection of finite trace. Let $A \subset p \core(M) p$ be a von Neumann subalgebra whose relative commutant $A' \cap p \core(M) p$ has no amenable direct summand. Then one of the following statements holds. \begin{enumlist} \item $A \prec_{\core(M)} L(\Lambda) \vee L_\vphi(\R)$. \item $A \prec_{\core(M)} L_{\vphi_a}(\R)$ for some $a \in G^{(I)}$. \end{enumlist} If moreover $\Lambda$ is biexact, then the second statement always holds. \end{lemma} \begin{proof} Denote $P = L^\infty(Z,\zeta) \rtimes G$ and let $\om$ be the crossed product state on $P$ given by $\zeta$. Write $(N,\om) = (P,\om)^I$ and $\cN = N \rtimes \Gamma$. We still denote by $\om$ the crossed product state on $\cN$. Denote by $\theta : M \to \cN$ the embedding given by Lemma \ref{lem.canonical-embedding}. Note that there is a unique faithful normal conditional expectation $E : \cN \to \theta(M)$ such that $\vphi \circ \theta^{-1} \circ E = \om$. For the rest of the proof, we view $M$ as a subalgebra of $\cN$ and no longer write the canonical embedding $\theta$. We then also replace the notation $\om$ by $\vphi$. The conditional expectation $E$ induces an embedding $\core(M) \subset \core(\cN)$. Write $B = L_\vphi(\R)$ and $Q = \cN_{p \core(\cN) p}(A)\dpr$. 
Since $A' \cap p \core(\cN) p \supset A' \cap p \core(M) p$ has no amenable direct summand, by Theorem \ref{thm.def-rig-nc-bernoulli}, we can take a projection $z \in \cZ(Q)$ such that $Az \prec_{f,\core(\cN)} B$ and, with $z' = p - z$, we have $Q z' \prec_{f,\core(\cN)} B \vee L(\Gamma)$. Assume that $z' \neq 0$. We prove that $A z' \prec_{\core(\cN)} B \vee L(\Lambda)$. Assume the contrary. In particular, $Q z' \not\prec_{\core(\cN)} B \vee L(\Lambda)$. We may view $B \vee L(\Gamma)$ as the tensor product $L(G \ast \Lambda) \ovt B$ and hence, also as the amalgamated free product $(L(G) \ovt B) \ast_B (L(\Lambda) \ovt B)$. Since $Q$ contains the commuting subalgebras $A$ and $A' \cap p \core(\cN) p$, and since $A' \cap p \core(\cN) p$ has no amenable direct summand, our assumption that $A z' \not\prec_{\core(\cN)} B \vee L(\Lambda)$ and \cite[Theorem 4.2]{CH08} imply that $A z' \prec_{\core(\cN)} B \vee L(G)$. Our assumption says in particular that $A z' \not\prec_{\core(\cN)} B$, so that \cite[Theorem 2.4]{CH08} implies that $A' \cap p \core(\cN) p \prec_{\core(\cN)} B \vee L(G)$. This is a contradiction because $B \vee L(G)$ is amenable, while $A' \cap p \core(\cN) p$ has no amenable direct summand. Since $z$ and $z'$ cannot be both equal to zero, we have proven that $A \prec_{\core(\cN)} L(\Lambda) \vee L_\vphi(\R)$. To conclude the proof of the lemma, we assume that none of the two statements in the lemma hold and prove that $A \not\prec_{\core(\cN)} L(\Lambda) \vee L_\vphi(\R)$. Since the two statements in the lemma do not hold, we can choose a sequence of unitaries $w_n \in \cU(A)$ such that \begin{equation}\label{eq.we-have-this} \|E_{L(\Lambda) \vee L_\vphi(\R)}(x^* w_n y)\|_{2,\Tr} \to 0 \quad\text{and}\quad \|E_{L_{\vphi_a}(\R)}(x^* w_n y)\|_{2,\Tr} \to 0 \end{equation} for all $a \in G^{(I)}$ and $x,y \in p \core(M)$. Here, all conditional expectations are the unique trace preserving ones. 
It suffices to prove that \begin{equation}\label{eq.goal-here} \|E_{L(\Lambda) \vee L_\vphi(\R)}(x^* w_n y)\|_{2,\Tr} \to 0 \quad\text{for all $x,y \in p \core(\cN)$.} \end{equation} As in the proof of Theorem \ref{thm.main3}, we denote by $\Gamma \actson^\al G^{(I)}$ the natural action by translation, define the generalized wreath product group $\cG = G \wr_I \Gamma = G^{(I)} \rtimes_\al \Gamma$ and view $\cN$ as the crossed product $L^\infty(X) \rtimes \cG$. Define $i_0 \in I$ as the coset $i_0 = G$. Under this identification, $M = L^\infty(X) \rtimes \theta(\Gamma)$, where $\theta : \Gamma \to \cG$ is the injective group homomorphism uniquely determined by $\theta(g) = \pi_{i_0}(g) \, g$ and $\theta(\lambda) = \lambda$ for all $g \in G$, $\lambda \in \Lambda$. In particular, every $a \in G^{(I)}$ gives rise to a canonical unitary $u_a \in N$. We have $\vphi_a \circ E = \vphi \circ \Ad u_a^*$, so that $L_{\vphi_a}(\R) = u_a \, L_\vphi(\R) \, u_a^*$. By density, it suffices to prove \eqref{eq.goal-here} for $x = x_0 u_a$ and $y = y_0 u_b$ with $x_0, y_0 \in p \core(M)$ and $a,b \in G^{(I)}$. If $a=b=e$, then \eqref{eq.goal-here} follows immediately from \eqref{eq.we-have-this}. When $a$ and $b$ are not both equal to $e$, the set $\cF = \{\lambda \in \Lambda \mid \al_\lambda(b) = a\}$ is finite. Also, for $g \in \Gamma$, we have that $a^{-1} \theta(g) b \in \theta(\Lambda) = \Lambda$ if and only if $g \in \cF$. So, if $\cF = \emptyset$, we find that $$E_{L(\Lambda) \vee L_\vphi(\R)}(u_a^* x_0^* w_n y_0 u_b) = 0$$ for all $n \in \N$. When $\cF \neq \emptyset$, a direct computation shows that for every $x_1 \in \core(M)$, $$E_{L(\Lambda) \vee L_\vphi(\R)}(u_a^* x_1 u_b) = \sum_{\lambda \in \cF} u_a^* \, E_{L_{\vphi_a}(\R)}(x_1 \, u_\lambda^*) \, u_a \, u_\lambda \; .$$ Since for every $\lambda \in \cF$, we have by \eqref{eq.we-have-this} that $\|E_{L_{\vphi_a}(\R)}(x_0^* w_n y_0 u_\lambda^*)\|_{2,\Tr} \to 0$, again \eqref{eq.goal-here} follows. 
So the first part of the lemma is proven. Finally assume that $\Lambda$ is moreover biexact. Above, we have seen that $Q z' \prec_{\core(\cN)} B \vee L(\Lambda)$. We can view $B \vee L(\Lambda)$ as the tensor product $L(\Lambda) \ovt B$, where $B$ is abelian. By \cite[Propositions 11 and 12]{OP03}, any von Neumann subalgebra $D$ of a corner of $L(\Lambda) \ovt B$ having a nonamenable relative commutant intertwines into $B$. So we find that $A z' \prec_{\core(\cN)} B$. Since $z$ and $z'$ cannot both be zero, we get that $A \prec_{\core(\cN)} L_\vphi(\R)$. The argument above then shows that $A \prec_{\core(M)} L_{\vphi_a}(\R)$ for some $a \in G^{(I)}$. \end{proof} \begin{proof}[{Proof of Theorem \ref{thm.solid-more-general}}] We start by proving that $M$ is solid relative to $L(\Lambda)$. It suffices to prove the following statement: if $e \in M$ is a projection and $A \subset eMe$ is a diffuse abelian von Neumann subalgebra with expectation such that the relative commutant $Q = A' \cap eMe$ has no amenable direct summand, then $Q \prec_M L(\Lambda)$. Fix a faithful normal conditional expectation $E : eMe \to A$ and choose a faithful normal state $\psi$ on $A$. We still denote by $\psi$ the state $\psi \circ E$ on $eMe$. We identify $\core(eMe) = e \core(M) e$. Fix a nonzero projection $p \in L_\psi(\R)$ of finite trace. Then, $A p$ and $p \core_\psi(Q) p$ are commuting von Neumann subalgebras of $p \core(M) p$ and $p \core_\psi(Q) p$ has no amenable direct summand. By Lemma \ref{lem.solid-more-general} and using the notation introduced in that lemma, one of the following statements holds. \begin{itemlist} \item $A p \prec_{\core(M)} L(\Lambda) \vee L_\vphi(\R)$. \item $A p \prec_{\core(M)} L_{\vphi_a}(\R)$ for some $a \in G^{(I)}$. \end{itemlist} We claim that the second statement does not hold. Since $A$ is diffuse, we can choose a sequence $w_n \in \cU(A)$ such that $w_n \to 0$ weakly. 
Whenever $x_0,y_0 \in M$ and $x_1,y_1 \in L_{\vphi_a}(\R)$, we get that $$E_{L_{\vphi_a}(\R)}(x_1^* x_0^* \, w_n \, y_0 y_1) = x_1^* \, \vphi_a(x_0^* \, w_n y_0) \, y_1 \; .$$ Since $w_n \to 0$ weakly, it follows by density that $\|E_{L_{\vphi_a}(\R)}(x^* \, w_n \, y)\|_{2,\Tr} \to 0$ for all $x,y \in \core(M)$ with $\Tr(x^*x) < +\infty$ and $\Tr(y^* y) < +\infty$. In particular, $\|E_{L_{\vphi_a}(\R)}(x^* \, w_n p \, y)\|_{2,\Tr} \to 0$ for all $x,y \in \core(M)$. So, the claim is proven. It follows that $A p \prec_{\core(M)} L(\Lambda) \vee L_\vphi(\R)$. We now claim that $A \prec_M L(\Lambda)$. Assume the contrary. Denote by $E_{L(\Lambda)} : M \to L(\Lambda)$ the unique $\vphi$-preserving conditional expectation. Since $A$ is abelian and $A \not\prec_M L(\Lambda)$, we can take a sequence of unitaries $w_n \in \cU(A)$ such that $E_{L(\Lambda)}(x^* w_n y) \to 0$ $*$-strongly for all $x,y \in M$. If now $x_0,y_0 \in M$ and $x_1,y_1 \in L_\vphi(\R)$, we get that $$E_{L(\Lambda) \vee L_\vphi(\R)}(x_1^* x_0^* \, w_n \, y_0 y_1) = x_1^* \, E_{L(\Lambda)}(x_0^* \, w_n \, y_0) \, y_1 \; .$$ By density, we get that $\|E_{L(\Lambda) \vee L_\vphi(\R)}(x^* \, w_n \, y)\|_{2,\Tr} \to 0$ for all $x,y \in \core(M)$ with $\Tr(x^*x) < +\infty$ and $\Tr(y^* y) < +\infty$. In particular, $\|E_{L(\Lambda) \vee L_\vphi(\R)}(x^* \, w_n p \, y)\|_{2,\Tr} \to 0$ for all $x,y \in \core(M)$. This contradicts the statement that $A p \prec_{\core(M)} L(\Lambda) \vee L_\vphi(\R)$. So, the claim that $A \prec_M L(\Lambda)$ is proven. Choose projections $r \in A$ and $s \in L(\Lambda)$, a nonzero partial isometry $v \in r M s$ and a unital normal $*$-homomorphism $\theta : r A r \to s L(\Lambda) s$ such that $a v = v \theta(a)$ for all $a \in r A r$. Denote $D = \theta(r A r)' \cap s M s$. Let $\Theta : M \to P^I \rtimes \Gamma$ be the embedding given by Lemma \ref{lem.canonical-embedding}. Then, $\Theta(\theta(rAr))$ is a diffuse von Neumann subalgebra of a corner of $L(\Lambda)$. 
Since $\Lambda \cap \Stab i = \{e\}$ for every $i \in I$ and $\Theta(\theta(rAr))$ is diffuse, we get for every $i \in I$ that $\Theta(\theta(rAr)) \not\prec_{L(\Gamma)} L(\Stab i)$. It then follows from Proposition \ref{prop.control-normalizer} that $\Theta(D) \subset L(\Gamma)$. Since $\Theta(M) \cap L(\Gamma) = L(\Lambda)$, we conclude that $D \subset s L(\Lambda) s$. In particular $s_1 = v^* v$ belongs to $L(\Lambda)$. By construction, $v^* Q v \subset D$ and $r_1 = vv^*$ belongs to $Q$. In particular, $Q \prec_M L(\Lambda)$. So we have proven that $M$ is solid relative to $L(\Lambda)$. If $\Lambda$ is biexact, then $L(\Lambda)$ is solid by \cite{Oza03}. Since $M$ is solid relative to $L(\Lambda)$, it then follows that $M$ is solid, when $\Lambda$ is biexact. We next prove that $\Gamma \actson (X,\mu)$ is a solid action. Choose a diffuse von Neumann subalgebra $A \subset L^\infty(X)$. We have to prove that $A' \cap M$ is amenable. Assume that $A' \cap M$ is nonamenable. Since $L^\infty(X) \subset M$ is an inclusion with expectation, also $A \subset M$ and $A' \cap M$ are inclusions with expectation. Since $A \subset \cZ(A' \cap M)$ and $M$ is solid relative to $L(\Lambda)$, we find that $A' \cap M \prec_M L(\Lambda)$. A fortiori, $A \prec_M L(\Lambda)$. On the other hand, since $A$ is diffuse, we can take a sequence of unitaries $w_n \in \cU(A)$ such that $w_n \to 0$ weakly. For all $g,h \in \Gamma$ and $x,y \in L^\infty(X)$, we have $$E_{L(\Lambda)}(u_g^* x^* \, w_n \, y u_h) = \begin{cases} \vphi(x^* \, w_n \, y) \; u_g^* u_h &\;\;\text{if $g^{-1} h \in \Lambda$,}\\ 0 &\;\;\text{otherwise.}\end{cases}$$ The conditional expectation $E_{L(\Lambda)}$ is $\vphi$-preserving and the restriction of $\vphi$ to $L(\Lambda)$ is the canonical trace on $L(\Lambda)$. By density, we find that $\|E_{L(\Lambda)}(x^* \, w_n \, y)\|_{2,\vphi} \to 0$ for all $x, y \in M$. So, $A \not\prec_M L(\Lambda)$. 
This contradiction concludes the proof that $\Gamma \actson (X,\mu)$ is a solid action. Finally assume that $\Lambda$ is biexact and that $(\log d(g \cdot \zeta)/d\zeta)_*(\zeta)$ is nonatomic for every $g \in G \setminus \{e\}$. Let $\psi$ be a faithful normal state on $M$ such that $M^\psi$ is nonamenable. Take a nonzero central projection $e \in \cZ(M^\psi)$ such that $M^\psi e$ has no amenable direct summand. Fix a finite trace projection $q \in L_\psi(\R) \subset \core(M)$ such that the projection $p = e q$ is nonzero. Then, $L_\psi(\R) p$ is a von Neumann subalgebra of $p \core(M) p$ whose relative commutant contains $M^\psi p$ and hence, has no amenable direct summand. By the second part of Lemma \ref{lem.solid-more-general}, we find $a \in G^{(I)}$ such that $L_\psi(\R) p \prec_{\core(M)} L_{\vphi_a}(\R)$. By Lemma \ref{lem.HSV}, it follows that $\psi \prec \vphi_a$. In particular, $\vphi_a$ has a nonamenable centralizer. Denote by $\Gamma \actson^\al G^{(I)}$ the action by translation. Let $i_0 \in I$ be the coset $G$. Denote by $\pi_i : G \to G^{(I)}$ the embedding in the $i$-th coordinate. A map $c : \Gamma \to G^{(I)}$ is called an $\al$-cocycle if $c(gh) = c(g) \, \al_g(c(h))$ for all $g, h \in \Gamma$. Let $c : \Gamma \to G^{(I)}$ be the unique $\al$-cocycle satisfying $c(g) = \pi_{i_0}(g)$ for all $g \in G$ and $c(\lambda) = e$ for all $\lambda \in \Lambda$. A direct computation gives that $g \cdot \mu_a = \mu_{c(g) \, \al_g(a)}$ for all $g \in \Gamma$. Define the subgroup $L \subset \Gamma$ by $$L = \{g \in \Gamma \mid c(g) = a \, \al_g(a^{-1}) \} \; .$$ Since $(\log d(g \cdot \zeta)/d\zeta)_*(\zeta)$ is nonatomic for every $g \in G \setminus \{e\}$, we get that $(d \mu_b / d \mu_{b'})(x) \neq 1$ for a.e.\ $x \in X$ and all $b \neq b'$ in $G^{(I)}$. It follows that $(d(g \cdot \mu_a) / d\mu_a)(x) \neq 1$ for a.e.\ $x \in X$ and all $g \in \Gamma \setminus L$. 
It follows that $M^{\vphi_a}$ is a von Neumann subalgebra with expectation of $L^\infty(X) \rtimes L$. Since $M^{\vphi_a}$ is nonamenable, we conclude that $L$ is a nonamenable group. For every $b \in G^{(I)}$, we denote $|b| = \# \{i \in I \mid b_i \neq e\}$. We also define for every $g \in \Gamma = G \ast \Lambda$, the $G$-length $|g|_G$ as the minimal number of elements in $G$ one needs when writing $g$ as a product of elements in $G$ and elements in $\Lambda$. Thus, $|\lambda|_G = 0$ for all $\lambda \in \Lambda$ and $|g|_G = n$ whenever $g = \lambda_0 g_1 \lambda_1 \cdots \lambda_{n-1} g_n \lambda_n$ with $g_i \in G \setminus \{e\}$ for all $i$, $\lambda_i \in \Lambda \setminus \{e\}$ for all $i \in \{1,\ldots,n-1\}$ and $\lambda_0,\lambda_n \in \Lambda$. Another direct computation shows that $|c(g)| = |g|_G$ for all $g \in \Gamma$. For all $g \in L$, we have that $|g|_G = |c(g)| = |a \, \al_g(a^{-1})| \leq 2 \, |a|$. Hence, $g \mapsto |g|_G$ is bounded on $L$. Since $L$ is nonamenable, this implies that $L = g_0 \Lambda_0 g_0^{-1}$ for some $g_0 \in \Gamma$, where $\Lambda_0 \subset \Lambda$ is a nonamenable subgroup. Write $b = c(g_0)$. Since $c(\lambda) = e$ for all $\lambda \in \Lambda$, we get that $c(g) = b \, \al_g(b^{-1})$ for all $g \in g_0 \Lambda g_0^{-1}$. Hence, $a \, \al_g(a^{-1}) = b \, \al_g(b^{-1})$ for all $g \in L$. This means that $\al_g(b^{-1} a) = b^{-1} a$ for all $g \in L$. Since the nonamenable group $L$ acts with infinite orbits on $I$, we conclude that $a = b$. So, $\mu_a = \mu_{c(g_0)} = g_0 \cdot \mu$. Using the unitary $u_{g_0} \in M$, it follows that $\vphi_a$ and $\vphi$ are unitarily conjugate. We already proved that $\psi \prec \vphi_a$. It follows that $\psi \prec \vphi$. We have thus proven that $\vphi$ is a solid state on $M$. 
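For completeness, the identity $c(g) = b \, \al_g(b^{-1})$ for $g \in g_0 \Lambda g_0^{-1}$ used above can be spelled out from the cocycle relation. Since $e = c(e) = c(g_0) \, \al_{g_0}(c(g_0^{-1}))$, we get $c(g_0^{-1}) = \al_{g_0^{-1}}(b^{-1})$, so that for every $\lambda \in \Lambda$, using $c(\lambda) = e$,
$$c(g_0 \lambda g_0^{-1}) = c(g_0) \, \al_{g_0}\bigl(c(\lambda) \, \al_\lambda(c(g_0^{-1}))\bigr) = b \, \al_{g_0 \lambda}\bigl(\al_{g_0^{-1}}(b^{-1})\bigr) = b \, \al_{g_0 \lambda g_0^{-1}}(b^{-1}) \; .$$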
\end{proof} Having proven Theorem \ref{thm.main-solid}, it is tempting to believe that for any nonsingular Bernoulli action $\Gamma \actson (X,\mu)$ of a biexact group, the crossed product state $\vphi_\mu$ on $M = L^\infty(X,\mu) \rtimes \Gamma$ is a solid state. Having proven Theorem \ref{thm.main}, it is equally tempting to believe that one may recover the measure class $\class(\vphi_\mu)$ as an isomorphism invariant for any nonsingular Bernoulli crossed product $L^\infty(X,\mu) \rtimes \Gamma$ whenever $\mu$ is $\Lambda$-invariant for a nonamenable subgroup $\Lambda \subset \Gamma$. The following example shows that both statements are wrong. The example is very similar to the construction in \eqref{eq.my-bernoulli}, except that we consider the free product of two nonamenable groups. \begin{example}\label{ex.not-solid} Let $\Gamma = \Gamma_1 \ast \Gamma_2$ be an arbitrary free product of two countable nonamenable groups. The following construction provides a nonsingular Bernoulli action $\Gamma \actson (X,\mu)$ with the following properties. \begin{enumlist} \item The crossed product state $\vphi_\mu$ on $M = L^\infty(X,\mu) \rtimes \Gamma$ is not a solid state. \item The measure $\mu$ is $\Gamma_1$-invariant. There exists an equivalent product measure $\mu' \sim \mu$ that is $\Gamma_2$-invariant. The measure classes $\class(\vphi_\mu)$ and $\class(\vphi_{\mu'})$ are not equivalent. \end{enumlist} Since $\Gamma_2$ is nonamenable, not every element of $\Gamma_2$ has order $2$. Fix an element $a \in \Gamma_2$ of order at least $3$ (and possibly of infinite order). Define the map $\pi : \Gamma \to \Gamma_2$ by $\pi(h) = h$ for all $h \in \Gamma_2$ and $\pi(w h) = h$ whenever $h \in \Gamma_2$ and $w \in \Gamma$ is a reduced word in the free product $\Gamma_1 \ast \Gamma_2$ ending with a letter from $\Gamma_1 \setminus \{e\}$. Let $Y$ be a standard Borel space with equivalent probability measures $\nu \sim \eta$ on $Y$. 
Assume that $\nu$ and $\eta$ are not concentrated on a single atom. We will specify these measures later. Define the subset $W \subset \Gamma$ by $W = \pi^{-1}(\{e,a\})$. For every $g \in \Gamma$, define $\mu_g = \nu$ if $g \in W$ and $\mu_g = \eta$ if $g \in \Gamma \setminus W$. Since $\pi(g v) = \pi(v)$ for all $g \in \Gamma_1$ and $v \in \Gamma$, we have that $g W = W$ for all $g \in \Gamma_1$. When $h \in \Gamma_2$, one has $h W \setminus W = \{h , ha\} \setminus \{e,a\}$ and $W \setminus h W = \{e,a\} \setminus \{h,ha\}$. We conclude that $g W \sdif W$ is a finite set for every $g \in \Gamma$. So, $\Gamma \actson (X,\mu) = \prod_{g \in \Gamma} (Y,\mu_g)$ is a nonsingular Bernoulli action. The action is essentially free. By construction, the measure $\mu$ is $\Gamma_1$-invariant. So, $\Gamma_1 \actson (X,\mu)$ is a pmp Bernoulli action, which is thus ergodic. A fortiori, $\Gamma \actson (X,\mu)$ is ergodic. Next define $W' = W \setminus \Gamma_2 = W \setminus \{e,a\}$. Define $\mu'_g = \nu$ if $g \in W'$ and $\mu'_g = \eta$ if $g \in \Gamma \setminus W'$. Define the product measure $\mu' = \prod_{g \in \Gamma} \mu'_g$. Since $W' \sdif W$ is a finite set, we have that $\mu' \sim \mu$. We now have by construction that $h W' = W'$ for all $h \in \Gamma_2$. So, the measure $\mu'$ is $\Gamma_2$-invariant. Also note that for every $g \in \Gamma_1 \setminus \{e\}$, we have that $g W' \setminus W' = \{e,a\}$ and $W' \setminus g W' = \{g,ga\}$. Since $\Gamma_1 \actson (X,\mu)$ and $\Gamma_2 \actson (X,\mu')$ are ergodic, the centralizer of both states $\vphi_\mu$ and $\vphi_{\mu'}$ is a nonamenable factor. We prove that for the appropriate choice of $\nu$ and $\eta$, we have $\class(\vphi_\mu) \not\prec \class(\vphi_{\mu'})$. It then follows from point 2 of Proposition \ref{prop.class} that $\vphi_\mu \not\prec \vphi_{\mu'}$. Since the relation $\prec$ between states is symmetric, also $\vphi_{\mu'} \not\prec \vphi_\mu$, so that $\vphi_\mu$ is not a solid state. 
The measure classes $\class(\vphi_\mu)$ and $\class(\vphi_{\mu'})$ can be easily computed as follows. Define $\gamma = (\log d \nu / d \eta)_*(\eta)$. For every $g \in \Gamma$, we have that \begin{alignat*}{2} & (\log d(g \cdot \mu) / d \mu)_*(\mu) \sim \gamma^{\ast k} \ast \gammatil^{\ast l} \quad &&\text{with $k = |g W \setminus W|$ and $l = |W \setminus g W|$,}\\ & (\log d(g \cdot \mu') / d \mu')_*(\mu') \sim \gamma^{\ast k} \ast \gammatil^{\ast l} \quad &&\text{with $k = |g W' \setminus W'|$ and $l = |W' \setminus g W'|$.} \end{alignat*} A direct computation shows that $|g W \setminus W| = |W \setminus gW|$ for all $g \in \Gamma$ and that all elements of $\{0,1,2,\ldots\}$ appear as values. On the other hand, $|g W' \setminus W'| = |W' \setminus gW'|$ for all $g \in \Gamma$ but only the elements of $\{0,2,3,\ldots\}$ appear as values. We conclude that $$\class(\vphi_\mu) = \delta_0 \vee \bigvee_{k=1}^\infty (\gamma \ast \gammatil)^{\ast k} \quad\text{and}\quad \class(\vphi_{\mu'}) = \delta_0 \vee \bigvee_{k=2}^\infty (\gamma \ast \gammatil)^{\ast k} \; .$$ Assume that $\gamma$ is nonatomic and that $K \subset \R$ is an independent Borel set such that $\gamma(K) > 0$. Denote by $\gamma_0$ the restriction of $\gamma$ to $K$. Clearly, $\gamma_0 \ast \widetilde{\gamma_0} \prec \class(\vphi_\mu)$. We claim that $\gamma_0 \ast \widetilde{\gamma_0}$ is orthogonal to $\class(\vphi_{\mu'})$. To prove this claim, it suffices to observe that for every $x \in \R \setminus \{0\}$, the set $(x + (K-K)) \cap (K-K)$ is contained in finitely many translates of $K \cup (-K)$. Arguing as in the proof of Lemma \ref{lem.indep-K}, it follows that $(\eta \ast \gamma \ast \gammatil)(K-K) = 0$ for every nonatomic probability measure $\eta$. So, the restriction of $\class(\vphi_{\mu'})$ to $K-K$ equals $\delta_0$. On the other hand, $\gamma_0 \ast \widetilde{\gamma_0}$ is a nonatomic probability measure that is concentrated on $K-K$, hence proving the claim. 
Using the construction around \eqref{eq.setP}, we can give concrete examples where $\gamma$ is a nonatomic probability measure that is supported on $K$. \end{example} \section{Conjugacy results and proof of Proposition \ref{prop.some-isomorphism}} In this section, we prove Proposition \ref{prop.some-isomorphism}. We use the following well known lemma and provide a proof for completeness. \begin{lemma}\label{lem.dissipative-Z} Let $\eta \sim \nu$ be equivalent, but distinct probability measures on the standard Borel space $Y$. Define $\mu_n = \nu$ when $n \in \N$ and $\mu_n = \eta$ when $n \in \Z \setminus \N$. Then, the nonsingular Bernoulli action $$\Z \actson (Z,\zeta) = \prod_{n \in \Z} (Y,\mu_n)$$ is totally dissipative. \end{lemma} \begin{proof} Since $\eta \neq \nu$, we can choose a Borel set $U \subset Y$ such that $\eta(U) \neq \nu(U)$. Since $\eta \sim \nu$, we have that $\eta(U)$ and $\nu(U)$ are different from $0$ and $1$. Define the probability measures $\gamma_n$ on $\{0,1\}$ by $\gamma_n(0) = \nu(U)$ for $n \in \N$ and $\gamma_n(0) = \eta(U)$ for $n \in \Z \setminus \N$. Define the factor map $\pi : Y \to \{0,1\}$ by $\pi(y) = 0$ iff $y \in U$. By construction, $\pi_*(\mu_n) = \gamma_n$ for all $n \in \Z$. So, the nonsingular Bernoulli action $\Z \actson \prod_{n \in \Z} (\{0,1\},\gamma_n)$ is a factor of $\Z \actson (Z,\zeta)$. By \cite[Theorem 1]{Ham81}, the former is totally dissipative, so that also $\Z \actson (Z,\zeta)$ is totally dissipative. \end{proof} \begin{proof}[{Proof of Proposition \ref{prop.some-isomorphism}}] Write $\mu_{i,n} = \nu_i$ when $n \in \N$ and $\mu_{i,n} = \eta_i$ when $n \in \Z \setminus \N$. Consider the nonsingular Bernoulli actions $\Z \actson^{\be_i} (Z_i,\zeta_i) = \prod_{n \in \Z} (Y_i,\mu_{i,n})$. 
Since $\Gamma \actson^{\al_i} (X_i,\mu_i)$ is isomorphic with the action associated in \eqref{eq.more-general-nonsingular} to $\Z \actson (Z_i,\zeta_i)$, it suffices to prove that there exists a measure preserving conjugacy between $\be_1$ and $\be_2$. Denote by $\nu$ the probability measure on $\R$ given by $(\log d\nu_1 / d\eta_1)_*(\nu_1) = (\log d\nu_2 / d\eta_2)_*(\nu_2)$. Since $$\int_\R \exp(-t) \, d\nu(t) = \int_{Y_i} \frac{d\eta_i}{d\nu_i} \, d\nu_i = 1 \; ,$$ we can define the equivalent probability measure $\eta \sim \nu$ on $\R$ such that $(d\eta / d\nu)(t) = \exp(-t)$. Then $\pi_i = \log d\nu_i / d\eta_i$ is a factor map $\pi_i : Y_i \to \R$ satisfying $(\pi_i)_*(\nu_i) = \nu$ and $(\pi_i)_*(\eta_i) = \eta$ for all $i \in \{1,2\}$. Define the probability measures $\mu_n$ on $\R$ by $\mu_n = \nu$ when $n \in \N$ and $\mu_n = \eta$ when $n \in \Z \setminus \N$. Consider the nonsingular Bernoulli action $\Z \actson (Z,\zeta) = \prod_{n \in \Z} (\R,\mu_n)$. Then $\psi_i : (Z_i,\zeta_i) \to (Z,\zeta) : (\psi_i(z))_n = \pi_i(z_n)$ is a measure preserving, $\Z$-equivariant factor map. Denote by $(\nu_{i,t})_{t \in \R}$ the disintegration of $\nu_i$ along the measure preserving factor map $\pi_i : (Y_i,\nu_i) \to (\R,\nu)$. Similarly, denote by $(\eta_{i,t})_{t \in \R}$ the disintegration of $\eta_i$. Then, the disintegration $(\zeta_{i,z})_{z \in Z}$ of $\zeta_i$ along the factor map $\psi_i$ is given by $$\zeta_{i,z} = \prod_{n \in \N} \nu_{i,z_n} \times \prod_{n \in \Z \setminus \N} \eta_{i,z_n} \; .$$ We assumed that the function $\pi_i$ is not essentially one-to-one. We thus find $\eps > 0$ such that the set $$U_i = \{z \in \R \mid \;\text{the largest atom of $\nu_{i,z}$ has weight less than $1-\eps$}\;\}$$ has positive measure, $\nu(U_i) > 0$. For $\zeta$-a.e.\ $z \in Z$, there are infinitely many $n \in \N$ with $z_n \in U_i$. It follows that for $\zeta$-a.e.\ $z \in Z$, the product measure $\zeta_{i,z}$ is nonatomic. 
Denote by $\lambda$ the Lebesgue measure on $[0,1]$. By the classification of factor maps, at least going back to \cite{Mah83} (see also \cite[Theorem 2.2]{GM87}), we can choose a measure preserving isomorphism $\theta_i : (Z_i,\zeta_i) \to (Z \times [0,1],\zeta \times \lambda)$ such that $p_Z(\theta_i(z)) = \psi_i(z)$ for $\zeta_i$-a.e.\ $z \in Z_i$, where $p_Z(z,t) = z$ for all $(z,t) \in Z \times [0,1]$. We denote $\theta = \theta_2^{-1} \circ \theta_1$ and have found a measure preserving isomorphism $\theta : (Z_1,\zeta_1) \to (Z_2,\zeta_2)$ satisfying $\psi_2(\theta(z)) = \psi_1(z)$ for $\zeta_1$-a.e.\ $z \in Z_1$. By Lemma \ref{lem.dissipative-Z}, the Bernoulli action $\Z \actson (Z,\zeta)$ is totally dissipative. We can thus choose a Borel set $U \subset Z$ such that the sets $(n \cdot U)_{n \in \Z}$ form a partition of $Z$, up to measure zero. Write $U_i = \psi_i^{-1}(U)$. Then, $U_i \subset Z_i$ is a fundamental domain for the action $\Z \actson^{\be_i} (Z_i,\zeta_i)$. By construction, $\theta(U_1) = U_2$, up to measure zero. We can thus, essentially uniquely, define the nonsingular isomorphism $$\Theta : Z_1 \to Z_2 : \Theta(n \cdot z) = n \cdot \theta(z) \quad\text{if $z \in U_1$ and $n \in \Z$.}$$ By construction, $\Theta$ is $\Z$-equivariant. We claim that $\Theta$ is measure preserving. By construction, $$\frac{d(n \cdot \zeta_i)}{d\zeta_i} = \frac{d(n \cdot \zeta)}{d \zeta} \circ \psi_i \; .$$ Therefore, $$\frac{d(n \cdot \zeta_2)}{d\zeta_2} \circ \theta = \frac{d(n \cdot \zeta)}{d\zeta} \circ \psi_2 \circ \theta = \frac{d(n \cdot \zeta)}{d\zeta} \circ \psi_1 = \frac{d(n \cdot \zeta_1)}{d\zeta_1} \; .$$ Since $\theta$ is measure preserving, it then follows that the maps $z \mapsto n \cdot \theta( (-n) \cdot z)$ are measure preserving for all $n \in \Z$. Hence, $\Theta$ is measure preserving and the claim is proven. This concludes the proof of the proposition. \end{proof} \begin{example}\label{ex.some-isomorphism} Let $\gamma \in (0,1)$. 
On the finite set $Y_1 = \{1,2,3\}$, we consider the probability measures $\nu_1(1)=1/2$, $\nu_1(2) = \nu_1(3) = 1/4$ and $\eta_1(1) = \gamma$, $\eta_1(2)=\eta_1(3)=(1-\gamma)/2$. On the interval $Y_2 = [0,1]$, we consider the probability measures $\nu_2 \sim \eta_2$ where $\nu_2$ is the Lebesgue measure and $$\frac{d\eta_2}{d\nu_2}(t) = \begin{cases} 2 \gamma &\;\;\text{if $0 \leq t \leq 1/2$,}\\ 2(1-\gamma) &\;\;\text{if $1/2< t \leq 1$.}\end{cases}$$ For every countable group $\Lambda$, the associated nonsingular Bernoulli actions $\Gamma \actson (Y_i^\Gamma,\mu_i)$ of $\Gamma = \Z \ast \Lambda$ admit a measure preserving conjugacy, even though one base space is finite and the other base space is diffuse. \end{example}
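For completeness, here is a quick verification that both base-space pairs in Example \ref{ex.some-isomorphism} satisfy the hypotheses of Proposition \ref{prop.some-isomorphism} when $\gamma \neq 1/2$ (so that $\nu_i \neq \eta_i$). Neither $\pi_i = \log d\nu_i / d\eta_i$ is essentially one-to-one: $\pi_1$ identifies the points $2$ and $3$ of $Y_1$, while $\pi_2$ is constant on each half of $[0,1]$. Moreover, in both cases $\pi_i$ takes the value $-\log(2\gamma)$ on a set of $\nu_i$-measure $1/2$ and the value $-\log(2(1-\gamma))$ on a set of $\nu_i$-measure $1/2$, so that
$$(\log d\nu_1 / d\eta_1)_*(\nu_1) = (\log d\nu_2 / d\eta_2)_*(\nu_2) = \frac{1}{2} \, \delta_{-\log(2\gamma)} + \frac{1}{2} \, \delta_{-\log(2(1-\gamma))} \; .$$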
\section{INTRODUCTION} \label{sec:intro} The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX\footnote{\url{http://www.hetdex.org/}}; \citealt{hil08b, hil16a, geb21}) aims to tightly constrain the expansion history of the Universe and thus the evolution of dark energy by detecting and mapping the spatial distribution of about a million Lyman-$\alpha$ emitting galaxies (LAEs). The redshift range for LAE detection will be $1.9 < z < 3.5$ over a total of $\sim 540$ deg$^{2}$\ (11 Gpc$^3$ comoving volume). This survey is being carried out with the Visible Integral-field Replicable Unit Spectrograph (VIRUS, \citealt{hil18a})\footnote{VIRUS is a joint project of the University of Texas at Austin, Leibniz-Institut f{\" u}r Astrophysik Potsdam (AIP), Texas A\&M University (TAMU), Max-Planck-Institut f{\" u}r Extraterrestrische Physik (MPE), Ludwig-Maximilians-Universit{\" a}t M{\" u}nchen, Pennsylvania State University, Institut f{\" u}r Astrophysik G{\" o}ttingen, University of Oxford, and the Max-Planck-Institut f{\" u}r Astrophysik (MPA).}. VIRUS is a highly-replicated integral field spectrograph \citep{hil14}, designed for blind spectroscopic surveys, where ``high'' or ``large-scale'' replication is defined as consisting of greater than 100 copies of a base instrument. VIRUS is composed of a set of 156 integral field spectrographs that produce about 35,000 spectra with spectral range 3500$-$5500~\AA\ and resolving power $R = \lambda/\Delta\lambda \simeq 800$ (at 4500 \AA, $\Delta\lambda \simeq 5.6$ \AA) in a single observation that covers a 56 arcmin$^{2}$\ area within an 18 arcmin diameter field (fill factor $\simeq$ 1$/$4.5).
Achieving the HETDEX goals has required both the development and implementation of VIRUS and a major rebuild and enhancement of the Hobby-Eberly Telescope\footnote{The Hobby-Eberly Telescope is operated by McDonald Observatory on behalf of the University of Texas at Austin, Pennsylvania State University, Ludwig-Maximilians-Universit{\" a}t M{\" u}nchen, and Georg-August-Universit{\" a}t, G{\" o}ttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly.} with a much larger field of view of 22 arcmin diameter. This paper is focused on describing these innovations and evaluating the performance of the HET Wide Field Upgrade (WFU) and VIRUS against the requirements for HETDEX. The HET (\citealt{lwr94,lwr98}, Figure~\ref{HETlayout}, \edit1{Table~\ref{tab-het}}) is an innovative telescope that has an 11 m hexagonal-shaped spherical primary mirror made from 91 identical 1-m hexagonal segments, and that points at a fixed zenith angle of 35$^\circ$. The HET can be moved in azimuth on air-bearings to access about 70\% of the sky visible at McDonald Observatory (declination $-10.3^\circ \leq \delta \leq +71.6^\circ$). Primary mirror alignment is achieved using instruments located in the Center of Curvature Alignment System tower, accessed once or twice per night at a particular azimuth. The nature of HET requires that observations be 100\% queue-scheduled \citep{shet07}. The pupil was originally 9.2 m in diameter, set by the design of the prime focus spherical aberration corrector, and sweeps over the primary mirror as the x-y tracker follows objects for between 50 minutes (in the south at $\delta = -10.0^\circ$) and 2.8 hours (in the north at $\delta = +67.2^\circ$). The original 4-mirror double-Gregorian type corrector \citep{jung99} had a 4 arcmin (50 mm) diameter science field of view. \edit1{Table~\ref{tab-het} presents the basic properties of the original HET and of the upgraded HET, as described in this paper.
The upgrade increased telescope aperture and field of view, while accessible sky and track times remain the same between the original and upgraded HET.} HET is located on Mt. Fowlkes at McDonald Observatory in west Texas. The site is characterized by extremely dark skies and typical continental site median seeing of $\sim$1\farcs0 full-width at half-maximum (FWHM, \citealt{barker03}). The original purpose of the HET was to conduct spectroscopic surveys, using its large primary mirror to enable observations of many targets in a short period of time. However, the 4$\arcmin$ field of view limited the HET in most cases to observations of one target at a time. This clearly was inadequate for the goals of a large program such as HETDEX. Therefore, a consortium was formed to re-purpose the HET with a complete redesign of all mechanical and optical components beyond the 11 m primary mirror. In parallel, the massive VIRUS instrument was designed and implemented to observe for the first time large areas of sky in a blind spectroscopic survey. The new instrument suite for the upgraded HET emphasizes the telescope's strengths in large surveys and synoptic time-domain spectroscopy. All the new instrumentation is fiber-fed so as to exploit the azimuthal scrambling inherent to fiber transmission. This scrambling is particularly important for a telescope with a variable pupil illumination (such as the HET). There are two low-resolution fiber integral field spectrographs: VIRUS, and the second-generation Low Resolution Spectrograph (LRS2, \citealt{chonis16, hil21}), and two fiber-fed high-resolution spectrographs: the Habitable-zone Planet Finder (HPF, \citealt{mahadevan18}) and a forthcoming upgrade of the HET High Resolution Spectrograph (HRS, \citealt{tull98}); the latter two instruments reside in temperature-controlled enclosures located in the basement inside the telescope pier.
LRS2 is based on two VIRUS spectrograph units with the gratings replaced by higher-dispersion grisms that span 3700--10500~\AA\ in four spectrograph channels. The units are designated LRS2-B and LRS2-R, and each is fed by a separate lenslet-coupled integral field unit (IFU) with 6 $\times$ 12 arcsec$^{2}$\ field coverage. Section~\ref{sec:design} presents the overall design requirements for the upgraded HET and VIRUS. Components of the HET wide field upgrade (WFU) are discussed in \S\ref{sec:wfupgrade} and the design of VIRUS and its support infrastructure in \S\ref{sec:virus}. Performance of VIRUS is reviewed in \S\ref{sec:Vperformance}. Observing with the HETDEX instrumentation is described in \S\ref{sec:WFUperformance} along with the performance of the current HETDEX system. Example spectra from VIRUS are presented in \S\ref{sec:spectra}. Conclusions are summarized in \S\ref{sec:summary}. An appendix lists the acronyms used. \medskip \section{HET AND VIRUS DESIGN REQUIREMENTS FOR HETDEX}\label{sec:design} The requirement to survey large areas of sky with VIRUS, plus the need to acquire wavefront sensing stars to provide full feedback on the tracker position, led us to design an ambitious new corrector employing meter-scale aspheric mirrors and covering a 22-arcmin diameter field of view. The WFU (\citealt{hil18b, lee21}) deploys the wide field corrector (WFC, \citealt{burge10,oh14,good14a,lee16a}), a new tracker \citep{good18}, a new prime focus instrument package (PFIP, \citealt{vattiat14}), new software control systems \citep{beno12,rams16,rams18}, and new metrology systems \citep{lee18a,lee18b}. The metrology systems provide closed-loop feedback on all axes of motion and the optical configuration of the telescope. The systems include guiding, wavefront sensing, payload tilt sensing, and a distance measuring interferometer.
Together, these instruments control the alignment of the WFC to the primary mirror and provide feedback on the temperature-dependent radius of curvature of the segmented primary mirror, which is mounted on a steel truss\footnote{The WFC is named the Harold C. Simmons Dark Energy Optical System.}. The upgrade left the primary mirror and telescope enclosure unchanged. The timetable for the Wide Field Upgrade is presented in Table~\ref{tab-chron}. Table~\ref{tab-wfupgrade} presents the high-level technical requirements for the WFU, derived from the original HETDEX science requirements as established for the Preliminary Design Review in 2008. The science requirements evolved in the subsequent decade, but there are no new requirements for which there is a technical shortfall. The WFU had additional requirements driven by other science use cases, but only the HETDEX criteria are summarized here. VIRUS was optimized for HETDEX. The design of VIRUS flows directly from the requirements for HETDEX (\citealt{hil08b, hil16a, geb21}): to maximize the number of LAEs detected in a set observing time, and to span sufficient redshift range to survey the required volume. These science requirements flow down to the following technical requirements for VIRUS: \begin{itemize} \item Coverage of $\Delta z \sim 2$ and coverage into the ultraviolet to detect LAEs at the lowest feasible redshift. Analysis of the expected number of LAEs with redshift also indicates that the majority of the objects are located at $z < 3.5$ due to the change in distance modulus with redshift, coupled with the steepness of the LAE luminosity function. VIRUS is designed for $3500 < \lambda < 5500$~\AA, or Lyman-$\alpha$ at redshift $1.9 < z < 3.5$. \item Area coverage of at least 50 arcmin$^{2}$\ per observation. \item Utilize fiber integral field units (IFU) to keep the weight of the spectrographs off the moving payload of the HET.
\item Fiber core diameter of 1.5 arcsec (266 $\mu$m) for optimal detection of LAEs in the typical image quality delivered by HET (1.3 to 2.0 arcsec FWHM). \item Resolution matching the linewidth of LAEs (resolving power $R \sim 700$ or greater) to maximize detectability. Note that $R \sim 2000$ would be required to split the [OII]$\lambda$3727 doublet associated with low-redshift interloper emission line galaxies. These objects are discriminated from LAEs through equivalent width thresholds \citep{gron07, leung17, far21, geb21}, so optimum detection of LAEs and larger survey volume (redshift range) were chosen in the trade-off against higher spectral resolution. \item Low read-noise detectors ($\sim$3 electrons) to achieve equality between sky-background noise and read noise in 360 seconds at the shortest wavelengths. \item High stability to ambient temperature variations, although not to gravity vector variations, since the VIRUS modules have a fixed altitude orientation, mounted in large enclosures. \item Throughput sufficient to reach an emission line sensitivity of 4~$\times$~10$^{-17}$~erg~cm$^{-2}$~s$^{-1}$ at 4500~\AA\ in 20 minutes on HET. Combined with the IFU area coverage, this sensitivity is derived from the requirement to detect $\sim$200 LAEs per observation or 2.5 LAEs per IFU, for an average of $\sim$1 LAE per arcmin$^{2}$. \item Simple, inexpensive design that can be replicated in quantity. \end{itemize} These overarching technical requirements emphasize the ability to cover area quickly and require a highly-multiplexed spectrograph, which was achieved through large-scale replication \citep{hil14} of modular integral field spectrograph units. Before embarking on developing the replicated VIRUS units, a prototype of a single spectrograph was constructed in~2006 and tested on the 2.7~m Harlan J. Smith Telescope at McDonald Observatory.
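As a rough consistency check on the fiber-core requirement above (a sketch only: the $\sim$10 m diameter of the upgraded HET pupil is assumed here, as it is not quoted in this section), the angular and physical fiber core sizes together fix the plate scale at prime focus, and hence the effective focal ratio:
$$\frac{266~\mu\mathrm{m}}{1.5~\mathrm{arcsec}} \simeq 177~\mu\mathrm{m~arcsec}^{-1} \quad\Longrightarrow\quad f \simeq 206265 \times 177~\mu\mathrm{m} \simeq 36.6~\mathrm{m} \; ,$$
i.e.\ roughly f/3.7 for a 10 m pupil, consistent with the fast focal ratio adopted to couple efficiently to fibers while limiting focal ratio degradation.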
The motivation for building the VIRUS prototype (the George and Cynthia Mitchell Spectrograph, formerly VIRUS-P, \citealt{hil08a}) was to provide an end-to-end test of the concepts behind HETDEX, both instrumental and scientific. A number of key design choices were made and tested with the Mitchell Spectrograph, including the decision to use bare fibers rather than lenslet coupling at the IFU input and a catadioptric optical design for the spectrographs \citep{hil04, hil06}. The upgraded HET has a fast focal ratio in order to couple efficiently to fibers, reducing focal ratio degradation (FRD). The layout of the fibers in VIRUS IFUs, with 1/3 fill-factor, is optimal for covering area, since a dither-pattern of three exposures exactly fills the field of the IFU, while maximizing the area covered per IFU. Lenslet-coupled fiber IFUs have the principal advantages of providing contiguous coverage of a small field, and allowing the slow focal-ratio beams of large telescopes to be coupled efficiently to fibers (e.g. \citealt{all-smith02}). However, lenslets suffer from inefficiencies in the coupling due to lens quality and diffraction effects. This was particularly the case with lenslet technologies available in 2006, when the properties of VIRUS were being cemented. If the fiber core is oversized to mitigate these coupling effects, then resolution is lost \citep{hil04}. For fibers fed at the same focal ratio, the bare fiber bundle will cover the same area of sky as does a lenslet system, both in three exposures, but offers higher overall throughput and lower cost. \edit1{Differential atmospheric refraction (DAR) has a magnitude of 0\farcs95 over the bandpass of VIRUS at the mean HET airmass of 1.25. The fixed elevation of HET assures that the DAR relation varies very little and is always aligned in the same direction with respect to the IFU fiber pattern.
An atmospheric dispersion corrector is not needed for VIRUS, since the dither pattern of three exposures fills in the area of the IFUs; light lost from the aperture of one fiber falls onto an adjacent fiber position in the full 3-position dither pattern. Emission line objects are positioned randomly with respect to the fiber pattern, so the DAR only affects the position of the detection, which is corrected to a common wavelength. } Another design feature of the IFUs that was established early was the number of fibers per IFU and the distribution of IFUs within the HET field of view. Each spectrograph can accommodate 224 fibers, with adequate gaps to allow the spectrum of each fiber to be isolated on the detector. Packaging two spectrograph channels in a unit proved most efficient for space and other factors (see \S\ref{sec:design}), so each IFU has 448 fibers and covers 51 $\times$ 51 arcsec$^{2}$. With that building block size, the total field of view of the upgraded HET, and the required area and number of LAEs per observation, a grid of IFUs results with 100\arcsec\ separation and $\sim$1/4 fill-factor. This layout has the advantage of allowing contiguous areas to be mapped in four observations, except for a central area reserved for other HET instruments (\S\ref{sec:virus}). Non-uniformity of the window function of the observation is not important on the scale of the IFU separation, which is much smaller than scales probed by the power spectrum of LAEs to be measured by HETDEX \citep{chiang13}. Refractive camera designs were explored initially for the VIRUS prototype spectrograph \citep{hil04}, with the aim of keeping the format of the units as small as possible. However, the requirement to work at 3500 \AA\ led to an optical design of significant complexity with calcium fluoride \edit1{and fused silica.} Following this design investigation, a catadioptric Schmidt camera with the charge coupled device (CCD) detector at its focus was adopted.
The camera design was simpler \edit1{and less expensive}, though it necessitated a factor of two larger beam size; the image quality was better and the throughput was equivalent. Construction and testing of the prototype verified the opto-mechanical design, the throughput, the stability, and the sensitivity, and demonstrated the utility of such an instrument for surveys of emission-line objects. It also served as a test-bed for the software development required for analyzing the data from the full VIRUS array. The Mitchell Spectrograph was used to perform the HETDEX Pilot Survey of Ly-$\alpha$ emitting galaxies (\citealt{adams11,blanc11}). The results of the Pilot Survey confirmed the sensitivity estimates on which HETDEX is based, and demonstrated the effectiveness of blind IFU spectroscopy for this scientific application. The requirements outlined above were flowed down to properties of the WFU and VIRUS that are described in the following sections. \section{OVERVIEW OF THE HET WIDE FIELD UPGRADE}\label{sec:wfupgrade} \begin{figure} \epsscale{1.0} \plotone{HETlayout.pdf} \caption{ \label{HETlayout} \footnotesize The layout of the HET with rendering of the WFU and VIRUS superimposed. This view is looking towards the south-east. The WFU replaces the top end of the HET with a new tracker, wide field corrector (WFC), and prime focus instrument package (PFIP). The VIRUS spectrograph units are housed in two enclosures on either side of the structure, which are mounted on the VIRUS support structure, and fed by 35,000 fibers from the prime focus. The main telescope structure, primary mirror, and alignment instruments in the Center of Curvature Alignment System tower remained unchanged from the original, except that the portions of the center of curvature tower within the HET field of view were painted black.
} \end{figure} The basic configuration of the HET is unchanged in the upgrade (Figure~\ref{HETlayout}), but the new tracker has a much higher payload capacity of 3 metric tonnes, a five-fold increase, to accommodate the new wide field corrector (WFC) and prime focus instrument package (PFIP). Of particular note is the large volume taken up by the enclosures for VIRUS and the additional structure required for their support and movement. Detailed summaries of the development of the WFU from conception to installation are given in \citet{hil18b} and in \citet{lee21}. The following subsections provide an overview of the project. \subsection{HET Principles of Operation}\label{subsec:operation} The HET requires constant monitoring and updating of the position of the moving components relative to the optical axis of the primary mirror, in order to deliver high quality images. This axis changes constantly as the telescope tracks, so the telescope has to maintain strict tolerances on six degrees of positional freedom, plus time. Tilts of the WFC cause comatic images, and axial errors cause defocus and a change in plate scale. In addition, the global radius of curvature of the primary mirror varies slowly with temperature (as the segmented primary mirror is essentially a glass veneer on a steel truss), and must be monitored and updated during the night. The metrology subsystems implemented to provide the necessary feedback on these degrees of freedom are discussed in \citet{lee18b} and in more detail in \S\ref{subsec:pfip}. An overview is given here before the subsystems are described in more detail. The subsystems that are involved in every track include two guideprobes, two operational wavefront sensors, a tip-tilt sensor, and a distance measuring interferometer. These are augmented by an acquisition camera, a calibration wavefront sensor, a pupil viewer camera, and a bore-sight imager, which are used periodically to verify internal alignment.
The remaining degree of freedom is the rotation angle on the sky, which is not monitored directly. The encoding of the rotation is sufficient to meet requirements. Observing starts with primary mirror alignment, utilizing the center of curvature alignment system instrumentation in the center of curvature tower. With the telescope moved in azimuth to point at the center of curvature tower, the process first involves setting the position of the instrument suite \citep{booth03} using a leg of the distance measuring interferometer, reflecting off the primary mirror. This process sets the radius of curvature of the primary mirror. The other center of curvature instruments are the Mirror Alignment Recovery System \citep{wolf03} that stacks the mirror segments, the Hartman Extra-focal Instrument \citep{pp04, booth04} that illuminates the primary mirror and can image the return in and out of focus, and a Wavescope\footnote{from Adaptive Optics Associates}, used to examine the overall figure of the primary mirror. The Mirror Alignment Recovery System is a Shack-Hartmann based sensor with a lenslet array matched to the 91 mirror segments. The system utilizes an internal light source to illuminate the HET and a reference mirror to provide focused spot locations from the required spherical surface. Centroids of the HET mirror segment spots are compared to the reference spot locations to measure tip/tilt misalignments of each segment. At the start of the night the first primary mirror alignment takes about 30 minutes. During this period, calibrations of the science instruments are obtained. Once aligned, the segment alignment maintenance system, employing inductive edge sensors, maintains the alignment between the edges of the segments and provides metrics on the quality of the mirror segment alignment over time \citep{booth03, pp04}. Additional alignment during the night is driven by large ambient temperature changes. 
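The conversion at the heart of the Mirror Alignment Recovery System, from measured spot centroid offsets to segment tip/tilt, can be sketched numerically. The lenslet focal length below is a hypothetical value chosen for illustration only, not an actual parameter of the instrument:

```python
import numpy as np

def segment_tip_tilt(centroids, reference, f_lenslet_mm):
    """Convert Shack-Hartmann spot offsets (mm) to segment tip/tilt (arcsec).

    centroids, reference : (N, 2) arrays of measured and reference spot
    positions, one row per mirror segment.

    The local wavefront slope is dx / f_lenslet; because the light
    reflects off the segment, the mechanical tilt is half of that slope.
    """
    RAD2ARCSEC = 206265.0
    offsets = np.asarray(centroids) - np.asarray(reference)  # mm
    slopes = offsets / f_lenslet_mm                          # rad of wavefront slope
    return 0.5 * slopes * RAD2ARCSEC                         # arcsec of segment tilt

# One segment whose spot is displaced 10 microns, with a hypothetical
# 100 mm effective lenslet focal length:
tilt = segment_tip_tilt([[0.010, 0.0]], [[0.0, 0.0]], 100.0)
print(tilt)  # about 10.3 arcsec of tip
```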
Re-stacking of the primary mirror is at the discretion of the telescope operator, and occurs either when segment alignment maintenance system metrics indicating the quality of the stack exceed acceptable levels, or when there is a segment obviously misaligned and visible on a star image. At most, one additional alignment is needed during the night and takes 15 minutes. Once set up on target, there is a hierarchy of feedback cadence for different metrology loops. The first level of metrology involves the WFC position, monitored by two guideprobes and two operational wavefront sensors on a cadence of a few to a few tens of seconds, depending on star brightness and conditions. These loops constrain sky position (and time at that position) and WFC alignment to the optical axis (focus and tilt), based on direct measurement of light from stars. Drifts in alignment happen more slowly than changes in position and so are averaged and updated on minute timescales. The tip-tilt sensor and distance measuring interferometer provide secondary direct measures of the physical distance and alignment of the payload to the primary mirror. These run on cadences of 1 and 10 seconds, respectively. Measurement of the physical separation between the WFC and the primary mirror when the telescope is in focus provides a constraint on the primary mirror global radius of curvature. Updates to global radius of curvature are sent to the segment alignment maintenance system between targets, as needed, driven primarily by temperature changes. The plate scale is not monitored directly. The focal length of the system is naturally constrained by the monitoring of the radius of curvature of the primary mirror, to a level of accuracy far better than could be measured from the separation of guide stars, for example.
\subsection{Tracker}\label{subsec:tracker} The new tracker is a third-generation evolution of the trackers for HET and the Southern African Large Telescope (SALT, \citealt{buckley06}), and is in essence a precision six-axis stage (Figure~\ref{HETupgrade}) with a high payload. Its purpose is to position the WFC accurately normal to, and at the correct distance from, the primary mirror vertex as it follows the sky motion of targets. The tracker \citep{good14b,good18} was developed jointly by The University of Texas at Austin McDonald Observatory (MDO) and the Center for Electro-mechanics. Details of its deployment and commissioning are provided in \citet{hil16c}. The tracker bridge spans the upper hexagon of the telescope structure, moving on two x-axis stages with skew sensing to maintain alignment. A carriage moves up and down the tracker bridge (the y-axis), and supports the hexapod that provides the fine adjustment in the other degrees of freedom. The hexapod actuators were manufactured by ADS International\footnote{located in Valmadrera, Italy} in collaboration with the Center for Electro-mechanics and MDO \citep{zier12}. The total volume of motion is about $7\times7\times4$~m$^3$, and the required accuracy under metrology feedback is on the order of 15 $\mu$m in physical position, 4$\arcsec$ in tilt, and 10 ms in time. \begin{figure} \epsscale{1.0} \plotone{HETupgrade.png} \caption{ \label{HETupgrade} \footnotesize Images of the completed HET upgrade. Left panel: View from behind the primary mirror (the mirror truss is the turquoise frame), showing the new tracker centered in the upper hexagon of the telescope structure. The VIRUS enclosures are the black-paneled structures on either side of the telescope (note that the enclosures are parallel to each other in a plan view). Right panel: The WFC and PFIP with key components indicated. The hexapod struts that orient the WFC to the primary mirror can be identified by their blue casings.
The focal plane assembly (at the top of the structure) is supported by a fixed hexapod for alignment and a rotational stage that maintains the sky orientation during a track. The PFIP is shown prior to the installation of the fiber feeds. Figure adapted from \citet{hil18b}, Fig. 2. } \end{figure} \begin{figure} \epsscale{1.0} \plotone{wfc_outline_updated.png} \caption{ \label{layoutWFC} \footnotesize Opto-mechanical layout of the Wide Field Corrector. Left: A rendering of the four mirrors (M2-M5) and mechanical structure of the WFC. Center: The optical layout indicating the identification of the mirrors and the aspheric corrector plate (ACP) located at the exit pupil. Light enters the corrector from the primary mirror through the central hole in M3 and is focused on the concave focal surface (FS) as indicated by the ray tracing. Right: View of the WFC undergoing alignment and interferometric testing at the University of Arizona College of Optical Sciences in 2015. The largest mirrors (M2 and M3) are 1 meter in diameter. Following fabrication, the figures of all mirrors were ultimately described by general aspheres. } \end{figure} \subsection{Wide Field Corrector (WFC)}\label{subsec:wfc} The new corrector (Figure~\ref{layoutWFC}) has a 22$\arcmin$ diameter field of view and a 10 m pupil diameter. The periphery of the field is used for guiding and wavefront sensing to provide the necessary feedback to maintain alignment of the payload. The WFC is a four-mirror design with two concave one-meter diameter mirrors (M2, M3), one concave 0.9~m diameter mirror (M5), and one convex 0.23~m diameter mirror (M4). \edit1{All surfaces are conics with additional general aspheric terms. The highest general asphere orders are 8th and 10th on M5 and M3, respectively.} The corrector is designed for feeding optical fibers at $f/$3.65 to minimize focal ratio degradation, and so the chief ray from all field angles is normal to the focal surface.
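The quoted focal ratio, pupil diameter, and field of view fix the basic image scale of the system. A quick consistency check, using only numbers given in the text (the arithmetic is a sketch, not a design calculation):

```python
F_RATIO = 3.65          # WFC output focal ratio (from text)
PUPIL_D = 10.0          # m, pupil diameter (from text)
FIELD_ARCMIN = 22.0     # corrected field diameter (from text)

efl = F_RATIO * PUPIL_D                       # effective focal length, m
plate_scale = 206265.0 / (efl * 1e3)          # arcsec per mm on the focal surface
field_mm = FIELD_ARCMIN * 60.0 / plate_scale  # physical field diameter, mm

print(f"EFL ~ {efl:.1f} m, scale ~ {plate_scale:.2f} arcsec/mm, "
      f"field ~ {field_mm:.0f} mm")
```

The 22$\arcmin$ field thus spans roughly 23 cm on the curved focal surface, which sets the physical scale of the fiber input head described below.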
This fiber alignment is achieved with a concave spherical focal surface of radius 984~mm, centered on the exit pupil. The imaging performance is 0.6$\arcsec$ FWHM or better over the entire field of view, and vignetting is minimized. The WFC has uniform plate scale over the entire field of view to ensure that the dither offsets to fill in the gaps between fibers in VIRUS IFUs are close to identical for all IFUs (variation is 0.4\%). The WFC was manufactured by the University of Arizona College of Optical Sciences \citep{burge10}, with significant collaboration from MDO (\citealt{lee16a}, \citealt{good14a}). The smaller M4 was subcontracted to Precision Asphere. With four surfaces, reflective coatings for the WFC are required to have high reflectance (95\% or better from 3500~\AA~ to 1.8~$\mu$m), and are challenging, being based on silver and multiple dielectric layers. The large mirrors were coated by JDS Uniphase Corp\footnote{now Viavi Solutions}, and M4 was coated by ZeCoat. MDO designed and constructed the complex support fixturing needed to safely handle the large mirrors during cleaning and coating at JDS Uniphase \citep{good14a}. Experience with coating degradation on the original HET corrector led us to adopt a sealed design for the WFC, with entrance and exit windows and careful sealing of the WFC housing. Since deployment, the WFC has been purged continuously with nitrogen gas. Periodic visual inspection of the mirrors reveals minor changes in appearance, but direct reflectivity measurements are not possible in-situ. Monitoring of standard stars with LRS2 reveals no detectable degradation of throughput that could be traced to changes in the WFC coatings. Details of the figuring, alignment, and testing at the College of Optical Sciences are given in \cite{burge10}, \cite{oh14}, and \cite{lee16a}. During initial testing, significant errors in the low-order figures of M5 and M3 were detected. 
Respacing the mirrors and adding an aspheric corrector plate at the exit pupil in place of the planar exit window brought the performance of the WFC back to the image quality specification \citep{lee16a}. Final alignment utilized interferometric testing against computer generated hologram targets. Separate computer generated hologram tests of the M4/M5 pair, the M2/3 pair, and of the entire system were used to evaluate the alignment of the individual mirrors and the whole assembly. A conjugate test of the M4/5 pair with a custom wavefront sensor was developed by MDO to provide an independent confirmation that the system was meeting specification, particularly in off-axis performance, which was degenerate in the computer generated hologram tests \citep{lee16a}. Following an external review of the results, it was agreed that the WFC likely would meet specifications and the next step was to integrate it with the HET primary mirror and perform on-sky testing to verify the system. Integration and testing of the WFC on HET is described in \S\ref{subsec:wfuintegrate}. \citet{lee16a, lee21} provide a detailed discussion of the challenges posed by the WFC and tests developed to demonstrate its performance. \subsection{Prime Focus Instrument Package and Metrology Systems}\label{subsec:pfip} \begin{figure} \epsscale{1.0} \plotone{PFIPrender.pdf} \caption{ \label{PFIPrender} \footnotesize Renderings of the Prime Focus Instrument Package (PFIP). Right: The full assembly with major sub-assemblies indicated. The Wide Field Corrector (WFC) is at the heart of the assembly, mounted to the strongback. The entrance to the WFC is sealed by the Lower Instrument Package (entrance) and by the Pupil Plane Assembly (exit). The Rho axis field rotator keeps the position angle of the Focal assembly fixed on sky during a track. Left: An exploded view of the Focal Assembly showing its major components. 
The input head mount plate mounts the VIRUS IFUs and the fiber feeds for the other instruments, which lie on the fiber strain relief trays. The Focal Assembly contains the bulk of the complexity of the instrument, including the acquisition and guiding assembly with the acquisition camera, guide probes, and wavefront sensors. Figure partly adapted from \citet{hil18b}, Fig. 4. } \end{figure} The Prime Focus Instrument Package (PFIP; \citealt{vattiat12,vattiat14}) rides on the tracker and consists of several subassemblies (Figures~\ref{HETupgrade}, \ref{PFIPrender}). The WFC mounts to a triangular steel frame (the strongback) on a three ball-in-vee kinematic mount. The strongback mounts to the tracker hexapod and the structure of PFIP is assembled around the corrector (Fig. \ref{PFIPrender}). The focal assembly contains all the hardware at the focus of the telescope, including the acquisition and guiding assembly, fiber instrument feeds, custom rotating-blade shutter, and electronics hardware. The focal assembly mounts kinematically to a rotation stage (the Rho axis), supported by a fixed hexapod for position adjustment during alignment. A total Rho axis range of $\pm$~25$^\circ$ allows the range of sky rotation to be followed during tracking for all accessible declinations, with some margin. This range is much less than the $\pm$~200$^\circ$ of the original HET, since there is no longer a long-slit spectrograph and the reduced range avoids twisting the integral field fiber feeds. Coincident with the optical focal surface is the input head mount plate (\S\ref{sec:virus}, Fig.~\ref{ihmp}), which is a precision-machined interface for the VIRUS IFUs and the other instrument fiber feeds. The input head mount plate defines the physical focal surface of the telescope and has a concave spherical shape to conform to the WFC optical focal surface\footnote{The input head mount plate was machined and verified to high precision by the Institut f\"ur Astrophysik G\"ottingen.}.
Trays, forming the fiber strain relief, guide the VIRUS IFUs and other fiber feeds off the payload \citep{vattiat18}. The input head mount plate and fiber feeds can be removed to a work position for installation and maintenance of fibers and IFUs and for access to the focal assembly. The complex systems of PFIP, and the majority of the motion control actuators, are all contained within the focal assembly, which is designed to be removable for ground service. The relationship between the fiber positions and metrology systems is maintained by kinematic mounts, ensuring rapid return to science operations following service activities. The Lower Instrument Package is mounted to and seals the input end of the wide field corrector, and is a platform for the entrance window changer, tip-tilt camera, and facility calibration unit (\S\ref{subsec:fcu}) output head. A set of temperature controlled, insulated enclosures houses electronics hardware and the facility calibration unit input sources, optics, and selection mechanisms. The pupil plane assembly is located between the wide field corrector and the focal plane assembly. Initially, the support structure with a fixed exit pupil baffle and the corrector plate were deployed. Future upgrades to the pupil plane assembly have been considered; space and interfaces have been built into the system to accommodate a moving baffle at the exit pupil of the telescope, a platform for selectable exit windows (aspheric corrector plates), and a future atmospheric dispersion compensator. As discussed in \S\ref{subsec:operation}, all degrees of freedom of the motion of the WFC, with respect to the optical axis of the primary mirror, must be monitored and maintained during a track in order to deliver good images.
The feedback to maintain these alignments requires excellent metrology, which is provided by the following subsystems \citep{lee12c,lee18b}: \begin{itemize} \item The acquisition camera\footnote{Finger Lakes International Microline ML090000 with Kodak KAF-09000 CCD} has a 3.0~$\times$~3.0 arcmin$^{2}$ field of view, offset 1.0 arcmin from the optical axis so as to also cover the positions of the LRS2 IFUs and the fiber feeds for the basement high resolution spectrographs. The acquisition camera is fed by a deployable pick-off mirror and has $B, g', r'$, and $i'$ filters\footnote{from Astrodon}. \item Two guide probes to monitor the position on the sky, the plate scale of the optical system, the image quality, and the atmospheric transparency. The guide probes each have five filters (clear, $B, g', r'$, and $i'$) and a 23~$\times$~23 arcsec$^{2}$\ field of view. \item Two operational wavefront sensors of Shack-Hartmann design with 11 sub-apertures across the pupil diameter, to monitor the focus and tilt of the WFC during tracking. The capture range is 5\arcsec\ in diameter. The guide probes and operational wavefront sensors use the same cameras\footnote{Finger Lakes International Microline MLx285 with Sony ICX285AL CCD}. \item A calibration wavefront sensor\footnote{Allied Vision Technology (AVT) Prosilica GC2450 camera with Sony ICX625 CCD} with 21 sub-apertures across the pupil diameter, to provide a higher resolution on-axis reference for focus and wavefront monitoring. The calibration wavefront sensor is aligned to be cofocal with the input head mount plate and provides the overall focus reference for the HET. \item A distance measuring interferometer\footnote{The distance measuring interferometer is a custom system provided by FOGALE nanotech} operating at 1.6 $\mu$m to measure the physical distance between the WFC and primary mirror \citep{pp06}.
The distance measuring interferometer projection head is fed by a fiber link and mounted on the lower instrument package. The interferometer has an additional fiber link to a head in the center of curvature tower for setting the distance at which to stack the primary mirror segments, which defines the primary mirror radius of curvature. \item A tip-tilt sensor operating at 1.6 $\mu$m mounts to the lower instrument package to monitor the tip/tilt of the WFC with respect to the optical axis of the primary mirror, via a reflected beam from the primary mirror. The 1.6 $\mu$m operating wavelength for the tip-tilt sensor and distance measuring interferometer was chosen so as to avoid the wavelength range of any instruments conceived for the telescope, since the HET design is poor for thermal infrared instrumentation. \item A pupil viewer based on a color camera\footnote{AVT Manta G-046 with SONY ICX415 CCD} to view the pupil and monitor mirror segment reflectivity. \item A bore-sight imager, a coherent fiber bundle from Schott mounted in the input head mount plate, that feeds a camera\footnote{AVT Manta G-201B} for periodically monitoring the relationship between the input head and the acquisition camera with bright stars. \end{itemize} \begin{figure} \epsscale{1.0} \plotone{guideprobe.pdf} \caption{ \label{guideprobe} \footnotesize \edit1{Details of the PFIP guide-probe assembly optomechanical design. Top left is a rendering of the four probe carriages (two each, guide probes and operational wavefront sensors) on the large ring bearings. The rendering indicates how the carriages can nest together without colliding. Each carriage can range over 180$^\circ$ and the probe arms swing 17$^\circ$ to access the guide field annulus between 18 and 22 arcmin field diameter. Carriages utilize linear magnetic tape encoders and probe arms utilize single turn rotary encoders. Both mechanisms are driven by brushless DC servo motors through capstan drives.
For scale reference, the inner diameter of the carriage bearing stack is 445 mm. Top right shows detail of an individual carriage. The axis of the arm actuator passes through the center of the exit pupil of the WFC so that the prism moves concentric with the spherical image focal surface to remain in focus. The bottom panels show the optical layout of the guide probes (left) and operational wavefront sensors (right). For the guideprobes, the prisms reflect the converging light directly onto coherent imaging fiber bundles by Schott. For the wavefront sensors, the light is focused onto an aperture of 5$\arcsec$ diameter, collimated by a relay lens, and then imaged by a microlens array onto the imaging bundle, in a Shack-Hartmann configuration. Adapted from \citet{vattiat12} and \citet{lee12c}. } } \end{figure} The heart of the metrology system for the WFU is the Acquisition and Guiding assembly, which mounts the guide probe assembly, the acquisition camera, calibration wavefront sensor, and pupil viewer. The light is directed to the acquisition camera, calibration wavefront sensor, and pupil viewer by deployable pickoff mirrors with pneumatic actuators. During tracking, the guide probe assembly is used for guiding of the telescope and wavefront sensing feedback to the telescope focus and WFC tilt. Careful design of the pickoff mirrors allows the acquisition camera, calibration wavefront sensor, or pupil viewer to be used simultaneously with the guide probes and operational wavefront sensors in the guide probe assembly \citep{vattiat12,lee12c,vattiat14}. There are four probes: two imaging probes and two wavefront sensing probes, providing redundancy \edit1{(Fig.~\ref{guideprobe})}. Each probe consists of a probe optical head, containing the necessary optics coupled to a coherent fiber bundle purchased from Schott. In the operational wavefront sensors, a microlens array is bonded to the input faces of the imaging fiber bundles.
Images incident to the fiber bundle input are captured by a remote camera system at the bundle output, fed by reimaging optics and including a filter wheel for each of the guide probes. Each probe optical head is mounted to a carriage with an arm for moving the probe radially in the field with a range of 9-11 arcminutes from the center of the telescope’s field. The four carriages each move through 180$^\circ$ sectors on large circular bearings to access stars. The axis of the arm motion passes through the center of the exit pupil, so the arm naturally traces the spherical focal surface and remains perpendicular to it, without need for any focus adjustment. \edit1{See Fig.~\ref{guideprobe} for details. } The positioning accuracy requirement of the guide probes is 20 microns on the spherical focal surface. To achieve this performance, both mechanical position actuation and encoding require a high level of precision. The guide probe assembly makes $\sim$10$^4$ moves per year. Maintenance is undertaken on a regular basis during focal assembly ground service. The upgrade adds wavefront sensing \citep{lee18a,lee18b} to the HET in order to close the control loop on all axes of the system, in conjunction with the distance measuring interferometer adapted from the original tracker metrology system and a new tip-tilt sensor \citep{vattiat14}. The design of the wavefront sensors is straightforward, but their application to the HET, with the varying illumination of the telescope pupil during a track (Fig.~\ref{pupil}), requires development of a robust software system for analysis of the sensor data to produce reliable wavefront information \citep{lee18a,lee18b}. There is redundancy built into the new metrology system to obtain the highest reliability. The two guide probes distributed around the periphery of the field of view provide feedback on position, rotation, and plate scale, as well as providing a record of image quality and transparency as a function of wavelength. 
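The 20 micron positioning requirement corresponds to roughly a tenth of an arcsecond on sky. The conversion below uses the plate scale implied by the 10 m pupil and $f/$3.65 focal ratio quoted earlier; it is a sketch, not an official error budget:

```python
EFL_M = 3.65 * 10.0                      # m, effective focal length (f/3.65, 10 m pupil)
PLATE_SCALE = 206265.0 / (EFL_M * 1e3)   # arcsec per mm on the focal surface
POSITION_TOL_MM = 0.020                  # 20 um guide-probe requirement (from text)

sky_tol = POSITION_TOL_MM * PLATE_SCALE
print(f"{sky_tol:.3f} arcsec on sky")    # about 0.11 arcsec
```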
The alignment of the corrector is monitored by the two operational wavefront sensors as well as by the distance measuring interferometer and tip-tilt sensor. The radius of curvature of the primary mirror is monitored by the combination of focus position from the operational wavefront sensors with the physical measurement from the distance measuring interferometer. \begin{figure} \epsscale{1.0} \plotone{HXpaper_pupil.png} \caption{ \label{pupil} \footnotesize Examples of the variations in pupil illumination of the HET. The left four images show views from the Pupil Viewer camera at image field center, with the telescope positioned at different (X,Y) tracker positions: (A) the center of a track (0, 0), (B) (0.4, 0.4), (C) (1.5, 0), and (D) (0.4, -1.6), \edit1{all in meters}. The change in illumination of the pupil as it moves off the edge of the mirror during a track is evident. The top right image (E) illustrates the pupil illumination model that incorporates the obstructions of the WFC and the tracker bridge, as well as the pupil position on the primary mirror. Note that this view is for a field position away from the center, so the holes in the WFC mirrors are not aligned with the central obstruction of the pupil. Bottom right (F) is an operational wavefront sensor image with the telescope pupil position (magenta), primary mirror segments (black) and illumination obstruction features (blue) overlaid. The square grid of the wavefront sensor elements is shown in red while elements with a successful measurement are highlighted in green with green arrows indicating the image centroids. The relationship between the lenslets and the mirror segments changes throughout a track. } \end{figure} The PFIP was deployed on HET in July 2015. It is a complex instrument in its own right, and was commissioned in phases from July 2015 to May 2016.
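The geometry behind a pupil illumination model of this kind can be sketched with a circular approximation: treat the 10 m pupil and the primary mirror (idealized here as an 11 m circle) as two disks and compute their overlap as the pupil walks across the mirror. The real model also accounts for the hexagonal segment boundaries and the WFC and tracker obstructions, so this is only a toy version:

```python
import math

def pupil_fill_fraction(offset_m, pupil_d=10.0, mirror_d=11.0):
    """Fraction of a circular pupil still illuminated when its center is
    displaced offset_m from the center of a circular mirror.

    Circular approximation to the hexagonal HET primary; illustrative only.
    """
    r1, r2, d = pupil_d / 2.0, mirror_d / 2.0, offset_m
    if d <= abs(r2 - r1):                       # pupil fully on the mirror
        overlap = math.pi * min(r1, r2) ** 2
    elif d >= r1 + r2:                          # pupil fully off the mirror
        overlap = 0.0
    else:                                       # standard circle-circle overlap area
        a1 = r1**2 * math.acos((d*d + r1*r1 - r2*r2) / (2*d*r1))
        a2 = r2**2 * math.acos((d*d + r2*r2 - r1*r1) / (2*d*r2))
        a3 = 0.5 * math.sqrt((-d+r1+r2)*(d+r1-r2)*(d-r1+r2)*(d+r1+r2))
        overlap = a1 + a2 - a3
    return overlap / (math.pi * r1**2)

for x in (0.0, 0.5, 1.5):   # tracker offsets, meters
    print(f"offset {x:.1f} m -> {pupil_fill_fraction(x):.3f} of pupil illuminated")
```

A correction of this form (divided into the measured guide-star flux) is the basic idea behind using guide stars to monitor atmospheric transparency despite the changing pupil.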
Pupil illumination variations during tracking are illustrated in Figure~\ref{pupil}, along with an example of the illumination model and an image from one of the wavefront sensors. The illumination model is utilized to predict variations in light reaching all points on the field of view and is used to correct the brightness observed for guide stars so that they can be used to monitor atmospheric transparency, as an input into target choices. The model is also employed to correct the relative throughput of VIRUS IFUs when needed (e.g. \S\ref{sec:Vperformance}). \subsection{Facility Calibration Unit}\label{subsec:fcu} The facility calibration unit supports VIRUS and the facility instruments and consists of an enclosure (``source box'') containing various calibration sources and an input head. The facility calibration unit head, connected to the source box through two liquid light guides\footnote{Lumatek Series 300 and 2000, to support the broad wavelength coverage}, is attached to the bottom of the WFC as part of the lower instrument package, and can be deployed into the beam to inject calibration light through the WFC whenever calibration is needed. A set of Fresnel lenses and engineered diffusers are used in the facility calibration unit head to mimic the caustics of M1 as closely as possible, to reproduce both the telescope's pupil and focal surface illumination patterns \citep{lee12}. The source box has a switch-yard to select the lightsource \citep{lee12}. Continuum light for flat field calibration is provided by a combination of a broad-band laser-driven Xenon lightsource\footnote{Energetiq Technology model EQ-99-FC-S} for the UV to red and a quartz-tungsten halogen lamp for the red to near-infrared. Wavelength calibration sources of Mercury, Cadmium, Neon, Iron-Argon, Krypton, and Thorium-Argon can be selected. Various imaging and non-imaging optical components (e.g.
Compound Parabolic Concentrators, cone reflectors, condenser lenses) are used for efficient coupling between different types of calibration lamps and the light guides, covering wavelengths from 3500 \AA~ to 1.8~$\mu$m \citep{lee12}. One of the switch-yard positions selects a calibration fiber feed for the HPF instrument. This fiber injects light from a laser frequency comb \citep{ost12} located in the basement spectrograph room for precision wavelength calibration as well as other calibration sources specific to the needs of HPF \citep{halver14}. Instrument-specific calibration sets are obtained at the start and end of every night and supplemented by twilight flats for VIRUS and LRS2. Note that HETDEX data processing relies exclusively on twilight flats \citep{geb21}, while data processing for other VIRUS projects utilizes the facility calibration unit laser-driven lightsource, Mercury, and Cadmium light sources (\S\ref{sec:spectra}). One of the liquid lightguides is not rated for the lowest temperatures encountered during operations (-10~$^\circ$C) and has been replaced once after five years of operation. The brightness and throughput of the calibration system are monitored with the LRS2 instrument. The laser-driven lightsource varies in brightness over timescales of months and has been refurbished once over the same period. \subsection{Telescope and Instrument Control System}\label{subsec:tcs} The performance of the HET is critically dependent on metrology and the control software system, as the moving payload needs constant and precise adjustment to maintain optical alignment. The WFU has tight specifications on pointing, tracking, and guiding performance (in all axes) that have been met by a combination of careful systems design and detailed analysis of performance with reference to physical models of the hardware.
The new integrated software control system for the HET WFU uses a component architecture providing a high degree of monitoring, automation, scriptability, and scalability \citep{rams16,rams18}. It consists of a network of control systems, each of which models a subset of closely coupled hardware. The control systems communicate with each other using a simple but flexible messaging scheme encoding commands to subsystems and events informing of state changes. Each system is responsible for specific functions based on type or proximity to hardware, and is designed to be run autonomously. For engineering purposes, each of the subsystems can be scripted independently. The primary software control systems for the WFU and VIRUS are the telescope control system, the prime focus instrument package control system, the payload alignment system, the VIRUS data acquisition system (\S\ref{sec: camera}), and the tracker motion control system (\citealt{beno12}), along with a centralized logging system. In addition to these control systems, graphical user interfaces for the Telescope Operator and Resident Astronomer have been developed. The telescope control system is responsible for coordinating the operation of all other control systems and can be scripted for automatic observing (\S\ref{subsec:target}); knowledge of the high-level astronomy-related state is restricted to the telescope control system. The PFIP control system controls the hardware on PFIP, including the large rotating-blade shutter. The payload alignment system is responsible for gathering and processing metrology from the various alignment systems, such as cameras, needed to close all tracker-motion related loops. The tracker motion control system is based on the Matlab-Simulink environment running in a dSPACE Inc. controller.
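The command/event messaging pattern described above can be sketched in a few lines. All names and interfaces here are illustrative inventions, not the actual HET control software API, which is described in \citet{rams16,rams18}:

```python
# Minimal sketch of a command/event messaging scheme: control systems
# accept commands and broadcast events on state changes.  All names
# are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # subsystem that changed state
    name: str        # e.g. "shutter_open"
    payload: dict

class Bus:
    """Routes commands to subsystems and fans events out to listeners."""
    def __init__(self):
        self.handlers = {}    # subsystem name -> command handler
        self.listeners = []   # callables receiving every Event

    def register(self, subsystem, handler):
        self.handlers[subsystem] = handler

    def command(self, subsystem, name, **kwargs):
        self.handlers[subsystem](name, kwargs)

    def publish(self, event):
        for listener in self.listeners:
            listener(event)

bus = Bus()
log = []
bus.listeners.append(lambda ev: log.append((ev.source, ev.name)))

def pfip_handler(name, args):
    # a subsystem executes the command, then announces its new state
    bus.publish(Event("pfip", f"{name}_done", args))

bus.register("pfip", pfip_handler)
bus.command("pfip", "open_shutter", duration=600)
print(log)   # [('pfip', 'open_shutter_done')]
```

The decoupling shown here, where subsystems only see commands addressed to them and any component may subscribe to events, is what allows each control system to be scripted independently for engineering purposes.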
Constraints within the tracker motion control system environment limit the ability to perform complex calculations at the 2.5 ms update rate of the motion control system, so the telescope control system handles all the higher-level functions for the tracker motion control system, such as generating the track-position data stream. Logging from the tracker motion control system is at 5 Hz and from the other subsystems at their native update rates. The logger accesses local databases if the central log server is down. These local databases are synchronized automatically with the central log server when it is available. In addition to log messages, logging can be configured to record any subset of events generated by the system and thus obtain detailed execution traces. This adjustment can be performed at any time for engineering purposes without interfering with the operation of the control system, generating no additional overhead or changes in timing. \subsection{WFU Integration, Alignment, and Mount Model}\label{subsec:wfuintegrate} Table~\ref{tab-chron} presents the chronology of the WFU and deployment of instrumentation. Following delivery of the WFC to HET, its alignment was rechecked and then the WFC and PFIP were integrated on the tracker. Initial alignment of the WFC axis to the telescope was made with an alignment telescope looking down on the primary mirror to establish the tracker position offsets for the WFC to be centered on and normal to the center mirror segment. The alignment telescope was then reversed to view along the WFC axis to the center of curvature tower instrument, which verified that the WFC was aligned with the primary mirror central axis. Details are given in \citet{hil16c} and \citet{lee16a}.
Following this alignment process, first light was achieved on 29 July 2015, with pointing within an arcminute, and excellent image quality on the acquisition camera (1.3$\arcsec$ FWHM, consistent with the expected median image quality of the telescope system). Achieving good on-axis image quality did not validate the internal alignment of the WFC, however (\S\ref{subsec:wfc}), so over the Fall of 2015 a series of on-sky tests were performed with miniature deployable wavefront sensors over the field of view (mounted in the VIRUS IFU seats in the input head mount plate) as a final confirmation of the performance of the WFC \citep{lee16a}. By this point, both sidereal objects and geostationary satellites could be tracked with high reliability. Testing of the internal WFC alignment required acquisition of a geostationary satellite on each of the deployable wavefront sensors in turn, measuring the wavefront as a function of field position. These tests verified that the WFC was aligned within specifications with only a very small tilt of the focal surface with respect to the optical axis. Details of the tests and analysis are provided in \citet{lee16a}. The original HET system utilized a heuristic mount model based on on-sky measurements, which convolved the many physical effects that contribute to the pointing and tracking accuracy of the integrated system. Adjustments to improve pointing would often result in poorer tracking accuracy, and vice versa. For the WFU, the mount model was based on direct physical measurement of subsystems that could be combined to create a deterministic correction to the tracker position with well-understood physical underpinnings. The primary tool in this effort was a laser tracker\footnote{Automated Precision Inc. model LTS-3000} and spherically mounted retro-reflectors, which were utilized to understand the deflections of the tracker subsystem, using a dummy WFC to mimic the load \citep{good18}.
At this point, the distance between the payload and primary mirror was established so the telescope would be in focus when the WFC was deployed. These measurements created a transform with low-order terms that accounted for the deflections of the tracker relative to the ideal tracking sphere. This transform was further refined using the TTCAM and distance measuring interferometer to provide direct measurements of the payload relative to the surface of primary mirror, and thereby tie the tracker frame to the optical frame provided by the surface of the primary mirror, aligned by the center of curvature tower instrumentation\footnote{see \S\ref{subsec:imagequal} for details of the center of curvature tower instrumentation}. Deflections of the telescope structure during the track, due to the unbalanced loading as the payload moves in X,Y, and tilts to remain normal to the primary mirror, were also measured relative to the telescope pier with the laser tracker and incorporated as a further layer of mount model terms that transform the tracker frame of reference to the projection on-sky. After setting in azimuth, the telescope structure sits on four feet on an exceedingly flat concrete pier. As the telescope is moved in azimuth to access different declinations there is also a term in the pointing mount model that accounts for the small irregularities of the pier. This approach to the mount model proved highly successful with initial pointing residuals meeting requirements ($<$~30\arcsec) in the central half of the tracker range. One final layer of low-order corrections, based on on-sky residuals, resulted in requirements and goals being met over the entire tracker range ($\leq$~12\arcsec). Just as important as pointing, open-loop drift rates are low and correlate closely with pointing residuals, indicating that improvements in one will be reflected in the other. 
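The layered mount model described above lends itself to a simple compositional structure. The sketch below is purely illustrative: the function names and coefficients are invented placeholders standing in for the measured low-order terms (laser-tracker metrology of the tracker, structure bending during the track, pier irregularities), not the actual HET mount model.

```python
# Illustrative sketch of a layered, deterministic mount model: each layer
# is a low-order correction measured independently, and the layers compose
# to map an ideal tracker position to a commanded position. All
# coefficients below are made-up placeholders, not HET values.

def tracker_deflection(x, y):
    # deflection of the tracker relative to the ideal tracking sphere
    return x + 1.0e-4 * x * y, y - 2.0e-4 * y * y

def structure_deflection(x, y):
    # bending of the telescope structure under the moving payload
    return x * (1.0 + 5.0e-5), y * (1.0 + 5.0e-5)

def pier_correction(x, y, azimuth_deg):
    # small pointing term for pier irregularities at the current azimuth
    return x + 1.0e-6 * azimuth_deg, y

def commanded_position(x, y, azimuth_deg):
    """Compose the layers: tracker frame -> optical frame -> on-sky."""
    x, y = tracker_deflection(x, y)
    x, y = structure_deflection(x, y)
    return pier_correction(x, y, azimuth_deg)
```

Because each layer has a physical interpretation, a layer can be re-measured and swapped out without refitting the whole model, in contrast to the heuristic model it replaced.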
\section{VIRUS} \label{sec:virus} The requirements in \S\ref{sec:design} emphasize ability to cover area quickly, require large-scale replication \citep{hil14}, and lead to the following properties for VIRUS: each of the 78 VIRUS units is fed by 448 fibers that each cover 1.8 arcsec$^2$ on the sky, split between two spectrograph channels per unit. The fibers feeding a two-channel unit are arrayed in a 51$\times$51 arcsec$^2$ IFU. The fibers in the VIRUS IFUs have a fill-factor of 1$/$3, such that offsets of the telescope in an equilateral triangle pattern of side $1 \farcs 46$ will fill in the area. These offsets are referred to as the dither pattern. An observation for HETDEX consists of three exposures, each of 360~seconds duration, with dithered offsets between. The spectral resolution of VIRUS is 5.6~\AA\ (resolving power R~$\sim$~800 at 4500~\AA), with coverage of 3500$-$5500~\AA. The optical design is simple, using three reflective and two refractive elements. High throughput is obtained with dielectric reflective coatings optimized for the wavelength range. VIRUS units are located in large enclosures off the moving payload of HET (Figure \ref{HETlayout} and \S\ref{sec: vss}), while maintaining fiber length of $\sim$20 m on average, to preserve as much UV response as possible. The full VIRUS array can obtain 35,000 spectra simultaneously, with 14 million (spectral $\times$ spatial) resolution elements. In total, the VIRUS CCDs have 0.7 Gpixels (unbinned), comparable to the largest imagers yet deployed. The IFUs are arrayed in a square grid pattern within the 18 arcminute field diameter and fill this area with $\sim$1$/$4.5 fill factor. Figure~\ref{ihmp} shows the 78 IFUs arrayed in the input head mount plate at the prime focus of HET, along with an image reconstructed from the spectra of 74 spectrograph units ($\sim$33,000 fibers). Note the 100\arcsec\ grid pattern of the IFU layout.
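As a concrete illustration of the dither geometry described above, the three exposure positions sit at the vertices of an equilateral triangle of side 1.46 arcsec. The orientation chosen in this sketch is arbitrary; the text fixes only the shape and size of the pattern.

```python
import math

SIDE_ARCSEC = 1.46  # dither-triangle side length quoted in the text

def dither_offsets(side=SIDE_ARCSEC):
    """Vertices of an equilateral triangle with the given side length.
    The absolute orientation on sky is arbitrary here."""
    h = side * math.sqrt(3.0) / 2.0
    return [(0.0, 0.0), (side, 0.0), (side / 2.0, h)]

pts = dither_offsets()
# every pair of dither positions is separated by exactly one side length
seps = [math.hypot(x2 - x1, y2 - y1)
        for i, (x1, y1) in enumerate(pts)
        for (x2, y2) in pts[i + 1:]]
```

With the 1$/$3 fill factor of the fiber array, the three dithered exposures together tile the IFU field without gaps.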
The VIRUS spectrographs have been designed and constructed by the McDonald Observatory of the University of Texas at Austin (MDO) and Texas~A$\&$M University (TAMU), the IFU development was led by the Leibniz Institute for Astrophysics (AIP), and many VIRUS mechanical components were supplied by The University of Oxford and the Institut f\"ur Astrophysik G\"ottingen. The VIRUS data processing software was led by the Max-Planck-Institut f\"ur extraterrestrische Physik (MPE) and The University of Texas at Austin. The cameras were produced at MDO, the collimators at TAMU \citep{marsh14}, and the IFUs at AIP \citep{kelz14}. Spectrograph integration, alignment, and characterization is led by MDO \citep{tutt16,hil16a}. \begin{figure}[!ht] \epsscale{1.18} \plotone{ihmp.pdf} \caption{ \label{ihmp} \footnotesize Left – Photon’s-eye view of the deployed fiber integral field units (IFUs) for VIRUS and the fiber feeds for other HET instruments mounted in the precision input head mount plate. The two IFUs of the LRS2 instrument (LRS2-B and LRS2-R) and the fiber feed for the Habitable-zone Planet Finder (HPF) occupy three of the four IFU locations in the central region in the input head mount plate. The fiber feed for the future upgrade of the High Resolution Spectrograph (HRS) will be deployed in the remaining central position. VIRUS IFUs are arrayed in a 100\arcsec\ center-to-center grid. The full complement of 78 VIRUS IFUs is shown installed. Each IFU covers \hbox{51$''$ $\times$ 51$''$} and has 448 optical fibers. The 16 empty seats on the periphery will not be populated. For scale, the corners of the outer VIRUS IFUs in this image are~$\approx 18'$ apart, which is~$\approx$~190~mm at the HET focus. A magnified image of a single VIRUS IFU is shown inset at top left to illustrate the fiber layout. The inset image at the lower left shows how the fiber layout maps to CCD amplifier readouts (with different colors for the two CCDs, each with two amplifiers).
Right – Example of sky data from VIRUS on a typical HETDEX field with all 78 spectrograph units operating. The spectral dimension is collapsed to create synthetic g-band images for each IFU; this representation removes the $\approx$50\arcsec\ gaps between the IFUs and increases the size of the IFU fields by a factor of two, for better visibility. The central six positions are reserved for the other instruments. The direction of the parallactic angle on sky is up in this figure. Numbers represent the IFU coordinate system in the input head mount plate. } \end{figure} \medskip \subsection{VIRUS Spectrographs} \label{sec: spectrographs} \smallskip The details of the design, prototyping, and production of VIRUS and associated subsystems are described in \cite{hil08a,hil18a,murphy08,chonis10,lee10,vattiat10,murphy12,chonis12,marsh14,kelz14,tutt16,vattiat18,spencer18,lee18c}. Table~\ref{tab-virus} summarizes the basic properties of the spectrographs, which are discussed in more detail in the following sections. \begin{figure} \epsscale{1.0} \plotone{virus.png} \caption{ \label{virus} \footnotesize Layout of the production VIRUS spectrograph unit. Each VIRUS unit has two identical spectrograph channels. A cutaway of the production mechanical design is shown on the left and the optical design of one channel is shown on the right. The volume phase holographic grating has 930 lines per mm and the wavelength coverage is fixed at 3500~-~5500 \AA. The mechanical design cutaway shows the IFU slit assembly on the left, mounted to the collimator. The slits of the IFU assembly protrude into slots in the fold mirrors, to minimize obstruction. The distance between the fold and collimator mirrors is maintained with three Invar struts for each channel to maintain focus as the ambient temperature changes. The $f$/1.25 Schmidt camera has an internal CCD in the vacuum mounted to the main bulkhead of the collimator.
The two CCDs are cooled by a flexible line from above, filled with liquid nitrogen, with a breakable cryogenic bayonet connection, as shown on the left. The spectrograph unit without IFU mounted has dimensions 970 mm long, 740 mm wide, and 460 mm deep, and weighs 73 kg. Figure partly adapted from \citet{hil18a}, Fig. 2. } \end{figure} The VIRUS instrument was evolved for mass production from the Mitchell Spectrograph and reconfigured with three basic sub-units: the fiber IFU, the collimator assembly, and the camera assembly. While the Mitchell Spectrograph can be reconfigured through collimator angle and disperser changes, the production VIRUS units have no moving parts. Each VIRUS unit houses two spectrograph channels (Figure~\ref{virus}). The beam size of each channel is 125~mm, allowing the collimator to accept an $f$/3.32 beam from the fibers, accommodating a small amount of focal ratio degradation (FRD, e.g. \citealt{schmoll03,murphy08,murphy12}) of the $f$/3.65 input from the telescope. Each camera is an $f$/1.25 vacuum Schmidt design with 166 mm focal length and a custom \hbox{2064 $\times$ 2064} format CCD with 15~$\mu$m pixels at its internal focus. A single corrector plate suffices for both the Schmidt collimator and camera, and acts as the vacuum window of the camera. The field flattener lens in front of the CCD is also an aspheric element\footnote{aspheric elements manufactured by Asphericon Inc., Jena}. The three mirrors in the system have dielectric high reflectivity coatings optimized for 3500~-~7200 \AA. The IFU, collimator, and camera subsystems are connected by kinematic interfaces at the two main plates of the collimator assembly (Fig.~\ref{virus}). These plates incorporate precision-machined location features.
The evolution of the spectrograph design considered using castings for these plates, but computer numerical controlled machining from aluminum plate stock proved to be more cost-effective, given the capacity of machines available at the University of Oxford. The mechanical design and optical tolerancing of the spectrographs for mass production are described in \citet{vattiat10} and \citet{lee10}, respectively. While the VIRUS units are mounted in fixed housings and are gravity invariant, their enclosures track ambient temperature in the telescope enclosure and they are required to operate with high stability over a temperature range of $-5$~$^\circ$C to~+25~$^\circ$C. The instrument is specified to not require recalibration for shifts in the positions of the fiber spectra over the temperature range encountered in an hour, with the goal of applying a single set of calibrations during an entire night. Stability is crucial since the data processing and analysis are sensitive to shifts of the spectra on the level of $\sim$0.1~pixel. The requirement corresponds to shifts smaller than 0.5~unbinned pixels (one-tenth of a resolution element) at the detector for a~5~$^\circ$C temperature change. This stability was achieved by using an all-aluminum structure with Invar-36 metering structures for the collimator mirror focus and the internal structure of the cameras. Angles within the optical path are maintained by the homologous expansion of the aluminum structure, while flexures accommodate the difference in coefficient of thermal expansion between the aluminum and Invar. The observed stability is a factor of five better than the requirement. VIRUS employs volume phase holographic gratings, which offer high efficiency and low cost. Details of the grating design, development, and testing are reported in \cite{chonis12,chonis14}.
The grating has a fringe frequency of 930~lines~mm$^{-1}$ and operates at order $m$~=~1 in transmission from~3500 to~5500~\AA, for unpolarized light. Efficiency is optimized for the ultraviolet with an angle of incidence of $\sim$9$^{\circ}$ and an angle of diffraction of $\sim$15.3$^{\circ}$ at 450 nm. In addition, a~1$^{\circ}$ tilt of the fringes in the grating ensures that the ``Littrow'' recombination ghost \citep{burgh07} falls off the detector for the VIRUS configuration. The contract for 170 gratings was awarded to SyZyGy. MDO provided SyZyGy with a custom test instrument with which to evaluate grating performance in diffraction efficiency and scattering at three wavelengths in nine sub-apertures over the 138 mm diameter clear aperture of the gratings. Details of the design and results of these tests for the 170 gratings are reported in \citet{chonis14}. \\ \subsection{VIRUS Fiber Feed} \label{sec: IFU} IFU development at AIP (\citealt{kelz14,kelz21}) and MDO \citep{murphy08,murphy12} focused on establishing a design that minimizes FRD, maximizes throughput, and can be manufactured in quantity \citep{kelz14}. An overview of IFU development and performance is presented in \citet{kelz21}. Careful and rigorous apportioning of tolerances between the components aimed to retain 95\% of the transmitted light within the spectrograph pupil (125~mm, $f$/3.32) for the input focal ratio from the telescope of $f$/3.65. Figure~\ref{ifuprod} presents images of the slit and input ends of production fiber cables, along with other views of the IFU assembly. In total, the VIRUS IFUs contain $\sim$700 km of fiber, which was custom manufactured by FiberTech GmbH and CeramOptec GmbH. The high-OH silica fiber has core size 266 $\mu$m, cladding 290 $\mu$m, and buffer 320 $\mu$m diameter\footnote{fiber types are designated: CeramOptec UV265/292P/320 and Fibertech AS266/292UVPI/318}. The 266 $\mu$m core projects to 1\farcs5 on sky.
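A quick consistency check ties these fiber dimensions to the scales quoted elsewhere in the paper. The derived quantities below (plate scale, spectrograph demagnification, fiber image size on the CCD) are our own arithmetic from the quoted numbers, not values stated by the authors.

```python
import math

# Consistency check using only numbers quoted in the text; the derived
# quantities are our own arithmetic, not values stated by the authors.

CORE_UM = 266.0          # fiber core diameter
CORE_ARCSEC = 1.5        # on-sky projection of the core
BEAM_MM = 125.0          # collimated beam diameter
COLL_FRATIO = 3.32       # focal ratio accepted by the collimator
CAM_FOCAL_MM = 166.0     # Schmidt camera focal length
PIXEL_UM = 15.0          # CCD pixel size

plate_scale = CORE_UM / CORE_ARCSEC                # ~177 um per arcsec
fiber_area = math.pi * (CORE_ARCSEC / 2.0) ** 2    # ~1.77 arcsec^2 (quoted: 1.8)
demag = (BEAM_MM * COLL_FRATIO) / CAM_FOCAL_MM     # 415 mm / 166 mm = 2.5
fiber_image_px = CORE_UM / demag / PIXEL_UM        # ~7 unbinned pixels
```

The circular aperture recovers the $\sim$1.8 arcsec$^2$ per fiber quoted earlier, and the implied demagnification puts the fiber image at roughly seven unbinned pixels on the detector.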
Each IFU contains 448 fibers, divided equally between two slits. The design was kept as simple and lightweight as possible. The input head consists of a precision micro-drilled block\footnote{of ARCAP AP1D from Euro Micron}, into which the fibers are fed, which is in turn clamped within a stainless steel shell that provides the mounting features. At the exit, the cable bifurcates within a slit housing into two slits with the fibers glued to grooved blocks. The grooves aim the fibers such that their axes pass through the center of curvature of the spherical collimator mirror and are normal to the surface of the mirror. At input and output, the fibers are glued in position with epoxy\footnote{with Epotek 301-2} and then cut and polished. The input and output are bonded to a thin lens and a cylindrical lens, respectively, both of fused silica and anti-reflection coated\footnote{input bonded with Norland 63 UV curing adhesive and output with Cargille code 0607 gel}. The input lens ensures that the chief ray of the curved focal surface is normal to the fiber input for all the fibers, despite a flat input face. This feature is necessary due to the curvature of the HET focal surface (984 mm radius of curvature) and the size of the IFU fiber array (12.5 mm on the diagonal). The conduit housing the fiber cables underwent extensive design evaluation and prototyping. It is important that the fibers not piston significantly within the conduit where they exit the cable into the output slit assembly (Fig.~\ref{ifuprod}c). Such pistoning could occur due to changes in axial load or ambient temperature swings; particular concerns are shipping and handling during installation, i.e., avoiding twists and torsional stress \citep{murphy08}. To minimize the weight of the conduit, which could dominate the total weight, a custom fully-interlocked aluminum conduit with polyvinyl chloride sheathing from Hagitec was adopted. 
An inner sock of Kevlar protects the fiber from the internal structure of the conduit. The Kevlar is tensioned during assembly, which stabilizes the length of the conduit to prevent fibers pistoning or developing axial stress. In total, the design evolved through six versions, including those for the Mitchell Spectrograph. Before production, the final design was exercised through the full range of motions expected at the HET in a lifetime test, which simulated 10.2~years of wear (188.7~km of linear travel) on a single fiber bundle. Results of the lifetime test are described in \cite{murphy12}, which qualified the cable design for final manufacture. During the test, there were no signs of the fiber pistoning within the conduit, after initial settling; this behavior has been borne out by experience with the IFUs deployed at the HET. \begin{figure}[!ht] \epsscale{1.0} \plotone{ifuprod.png} \caption{ \label{ifuprod} \footnotesize Production of fiber IFU cables at AIP. (a)~precision slit assembly components comprising (from lower left to upper right) the groove block that locates the fibers, the cap and the cylinder lens holder that acts as the reference for focus; (b)~an assembled and polished slit block containing 224 fibers; (c)~an integrated output slit assembly with two slit blocks mounted and the fan-out of fibers from the conduit; the two spectrograph channels are separated by 320~mm; (d)~an input head, back illuminated, with 448~fibers; the fiber cores have a 266~$\mu$m diameter and one-third fill factor, and the array is approximately 9~mm on a side. 266~$\mu$m projects to 1\farcs5 on sky. (e)~input head with cover lens installed. The head is located within a cylindrical seat in the input head mount plate (Fig.~\ref{ihmp}) with the clocking constrained by a pin (not shown, pressed into the hole at the lower left), and secured with three machine screws, as shown; (f)~a completed IFU of 20~m length on transport spool. 
Note the protruding fiber slits and two bushings that engage on pins to guide the output slit assembly safely onto the collimator unit. Figure adapted from \citet{hil18a}, Fig. 3. } \end{figure} During production, manufacturers were provided with kits of parts (fiber, mechanical parts, conduit, etc.) and they performed the assembly and polishing of the input and output surfaces to a strict prescription \citep{kelz14}. Final integration of the slit blocks into the output slit assembly was done at AIP. Three production lines were established, based on qualification work at several vendors: AIP\footnote{Assembled by Christian Haubitz-Reinke, Berlin Fibre; www.berlin-fibre.de}, CeramOptec, and FiberWare. Acceptance testing and evaluation at AIP included confirmation of physical properties, microscope examination of polish of the input and output ends against fiducial standards, FRD testing, throughput testing, fiber mapping, and position measurement. Fiber positions within the input head were measured with a precision reimager against a fiducial head that had been measured externally on a coordinate measuring machine in order to relate the fiber core positions to the input head mount features. The fibers deviate from a uniform grid with a dispersion of 10 $\mu$m rms (an on-sky projected dispersion of $0 \farcs 05$). A final system test was performed on each IFU cable with a fiducial spectrograph unit, supplied to AIP by MDO, generating an IFU cable report and fiber mapping files that are used for data processing and record keeping. A detailed description of the design and production of the IFUs and on-sky performance is given in \citet{kelz21}. Design of the fiber handling and deployment of IFUs is described in \citet{vattiat18}. \subsection{Camera and Detector System} \label{sec: camera} The camera cryostat vacuum is shared between a pair of spectrographs; this approach significantly reduces the component count and increases the evacuated volume. 
Similarly, the cryogenic cooling system is shared within a unit, which also reduces the component count of the VIRUS cryogenic system and reduces losses associated with fittings and valves. The VIRUS cryogenic system and its testing are described in \S\ref{sec: vss} and in detail in \citet{chonis10} and \citet{spencer18}. The cryostat is composed of two aluminum castings, post-machined only on critical mount surfaces and flanges. The cryostats were manufactured by MKS Inc., following extensive evaluation of prototypes. An impregnating step with Loctite Resinol, following machining, is intended to seal the porosity of the cast aluminum. However, careful leak-checking of the cryostats using a residual gas analyzer was required in order to locate and repair leaks that compromise the hold time. Applying two epoxy types of high and low viscosity, for larger and smaller leaks, results in consistent vacuum performance. While the tooling cost for casting is quite significant, the price for even a single cryostat of this size is competitive with machining from bulk stock, and is much cheaper for the large VIRUS production run. The CCDs for VIRUS have \hbox{2064 $\times$ 2064} format with 15~$\mu$m pixels. The required readout time is relatively slow at 20~seconds, binned \hbox{2 $\times$ 1,} but low read noise ($\approx$~3~electrons) is required and the parallel readout of 156~CCDs distributed through the volume of the HET structure is challenging. Each CCD is read through two amplifiers, with the serial registers parallel to the spectrograph dispersion direction to avoid splitting spectra across amplifiers. The 156 VIRUS CCDs total 664~megapixels, when fully deployed, which is comparable to the VLT Multi Unit Spectroscopic Explorer (MUSE, \citealt{bacon10}) and to the largest operational imaging mosaics. The single-exposure raw dataset \hbox{(binned 2 $\times$ 1 with 16-bit digitization)} from the full VIRUS array is 664 MB.
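The quoted detector totals and single-exposure data volume follow directly from the array format; the short restatement below is arithmetic only.

```python
# Arithmetic behind the quoted detector totals and single-exposure size.
N_CCDS = 156
NX = NY = 2064           # unbinned format per CCD
BYTES_PER_PIXEL = 2      # 16-bit digitization
SPECTRAL_BIN = 2         # readout binned 2 x 1

unbinned_mpix = N_CCDS * NX * NY / 1.0e6             # ~664.6 megapixels
binned_pixels = N_CCDS * (NX // SPECTRAL_BIN) * NY
raw_mb = binned_pixels * BYTES_PER_PIXEL / 1.0e6     # ~664.6 MB per exposure
```

The 2 $\times$ 1 binning halves the pixel count while the 16-bit digitization doubles the bytes per pixel, so the per-exposure data volume in MB equals the unbinned pixel count in megapixels.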
The integrated detector system was supplied by Astronomical Research Cameras, Inc. (ARC), with the University of Arizona Imaging Technology Laboratory (ITL) providing thinned backside-illuminated CCDs with anti-reflection coatings optimized for the VIRUS bandpass, as a subcontract, from wafers manufactured by Semiconductor Technology Associates, Inc. (STA). Since the CCDs are produced in custom wafer runs, an imaging area of \hbox{2064 $\times$ 2064} pixels was selected, allowing more latitude for alignment. The device is designated STA3600. The design of the detector package, flex circuit, and controller were customized to the VIRUS application, since the engineering effort was spread over a large production run. Figure~\ref{camera} presents renderings of the camera and shows the assembly of a pair of detectors in a cryostat. The structure of the camera assembly is all Invar-36. The CCD package, machined from Invar-36, was designed for minimum obstruction and lies in the shadow of the field flattener lens. The package has a custom header board that brings the traces to a single connector on one side of the package. A custom flex circuit with complex geometry connects the two detectors to a single 55-pin hermetic bulkhead connector. The controller mounts directly to this connector, without external cables, and its form-factor was customized to fit between the cylinders of the cryostat cover. Detector readout is performed through two diagonally opposite amplifiers of the four available in the CCD design. The amplifier pair is selected via jumpers on the readout system flex circuit, within the cryostat. Detectors are vetted by ITL and then at MDO with 3650 \AA~flat field and Fe55 illumination, to verify and optimize performance. \begin{figure}[!ht] \epsscale{1.15} \plotone{camera.pdf} \caption{ \label{camera} \footnotesize The upper renderings show the anatomy of the VIRUS camera assembly.
The upper left-hand view displays how the Invar-36 camera-mirror assembly integrates with the ``spider'' support for the detector and field flattener lens in the cryostat housing. All parts of the camera-mirror assembly are Invar-36. The mirrors have a spherical back for lightweighting. The Invar blade flexures at the top provide radial support and are bonded to flats at the centers of the mirror backs, with mirror adjustments at the three corners that are accessed via vacuum feed-throughs on a temporary adjuster back, during alignment. The upper right rendering shows the electronics signal path via the double-sided flex circuit connecting each detector to the common 4-channel readout electronics (controller). The controller box mounts to a hermetic vacuum feed-through connector that mounts in the cryostat cover (not shown). The lower two panels show two stages of the assembly of a VIRUS camera. The left image shows two CCDs with integrated field flattener lenses mounted on cast Invar ``spiders'' with cold links and flex circuit, integrated in the cast aluminum camera body. The right panel displays the two camera-mirror assemblies integrated, prior to installation of the camera cover, which is also cast from aluminum. Figure partly adapted from \citet{hil18a}, Fig. 4. } \end{figure} The CCD and field flattener lens alignment tolerances are the tightest in the system. To avoid deposition of epoxy in the corners of the CCDs during cure due to outgassing, separate alignment references are used for the field flattener lens and CCD that allow them to be bonded separately and then assembled. Two alignment stations incorporating alignment telescopes allow a CCD and field flattener lens to be bonded simultaneously \citep{lee18c}. The CCD is mounted in the cast Invar ``spider'' (Fig.~\ref{camera}), which has a minimum-obstruction arm to suspend the detector in the beam.
The CCD is aligned in a second fixture by adjusting the three spider mount points, which are glued to set the alignment. The CCDs are flat to within $\pm$~10~$\mu$m over their entire surface, which falls within the tolerance requirements. ITL supplies detailed metrology of the shape and location of the CCD surface in relation to the package mount points, and this information is used during alignment of the CCD and field flattener lens assembly \citep{lee18c}. After 72~hours of cure time, the field flattener lens is installed onto the detector package. Two mirrored spider assemblies are integrated with cold links and flex circuits and then mounted in the camera body (Figure~\ref{camera}, lower left). The three mount points of each spider interface with precise features post-machined into the aluminum casting. The alignment of the CCD head assembly to the axis of the camera is achieved within the required tolerances of 50~$\mu$m in centration and separation, and 0.05$^{\circ}$ in tilt \citep{lee18c}. Since the camera mirror assembly mounts to the same points, the entire camera becomes a single unit with an Invar-36 structure, with only the aspheric corrector plate window (with much lower alignment tolerances) not as tightly integrated (Figure~\ref{camera}). The CCD controllers take 12 VDC input power, with control and data links on fiber-optic connections. In the ARC readout system, the data system requires several levels of multiplexing, utilizing Peripheral Component Interconnect (PCI) and PCI-Express (PCIe) interface cards. Each four-channel CCD controller commands two detectors, each with two readout amplifiers. Ten custom-built multiplexers each combine the output from a set of eight CCD controllers. To minimize crosstalk, the timing of the CCD clocks is synchronized to master clock signals on each multiplexer, distributed over a low-voltage differential signal system.
The output of each multiplexer is fed into a separate PCIe interface card mounted in the VIRUS data acquisition system computer. The data are transferred via direct memory access from the PCIe interface cards into the VIRUS data acquisition system computer memory. The software for VIRUS data acquisition controls monitoring and readout of VIRUS units and the new low resolution integral field spectrograph, LRS2. It is written in C++ and integrated within the HET Telescope Control System (\S\ref{subsec:tcs}, \citealt{rams16,rams18}). The controllers have proven unreliable due to a combination of design, component choice, and build quality. Failures of analog-to-digital converter integrated circuits and power supply components have had the greatest impact. The clock driver channels needed substantial modification to drive the CCD clock capacitance without generating spurious charge that raises the effective readout noise. These issues have been addressed through careful analysis and component changes and will be discussed in a future paper. As reported in \cite{hil16a}, the original VIRUS detectors suffered from failures in the back side surface treatment, triggered by chemical contamination from shipping and lab storage containers, that caused significant quantum efficiency depressions as well as clusters of low quantum efficiency pixels, particularly at the corners. This issue led to the majority of the original delivered generation 1 detectors being unusable, but enough were identified with sufficiently good cosmetics to deploy 16 units in 2016, in order to advance understanding of the system and start commissioning the instrument. New wafer runs were procured from STA in 2017 and 2019, and ITL has been processing these runs. These generation 2 detectors have a thicker epitaxial layer of higher resistivity silicon that has yielded better results\footnote{CCDs are thinned and backside illuminated.
The generation 1 VIRUS CCDs are nominally 17~$\mu$m thick; the generation 2 are nominally 27~$\mu$m thick. The epitaxial silicon has resistivity of 150~$\Omega$~cm for generation 1 and 1000~$\Omega$~cm for generation 2. The CCDs are operated with multi-pinned phase readout.}. As of May 2021, 7 of the generation 1 units remain in the VIRUS array. Some of their cosmetic features are detrimental to data quality, but the devices have remained stable. The small clusters of bad pixels present problems, in particular, since they are of similar size to the instrument resolution elements. Some of the generation 1 CCDs also show many charge traps. In a blind detection experiment of single emission lines, such features must be corrected or eliminated to prevent detection of spurious objects \citep{geb21}. Units with generation 2 CCDs have been steadily delivered to bring the total to 74 units on sky, as of May 2021 (Fig.~\ref{ihmp}). As a gauge of the impact of the CCD cosmetic and readout system issues, 4.4\% of amplifiers have readout issues and 2\% of the resolution elements are masked and eliminated from the data used to date in the HETDEX survey. This percentage is dropping as controllers are repaired and generation 1 CCDs are replaced. CCD delivery is the pacing item in completing the delivery of the final spectrograph units to the HET. A comprehensive description of the detector system for VIRUS will be given in a future paper, once deployment is complete. \subsection{VIRUS Alignment and Characterization} \label{sec: alignment} Following assembly of the collimators and cameras, the spectrographs are integrated and aligned. The alignment procedure involves attaching an adjustment back cover to the camera cryostat, in place of the regular cryostat back. The adjustment cover incorporates six ferrofluidic vacuum feed-throughs for manipulation and locking of the camera mirrors. Small adjustments of the collimator mirror tip, tilt, and piston are also allowed.
A test IFU with an input face mask and no cover plate is used to provide a set of sparse spectral line images of Hg and Cd over the full field of the CCDs for the alignment procedure. Experience with aligning the VIRUS prototype revealed that this step was likely to become a bottleneck in the large-scale production, which led to the development of a deterministic alignment procedure that utilizes moment-based wavefront sensing analysis \citep{lee18c}. This technique relies on the geometric relation between the image shape moments and the geometric wavefront modal coefficients. Moment-based wavefront sensing allows a non-iterative determination of the modal coefficients from focus-modulated images at arbitrary spatial resolutions. The determination of image moments is a direct extension of routine centroid and image size calculation, making its implementation straightforward in the alignment of systems such as VIRUS. The alignment procedure can be accomplished in three hours per channel, once the detectors are cold. The resultant image quality exceeds the specifications in most cases, due to the achieved accuracy of the alignment of the field flattener lens to the CCD. After alignment, VIRUS units undergo a characterization \citep{indahl18} before being packed for transport to the HET. The characterization station is located in a separate lab that can be darkened sufficiently to ensure no stray light for the tests. A lab calibration unit \citep{indahl16} houses a broad-band laser-driven light source (\S\ref{subsec:fcu}) for flat-fielding and mercury and cadmium line lamps for wavelength determination. A standard production IFU is designated for these tests so there is a uniform reference. In addition, a ``pixelflat'' head that mounts in place of the IFU head and provides a continuous illumination of two slits (rather than the highly spatially-modulated fiber IFU output) is utilized to provide the source for pixel-flat-fields of the detector.
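The moment calculation that underpins this alignment analysis is, as noted, a direct extension of routine centroiding. A minimal sketch of first and second image moments follows; it is illustrative only, and does not reproduce the production analysis of \citet{lee18c}, which relates focus-modulated moments to wavefront modal coefficients:

```python
import numpy as np

def image_moments(img):
    """First and second moments of a spot image: centroid and RMS widths.

    Illustrative sketch only -- the actual VIRUS alignment analysis
    relates focus-modulated moments to wavefront modal coefficients.
    """
    img = np.asarray(img, dtype=float)
    y, x = np.indices(img.shape)
    total = img.sum()
    cx = (x * img).sum() / total                 # centroid, x
    cy = (y * img).sum() / total                 # centroid, y
    mxx = ((x - cx) ** 2 * img).sum() / total    # second moments
    myy = ((y - cy) ** 2 * img).sum() / total
    mxy = ((x - cx) * (y - cy) * img).sum() / total
    return cx, cy, mxx, myy, mxy

# For a symmetric Gaussian spot, the second moments recover sigma^2
# and the cross moment vanishes.
yy, xx = np.mgrid[0:64, 0:64]
spot = np.exp(-((xx - 30.0) ** 2 + (yy - 25.0) ** 2) / (2 * 3.0 ** 2))
cx, cy, mxx, myy, mxy = image_moments(spot)
```

For a well-sampled symmetric spot the cross moment is near zero and $m_{xx} \approx m_{yy} \approx \sigma^2$; departures from this encode the shape information that a deterministic, non-iterative alignment can exploit.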
The pixelflat approach produces a flat field free of spatial-dimension fiber modulation, and therefore allows characterization of the pixel-to-pixel quantum efficiency variations and identification of bad pixels. Bias levels are set and photon transfer curves are generated to determine the read noise and gain of each channel. Sets of bias and dark frames are recorded to act as a reference once the units are installed at HET. Figure~\ref{VIRUSchar} provides some examples of outputs from characterization of VIRUS spectrographs and IFUs. The spectrographs produce excellent image quality and the fiber profiles are characterized using the sparsely illuminated IFU input. The profiles are well fitted by a Gaussian-Hermite function with exponential wings. The wings contain about 3\% of the total light and are consistent with the scattering expected from the surface roughness specifications of the spectrograph optics. Fig.~\ref{VIRUSchar} also shows the excellent separation (contrast) between fibers that is achieved in the alignment process \citep{lee12b}. The lower panels in Fig.~\ref{VIRUSchar} present two examples of the IFU fiber-to-fiber throughput measured relative to the highest throughput fiber, which is part of the \textit{LabCure} report generated at AIP \citep{kelz14, kelz21}. Typical variations are around 10\%, peak-to-peak, with some systematics depending on the fiber location in the slits. \begin{figure}[!ht] \epsscale{1.0} \plotone{VIRUSchar.png} \caption{ \label{VIRUSchar} \footnotesize Examples of characterization data obtained on VIRUS spectrograph units and IFUs. Top left: Fiber profiles of isolated fibers in the spatial dimension measured as part of the spectrograph characterization, plotted on a logarithmic scale against offset from profile center, showing Gaussian-Hermite core and in red the exponential wings. The wings sum to $\sim$3\% of the integrated flux and account for all the light seen between fibers.
Top right: Fiber profiles against pixel number in the spatial dimension measured as part of the IFU characterization at six locations on the detector. This example exceeds requirements. The profiles indicate excellent contrast between the peak and trough of the profiles. The three profiles on the lower row straddle the break between amplifiers where there is a 3-fiber gap. The lower two panels show typical examples of fiber throughput relative to the peak fiber for two IFUs. The axes are nominal fiber position from the center expressed in arcseconds projected on sky. The fibers are shown oversize compared to reality. The top half of the fibers connects to one of the spectrograph channels and the bottom half feeds the other channel. The split between spectrograph channels occurs at the center of the IFU. Typical fiber-to-fiber non-uniformity is at the 10\% level as seen in these examples. The left-hand IFU has a broken fiber near the center. Figure adapted from \citet{hil18a}, Fig.~5. } \end{figure} \subsection{VIRUS and LRS2 Support Infrastructure and Deployment} \label{sec: vss} VIRUS and LRS2 are fiber-fed, which allows the mass of the spectrographs to be carried in two enclosures \citep{prochaska14}, one on each side of the telescope structure (Figures~\ref{HETlayout} and~\ref{vss}). Each enclosure can support 40~pairs of spectrographs, providing capacity for the 78 VIRUS units and the two units of LRS2. The enclosures are carried by the VIRUS Support Structure, which is a complex weldment that interleaves with the main telescope structure without applying loads to it that could couple wind-induced vibration from the enclosures to the telescope. It rides on separate air-bearings that lift it during changes in telescope azimuth, and is linked to the main structure allowing it to be moved by the main azimuth drive. The enclosures exclude light and dust as much as possible.
They have filtered air circulation and heat extraction to remove the heat generated by the VIRUS controllers and to keep the skin temperature of the enclosures close to ambient, so that they do not impact the dome seeing. The weldments for the enclosures were procured by MDO and were outfitted with hatches, seals, cables and the heat removal system by TAMU \citep{prochaska14}. The VIRUS support infrastructure, installation, and maintenance procedures are described in \cite{spencer18}. \begin{figure}[!ht] \epsscale{1.1} \plotone{vss.png} \caption{ \label{vss} \footnotesize View of the HET from the front showing the primary mirror and the large VIRUS enclosures on either side of the telescope structure. The enclosures sit on the VIRUS support structure, which moves on air bearings to follow the azimuth setting of the telescope. The liquid nitrogen phase separator tanks of the VIRUS cryogenic system can be seen mounted to the top of each enclosure. In total, the enclosures and VIRUS support structure weigh 42,500 kg (measured with pressure sensing film and via air-bearing pressures) when all VIRUS units are deployed, and each enclosure has dimension \hbox{6.5 m wide $\times$ 1.23 m deep $\times$ 6.15 m tall.} Figure adapted from \citet{hil18a}, Fig.~6. } \end{figure} The distributed and large-scale layout of the VIRUS array presented a significant challenge for the cryogenic design \citep{smith08, chonis10}. Allowing a 5~W heat load for each detector, accounting for all losses and a 50\% margin, the cooling source is required to deliver 3,600~W of cooling power. Following a trade-off between cryocoolers, small pulse-tubes, and liquid nitrogen based systems, it was clear, from a reliability and cost point of view, that liquid nitrogen was the optimum choice. The challenge of supplying the coolant to the distributed suite of spectrographs was overcome by adopting a gravity siphon system fed by an 11,000~gallon external tank.
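As a sanity check, these cooling powers can be related to liquid-nitrogen consumption through the latent heat of vaporization. The latent heat and liquid density below are standard handbook values, assumed here rather than taken from the text:

```python
# Convert a cryogenic heat load in watts to liquid-nitrogen boil-off.
# Physical constants are handbook values (assumptions, not from the paper).
L_VAP = 199e3   # J/kg, latent heat of vaporization of N2 at ~77 K
RHO = 0.807     # kg/L, density of liquid nitrogen

def ln2_liters_per_day(power_w):
    """Liters of LN2 boiled off per day to absorb `power_w` watts."""
    kg_per_day = power_w * 86400 / L_VAP
    return kg_per_day / RHO

full_design = ln2_liters_per_day(3600)   # ~1940 L/day at full design load
operational = ln2_liters_per_day(2200)   # ~1180 L/day at the in-use load
```

The operational value is consistent with the measured tank draw quoted below, which lends confidence to the quoted equivalence between liters per day and watts of cooling.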
An important aspect of the cryogenic design is the ability to remove a camera cryostat or spectrograph unit from the system for service without impacting the other units. This capability is particularly difficult in a liquid distribution system. A design was developed that couples a standard flexible stainless steel vacuum-jacketed line (SuperFlex) to a cryogenic bayonet incorporating copper thermal connector contacts into each side of the bayonet. When the bayonet halves are brought together they close the thermal contact. The resulting system is completely closed, i.e., it is externally dry with no liquid nitrogen exposure. The camera end is connected by a copper cold finger to the detector. This design has another desirable feature: in normal operation the SuperFlex tube slopes downwards and the bayonet is oriented vertically, so that vapor from the evaporating liquid flows monotonically upwards and a vapor lock is avoided. If the bayonet is unscrewed and raised upwards, a vapor lock will occur and the bayonet will be cut off from the cooling capacity of the liquid nitrogen. This configuration effectively acts as a ``gravity switch'', which passively halts cooling to that camera position, for maintenance or removal. This feature has been key in enabling the staged deployment of VIRUS units, while allowing the VIRUS cryogenic system to remain in continuous operation. The VIRUS cryogenic system was constructed by Midwest Cryogenics and installed in 2012, and has been in continuous use since then. The draw on the external tank is approximately 1200 liters per day or about 2,200 W. That is approximately 60\% of the design cooling power. The external tank has capacity for one month's supply and is replenished by regular deliveries of liquid nitrogen, roughly twice a month. An essential part of the VIRUS cryogenic system is its safety system \citep{spencer18}.
This system continuously monitors critical variables (e.g., dome atmosphere oxygen levels and liquid nitrogen pressure, flow rates, and storage tank level). When predefined limits are exceeded, the system automatically activates strategically-located audio and visual alarms, and if required, closes the main liquid nitrogen supply line valve. Each afternoon the system performs an auto-test of the alert system, and sends test telephone alerts to the recipient list, ensuring that the system cannot cause an unsafe condition or go off-line for an extended period without being noticed. The VIRUS array underwent a staged deployment of IFUs and spectrograph units, starting in late 2015 \citep{tutt16,hil16a, vattiat18,spencer18}. The left panel of Figure~\ref{ihmp} shows the full complement of 78 VIRUS IFUs deployed at the focus of HET, along with the two IFUs of LRS2 and the HPF fiber feed. As of 2021~May, 74 of the VIRUS IFUs are attached to spectrograph units. This number is considered complete for the purposes of the HETDEX survey, but the final four units will be brought on line as they become available. Installation and maintenance of the VIRUS spectrograph units is described in \citet{spencer18}. Primary ongoing activities are vacuum maintenance and special calibrations. VIRUS cryostats are not outfitted with vacuum gauges, but the combination of cold block heater current and detector temperature setting allow the state of the vacuums to be monitored. CCD temperatures are held at set points between -110 and -100 $^\circ$C. When the heater power needed to maintain a set point drops below a threshold, the set point is adjusted warmer and, before it reaches -100 $^\circ$C, the unit is scheduled for cold vacuum pumping. Cold pumping during the day lasts typically 4-6 hours and results in the vacuum being improved from $\sim$2$\times$10$^{-3}$ to $\sim$10$^{-5}$ mbar. Two cryostats can be pumped at once and most last in excess of 4 months between pumpings.
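The vacuum-monitoring policy just described, with heater power acting as a proxy for cryostat vacuum state, can be sketched as follows. The power threshold and set-point step are illustrative placeholders, not the operational values:

```python
# Hedged sketch of the VIRUS vacuum-monitoring policy: heater power acts
# as a proxy for the cryostat vacuum. Numeric thresholds are placeholders.
PUMP_LIMIT_C = -100.0   # schedule pumping before the set point reaches this
POWER_FLOOR_W = 0.5     # assumed heater-power threshold (placeholder)
STEP_C = 2.0            # assumed set-point adjustment step (placeholder)

def review_unit(set_point_c, heater_power_w):
    """Return (new_set_point, action) for one VIRUS cryostat."""
    if heater_power_w >= POWER_FLOOR_W:
        return set_point_c, "ok"            # vacuum still adequate
    new_sp = set_point_c + STEP_C
    if new_sp >= PUMP_LIMIT_C:
        # warming further would pass -100 C: time for cold vacuum pumping
        return set_point_c, "schedule cold pumping"
    return new_sp, "warm set point"
```

Running the check daily for each cryostat reproduces the described behavior: set points drift warmer as vacuums degrade, and units are queued for pumping before crossing $-100\,^\circ$C.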
The effort required to maintain the vacuums on 78 cryostats is significant, and an upgrade to add ion pumps to all the units is underway. The principal special calibrations are defocused flat field images that are processed to create pixelflats that isolate the small-scale pixel-to-pixel variations in the flat field. Normal flat fields have the strong spatial modulation of the fiber traces that is on a similar scale to features in the CCD flat field (see Fig. \ref{VIRUSchar}). Spacers are introduced at the kinematic mounts between the IFU slit unit and the spectrograph collimator to sufficiently defocus the spectra that the spatial brightness modulation can be fit and removed to leave the small scale structure in the flat field \citep{geb21}. Introducing the spacers and taking the calibration data is time-consuming and the HET staff work to cycle through the full array of VIRUS units once per year. That cadence is sufficient to reveal any small changes in the flat fields with time. \section{VIRUS Performance} \label{sec:Vperformance} With 95\% of the VIRUS array deployed, the performance can be compared with the requirements and technical specifications. VIRUS marks the first time a single spectrograph design has been replicated on such a large scale in astronomy, and it is of interest to examine the uniformity of properties across a large cohort of ostensibly identical instruments. In this section, properties of the spectral coverage and resolution, experience with the deployed IFUs, and spectrograph stability and performance are presented. \subsection{Spectrograph coverage and resolution} \label{sec:speccomp} Figure~\ref{Vcoverage} presents statistics on the wavelength coverage of VIRUS spectrograph channels. The image of calibration lines reveals the curvature of lines of constant wavelength with slit position, which was taken into account in the design so as to preserve a minimum of 2000~\AA\ of wavelength coverage from 3500--5500~\AA.
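This design range maps onto the targeted Ly-$\alpha$ redshift window via $z = \lambda_{\rm obs}/\lambda_0 - 1$; a minimal check, assuming the standard rest wavelength $\lambda_0 = 1215.67$~\AA\ (a textbook constant, not stated in the text):

```python
# Map the VIRUS wavelength limits onto the Ly-alpha redshift window.
# The rest wavelength is the standard value, assumed here.
LYA_REST = 1215.67  # Angstrom

def lya_redshift(wavelength_aa):
    """Redshift at which Ly-alpha lands at the given observed wavelength."""
    return wavelength_aa / LYA_REST - 1.0

z_min = lya_redshift(3500.0)   # blue limit of the design range
z_max = lya_redshift(5500.0)   # red limit of the design range
```

The resulting window of roughly $z = 1.88$ to $3.52$ brackets the survey requirement.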
The fringe frequency of the volume phase holographic gratings is the primary driver of the coverage, while the fringe frequency and angles of incidence and diffraction set by the spectrograph structure control the minimum wavelength. Uniformity of wavelength coverage of one spectral resolution element was targeted, leading to a specification on fringe frequency of $\pm$ 2 fringes and on structure accuracy after alignment of 0.025 degrees. The histograms in Fig.~\ref{Vcoverage} demonstrate a mean coverage of 2003.3~$\pm$~2.1~\AA~ and a mean minimum wavelength of 3492.4~$\pm$~5.9~\AA. The dispersion in minimum wavelength is one resolution element, which was the adopted specification. This wavelength range corresponds to redshift coverage of 1.87 to 3.52 for Ly-$\alpha$, bracketing the high-level requirement of 1.90 to 3.50. \begin{figure}[!ht] \epsscale{1.0} \plotone{Vspeccoverage.png} \caption{ \label{Vcoverage} \footnotesize VIRUS spectral coverage. Left panel: A spectrograph channel illuminated by Hg+Cd emission line lamps in the lab, following alignment. Wavelength increases going up. Green circles are measurement marks for position on the CCD. Note the intentional gaps with fibers missed in the slit layout. The central gap of 3 fibers provides margin for alignment so spectra do not bridge the (vertical) readout split between the two amplifiers per CCD. The gaps at about 2$/$3 of the slit extent and CCD pixels beyond the ends of the slit provide the ability to better separate the wings of the fiber image profiles. The curvature is factored into the wavelength coverage of the channels. Upper right: The minimum wavelength coverage accessible for all fibers for 138 spectrograph channels. Lower right: The minimum wavelength of the same channels. The dispersion in total wavelength coverage is 2.1~\AA~and in minimum wavelength it is 5.9~\AA. For reference, the spectral resolution is 5.6~\AA. See text for discussion. 
} \end{figure} \begin{figure}[!ht] \epsscale{1.2} \plotone{Vresolution.png} \caption{ \label{resolution} \footnotesize VIRUS resolution. Left panel: Median variation of instrumental resolution expressed in unbinned pixels FWHM over the areas of the Left and Right channel CCD detectors. 3500~\AA\ wavelength is to the left and 5500~\AA\ to the right in each channel. Note that the (2064$\times$2064 pixel) CCDs are binned by a factor of 2 in the spectral dimension. The horizontal line shows the position of the break between the two amplifiers for each channel detector readout. Right panel: Median resolution (FWHM) in unbinned pixels for 141 spectrograph channels active on 2019 October 19. The scale is the same as in the left panel and the color bar on the right shows the full range of values and applies to both panels. This projection shows the layout of the IFUs within the focal surface of the input head mount plate. The LRS2-B and LRS2-R IFU positions are marked ``B'' and ``R''. Positions are expressed in arcseconds projected on sky. The X axis is perpendicular to the tracker bridge and the Y axis is in the direction of the tracker bridge with positive Y being the parallactic angle. Each IFU feeds two spectrograph channels with the Left channel mapping to the lower half of the IFU in this projection. IFU seat positions with no spectrograph attached are shown as dashed outlines. The IFU with only one active channel was exhibiting a readout problem at the time the data were taken. See text for discussion. } \end{figure} Instrumental resolution is very uniform from channel to channel and stable over time. Figure~\ref{resolution} shows the resolution over the CCDs of the Left and Right spectrograph channels. The values are color-coded as unbinned pixel FWHM and calculated from the median over each channel. The right panel in Fig.~\ref{resolution} shows the distribution over the HET focal surface of median resolution by channel, for deployed spectrograph units.
The resolution is expressed in unbinned pixels FWHM, though in practice the spectral dimension is usually binned by a factor of 2. In the right panel, the mapping of one IFU to two spectral channels is evident with small variations in spectral resolution on the two halves of the IFU. Figure~\ref{dispersion} shows the uniformity of the dispersion relation over the same channels (left panel) and shows histograms of the range of spectral resolution exhibited by each channel in \AA\ FWHM (right panel). These analyses show very consistent performance from channel to channel as expected from the flow-down of requirements to component specifications. Spectrograph resolution and long-term stability of the spectrographs can be monitored with internal calibration sources and through observations of extended emission-line regions. The high stability required for the position of the spectra on the CCDs of $<$0.5 (unbinned) pixels over the 5~$^\circ$C temperature swing typical of an observing night (\S\ref{sec: spectrographs}) has been met. Analysis of 61 channels shows average (maximum) shifts of the spectra of 0.03 (0.18) pixels in the spatial dimension and 0.09 (0.23) pixels in the spectral dimension, for a 5~$^\circ$C temperature change, where the factor of 2 binning in the spectral dimension has been accounted for. The spectral dimension shows slightly greater variation, corresponding to $\delta \lambda \sim$0.1~\AA\ on average, but the total shifts even in the worst case are well below the requirement. As a result, calibrations can be applied over a whole night. \begin{figure}[!ht] \epsscale{1.1} \plotone{Vdispersion.png} \caption{ \label{dispersion} \footnotesize VIRUS dispersion. Left panel: Dispersion relation for 141 spectrograph channels. Dispersion is expressed as \AA\ per binned pixel. The median relation and 1-$\sigma$ variation are shown as a black line and grey shading, respectively. The median dispersion is 1.99 \AA\ per binned pixel. 
Right panel: The variation in median resolution for the channels shown in Figure~\ref{resolution} converted from pixels to \AA. } \end{figure} \subsection{IFU Fiber Cable Performance} \label{subsec:ifuperf} Deployment of IFUs is discussed in \citet{vattiat18} and \citet{spencer18}. Performance of the IFUs is discussed in \citet{kelz21}. During installation, care is taken to ensure that axial twists are not introduced, since experience with the Mitchell Spectrograph prototype revealed a failure mode where repeated twists produced stress in the corner fibers of the IFU input, resulting in severe FRD for those fibers. Once installed, the IFUs hang in simple catenaries between strain relief points on the PFIP and on the VIRUS enclosures. Motion of the tracker simply changes the distance between the strain relief points. Extensive prototype testing \citep{murphy12} involving motions equivalent to 10 years of operation revealed only an initial relaxing of the conduit, and lengthening relative to the fibers, as a concern after installation. The tailpieces of the IFUs, where the fiber branches into the two slits, have a clamp to allow adjustment of the axial position of the conduit relative to the fibers to remove any tension that might appear (Fig.~\ref{ifuprod}c). After an initial examination and adjustment of any that exhibit tight fibers, subsequent checks have not revealed significant signs of the fibers translating axially. Checks are made on approximately a one-year cadence at the same time as pixelflat calibrations are obtained. The IFUs are extremely stable, physically and in their optical properties such as transmission. The IFU specification allows for an average of 1 broken fiber (0.2\%) and a maximum of 4, per IFU. In practice, broken fibers are rare, with 37 (0.1\%) in total in the deployed array.
\subsection{VIRUS Throughput and Sensitivity} \label{subsec:Vthuput} The throughput requirement of the system of VIRUS units and the HET was calculated during the Preliminary Design Review phase of HETDEX \citep{hil08b} and the baseline was established in 2010. Sensitivities and predicted number of detected emission-line objects were derived and compared to the project's science requirements. Typical observing conditions were included in the simulations. This analysis established the required number of fibers and hence the number of VIRUS units needed (\S\ref{sec:design}). The overall throughput was apportioned at the individual component level, in order to establish requirements for the minimum performance of each component along with mean performance requirements for production sets of components. This approach allowed manufacturers some leeway for the inevitable range of component performance, without having to reject many components with the resulting increase in cost and project schedule. \edit1{The left panel of Figure~\ref{throughput} presents the median efficiency (throughput) of each of the components of VIRUS. It also shows the obstruction model that accounts for the CCD package at the prime focus in the camera \citep{hil18a}. If light is transmitted through the fibers with little focal ratio degradation (FRD), as intended, then the dark central obstruction of the telescope pupil will be preserved in the far-field image at the output of the fibers. There is azimuthal scrambling of the light as it is transmitted through the fiber but ideally very little radial scrambling, which is the definition of good FRD performance. In that case, on axis at field center in the spectrograph camera, the obstruction of the detector package largely coincides with the pupil central obstruction, which is dark, and there is less light loss than off-axis in the camera where the detector package and central obstruction are not aligned. 
Hence, the shape of the obstruction curve shown in Figure~\ref{throughput} peaks at field (or wavelength) center. Poor FRD would depress the obstruction curve at the center due to light being scattered radially into the central obstruction of the far-field light distribution during transmission through the fibers \citep{murphy08,murphy12}. } The components that most affect the overall performance are the IFU fiber transmission, grating, and CCD (Fig.~\ref{throughput}). \edit1{\citet{indahl18} present the mean performance and variations of each component.} The gratings show some blaze variation that trades efficiency at 5500~\AA~ against efficiency at 3500~\AA~\citep{chonis14,indahl18,indahl21}. The blaze variation was quite well controlled and improved through production, but amounts to a spread of $\sim$10\% of the mean efficiency at 3500 \AA~ and $\sim$26\% at 5500 \AA~(2-$\sigma$). The effect on sensitivity of this variation is more pronounced at the shorter wavelengths where the overall throughput is dropping off. The reflectivity specifications for the multi-layer dielectric reflectors \edit1{(collimator, fold-flat, and camera mirrors)} were sufficiently stringent that they do not significantly contribute to the dispersion in properties, except for isolated dips in efficiency as seen in Figure~\ref{throughput}. A small number of coating batches had to be replaced as they were out of specification. The VIRUS units incorporate 156 spectral channels that are individual realizations of the same spectrograph with the varying performance of the individual components. Mirrors, gratings, IFUs and CCDs have measured efficiencies that can be combined to compare to the on-sky throughputs of deployed spectrograph channels. \citet{indahl18, indahl21} discuss the range in properties of the VIRUS components and examine multiple realizations of the spectrograph to explore the expected range of throughputs for the spectral channels.
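The Monte Carlo construction of spectrograph realizations from measured component efficiencies can be sketched as follows. The flat mean curves and scatter amplitudes are illustrative placeholders, not the measured component distributions:

```python
import numpy as np

# Hedged sketch: build many channel-throughput realizations by multiplying
# randomly drawn component efficiency curves. All mean levels and scatter
# amplitudes below are placeholders, not the measured distributions.
rng = np.random.default_rng(1)
wave = np.linspace(3500, 5500, 41)   # Angstrom

def draw(mean_curve, frac_sigma):
    """One component realization: mean curve with fractional scatter."""
    return mean_curve * (1 + frac_sigma * rng.normal())

means = {"mirrors": 0.90 * np.ones_like(wave),   # placeholder curves
         "grating": 0.65 * np.ones_like(wave),
         "fibers":  0.70 * np.ones_like(wave),
         "ccd":     0.80 * np.ones_like(wave)}
sigmas = {"mirrors": 0.01, "grating": 0.08, "fibers": 0.05, "ccd": 0.04}

# 156 realizations, one per spectral channel in the VIRUS array
channels = np.array([np.prod([draw(means[k], sigmas[k]) for k in means],
                             axis=0)
                     for _ in range(156)])
spread = channels.std(axis=0) / channels.mean(axis=0)  # fractional scatter
```

With component scatters of a few to ten percent, the fractional spread of the product is their quadrature sum, of order 10\% (1-$\sigma$), which is the mechanism behind the channel-to-channel spread discussed in the text.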
The right panel of Figure~\ref{throughput} presents the results of combining the efficiencies of randomly-selected manufactured components to create 156 random realizations of the spectrograph channel throughput. The 2-$\sigma$ spread in throughput is approximately~$\pm$25\% except at the ends of the wavelength range, where it rises to $\pm$30\%. This is in line with expectations for the specifications of the components. Figure~\ref{throughput} also compares the average of these simulated spectrographs to the model of VIRUS throughput adopted in 2010. The model and the derived throughput from simulated components both include the field- and wavelength-dependent correction to account for the obstruction of the detector package in the Schmidt camera of VIRUS, as discussed above \citep{hil18a}. The ends of the wavelength range presented the greatest challenge for meeting throughput requirements, due to the fiber transmission, CCD quantum efficiency, and grating blaze. This fact is reflected in the higher throughput performance than required in the middle of the wavelength range and the slight shortfall at the extreme ends. Overall, this comparison indicates that the predictions of component performance were realistic and the specifications supplied to the manufacturers on a batch basis could, on average, be met or exceeded over most of the bandpass. A comparison can be made between these predictions and the measured on-sky performance by combining them with a model of the HET. The HET model shown in the left panel of Figure~\ref{onsky} includes the WFC obstructions and primary mirror illumination at field center with the tracker on axis, and accounts for those losses relative to a 10~m unobstructed aperture above the atmosphere with an atmospheric extinction model at an airmass of~1.25, which is the mean value for a HET track. 
Mirror reflectivities measured for the WFC are included and the primary mirror segment reflectivity includes a wavelength-dependent degradation to account for a mean segment coating age of 20 months. \edit1{The primary mirror degradation model is based on limited on-sky measurements made with the original HET of stars with the primary mirror segments destacked so as to produce discrete star images for each segment. Comparison of brightness between freshly coated segments and those of varying ages results in mean degradation coefficients of between 1 and 2\% loss per month over the bandpass of VIRUS, increasing towards shorter wavelengths. These coefficients are quite uncertain and the primary mirror reflectivity is the least well understood component in this prediction of HET throughput. } \begin{figure}[!ht] \epsscale{1.18} \plotone{throughput_revised.pdf} \caption{ \label{throughput} \footnotesize VIRUS component efficiencies and spectrograph channel throughputs. \edit1{Left panel: Median performance of each of the components of a VIRUS spectrograph channel as a function of wavelength, calculated from data delivered by vendors. The three curves with the highest throughput are the reflective coatings on the mirrors, the blue solid curve is the CCDs, the orange solid curve is the IFU fibers, and the green solid curve is the gratings. The grey dashed curve is the spectrograph obstruction of the detector package in the camera in relation to the central obstruction of the pupil from the telescope as transmitted through the fibers. This curve assumes good focal ratio degradation (FRD) performance so the central obstruction remains dark within the spectrograph and best throughput is achieved at camera field center. The lowest black line shows the combination of these component medians and the obstruction.} Right panel: Predicted range of VIRUS unit throughputs without the telescope and atmosphere. 
The grey curves present 156 realizations of the spectrograph channel throughput made by multiplying randomly-selected components, along with the same model for the spectrograph obstruction presented in the left panel. The black line is the mean of those 156 channels and can be compared to the black line in the left panel. This ground-up average is compared to the prediction from 2010 on which the VIRUS specifications were based, shown in orange. That prediction incorporates the same spectrograph obstruction model, so the higher throughput in the middle of the wavelength range reflects the component performance being better than specifications, except at the extreme ends of the wavelength coverage. The broad dip around 4300 - 4500 \AA\ is a feature in the reflectivity curves of the dielectric mirror coatings that was not taken into account in the prediction. } \end{figure} The measured on-sky VIRUS throughput shown in the right panel of Figure~\ref{onsky} was bootstrapped from detailed measurements of standard stars made with the LRS2 instrument, which is well calibrated and stable. Spectrophotometric standards are observed every night with LRS2, and the most pristine conditions were selected to measure the throughput of the system for 3700~-~10500~\AA. The 98\% fill-factor lenslet-coupled IFUs of LRS2, with $0 \farcs 6$ spatial elements, allow essentially all the photons from a star's point-spread-function (PSF) to be recorded. It is considerably more challenging to account for a star's PSF with VIRUS data due to the $1 \farcs 5$ fiber size and the need to acquire three dithered exposures to fill in the fiber pattern (\S\ref{sec:virus}). A low-order polynomial fit to the VIRUS throughput curve was obtained by comparing simultaneous observations of the sky between LRS2-B and VIRUS units, accounting for the different spatial element areas and the field illumination pattern of the HET.
Wavelength regions with structure in the throughput of both instruments were avoided in the normalization. The VIRUS throughput is the average of deployed units, corrected to the center field, center track. This measure removes the field illumination pattern caused by vignetting in the WFC that has a maximum loss of 10\% for the edge IFUs at 9\arcmin~ field radius, with the tracker centered. Figure~\ref{onsky} demonstrates good agreement in the shape and amplitude of the system response between prediction and observation. There is the potential for some inaccuracy in the model of the HET mirror reflectivities, particularly the primary mirror segments that are recoated on roughly a one-year cycle\footnote{During the COVID-19 pandemic telescope access restrictions prevented mirror segment coating, and the desired 12-month cycle time has extended to more than 20 months.}, or due to fiber FRD scattering some light into the central obstruction at the pupil of the spectrograph channels. However, the solid agreement in the shape of the system response suggests that the VIRUS optics obstruction model, assuming good fiber FRD performance, is on average valid. Figure~\ref{onsky} also presents the measured variation in on-sky throughput for deployed spectrograph channels, corrected to field center for the HET field illumination pattern. The curves are generated from twilight sky observations to produce the normalized relative throughput for each channel compared to the average; these normalized curves are multiplied by the average throughput derived from comparison with LRS2-B to generate the curves presented for each spectrograph channel. The LRS2-B comparison is for wavelengths above 3700 \AA. Below this wavelength the curve is extrapolated and hence may be subject to some error.
There is some shortfall in measured throughput below 4000~\AA\ compared to the model, with the measured values falling to about 0.9 times the prediction at the shortest wavelengths, but overall the on-sky performance of the spectrograph channels is in good agreement with expectations. Considerable focus in design and production was applied to maximize the throughput at 3500~\AA, in spite of the combined challenges posed by fiber length, CCD efficiency, and atmospheric transmission, since the majority of LAEs will be detected between 3500 and 4500~\AA, due to their distance modulus and luminosity function \citep{hil08b, geb21}. \begin{figure}[!ht] \epsscale{1.18} \plotone{onsky_revised.pdf} \caption{ \label{onsky} \footnotesize On-sky throughput of VIRUS plus HET and atmosphere, compared to expectations. \edit1{Left panel: Components of the HET throughput model that transforms the spectrograph throughputs in Figure~\ref{throughput} to the on-sky prediction. The upper grey dashed curve is the atmospheric transmission for McDonald Observatory at 1.25 airmasses (the HET is a fixed elevation telescope). The black solid curve is the prediction for the primary mirror segments that have bare aluminum coatings along with a model for the coating reflectivity degradation as a function of wavelength. The degradation model is based on limited measurements made against freshly coated segments using on-sky star observations. The degradation is shown for 20 months. The dashed blue curve is the expected reflectivity plus on-axis obstruction for the mirrors of the wide field corrector, based on witness samples from the mirror coating deposition. The orange line is the combination of these components and represents the best estimate of the HET on-axis throughput at center track.} Right panel: The orange curve shows the prediction for the average VIRUS throughput based on the 2010 model shown in Figure~\ref{throughput} combined with the model of the HET and atmosphere \edit1{presented in the left panel}.
The black curve is the average measured on-sky throughput for 135 VIRUS channels corrected for the field vignetting of the WFC at track center by comparing to the LRS2-B throughput under the best observing conditions. LRS2-B coverage is above 3700 \AA. Grey curves show the individual throughputs of 135 channels, measured on sky and normalized to LRS2-B in the same way, as described in the text. Throughput is defined as the fraction of photons incident upon an unobstructed 10 m diameter aperture above the atmosphere that result in detected electrons on the CCD. } \end{figure} HETDEX observations are calibrated directly against field stars in the field of view, and provide independent confirmation of the analysis presented here \citep{geb21}. The definition of throughput adopted here is different from that in the HETDEX data reduction pipeline where the number of detected electrons is compared to photons incident on a 50 m$^2$ area, rather than a 10 m diameter unobstructed aperture, and the field illumination of each specific IFU is included. The best average throughputs at 4940 \AA, measured in HETDEX data, are 18\% with that definition. Correcting the average throughput at the same wavelength in Figure~\ref{onsky} for the difference in effective aperture and the average illumination for the IFUs, which is 0.94, yields 18.6\% under the same definition. More typical observing conditions yield observations with throughputs of 15-16\% under the HETDEX definition, but the best values are in good agreement with the calibration presented here, considering that the two methods of measuring the throughput are quite different and independent. The shape of the average system response also agrees well with that derived from the calibration of HETDEX observations over the full wavelength range, which further indicates that the cross-calibration between VIRUS and LRS2-B is well-defined and stable. 
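The agreement between the two throughput definitions quoted above reduces to a simple aperture-area and illumination scaling. A minimal sketch (the function name is ours, and the 12.6\% input value is illustrative, back-solved from the quoted 18.6\%):

```python
import math

def to_hetdex_definition(t_10m, illumination=0.94, ref_area=50.0):
    """Convert a throughput defined against an unobstructed 10 m diameter
    aperture into the HETDEX pipeline definition, which uses a 50 m^2
    reference area and includes the average IFU field illumination."""
    area_10m = math.pi * (10.0 / 2.0) ** 2  # ~78.5 m^2
    return t_10m * (area_10m / ref_area) * illumination

# An average throughput of ~12.6% at 4940 A under the 10 m definition
# (illustrative value) maps to the HETDEX pipeline definition:
print(round(to_hetdex_definition(0.126), 3))  # ~0.186, i.e., 18.6%
```

The area ratio alone is $\pi\,5^2/50 \approx 1.57$, so the two definitions differ by roughly a factor of 1.5 before the illumination correction is applied.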
Another approach to evaluate the performance is to compare detection sensitivities with predictions. The noise in blank sky spectra is well characterized, since the vast majority of fibers in a given observation only contain sky signal. \edit1{The most direct measure of sensitivity to compare with predictions is the noise level per resolution element in observations of the dark sky. For objects near the detection limit, the sky noise level determines the sensitivity.} Figure~\ref{sensitivity} (left panel) presents the sensitivity of VIRUS, expressed as the 5-$\sigma$ noise in the sky per fiber, per resolution element, compared to an estimate derived from the throughput, sky, and atmosphere models developed in 2010. The sky model used in the prediction was derived from observations with the Mitchell Spectrograph for the HETDEX Pilot Survey \citep{adams11}. Some terrestrial sky features such as HgI$\lambda$5461 have become noticeably stronger over the decade between the Pilot Survey and HETDEX observations, \edit1{but the overall sky brightness level is the same.} The single fiber flux limit based on the sky is best for direct comparison between observations and the model, without having to account for variable image quality and transparency \edit1{that will determine the flux limit for observed sources.} Figure~\ref{sensitivity} indicates that by this measure the system is delivering similar or better sensitivity than predicted at wavelengths longer than $\sim$3700~\AA, and poorer sensitivity for shorter wavelengths. Read noise measured from the overscan data of bias frames of deployed CCDs, with four amplifiers per unit, has an average value of 2.95~$\pm$~0.34 electrons, in line with specifications derived from science requirements. The right panel of Figure~\ref{sensitivity} presents the ratio of sky noise to read noise per resolution element for a typical spectrograph channel for the short 360 second HETDEX exposure time.
It is the average from all fibers in 71 VIRUS units, active in mid-2020. The requirement is that these two noise sources be equivalent at the shortest wavelength for a 360 second exposure and higher for longer wavelengths; this is typically borne out on sky, but there is a shortfall at wavelengths below about 3700~\AA, which accounts for the reduced sensitivity versus expectations at the shortest wavelengths. Longer exposures of 1000 seconds or more, for other observing programs with VIRUS, will not suffer from this limitation, since sky noise then equals or dominates read noise at all wavelengths. Nonetheless, while the read noise does contribute more than intended at the shortest wavelengths and this coincides with the poorer sensitivity at these wavelengths, overall the sensitivity of VIRUS appears in line with expectations except for the bluest wavelengths. A detailed discussion of throughputs, detection, sensitivity and completeness limits delivered for the HETDEX survey is presented in \citet{geb21}. \edit1{The object line-flux sensitivity that corresponds to the noise level in Figure~\ref{sensitivity} will depend on how many fibers are included in the detection.} Models of the flux limit with the object located at the vertex of three fibers in the dithered observing pattern, or centered on 7 fibers, differ by only a few percent, \edit1{so the sensitivity will not depend significantly on object position in the IFU. In HETDEX, objects are extracted from fibers within a 3\arcsec~ radius with weighting by the point spread function \citep{geb21}. } The noise measured in sky-subtracted, extracted spectra of emission line detections is consistent with the sensitivity levels indicated in Figure~\ref{sensitivity}.
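The benefit of longer exposures follows directly from photon statistics: sky noise grows as the square root of exposure time while read noise per readout is fixed. A simplified sketch, ignoring dark current and assuming a single readout (the 0.7 starting ratio is an illustrative value, not a measurement from the text):

```python
import math

def sky_to_read_ratio(ratio_360, t_seconds):
    """Scale a sky/read noise ratio measured in a 360 s exposure to another
    exposure time: sky (photon) noise scales as sqrt(t), read noise does not."""
    return ratio_360 * math.sqrt(t_seconds / 360.0)

# A channel that is read-noise limited at the bluest wavelengths in 360 s
# (ratio ~0.7, illustrative) becomes sky-noise dominated in 1000 s:
print(round(sky_to_read_ratio(0.7, 1000.0), 2))  # ~1.17, above unity
```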
\edit1{The faintest emission lines among the detections under good conditions have total line flux of \hbox{$\sim 5 \times 10^{-17}$ erg cm$^{-2}$ s$^{-1}$}.} \begin{figure}[!ht] \epsscale{1.15} \plotone{sensitivity.pdf} \caption{ \label{sensitivity} \footnotesize Left panel: Comparison of the predicted 5-$\sigma$ \edit1{sky noise} flux limit (red), which is five times the noise per spectral resolution element, to that measured from the noise in blank sky resolution elements (black). The limit is per fiber, for three 360 second dithered exposures that comprise a HETDEX observation. Right panel: Ratio of sky noise to read noise per resolution element in a 360~second exposure as a function of wavelength. The horizontal dashed line indicates a ratio of unity. The ratio is measured in dark time for the mean read noise of 2.95 electrons. The requirement was to have read noise and sky noise equivalent in this exposure time, but below about 3700 \AA~ there is a shortfall. } \end{figure} \section{Observing with the HETDEX Hardware}\label{sec:WFUperformance} The HET re-entered full queue-scheduled science operations in 2016 December \citep{hil18b}. All the metrology subsystems described in \S\ref{subsec:pfip} are working as intended (Table~\ref{tab-wfupgrade}) and the control loops of the tracker are extremely robust due to the tracker hardware design \citep{good14b,good18,lee18b}. In this section, details of the observing framework and performance of the HETDEX instrument system are discussed. \subsection{Target Preparation and Acquisition}\label{subsec:target} The new telescope control system (\S\ref{subsec:tcs}) is scriptable, enabling almost fully automated observation. The HET often observes a diverse set of science programs on any night, utilizing all the available instrument modes. Setups can be blind or on targets visible on the acquisition camera. Night operation maintains manual setup capability to accommodate this diversity.
The target submission language specifies instrument and configuration, acceptable observing conditions, and target properties. Constraints on whether the observation is obtained with the telescope azimuth set for an east or a west track, along with observation groupings and sequences, can be specified as needed. Once accepted, target coordinates and azimuth are processed through a target setup utility called {\it Shuffle}, which selects guide and wavefront sensor stars within the outer-field annulus. Figure~\ref{shuffleM51} presents a Shuffle setup on M51, as an example that illustrates the sectors available to each guideprobe and operational wavefront sensor probe in the outer 2 arcminute annulus. Each probe can range through 180$^\circ$, and while their ranges overlap, they cannot physically collide \citep{vattiat14}. Shuffle can adjust the pointing center if desired to improve guidestar selection. Other options allow placement of a target on a specific IFU with offsets from the IFU center. The output of Shuffle is a file capturing the information to set up the HET on the target, including photometry of the guidestars which is used to reference the calculation of the transparency during the observation. Shuffle also generates a synthetic acquisition camera image for the telescope operator that identifies the pixel positions of stars for precise blind setup, if required. Shuffle is generally run well prior to an observation, but executes in about 20 seconds. Shuffle coordinates of the instruments and guideprobes are based on a coordinate system that is accurately tied to sky through observations of star fields with VIRUS. \begin{figure} \epsscale{1.0} \plotone{shuffle_M51.png} \caption{ \label{shuffleM51} \footnotesize Example setup of the HET using the “Shuffle” software tool on M51 to illustrate the size of the 22 arcminute diameter HET field of view and the footprint of the VIRUS IFUs. 
The scale indicated by the arrows in the lower left corner is 2.0 arcminutes. The setup is for a west track at telescope azimuth 310$^\circ$. The outer annulus extends from 9-11 arcminutes field radius and has the patrol regions of the two guideprobes and 2 wavefront sensors indicated, along with the identifications and coordinates of the guide and wavefront stars chosen by the software (in yellow and white, respectively). The available VIRUS and LRS2 IFUs and their input head mount plate seat locations are indicated as green squares (which are the correct size for VIRUS, but just indicate the centers for LRS2 in input head mount plate seat locations 056 and 066). Blue circles indicate all objects in the reference catalog (PanSTARRS or Gaia) and magenta indicates those that fall on IFUs. A small red cross indicates the coordinate of the target or field center. The parallactic angle direction is indicated by the red arrow, which is the direction of the zenith for track-center for this particular observation, and sets the orientation of the IFU pattern on the sky. This direction also corresponds to the Y-axis of the HET tracker. } \end{figure} The setup of VIRUS for HETDEX observations is based on the guide probe positions, since the accurate coordinates of the observation are derived from the data during processing \citep{geb21}. There are two usable guidestars more than 90\% of the time. One star is centered on a guide probe and becomes primary for guiding, while both guiders provide metrology streams on other parameters of the observing conditions. Such blind setups have an accuracy of $1 \farcs 5$ rms, comparable to the separation of the fibers in the VIRUS IFUs, and more than adequate for HETDEX and most other VIRUS observations. The primary guidestar fiducial is moved to provide the small offsets for the dithers needed to fill in the gaps between fibers in the IFUs.
The ability to orchestrate the telescope control system has also led to improvements in efficiency. The observing conditions decision tool is a Python state machine that monitors the event stream from the metrology system to decide whether conditions are suitable for HETDEX observing. The observing conditions decision tool sets the exposure time for the next observation using the transparency and image quality from the current exposure as inputs, along with the primary mirror illumination for the next observation. Exposure times are allowed to increase up to a factor of two, to compensate for poor conditions. No compensation is made for better than median conditions. When conditions are deemed acceptable and the moon is down, the observing conditions decision tool selects the best field and (if allowed) will automatically slew the telescope and perform the observation with only a setup and guiding confirmation and focus check needed from the Telescope Operator. This orchestration has improved operational efficiency for HETDEX observing with VIRUS and is accomplished within a framework that will allow other instruments to interact with the telescope as the system is developed further. Observing conditions metrology from the guideprobes informs target selection by the night staff or by the observing conditions decision tool. Transparency and sky background brightness are derived from photometry performed on the star images with knowledge of the guidestar brightness passed from Shuffle; image quality is measured from star profile fitting. This metrology stream is logged and displayed in real time for decision-making by the night staff. An interesting observing mode enabled by the common focal surface and shutter between the instruments is VIRUS operating in parallel with the primary observation with LRS2 or HPF (and HRS, when delivered). 
Such observations are typically not dithered, but are often of much longer exposure time than the HETDEX observations, so have significant value for many projects. This mode can cover large areas with blind spectroscopy and will have interesting utility, especially for stars and other continuum objects. \vfill \subsection{Pointing, Tracking and Guiding}\label{subsec:pointingperf} \begin{figure} \epsscale{1.0} \plotone{guideoff.png} \caption{ \label{guideoff} \footnotesize HET WFU guiding and offsetting accuracy projected into tracker X and Y coordinates. Tracker Y is aligned with the parallactic angle. Left panel: example guiding residuals measured from star centroids for a 1.5 hour track, producing 0.09$\arcsec$ rms guiding, indicated by the red circle. Right panel: accuracy of dither pattern in tracker coordinates, under guider control, for 864 observations from 2018. The pattern is an equilateral triangle of side $1 \farcs 46$ and the offsets are accurate to 0.06$\arcsec$ rms. } \end{figure} The WFU has added detailed metrology on all degrees of freedom of HET positioning. As an example of guiding performance, Figure~\ref{guideoff} shows on-sky residuals during a full 1.5-hour track in the north. The left panel presents the centroids of guide images demonstrating $< 0 \farcs 1$ rms guiding accuracy. Dithering to fill in the fiber pattern within the VIRUS IFUs (\S\ref{sec:virus}) is achieved through offsetting the guide fiducial on the primary guide probe\footnote{A dither mechanism was deployed as part of PFIP \citep{vattiat14,hil16c}; however, it proved difficult to set up and the tracker offsets are so precise that it was ultimately not needed. It has hence been disabled.}. This process achieves a precision in the dither pattern of $0 \farcs 06$ rms as illustrated in the right panel of Figure~\ref{guideoff}. 
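The dither geometry quoted in the caption can be written down explicitly. A sketch with an arbitrary orientation (the on-sky position angle of the actual pattern is set by the tracker coordinates and is not reproduced here):

```python
import math

def dither_pattern(side=1.46):
    """Offsets in arcsec for the three dithered exposures, modeled as the
    vertices of an equilateral triangle of the given side length."""
    return [(0.0, 0.0),
            (side, 0.0),
            (side / 2.0, side * math.sqrt(3.0) / 2.0)]

pts = dither_pattern()
# every pair of positions is separated by exactly one side length
seps = [math.hypot(a[0] - b[0], a[1] - b[1])
        for i, a in enumerate(pts) for b in pts[i + 1:]]
print([round(s, 2) for s in seps])  # [1.46, 1.46, 1.46]
```

Against the $0 \farcs 06$ rms offset accuracy reported above, the pattern is realized to about 4\% of the fiber pitch.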
Telescope pointing and tracking have been improved through application of a physical mount model (\S\ref{subsec:wfuintegrate}); improvements in pointing are reflected in improved tracking. A key requirement for efficient observing with VIRUS is the ability to point the telescope such that most observations start with the guidestars within the 22$\arcsec$ guide-probe fields, or sufficiently close to the edge to be seen in the guide image by the telescope operator. This goal is achieved most of the time, and allows setup to be achieved simply by centering the guidestar without the overhead needed to deploy the acquisition camera. Figure~\ref{pointstats} shows initial pointing residuals by month over 2020. These data allow any degradation in pointing to be identified quickly, and indicate a stable mode for pointing accuracy of around 10-15$\arcsec$. In fact the Telescope Operators find there are still zero-point offsets in pointing of around 10\arcsec\ that persist for long periods of time and contribute to the distributions in Figure~\ref{pointstats}. When these offsets are taken into account the pointing is 10\arcsec\ rms, which usually results in immediate acquisition of the guide stars. \begin{figure} \epsscale{1.15} \plotone{2020_pointing.pdf} \caption{ \label{pointstats} \footnotesize Examples of performance metrics from the telescope control system database for 4237 target setups over 2020. The panels show statistics by month of initial pointing corrections that indicate the magnitude of the pointing error. All HET science observations are included, and all weather conditions, not just those for HETDEX. } \end{figure} \subsection{Field acquisition time}\label{subsec:fieldaq} The telescope control system logging allows tagging and monitoring of the duration of all steps in the observational setup process, which has proven helpful in identifying inefficiencies and in driving down the setup time. 
The most important aspect of reducing setup time has been improvements in pointing. When the majority of setups start with the guidestars in the guide-probe field of view, the interactive part of the setup requires only 10-20 seconds; it is usually not necessary for the telescope operator to observe the field with the acquisition camera. Figure~\ref{setuptime} shows the distribution of setup time between observations (from command to go to next observation to being ready to open the shutter on that observation). These statistics include all observations, not just for HETDEX. The peak close to 200 seconds is dominated by HETDEX setups, while the tail is primarily due to setups for the other instruments. Total setup time between observations (shutter close to shutter open on the next observation) includes all overheads, as charged to observing programs. Median total setup times for accepted observations are 4 minutes for VIRUS, 6 minutes for LRS2, and 7 minutes for HPF. These are a little longer than the setup times in Figure~\ref{setuptime} due to some additional overhead. About 20 seconds of additional overhead is due to tracker shutdown and preparation book-keeping that happens after the shutter closes and readout of the previous exposure begins; for HETDEX, there is an additional 20 seconds due to executing an observing conditions decision tool optimization to choose the next target from the full database of survey observations. These overheads may be amenable to additional optimization. \begin{figure} \epsscale{1.0} \plotone{total_setup.png} \caption{ \label{setuptime} \footnotesize Setup time in minutes (measured from the command to slew the tracker to being ready to open the shutter on the next observation) for all observations over the months of January to September 2020. The data on 2952 setups are not filtered by HETDEX observation, and show two peaks.
The peak at around three minutes is associated with HETDEX observations, run with the observing conditions decision tool automation, while the second peak around five minutes is associated with LRS2 observations and the tail to longer times is mostly associated with more exacting setups for HPF. The few instances with setup times beyond 10 minutes are associated with instances where conditions were difficult or where errors occurred. } \end{figure} While general observational overheads have been reduced significantly compared to the 10 minutes typical for setups on HET before the upgrade, and meet requirements, the goal of 1.5 minute setups has not been achieved for HETDEX observing, when no azimuth change is required. Detailed examination of the events stream from the telescope control system will lead to further incremental improvements, but setup time is now limited by the fundamental motion control of the tracker, with median move time of $\sim$90 seconds \citep{good18,rams18}, so further significant reductions below 4 minutes median total setup time are not expected. The consequence of this setup time for the three dithered 360 second exposures plus readout time that make up a HETDEX observation is a 7\% increase in the total HETDEX observing time over the goal, for the expected number of observations requiring azimuth moves. \subsection{Optical performance of WFU and Delivered Image Quality}\label{subsec:imagequal} Operations have provided extensive data logging of the delivered images from the guide cameras (over a million images each year), providing diagnostic information for monitoring the wave-front and optical image quality, along with records of weather conditions. The wealth of data available from the metrology systems allows trends in image quality with temperature, wind speed and direction, and other quantities to be investigated.
In particular the delivered image quality can be broken up into contributions from the corrector optics, the primary mirror, the dome environment and the column of atmosphere outside the dome, through analysis of data gathered from the wavefront sensors, the center of curvature tower instrumentation, and from other metrology subsystems. Wave-front sensor tests of the optical system during commissioning confirmed that the WFC and the metrology systems deliver the required optical image quality over the full field of view \citep{lee16a}. Those measurements indicate a floor of $\sim$1$\arcsec$ FWHM in the best site seeing conditions, and $1 \farcs 3$ FWHM in $1 \farcs 0$ site seeing conditions, as expected from the design. The WFC image quality varies a small amount between field center and edge of field where the guide probes measure the image quality for each observation. In median seeing, this change amounts to only $0 \farcs 1$ larger images at field edge. Direct tests between the acquisition camera on axis and the guide probes at the edge of the field demonstrate image quality is quite constant over the 22 arcminute diameter field of view in typical conditions. These measurements indicate that the upgraded HET optical system is performing to specification. The improved image quality of the WFC over the much enlarged field of view of the new HET still needs to be convolved with the performance of the primary mirror, any in-dome seeing component, and the atmospheric contributions to the delivered image quality. \edit1{The new hardware for the WFU and the support system for VIRUS were designed with integrated circulating glycol heat removal systems to extract excess heat from the dome environment that could impact dome seeing. The goal was to maintain surfaces at, or a few degrees below, ambient temperature. The VIRUS enclosures are well insulated and employ air circulation and glycol heat exchangers to control internal temperature \citep{prochaska14, spencer18}. 
Heat removal from the tracker is via heat exchange in glycol jackets on all motors, along with insulation jackets \citep{zier12}. Heat generated by cameras and other equipment in the prime focus instrumentation payload is removed via glycol heat exchangers as part of the air circulation and filtration system \citep{vattiat12}. All external surfaces in the payload are carbon fiber laminated foam insulation panels to reduce heat conduction. Measurements from temperature sensors deployed on the telescope and from a thermal camera have been used to evaluate the performance of the heat removal systems, by examining skin temperatures for subsystems within the dome environment. The VIRUS enclosure skin temperature is observed to closely track the ambient temperature while the enclosure steel frame is a few degrees colder. The tracker and payload skin temperatures track ambient or are 1-2 degrees below, due to radiation to the sky when the dome is open at night. The surfaces of the glycol supply lines are a few degrees below ambient. Examination of the thermal imaging does not reveal areas that are systematically warmer than the dome environment, so it is not expected that the hardware associated with the WFU and VIRUS will contribute significantly to any dome seeing component of the delivered image quality. } As with other ground-based telescopes, the delivered image point spread functions (PSFs) of HET are well fitted by \cite{moffat69} profiles\footnote{$I_r = I_o\,[1+(r/\theta)^2]^{-\beta}$}, characterized in terms of the Moffat $\beta$ exponent (e.g. \citealt{truj01}). Smaller values of $\beta$ are associated with more pronounced wings. Atmospheric turbulence leads to $\beta$~=~4.765, a value approached in poorer seeing (e.g. \citealt{dey14}). The HET metrology database allows the PSF shape and FWHM to be characterized over millions of guider images and with the acquisition camera.
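The relation between the Moffat parameters and the quoted FWHM values follows directly from the footnoted profile; solving $I(r) = I_o/2$ gives $\mathrm{FWHM} = 2\theta\sqrt{2^{1/\beta}-1}$. A short sketch (function names are ours):

```python
import math

def moffat(r, i0, theta, beta):
    """Moffat (1969) profile: I(r) = I0 [1 + (r/theta)^2]^(-beta)."""
    return i0 * (1.0 + (r / theta) ** 2) ** (-beta)

def moffat_fwhm(theta, beta):
    """FWHM from solving I(r) = I0/2: FWHM = 2 theta sqrt(2^(1/beta) - 1)."""
    return 2.0 * theta * math.sqrt(2.0 ** (1.0 / beta) - 1.0)

# For fixed theta, smaller beta gives a broader core and stronger wings;
# compare beta ~3.5 (good seeing) with the turbulence value 4.765:
for beta in (3.5, 4.765):
    fwhm = moffat_fwhm(1.0, beta)
    wing = moffat(2.0 * fwhm, 1.0, 1.0, beta)  # relative intensity far out
    print(round(fwhm, 3), round(wing, 4))
```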
This analysis reveals a tight correlation between $\beta$ and FWHM, with $\beta$~$\sim$~3.5 in $1 \farcs 0$ FWHM, and asymptoting to $\beta$~$\sim$~4.7 in the poorest conditions (FWHM~$\sim$~$4 \farcs 0$). Median FWHM values from the guide probes at field-edge, and from free air differential image motion monitor (DIMM) over five years are summarized in Table~\ref{tab-summimage}. Weather conditions at McDonald Observatory vary seasonally, so each year's image data are divided into the observing trimesters covering December to March, April to July, and August to November. Guide probe FWHM values include all observations for each trimester and are derived from Moffat fits to the radial profiles of the guidestars with $\beta$ a free parameter. Figure~\ref{imagequal} displays the image quality in more detail with plots of probability distribution functions and cumulative distribution functions, by trimester. Median image quality of the HET is improved, post-upgrade, but still delivers images larger than expected from the combination of DIMM site seeing and telescope optical performance. Investigations with high speed cameras and accelerometers have eliminated wind shake or jitter in tracking as primary sources of the added image quality component. \begin{figure} \epsscale{0.8} \plotone{imagequal.png} \caption{ \label{imagequal} \footnotesize HET delivered image quality measured from guide probes (field edge). Panels display seasonal variations in the distribution of \cite{moffat69} FWHM over the three trimesters (T1 = Dec-Mar; T2 = Apr-Jul; T3 = Aug-Nov). Both frequency and cumulative distributions are shown. The DIMM (differential image motion monitor) data representing the site seeing are also indicated in each panel. } \end{figure} \begin{figure} \epsscale{1.2} \plotone{iqanalysis.png} \caption{ \label{iqanalysis} \footnotesize Example metrology system analysis of HET delivered image quality for two tracks. 
Left shows a period of 65 minutes recorded on 2019-05-12 UT, during a period of good delivered image quality. Right shows 20 minutes recorded on 2019-04-17 UT, when the HET was delivering poor image quality. Time is UT. Data from the PFIP guide cameras (GC), operational wavefront sensors (OWFS), and calibration wavefront sensor (CWFS) are plotted from tracking a star on axis and 4 stars in the guide region at the periphery of the 22 arcminute diameter field of view. Simultaneous data from the Hartman Extra-focal Instrument (HEFI) and Wavescope (WVSC) located in the center of curvature tower and viewing the primary mirror are also presented. Facility metrology on wind speed and temperature differential from the last primary mirror stack are also presented. See text for discussion. } \end{figure} The ability to monitor the state of the primary mirror using instruments in the center of curvature tower, while simultaneously tracking stars and monitoring on-sky guider and wavefront sensor data, presents an opportunity to go beyond the guideprobe observing metrology data in assessing contributions from different components of the HET image quality. The Hartman Extra-focal Instrument measures the primary mirror image quality, including the mirror stack plus dome seeing. The Wavescope measures these components on a small scale, without the stacking included. Figure~\ref{iqanalysis} presents two examples of the engineering data that can be obtained from simultaneous on-sky metrology coupled with center of curvature instrumentation measurements of the primary mirror and DIMM measurements of free-air seeing. To accomplish this test, a star was placed on the calibration wavefront sensor at field center, while the guiders and operational wavefront sensors observed four stars. The calibration wavefront sensor provides image quality in small apertures, which is a measure of site seeing. 
When run at high speed, the calibration wavefront sensor acts as a multi-scale DIMM, including the full air column contributing to the HET image quality. Simultaneously, the primary mirror was observed from the center of curvature tower with the Hartman Extra-focal Instrument. The in-focus image from the Extra-focal Instrument provides a measure of the contribution of the primary mirror stack plus segment image quality. Various components of the image quality can be separated using these different metrology systems. Analysis of the figures of the primary mirror segments suggests that they can contribute an extra $0 \farcs 34$ in quadrature on-sky, beyond the stacking error, and this contribution can also be included. The data in the left panel of Fig.~\ref{iqanalysis} were obtained during a period of good delivered image quality, with the DIMM showing site seeing of 0.8-$0 \farcs 9$. Wind speed was about 5 mph and the ambient temperature had been stable since primary mirror stacking. The primary mirror Hartman Extra-focal Instrument and Wavescope data indicate a stable contribution of $\sim 0 \farcs 5$. The calibration wavefront sensor image size indicates seeing of about 1.0-$1 \farcs 1$ through the column, including interior and exterior air, indicating only a small additional component to that seen by the DIMM, if any. Combination of these components in quadrature accounts for the image size measured on the guide cameras of 1.2-$1 \farcs 4$. During this period HET image quality is behaving as intended, with only a small extra component indicated by the difference between DIMM and calibration wavefront sensor image size. The right panel of the figure presents a poor period with delivered image quality above 2\arcsec, obtained with winds around 20 mph and a 5~$^\circ$C temperature difference since stacking the primary mirror. The temperature difference is reflected in the poorer primary mirror stack as indicated by the Hartman Extra-focal Instrument. 
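The quadrature budget for the good-conditions example can be verified numerically. A sketch using the component values quoted above, taking the midpoint of the quoted column seeing range; this simple sum assumes the terms are independent and ignores any double counting of dome seeing between them:

```python
import math

def combine_quadrature(*terms):
    """Combine independent image-quality contributions (arcsec FWHM)
    in quadrature."""
    return math.sqrt(sum(t ** 2 for t in terms))

# primary mirror instrumentation ~0.5", full-column seeing ~1.05",
# plus the 0.34" segment-figure contribution quoted in the text:
print(round(combine_quadrature(0.5, 1.05, 0.34), 2))  # ~1.21, within the
# 1.2-1.4" measured on the guide cameras
```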
The DIMM was reporting site seeing above 1\arcsec, but the calibration wavefront sensor was showing images around 2\arcsec. The difference between the HET air column and free air is a combination of the dome and the environment above the dome. However, the dome seeing component, which is sampled by the Wavescope, indicates that the delivered image quality is not dominated by dome seeing. The correlation with wind speed possibly indicates a cause associated with the environment outside the dome. Such engineering tests are being obtained on a regular basis to investigate components of the delivered image quality and to monitor the optical quality of the system to ensure there is no degradation from the current performance. These measurements are discussed in more detail in \citet{lee21}. To date, however, no trends have emerged that would indicate immediate actions to significantly improve delivered image quality. In summary, HET is delivering a median image quality between 1.5 and $1 \farcs 8$ FWHM with a Moffat profile, with a tail to poorer images above $2 \farcs 0$, particularly in Trimester 1 (December to March). This trend with trimester has not changed since the original HET began operation, although the upgrade delivers better image quality when the site seeing is good. There is a component to the HET image quality that is attributable to a combination of the primary mirror state, dome seeing, and exterior wind speed. This component was present in the pre-upgrade HET, and was not influenced by the improved image quality of the WFC, or the careful attention to heat sources in the dome during the upgrade. The primary mirror and general dome environment were not altered within the scope of the WFU. 
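The quadrature combination of image-quality components described above can be sketched in a few lines of Python. The numerical values below are representative numbers read off the good-conditions example (CWFS air column, HEFI/Wavescope primary mirror contribution, segment figure term), not fitted quantities:

```python
import math

def combine_fwhm(*components_arcsec):
    """Combine independent image-quality components (FWHM, arcsec) in quadrature."""
    return math.sqrt(sum(c ** 2 for c in components_arcsec))

# Representative values from the good-conditions example (assumptions, not fits):
air_column = 1.05    # CWFS image size through the full air column (arcsec)
mirror_stack = 0.5   # primary mirror stack plus dome seeing from HEFI/Wavescope (arcsec)
segments = 0.34      # extra on-sky contribution from segment figures (arcsec)

total = combine_fwhm(air_column, mirror_stack, segments)
print(f"predicted guide-camera FWHM: {total:.2f} arcsec")  # ~1.21, within the observed 1.2-1.4
```

Because the components are assumed independent, the quadrature sum is insensitive to the smallest terms; the air column dominates on good nights.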
Ongoing data collection and engineering tests will hopefully reveal more about the nature of this component and suggest some mitigations, but for the planning of HETDEX it is assumed that the statistics assembled over the past several years are indicative of image quality for the remainder of the survey. As a result, a relaxed image quality criterion for HETDEX observations has been adopted, to allow observing in up to $2 \farcs 5$ rather than limiting at $2 \farcs 0$ image quality, with exposure times increased by the observing conditions decision tool to compensate. \section{Survey Results and Example Spectra}\label{sec:spectra} As a blind spectroscopic survey covering a very wide area, HETDEX records spectra of any object that falls on the $\sim$35k fibers. VIRUS has also been used for targeted observations of extended objects and in parallel with the other science instruments. This section presents \edit1{an overview of data analysis, some early science results, and} a sampling of spectra to illustrate the diversity of objects accessible to the instrument. Data are copied automatically to the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where the processing software is installed on TACC supercomputers. Data processing and analysis with custom software for HETDEX is described in \citet{geb21}. In addition to this HETDEX-specific reduction, several software packages have been developed and utilized during spectrograph characterization and operations and for processing observations with VIRUS and LRS2. The package {\it Vaccine} was developed for the HETDEX Pilot Survey, which demonstrated the application of wide field integral field spectroscopy to blind surveys from instrument to software pipeline \citep{adams11,blanc11}. 
The package {\it Cure} \citep{goessl06,snig12,snig14} was developed early in the project and forms the basis for the LabCure package utilized in IFU characterization (\S\ref{sec: IFU}, \S\ref{sec: alignment}) and scripts used for early spectrograph characterization \citep{indahl16}. At HET, the {\it VIRUS health check} is a Python utility, run on every exposure for real-time feedback throughout the night, that calls {\it Cure} subroutines and acts as quality control for the raw data, raising flags should there be unexpected data properties. A Python software package {\it Panacea} was also developed for VIRUS and LRS2\footnote{https://github.com/grzeimann/Panacea}; for LRS2, it is run automatically on a nightly basis. The Python package {\it Remedy} provides on-demand reductions for non-HETDEX observations with VIRUS \citep{zeim21}, including combining data for mosaics, where multiple pointings are used to fill in the gaps between VIRUS IFUs to create maps\footnote{https://github.com/grzeimann/Remedy}. \edit1{During an exposure, the metrology stream from the guide probes provides a measurement of the throughput of the system in the g band, used to correct each exposure for variations in telescope illumination over a track and for atmospheric transparency. The system response and flux calibration of VIRUS observations for HETDEX are fixed relative to (tens of) stars in the VIRUS field of view \citep{geb21} and to both field stars and standard stars for {\it Remedy} data reduction \citep{zeim21}. The shape of the system response is very stable and calibrations for HETDEX are accurate to 5\% \citep{geb21}. Comparison between flux calibrated spectra of standard stars and other continuum objects observed with both LRS2-B and VIRUS shows agreement in the shape of the calibration at the 1\% level in the overlapping spectral region (3700 - 5500 \AA). 
LRS2-B has a fully-filled IFU, allowing the differential atmospheric refraction (DAR) shift in image position with wavelength to be followed exactly. This high level of agreement in system response calibration, internally (between VIRUS and LRS2-B) and externally against stars of known spectral energy distribution, demonstrates that the DAR model used in the extraction of continuum objects in VIRUS data is not a limiting factor in the flux calibration. } \begin{figure}[!ht] \epsscale{0.9} \plotone{Example_Spectra.png} \caption{ \label{examplespec} \footnotesize Examples of emission line and continuum sources detected in VIRUS data for HETDEX. From top to bottom: a Lyman-$\alpha$ emitting galaxy at $z$=2.46; an [OII] emitting galaxy at $z$=0.149 (the [OII] emitter has detected continuum emission in addition to the emission line); a low mass local galaxy selected for strong [OIII] emission and large [OIII]$/$[OII] emission line ratio indicative of low metallicity \citep{indahl21a}; a broad line AGN at $z$=2.209 with faint continuum of g=22.5; a DA white dwarf \citep{hawk21}. } \end{figure} \edit1{As noted in \S\ref{subsec:target}, VIRUS can observe simultaneously with the other HET instruments in parallel mode. Any exposure of 300 seconds or longer with LRS2 or HPF as the primary instrument automatically triggers a secondary VIRUS exposure of the same duration. The parallel mode and the resulting continuum object spectral catalog of the HET VIRUS Parallel Survey (HETVIPS) are discussed in \citet{zeim21}. Parallel observations are not dithered, but any object falling on a VIRUS fiber will have a spectrum recorded with the wavelength-dependent sensitivity depending on the position of the object with respect to the fiber aperture and the image quality. 
Differential atmospheric refraction shifts the object position about 1$\arcsec$ as a function of wavelength, so sensitivity varies from object to object.} \edit1{Large areas are being surveyed in parallel mode, which accounts for more than half of VIRUS observations, and $\sim$50~deg$^{2}$~ has been observed with fiber spectra to date. The continuum object catalog, extracted at the positions of Pan-STARRS1 continuum objects, includes over 200,000 objects. Exposure times vary up to 4500 seconds, with a median of 1800 seconds, and sky brightness varies widely, driven by the primary instrument science. At g=19.5 the typical spectrum has a signal-to-noise ratio of 10, averaged over the g filter. } \edit1{Parallel observations are reduced with {\it Remedy} independently from HETDEX and largely cover areas outside the HETDEX sky footprint where VIRUS is usually the primary instrument. \citet{zeim21} demonstrate automatic identification of stars, galaxies and quasars in the parallel dataset. Science projects enabled by parallel observations include the chemical history of the Galaxy, censuses of individual stellar systems such as white dwarfs, and properties of quasars and radio sources out to redshifts of 3.5. It is projected that over a decade, HETVIPS will observe 300~deg$^{2}$~ of sky spread over the entire HET observing area and will detect and classify a million objects in a completely blind survey. } \edit1{As of 2021 September, HETDEX has completed about half the survey and has covered 45~deg$^{2}$~ (the area of IFU coverage), recording 300 million spectra and detecting more than a million emission lines and 110,000 continuum objects.} Figure~\ref{examplespec} presents several VIRUS spectra obtained during the course of HETDEX observing. LAEs and emission line galaxies dominate the detected objects, and a million of each are expected in the full survey \citep{geb21}. 
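The roughly 1 arcsec DAR shift quoted above can be reproduced with a simple dry-air dispersion estimate. This is a sketch only: the Edlén-type refractivity formula, the ~0.77 atm pressure scaling for the site altitude, and the use of HET's fixed 35 degree zenith angle are illustrative assumptions, not values taken from this work.

```python
import math

def refractivity(wavelength_um):
    """Approximate dry-air refractivity (n - 1) at standard conditions (Edlen-type formula)."""
    s2 = (1.0 / wavelength_um) ** 2  # inverse wavelength squared, um^-2
    return (8342.13 + 2406030.0 / (130.0 - s2) + 15997.0 / (38.9 - s2)) * 1e-8

def dar_shift_arcsec(lam1_um, lam2_um, zenith_deg, pressure_scale=1.0):
    """Differential refraction between two wavelengths, in arcsec."""
    dn = (refractivity(lam1_um) - refractivity(lam2_um)) * pressure_scale
    return dn * math.tan(math.radians(zenith_deg)) * 206265.0

# Fixed ~35 deg zenith angle; ~0.77 atm assumed for the site altitude
shift = dar_shift_arcsec(0.35, 0.55, 35.0, pressure_scale=0.77)
print(f"DAR shift 3500-5500 A: {shift:.2f} arcsec")  # ~0.9 arcsec
```

The estimate lands near the quoted value because air dispersion between 3500 and 5500 Å is a small, nearly fixed fraction of the total refraction at this zenith angle.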
In addition, the survey will contain $\sim$300,000 stars \citep{hawk21}, 50,000 local galaxies brighter than $g$=22, and 10,000 AGN. All these objects will be observed without any pre-selection \edit1{and HETDEX observing is expected to be complete in 2024}. \edit1{The main driver for HETDEX is the determination of the expansion rate of the Universe in the 1.9 $<$ z $<$ 3.5 epoch. The key science requirements that led to the technical requirements in \S\ref{sec:design} are discussed in \citet{geb21}, confirming that the survey is yielding 2.5 LAEs per IFU per observation. Position and redshift accuracy are 0\farcs35 and 100 km$/$s, respectively, well within requirements. Other important requirements that are not related to the instrument capabilities include false positive rates and separation of LAEs from low redshift [OII]$\lambda3727$ emitting galaxies, as discussed in \citet{geb21}. } \edit1{The blind spectroscopic nature of the HETDEX survey will yield results in many areas other than cosmology. Early science publications based on about a third of the eventual dataset provide an indication of the content and applications of the survey. A first analysis of the luminosity functions of LAEs and AGN in early HETDEX data is presented by \citet{zhang21}. The sample comprises 18,320 LAEs, selected from deep Subaru imaging that overlaps HETDEX, to probe the rest-frame ultraviolet luminosity function of galaxies and AGN. The sample includes 2,126 broad-line AGN and confirms that the bright end of the luminosity function is dominated by AGN. 
At fainter luminosities, the luminosity function is consistent with previous studies and future papers will add determinations of the LAE emission line luminosity function as well as measures of their clustering.} \edit1{HETDEX will contain examples of rare emission line galaxies that are extremely difficult to identify in imaging surveys, such as the population of nearby (z$<$0.1) metal-poor [OIII] emitting galaxies discovered by \cite{indahl21a}. Galaxies were selected to have high [OIII] $\lambda 5007$ / [OII] $\lambda 3727$ ratio, implying highly ionized nebular emission often indicative of low metallicity systems (Fig.~\ref{examplespec}). Follow-up with LRS2 spectroscopy confirmed their low metallicities and revealed these objects as a population of high star formation rate, low mass galaxies that would not have been selected for follow-up without the blind spectroscopy of HETDEX. } \edit1{\citet{hawk21} provide a first look at the stars detected in the HETDEX dataset, with a sample of 100,000 stars selected by cross-matching with {\it Gaia} point sources. The spectra cover {\it Gaia} magnitudes 10 $<$ G $<$ 21. This study demonstrates that accurate classifications are possible with the VIRUS spectral resolution as well as radial velocities accurate to 28 km$/$s for stars considerably fainter than the {\it Gaia} radial velocity limit (G$<$14). An interesting result is that there is sufficient information content in the relatively low resolution VIRUS spectra to uncover 416 new metal-poor candidate stars. These were isolated via machine learning methods, which demonstrate that VIRUS spectra can constrain effective temperature, surface gravity, and metallicity. Follow-up is underway with higher spectral resolution to verify the accuracy of the values derived from the HETDEX spectra. Additionally, samples of white dwarfs (Fig.~\ref{examplespec}) and chemically peculiar stars with enhanced abundance ratios can also be picked out in the data. 
} \section{Summary}\label{sec:summary} Following a decade of development, the upgraded HET saw first light in 2015, after the two-year installation of the new tracker, new wide field corrector, new prime focus instrument package with new metrology instrumentation, and the new telescope control system. HET now has the largest field of view of any 10 m class telescope, operating in the optical and near infrared. This upgrade was motivated by HETDEX science requirements and reimagined the HET as a wide field survey instrument, in combination with the replicated VIRUS integral field spectrograph, which places about 35,000 fibers on sky and observes 56 arcmin$^{2}$~ per dithered observation of three exposures, within a field diameter of 18 arcminutes. The system achieves a high degree of observing automation and has been in full queue-scheduled science operations since December 2016. HETDEX observations started in January 2017. Performance of the telescope plus instrumentation system is much enhanced over the original HET in field area, observational multiplex, and operational efficiency. VIRUS has the largest grasp (A$\Omega$, collecting area of telescope $\times$ area of sky observed $\simeq$ 4.7$\times$10$^6$ m$^2$arcsec$^{2}$) of any spectrograph and represents a milestone in the development of highly-multiplexed instruments based on large-scale replication (defined as requiring more than 100 copies of a base instrument; \citealt{hil14}). It has a similar number of detector pixels to the largest imagers. Other spectrographs employing significant levels of replication include the VLT MUSE integral field spectrograph with 24 spectrograph channels \citep{bacon10}, the LAMOST multi-object spectrograph with 32 spectrograph channels \citep{lamost}, and the DESI multi-object spectrograph with 30 spectrograph channels \citep{desi}. VIRUS provides a first, and currently unique, opportunity to understand the performance of spectrographs replicated on a 100-fold scale. 
Such instruments have applications on future extremely large telescopes as discussed in \citet{hil14}. The experience with VIRUS is that for large-scale replication it is possible to predict the performance and, with care in specifying component requirements, achieve the expectations on-sky. The combination of the new 10 m wide-field HET with the grasp of VIRUS creates a unique facility, designed as a system, that is able to survey vast areas of sky with un-targeted spectroscopy for the first time. This facility opens up sensitive wide-field blind spectroscopy as a new method to view the universe. \clearpage \acknowledgments HETDEX (including the WFU of the HET) is led by the University of Texas at Austin McDonald Observatory and Department of Astronomy with participation from the Ludwig-Maximilians-Universität München, Max-Planck-Institut für Extraterrestrische Physik (MPE), Leibniz-Institut für Astrophysik Potsdam (AIP), Texas A\&M University, Pennsylvania State University, Institut für Astrophysik Göttingen, The University of Oxford, Max-Planck-Institut für Astrophysik (MPA), The University of Tokyo, and Missouri University of Science and Technology. In addition to institutional support, HETDEX is funded by the National Science Foundation (grant AST-0926815), the State of Texas, the US Air Force (AFRL FA9451-04-2-0355), and generous support from private individuals and foundations. 
We thank the following reviewers for their valuable input at various stages in the project: \begin{itemize} \item Science Requirements Review 26 June 2007, Roland Bacon, Gary Bernstein, Gerry Gilmore, Rocky Kolb, Steve Rawlings \item Preliminary Design Review 10 April 2008, Bruce Bigelow, Gary Chanan, Richard Kurz, Adrian Russell, Ray Sharples \item Tracker Factory Acceptance Test Plan Review 8 March 2011, Povilas Palunas, Jeffrey Kingsley, Dave Chaney \item PFIP Integration and Alignment Review 26 July 2011, Larry Ramsey, Bruce Bigelow, Steve Smee, Mike Smith \item Wide Field Upgrade Readiness Review, 16 July 2013, Daniel Fabricant, Fred Hearty \item Wide Field Corrector Pre-shipment Review 22 April 2015, Daniel Fabricant, Fred Hearty \item VIRUS Detector System Review, 1 February 2016, Roger Smith, Ian McLean, Phillip MacQueen \item External Review, Astronomy Department and McDonald Observatory, 19-22 March 2017, Matthew Bershady, David Charbonneau, Martha Haynes, Piero Madau \end{itemize} We thank the staffs of McDonald Observatory, the Hobby-Eberly Telescope, and the Center for Electromechanics, University of Texas at Austin, the College of Optical Sciences and the Imaging Technology Lab, University of Arizona, the Leibniz-Institut für Astrophysik Potsdam (AIP), the Department of Physics and Astronomy, TAMU, the Max-Planck-Institut für Extraterrestrische Physik (MPE), The University of Oxford Department of Physics, Universit\"ats-Sternwarte M\"unchen, and the Institut für Astrophysik Göttingen, for their contributions to the HET Wide Field Upgrade and VIRUS. 
We particularly thank the following individuals for their contributions over the course of the project: Joshua Adams, Richard Allen, Heiko Anwad-Heerwart, Edmundo Balderrama, Timothy Beets, Joseph Beno, Lana Beranek, Emily Bevins, Guillermo Blanc, John Booth, Brent Buetow, Jim Burge, Maria Bustamante, John Caldwell, Dustin Davis, Doug Edmundston, Linda Elliot, Neal Evans, Daniel Farrow, Eric Frater, Alexander Gehrt, Claus Goessl, Frank Grupp, Kaartik Gupta, Lei Hao, Christian Haubitz-Reinke, Richard Hayes, Dionne M. Haynes, Roger Haynes, James Heisler, Sarah Hinze, John Jackson, Bryan Keener Smith, Jeff Kingsley, Robert Leach, Mike Lesser, Chenxu Liu, Suvrath Mahadevan, Amanda Martin, Emily Martin, Emily McLinden, Jason Mock, Nicholas Mollison, Omar Molina, Brian Murphy, Jeremy Murphy, Chang Jin Oh, David Ouellette, Justen Pautzke, Eric Peng, Dave Perry, Andrew Peterson, Emil Popow, Marc Rafal, Jean-Philippe Rheault, Christer Sandin, Richard Savage, Logan Schoolcraft, David Sheikh, Greg Smith, Michael Smith, Katie Smither, Ian Soukup, Brian South, Mike Tacon, Eusebio Terrazas, Rodrigo Viveros, Douglas Wardell, Gordon Wesley, Gregory Wedeking, Amy Westfall, Michael Worthington, Joseph Zierer. The Low Resolution Spectrograph 2 (LRS2) was developed and funded by the University of Texas at Austin McDonald Observatory and Department of Astronomy and by Pennsylvania State University. We thank the Leibniz-Institut f\"ur Astrophysik Potsdam (AIP) and the Institut f\"ur Astrophysik Göttingen (IAG) for their contributions to the construction of the integral field units. We acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing high performance computing, visualization, and storage resources that have contributed to the results reported within this paper. 
This work makes use of the Pan-STARRS1 Surveys (PS1) and the PS1 public science archive, which have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes. This work makes use of data from the European Space Agency (ESA) mission {\it Gaia} (https://www.cosmos.esa.int/gaia), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This work makes use of the Sloan Digital Sky Survey IV, with funding provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. We thank Frank Bash and Mary Ann Rankin for supporting the HET upgrade and VIRUS at a formative stage, and the members of the HET Board, who over the years supported the project from concept to completion. We especially acknowledge the role of David L. Lambert who, as McDonald Observatory Director, provided crucial leadership and support for the HETDEX project for a decade.\\ \\
\section*{Acknowledgments} The work in the Travelogues project (\url{http://www.travelogues-project.info}) is funded through an international project grant by the Austrian Science Fund (FWF, Austria: I 3795) and the German Research Foundation (DFG, Germany: 398697847). \bibliographystyle{splncs04} \section{Conclusions}\label{sec:conclusions} In this paper, we have described a methodology to identify historical travelogues in a large dataset. Our approach combines the knowledge of both domain experts (historical science, library and information science) and data scientists to create a ground truth and subsequently build an MLP model, successfully identifying 345 previously unknown travelogues. Furthermore, we have shown that a ground truth for this kind of data can comprise as few as 30 examples each for the positive and negative class and still support good classification performance. In the upcoming weeks and months, we will begin looking at the discovered travelogues in more detail. In a first step, we will identify intertextual relations in our corpus to find out in what way the travelogues depended on each other, why and how (certain) stereotypes and prejudices were handed down, evolved or disappeared over the centuries. Ultimately, we want to know how foreign cultures, places and people were perceived, and if the perceptions differed depending on the socio-cultural background of the involved people. This will allow us to come to an understanding of how and why something was perceived as \emph{Other}, and if and how this changed across the centuries. For this, we have done the groundwork here, as it is crucial to rely on as much data as possible; concrete next steps in this direction include creating a formal description of intertextuality and \emph{Otherness}, and translating it into a set of machine-readable text features. 
\section{Discussion}\label{section:discussion} Our results show that standard machine learning techniques combined with relatively easily computable features (BOW and bag-of-n-grams) can effectively support scholars in identifying travelogues in a large-scale document corpus. Using the same features for an MLP neural network yields even better results. This is an important first step in the development of a broader mixed-method approach for the large-scale serial analysis of travelogues. Specifically, we discovered a total of 345 travelogues in the evaluated time periods using the top 200 findings with the highest confidence scores each (800 in total). We were previously unable to find any of these files through search queries based on metadata or manual search in our catalog, hence this directly translates into a re-discovery of sources for scholars in the humanities. Additionally, a large fraction of false positives is, at the level of words and their semantics, extremely similar to the true positives. However, going by the definition provided earlier, we are required to use external information that could not be included as a feature, as it is dependent on domain knowledge. This severely limits the efficacy of unsupervised machine learning and deep learning approaches. A clear limitation of our effort lies in the time and effort required to create a high-quality ground truth. While this effort could possibly be reduced by applying unsupervised clustering techniques beforehand, annotations provided by domain experts will always be key for effective learning techniques. Applying active learning techniques (c.f.~\cite{settles2012active}) for iteratively developing ground truths could be a possible strategy for reducing this manual annotation effort. Another limitation of our approach lies in the focus on entire volumes, which currently neglects the fact that volumes may include travelogues and non-travelogues. 
Using a wider spectrum of semantically richer features (c.f.~\cite{momeni2013identification}) such as named entities could support classification at the paragraph or page level. Nonetheless, our experiments on the efficiency of the classification method presented here show that it is possible to achieve robust results above an F1 score of 0.8 with a relatively small ground truth size. Knowing this, future research in different domains should require a substantially smaller time investment to get started. A remaining challenge lies in the distinction between highly similar genres, in our case historiographies or geographic books and travelogues. We hope that this can be tackled by further refining the ground truth to fit the given genres, taking into account a wide range of external sources, to include domain knowledge in a structured way. \section{Results}\label{sec:results} \subsection{Classification Results} Table~\ref{tabel:results} shows the evaluation of our classification algorithms. \begin{table} \centering \caption{Classification results. 
We provide precision, recall and F1 scores for multinomial naive Bayes (MNB), support vector machine (SVM), logistic regression (Log) and a multi-layer perceptron neural network (MLP).} \label{tabel:results} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}c|lll|lll|lll|lll@{}} \toprule \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{MNB} & \multicolumn{3}{c|}{SVM} & \multicolumn{3}{c|}{Log} & \multicolumn{3}{c}{MLP} \\ \midrule Century & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{F1} \\ 16th & 0.73 & \textbf{1.00} & 0.85 & 0.96 & \textbf{1.00} & 0.98 & 0.95 & 0.95 & 0.95 & \textbf{1.00} & \textbf{1.00} & \textbf{1.00} \\ 17th & 0.75 & 0.97 & 0.84 & 0.82 & 0.92 & 0.87 & 0.82 & 0.92 & 0.87 & \textbf{0.95} & \textbf{0.93} & \textbf{0.94} \\ 18th & 0.79 & \textbf{0.94} & 0.86 & 0.88 & 0.90 & 0.89 & 0.84 & 0.88 & 0.86 & \textbf{0.96} & 0.93 & \textbf{0.94} \\ 19th & 0.86 & 0.92 & 0.89 & 0.91 & 0.90 & 0.91 & 0.88 & 0.91 & 0.90 & \textbf{0.97} & \textbf{0.96} & \textbf{0.96} \\ \bottomrule \end{tabular*} \end{table} Our results show that it is possible to achieve good classification scores with our dataset, even without extensive feature engineering or pre-trained word embeddings. We assume that this is rooted in the comparably\footnote{Many works focus on datasets that have more, but shorter documents, c.f.~\cite{yang2016hierarchical} for comparisons of multiple classification methods and datasets.} large size of our data points, as can be seen in Table~\ref{table:books-volumes}. Additionally, through the randomness involved in the selection of non-travelogues, those volumes are expected to have a high variance in genres, which matches the whole corpus as well. 
Comparing numerous examples of one genre against an equal number of volumes covering many more genres certainly benefits the classification, especially when taking into account the length of the documents. Taking the results from this evaluation, we were confident in approaching our main task, which was the identification of travelogues from a much larger dataset: the other digitized books in our corpus not yet evaluated by us. Following the training of models suitable for classification, one specifically designed for each century, we applied them to our pool of \emph{candidates}. We used this process to create a list of books that are potentially travelogues, ranked from highest to lowest confidence score, and evaluated the first 200 items. The results of this are shown in Table~\ref{tabel:discovered_travelogues}; the findings were subjected to the same scrutiny as our initial ground truth. We can show that our methodology provides a clear improvement over a less guided evaluation: within our evaluated findings, true positives made up 12.5\% (16\textsuperscript{th} c.), 30\% (17\textsuperscript{th} c.), 41.5\% (18\textsuperscript{th} c.) and 89.5\% (19\textsuperscript{th} c.) respectively. Due to time constraints, our evaluation was discontinued after the first 200 items, but this already means that we discovered 345 books of the travelogue genre that were not found by traditional search queries on metadata, as we explained in Section~\ref{sec:dataset-collection}. Discovery by chance only resulted in 3\% (16\textsuperscript{th} c.) and 0.8\% (18\textsuperscript{th} c.) positives, or none at all (17\textsuperscript{th} c., 19\textsuperscript{th} c.). It also has to be noted that the increase in the percentage of true positives in a diachronic perspective is connected to the equally increasing number of travelogues. There simply are many more travelogues that were printed in the 19\textsuperscript{th} century than in the previous time periods. 
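The confidence-score ranking described above can be sketched as follows. `rank_candidates`, `model`, and `candidates` are hypothetical names for illustration; the only assumption about the classifier is that it exposes a scikit-learn-style `predict_proba`:

```python
def rank_candidates(model, candidates, top_n=200):
    """Rank unlabeled volumes by the classifier's travelogue probability.

    `model` is any fitted classifier exposing predict_proba (e.g. a
    scikit-learn pipeline); `candidates` is a list of (volume_id, text)
    pairs. Both names are hypothetical, not from the project's code.
    """
    texts = [text for _, text in candidates]
    probs = model.predict_proba(texts)                # one [p_neg, p_pos] row per text
    scored = [(vid, row[1]) for (vid, _), row in zip(candidates, probs)]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]                             # highest-confidence volumes first
```

The returned top-200 list is exactly what was handed to the domain experts for manual confirmation.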
\begin{table} \centering \caption{Applying the classifier on the candidates pool. \normalfont \emph{By chance} shows how many travelogues were found when randomly selecting examples for the ground truth.} \label{tabel:discovered_travelogues} \begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}crrr@{}} \toprule \textbf{Century} & \textbf{\begin{tabular}[c]{@{}c@{}}No. \\ candidates\end{tabular}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Confirmed\\ (top 200)\end{tabular}}} & \textbf{By chance} \\ \midrule 16\textsuperscript{th} & 8,526 & 25 & 2 (from 67) \\ 17\textsuperscript{th} & 8,763 & 60 & 0 (from 161) \\ 18\textsuperscript{th} & 55,971 & 83 & 7 (from 873) \\ 19\textsuperscript{th} & 88,262 & 177 & 0 (from 1,897) \\ \bottomrule \end{tabular*} \end{table} \subsection{Error Analysis} During this evaluation, it became apparent that the majority of potential findings with a high confidence score belong to the group we refer to as historiography. Due to their nature, they have a strong overlap with travelogues but lack the defining criterion of describing a journey actually experienced by the author. This crucial information can in many cases only be gathered from external sources\footnote{C.f. Section \ref{subsection:travelogues-character}.}. From a purely technical perspective, there is no difference in the content of the books between travelogues and historiographies. This means that while, for the purposes of this project, they are very different, right now we cannot further differentiate between them. Additionally, in the 18\textsuperscript{th} century, many false positives include publications on geography; a possible explanation here is that they often describe locations, which naturally overlap with travelogues as well. \subsection{Ground Truth Requirements}\label{subsec:efficiency-results} The result of our efficiency evaluation is depicted in Figure~\ref{fig:efficiency-plot}. 
We provide the average F1 score for each time frame, as well as the variance between the samples. For every time frame, our general observation is that the performance of the MLP classification fluctuates heavily when only a small dataset is available. With at least 20 examples each for the positive and negative class (and better still with 30), stable performance above an F1 score of 0.8 can be reached. After that, the score slowly increases up to 0.9 at 50 examples each, with only very minor changes for 100 examples each. This experiment shows that it is possible to create a working classification methodology, which reaches acceptable results with a modest time investment\footnote{Depending on the sources, and if additional definitions etc. are needed, between several hours and up to a few weeks of full time work.} upfront, as shown in Table~\ref{tabel:results}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/efficiency_boxplot.png} \caption{Classifier efficiency evaluation for MLP. Every step has a balanced number of travelogues and non-travelogues (5/5, 10/10, 15/15, 20/20, 25/25, 30/30, 50/50 and 100/100). The experiment was repeated five times for each step.} \label{fig:efficiency-plot} \end{figure} \section{Methods}\label{section:methodology} \subsection{Overview}\label{subsec:interdisc-method} Our overall goal is to develop a novel mixed qualitative and quantitative method for the serial analysis of large-scale text corpora. Since serial analysis typically focuses on a specific topic or type of document, in this case \emph{travelogues}, we first need to define a systematic method that supports scholars with diverse backgrounds (historical science, library and information science, data science) in iteratively training a machine learning model that ultimately supports them in locating travelogues within a huge collection of digitized documents. 
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/iconf_method_graphic2.png} \caption{High-level overview of our interdisciplinary approach. \normalfont{Creation of the ground truth is primarily the responsibility of domain experts (with data scientists helping to identify non-travelogues). Model creation is completed by data scientists, with the results of the model deployment on the whole corpus being evaluated by the domain experts again.}} \label{fig:approach-overview} \end{figure} Figure~\ref{fig:approach-overview} summarizes the overall workflow and involved participants from a high-level perspective: in the first step, domain experts use the keyword search feature of the Austrian National Library's catalog to search the \emph{overall corpus} for documents meeting our definition of a \emph{travelogue}. They manually inspect each result and annotate those matching our definition as being a \emph{travelogue}. In parallel, the data scientist automatically selects a randomized sample of documents from the overall corpus, which are then manually inspected and verified by the domain experts as being \emph{non-travelogues}. This process, which is described in more detail in Section~\ref{sec:dataset-collection}, yields a balanced \emph{ground truth} corpus consisting of travelogue and non-travelogue documents, which can then be used for subsequent machine learning tasks. Before building machine learning models, documents in the ground truth corpus need to be pre-processed, which includes cleansing, normalization and feature engineering steps. Section~\ref{sec:preprocessing} explains in more detail the steps we applied to our documents. Next, we use the pre-processed documents for \emph{model building}, which includes training various machine learning models, such as SVMs and MLPs. This process is described in Section~\ref{sec:model_building}.
Following this, we evaluate the effectiveness of the trained models (see Section~\ref{sec:evaluation}). The top-performing model is then deployed and used to classify the remaining documents in our corpus, in an attempt to identify additional, potentially previously unknown travelogue documents. As a result, our iterative method yields a growing \emph{travelogue corpus}, which can be used for refining the effectiveness of machine learning tasks and for other quantitative and qualitative analytics tasks. In the following sections we describe each step in more detail. \subsection{Dataset and Ground Truth Creation}\label{sec:dataset-collection} In our work, we focus on prints published between 1500 and 1876, which are part of the historical holdings of the ONB. Since 2011, more than 600,000 books (volumes) from that period have been digitized and OCR-processed in a public-private partnership with Google (Austrian Books Online, ABO\footnote{\url{https://www.onb.ac.at/en/digital-library-catalogues/austrian-books-online-abo}.}). Therefore, nearly all of the library's historical books are currently accessible in a digital form. Within this corpus, we are specifically searching for travelogues. As a first step, we identified German volumes in the overall ABO corpus, and then split the corpus by century. We then initiated a ground truth by querying titles and subject headings. We searched for different keywords in German, namely truncated spellings of `Reise' (travel) and `Fahrt' (journey) along with their known variants, using leading and trailing wildcards (in alphabetical order: *faart*, *fahrt*, *fart*, *rais*, *rai\ss{}*, *raisz*, *rays*, *ray\ss{}*, *raysz*, *reis*, *rei\ss{}*, *reisz*, *reys*, *rey\ss{}*, *reysz*, *rys*, *ry\ss{}*) as well as common subject-headings in the library's catalog including `Forschungsreise' (expedition), `Reise' (travel) and `Reisebericht' (travelogue).
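For illustration only, the wildcard stems listed above can be combined into a single pattern. The actual retrieval was done via the library catalog's keyword search; the function and constant names below are ours:

```python
import re

# Truncated travel/journey stems; a leading and trailing wildcard means the
# stem may occur anywhere inside a title word.
STEMS = ["faart", "fahrt", "fart", "rais", "raiß", "raisz", "rays", "rayß",
         "raysz", "reis", "reiß", "reisz", "reys", "reyß", "reysz", "rys", "ryß"]

PATTERN = re.compile("|".join(map(re.escape, STEMS)), re.IGNORECASE)

def matches_travel_stem(title: str) -> bool:
    """True if any travel/journey stem occurs anywhere in the title."""
    return PATTERN.search(title) is not None
```

As the surrounding text notes, such a pattern over-generates (e.g., purely fictional travel narratives also match), which is why the results still required manual cleanup.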
As these queries still generated many false positives, we cleaned up the dataset manually. Results were double-checked intellectually by two annotators fluent in German and experienced with early modern German, a historian and a librarian, who read parts of the texts and utilized external bibliographies, biographies and catalogs,\footnote{E.g.: \url{https://www.deutsche-biographie.de/}, \url{https://lb-eutin.kreis-oh.de/},\\ \url{https://kvk.bibliothek.kit.edu/}, \url{https://www.oclc.org/de/worldcat.html},\\ \url{http://www.vd16.de/}, \url{http://www.vd17.de/}, \url{http://www.vd18.de/},\\ \url{https://viaf.org/}, Wikipedia.} to confirm whether a document meets our definition of travelogue or belongs to another genre. Uncertainties were resolved unanimously, and there were no disagreements on the final annotations. The result of this step is a manually annotated and verified sample of travelogues, which took approximately three months of full-time work for both annotators. Since training and validation of machine learning models also require counterexamples, in this case non-travelogues, we implemented an automated procedure for randomly selecting an equally sized sample of documents from the subset of German volumes. Via a manual investigation process conducted by the same annotators, we ensured that those documents were not travelogues. This provides a manually verified sample of non-travelogues. In total, our travelogues ground truth dataset contains a balanced sample of 6,048 volumes, representing 3.67\% of 167,570 German language volumes from the complete ONB corpus. Table~\ref{table:books-volumes} provides an overview of our ground truth and its distribution over centuries. One can easily observe that the number of volumes, as well as the size of each publication, increases with time.
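The automated selection of counterexample candidates described above can be sketched as follows; the function and variable names are ours, and the verification that the drawn documents are in fact non-travelogues remained a manual step:

```python
import random

def sample_counterexamples(corpus_ids, travelogue_ids, seed=0):
    """Randomly draw as many candidate non-travelogues from the corpus as
    there are confirmed travelogues; the draw is then verified manually."""
    pool = [doc for doc in corpus_ids if doc not in travelogue_ids]
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    return rng.sample(pool, k=len(travelogue_ids))
```

Fixing the seed makes the sample reproducible, which matters when the same draw must be re-inspected later.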
To provide insight into how likely it is to find a travelogue randomly, we also included the number of travelogues that were found while reviewing the randomized sample we used as counterexamples. This approach was replicated for the 16\textsuperscript{th}, 17\textsuperscript{th}, 18\textsuperscript{th} and 19\textsuperscript{th} centuries. Volumes that were not evaluated remain in the \emph{candidates} pool, to which we applied our classifier after identifying the best-performing model. \begin{table} \centering \caption{Dataset overview. \normalfont Our corpus consists of the total number of digitized German-language books available to us. The ground truth contains an equal amount of travelogues and randomly selected counterexamples; in brackets, we provide the number of travelogues we found by chance. Books not evaluated remain in the \emph{candidates} pool. A token contains at least two alphanumeric characters; punctuation etc. is not counted.} \label{table:books-volumes} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}crrrr@{}} \toprule \multicolumn{5}{c}{\textbf{Corpus}} \\ \midrule \multicolumn{1}{l}{Century} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}No. candidate\\ volumes\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}No. ground\\ truth volumes\end{tabular}} & \multicolumn{1}{c}{Total tokens} & \multicolumn{1}{c}{Average tokens} \\ \midrule 16th & 8,526 & 67/67 & 362,244,353 & 41,829 \\ 17th & 8,763 & 161/161 & 651,957,983 & 71,762 \\ 18th & 55,971 & 873/873 & 5,041,741,840 & 82,274 \\ 19th & 88,262 & 1,897/1,897 & 11,464,645,150 & 124,539 \\ \midrule $ \sum $ & 161,522 & 5,996 & 17,520,589,326 & Ø 104,589 \\ \bottomrule \end{tabular*} \end{table} \subsection{Pre-processing}\label{sec:preprocessing} The preprocessing phase involves several steps. First, the texts were tokenized at the word level, using blanks and punctuation as separators.
The German language uses upper- and lowercase spelling, depending on the word type and its position in the sentence, but to compensate for OCR and orthographic errors we transformed all tokens to lowercase. Furthermore, we removed all tokens that do not contain at least two alphanumeric characters, as this removes OCR artifacts, which are often special characters. For the same reason, each token needs to appear at least twice in the whole corpus. \subsection{Model Building}\label{sec:model_building} As shown in Table~\ref{table:books-volumes}, the documents that we seek to classify are rather large, as they contain on average 41,000--124,000 tokens. We decided to use a combination of BOW and bag-of-n-grams, as proposed by Wang and Manning~\cite{wang2012baselines}. With this approach, we can handle the intricacies of our data while having a computationally efficient method that still provides competitive results. Experiments were performed on the above-mentioned ground truth. We tested different classification algorithms: \begin{itemize} \item Multinomial Naive Bayes (MNB) \item Support Vector Machine (SVM) \item Logistic regression (Log) \item Multilayer perceptron (MLP) \end{itemize} For the MNB, SVM and Log algorithms we used the sklearn~\cite{scikit-learn} implementations. For the MLP, we used the Tensorflow~\cite{tensorflow2015-whitepaper} and Keras~\cite{chollet2015keras} implementations. The data for all algorithms was vectorized and hashed with the sklearn HashingVectorizer. \subsection{Evaluation Procedure}\label{sec:evaluation} As a baseline, we applied a random classification. In all the experiments, we treat every book as a single document. First, we split the ground truth into a training (75\%) set and a validation (25\%) set for every time period. We first evaluated all classifiers through five-fold cross-validation on the training split.
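A minimal sketch of the vectorization and cross-validation setup described above, using sklearn; the hyperparameters shown are illustrative defaults, not necessarily the values used in our experiments:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Word uni- and bigrams (BOW plus bag-of-n-grams), hashed into a fixed-size
# feature space; alternate_sign=False keeps counts non-negative for MNB.
vectorizer = HashingVectorizer(ngram_range=(1, 2), alternate_sign=False)

classifiers = {
    "MNB": MultinomialNB(),
    "SVM": LinearSVC(),
    "Log": LogisticRegression(max_iter=1000),
}

def cross_validate(texts, labels, clf, folds=5):
    """Five-fold cross-validation on the training split, scored by F1."""
    pipe = make_pipeline(vectorizer, clf)
    return cross_val_score(pipe, texts, labels, cv=folds, scoring="f1")
```

Because the HashingVectorizer is stateless, the same instance can be reused across all candidate classifiers.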
This essentially means that the training set was split into five equally sized subsets, and for each fold one subset serves as the test set while the other four become the training data. When the results across the cross-validation are comparable, good scores are less likely to occur by chance. Subsequently, we applied the classifiers to the held-out validation data. The classification results are discussed in Section~\ref{sec:results}. The evaluation of our work follows a two-step approach. First, we gauge the effectiveness of a given method by precision, recall and F1 metrics on our training set. \emph{Precision} is the number of correct results, divided by the number of all returned results. \emph{Recall} shows how many of the documents that should have been found are actually found, dividing the number of correctly classified documents by the number of documents that actually belong to that class. \emph{F1} is the harmonic mean of precision and recall (ranging from 0 to 1, with 1 representing a perfect result). For the second step of our evaluation, we apply the model that performs best on our training data to the remaining documents of our corpus. This results in a list of all those documents, with probability scores indicating how likely they are to belong to our travelogue class. Starting with the highest probability, those documents are then manually evaluated by domain experts (see Section~\ref{sec:results}), to judge how well this model identifies travelogues in a set of unseen documents. \subsection{Minimal Ground Truth Requirements}\label{sec:clf-efficiency} Additionally, we wanted to understand how many ground truth documents are needed to train an effective classifier. We approached this by testing the top classification approach with different numbers of ground truth documents and evaluating the results. We used the same setup as described above, but varied the number of ground truth documents and randomized their selection.
For each time frame, we evaluated 5, 10, 15, 20, 25, 30 and 50 examples each for the positive class (travelogue) and the negative class (anything else); for the 18\textsuperscript{th} and 19\textsuperscript{th} centuries, we extended to 100 examples each. The model created with those documents was then tested on the remaining ground truth documents. For each sample size, we repeated this a total of five times with a different randomized sample each time. \section{Background}\label{section:background} \subsection{Characteristics of Travelogues}\label{subsection:travelogues-character} For identifying travelogues we needed, first of all, a precise definition of the notion of a \emph{travelogue}. In previous research, very broad and general definitions were suggested that, unfortunately, did not resolve all of our questions connected to the classification~\cite{Piera2018Travel,Zimmermann2002}. There was, for instance, no conclusive answer as to whether missives, letters of consuls, or texts only partly including descriptions of actual travel are to be considered travelogues. Consequently, we had to generate our own definition, which aims to apply to all historical eras, geographical regions and media types. Our definition is, however, shaped by an analysis of printed (early) modern travelogues in German and can be characterized as follows: \begin{quote} A travelogue is a specific type of media~\cite{GenzGevaudan2016} that reports on a journey which, if detectable, actually took place. Consequently, a travelogue is formed by two relations: the first is content-based (description of a journey) and the second biographical (factuality of the journey).
\end{quote} Our definition builds upon and refines the careful reflections of Almut H{\"o}fert~\cite{Hofert2003Feind}, who provides a narrower characterization: Fictional narratives are excluded, but there is no binary distinction between fictionality and factuality, since a certain amount of fictionality is part of every travelogue~\cite{Nunning2008Wirklichkeit,sandrock2015truth}, apparent factuality was often generated artificially, and fictional narratives influenced reports at times~\cite{Stagl2002Geschichte2}. A journey is a movement in space and time that begins at a starting point and then moves through a variable set of further points outside of the well-known cultural environment of the traveler. In contrast to Wolfgang Treue~\cite{Treue2014Abenteuer} we include (forced) emigrations and relics of people who died while traveling, but exclude movements on a permanent level (e.g., nomads, vagabonds). Travelogues can be handed down in various forms, whether through oral speech, non-verbal communication, text, an image or video. Travelogues obtained from the (early) modern period and available for research consist of text and/or images. The available text is predominantly in prose and can be attributed to several text genres, such as reports, diaries, letters or missives. There are, however, many mixed forms and transition zones here, especially since certain guidelines for the creation of travelogues (the so-called \emph{Ars Apodemica}) only emerged during the course of the study period and were, if at all, only partially applied by the authors~\cite{kurbis2004hispania,Stagl1983Apodemiken}. The only decisive element of a text to be classified as travelogue is the mention of the fact that it reflects the experiences of an actual journey that was undertaken, with all of the imaginable variations of spellings and semantic forms.
A frequent, but not always included, feature is an itinerary listing different stations along the journey and the connected experiences associated with the stops. Images in travelogues are usually mimetic, predominantly including portraits, landscapes, and depictions of plants, animals or architecture, but may also incorporate abstract representations. The decisive element here is the inclusion of any pictorial form that is a reflection of experiences that occurred during a journey. Thus, a series of pictures originating from an actual journey and containing no text is also understood to be a travelogue, but is not collected within the current project, which focuses specifically on text. Most of the travelogues from the (early) modern period were written by the traveling persons themselves, are therefore known as \emph{ego documents}~\cite{Presser1969Memoires}, and are, in a narrower sense, predominantly considered to be \emph{self-testimonies} (Selbstzeugnisse)~\cite{Krusenstjern1994,Ludtke1993}. Consequently, the personal experiences and cultural background of the authors, as well as other persons involved in the production of the final document, strongly shape the content of the resulting texts. However, (early) modern travelogues should not be considered detached from each other, since they depend on each other and/or other (types of) media intertextually~\cite{Pfister1993Intertext}, interpictorially~\cite{Greve2004Bild}, intermaterially or intermodally~\cite{BellingradtSalman2017}. For the definition itself it is considered irrelevant whether, in the case of a publication, a travelogue was published by the traveling person or by someone else (e.g., posthumous publications, later editions, written/edited by a related person), and whether it is an independent publication, appears in the context of a travel collection, as part of a larger publication (e.g., autobiography, historiography) or in the form of an excerpt.
\subsection{Known Document Identification Methods} Generally, linear classifiers have demonstrated solid performance for text classification tasks. This includes support vector machines (SVM) and logistic regression, as shown in \cite{joachims1998text} and \cite{fan2008liblinear}. We build on these findings and evaluate both methods in our experiments. A recent study in the digital library field, by Mai et al.~\cite{mai2018using}, compared the effectiveness of classification models trained on titles only versus models trained on full-texts and found that the former outperform the latter. They used multilayer perceptron (MLP), convolutional neural network (CNN) and long short-term memory (LSTM) architectures, and found that MLP outperformed the other methods in most cases. Although their models were trained on large-scale datasets from other domains (PubMed, EconBiz) and are therefore not directly applicable, we consider MLPs for building a travelogue identification model. Dai et al.~\cite{dai2017social} use an unsupervised method based on word embeddings to cluster social media tweets as related or unrelated to a topic, in their case influenza. Although they use much shorter texts (Twitter posts, or tweets, were limited to 140 characters until 2018), their task is similar to the one we present here in that both are binary classification tasks. The authors report an F1 score as high as 0.847, using pre-trained word embeddings from the Google News dataset. Additionally, the authors compared their approach to other methods such as keyword or related-word analysis but found their solution to perform better. In this paper, we will show that similar scores can be achieved without using pre-trained word embeddings. In~\cite{yang2016hierarchical}, Yang et al. use hierarchical attention networks for document classification, in their case sentiment estimation and topic classification (multi-class).
Their model outperforms previous methods, depending on the dataset, reaching F1 scores between 0.494 and 0.758. Additionally, they are able to visualize the informative components of a given document. This might be a suitable method for identifying possible subject indexing terms (classes) in an entire corpus and a subsequent document classification task. However, since the identification of travelogues is a binary classification problem, we refrain from using these methods at the moment. Zhang et al.~\cite{zhang2015character} use character-level convolutional networks for text classification, comparing them against methods such as BOW, n-grams and other neural network architectures. They test on several large-scale datasets (e.g., news, reviews, question/answers, DBPedia), showing that their methodology outperforms most of the other approaches, with up to 40\% fewer errors. While the authors do not report F1 scores, they illustrate that treating text as just a sequence of characters, without syntactic or semantic information or even knowing the words, can work well for classification tasks. While we do not apply their findings directly, we take inspiration from their work and use BOW and bag-of-n-grams features. \section{Introduction}\label{section:introduction} Travelogues offer a wide range of information on topics closely connected to current challenges, including mass tourism, transnational migration, interculturality and globalization. By definition, documents considered to be travelogues contain perceptions of \emph{Otherness} related to foreign regions, cultures, or religions. At the same time, travelogues are strongly shaped by the socio-cultural background of the people involved in their production. Comparative analysis allows us, in turn, to scrutinize how (specific) cultures handled \emph{Otherness}, as well as to examine the evolution of stereotypes and prejudices.
This high degree of topicality fosters the continuous growth of studies on travels and travelogues, as can be observed in the sheer flood of publications appearing every year (cf.~\cite{salzani2010bibliography}). While heuristic approaches proved to be fruitful~\cite{AgaiConermann2013,vanGroesen2008}, many fundamental questions connected to travelogues remain unanswered, among other reasons because previous analyses of travelogues rarely exceeded a dozen primary sources. In response, we seek to leverage, for the first time on travelogues, the possibilities offered by large-scale digitization efforts as well as novel automated text-mining and machine learning techniques. This allows us to significantly increase the quantity of text we can analyze. The overall goal of our work is to develop a novel mixed qualitative and quantitative method for the serial analysis of large-scale text corpora and apply that method to a comprehensive corpus of German language travelogues from the period 1500--1876 (ca. 3,000--3,500 books) drawn from the Austrian Books Online project (ca. 600,000 books) of the Austrian National Library (ONB). As a first step, and this is the focus of this paper, we seek to provide automated support for scholars in identifying travelogues in large collections of historical documents, which have been scanned and undergone an optical character recognition (OCR) process by Google. A major challenge clearly lies in finding an effective method that can be scaled for large collections, is robust enough to support documents with varying OCR quality and can deal with the evolution of the German language over almost four centuries. Previous studies have already demonstrated the potential of quantitative methods for investigating cultural trends~\cite{michel2011quantitative} or types of discourses in the past~\cite{Purschwitz2018Netzwerke}, and the effectiveness of automated machine learning techniques for subject indexing~\cite{mai2018using}.
However, to the best of our knowledge, no method has previously been tailored to the specific characteristics and unique challenges of identifying travelogues. To this end, our contributions can be summarized as follows: \begin{enumerate} \item We reviewed the characteristics and commonalities of travelogues and combined our findings into a generic definition of a \emph{travelogue}. \item We provided a manually annotated dataset of documents that match our working definition of a travelogue in the range of the 16th to the 19th century.\footnote{We will share the corpus here: \url{https://github.com/Travelogues/travelogues-corpus}.} \item We employed that dataset as a ground-truth for evaluating a variety of document classification methods and found that a multilayer perceptron (MLP) model trained with a standard bag-of-words (BOW) and bag-of-n-grams (range 1, 2) feature set can effectively identify travelogues with an F1 score of 1 (16th c.), 0.94 (17th c.), 0.94 (18th c.) and 0.97 (19th c.).\footnote{The code (as Jupyter notebook) that we used for the classification is available here: \url{https://github.com/Travelogues/identifying-travelogues}.} \item We found that approximately 30 manually annotated documents are needed for training an effective classifier. \end{enumerate} Our results show that standard machine learning approaches can effectively identify travelogues in large text corpora. When we applied our most effective model on the ONB's entire German language corpus, we unearthed 345 travelogues that could not be identified using a traditional keyword search. Thus, we were able to create the most extensive collection of early modern German travelogues to date. This will provide us with a solid baseline for determining subsequent steps to develop a serial text-analysis method, which will focus on the specific phenomena of intertextuality and analysis of semantic expressions referring to \emph{Otherness}.
We will present our definition of travelogue and closely related work in the next section. Afterward, in Section~\ref{section:methodology}, we outline our methodology before presenting our results in Section~\ref{sec:results}. Finally, we discuss the implications and limitations in Section~\ref{section:discussion} and conclude our paper in Section~\ref{sec:conclusions}.
\section{Introduction} \label{introduction} In the current understanding, the matter created in heavy-ion collisions behaves as a nearly perfect expanding fluid \cite{heinz} under extreme conditions of very high density and temperature. This hydrodynamic behavior was observed at Brookhaven's Relativistic Heavy Ion Collider (RHIC) and recently confirmed by the ALICE collaboration in Pb-Pb collisions at the LHC \cite {floris,cern}. In high energy collisions the produced particles are predominantly pions. The agreement between the pion production results reported in \cite {floris} and the theoretical hydrodynamical model predictions \cite{shen} is truly remarkable. A realistic hydrodynamic model may be constructed \cite{dumitru-kolb} in which a transverse expansion is superimposed on a longitudinal boost invariant expansion \cite{bjorken}. It is often stated by particle physicists that heavy ion collisions create mini Big Bangs\footnote{A few recent quotations from the press: ``What we're doing is reproducing the conditions that existed at the very early universe, a few millionths of a second after the Big Bang," \cite{tuts}; ``The Large Hadron Collider has successfully created a ``mini-Big Bang" by smashing together lead ions instead of protons.'' \cite{bbc}; ``The collisions generated mini Big Bangs and the highest temperatures and densities ever achieved in an experiment'' \cite{evans1}.} -- events in which matter is created under extreme conditions of high density and high temperature resembling the conditions in the early universe a fraction of a second after the Big Bang. The expansion of hadronic matter that takes place immediately after a heavy ion collision has a certain similarity with the cosmological expansion. However, the analogy is rather superficial since in a cosmological expansion of spacetime after the Big Bang gravity plays the essential role, whereas high energy collisions and the subsequent expansion do not involve gravity at all.
Although the Minkowski spacetime with expanding hadronic matter can be mapped into an expanding spacetime, the resulting spacetime is still flat. However, we will demonstrate here that in high energy collisions a much closer analogy with cosmology may be drawn owing to the effective analog gravity with essentially curved geometry. Various aspects of analog gravity (for a review and extensive list of references see \cite{barcelo}) have been studied in acoustics \cite{visser}, optics \cite{philbin}, superfluidity \cite{jacobson}, black hole accretion \cite{moncrief,abraham}, and a hadronic fluid near the QCD chiral phase transition \cite{tolic}. In this paper we study in detail the framework of analog gravity provided by a hadronic fluid at nonzero temperature for the whole range of temperatures below the chiral phase transition. We show that the analog cosmological spacetime corresponds to a contracting FRW universe with a nontrivial apparent horizon. Strongly interacting matter is described at the fundamental level by a nonabelian gauge theory called quantum chromodynamics (QCD). At large distances or small momenta, QCD exhibits the phenomena of quark confinement and chiral symmetry breaking. At low energies, the QCD vacuum is characterized by a nonvanishing expectation value \cite{shifman}: $\langle \bar\psi\psi\rangle \approx$ (235 MeV)$^3$, the so-called quark condensate, which describes the density of quark-antiquark pairs found in the QCD vacuum; its nonvanishing value is the manifestation of chiral symmetry breaking. The phenomenological importance of the chiral transition and possible experimental signatures have been discussed by Harris and M\"uller \cite{harris}. The chiral symmetry breaking and restoration at finite temperature may be conveniently studied using the linear sigma model \cite{bilic,bilic1} originally proposed as a model for strong nuclear interactions \cite{gell}.
Today, the linear sigma model serves as an effective model for the low-energy (low-temperature) phase of QCD. The basic model involves four scalar fields (three pions and a sigma meson) and two-flavor constituent quarks. In the chirally symmetric phase at temperatures above the chiral transition point the mesons are massive with equal masses and quarks are massless. In the chirally broken phase the pions are massless, whereas the quarks and sigma meson acquire a nonzero mass proportional to the chiral condensate. At temperatures below the chiral phase transition point the pions, although being massless, propagate slower than light \cite{pisarski2,son1,son2} with a velocity approaching zero at the critical temperature. Hence, it is very likely that there exists a region where the flow velocity exceeds the pion velocity and an analog trapped region may form. In our previous paper \cite{tolic} we have demonstrated that a region containing analog trapped surfaces forms near the chiral phase transition. The purpose of this paper is to study general conditions for the formation of a trapped region whose inner boundary is a marginally trapped surface, which we refer to as the {\em analog apparent horizon}. Our approach is based on the linear sigma model combined with a boost invariant Bjorken-type spherical expansion. A similar model has been previously studied in the context of disoriented chiral condensate \cite{lampert}. The remainder of the paper is organized as follows. In Sec.~\ref{chiral} we describe the properties and the dynamics of the chiral fluid at finite temperature. The analog geometry of the expanding chiral fluid is studied in Sec.~\ref{analog} in which we derive the condition for the analog apparent horizon and study the analog Hawking effect. In the concluding section, Sec.~\ref{conclusion}, we summarize our results and discuss physical consequences.
Finally, in Appendix \ref{trapped} we outline basic notions related to trapped surfaces in black hole physics and cosmology. \section{Chiral fluid} \label{chiral} In this section we focus on the physics of hadrons at finite temperature and study the properties and the dynamics of an expanding chiral fluid. We base our study on a linear sigma model with no fermions, which we describe in Sec.\ \ref{linear}. In Sec.\ \ref{velocity} we calculate the effective velocity of pions propagating in a chiral medium. We model the fluid expansion on a boost invariant spherical expansion of the Bjorken type, which we describe in Sec.\ \ref{bjorken}. \subsection{Linear sigma model} \label{linear} Consider a linear sigma model at finite temperature in a general curved spacetime background. For our purpose it is sufficient to study the model with no constituent fermions. The thermal bath provides a medium which may have an inhomogeneous velocity field. The dynamics of mesons in such a medium is described by an effective chirally symmetric Lagrangian of the form \cite{bilic2} \begin{equation} \label{eq1} {\cal{L}} = \frac{1}{2}(a\, g^{\mu\nu} +b\, u^{\mu}u^{\nu})\partial_{\mu} \varphi \partial_{\nu} \varphi - \frac{m_0^2}{2} \varphi^2 - \frac{\lambda}{4} (\varphi^2)^2 , \end{equation} where $u_{\mu}$ is the velocity of the fluid, and $g_{\mu\nu}$ is the background metric. The mesons $\varphi\equiv (\sigma ,$ {\boldmath$\pi$}) constitute the $(\frac{1}{2},\frac{1}{2})$ representation of the chiral SU(2)$\times$SU(2). The parameters $a$ and $b$ depend on the local temperature $T$ and on the parameters of the model $m_0$ and $\lambda$ and may be calculated in perturbation theory. At zero temperature the medium is absent, in which case $a=1$ and $b=0$. If $m_0^{2} < 0$ the chiral symmetry will be spontaneously broken.
At the classical level, the $\sigma$ and $\pi$ fields develop nonvanishing expectation values such that at zero temperature \begin{equation}\label{eq2} \langle \sigma \rangle^{2} + \langle \mbox{\boldmath$\pi$} \rangle^{2}= - \frac{m_0^{2}}{\lambda} \equiv f_{\pi}^{2} . \end{equation} It is convenient to choose here \begin{equation}\label{eq3} \langle \pi_{i} \rangle = 0, \;\;\;\;\; \; \langle \sigma \rangle = f_{\pi} . \end{equation} At nonzero temperature the expectation value $\langle \sigma \rangle$ is temperature dependent and vanishes at the chiral transition point. Redefining the fields \begin{equation}\label{eq9} \varphi \rightarrow \varphi +\varphi'(x) = (\sigma,\mbox{\boldmath$\pi$})+ (\sigma'(x),\mbox{\boldmath$\pi$}'(x)) , \end{equation} where {\boldmath$\pi'$} and $\sigma'$ are quantum fluctuations around the constant values {\boldmath$\pi$} = 0 and $\sigma =\langle \sigma \rangle$ respectively, we obtain the effective Lagrangian in which the chiral symmetry is explicitly broken: \begin{equation}\label{eq5} {\cal{L}'} = \frac{1}{2}(a\, g^{\mu\nu} +b\, u^{\mu}u^{\nu})\partial_{\mu} \varphi' \partial_{\nu} \varphi' - \frac{m_{\sigma}^{2}}{2} \sigma'^{2} - \frac{m_{\pi}^{2}}{2} \mbox{\boldmath$\pi$}'^{2} -g \sigma'\varphi'^2 - \frac{\lambda}{4} (\varphi'^2)^2 . \end{equation} The fields $\sigma'$ and $\mbox{\boldmath$\pi$}'$ correspond to the physical sigma meson and pions, respectively. The effective masses and the trilinear coupling $g$ are functions of $\sigma$ defined as \begin{eqnarray}\label{eq11} m_{\sigma}^{2} & = & m_0^{2} + 3 \lambda \sigma^2 , \nonumber \\ m_{\pi}^{2} & = & m_0^{2}+\lambda \sigma^{2} , \\ g & = & \lambda \sigma . \nonumber \end{eqnarray} For temperatures below the chiral transition point the meson masses are given by \begin{equation} m_{\pi}^2 = 0\, ; \;\;\;\;\;\; m_{\sigma}^2 = 2\lambda \sigma^{2} , \label{eq43} \end{equation} in agreement with the Goldstone theorem. 
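The mass relations above and the Goldstone result can be verified in a few lines. The sketch below uses illustrative numerical values of $\lambda$ and $f_\pi$ (the coupling is an arbitrary placeholder, not a fitted parameter) and checks that at the minimum $\sigma=f_\pi$ the pion mass vanishes while $m_\sigma^2=2\lambda f_\pi^2$:

```python
def masses(m0_sq, lam, sigma):
    """Effective meson masses and trilinear coupling as functions of
    the condensate sigma: m_sigma^2 = m0^2 + 3*lam*sigma^2,
    m_pi^2 = m0^2 + lam*sigma^2, g = lam*sigma."""
    m_sigma_sq = m0_sq + 3.0 * lam * sigma ** 2
    m_pi_sq = m0_sq + lam * sigma ** 2
    g = lam * sigma
    return m_sigma_sq, m_pi_sq, g

# At the zero-temperature minimum sigma = f_pi and m0^2 = -lam*f_pi^2.
# lam is an illustrative value; f_pi is the pion decay constant in GeV.
lam = 20.0
f_pi = 0.0924
m0_sq = -lam * f_pi ** 2

m_sig_sq, m_pi_sq, g = masses(m0_sq, lam, f_pi)
```

With these inputs $m_\pi^2$ vanishes identically and $m_\sigma^2=2\lambda f_\pi^2$, in agreement with the Goldstone theorem.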
The temperature dependence of the chiral condensate $\sigma$ is obtained by minimizing the thermodynamical potential $\Omega=-(T/V) \ln Z$ with respect to $\sigma$ at fixed inverse temperature $\beta$. At one loop order, the extremum condition reads \cite{bilic1} \begin{equation} \sigma^{2}= f_{\pi}^{2} - 3 \int \frac{d^3p}{(2\pi)^3} \: \frac{1}{\omega_{\sigma}} \: n_{B} (\omega_{\sigma}) - 3 \int \frac{d^3p}{(2\pi)^3} \: \frac{1}{\omega_{\pi}} \: n_{B} (\omega_{\pi}) \, , \label{eq032} \end{equation} where \begin{equation}\label{eq28} \omega_{\pi} =|\mbox{\boldmath $p$}| \, ; \;\;\;\;\;\; \omega_{\sigma} =(\mbox{\boldmath $p$}^2+m_{\sigma}^{2})^{1/2} \end{equation} are the energies of the $\pi$ and $\sigma$ particles, respectively, and \begin{equation}\label{eq30} n_{B}(\omega) = \frac{1}{e^{\beta \omega} - 1} \end{equation} is the Bose-Einstein distribution function. Eq.\ (\ref{eq032}) has been derived from the zero-order thermodynamical potential with meson masses at one loop order \cite{bilic1}. This approximation corresponds to the leading order in the $1/N$ expansion, where $N$ is the number of scalar fields \cite{meyer}. In our case, $N=4$. The right-hand side of (\ref{eq032}) depends on $\sigma$ through the mass $m_{\sigma}$ given by (\ref{eq43}). The behavior of $\sigma$ near the critical temperature should be analyzed with special care. A straightforward solution to (\ref{eq032}) as a function of temperature exhibits a weak first-order phase transition \cite{bilic1,rod}. However, Pisarski and Wilczek have shown on general grounds that the phase transition in SU(2)$\times$SU(2) chiral models should be of second order \cite{pis}. Hence, it is generally believed that a first-order transition is an artifact of the one loop approximation. Two loop calculations \cite{baacke} improve on this and confirm the general analysis of \cite{pis}.
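The gap equation (\ref{eq032}) is straightforward to iterate numerically. In the sketch below the massless-pion loop is evaluated analytically, $\int d^3p\, n_B(\omega_\pi)/[(2\pi)^3\omega_\pi]=T^2/12$, the $\sigma$ loop is computed by a simple midpoint rule, and $m_\sigma^2=2\lambda\sigma^2$ as in (\ref{eq43}); the coupling $\lambda$ is an illustrative choice, not a fitted value:

```python
import math

FPI = 0.0924   # pion decay constant f_pi [GeV]
LAM = 58.5     # illustrative quartic coupling (an assumption)

def bose(w, T):
    """Bose-Einstein occupation number n_B(w) at temperature T."""
    return 1.0 / math.expm1(w / T)

def integral_sigma(m_sig, T, pmax=3.0, n=3000):
    """(1/2 pi^2) * int_0^pmax dp p^2 n_B(omega_sigma)/omega_sigma,
    the sigma-loop integral of the gap equation (midpoint rule)."""
    dp = pmax / n
    s = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        w = math.hypot(p, m_sig)
        s += p * p * bose(w, T) / w
    return s * dp / (2.0 * math.pi ** 2)

def solve_sigma(T, iters=40):
    """Fixed-point iteration of sigma^2 = f_pi^2 - T^2/4 - 3*I_sigma(T),
    with m_sigma = sqrt(2*LAM)*sigma in the broken phase."""
    sig = FPI
    for _ in range(iters):
        m_sig = math.sqrt(2.0 * LAM) * sig
        s2 = FPI ** 2 - T ** 2 / 4.0 - 3.0 * integral_sigma(m_sig, T)
        if s2 <= 0.0:
            return 0.0
        sig = math.sqrt(s2)
    return sig
```

The iteration reproduces the expected qualitative behavior: $\sigma(T)\to f_\pi$ at low temperature and decreases monotonically as the temperature rises (it is not reliable in the immediate vicinity of the transition, where the one loop solution is delicate).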
It is possible to mimic the second order phase transition even with (\ref{eq032}) by making the $\sigma$-meson mass temperature independent all the way up to the critical temperature and equal to its zero-temperature mean field value given by \begin{equation} m_{\sigma}^2 = 2\lambda f_\pi^2 , \label{eq31} \end{equation} instead of (\ref{eq43}). We fix the coupling $\lambda$ from the values of $m_\sigma$ and $f_\pi$, for which we take $m_\sigma=1$ GeV and $f_\pi = 92.4$ MeV as a phenomenological input. In Fig.\ \ref{fig1} we plot the solutions to (\ref{eq032}) for both temperature dependent and temperature independent $m_\sigma$, exhibiting apparent first and second order phase transitions, respectively. In the rest of the paper we employ the solution that corresponds to the second order phase transition. For our choice of parameters we find numerically $T_{\rm c}=182.822$ MeV. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth,trim= 0 0cm 0 0cm]{fig1.eps} \caption{Chiral condensate as a function of temperature for a temperature independent (full line) and temperature dependent $m_\sigma$ (dashed line), representing the second order and the first order (discontinuous) phase transitions, respectively. The critical temperature of the second order phase transition is indicated by $T_{c}$.} \label{fig1} \end{center} \end{figure} The propagation of pions is governed by the equation of motion \begin{equation} \frac{1}{\sqrt{-g}} \partial_{\mu} \left[ {\sqrt{-g}}\, ( a\, g^{\mu\nu}+ b\, u^{\mu} u^{\nu}) \partial_{\nu}\mbox{\boldmath{$\pi$}}\right] +V(\sigma, \mbox{\boldmath{$\pi$}}) \mbox{\boldmath{$\pi$}}=0, \label{eq013} \end{equation} where \begin{equation} V(\sigma, \mbox{\boldmath{$\pi$}})= m_\pi^2 + g\sigma+ \lambda (\sigma^2+\mbox{\boldmath{$\pi$}}^2) \label{eq213} \end{equation} is the interaction potential.
In the comoving reference frame in flat spacetime, equation (\ref{eq013}) reduces to the wave equation \begin{equation} (\partial_t^2 - c_{\pi}^2 \Delta +\frac{c_{\pi}^2}{a}V) \mbox{\boldmath{$\pi$}}=0 , \label{eq014} \end{equation} where the quantity $c_{\pi}$ defined by \begin{equation} c_{\pi}^2=\left(1+\frac{b}{a}\right)^{-1} , \label{eq015} \end{equation} is the pion velocity. As we shall demonstrate in the next section, the constants $a$ and $b$ may be derived from the finite-temperature perturbation expansion of the pion self energy. \subsection{Pion velocity} \label{velocity} At temperatures below the chiral transition point the pions are massless. However, the velocity of massless particles in a medium is not necessarily equal to the velocity of light -- in the chiral fluid pions usually propagate slower than light\footnote{If chiral fermions are present, pions become superluminal in a certain range of temperatures and baryon chemical potentials \cite{bilic2}.}. The pion velocity in a sigma model at finite temperature has been calculated at one loop level by Pisarski and Tytgat in the low temperature approximation \cite{pisarski2} and by Son and Stephanov for temperatures close to the chiral transition point \cite{son1,son2}. It has been found that the pion velocity vanishes as one approaches the critical temperature. Here we summarize the calculation of the parameters $a$ and $b$ in the entire range of temperatures in the chiral symmetry broken phase \cite{bilic2}. The pion velocity may be derived from the self energy $\Sigma(q,T)$ in the limit when the external momentum $q$ approaches 0.
For a flat background geometry $g_{\mu\nu}=\eta_{\mu\nu}$, the inverse pion propagator $\Delta^{-1}$, derived directly from the effective Lagrangian (\ref{eq5}) as \begin{equation} \Delta^{-1}=a q^{\mu}q_{\mu} +b(q^{\mu}u_{\mu})^2 -m_{\pi}^2 , \label{eq200} \end{equation} may in the limit $q\rightarrow 0$ be expressed in the form \begin{equation} Z_{\pi}\Delta^{-1}= q^{\mu}q_{\mu}- \frac{1}{2!} q^{\mu}q^{\nu}\left[\frac{\partial}{\partial q^{\mu}} \frac{\partial}{\partial q^{\nu}} (\Sigma(q,T) - \Sigma(q,0)) \right]_{q=0} +\dots , \label{eq201} \end{equation} where the ellipsis denotes the terms of higher order in $q^{\mu}$. The $q^{\mu}$-independent term of the self energy is absorbed into the renormalized pion mass, which is equal to zero in the chiral symmetry broken phase. The subtracted $T=0$ term has been absorbed in the wave function renormalization factor $Z_{\pi}$. By comparing this equation with Eq.\ (\ref{eq200}) written in the comoving frame as \begin{equation} \Delta^{-1}=(a+b)q_0^2 -a \mbox{\boldmath $q$}^2-m_\pi^2, \label{eq202} \end{equation} we can express the parameters $a$ and $b$, and hence the pion velocity, in terms of second derivatives of $\Sigma(q,T)$ evaluated at $q^{\mu}=0$. At one loop level the only diagram that gives a nontrivial $q$ dependence of $\Sigma$ is the bubble diagram. Subtracting the $T=0$ term one finds \cite{son2} \begin{eqnarray} \Sigma(q) \!&\! \equiv \!&\! \Sigma(q,T) - \Sigma(q,0) = -4{g}^2 \int\! \frac{d^3p}{(2\pi)^3} \frac{1}{2\omega_{\pi} 2\omega_{\sigma,q}} \nonumber\\ \!&\!\!&\! \left\{ [n_B(\omega_{\pi})+ n_B(\omega_{\sigma,q})] \left(\frac{1}{\omega_{\sigma,q}+ \omega_{\pi}} + \frac{1}{\omega_{\sigma,q}+ \omega_{\pi}+q_0}\right)\right. \nonumber\\ \!&\!\!&\! + \left. [n_B(\omega_{\pi})- n_B(\omega_{\sigma,q})]\left( \frac{1}{\omega_{\sigma,q}- \omega_{\pi}} + \frac{1}{\omega_{\sigma,q}- \omega_{\pi}+ q_0}\right)\right\} , \label{eq203} \end{eqnarray} where $\omega_{\sigma,q}= [(\mbox{\boldmath $p$}- \mbox{\boldmath $q$})^2 +m_\sigma^2]^{1/2}$. Here we take $m_\sigma$ to be a function of $\sigma$ through Eq.\ (\ref{eq43}). A straightforward evaluation of the second derivatives of $\Sigma(q)$ at $q_{\mu}=0$ yields \begin{equation} a = 1+ \frac{16 {g}^2}{m_{\sigma}^4} \int\! \frac{d^3p}{(2\pi)^3} \left[ \frac{n_B(\omega_{\pi})}{4\omega_{\pi}}+ \frac{n_B(\omega_{\sigma}) }{4\omega_{\sigma}} - \frac{1}{3} \frac{\omega_{\pi}^2}{m_{\sigma}^2} \left( \frac{n_B(\omega_{\pi})}{\omega_{\pi}} - \frac{n_B(\omega_{\sigma}) }{\omega_{\sigma}} \right)\right] , \label{eq204} \end{equation} \begin{equation} b = \frac{16{g}^2}{m_{\sigma}^4} \int\! \frac{d^3p}{(2\pi)^3} \left[ \frac{\omega_{\pi} n_B(\omega_{\pi}) }{m_{\sigma}^2}- \frac{\omega_{\sigma} n_B(\omega_{\sigma}) }{m_{\sigma}^2} + \frac{1}{3} \frac{\omega_{\pi}^2}{m_{\sigma}^2} \left( \frac{n_B(\omega_{\pi})}{\omega_{\pi}} - \frac{n_B(\omega_{\sigma}) }{\omega_{\sigma}} \right)\right] . \label{eq205} \end{equation} The pion velocity $c_{\pi}$ as given by (\ref{eq015}) depends on temperature explicitly through the thermal distribution function $n_B$ and implicitly through the chiral condensate $\sigma$ given by Eq.\ (\ref{eq032}). \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth,trim= 0 0cm 0 0cm]{fig2.eps} \caption{ Pion velocity as a function of temperature for a temperature independent (full line) and temperature dependent $m_\sigma$ (dashed line). The critical temperature of the second order phase transition is indicated by $T_{c}$.} \label{fig1a} \end{center} \end{figure} In Fig.\ \ref{fig1a} we plot $c_{\pi}$ as a function of temperature corresponding to the two solutions depicted in Fig.\ \ref{fig1}.
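The integrals (\ref{eq204}) and (\ref{eq205}) are easy to evaluate numerically. The sketch below takes an illustrative pair of inputs $g$ and $m_\sigma$ (placeholders, not the self-consistent values of the model) and returns $a$, $b$, and $c_\pi$ from (\ref{eq015}):

```python
import math

def bose(w, T):
    """Bose-Einstein occupation number n_B(w)."""
    return 1.0 / math.expm1(w / T)

def a_b_coefficients(g, m_sigma, T, pmax=4.0, n=4000):
    """Midpoint-rule evaluation of the one loop integrals for the
    Lagrangian parameters a and b at temperature T (GeV units),
    with omega_pi = p and omega_sigma = sqrt(p^2 + m_sigma^2)."""
    pref = 16.0 * g * g / m_sigma ** 4
    dp = pmax / n
    Ia = Ib = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        wpi = p
        wsg = math.hypot(p, m_sigma)
        npi, nsg = bose(wpi, T), bose(wsg, T)
        diff = npi / wpi - nsg / wsg
        Ia += p * p * (npi / (4.0 * wpi) + nsg / (4.0 * wsg)
                       - (wpi ** 2 / (3.0 * m_sigma ** 2)) * diff)
        Ib += p * p * ((wpi * npi - wsg * nsg) / m_sigma ** 2
                       + (wpi ** 2 / (3.0 * m_sigma ** 2)) * diff)
    norm = dp / (2.0 * math.pi ** 2)
    return 1.0 + pref * Ia * norm, pref * Ib * norm

def pion_velocity(a, b):
    """c_pi = (1 + b/a)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 + b / a)
```

For any positive temperature $b>0$, so $c_\pi<1$; at low temperature both thermal loops are Boltzmann suppressed and $c_\pi\to 1$, as expected.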
\subsection{Spherical Bjorken expansion} \label{bjorken} In order to explore the analogy between the chiral-fluid and cosmological expansions we consider a boost invariant spherically symmetric Bjorken type expansion \cite{bjorken} in Minkowski background spacetime. In radial coordinates $x^\mu=(t,r,\vartheta,\varphi)$ the fluid velocity is given by \begin{equation} u^\mu=(\gamma,\gamma v, 0,0)= (t/\tau, r/\tau,0,0), \label{eq144} \end{equation} where $v=r/t $ is the radial three-velocity and $\tau=\sqrt{t^2-r^2}$ is the {\em proper time}. Using the so-called {\em radial rapidity} \begin{equation} y=\frac{1}{2} \ln \frac{t+r}{t-r} , \label{eq145} \end{equation} the velocity is expressed as \begin{equation} u^\mu=(\cosh y,\sinh y,0, 0), \label{eq146} \end{equation} and hence, the radial three-velocity is \begin{equation} v=\tanh y. \label{eq246} \end{equation} It is convenient to change $(t,r,\vartheta,\varphi)$ to new coordinates $(\tau,y,\vartheta,\varphi)$ via the transformation \begin{eqnarray} & &t=\tau \cosh y , \nonumber \\ & & r=\tau \sinh y . \label{eq147} \end{eqnarray} In these coordinates the background Minkowski metric takes the form \begin{equation} g_{\mu\nu} = \left(\begin{array}{cccc} 1 & & & \\ & -\tau^2 & & \\ & & -\tau^2\sinh^2\! y & \\ & & & -\tau^2\sinh^2\! y \sin^2 \vartheta \end{array} \right), \label{eq218} \end{equation} and the velocity components become $u^\mu=(1,0,0,0)$. Hence, the new coordinate frame is comoving. The metric (\ref{eq218}) corresponds to the Milne cosmological model -- a homogeneous, isotropic, expanding universe with the cosmological scale $a=\tau$ and negative spatial curvature. The functional dependence of $T$ on $\tau$ follows from the energy-momentum conservation.
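As a quick numerical sanity check, the coordinate map (\ref{eq147}) and the rapidity relation (\ref{eq246}) can be verified directly; this is a minimal sketch:

```python
import math

def milne_from_minkowski(t, r):
    """(t, r) -> (tau, y): proper time and radial rapidity.
    Valid inside the forward light cone, t > r >= 0."""
    tau = math.sqrt(t * t - r * r)
    y = 0.5 * math.log((t + r) / (t - r))
    return tau, y

def minkowski_from_milne(tau, y):
    """Inverse map: t = tau*cosh(y), r = tau*sinh(y)."""
    return tau * math.cosh(y), tau * math.sinh(y)
```

The two maps are inverses of each other, and the radial three-velocity $v=r/t$ coincides with $\tanh y$.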
For a perfect relativistic fluid the energy-momentum tensor is given by \begin{equation} T_{\mu\nu}=(p+\rho) u_{\mu}u_{\nu}-p g_{\mu\nu} , \label{eq001} \end{equation} where $p$ and $\rho$ denote respectively the pressure and the energy density of the fluid. From the energy-momentum conservation \begin{equation} {T^{\mu\nu}}_{;\nu}=0 \label{eq102} \end{equation} applied to (\ref{eq001}) we find \begin{equation} u^\mu \rho_{,\mu}+(p+\rho){u^\mu}_{;\mu}=0, \label{eq003} \end{equation} where the subscript $;\mu$ denotes the covariant differentiation associated with the background metric. Since our fluid is dominated by massless pions at nonzero temperature, it is a reasonable approximation to assume the equation of state $p=\rho/3$ of an ideal gas of massless bosons. Then, Eq.\ (\ref{eq003}) in comoving coordinates reads \begin{equation} \frac{\partial\rho}{\partial\tau} + \frac{4\rho}{\tau} =0 \label{eq148} \end{equation} with the solution \begin{equation} \rho=\rho_0 \left(\frac{\tau_0}{\tau}\right)^4. \label{eq006} \end{equation} This expression combined with the density of the pion gas \cite{landau} \begin{equation} \rho=\frac{\pi^2}{10}T^4, \label{eq106} \end{equation} implies the temperature profile \begin{equation} T=T_0 \frac{\tau_0}{\tau}. \label{eq007} \end{equation} The constants $T_0$ and $\tau_0$ may be fixed from the phenomenology of high energy collisions. For example, if we choose $T_0=1\,{\rm GeV}$, then a typical value of $\rho=1 \,{\rm GeV/fm^3}$ at $\tau\approx 5 \,{\rm fm}$ \cite{kolb-russkikh} is obtained with $\tau_0 = 1.5\, {\rm fm}$. In our case, with these values the interesting range of temperatures $T$ between 100 and 200 MeV corresponds to $\tau$ between 7.5 and 15 fm. In the following we work with $T_0=1\,{\rm GeV}$ and keep $\tau_0$ unspecified so that physical quantities of dimension of time or length are expressed in units of $\tau_0$.
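The numbers quoted above are easy to verify. A small sketch of the cooling law (\ref{eq007}) and the pion-gas energy density (\ref{eq106}), using the conversion constant $\hbar c = 0.19733\;{\rm GeV\,fm}$ to pass from ${\rm GeV}^4$ to ${\rm GeV/fm^3}$:

```python
import math

HBARC = 0.19733  # conversion constant hbar*c in GeV*fm

def temperature(tau_fm, T0=1.0, tau0_fm=1.5):
    """Bjorken cooling law T = T0 * tau0/tau; tau in fm, T in GeV."""
    return T0 * tau0_fm / tau_fm

def energy_density(tau_fm, T0=1.0, tau0_fm=1.5):
    """rho = (pi^2/10) T^4 in GeV^4, converted to GeV/fm^3."""
    T = temperature(tau_fm, T0, tau0_fm)
    return (math.pi ** 2 / 10.0) * T ** 4 / HBARC ** 3
```

With $T_0=1$ GeV and $\tau_0=1.5$ fm this gives $\rho\approx 1\;{\rm GeV/fm^3}$ at $\tau=5$ fm, and $T=200$ MeV and $100$ MeV at $\tau=7.5$ fm and $15$ fm, respectively.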
\section{Analog cosmology} \label{analog} In this section we turn to the analog metric and to the formation and properties of the apparent horizon in an expanding chiral fluid. To this end we outline the formalism in the first subsection and derive a condition for the apparent horizon for a general hyperbolic spacetime. In Sec.\ \ref{horizon} we derive the analog metric for the expanding chiral fluid and study the properties of the analog apparent horizon. Then, in Sec.\ \ref{surface} we exploit the Kodama-Hayward definition of surface gravity to derive the Hawking temperature as a function of the parameters of the chiral fluid, in particular, as a function of the local fluid temperature. \subsection{Radial null geodesics} \label{radial} To study the apparent horizon in an expanding chiral fluid we need to examine the behavior of radial null geodesics of the analog metric which we shall derive in Sec.\ \ref{horizon}. With hindsight, we first consider a spacetime of the form \begin{equation} ds^2= \beta(\tau)^2 d\tau^2 -\alpha(\tau)^2 (dy^2 + \sinh^2\! y \, d\Omega^2), \label{eq008} \end{equation} where $\beta$ and $\alpha$ are arbitrary functions of $\tau$. The metric tensor is \begin{equation} G_{\mu\nu} = \left(\begin{array}{cccc} \beta^2 & & & \\ & -\alpha^2 & & \\ & & -\alpha^2\sinh^2\! y & \\ & & & -\alpha^2\sinh^2\! y \sin^2 \vartheta \end{array} \right). \label{eq243} \end{equation} This metric represents the class of hyperbolic ($k=-1$) FRW spacetimes including the flat spacetime example (\ref{eq218}). We denote by $l_+^\mu$ and $l_-^\mu$ the vectors tangent to outgoing and ingoing affinely parameterized radial null geodesics normal to a spherical two-dimensional surface $S$. The tangent vectors are null with respect to the metric (\ref{eq243}), i.e., \begin{equation} G_{\mu\nu}l_+^\mu l_+^\nu =G_{\mu\nu}l_-^\mu l_-^\nu= 0 .
\label{eq143} \end{equation} Using the geodesic equation \begin{equation} l^\mu \nabla_\mu{l^\nu}=0, \label{eq009} \end{equation} where the symbol $\nabla_\mu$ denotes a covariant derivative associated with the metric (\ref{eq243}), one easily finds the tangent null vectors corresponding to four types of radial null geodesics, \begin{equation} l_\pm^\mu =q_\pm \alpha^{-1}\left( \beta^{-1}, \pm\alpha^{-1},0,0\right), \label{eq010} \end{equation} tangent to future directed and \begin{equation} l_\pm^\mu =\tilde{q}_\pm \alpha^{-1}\left( - \beta^{-1}, \pm\alpha^{-1},0,0\right), \label{eq011} \end{equation} to past directed null geodesics, where $q_+$, $q_-$, $\tilde{q}_+$, and $\tilde{q}_-$ are arbitrary positive constants. The corresponding affine parameters $\lambda_+$ and $\lambda_-$ for the outgoing and ingoing null geodesics, respectively, are found to satisfy \begin{equation} \frac{d\lambda_\pm}{d\tau}= \frac{1}{q_\pm}\alpha\beta \label{eq012} \end{equation} for future directed and \begin{equation} \frac{d\lambda_\pm}{d\tau}=- \frac{1}{\tilde{q}_\pm}\alpha\beta , \label{eq019} \end{equation} for past directed null geodesics. For simplicity, from now on we set $q_+ =q_-=\tilde{q}_+=\tilde{q}_-=1$. The null vectors $l_+^\mu$ and $l_-^\mu$ point towards increasing and decreasing $y$, respectively. Hence, we adopt the usual convention and refer to $l_+^\mu$ ($l_-^\mu$) and the corresponding null geodesics as outgoing (ingoing), although increasing (decreasing) $y$ does not necessarily imply increasing (decreasing) of the radial coordinate $r$. As we move along a geodesic the changes of the coordinates $\tau$ and $y$ are subject to the condition $ds=0$ of radial null geodesics, i.e., \begin{equation} d\tau=\pm \frac{\alpha}{\beta} dy \label{eq017} \end{equation} along the geodesic. Here the signs determine whether $y$ is increasing or decreasing as we move along the geodesic.
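That the tangent vectors (\ref{eq010}) are indeed null with respect to the metric (\ref{eq243}) can be checked numerically for arbitrary positive $\alpha$ and $\beta$; a minimal sketch:

```python
import math

def null_tangents(alpha, beta):
    """Future directed tangent vectors (outgoing, ingoing) with the
    normalization constants set to 1: components (l^tau, l^y, 0, 0)."""
    out = (1.0 / (alpha * beta), 1.0 / alpha ** 2, 0.0, 0.0)
    ing = (1.0 / (alpha * beta), -1.0 / alpha ** 2, 0.0, 0.0)
    return out, ing

def norm_sq(l, alpha, beta, y=1.3, theta=0.7):
    """G_{mu nu} l^mu l^nu for the diagonal hyperbolic metric,
    evaluated at arbitrary sample angles y and theta."""
    G = (beta ** 2, -alpha ** 2,
         -(alpha * math.sinh(y)) ** 2,
         -(alpha * math.sinh(y) * math.sin(theta)) ** 2)
    return sum(g * c * c for g, c in zip(G, l))
```

The $\tau$ and $y$ contributions cancel exactly, $\beta^2/(\alpha\beta)^2-\alpha^2/\alpha^4=0$, for any choice of $\alpha$ and $\beta$.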
For example, for future directed null geodesics, it follows from (\ref{eq012}) and (\ref{eq017}) that an outgoing geodesic is directed along increasing $y$, i.e., $y$ increases with $\lambda_+$, whereas an ingoing geodesic is directed along decreasing $y$, i.e., $y$ decreases with $\lambda_-$. The key roles in the study of trapped surfaces are played by the expansion parameters $\varepsilon_+$ and $\varepsilon_-$ \begin{equation} \varepsilon_\pm=\nabla_\mu l_\pm^\mu \label{eq244} \end{equation} of outgoing and ingoing null geodesics, respectively. Particularly important are the values of $\varepsilon_+$ and $\varepsilon_-$ and their Lie derivatives \begin{equation} \frac{d\varepsilon_+}{d\lambda_-} \equiv l^\mu_-\partial_\mu \varepsilon_+ ; \hspace{1cm} \frac{d\varepsilon_-}{d\lambda_+} \equiv l^\mu_+\partial_\mu \varepsilon_- \end{equation} in the neighborhood of a marginally trapped surface. As we shall shortly demonstrate, the relevant marginally trapped surface in the expanding chiral fluid is future inner marginally trapped. According to our convention described in Appendix \ref{trapped}, a two-dimensional surface $H$ is said to be {\em future inner marginally trapped} if the future directed null expansions on $H$ satisfy the conditions: $\varepsilon_+|_H=0$, $l_-^\mu\partial_\mu\varepsilon_+|_H>0$ and $\varepsilon_-|_H<0$. The future inner marginally trapped surface is the {\bf inner} boundary of a future trapped region consisting of trapped surfaces with negative ingoing and outgoing null expansions. From now on we refer to this surface as the {\em apparent horizon}. 
From (\ref{eq010}) and (\ref{eq011}) we find \begin{equation} \varepsilon_\pm =\frac{2}{\alpha^2}\left(\frac{\dot{\alpha}}{\beta}\pm \frac{1}{\tanh y} \right) \label{eq110} \end{equation} for future directed and \begin{equation} \varepsilon_\pm =\frac{2}{\alpha^2}\left(-\frac{\dot{\alpha}}{\beta}\pm \frac{1}{\tanh y} \right) \label{eq111} \end{equation} for past directed radial null geodesics, where the overdot denotes a partial derivative with respect to $\tau$. The respective Lie derivatives are given by \begin{equation} \frac{d\varepsilon_\pm}{d\lambda_\mp} \equiv l^\mu_\mp\partial_\mu \varepsilon_\pm = \frac{2}{\alpha^2\beta^2}\left[\frac{\ddot{\alpha}}{\alpha} -\frac{\dot{\alpha}\dot{\beta}}{\alpha\beta} -\left( 1-\frac{1}{\tanh^2\! y} \right)\frac{\beta^2}{\alpha^2}\right] -\frac{2\dot{\alpha}}{\alpha^2\beta}\varepsilon_\pm, \label{eq112} \end{equation} for future directed and \begin{equation} \frac{d\varepsilon_\pm}{d\lambda_\mp} \equiv l^\mu_\mp\partial_\mu \varepsilon_\pm = \frac{2}{\alpha^2\beta^2}\left[\frac{\ddot{\alpha}}{\alpha} -\frac{\dot{\alpha}\dot{\beta}}{\alpha\beta} +\left( 1-\frac{1}{\tanh^2\! y} \right)\frac{\beta^2}{\alpha^2}\right] +\frac{2\dot{\alpha}}{\alpha^2\beta}\varepsilon_\pm, \label{eq113} \end{equation} for past directed radial null geodesics. For a spherically symmetric spacetime, the condition that one of the null expansions vanishes on the apparent horizon $H$ is equivalent to the condition that the vector $n_\mu$, normal to the surface of spherical symmetry, is null on $H$. In other words, the condition \begin{equation} \nabla_\mu l^\mu|_H=0, \label{eq115} \end{equation} where $l^\mu$ denotes either $l^\mu_+$ or $l^\mu_-$, is equivalent to the condition \begin{equation} G^{\mu\nu}n_\mu n_\nu|_H=0. \label{eq116} \end{equation} This may be seen as follows. For the metric (\ref{eq243}) the normal $n_\mu$ is given by \begin{equation} n_\mu=\partial_\mu (\alpha \sinh y).
\label{eq114} \end{equation} The expansion $\varepsilon_+$ (or $\varepsilon_-$) defined in (\ref{eq244}) may be written as \begin{equation} \nabla_\mu l^\mu=\frac{1}{\sqrt{-G}}\partial_\mu (\sqrt{-G} l^\mu) =\frac{1}{\sqrt{-h}}\partial_\mu (\sqrt{-h} l^\mu) + \frac{2}{\alpha\sinh y}l^\mu n_\mu \label{eq120} \end{equation} where $h$ denotes the determinant of the metric \begin{equation} h_{\alpha\beta} = \left(\begin{array}{cc} \beta^2 & 0 \\ 0 & -\alpha^2 \\ \end{array} \right), \label{eq122} \end{equation} of the two-dimensional space normal to the surface of spherical symmetry. It may be shown that the first term on the right hand side of (\ref{eq120}) vanishes identically by the geodesic equation. Hence, the expansion $\nabla_\mu l^\mu$ vanishes on $H$ if and only if \begin{equation} l^\beta n_\beta|_H=0. \label{eq123} \end{equation} Suppose one of the expansions vanishes on $H$, i.e., Eq. (\ref{eq123}) holds for either $l^\mu_+$ or $l^\mu_-$. Since $l^\mu$ is null and both $l^\mu$ and $n^\mu$ are normal to $H$ and hence tangent to the two-dimensional space $(\tau, y)$ with the metric (\ref{eq122}), Eq.\ (\ref{eq123}) implies $h_{\alpha\beta}n^\alpha n^\beta|_H=0$. Hence, $\nabla_\mu l^\mu|_H=0$ implies $G^{\mu\nu}n_\mu n_\nu|_H=0$. To prove the reverse it is sufficient to show that $l^\beta_+ n_\beta\neq 0$ and $l^\beta_- n_\beta\neq 0$ implies $h_{\alpha\beta}n^\alpha n^\beta \neq 0$, which may be easily shown for a general two-dimensional metric in diagonal gauge. Then, the following statement holds: the vanishing of $h_{\alpha\beta}n^\alpha n^\beta$ on $H$ implies either $l^\beta_+ n_\beta|_H=0$ or $l^\beta_- n_\beta|_H=0$. This together with (\ref{eq120}), implies either $\varepsilon_+|_H=0$ or $\varepsilon_-|_H=0$. 
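The statements above can be illustrated numerically for a concrete choice of $\alpha(\tau)$ and $\beta(\tau)$. The sketch below uses the purely illustrative choices $c(\tau)=\tanh(\tau-\tau_{\rm c})$, $\alpha=\tau/\sqrt{c}$, $\beta=\sqrt{c}$ (assumptions chosen only so that $\dot\alpha<0$ near $\tau_{\rm c}$), builds the future directed expansions from (\ref{eq110}), and locates the sphere where the outgoing expansion changes sign:

```python
import math

TAU_C = 5.47  # illustrative critical proper time (an assumption)

def c_of_tau(tau):
    """Toy profile vanishing at TAU_C and approaching 1 far from it."""
    return math.tanh(tau - TAU_C)

def alpha(tau):
    return tau / math.sqrt(c_of_tau(tau))

def beta(tau):
    return math.sqrt(c_of_tau(tau))

def alpha_dot(tau, h=1e-6):
    """Central-difference derivative of alpha."""
    return (alpha(tau + h) - alpha(tau - h)) / (2.0 * h)

def expansions(tau, y):
    """Future directed null expansions (eps_plus, eps_minus)."""
    pre = 2.0 / alpha(tau) ** 2
    ratio = alpha_dot(tau) / beta(tau)
    return (pre * (ratio + 1.0 / math.tanh(y)),
            pre * (ratio - 1.0 / math.tanh(y)))

def y_marginal(tau, lo=1e-6, hi=5.0, tol=1e-12):
    """Bisect for the sphere where eps_plus changes sign."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expansions(tau, mid)[0] > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the located sphere $\varepsilon_+$ vanishes while $\varepsilon_-<0$, and the region at larger rapidity is future trapped, the configuration of a future inner marginally trapped surface.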
Either from (\ref{eq115}) or from (\ref{eq116}) one finds the condition for the apparent horizon \begin{equation} \frac{\dot{\alpha}}{\beta}\pm \frac{1}{\tanh y} =0 \label{eq117} \end{equation} \subsection{Analog horizons} \label{horizon} Next we derive the analog metric and define the analog Hubble and the apparent horizons. Equation (\ref{eq013}) may be written in the form \cite{moncrief,bilic3,visser2} \begin{equation} \frac{1}{\sqrt{-G}}\, \partial_{\mu} (\sqrt{-G}\, G^{\mu\nu}) \partial_{\nu} \mbox{\boldmath{$\pi$}} +\frac{c_{\pi}^2}{a}V(\sigma, \mbox{\boldmath{$\pi$}}) \mbox{\boldmath{$\pi$}}=0 , \label{eq028} \end{equation} with the analog metric tensor, its inverse, and its determinant given by \begin{equation} G_{\mu\nu} =\frac{a}{c_{\pi}} [g_{\mu\nu}-(1-c_{\pi}^2)u_{\mu}u_{\nu}] , \label{eq022} \end{equation} \begin{equation} G^{\mu\nu} = \frac{c_{\pi}}{a} \left[g^{\mu\nu}-(1-\frac{1}{c_{\pi}^2})u^{\mu}u^{\nu} \right], \label{eq029} \end{equation} \begin{equation} G = \frac{a^4}{c_{\pi}^2}g . \label{eq030} \end{equation} Hence, the pion field propagates in a (3+1)-dimensional effective geometry described by the metric $G_{\mu\nu}$. In the comoving coordinate frame defined by the coordinate transformation (\ref{eq147}) the velocity is $u^\mu=(1,0,0,0)$ and, as a consequence, the analog metric (\ref{eq022}) is diagonal \begin{equation} G_{\mu\nu} =\frac{a}{c_\pi} \left(\begin{array}{cccc} c_\pi^2 & & & \\ & -\tau^2 & & \\ & & -\tau^2\sinh^2\! y & \\ & & & -\tau^2\sinh^2\! y \sin^2 \vartheta \end{array} \right). \label{eq018} \end{equation} Here, the parameters $a$ and $c_\pi$ are functions of the temperature $T$ which in turn is a function of $\tau$. In the following we assume that these functions are positive. The metric (\ref{eq018}) is precisely of the form (\ref{eq243}) with \begin{equation} \beta(\tau)=\sqrt{ac_\pi} ; \hspace{0.5in} \alpha(\tau)=\tau\sqrt{\frac{a}{c_\pi}} . 
\label{eq108} \end{equation} The physical range of $\tau$ is fixed by Eq.~(\ref{eq007}) since the available temperature ranges between $T=0$ and $T=T_{\rm c}$. Hence, the proper time range is $\tau_{\rm c}\leq \tau < \infty$ where the critical value $\tau_{\rm c}$ is related to the critical temperature as $\tau_{\rm c}/\tau_0= T_0/T_{\rm c}$. The metric is singular at $\tau=\tau_{\rm c}$. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\textwidth,trim= 0 0cm 0 0cm]{fig3.eps} \caption{Null expansions $\varepsilon_+$ and $\varepsilon_-$ as functions of $r$ (top left panel) and $y$ (top right panel) for fixed $t=6\tau_0$ and fixed $\tau=5.77 \tau_0$, respectively. Similarly, the bottom panels depict the derivative of $\varepsilon_+$ as functions of $r$ and $y$ for fixed $t$ and $\tau$, respectively.} \label{fig2} \end{center} \end{figure} In Fig.\ \ref{fig2} we plot the expansions $\varepsilon_+$ and $\varepsilon_-$ of outgoing and ingoing radial null geodesics, respectively, as functions of $r$ for an arbitrarily chosen fixed time $t=6\tau_0$ and, similarly, as functions of $y$ for a fixed $\tau=5.77 \tau_0$. In the lower two panels we plot the derivative of the outgoing null expansion $\varepsilon_+$ along the ingoing null geodesic. The outgoing null expansion decreases with increasing $r$ from positive to negative values and vanishes at the point $r=r_H$, whereas the ingoing null expansion remains negative. At this point the derivative of $\varepsilon_+$ with respect to $\lambda_-$ is positive. According to the standard convention described in Appendix \ref{trapped} the region $\{r>r_H, t=6\tau_0\}$ is future trapped and the location $r_H$ marks its inner boundary. Thus, the sphere at $r_H$ is future inner marginally trapped. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth,trim= 0 0cm 0cm 0cm]{fig4.eps} \caption{Spacetime diagram of outgoing (full line) and ingoing (dashed line) radial null geodesics in $(\tau,y)$ coordinates.
The shaded area represents the evolution of the trapped region. The trapping horizon is represented by the full bold line with the endpoint at $\tau=\tau_{\rm max}=6.0182\tau_0$. The dashed and dash-dotted bold lines represent the evolution of the analog and naive Hubble horizons, respectively.} \label{fig3} \end{center} \end{figure} The spacetime diagram corresponding to the metric (\ref{eq018}) is presented in Fig.~\ref{fig3}, showing future directed radial null geodesics. The origin of the plot corresponds to the critical value $\tau_{\rm c}$ at which $c_\pi$ vanishes. Numerically, with the chosen $T_0=1$ GeV we have $\tau_{\rm c}/\tau_0=5.47$. The geodesic lines are constructed using \begin{equation} y= \pm \int_{\tau_{\rm c}}^\tau d\tau' c_\pi(\tau')/\tau' +{\rm const} \label{eq216} \end{equation} which follows from (\ref{eq017}) with (\ref{eq108}). As mentioned in Sec.\ \ref{radial}, increasing (decreasing) $y$ does not necessarily imply increasing (decreasing) of the radial coordinate $r$. With the help of the coordinate transformation (\ref{eq147}) the shift $dy$ along a geodesic may be expressed in terms of $dr$ \begin{equation} dy= \frac{c_\pi}{( c_\pi\pm v )\tau\cosh y } dr . \label{eq016} \end{equation} where we have used (\ref{eq246}) and (\ref{eq108}). We note that if $v<c_\pi$, increasing (decreasing) $y$ corresponds to increasing (decreasing) $r$ for both signs, whereas if $v>c_\pi$, increasing $y$ corresponds to increasing $r$ for an outgoing and decreasing $r$ for an ingoing geodesic. Using (\ref{eq216}) we introduce null coordinates \begin{equation} w= \frac{1}{2}\left(y+\int_{\tau_{\rm c}}^\tau d\tau' \beta(\tau')/\alpha(\tau')\right); \hspace{1cm} u= \frac{1}{2}\left(-y+\int_{\tau_{\rm c}}^\tau d\tau' \beta(\tau')/\alpha(\tau')\right), \label{eq316} \end{equation} ranging in the intervals $[0,+\infty)$ and $(-\infty,+\infty)$, respectively, with a condition $0\leq w\pm u < \infty$.
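The construction (\ref{eq216}) and the null coordinates (\ref{eq316}) can be cross-checked numerically: along an outgoing geodesic the coordinate $u$ must stay constant. The sketch below integrates the outgoing branch of (\ref{eq017}), $dy/d\tau=c_\pi/\tau$, with a toy velocity profile (an illustrative assumption, not the computed $c_\pi$):

```python
import math

TAU_C = 5.47  # illustrative critical proper time (an assumption)

def c_of_tau(tau):
    """Toy velocity profile vanishing at TAU_C (illustrative)."""
    return math.tanh(tau - TAU_C)

def F(tau, n=20000):
    """F(tau) = int_{TAU_C}^{tau} c(t)/t dt by the midpoint rule;
    this is the integral entering the null coordinates."""
    d = (tau - TAU_C) / n
    total = 0.0
    for i in range(n):
        t = TAU_C + (i + 0.5) * d
        total += c_of_tau(t) / t
    return total * d

def outgoing_y(tau, y0, n=20000):
    """Integrate dy/dtau = +c(tau)/tau (Simpson steps), starting
    from y = y0 at tau = TAU_C."""
    h = (tau - TAU_C) / n
    t, y = TAU_C, y0
    for _ in range(n):
        k1 = c_of_tau(t) / t
        k2 = c_of_tau(t + 0.5 * h) / (t + 0.5 * h)
        k4 = c_of_tau(t + h) / (t + h)
        y += h * (k1 + 4.0 * k2 + k4) / 6.0
        t += h
    return y

def null_u(tau, y):
    """u = (-y + F(tau))/2: constant along outgoing geodesics."""
    return 0.5 * (-y + F(tau))
```

Along the outgoing geodesic through $(\tau_{\rm c}, y_0)$ one finds $u=-y_0/2$ at every $\tau$, up to the quadrature error.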
In these coordinates the metric (\ref{eq008}) becomes \begin{equation} ds^2= \alpha^2\left( 4 du dw - \sinh^2\! (w-u) \, d\Omega^2\right). \label{eq208} \end{equation} The singularity at $\tau=\tau_{\rm c}$ is mapped onto the entire $u+w=0$ line. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth,trim= 0 0cm 0 0cm]{fig5.eps} \caption{Conformal diagram for the analog spacetime described by the metric (\ref{eq018}). The lines parallel to the $W$ and $U$ axes are future directed outgoing and ingoing null geodesics, respectively. The singularity at $\tau=\tau_c$ is represented by the wavy line and the apparent horizon by the full line. The shaded area in between represents the evolution of the trapped region. The spacelike and future timelike infinities are denoted by $i_0$ and $i_+$, respectively. The future null infinity ${\cal I}_+$ is represented by the line $T+R=\sqrt{2}$.} \label{fig3a} \end{center} \end{figure} Next, we compactify the spacetime using the coordinate transformation \begin{equation} W=\tanh w; \hspace{1cm} U= \tanh u. \label{eq317} \end{equation} The coordinates $W$ and $U$ range in the intervals $[0,1]$ and $[-1,1]$, respectively, with a condition $0\leq W\pm U \leq 2$. Furthermore, the rotation \begin{equation} T=\frac{1}{\sqrt{2}}(W+U); \hspace{1cm} R= \frac{1}{\sqrt{2}}(W-U) , \label{eq319} \end{equation} brings the metric to a conformally flat form \begin{equation} ds^2= \frac{2\alpha^2}{(1-U^2)(1-W^2)}\left[ dT^2 -dR^2 - R^2 \, d\Omega^2\right], \label{eq309} \end{equation} where both coordinates $R$ and $T$ range in the interval $[0,\sqrt{2}]$ with a condition $R+T \leq \sqrt{2}$. The conformal diagram representing our analog spacetime is depicted in Fig.\ \ref{fig3a}. The singularity at $\tau=\tau_{\rm c}$ is mapped onto the segment $[0,\sqrt{2}]$ on the horizontal axis.
The coordinate transformation \begin{equation} t'=\int \beta d\tau \label{eq138} \end{equation} brings the metric (\ref{eq018}) to the standard form of an open $k=-1$ FRW spacetime metric with the cosmological time $t'$. The time coordinate $t'$ is related to the original time $t$ via $\tau$ and the transformation (\ref{eq147}). The analog cosmological scale is $a(t')=\alpha(\tau(t'))/r_0$, where the constant $r_0$ is related to the spatial Gaussian curvature $K=-1/r_0^2$. The proper distance is $d_{\rm p}=\alpha y$ and the analog Hubble constant is \begin{equation} {\cal H}=\frac{\dot{\alpha}}{\alpha\beta}, \label{eq238} \end{equation} where the overdot denotes a partial derivative with respect to $\tau$. Then, we define the {\em analog Hubble horizon} as a two-dimensional spherical surface at which the magnitude of the analog recession velocity \begin{equation} v_{\rm rec}= {\cal H}d_{\rm p} = y\frac{\dot{\alpha}}{\beta} \label{eq038} \end{equation} equals the velocity of light. Hence, the condition \begin{equation} y =\frac{\beta}{\left|\dot{\alpha}\right|} \label{eq318} \end{equation} defines the location of the analog Hubble horizon. Note that the radial fluid velocity $v$ in (\ref{eq246}) and the analog recession velocity (\ref{eq038}) are quite distinct quantities -- in an expanding fluid $v$ is always positive and less than the velocity of light $c=1$, whereas $v_{\rm rec}$ may be positive or negative depending on the sign of $\dot{\alpha}$, and its magnitude may be arbitrarily large. A two-dimensional spherical surface on which the radial velocity $v$ equals the velocity of pions $c_\pi$ defines another horizon, which we refer to as the {\em naive Hubble horizon}. This horizon is obviously distinct from the analog Hubble horizon defined above. The evolution of the naive and the analog Hubble horizons with $\tau$ is depicted in Fig.~\ref{fig3}.
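The relative location of the analog Hubble horizon (\ref{eq318}) and the apparent horizon of (\ref{eq117}) can be checked numerically. The sketch below again uses a toy profile $c(\tau)=\tanh(\tau-\tau_{\rm c})$ with $\alpha=\tau/\sqrt{c}$, $\beta=\sqrt{c}$ (illustrative assumptions, with $a$ set to 1):

```python
import math

TAU_C = 5.47  # illustrative critical proper time (an assumption)

def c_of_tau(tau):
    """Toy velocity profile (illustrative stand-in for c_pi)."""
    return math.tanh(tau - TAU_C)

def alpha(tau):
    return tau / math.sqrt(c_of_tau(tau))

def beta(tau):
    return math.sqrt(c_of_tau(tau))

def alpha_dot(tau, h=1e-6):
    return (alpha(tau + h) - alpha(tau - h)) / (2.0 * h)

def hubble(tau):
    """Analog Hubble constant H = alpha_dot/(alpha*beta)."""
    return alpha_dot(tau) / (alpha(tau) * beta(tau))

def y_hubble(tau):
    """Analog Hubble horizon: y = beta/|alpha_dot|."""
    return beta(tau) / abs(alpha_dot(tau))

def y_apparent(tau):
    """Apparent horizon: tanh(y) = beta/|alpha_dot|."""
    return math.atanh(beta(tau) / abs(alpha_dot(tau)))
```

Since $\tanh y<y$ for $y>0$, the Hubble-horizon rapidity $\beta/|\dot\alpha|$ is always smaller than the apparent-horizon rapidity $\mathrm{artanh}(\beta/|\dot\alpha|)$, so in this sketch the analog Hubble horizon always lies at smaller rapidity than the apparent horizon, and ${\cal H}<0$ wherever $\dot\alpha<0$.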
Next we introduce the concept of analog mar\-gi\-nal\-ly trapped surface or {\em analog apparent horizon} following closely the general considerations of Sec.~\ref{radial} and Appendix \ref{trapped}. Formation of an analog apparent horizon in an expanding hadronic fluid is similar to the formation of a black hole in a gravitational collapse although the role of an outer trapped surface is exchanged with that of an inner trapped surface. Unlike a black hole in general relativity, the formation of which is indicated by the existence of an \textbf{outer} marginally trapped surface, the formation of an analog black (or white) hole in an expanding fluid is indicated by the existence of a future or past \textbf{inner} marginally trapped surface. Equation (\ref{eq117}) with (\ref{eq108}) defines a hypersurface which we refer to as the {\em analog trapping horizon}. Any solution to Eq.\ (\ref{eq117}) , e.g., in terms of $r$ for fixed $t$, gives the location of the analog apparent horizon $r_H$. For example, the radius $r_H=1.53\tau_0$ computed using (\ref{eq117}) for fixed $t=6t_0$ is the point of vanishing outgoing null expansion which marks the location of the apparent horizon in the top left panel of Fig.\ \ref{fig2}. From (\ref{eq110}) it follows that the region of spacetime for which \begin{equation} \tanh y \geq |\beta/\dot{\alpha}| \label{eq158} \end{equation} is trapped. It is future trapped if $\dot{\alpha} < 0$ and past trapped if $\dot{\alpha} > 0$. The condition (\ref{eq158}) can be met only if $|\beta/\dot{\alpha}| \leq 1$ which holds for $\tau$ between $\tau_{\rm c}$ and $\tau_{\rm max}$. At $\tau=\tau_{\rm max}$ we have $|\beta/\dot{\alpha}|=1$ so the endpoint of the trapping horizon in Fig. \ref{fig3} is at $\tau=\tau_{\rm max}$, $\tanh y=1$. We find that $\dot{\alpha}$ is negative for $\tau$ in the entire range $\tau_{\rm c} \leq \tau \leq \tau_{\rm max}$ and, according to (\ref{eq238}), the analog Hubble constant is always negative. 
Hence, our analog cosmological model is a contracting FRW spacetime with a negative spatial curvature. The shaded area left of the bold line in Fig.\ \ref{fig3} represents the time evolution of the future trapped region. Note that the analog Hubble horizon is always behind the apparent horizon whereas the naive Hubble horizon may be located ahead of or behind the apparent horizon depending on the magnitude of $\dot\alpha$. The naive Hubble and apparent horizons coincide if $a$ and $c_\pi$ are $\tau$-independent constants. The apparent horizon is generally not a Killing horizon and normally does not coincide with the event horizon (one exception is de Sitter spacetime). Moreover, the apparent horizon exists in all FRW universes \cite{ellis}, whereas the event horizon does not exist in eternally expanding FRW universes with the equation of state $w>-1/3$ (see, e.g., \cite{davis}). For the metric (\ref{eq018}), the event horizon is defined by \begin{equation} y= \int_\tau^\infty d\tau' \frac{c_\pi (\tau')}{\tau'}. \label{eq207} \end{equation} In our chiral fluid model the integral on the right-hand side diverges at the upper limit because $c_\pi\rightarrow 1$ as $\tau\rightarrow \infty$ and hence, the analog event horizon does not exist. In contrast, as we have demonstrated, the analog apparent horizon does exist. \subsection{Analog Hawking effect} \label{surface} One immediate effect related to the apparent horizon is the Hawking radiation. Unfortunately, in a non-stationary spacetime, the surface gravity associated with the apparent horizon is not uniquely defined \cite{nielsen2}. Several ideas have been put forward on how to generalize the definition of surface gravity for the case when the apparent horizon does not coincide with the event horizon \cite{fodor,mukohyama-booth,hayward,hayward2}. In this paper we use the prescription of \cite{hayward2} which we have adapted to analog gravity in our previous paper \cite{tolic}.
This prescription involves the so-called Kodama vector $K^\mu$ \cite{kodama} which generalizes the concept of the time translation Killing vector to non-stationary spacetimes. The analog surface gravity $\kappa$ is defined by \begin{equation} \kappa =\frac{1}{2} \frac{1}{\sqrt{-h}} \partial_\alpha ( \sqrt{-h}h^{\alpha\beta}k n_\beta), \label{eq228} \end{equation} where the quantities on the right-hand side should be evaluated on the trapping horizon. The metric $h_{\alpha\beta}$ of the two-dimensional space normal to the surface of spherical symmetry and the vector $n_\alpha$ normal to that surface are given by (\ref{eq122}) and (\ref{eq114}), respectively. The definition (\ref{eq228}) differs from the original expression for the dynamical surface gravity \cite{hayward2,hayward3} by a normalization factor $k$ which we have introduced in order to meet the requirement that $K^\mu$ should coincide with the time translation Killing vector $\xi^\mu$ for a stationary geometry. For the metric (\ref{eq018}) with (\ref{eq108}) we have found \cite{tolic} \begin{equation} k= \beta \left(\cosh^2y-\sinh^2y \frac{\tau \dot{\alpha}}{\alpha}\right). \label{eq324} \end{equation} Then, the definition (\ref{eq228}) yields \begin{equation} \kappa = \frac{v}{2 \beta\gamma(\alpha -\tau \dot{\alpha}v^2)^2} \left[\alpha(\dot{\alpha}^2+\alpha\ddot{\alpha}-\beta^2) +2\beta^2 \left(\tau\dot{\alpha}-\alpha\right)v + (\alpha \dot{\alpha}^2-2\tau \dot{\alpha}^3+\beta^2 \tau \dot{\alpha}) v^2 \right] \label{eq229} \end{equation} evaluated on the trapping horizon. The above expression may be somewhat simplified by making use of the horizon condition (\ref{eq117}). We find \begin{equation} \kappa = \frac{c_\pi}{2\tau } \frac{1 + 2c_\pi v (1-v)- (2+c_\pi) v^3}{\gamma v(1+c_\pi v )^2} +\frac{\ddot{\alpha}}{2\beta}\frac{v}{\gamma (1+c_\pi v)^2} \label{eq231} \end{equation} evaluated on the trapping horizon.
It is worthwhile analysing the limiting case of (\ref{eq229}) when the quantities $a$ and $c_\pi$ are constants. Then $\dot{\alpha}=\alpha/\tau$, $\ddot{\alpha}=0$ and the apparent horizon is fixed by the condition $v = c_\pi$. At any chosen time $t=\tau (1-c_\pi^2)^{-1/2}$ the horizon is located at $r_H=c_\pi t$ and the expression for $\kappa$ reduces to \begin{equation} \kappa =\frac{1}{2t}=\frac{\sqrt{1-c_\pi^2}}{2\tau}. \label{eq230} \end{equation} Hence, the analog surface gravity is finite for any physical value of $c_\pi$ and is maximal when $c_\pi=0$. However, with $c_\pi=0$ the horizon degenerates to a point located at the origin $r=0$. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\textwidth,trim= 0 0cm 0 0cm]{fig6.eps} \caption{Analog surface gravity as a function of the proper time $\tau$ (left panel) and the corresponding Hawking temperature as a function of the fluid temperature $T$ (right panel).} \label{fig4} \end{center} \end{figure} In the left panel of Fig.\ \ref{fig4} we plot $\kappa$ as a function of $\tau$ as given by (\ref{eq231}). The corresponding temperature defined as \begin{equation} T_H=\frac{\kappa}{ 2\pi} \label{eq044} \end{equation} represents the analog Hawking temperature of thermal pions emitted at the apparent horizon as measured by an observer near infinity. Since the background geometry is flat, this temperature equals the locally measured Hawking temperature at the horizon. Thus, equation (\ref{eq044}) with (\ref{eq229}) corresponds to the flat spacetime Unruh effect. As we move along the trapping horizon the radius of the apparent horizon increases and the Hawking temperature decreases rapidly with $\tau$. Hence, there is a correlation between $T_H$ and the local fluid temperature $T$ which is related to $\tau$ by (\ref{eq007}). In the right panel of Fig.\ \ref{fig4} we show the Hawking temperature $T_H$ as a function of the fluid temperature $T$ at the apparent horizon.
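In the constant-$a$, constant-$c_\pi$ limit, the two forms of the surface gravity in (\ref{eq230}) agree identically because $t=\tau(1-c_\pi^2)^{-1/2}$. A quick numerical sketch of this consistency (our illustration):

```python
import math

def kappa_forms(c_pi: float, tau: float):
    """Both forms of the limiting surface gravity, eq. (230):
    kappa = 1/(2t) with t = tau/sqrt(1 - c_pi^2), and
    kappa = sqrt(1 - c_pi^2)/(2 tau)."""
    t = tau / math.sqrt(1.0 - c_pi ** 2)
    return 1.0 / (2.0 * t), math.sqrt(1.0 - c_pi ** 2) / (2.0 * tau)

# The two forms coincide for any 0 < c_pi < 1:
for c_pi, tau in [(0.1, 1.0), (0.5, 2.5), (0.9, 7.0)]:
    k1, k2 = kappa_forms(c_pi, tau)
    assert abs(k1 - k2) < 1e-15
```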
In our previous paper \cite{tolic} we have shown that the surface gravity diverges as \begin{equation} \kappa = (\eta+1/2)(\tau-\tau_{\rm c})^{-1} \label{eq048} \end{equation} at the singular point, where $\eta$ is a constant related to the scaling of the quantity $\sqrt{a/c_\pi}$ \begin{equation} \sqrt{\frac{a}{c_\pi}}\propto (T_{\rm c}-T)^{-\eta} \label{eq045} \end{equation} in the neighborhood of the critical point. The constant $\eta$ may be roughly estimated as follows. The estimate of the function $\Sigma (q)$ defined in (\ref{eq203}) in the neighborhood of $q^\mu=0$ for small $\sigma$ yields \cite{son2} \begin{equation} \Sigma(0, \mbox{\boldmath $q$}^2) \sim \frac{T}{\sigma} \mbox{\boldmath $q$}^2; \hspace{1cm} \Sigma(q_0, 0) \sim \frac{T^2}{\sigma^2} q_0^2 \label{eq248} \end{equation} By comparing this with (\ref{eq202}) we deduce the behavior of the quantities $a$ and $b$ for small $\sigma$ \begin{equation} a \sim \frac{T}{\sigma} ; \hspace{1cm} a+b \sim \frac{T^2}{\sigma^2}. \label{eq249} \end{equation} Then, from (\ref{eq015}) the pion velocity goes to zero approximately as $c_\pi \propto \sigma^{1/2}$ whereas the ratio $a/c_\pi$ diverges as $a/c_\pi \propto \sigma^{-3/2}$. From Eq. (\ref{eq032}) we find $\sigma \propto (T_{\rm c}-T)^{1/2}$ near the critical point which yields $\eta=3/8$. Numerically, by fitting $\sqrt{a/c_\pi}$ in the close neighborhood of $T_{\rm c}$ to the function (\ref{eq045}) with the critical temperature $T_{\rm c}=182.822$ MeV obtained numerically from (\ref{eq032}), we find $\eta=0.253$. A more refined analysis based on scaling and universality arguments of Son and Stephanov \cite{son1} yields $\eta=0.1975$ \cite{tolic}. \section{Summary and discussion} \label{conclusion} We have demonstrated that, owing to the analog gravity effects in high energy collisions, a close analogy may be drawn between the evolution of a hadronic fluid and the spacetime expansion. 
Using the formalism of relativistic acoustic geometry we have analyzed the expanding chiral fluid in the regime of broken chiral symmetry. The expansion which takes place after the collision is modelled by a spherically symmetric Bjorken-type expansion. The propagation of massless pions in the chiral fluid provides a geometric analog of expanding spacetime equivalent to an open ($k=-1$) FRW cosmology. The geometry depends on the parameters $a$ and $b$ of the effective Lagrangian defined in Sec.\ \ref{chiral}. The elements of the analog metric tensor are functions of the spacetime coordinates via the temperature dependence of $a$ and the pion velocity $c_\pi$. The pions propagate slower than light with a velocity close to zero in the neighborhood of the critical point of the chiral phase transition. A trapped region forms for radial velocities of the fluid beyond the value defined by Eq. (\ref{eq117}). This value defines a hypersurface shown in Fig.\ \ref{fig3} which we refer to as the analog trapping horizon, at which the outgoing radial null expansion vanishes. Our trapping horizon is foliated by future inner marginally trapped surfaces and is equivalent to the trapping horizon in a contracting FRW spacetime, i.e., in a dynamical spacetime with a negative Hubble constant. The shaded area in Fig.\ \ref{fig3} represents the time evolution of the future trapped region, with the future inner marginally trapped surface (or the future apparent horizon) as its inner boundary. This marginally trapped surface may be regarded as an ``outer'' white hole: the ingoing pions (future directed ingoing null geodesics) freely cross the apparent horizon whereas the outgoing ones cannot penetrate the apparent horizon. This is opposite to an expanding FRW universe where the inner marginally trapped surface acts as a black hole: the future directed ingoing null geodesics cannot escape the apparent horizon whereas the outgoing null geodesics freely cross the apparent horizon.
We have studied the Hawking effect associated with the analog apparent horizon using the Kodama-Hayward definition of surface gravity adapted to the analog gravity geometry. The Hawking temperature correlates with the local temperature of the fluid at the apparent horizon and diverges at the critical point. In contrast to the usual general relativistic Hawking effect, where the Hawking temperature is tiny compared with the temperature of the background, the analog horizon temperature is of the order of or even larger than the local temperature of the fluid. The analog Hawking radiation of pions should not be confused with the Hawking-Unruh radiation of hadrons of Castorina et al.~\cite{castorina}. The latter is a usual Unruh effect due to the acceleration of quark-antiquark pairs produced in particle collisions, whereas the former is an analog thermal radiation due to the effective geometry of the chiral fluid. The spherically symmetric Bjorken expansion model considered here may be phenomenologically viable as a model of hadron production in $e^+e^-$ collisions but it is certainly not the best model for the description of high energy heavy ion collisions. It would be desirable to apply our formalism to a more realistic hydrodynamic model that involves a transverse expansion superimposed on a longitudinal boost invariant expansion. In this case the calculations become rather involved as the formalism for general nonspherical spacetimes is not yet fully developed. This work is in progress. In conclusion, we believe that the study of analog gravity in high energy collisions may in general improve our understanding of both particle physics phenomenology and dynamical general relativistic systems. \subsection*{Acknowledgments} This work was supported by the Ministry of Science, Education and Sport of the Republic of Croatia under Contract No. 098-0982930-2864 and partially supported by the ICTP-SEENET-MTP grant PRJ-09 ``Strings and Cosmology'' in the frame of the SEENET-MTP Network.
\section{Introduction} The term {\em quantale} was suggested by C.J.~Mulvey at the Oberwolfach Category Meeting (see \cite{mulvey}) as ``a quantization'' of the term {\em locale}. Locales form an order-theoretic counterpart of topological spaces and are therefore able to describe commutative C$^{*}$-algebras. The main aim of C.J.~Mulvey has been to find a substitute of locales which could play the same r{\^o}le for general C$^{*}$-algebras to establish a generalized Gelfand--Naimark duality for all C$^{*}$-algebras and study non-commutative topology. Quantales are also applied in linear and other substructural logics and automaton theory. An important moment in the development of the theory of quantales was the realization that quantales give a semantics for propositional linear logic in the same way as Boolean algebras give a semantics for classical propositional logic (see \cite{girard}). Quantales arise naturally as lattices of ideals, subgroups, or other suitable substructures of algebras, and then they are called \emph{spectra}. By definition, a {\em quantale} is a complete lattice $Q$ with an associative multiplication $\cdot$ that distributes over arbitrary joins. By the completeness of $Q$, there are left and right adjoint operations (called {\em residuals}) $\to$ and $\leadsto$ of $\cdot$ such that $$ \phantom{xxxxx}x\leq y\to z\ \text{if and only if}\ x\cdot y\leq z\ \text{if and only if}\ y\leq x\leadsto z. \phantom{xxxxx}\text{\rm (R)} $$ Note that, as was mentioned in \cite{rump} and \cite{ruya}, in any quantale the following conditions are satisfied: $$ \begin{array}{c} \begin{array}{r c l} y \to z &\leq& (x \to y) \to (x \to z)\\ y \leadsto z &\leq& (x \leadsto y) \leadsto (x \leadsto z)\\ \end{array}\\ \phantom{xxxxx}\phantom{xxxxx}\phantom{xxxxx}y \leq z \implies x \to y \leq x \to z\phantom{xxxxx}\phantom{xxxxx}\phantom{xxxxx}\text{\rm (QB)}\\ x \leq y \to z\ \Ekviv\ y \leq x \leadsto z.
\end{array} $$ This led Rump and Yang in \cite{ruya} to introduce {\em quantum B-algebras} which formalize the implicational part of the logic of quantales. Note that quantum B-algebras encompass pseudo-BCK algebras, partially ordered monoids with two residuals satisfying (R) and generalized pseudoeffect algebras. Moreover, in \cite{ruya} they established a one-to-one correspondence between quantum B-algebras and so-called logical quantales. In this paper, we continue the study of quantum B-algebras from \cite{ruya,rump} with emphasis on filters on integral quantum B-algebras. Namely, the filter theory of logical algebras (see e.g. \cite{gasse,zhu}) plays a significant role in studying these algebras and the completeness of the corresponding non-classical logics. It is natural to consider filters of algebras which correspond to congruences and to investigate quotient algebras by such filters. Recall that, from a logical point of view, filters correspond to sets of provable formulas. During the last decade of the study of many-valued reasoning, a number of noncommutative generalizations of the MV-algebras developed by C.C. Chang \cite{Chan} were introduced. Let us mention for example pseudo MV-algebras \cite{GeIo} (independently introduced also in \cite{Rac} as generalized MV-algebras), pseudo BL-algebras \cite{DGI1, DGI2} and pseudo-hoops \cite{GLP}. We recall that pseudo BL-algebras are also a noncommutative generalization of P. H\'ajek's BL-algebras: a variety that is an algebraic counterpart of fuzzy logic \cite{Haj}. Therefore, a pseudo BL-algebra is an algebraic presentation of a non-commutative generalization of fuzzy logic. These structures are studied also in the area of quantum structures, see \cite{MNRP}. However, as it was recently recognized, many of these notions have very close connections with notions introduced already by B.
Bosbach in his pioneering papers on various classes of semigroups: among others he introduced complementary semigroups (today known as pseudo-hoops). A deep investigation of these structures can be found in his papers \cite{Bos1, Bos2}; more information is available in his recent papers \cite{Bos3,Bos4}. Nowadays, all these structures can also be studied under one common roof, as residuated lattices, \cite{GaTs}. The theory of filters, representations and normal-valued basic pseudo-hoops was studied in \cite{BDK}. Now all these structures are intensively studied by many experts (see \cite{JiMo},\cite{Dvu4}, \cite{AgMo}, \cite{DGK}). \begin{comment} Very important results were presented in \cite{JiMo}. In the paper \cite{Dvu4}, it was proved that every linearly ordered pseudo hoop is an ordinal sum of negative cones or intervals of lattice-ordered groups, see also \cite{AgMo}. The paper \cite{DGK} introduced interesting classes of pseudo-hoops, like systems $\mathcal{MPH}$ and $\mathcal{MPH}_b$ of all pseudo-hoops (bounded pseudo-hoops) $\mathbf M$ such that every maximal filter of $\mathbf M$ is normal, and the system $\mathcal{NVPH}$ of normal-valued basic pseudo-hoops $\mathbf M$ such that every value in $\mathbf M$ is normal in its cover. The latter one is inspired by analogous notions from theory of $\ell$-groups. In \cite{DGK}, there was proved that $\mathcal{NVPH} \subset \mathcal{MPH},$ $\mathcal{MPH}_b \subset \mathcal{MPH}$ and $\mathcal{NVPH},$ $\mathcal{MPH}_b$ are varieties but $\mathcal{MPH}$ is not a variety, \cite[Rem 4.2]{DGK}. \end{comment} The paper is organized as follows. After introducing several necessary algebraic concepts, such as quantales and quantum B-algebras, in Section \ref{basicn} we introduce, following \cite{ruya,rump}, a multiplication $\cdot$ on the complete lattice $U(A)$ of upper subsets of a quantum B-algebra $A$ that makes $U(A)$ a quantale. Filters on an integral quantum B-algebra $A$ are exactly the idempotent elements of $U(A)$.
In Section \ref{filtersb} we show that, for a filter $F$ of an integral quantum B-algebra $A$, the set $U(F)$ of upper subsets of the filter $F$ is a subquantale of the quantale $U(A)$ using a map $\mu_F:U(A) \to U(A)$. Further, we establish basic properties of the map $\mu_F$. In Section \ref{michal} we study filters in the setting of pseudo-hoops. First, we establish an embedding of a cartesian product of polars of a pseudo-hoop into itself. Second, we give sufficient conditions for a pseudo-hoop to be subdirectly reducible. In Section \ref{filterkondo} we extend the result of Kondo and Turunen (see \cite{kondotur}) to the setting of noncommutative residuated $\vee$-semilattices that, if prime filters and $\vee$-prime filters of a residuated $\vee$-semilattice $A$ coincide, then $A$ must be a pseudo MTL-algebra. The terminology and symbols used here coincide in general with those used in \cite{ono}. \section{Basic notions}\label{basicn} Now, let us proceed by stating the definitions, some of them well known. A {\em quantum B-algebra} is a poset $A$ with two binary operations $\to$ and $\leadsto$ satisfying conditions $$ \begin{array}{c} \begin{array}{r c l} y \to z &\leq& (x \to y) \to (x \to z)\\ y \leadsto z &\leq& (x \leadsto y) \leadsto (x \leadsto z)\\ \end{array}\\ y \leq z \implies x \to y \leq x \to z \end{array} $$ and the equivalence $$ x \leq y \to z\ \Ekviv\ y \leq x \leadsto z $$ for all $x,y, z\in A$. A quantum B-algebra $A$ is {\em unital} if $A$ admits an element $u$, the {\em unit element}, which satisfies $u \to x = u \leadsto x = x$ for all $ x \in A$. A unit element is unique. The unit element reduces the relation $\leq$ to the operations $\to$ and $\leadsto$:\\ $$ x \leq y \ \Ekviv\ u \leq x \to y \ \Ekviv\ u \leq x \leadsto y. $$ Thus, if the unit element $u$ is the greatest element of $A$, the relation $x \leq y$ just means that $x \to y$ is true.
An {\em integral quantum B-algebra} or a {\em pseudo BCK-algebra} is a unital quantum B-algebra $A$ such that $u$ is the top element of $A$, i.e. $u=1$. A {\em residuated poset} is a partially ordered semigroup $(A; \cdot)$ with two binary operations $\to$ and $\leadsto$ which satisfy $$ x \cdot y \leq z\ \Ekviv \ x \leq y \to z \ \Ekviv\ y \leq x \leadsto z. $$ Every residuated poset is a quantum B-algebra. A residuated poset $(A; \cdot, \to, \leadsto, \leq)$ is called {\em 2-sided} if $x\cdot y\leq x$ and $x\cdot y\leq y$ for all $x, y\in A$. We say that a residuated poset $(A; \cdot, \to, \leadsto, \leq)$ is a {\em residuated $\vee$-semilattice} if $(A;\vee)$ is a semilattice with respect to the order $\leq$. A {\it quantale\/} is a complete lattice $Q$ with an associative binary multiplication satisfying $$ x\cdot\bigvee\limits_{i\in I} x_i=\bigvee\limits_{i\in I}x\cdot x_i\ \ \hbox{and}\ \ (\bigvee\limits_{i\in I}x_i)\cdot x=\bigvee\limits_{i\in I}x_i\cdot x $$ for all $x,\,x_i\in Q,\,i\in I$ ($I$ is a set). An element $x\in Q$ is called {\it idempotent\/} if $x\cdot x=x$. $1$ denotes the greatest element of $Q$, $0$ is the smallest element of $Q$. The set of all idempotent elements of a quantale $Q$ is denoted by $ \Ecal{}(Q)$. A quantale $Q$ is said to be idempotent if $Q=\Ecal{}(Q)$. In the event that $Q$ has only one element we shall speak about a {\em trivial quantale}. Since the operators $a\cdot -$ and $-\cdot b: Q\to Q$, $a, b\in Q$ preserve arbitrary suprema, they have right adjoints. We shall denote them by $a\leadsto{}-$ and $b\to -$ respectively. Let $a,b,c,a_i\in Q$. Then $$a\leadsto (b\to c) = b\to (a\leadsto c), $$ \begin{align*} a\to (b\to c) &= (ab)\to c, & b\leadsto (a\leadsto c)&= (ab)\leadsto c,\\ \left(\bigvee a_i\right)\to c &= \bigwedge(a_i\to c), & \left(\bigvee a_i\right)\leadsto c &= \bigwedge(a_i\leadsto c). \end{align*} Evidently, any quantale is a residuated poset and hence a quantum B-algebra.
Since every quantale $Q$ is a complete lattice, the {\em inverse residuals} $$ \begin{array}{r c l} a \rightarrowtriangle b &:= &\bigwedge\{x \in Q \mid x \cdot a \geq b\}\\ a \rightarrowtail b &:= &\bigwedge\{x \in Q \mid a \cdot x \geq b\} \end{array} $$ are well-defined, too. A non-zero element $c \in Q$ is {\em balanced} if it satisfies $$ c\cdot\bigwedge\limits_{i\in I} x_i=\bigwedge\limits_{i\in I}c\cdot x_i\ \ \hbox{and}\ \ (\bigwedge\limits_{i\in I}x_i)\cdot c=\bigwedge\limits_{i\in I}x_i\cdot c $$ for all $x_i\in Q,\,i\in I$ ($I$ is a set). An element $c$ of a complete lattice $L$ is said to be {\em supercompact} if for any non-empty subset $X \subseteq L$, the inequality $c \leq \bigvee X$ implies that $c \leq x$ for some $x \in X$. For every quantum B-algebra $A$, the upper sets $X\subseteq A$ (i.e., the subsets $X$ such that $a \geq b \in X$ implies $a \in X$) can be made into a quantale $U(A)$ by defining $$X\cdot Y := \{a \in A \mid (\exists y\in Y) (y\to a)\in X\}.$$ It can be shown \cite{ruya} that this gives an associative multiplication which distributes over set-theoretic joins. Therefore, $$\begin{array}{c l}&X\leadsto Z :=\{y\in A \mid (\forall x\in X)(\forall z\in A)(x\leadsto z\geq y\ \implies\ z\in Z) \}\\ \quad \text{and} \quad&\\ &Y\to Z := \{x\in A \mid (\forall y\in Y)(\forall z\in A)(y\to z\geq x\ \implies z\in Z)\}. \end{array}$$ If $A$ is a residuated poset then $$X\cdot Y = \{a \in A \mid (\exists x\in X)(\exists y\in Y)(x\cdot y\leq a)\},$$ $$\begin{array}{c l}&X\leadsto Z :=\{y\in A \mid (\forall x\in X)(\forall z\in A)(x\cdot y\leq z\ \implies\ z\in Z) \}\\ \quad \text{and} \quad&\\ &Y\to Z := \{x\in A \mid (\forall y\in Y)(\forall z\in A)(x\cdot y\leq z \ \implies\ z\in Z)\}. \end{array}$$ In this case, for any $ n \in {\mathbb N}, n\geq 1$ and any $x\in A$ we put $x^{1} = x$ and $x^{n+1} = x^{n}\cdot x = x\cdot x^{n}$. A {\em filter} $F$ of a quantum B-algebra $A$ is a non-empty set $F\in U(A)$ such that $F\cdot F\subseteq F$.
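The upper-set quantale construction above can be illustrated on a small finite example. The following sketch (our illustration, not from the text) takes the three-element G\"odel chain $0<1<2$ with $x\cdot y=\min(x,y)$, builds its upper sets, and checks associativity, distributivity over joins, and that the non-empty idempotent upper sets are filters:

```python
from itertools import product

# Toy residuated poset: the three-element chain 0 < 1 < 2 with x*y = min(x, y).
A = (0, 1, 2)
mul = min

def upper_sets(A):
    """All upper sets of a chain: the empty set and the principal up-sets."""
    return [frozenset()] + [frozenset(a for a in A if a >= b) for b in A]

def dot(X, Y):
    """X . Y = {a : exists x in X, y in Y with x*y <= a} (residuated-poset form)."""
    return frozenset(a for a in A if any(mul(x, y) <= a for x, y in product(X, Y)))

U = upper_sets(A)

# Associativity of the upper-set product:
assert all(dot(dot(X, Y), Z) == dot(X, dot(Y, Z)) for X in U for Y in U for Z in U)

# Distributivity over (binary) unions, i.e. over joins in U(A):
assert all(dot(X, Y | Z) == dot(X, Y) | dot(X, Z) for X in U for Y in U for Z in U)

# Every non-empty idempotent upper set is a filter; on this chain all
# non-empty upper sets turn out to be idempotent, hence filters:
assert all(dot(F, F) == F for F in U if F)
```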
Note that this is equivalent to the condition that $z\in A$, $y\in F$ and $y\to z\in F$ yield $z\in F$, together with $F$ being a non-empty upper subset of $A$. Recall also that any non-empty set $F\in U(A)$ that is idempotent is a filter. We denote by ${\mathcal F}(A)$ the set of all filters of $A$. Recall that any non-empty intersection of filters is again a filter and any directed union of filters is a filter. For every non-empty subset $X \subseteq A$, the smallest filter of $A$ containing $X$ (i.e., the intersection of all filters $F \in {\mathcal F}(A)$ such that $X \subseteq F$) is called the {\em filter generated by} $X$ and will be denoted by $[X)$. If $A$ is a residuated poset then $$[X) = \{y \in A \mid y\geq x_1 \cdot x_2\cdot \dots \cdot x_n\ \text{for some}\ n\in {\mathbb N}, n\geq 1\ \text{and}\ x_1, x_2, \dots, x_n \in X\}.$$ Moreover, for a 2-sided residuated poset $A$ such that $F$ is a filter and $a\in A$, $a\notin F$ we have that $$ \begin{array}{r c l l} [F\cup \{a\})&=& \{y \in A \mid& y\geq x_1 \cdot a\cdot x_2 \cdot a \cdot \dots \cdot a \cdot x_n\ \text{for some}\ n\in {\mathbb N}, n\geq 1\\ & & &\text{and}\ x_1, x_2, \dots, x_n \in F\}.\\%[0.2cm] \end{array} $$ Furthermore, the set of supercompact elements of $U(A)$ coincides with the image of the embedding $A \hookrightarrow U(A)$ given by $x \mapsto \ua{}x$, and every balanced element of $U(A)$ is supercompact. Similarly, the embedding $U(A) \hookrightarrow U(U(A))$ is given by $X \mapsto \ua{}X$. A quantale $Q$ is called {\it unital\/} if there is an element $e\in Q$ such that $$ e\cdot a = a = a\cdot e $$ \noindent for all $a\in Q$. A {\it subquantale\/} $S$ of $Q$ is a subset of $Q$ closed under all suprema and $\cdot$\,. $S$ is said to be a trivial subquantale if $S=\{ 0 \}$ or $S=Q$. A quantic conucleus (nucleus) on $Q$ is a coclosure (closure) operator $g$ such that $g(a)\cdot g(b) \leq g(a\cdot b)$ for all $a, b \in Q$. A quantic conucleus $g$ is said to be trivial if $g(a)=a$ or $g(a)=0$ for all $a \in Q$.
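The formula for the generated filter $[X)$ in a residuated poset can be tried out on a toy example. The sketch below is our illustration; the four-element chain with the truncated addition $x\cdot y=\max(0,x+y-3)$ is an assumed 2-sided residuated poset, not an example from the text. It closes a subset under products and then takes the upper closure:

```python
# Lukasiewicz-style toy: chain 0 < 1 < 2 < 3 with x*y = max(0, x + y - 3)
# (an illustrative 2-sided residuated poset, not an example from the text).
A = (0, 1, 2, 3)

def mul(x, y):
    return max(0, x + y - 3)

def generated_filter(X):
    """[X) = {y : y >= x1 * ... * xn for some x1, ..., xn in X}."""
    prods = set(X)
    while True:
        new = {mul(p, x) for p in prods for x in X} | prods
        if new == prods:
            break
        prods = new
    return frozenset(a for a in A if any(a >= p for p in prods))

assert generated_filter({3}) == frozenset({3})           # {3} is already a filter
assert generated_filter({2}) == frozenset({0, 1, 2, 3})  # 2*2 = 1, 2*2*2 = 0
```

Unlike the idempotent chain with $\min$, here powers of an element can descend, so a single generator below the top may already generate the whole algebra.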
If $g$ is a quantic conucleus on $Q$, then $Q_g = \{ a \in Q \mid g(a)=a \}$ is a subquantale of $Q$. Moreover, if $S$ is any subquantale of $Q$, then $S=Q_g$ for some quantic conucleus $g$. \begin{comment} By a {\it morphism of quantales} will be meant a $\bigvee$- and $\cdot$-preserving mapping $$ f:Q\to Q'. $$ A {morphism $f$ of quantales} is called {\it unital} ({\it strong}, {\it dense}) provided that $$ e'= f(e), \hbox{($1'= f(1)$, $0'= f(a)$ implies $0= a$ for all $a\in Q$)} $$ \noindent where $e$ ($1$, $0$) and $e'$, ($1'$, $0'$) are the respective units (top elements, bottom elements) of $Q$ and $Q'$. \end{comment} \section{Filters in Quantum B-algebras}\label{filtersb} In this section we show that, for a filter $F$ of an integral quantum B-algebra $A$, the set $U(F)$ of upper subsets of the filter $F$ is a subquantale of the quantale $U(A)$ using a map $\mu_F:U(A) \to U(A)$. Further, we establish basic properties of the map $\mu_F$. Let us put, for any $F\in U(A)$ and $X\in U(A)$, $\mu_{F}(X)=F\cap X$. Then, for any $F\in U(A)$, $\mu_F:U(A)\to U(A)$ is an order-preserving idempotent map. Evidently, if $X, Y\in U(A)$, $X\subseteq Y$ then $\mu_F(X)=\mu_{F}(Y)\cap X=\mu_{\mu_{F}(Y)}(X)$ and $\mu_{F}(X)=F\cap X=F\cap (F\cap X)=\mu_{F}(\mu_{F}(X))$. \begin{lemma}\label{muf} Let $A$ be a quantum B-algebra, $X, Y, F\in U(A)$, $F$ a filter of $A$. Then $\mu_F(X) \cdot \mu_F(Y)\subseteq \mu_F(X\cdot Y)$. Moreover, $\mu_F$ is a quantic conucleus on $U(A)$ and the set $U(F)=\{ U\in U(A)\mid U\subseteq F\}=\{\mu_F(X)\mid X\in U(A)\}$ equipped with the multiplication $\cdot_F=\cdot /U(F)$ is a subquantale of $U(A)$. \end{lemma} \begin{proof} Assume that $a\in A$ and $a\in \mu_F(X) \cdot \mu_F(Y)$. Then there is $y\in F\cap Y$ such that $y\to a\in F\cap X$. It follows that $a\in F\cdot F\subseteq F$ and $a\in X\cdot Y$. Therefore $a\in F\cap (X\cdot Y)$. The remaining part is evident. \end{proof} In what follows let $A$ be an integral quantum B-algebra.
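Lemma \ref{muf} can likewise be checked mechanically on a small example. The following sketch (our illustration, with an assumed three-element chain) verifies the conucleus inequality $\mu_F(X)\cdot\mu_F(Y)\subseteq\mu_F(X\cdot Y)$ and the idempotence of $\mu_F$:

```python
from itertools import product

# Toy integral residuated chain 0 < 1 < 2 with x*y = min(x, y); the top
# element 2 plays the role of the unit 1 of the text.
A = (0, 1, 2)

def dot(X, Y):
    return frozenset(a for a in A if any(min(x, y) <= a for x, y in product(X, Y)))

U = [frozenset()] + [frozenset(a for a in A if a >= b) for b in A]
F = frozenset({1, 2})           # an upper set with F . F = F, i.e. a filter

def mu(F, X):                   # mu_F(X) = F intersect X
    return F & X

# mu_F(X) . mu_F(Y) is contained in mu_F(X . Y) (the conucleus inequality):
assert all(dot(mu(F, X), mu(F, Y)) <= mu(F, dot(X, Y)) for X in U for Y in U)

# mu_F is idempotent:
assert all(mu(F, mu(F, X)) == mu(F, X) for X in U)
```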
Note also that, for any $F\in U(A)$ and $X\in U(A)$ such that $1\in X\cap F$, $F\cdot X\supseteq F\cup X$, $1\in \mu_{F}(X)\to X$ and $1\in \mu_{F}(X)\leadsto X$. In particular, $F$ is a filter if and only if $F\cdot F=F$ and $1\in F$. Moreover, for any $X\in U(A)$, $X=\{1\}\cdot X=X\cdot \{1\}$. \begin{proposition}\label{mufid} Let $A$ be an integral quantum B-algebra, $X, Y, F\in U(A)$, $F$ a filter of $A$. Then the following holds: \begin{enumerate} \item $\mu_F(X)=\mu_{\mu_F(X)\cdot \mu_F(X)}(X)$; \item If $1\in \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$ then $\mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X)=\mu_F(X)$; \item $\mu_F(X)$ is a filter of $A$ iff $1\in \mu_F(X)\cap \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$. \end{enumerate} \end{proposition} \begin{proof} (1) Let $z\in \mu_{\mu_F(X)\cdot \mu_F(X)}(X)=(\mu_F(X)\cdot \mu_F(X))\cap X$. Then $z\in X$ and $z\in (F\cap X)\cdot (F\cap X)\subseteq F$. Hence $z\in \mu_F(X)$. Conversely, let $z\in \mu_F(X)$. Then $1\in \mu_F(X)$ and $z\in [z)\cdot [1)\subseteq (F\cap X)\cdot (F\cap X)$ and $z\in X$. It follows that $z\in \mu_{\mu_F(X)\cdot \mu_F(X)}(X)$.\\ (2) Evidently, $ \mu_{F}(X)\subseteq \mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X)$. To show the converse direction let us compute: $$ \begin{array}{l} \mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X) \subseteq \\ \mu_{F}(X)\cdot ((\mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X)) \subseteq \mu_{F}(X)\cdot (\mu_{F}(X)\to X)\subseteq X. \end{array} $$ Since $ \mu_{F}(X)\subseteq F$ and $\mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\subseteq F$ we get that $\mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X) \subseteq F$.
It follows that $$ \mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X) \subseteq \mu_{F}(X).$$ \noindent{}(3) Evidently, if $1\in \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$ then $\{1\}\subseteq \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$. This yields that $\mu_F(X)\cdot \mu_F(X) =\mu_F(X)\cdot \{1\}\cdot \mu_F(X) \subseteq \mu_F(X) \cdot (\mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_F(X) \subseteq \mu_F(X)$. Since $1\in \mu_F(X)$ we get that $\mu_F(X)$ is a filter of $A$. Conversely, let $\mu_F(X)$ be a filter of $A$. Then $\mu_F(X)\cdot \mu_F(X) \subseteq \mu_F(X) \subseteq X$. It follows that $\mu_F(X)\cdot \mu_F(X) \subseteq \mu_F(X) \subseteq \mu_{F}(X)\to X$. By the same reasoning $ 1\in \mu_F(X) \subseteq \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$. \end{proof} \begin{proposition}\label{mufc} Let $A$ be an integral quantum B-algebra, $X, Y, F\in U(A)$, $F$ a filter of $A$ and $1\in X\cap F$. Then the following holds \begin{enumerate} \item $\mu_F(X)\cdot \mu_F( \mu_{F}(X)\leadsto X) = \mu_F(X)=\mu_F(\mu_{F}(X)\to X)\cdot \mu_{F}(X)$; \item $\mu_F( \mu_{F}(X)\to X)\to (\mu_{F}(X)\to X)=\mu_F(X)\to X$; \item $\mu_F( \mu_{F}(X)\leadsto X)\leadsto (\mu_{F}(X)\leadsto X)=\mu_F(X)\leadsto X$; \item $\mu_F( \mu_{F}(X)\to X) \cdot (\mu_{F}(X)\to X)=\mu_{F}(X)\to X$; \item $( \mu_{F}(X)\leadsto X) \cdot \mu_F(\mu_{F}(X)\leadsto X)=\mu_{F}(X)\leadsto X$; \item $\mu_F(\mu_{F}(X)\to X)=\mu_F( \mu_{F}(X)\to X) \cdot \mu_F(\mu_{F}(X)\to X)$ and $\mu_F(\mu_{F}(X)\to X)$ is a filter of $A$ whenever $1\in \mu_F(\mu_{F}(X)\to X)$. \item $\mu_F(\mu_{F}(X)\leadsto X)=\mu_F( \mu_{F}(X)\leadsto X) \cdot \mu_F(\mu_{F}(X)\leadsto X)$ and $\mu_F(\mu_{F}(X)\leadsto X)$ is a filter of $A$ whenever $1\in \mu_F(\mu_{F}(X)\leadsto X)$. \end{enumerate} \end{proposition} \begin{proof} (1) Since $1\in \mu_F( \mu_{F}(X)\leadsto X) \cap \mu_F(X)$ we obtain that $\mu_F(X)\cdot\mu_F( \mu_{F}(X)\leadsto X) \supseteq \mu_F(X)$. 
Conversely, we have $\mu_F(X)\cdot\mu_F( \mu_{F}(X)\leadsto X)\subseteq ( \mu_{F}(X)\leadsto X) \cdot \mu_F(X)\subseteq X$ and $\mu_F(X)\cdot\mu_F( \mu_{F}(X)\leadsto X)\subseteq F\cdot F\subseteq F$. It follows that $\mu_F(X)\cdot\mu_F( \mu_{F}(X)\leadsto X)\subseteq \mu_F(X)$. The remaining part follows by analogous considerations.\\ (2) $\mu_F( \mu_{F}(X)\to X)\to (\mu_{F}(X)\to X)= (\mu_F( \mu_{F}(X)\to X)\cdot \mu_{F}(X))\to X=\mu_F(X)\to X$.\\ (3) As in (2).\\ (4) Since $1\in \mu_F( \mu_{F}(X)\to X) \cap (\mu_{F}(X)\to X)$ we get $(\mu_{F}(X)\to X)\cdot\mu_F( \mu_{F}(X)\to X) \supseteq \mu_{F}(X)\to X$. To prove the converse direction let us compute: $$ \begin{array}{r @{}l} (\mu_{F}(X)\to X)\cdot &\mu_F( \mu_{F}(X)\to X) =\\ &(\mu_F( \mu_{F}(X)\to X)\to (\mu_{F}(X)\to X))\cdot \mu_F( \mu_{F}(X)\to X) \subseteq\\ &\mu_{F}(X)\to X.\\ \end{array} $$ Whence $ (\mu_{F}(X)\to X)\cdot \mu_F( \mu_{F}(X)\to X) =\mu_{F}(X)\to X$.\\ (5) As in (4).\\ (6) Evidently, $\mu_F(\mu_{F}(X)\to X)\subseteq\mu_F( \mu_{F}(X)\to X) \cdot \mu_F(\mu_{F}(X)\to X)$. Conversely, we have $$ \begin{array}{rcl} \mu_F( \mu_{F}(X)\to X) \cdot \mu_F(\mu_{F}(X)\to X) &\subseteq& \mu_F( \mu_{F}(X)\to X) \cdot (\mu_{F}(X)\to X)=\\ &&\mu_{F}(X)\to X \end{array} $$ and $\mu_F( \mu_{F}(X)\to X) \cdot \mu_F(\mu_{F}(X)\to X)\subseteq F\cdot F\subseteq F$. Consequently, we obtain that $\mu_F(\mu_{F}(X)\to X)=\mu_F( \mu_{F}(X)\to X) \cdot \mu_F(\mu_{F}(X)\to X)$. If, moreover, $1\in \mu_F(\mu_{F}(X)\to X)$, then $\mu_F(\mu_{F}(X)\to X)$ is a filter of $A$.\\ (7) As in (6). \end{proof} \begin{comment} \begin{lemma}\label{mufidx} Let $A$ be an integral quantum B-algebra, $X, Y, F\in U(A)$, $F$ a filter of $A$.
Then the following holds \begin{enumerate} \item $\mu_F(X) \cdot \mu_F(Y)\subseteq \mu_F(X\cdot Y)$; \item $\mu_F(X)=\mu_{\mu_F(X)\cdot \mu_F(X)}(X)$; \item If $1\in \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$ then $\mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X)=\mu_F(X)$; \item $\mu_F(X)$ is a filter of $A$ iff $1\in \mu_{F}(X)\leadsto (\mu_{F}(X)\to X)$. \end{enumerate} \end{lemma} \begin{proof} (1) It is evident.\\ (2) If $X=\emptyset$ then the statement immediately follows. Therefore we may assume that $1\in X$. Let $z\in \mu_{\mu_F(X)\cdot \mu_F(X)}(X)=(\mu_F(X)\cdot \mu_F(X))\cap X$. Then $z\in X$ and $z\in (F\cap X)\cdot (F\cap X)\subseteq F$. Hence $z\in \mu_F(X)$. Conversely, let $z\in \mu_F(X)$. Then $z=z\cdot 1\in (F\cap X)\cdot (F\cap X)$ and $z\in X$. It follows that $z\in \mu_{\mu_F(X)\cdot \mu_F(X)}(X)$.\\ (3) Evidently, $ \mu_{F}(X)\subseteq \mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X)$. To show the converse direction let us compute: $$ \begin{array}{l} \mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X) \subseteq \\ \mu_{F}(X)\cdot (( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X)) \subseteq \mu_{F}(X)\cdot (\mu_{F}(X)\to X)\subseteq X. \end{array} $$ Since $ \mu_{F}(X)\subseteq F$ and $\mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\subseteq F$ we get that $\mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X) \subseteq F$. It follows that $$ \mu_{F}(X)\cdot \mu_F( \mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_{F}(X) \supseteq \mu_{F}(X). $$ \noindent{}(4) Evidently, $\mu_F(X)\subseteq \mu_F(X) \cdot (\mu_{F}(X)\leadsto (\mu_{F}(X)\to X))$. This yields that $\mu_F(X)\subseteq \mu_F(X)\cdot \mu_F(X) \subseteq \mu_F(X) \cdot (\mu_{F}(X)\leadsto (\mu_{F}(X)\to X))\cdot \mu_F(X) =\mu_F(X)$, i.e., $\mu_F(X)$ is a filter of $A$. 
\end{proof} \end{comment} \section{Filters on pseudo-hoops} \label{michal} In the present section we study filters in the setting of pseudo-hoops. First, we establish an embedding of a cartesian product of polars of a pseudo-hoop ${\mathbf A}$ into ${\mathbf A}$. Second, we give sufficient conditions for a pseudo-hoop to be subdirectly reducible. We recall that according to \cite{GLP}, a \textit{pseudo-hoop} is an algebra $\mathbf M=(M; \cdot, \to,\leadsto,1)$ of type $\langle 2,2,2,0 \rangle$ such that, for all $x,y,z \in M,$ \begin{enumerate} \item[{\rm (i)}] $x\cdot 1 = x = 1 \cdot x;$ \item[{\rm (ii)}] $x\to x = 1 = x\leadsto x;$ \item[{\rm (iii)}] $(x\cdot y) \to z = x \to (y\to z);$ \item[{\rm (iv)}] $(x \cdot y) \leadsto z = y \leadsto (x\leadsto z);$ \item[{\rm (v)}] $(x\to y) \cdot x= (y\to x)\cdot y = x\cdot (x\leadsto y) = y \cdot (y \leadsto x)$ (divisibility). \end{enumerate} It can be easily checked that any pseudo-hoop is a residuated poset, see e.g. \cite[Proposition 2.1]{ciungu}. If $\cdot$ is commutative (equivalently $ \to = \leadsto$), $\mathbf M$ is said to be a \textit{hoop}. If we set $x \le y$ iff $x \to y=1$ (this is equivalent to $x \leadsto y =1$), then $\le$ is a partial order such that $x\wedge y = (x\to y)\cdot x=x\cdot (x\leadsto y)$ and $\mathbf M$ is a $\wedge$-semilattice. We say that a pseudo-hoop $\mathbf M$ \begin{enumerate} \item[(i)] is {\it bounded} if there is a least element $0,$ otherwise, $\mathbf M$ is {\it unbounded}, \item[(ii)] satisfies \textit{prelinearity} if, given $x,y \in M,$ $(x\to y)\vee (y\to x)$ and $(x\leadsto y)\vee (y\leadsto x)$ are defined in $\mathbf M$ and they are equal to $1,$ \item[(iii)] is \textit{cancellative} if $x\cdot y=x\cdot z$ and $s\cdot x= t \cdot x$ imply $y= z$ and $s= t,$ \item[(iv)] is a \textit{pseudo BL-algebra} if $\mathbf M$ is a bounded lattice satisfying prelinearity.
\end{enumerate} For a pseudo BL-algebra, we define $x^-=x\to 0$ and $x^\sim = x\leadsto 0.$ A pseudo BL-algebra is said to be a pseudo MV-algebra if $x^{-\sim}=x=x^{\sim -}$ for every $x \in M.$ From (v) of the definition of pseudo-hoops we have that a pseudo-hoop is cancellative iff $x\cdot y\le x\cdot z$ and $s\cdot x\le t \cdot x$ imply $y\le z$ and $s\le t.$ Let us have a pseudo-hoop $\mathbf A=(A;\cdot,\to,\leadsto, 1).$ Then, for any set $M\subseteq A,$ we define the set $M^\perp=\{x\in A\mid x\vee y = 1\mbox{ for any } y\in M\}.$ One can easily check that the following conditions hold: \begin{itemize} \item[1)] $M\subseteq N$ yields $N^\perp\subseteq M^\perp,$ \item[2)] $M\subseteq M^{\perp\perp},$ \item[3)] $M^\perp=M^{\perp\perp\perp}.$ \end{itemize} Indeed, 3) follows from 1) and 2): applying 1) to 2) yields $M^{\perp\perp\perp}\subseteq M^\perp,$ while 2) applied to $M^\perp$ gives the converse inclusion. Consequently, $^{\perp\perp}$ is a closure operator on the subsets of $A.$ If $x_1\vee y=x_2\vee y=1$ then we can compute $x_1x_2\vee y = x_1x_2\vee x_1y\vee y = x_1(x_2\vee y)\vee y=x_1\vee y=1.$ Thus, since $M^\perp$ is clearly an upper set containing $1$, the set $M^\perp$ is a filter for any $M\subseteq A.$ \begin{lemma}\label{MB-Lemma1} Let $\mathbf A=(A;\cdot,\to,\leadsto, 1)$ be a pseudo-hoop and let $x,y\in A$ be such that $x\vee y=1.$ Then $x\cdot y = x\wedge y=y\cdot x$ and $x\to y= x\leadsto y= y.$ \end{lemma} \begin{proof} Assume that $x,y\in A$ are such that $x\vee y=1$. Then, for any $z\in A$, we have $z\leq y$ iff $z\leq 1\to y$ iff $z\leq (x\vee y)\to y$ iff $z\cdot (x\vee y)\leq y$ iff $z\cdot x\vee z\cdot y\leq y$ iff $z\cdot x\leq y$ iff $z\leq x\to y$. It follows that $y=x\to y$. By symmetry, $x=y\to x$; the analogous computation with $\leadsto$ gives $x\leadsto y=y$ and $y\leadsto x=x$.
Due to divisibility we can compute $y\cdot x = (x\to y)\cdot x = x\wedge y= (y\to x)\cdot y= x\cdot y .$ \end{proof} \begin{theorem}\label{mb_theor1} Let $\mathbf A=(A;\cdot,\to,\leadsto, 1)$ be a pseudo-hoop and let us have any set $M\subseteq A.$ Then the mapping $f:M^\perp\times M^{\perp\perp}\longrightarrow A$ defined by $f(x,y)=x\wedge y$ is an embedding from $\mathbf{M^{\perp}\times M^{\perp\perp}}$ to $\mathbf A.$ \end{theorem} \begin{proof} If $f(x_1,y_1)=f(x_2,y_2)$ for any $(x_1,y_1), (x_2,y_2)\in M^\perp\times M^{\perp\perp}$ then $x_1\wedge y_1=x_1\cdot y_1=y_1\cdot x_1=x_2\wedge y_2=x_2\cdot y_2=y_2\cdot x_2$ and also $x_i\vee y_j=1$ for any $i,j\in\{1,2\}.$ Firstly we prove that the mapping $f$ is an injection. We can compute $y_1=y_1\cdot 1= y_1\cdot (x_1\vee y_2)=y_1\cdot x_1\vee y_1\cdot y_2= x_2\cdot y_2\vee y_1\cdot y_2=(x_2\vee y_1)\cdot y_2=1\cdot y_2=y_2.$ Thus $y_2= y_1.$ By symmetry, $x_1=x_2.$ Due to Lemma \ref{MB-Lemma1} we can compute, for any $(x_1,y_1), (x_2,y_2)\in M^\perp\times M^{\perp\perp}$, $f(x_1,y_1)\cdot f(x_2,y_2)=(x_1\wedge y_1)\cdot(x_2\wedge y_2)= (x_1\cdot y_1)\cdot (x_2\cdot y_2)=x_1\cdot (y_1\cdot x_2)\cdot y_2= x_1\cdot x_2\cdot y_1\cdot y_2=(x_1\cdot x_2)\wedge (y_1\cdot y_2)=f(x_1\cdot x_2,y_1\cdot y_2).$ Moreover, for any $(x_1,y_1), (x_2,y_2)\in M^\perp\times M^{\perp\perp}$ and any $z\in A$, $z\leq (x_1\cdot y_1)\to x_2$ iff $z\cdot x_1\cdot y_1\leq x_2$ iff $z\cdot x_1\leq y_1\to x_2$ iff (by Lemma \ref{MB-Lemma1}) $z\cdot x_1\leq x_2$ iff $z\leq x_1\to x_2$. It follows that $(x_1\cdot y_1)\to x_2=x_1\to x_2$. Similarly, $(y_1\cdot x_1)\to y_2=y_1\to y_2$.
This yields, for any $(x_1,y_1), (x_2,y_2)\in M^\perp\times M^{\perp\perp}$ and any $z\in A$, $z\leq f(x_1,y_1)\to f(x_2,y_2)$ iff $z\cdot x_1\cdot y_1\leq x_2\wedge y_2$ iff $z\cdot x_1\cdot y_1\leq x_2$ and $z\cdot x_1\cdot y_1\leq y_2$ iff $z\leq x_1\cdot y_1\to x_2$ and $z\leq x_1\cdot y_1\to y_2$ iff $z\leq x_1\to x_2$ and $z\leq y_1\to y_2$ iff $z\leq (x_1\to x_2)\wedge (y_1\to y_2)$. Since $x_1\to x_2\in M^\perp$ and $y_1\to y_2\in M^{\perp\perp}$ we get that $f(x_1,y_1)\to f(x_2,y_2)= f(x_1\to x_2,y_1\to y_2)$. Analogously, we can prove $f(x_1,y_1)\leadsto f(x_2,y_2)=f(x_1\leadsto x_2,y_1\leadsto y_2).$ \end{proof} The previous Theorem shows that $M^\perp\cdot M^{\perp\perp}$ is both a filter and a subhoop of $\mathbf A$. Moreover, the subhoop $M^\perp\cdot M^{\perp\perp}$ is directly reducible. The idea of our approach is to decide whether some $a\in A$ belongs to the subhoop $M^\perp\cdot M^{\perp\perp}$ or not. If $a\in M^\perp\cdot M^{\perp\perp}$ then $a=x\wedge y,$ where $x$ is minimal in $M^\perp$ with $a\leq x$ and analogously $y$ is minimal in $M^{\perp\perp}$ with $a\leq y.$ These facts motivate the following definition. \begin{definition} Let $F$ be any filter in a pseudo-hoop $\mathbf A=(A;\cdot,\to,\leadsto,1).$ Then for any $X\subseteq A$ we define $\nu_F(X):=\{a\in F\mid x\leq a\mbox{ for any }x\in X\}.$ If there is the least element of the set $\nu_F(X)$ then we denote it by $\hat\nu_F(X).$ \end{definition} Note that, for any $X\subseteq A$, $\nu_F(X)$ is the set of upper bounds of $X$ that are in $F$. It follows that $\nu_F(X)=\mu_F(\{z\in A \mid z \ \text{is an upper bound of}\ X\})$. In particular, $\nu_F(X)$ is an upper set such that $1\in \nu_F(X)$.
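For a concrete illustration of $\nu_F$ and $\hat\nu_F$ (the following commutative example is supplied here purely for illustration and is not taken from the cited sources), consider the G\"odel hoop on the real unit interval $[0,1]$, where $x\cdot y=\min(x,y)$ and $x\to y=x\leadsto y$ equals $1$ if $x\leq y$ and equals $y$ otherwise. For any $c\in[0,1]$ the interval $F=[c,1]$ is a filter, and for any $x\in[0,1]$:

```latex
\[
  \nu_F(\{x\}) \;=\; \{a\in F \mid x\leq a\} \;=\; [\,x\vee c,\,1\,],
  \qquad
  \hat\nu_F(\{x\}) \;=\; x\vee c .
\]
```

In particular, $\hat\nu_F$ always exists here, and $\hat\nu_F(\{y\})=y\vee\hat\nu_F(\{x\})$ whenever $x\leq y$, in accordance with the computation in the proof of the next theorem.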
In what follows we denote for any subsets $X,Y\subseteq A$, where $\mathbf A=(A;\cdot,\to,\leadsto,1)$ is a pseudo-hoop, the following sets $$ \begin{array}{r c l} X{\bm{\cdot}} Y &=& \{x\cdot y\mid x\in X, y\in Y\},\\[0.1cm] X{\bm{\to}} Y &=& \{x\to y\mid x\in X, y\in Y\},\\[0.1cm] X{\bm{\leadsto}} Y &=& \{x\leadsto y\mid x\in X, y\in Y\}. \end{array} $$ \begin{theorem} Let $F$ be a filter in the pseudo-hoop $\mathbf A=(A;\cdot,\to,\leadsto,1)$ and let $X\subseteq A$. Then the sets $\nu_F(\nu_F(X)\bmto X)$ and $\nu_F(\nu_F(X)\bmleadsto X)$ are filters. If $\hat\nu_F(x)$ exists for some $x\in A$ then $\hat\nu_F(\hat\nu_F(x)\to x)$ and $\hat\nu_F(\hat\nu_F(x)\leadsto x)$ also exist and, moreover, they are idempotent elements. \end{theorem} \begin{proof} It is enough to verify the statement for the implication $\bmto$. The remaining case for $\bmleadsto $ can be shown dually. Now, let us prove first that $$\nu_F(\nu_F(X)\bmto X)\bmdot\nu_F(X)=\nu_F(X)$$ for any $X\subseteq A$. Clearly, we have $1\in\nu_F(\nu_F(X)\bmto X)$ and thus $\nu_F(\nu_F(X)\bmto X)\bmdot\nu_F(X)\supseteq\nu_F(X)$ holds. Conversely, assume that $a\in \nu_F(\nu_F(X)\bmto X)$ and $b\in\nu_F(X).$ Then $a, b\in F$ and $ b\to x \leq a$ for all $x\in X$. Consequently $x=x\wedge b = (b\to x)\cdot b\leq a\cdot b$ for all $x\in X$. Because $a\cdot b\in F$ (both $a$ and $b$ belong to $F$) also $a\cdot b\in \nu_F(X)$. Hence $\nu_F(\nu_F(X)\bmto X)\bmdot \nu_F(X)\subseteq \nu_F(X)$. Together we obtain $\nu_F(\nu_F(X)\bmto X)\bmdot \nu_F(X)= \nu_F(X)$. One can easily check that the set equality $A\bmto(B\bmto C)=(A\bmdot B)\bmto C$ holds. Thus we can compute \begin{eqnarray*} \nu_F(\nu_F(X)\bmto X)\bmto (\nu_F(X)\bmto X) &=&(\nu_F(\nu_F(X)\bmto X)\bmdot \nu_F(X))\bmto X\\ &=&\nu_F(X)\bmto X.
\end{eqnarray*} Denoting $Y:=\nu_F(X)\bmto X$ in the previous equality we obtain $$\nu_F(\nu_F(Y)\bmto Y)=\nu_F(Y),$$ which together with $\nu_F(\nu_F(Y)\bmto Y)\bmdot \nu_F(Y)=\nu_F(Y)$ gives $$\nu_F(Y)\bmdot\nu_F(Y)=\nu_F(Y).$$ Thus $\nu_F(Y)=\nu_F(\nu_F(X)\bmto X)$ is a filter. If the element $\hat\nu_F(x)$ exists and if $x\leq y$ then clearly $y\leq y\vee \hat\nu_F(x)\in F$. Moreover, any $a\in F$ such that $y\leq a$ satisfies $x\leq a$ and consequently $\hat\nu_F(x)\leq a.$ Thus $y\vee \hat\nu_F(x)\leq y\vee a=a$ holds and we have proved that $\hat\nu_F(y)$ exists and that $\hat\nu_F(y)=y\vee\hat\nu_F(x).$ Because $x\leq \hat\nu_F(x)\to x$, the element $\hat\nu_F(\hat\nu_F(x)\to x)=(\hat\nu_F(x)\to x) \vee\hat\nu_F(x)$ exists and it is the least element of the filter $\nu_F(\nu_F(\{x\})\bmto \{x\}).$ Thus $\hat\nu_F(\hat\nu_F(x)\to x)$ is an idempotent element. \end{proof} \begin{lemma}\label{mb_lemma1} Let $\mathbf A=(A;\cdot,\to,\leadsto,1)$ be a pseudo-hoop. If $F\subseteq A$ is a filter with a least element, then $F$ is a normal filter. \end{lemma} \begin{proof} If $a=\bigwedge F$ is the least element of the filter $F$ then it is an idempotent element: indeed, $a\cdot a\in F$ and $a\cdot a\leq a$, whence $a\cdot a=a$. Using the divisibility we obtain for any $x\in A$ \begin{eqnarray*} x\cdot a &=& (x\cdot a)\wedge a \\ &=& a\cdot(a\leadsto (x\cdot a) )\\ &=& a\cdot a\cdot(a\leadsto (x\cdot a) )\\ &=& a\cdot (a\wedge (x\cdot a) )\\ &=& a\cdot x\cdot a. \end{eqnarray*} and \begin{eqnarray*} a\cdot x &=& (a\cdot x)\wedge a \\ &=& (a\to (a\cdot x) )\cdot a\\ &=& (a\to (a\cdot x) )\cdot a\cdot a\\ &=& (a\wedge (a\cdot x) )\cdot a\\ &=& a\cdot x\cdot a. \end{eqnarray*} Hence $a\cdot x=x\cdot a$ holds. Finally, $x\to y\in F$ if, and only if, $a\leq x\to y$ if, and only if, $x\cdot a= a\cdot x\leq y$ if, and only if, $a\leq x \leadsto y$ if, and only if, $x\leadsto y\in F$. Thus $F$ is normal.
\end{proof} \begin{theorem} Let $\mathbf A=(A;\cdot,\to,\leadsto,1)$ be a pseudo-hoop and let $M\subseteq A.$ If there is an element $x\not\in M^\perp\cdot M^{\perp\perp}$ such that $\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)$ exists then $\mathbf A$ is subdirectly reducible. \end{theorem} \begin{proof} Let us assume that the element $\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)$ exists. Then we denote the idempotent $$y:=\hat\nu_{M^\perp\cdot M^{\perp\perp}}(\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)\to x).$$ We will show that $y\not\in M^{\perp}$ and $y\not\in M^{\perp\perp}.$ Assume to the contrary that $y\in M^\perp$. Then for any $a\in M^{\perp\perp}$ we have $a\vee (\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)\to x) =a\vee y= 1$ and thus $\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)\to x\in M^{\perp}\subseteq {M^\perp\cdot M^{\perp\perp}}.$ Clearly, also $\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)\in{M^\perp\cdot M^{\perp\perp}}$ and thus $(\hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)\to x)\cdot \hat\nu_{M^\perp\cdot M^{\perp\perp}}(x)\leq x$, whence $x\in {M^\perp\cdot M^{\perp\perp}}$, which is absurd. Analogously it can be proved that $y\not\in M^{\perp\perp}.$ Because $y\in M^\perp\cdot M^{\perp\perp},$ there exist elements $1\not=y_1\in M^\perp$ and $1\not=y_2\in M^{\perp\perp}$ such that $y=y_1\wedge y_2$ (see Theorem \ref{mb_theor1}; if, e.g., $y_1=1$, then $y=y_2\in M^{\perp\perp}$, a contradiction). Clearly, Theorem \ref{mb_theor1} shows that $\langle y_1,y_2\rangle$ is an idempotent element in $M^\perp\times M^{\perp\perp}$ (because $y$ is an idempotent in $ M^\perp\cdot M^{\perp\perp}$). Consequently, both $y_1$ and $y_2$ are idempotent too. Lemma \ref{mb_lemma1} shows that $\mathcal{F}(y_1)$ and $\mathcal{F}(y_2)$ are normal filters. Moreover $y_1\vee y_2=1$ yields $\mathcal{F}(y_1)\cap \mathcal{F}(y_2)=\{ 1\}$ and thus $\mathbf A$ is subdirectly reducible. \end{proof} \section{Several types of prime filters in residuated $\vee$-semilattices} \label{filterkondo} In the paper \cite{gasse} Van Gasse et al.
asked whether a commutative residuated lattice $L$ in which prime filters and $\vee$-prime filters coincide must be an MTL-algebra. The affirmative answer was given in \cite{kondotur} by Kondo and Turunen. We extend their result to the setting of noncommutative residuated semilattices. \begin{definition}\label{leadstfilt} Let $A$ be a residuated $\vee$-semilattice. A filter $F$ of $A$ is called a {\em $\to$-prime filter} ({\em $\leadsto$-prime filter}) if $x \to y \in F$ or $y \to x \in F$ ($x \leadsto y \in F$ or $y \leadsto x \in F$) for all $x, y \in A$. A filter $F$ of $A$ is said to be a {\em prime filter} if it is both a $\to$-prime filter and a $\leadsto$-prime filter. A filter $F$ of $A$ is called a {\em $\vee$-prime filter} if $x \vee y \in F$ yields $x \in F$ or $y \in F$ for all $x, y \in A$. By ${\mathcal P}{\mathcal F}(A)$ (${\mathcal P}{\mathcal F}_{\to}(A)$, ${\mathcal P}{\mathcal F}_{\leadsto}(A)$, ${\mathcal P}{\mathcal F}_{\vee}(A)$, respectively) we mean the class of all prime filters of $A$ ($\to$-prime filters, $\leadsto$-prime filters, $\vee$-prime filters, respectively). \end{definition} \begin{lemma} \label{bcxd} Let $A$ be an integral residuated $\vee$-semilattice. Then \begin{enumerate} \item ${\mathcal P}{\mathcal F}_{\to}(A)\subseteq {\mathcal P}{\mathcal F}_{\vee}(A)$; \item ${\mathcal P}{\mathcal F}_{\leadsto}(A)\subseteq {\mathcal P}{\mathcal F}_{\vee}(A)$; \item ${\mathcal P}{\mathcal F}(A)\subseteq {\mathcal P}{\mathcal F}_{\vee}(A)$. \end{enumerate} \end{lemma} \begin{proof} 1. Let $F\in{\mathcal P}{\mathcal F}_{\to}(A)$, $x, y\in A$, $x\vee y\in F$. Then $x \to y \in F$ or $y \to x \in F$. Assume first that $x \to y \in F$. Clearly, $1=y \to y \in F$. Hence also $(x \to y)\cdot (y \to y)\leq (x \to y)\wedge (y \to y)=(x\vee y) \to y \in F$. It follows that $((x\vee y) \to y)\cdot (x\vee y)\leq y$, whence $y\in F$. Similarly, $y \to x \in F$ yields that $x\in F$. \noindent{}2.
This follows by the same reasoning as in (1), applied to $\leadsto$. \noindent{}3. It follows from the fact that ${\mathcal P}{\mathcal F}(A)= {\mathcal P}{\mathcal F}_{\to}(A)\cap {\mathcal P}{\mathcal F}_{\leadsto}(A)$. \end{proof} \begin{definition}\label{mtlleadstfilt} Let $A$ be an integral residuated $\vee$-semilattice. $A$ is called a {\em $\to$-MTL-algebra} (a {\em $\leadsto$-MTL-algebra}, respectively) if $(x\to y)\vee (y\to x)=1$ ($(x\leadsto y)\vee (y\leadsto x)=1$, respectively) for all $x, y\in A$. $A$ is said to be a {\em pseudo MTL-algebra} if $A$ is both a {$\to$-MTL-algebra} and a {$\leadsto$-MTL-algebra}. \end{definition} \begin{lemma} \label{cxd} Let $A$ be an integral residuated $\vee$-semilattice. Then \begin{enumerate} \item If $A$ is a {$\to$-MTL-algebra} then ${\mathcal P}{\mathcal F}_{\to}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$. \item If $A$ is a {$\leadsto$-MTL-algebra} then ${\mathcal P}{\mathcal F}_{\leadsto}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$. \item If $A$ is a pseudo {MTL-algebra} then ${\mathcal P}{\mathcal F}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$. \end{enumerate} \end{lemma} \begin{proof} 1. Let $F\in{\mathcal P}{\mathcal F}_{\vee}(A)$, $x, y\in A$. Since $(x\to y)\vee (y\to x)=1\in F$ we have that $x \to y \in F$ or $y \to x \in F$. It follows that $F\in{\mathcal P}{\mathcal F}_{\to}(A)$. \noindent{}2. This follows by the same reasoning as in (1), applied to $\leadsto$. \noindent{}3. Let $A$ be a pseudo {MTL-algebra}. Then we have by (1) and (2) that ${\mathcal P}{\mathcal F}_{\to}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$ and ${\mathcal P}{\mathcal F}_{\leadsto}(A)={\mathcal P}{\mathcal F}_{\vee}(A)$. It follows that ${\mathcal P}{\mathcal F}_{\leadsto}(A)={\mathcal P}{\mathcal F}_{\to}(A)= {\mathcal P}{\mathcal F}(A)={\mathcal P}{\mathcal F}_{\vee}(A)$.
\end{proof} \begin{theorem} \label{oddelth} {\rm{}(The Prime filter theorem for 2-sided residuated $\vee$-semilattices).} Let $A$ be a 2-sided residuated $\vee$-semilattice, $F$ be a filter of $A$ and $a\notin F$. Then there exists a $\vee$-prime filter $G$ of $A$ that includes $F$ and does not contain $a$. \end{theorem} \begin{proof} Let $G$ be a maximal filter containing $F$ that does not contain $a$ (such a filter exists by Zorn's lemma). Let us show that $G$ is $\vee$-prime. Assume that $x, y\in A$, $x\vee y\in G$ and $x, y\notin G$. Then, by maximality of $G$, we get that $$ \begin{array}{r c l l} a\in [G\cup \{x\})&=& \{z \in A \mid& z\geq u_1 \cdot x\cdot u_2 \cdot x \cdot \dots \cdot x \cdot u_n\ \text{for some}\ n\in {\mathbb N}, n\geq 1\\ & & &\text{and}\ u_1, u_2, \dots, u_n \in G\}\\ \end{array} $$ \noindent and $$ \begin{array}{r c l l} a\in [G\cup \{y\})&=& \{z \in A \mid& z\geq v_1 \cdot y\cdot v_2 \cdot y \cdot \dots \cdot y \cdot v_m\ \text{for some}\ m\in {\mathbb N}, m\geq 1\\ & & &\text{and}\ v_1, v_2, \dots, v_m \in G\}.\\ \end{array} $$ This yields that there are $m, n\in {\mathbb N}, m, n\geq 1$ and $u_1, u_2, \dots, u_n, v_1, v_2, \dots, v_m \in G$ such that $ u_1 \cdot x\cdot u_2 \cdot x \cdot \dots \cdot x \cdot u_n\leq a$ and $v_1 \cdot y\cdot v_2 \cdot y \cdot \dots \cdot y \cdot v_m\leq a$. Let us put $k=\mathrm{max} \{m, n\}$ and $w=\prod_{1\leq i\leq n, 1\leq j\leq m}u_i\cdot v_j$. Then $w\in G$. We also put $w_i=w$ for all $i\leq k$. Evidently, $ (wx)^{k}=w_1 \cdot x\cdot w_2 \cdot x \cdot \dots \cdot x \cdot w_k\leq u_1 \cdot x\cdot u_2 \cdot x \cdot \dots \cdot x \cdot u_n\leq a$ and $ (wy)^{k}=w_1 \cdot y\cdot w_2 \cdot y \cdot \dots \cdot y \cdot w_k\leq v_1 \cdot y\cdot v_2 \cdot y \cdot \dots \cdot y \cdot v_m\leq a$.
Let us compute the element $c=[w(x\vee y)]^{2k}\in G$. First, for any subset $S\subseteq \{1, \dots, 2k\}$ we put $c_S=\prod_{1\leq i\leq 2k} (w\cdot z_i)$, where $z_i=x$ if $i\in S$ and $z_i=y$ otherwise. Clearly, if $\mathrm{card}(S)\geq k$ then $c_S\leq (wx)^{k}\leq a$ and similarly if $\mathrm{card}(S)\leq k$ then $c_S\leq (wy)^{k}\leq a$. Since $c=\bigvee_{S\subseteq \{1, \dots, 2k\}}c_S$ we get that $c\leq a$, i.e., $a\in G$, a contradiction. \end{proof} We then have the following corollary. \begin{corollary}\label{inters} Any filter of a 2-sided residuated $\vee$-semilattice is equal to the intersection of the $\vee$-prime filters that include it. \end{corollary} \begin{theorem} \label{bacxd} Let $A$ be an integral residuated $\vee$-semilattice. Then \begin{enumerate} \item $A$ is a {$\to$-MTL-algebra} if and only if ${\mathcal P}{\mathcal F}_{\to}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$. \item $A$ is a {$\leadsto$-MTL-algebra} if and only if ${\mathcal P}{\mathcal F}_{\leadsto}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$. \item $A$ is a {pseudo MTL-algebra} if and only if ${\mathcal P}{\mathcal F}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$. \end{enumerate} \end{theorem} \begin{proof} 1. $\Longrightarrow$: It follows from Lemma \ref{cxd}(1). $\Longleftarrow$: Assume that ${\mathcal P}{\mathcal F}_{\to}(A)= {\mathcal P}{\mathcal F}_{\vee}(A)$ and that $A$ is not a {$\to$-MTL-algebra}. Hence there are $a, b\in A$ such that $(a\to b)\vee (b\to a)\not =1$. Let $G_1=\bigcap \{ G\in {\mathcal F}{(A)} \mid G\not=\{1\} \} $. Assume first that $G_1=\{1\}$. Then there exists a $\vee$-prime filter $P$ such that $(a\to b)\vee (b\to a)\not \in P$. Since $P$ is also a $\to$-prime filter we have that $(a\to b)\in P$ or $ (b\to a) \in P$, i.e., $(a\to b)\vee (b\to a)\in P$ which yields a contradiction. Second, assume that $G_1\not=\{1\}$. Then $\{1\}$ is a $\vee$-prime filter, hence a $\to$-prime filter.
It follows that either $1\leq a\to b$ or $1\leq b\to a$, i.e., $(a\to b)\vee (b\to a) =1$, a contradiction again. 2. It follows by the same arguments as in part 1. 3. It follows from parts 1 and 2. \end{proof} \section*{Acknowledgment} This is a pre-print of an article published in International Journal of Theoretical Physics. The final authenticated version of the article is available online at: https://doi.org/10.1007/s10773-015-2608-0.
\section{Introduction} \label{sec:1} Teaching in the great universities of the Middle Ages focussed on the seven liberal arts: the {\it trivium} -- logic, grammar, and rhetoric -- and the {\it quadrivium} -- arithmetic, geometry, music, and astronomy. Training and competency in these subjects were believed sufficient to form an individual with the necessary intellectual capabilities to pursue a career or further study in law, theology, or natural philosophy. Today's natural philosophers are schooled in the arts of empirical, theoretical, and computational scientific methodology as preparation for their professional careers. However, the vanguard of the data revolution is now upon us with high-dimensional, high-volume, feature-rich data sets becoming an increasingly common aspect of our everyday workplace and we are ill-prepared. To meet this challenge, a fourth paradigm \cite{fp} is emerging: the so-called data-intensive science or $x$-informatics (where $x$ is the science of choice, such as {\it bio}informatics, {\it geo}informatics or {\it astro}informatics), which will support and drive scientific discovery in the 21$^{st}$ century. This is not just an incremental development on what has gone before but something entirely new and we're still trying to figure out not only what shape it is and where its boundaries lie but, more fundamentally, what its basic rules are. Yet, at the same time, it would not be unfamiliar to a 13$^{th}$ century scholar. The core of the mediaeval syllabus was a systemization of knowledge -- what rules does it obey, how is it symbolized and how is it communicated -- and, in particular, numerical knowledge and the relationship of number to physical space and time. Arithmetic, for example, was the study of pure number whereas music was the study of number in relation to time \cite{kline}.
In this paper, we aim to show how the new art of data science can similarly be framed as a systemization of {\it data} and its relationship to space and time, particularly in regard to its technological aspects. Though this has relevance to many sciences, our broad theme will be astronomy. \section{The logic of data} \label{sec:2} Just as alchemists thought of mercury as the {\it prima materia} (first matter) from which all metals were formed, so scientists consider data to be the basis of all understanding. Yet it is a commodity as fluid and elusive as its elemental counterpart. Great cost and effort are expended by empiricists to measure it, computationalists to imitate it and theoreticians to formulate it but, even then, do we really understand what we are working with? Even the word itself is open to speculation \cite{norman}. The legitimate use and, especially, reuse of data requires context: not just the raw processing of numerical or symbolic values but also adequate attention to their origins, systematics, biases, and bounds. Hogg \& Lang \cite{hogglang1, hogglang2} argue that most of astronomy has been conducted through catalogues, an inferior data product, derived from raw data but missing the necessary knowledge about the data -- how it was analysed, errors estimated, etc. -- to support any sophisticated statistical inferencing, such as resolving deblending issues in SDSS. Anything beyond raw data values is metadata and needs to be sufficiently described, preferably in terms of a (Bayesian) posterior probability model, so that arbitrary questions (cast as hypotheses) can be asked of it with maximal usage of the available information. Taken to its extreme, the ultimate model would be of the entire sky through wavelength and time from which any astronomical image ever taken at any time with any equipment in any configuration could be generated and thus anomalies in any data easily identified.
Semantics provides an alternative but complementary approach, framing knowledge about data in terms of programmable structures rather than likelihood functions. Semantic constructs such as ontologies allow domain knowledge to be expressed in a machine-processible format in terms of classes, properties (relationships) and operations and data as instances of these. Logical inferencing over the classes and the instances allows inconsistencies in both the data and its description to be determined. Semantics also aids reusing data: different descriptions/interpretations can be efficiently combined/reconciled {\it by machines} to construct data sets of wider scope and applicability than originally intended. For example, the study of spectral energy distributions using multiwavelength data sets formed by combining (heterogeneous) single filter/passband observations of astronomical objects needs a proper treatment of flux values in each component data set, which information would be encoded in their ontologies or equivalent structures. We should never just blindly work with data, particularly as it becomes ever more complex. Explorations may skirt around full and proper treatments but understanding the rules that data obeys and manipulating this logic through inferencing, be it statistical or logical, is necessary for validatable and replicable discovery. Developing such systems and ensuring that they are performant in the face of forthcoming data expectations is a real challenge, however, and not one to be easily glossed over but it is one that can be met by interdisciplinary engagement across the vertical silos of individual sciences. \section{The grammar of data} \label{sec:3} To the mediaeval mind, unravelling the mysteries of the world lay in decoding the symbolic languages that Nature employed to hide her secrets. Everything was charged with meaning, be it through number, colour, geometry, or some more subtle aspect or property. 
The wise man could read the hidden messages (the patterns in a monastery garden) whereas the fool saw just the forms (the flowers), understanding nothing further of their meaning. The symbolism of data is far more profane: complex objects are converted to sequences of bits for persistence and communication but there are still a variety of representations (data serialization formats), each with a specific meaning and purpose. At its base level, data is comprised of numbers or symbols, normally stored in a digital (binary) representation. Whilst every piece of data could just be treated as an amorphous chunk of bits, the utility of this approach is really limited to large data objects (blobs), such as the streaming multimedia that forms an increasing fraction of web traffic. Data is far more manipulable if it is structured in some way and a description of that structure is available. It is of even greater advantage if the structure is independent of any specific hardware or software and machine-processible. There is also a distinction between formats used for raw data, which are largely binary, and metadata and derived data, such as catalogues, which are more structured and predominantly textual. Raw binary formats tend to be domain specific, although there is some usage of FITS outside of astronomy. In common with other formats, such as HDF5, descriptions of the binary structures and their metadata (often combined) are separable. CDF and netCDF take the concept even further by defining a common data model for scientific data sets, which has its own associated API. This handles data reading, the coordinate systems the data are expressed in and specific types of data and divorces the data user entirely from the physical details of its storage. The most familiar textual data representations are XML and JSON and systems exist to describe the structures of these, e.g., XML Schema and JSON Schema. 
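The textual/binary trade-off is easy to see in miniature. The following Python sketch (the catalogue fields and values are invented for illustration) serializes the same record once as self-describing JSON text and once as a packed binary triple of doubles, the latter standing in for the kind of schema-plus-binary arrangement used by formats such as FITS, HDF5, or Protocol Buffers:

```python
import json
import struct

# A toy catalogue row: the field names and values are invented for illustration.
row = {"ra": 180.46841, "dec": -23.80007, "mag": 19.7}

# Textual serialization: self-describing and human-readable, but verbose,
# and parsed character by character.
text = json.dumps(row).encode("utf-8")

# Binary serialization: three little-endian doubles (24 bytes); compact and
# fast to decode, but meaningless without an external description of the
# structure -- the format string "<3d" plays the role of the schema here.
binary = struct.pack("<3d", row["ra"], row["dec"], row["mag"])

# The binary form round-trips exactly (doubles in, doubles out).
ra, dec, mag = struct.unpack("<3d", binary)
assert (ra, dec, mag) == (row["ra"], row["dec"], row["mag"])

print(len(text), len(binary))  # the binary encoding is considerably smaller
```

The point is not that one encoding is better, but that the interpretation of the 24 binary bytes lives entirely in the separate structure description, exactly the separation that schema languages and common data models make explicit.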
A frequent criticism of them, however, is that they are ineffectual formats, particularly where processing speed is a factor since parsing is done on a character-by-character basis. Bandwidth and storage issues can also be significant and binary versions are not necessarily any better. Google's Protocol Buffers follows a similar abstraction path to CDF/netCDF and was designed to be a faster alternative to XML. Data structures are defined in terms of a common data model with an API for access and manipulation. The actual format of the underlying data is immaterial -- the default is binary, but textual formats may also be used -- the libraries provide the necessary interfaces to it. Apache Avro follows a similar approach, employing JSON to define its data structures but only using a compact binary data format. When it comes to communicating data, actual physical transportation -- the so-called sneakernet method -- remains one of the most efficient and reliable means, sacrificing latency for high throughput, and is employed by many large astronomy projects as well as commercial service providers. However, every instance remains a bespoke solution, defying standardization, with the exact details known only to the sender and the receiver. When the desire is to communicate data to potentially millions anywhere and at any time, alternative solutions are required. Despite living at the time of greatest interconnectivity in human history, the existing infrastructure is insufficient for our needs: we've officially run out of IPv4 addresses and the Internet pipes are straining under the pressures of streaming media. Next generation efforts, such as Internet2, are developing the advanced capabilities that are required, e.g., the on-demand creation and scheduling of high-bandwidth, high-performance data circuits but the use of the current setup can also be optimized.
Conventional data transfer technologies rely on a single stream/channel between the sender/provider and the receiver to carry the data, which typically does not make full use of the available bandwidth. Chunking up the data and sending it over multiple streams to the receiver achieves a much greater use of bandwidth, e.g., GridFTP works in this way. These streams can either come from multiple providers, each with their own (partial) copy of the data (just the requested chunk needs to be available), or a single provider running parallel streams. Chunks are requested from providers based on their advertised availability and, once the receiver has a chunk, it can also become a provider for it -- this is the basis for many peer-to-peer transport systems. Data streams typically use TCP packets for their transport but this can exhibit poor performance in long distance links, particularly when the bandwidth is high, or when multiple concurrent flows are involved with different data transfer rates. UDT employs UDP packets instead to achieve much faster rates than TCP, while retaining TCP's reliability. Other solutions involve fine-tuning TCP specifically for high performance networks or modifying the TCP protocol. Not all data formats encode their information in as efficient a manner as achievable and it is often possible to reduce the size of a data object for transmission (or storage) by compressing it. Significant improvements can be achieved, particularly for textual data, with generic compression routines such as gzip and bzip2. For astronomical binary data -- images and tables -- FITS tile compression \cite{fitscompression} offers better performance than these and also preserves the FITS headers (structure description) uncompressed for faster access. In fact, with the appropriate library (CFITSIO), compressed data should be the default mode for operation with decompression never being necessary.
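The gain from generic compressors on repetitive textual data can be sketched as follows (the catalogue-like payload is invented purely for illustration):

```python
import bz2
import gzip

# A repetitive textual payload: a fixed-width ASCII catalogue row
# repeated many times (hypothetical data).
text = ("obj_0001  150.11625  2.20500  21.5\n" * 5000).encode("ascii")

# Generic, lossless compression of the same bytes with two common routines.
gz = gzip.compress(text)
bz = bz2.compress(text)

# Compression ratios: original size over compressed size.
ratio_gz = len(text) / len(gz)
ratio_bz = len(text) / len(bz)
```

For highly regular text like this, both routines shrink the payload by orders of magnitude; real catalogues compress less dramatically but still substantially.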
With larger amounts of data, storage and bandwidth become a premium -- OAIS (ISO 14721; 2003) is a high-level and well-regarded model for the complete archival cycle, useful for framing discussions about, and critiquing, data management planning. The meaning of the data, however, lies in its inherent structure and making this independent of the actual arrangement of bytes is no different to abstracting the meaning of creation from its encoding in the world around us. \section{The rhetoric of data} \label{sec:4} Students in the Middle Ages were drilled by rote in the skills of writing letters and sermons, drawing on the rhetorical teachings of classical antiquity. It was presupposed that the structure of language corresponded to that of being and understanding and therefore the manner and style of communicating well and correctly was important, employing the appropriate tone and linguistic constructs for the given subject matter (an appreciation that contributed to the scientific method). Data is the language of the scientific dialectic and highly politicized with a suite of tricks and devices to lead an audience to a particular conclusion. The credibility of an interpretation is as much a function of how it has been reached as it is a matter of trust in the data upon which it is based. The level of such trust, however, seems to be inversely proportional to how easy the data is to access. Though astronomical data is commercially valueless (described by Jim Gray as a zero billion dollar problem), most of it still resides in protected vertical silos, accessible for long periods of time to only an elect handful. Attempts to create an open data culture are viewed as seditious: the only contemporary survey to make its data publicly accessible from the outset is the Catalina Real-Time Transient Survey~\cite{crts}. This level of control persists even when the results have undergone peer review and appeared in the public domain. 
It can be a Sisyphean task to get supporting data to replicate or build upon particular interpretations. In the life sciences, generally all data is required to be made available without precondition when an associated paper is published, preferably in a community-endorsed public data repository. Astronomy already has a culture of data centres but these tend to be too tied to specific big missions or wavelength regimes -- there is certainly no current repository where arbitrary data can be archived or even permanently registered. The glacial progress of traditional astronomical publishing is countered by online bibliographic services, such as arXiv and ADS, which provide access points to associated data where available. An even more recent trend is the pre-submission discussion of data on blogs and other social networking fora. Although much of this is clearly intended for the sake of publicity rather than serious scientific discourse, it does reflect a growing frustration with the existing peer review system, particularly in our increasingly connected and open society, and the interest in alternatives such as open peer review and open peer commentary. The progressive emergence of interdisciplinary fields is also challenging since data is often taken out of its original context and reused in an entirely new (and, maybe, not entirely appropriate) one. This so-called pick-and-mix model allows one far greater latitude to present (apparently) supported conclusions, either intentionally or, more usually, by accident, in areas where there is a current lack of domain expertise. As mentioned previously, however, the formal use of semantics can go some way to preventing this. For a thousand years, data has been a precious commodity, residing in select locations and to be safeguarded at all costs. 
The necessity of an open approach in the new era stands against the existing control and access structures but is far more in tune with the intended purity and selflessness of the scientific method. Data should be free to all. \section{The arithmetic of data} \label{sec:5} From the abacus to the algorithm, arithmetic was concerned less with reckoning than with understanding the nature of number, its properties, and the uniqueness of numerical series obtained by certain constant relationships. It was far more qualitative than quantitative, motivated by a desire to divine the presence of an unseen hand in Nature expressed in the beauty of Platonic perfection. Whilst we do not seek transcendence in data, exploring its nature and its properties is still an illuminating experience. The utility (or value) of data lies in its ability to convey information (although one person's data can be another person's noise). This is a highly variable quantity, dependent on the size and potential impact of its contents, i.e., how supportive or challenging they are to the current paradigm, as well as its timeliness. The relative utility of individual pieces of data can be ranked, producing an overall trend that is logistic: initial data in an area is approximately exponential in utility, e.g., observations of 10 Type Ia supernovae (SNe Ia) in the redshift range $0.16 \le z \le 0.62$ suggest an accelerating universe \cite{riess}; then, as progressively more data becomes available, saturation occurs and its utility slows, e.g., successive observations supporting the SNe Ia results; and at maturity, it has essentially zero utility, e.g., surveys are regularly showing consistent behaviour. The metatrend may well be a succession of logistic behaviours or approaching something that is multiply logistic, depending on how much new paradigms redefine the utility of old data. Unprecedented progress along these logistic trends is being driven by two factors. 
Firstly, the future is characterized by massive parallel streams of (small) data events rather than large monolithic slabs of data. The synergistic nature of data (as expressed in Szalay's law that the utility of $N$ comparable data sets is $N^{2}$) means that these streams clearly lead to potentially rapid progress along the logistic curve, providing that they are linkable. Paradoxically the advent of the data-intensive era marks the inflection point in utility growth for single data sets. Secondly, there is the increasing pace of data acquisition, driven by exponential growth rates in technology (in particular, Moore's law regarding the transistor density of integrated circuits). Some believe that these rates cannot continue indefinitely: at some stage, either the relative doubling times of different technologies will become incompatible -- the slowest one defining the breaking point --, or one of them will come up against the hard edge of some physical law, or the economics of continued growth will cease to be attractive or advantageous. Others feel that new technologies will arise to keep the exponential growth up at equivalent rates, at least, if not accelerating ones. Power considerations are an increasingly important aspect. Already in 2004, microprocessor clock rates flatlined owing to power dissipation limits, although increasing the number of cores per chip has maintained the growth rate for computational performance. Exascale systems (desktop petaflop/embedded teraflop) have predicted power needs of $\sim$100 MW \cite{exascale} but even commodity-level processors are heading towards a power wall. One mitigating strategy is to employ GPUs for as much general purpose computation as possible \cite{gpu} -- they offer far better flop/Watt performance than CPUs. However, they must be supported by a CPU to run the operating system and manage the GPU device. 
Using a low-power CPU processor, which would spend much of its time idling, is a viable short-term solution but, inevitably, trans-silicon technologies will need to be considered -- these require lower energy but at a cost of slower clock speeds. If the universe is fundamentally reducible to a set of equations then there is a finite amount of information to be extracted from data. The extent to which we can approach that limit is determined by the technology and energy available to us in that pursuit, although ultimately the law of diminishing returns may still render it unattainable. If, however, the world is unknowable then gathering data and deriving information from it are endless activities. \section{The geometry of data} \label{sec:6} The great cathedrals of mediaeval Europe were intended as sacred mirrors of creation, reflecting the design and structure of the universe through the laws and forms of geometry, translated by the master stonemason in imitation of the work of his divine master. By the same token, the great data centers of tomorrow will reflect the aspirations of master scientists and technologists to facilitate the study of the design and structure of the universe through the laws and forms of a new geometry, the architectural order of vast collections of data. The physical media of sacred geometries are well understood, be it Caen stone and Purbeck marble or hard drives. Petascale storage systems can be constructed from commodity Terabyte-sized components for approximately $\$$50000/PB at the time of writing, although suitable precautions must be taken to protect against the high failure rates and subsequent data loss that are associated with ``cheap'' commodity disks. The art and skill then lies in layering the data on these in as efficient and effectual a manner as possible according to user constraints.
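One such layering strategy -- splitting a file into fixed-size chunks and scattering replicated copies across cluster nodes, in the spirit of GFS-like systems -- can be sketched as follows (the chunk size and replica count are typical figures, but the hash-based placement policy is an invented toy, not any system's actual algorithm):

```python
import hashlib

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, a typical GFS/HDFS chunk size
REPLICAS = 3                   # a typical replication factor

def split_into_chunks(size_bytes, chunk_size=CHUNK_SIZE):
    """Number of fixed-size chunks needed to cover a file (ceiling division)."""
    return (size_bytes + chunk_size - 1) // chunk_size

def place_replicas(chunk_id, nodes, replicas=REPLICAS):
    """Deterministically pick distinct nodes for each replica of a chunk
    (a toy stand-in for a master node's placement policy)."""
    h = int(hashlib.md5(str(chunk_id).encode()).hexdigest(), 16)
    return [nodes[(h + i) % len(nodes)] for i in range(replicas)]

# A hypothetical 8-node cluster and a ~1 GB file.
nodes = [f"node{i:02d}" for i in range(8)]
n_chunks = split_into_chunks(1_000_000_000)
placement = {c: place_replicas(c, nodes) for c in range(n_chunks)}
```

A real master node would also track rack locality and re-replicate chunks when a node fails; the sketch only captures the chunk-and-replicate idea itself.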
A standard architecture for high throughput data that is intended to be predominantly read and rarely overwritten or appended (e.g., for data processing) is to break it up into fixed size chunks (typically 64 MB) and then distribute multiple copies (typically three, two on the same rack and one on a different one) of each chunk across the disk cluster (see, for example, Google FS \cite{gfs} or its open-source equivalent, HDFS). This provides reliability against the potential inadequacies of the underlying hardware and can be fine-tuned (more copies) for specific data where greater demand or protection is anticipated. A central/master node maintains the list of which chunk is where and any attendant metadata, and also the list of all operations involving data. This does, however, present an obvious single point of failure and can limit scalability (distributing the master node is a possible solution). Such systems are optimized for very large data sets with a small number of constituent parts. When there are large numbers of small files in a data set, the dominant process during runtime execution of a computation on that data set is locating the relevant chunks, i.e., calls to the master node \cite{wiley}. HDFS mitigates this by defining a specific data structure for such situations -- the sequence file, which is essentially a container of smaller files bundled with an index -- vastly reducing the number of files on disk that need to be processed. Further improvements can be obtained by structuring sequence files according to some prescription, e.g., spatial or temporal location of image files, rather than just randomly grouping files into them. Alternate data scenarios involve low-latency random access (high availability) to the data, e.g., retrieving thumbnail images, or very large numbers of varying sized files with multiple concurrent writes, e.g., log files. 
In these cases, approaches based around distributed multi-dimensional sorted maps, such as Google's BigTable \cite{bigtable} or Hadoop's open-source equivalent, HBase (both built on top of GFS and HDFS respectively), or more general distributed data and metadata architectures, such as OpenStack Swift or iRODS, are more appropriate. All these physical architectures broadly have no knowledge of the structure of the data that they are dealing with. However, there is a subclass that is concerned specifically with the type of data that one would traditionally put in a (relational) database (RDBMS). RDBMSs do not function well beyond $\sim$100 TB in size \cite{gray} but there is a clear need for equivalent systems to support petascale catalogs, etc. BigTable and its variants belong to a superclass of systems known as NoSQL, which provide distributed storage for structured data, and can be used as scaled equivalents to databases for many types of data. However, a better match for scientific data is afforded by SciDB, which is a column-oriented (rather than row-oriented like a RDBMS) system that uses arrays as first-class objects rather than tables and is still ACID (like a RDBMS but unlike most NoSQL solutions). The intricate geometries that we employ in our data centers with replicated hierarchical patterns are no different to those used by stoneworkers ten centuries ago in their own towering edifices. Both are intended to reflect our knowledge of the design and structure of the universe itself, expressed in human works. \section{The music of data} \label{sec:7} The ancients believed that the heavens were pervaded by the harmony of the spheres, the majestic fugue created by the movements of the celestial bodies.
The mediaeval curriculum formalized this, along with the internal fugue of the human body and the audible fugues that we could create, into the concept of {\em musica}, which studied the progression of proportions through time according to well-established patterns and rules. The progression of data through time as a result of computations on it is a similar fugue and, in the case of large data sets, there are a number of identifiable patterns. The predominant such computational pattern today is the so-called embarrassingly parallel task, which describes a computation for which little or no effort is required to separate it into a number of parallel tasks, often with no dependency between them, e.g., anything requiring a sweep through a parameter space. These can then be distributed across the available processors, bringing a substantial reduction to the computation time in comparison with a straightforward sequential approach. If the processors can be selected so that the data they require is local (data locality), this further reduces the computation time (in fact, this is a general principle with large data sets -- bring the computation to the data). Several frameworks exist for managing these computations: Condor and BOINC will handle generic jobs on general pools of machines, ranging from local resources dedicated to the process to spare cycles scavenged from online resources anywhere in the world (the usual scenario for BOINC), although data is invariably transferred to the computation with these. Note that GPUs offer an increasingly popular alternative to CPU clusters with single high-end chips offering performance speed-ups of up to $\sim$1000 compared to single CPUs, assuming appropriate code parallelization. In fact, GPU clusters make bulk brute force calculations viable over state-of-the-art CPU algorithmic approaches, for example, in $n$-point correlation functions \cite{tian}. 
MapReduce \cite{mapreduce}, and its open-source equivalent, Hadoop, take a different approach by expressing jobs in terms of two standard operations -- map and reduce, instances of which (mappers and reducers) are deployed to the compute resources holding the data to be processed (thus ensuring data locality). A mapper transforms its input data (as (key, value) pairs) to an intermediate set of different (key, value) pairs. The intermediate pairs are gathered from all mappers and reordered, and the group of data for each distinct key is sent to a reducer. Finally, the outputs of the reducers are collected and returned as the overall result of the computation. Not all computations are expressible in this form -- those which require a large amount of state information to be shared between mappers, e.g., referencing a common training set, with a lot of fine-grained synchronization can be problematic, although those involving iterative processes can often be expressed as chains of MapReduce tasks. An alternative pattern is to apply a streaming solution to the computation, i.e., one which only requires a single pass through the data. Typically these involve an incremental (online) formulation of the computational algorithm which updates with each new data point. Further optimizations are possible for specific types of computation, such as stochastic gradient descent for some types of machine learning. Obviously for large data sets, computations based on a single reading of the data are ideal and, in some cases, such algorithms also lend themselves to parallelization. In the same way that polyphony lay at the heart of the mediaeval fugue with multiple voices combining to form a harmonic whole, parallelization is at the core of the modern data fugue with thousands of cores and threads acting in concert to transform vast data sets into harmonic representations of our knowledge of the cosmos.
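The map/shuffle/reduce pattern, together with the single-pass streaming alternative, can be sketched in a few lines (a toy in-process illustration of the computational pattern, not Hadoop's actual API):

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply a mapper to each input record, yielding (key, value) pairs."""
    for rec in records:
        yield from mapper(rec)

def shuffle(pairs):
    """Group intermediate values by key, as done between map and reduce."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    """Apply a reducer to each key's group of values."""
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Word count, the canonical embarrassingly parallel example.
lines = ["the heavens the spheres", "the fugue"]
mapper = lambda line: [(w, 1) for w in line.split()]
reducer = lambda k, vs: sum(vs)
counts = reduce_phase(shuffle(map_phase(lines, mapper)), reducer)

# A streaming (single-pass, incremental) mean, updating with each data point.
def running_mean(stream):
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n
    return mean
```

In a real deployment each mapper and reducer runs on the node holding its slice of the data; only the shuffle moves intermediate pairs across the network.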
\section{The astrology of data} \label{sec:8} ``As above, so below'' underpinned the mediaeval conviction that patterns in the heavens reflected, or even presaged, happenings here on Earth in all spheres of life, from personal health to affairs of state to triumphs and disasters. {\em Astronomia} was both the science of observing these patterns and interpreting them, drawing on the corpora of Babylonian and Islamic thought. The plans for creation were writ large in the celestial arrangements of stars and planets and we could divine them by proper study. Data mining is ``the semi-automatic discovery of patterns, associations, changes, anomalies, and statistically significant structures and events in data'' \cite{kddig} and is the mainstay of astroinformatics. The application of data mining to a data set really has two primary goals \cite{fayyad}: predicting the future behaviour of certain entities based on the existing behaviour of other entities in the data (prediction) and finding human-interpretable patterns describing the data (description) -- interestingly the same division distinguished judicial and natural astrology. The suite of available data mining techniques, originating primarily from computer science (particularly artificial intelligence research) and statistics, can then be regarded as falling into one or more of these categories: classification, regression, clustering, summarization, dependency modelling, and change and deviation (or outlier) detection. The process of data mining extends well beyond just the casual employment of a particular algorithm, however. The data of interest first has to be collected and carefully prepared for analysis, e.g., normalization, handling missing values, binning, sampling, etc. 
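A toy preparation pipeline along these lines (mean imputation of missing values, min-max normalization, equal-width binning; every choice here is illustrative, not a recommendation) might read:

```python
def prepare(values, n_bins=4):
    """Toy preparation pipeline: impute missing values with the mean,
    min-max normalize to [0, 1], then assign equal-width bins."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    imputed = [mean if v is None else v for v in values]
    lo, hi = min(imputed), max(imputed)
    normed = [(v - lo) / (hi - lo) for v in imputed]
    # Clamp the maximum into the last bin.
    bins = [min(int(v * n_bins), n_bins - 1) for v in normed]
    return normed, bins

# A hypothetical feature column with one missing measurement.
normed, bins = prepare([2.0, None, 4.0, 10.0])
```

Each of these steps embeds an assumption (e.g., that the mean is a sensible fill value), which is exactly why the preparation stage deserves as much scrutiny as the mining algorithm itself.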
The assumptions and limitations of the particular technique that is going to be applied have to be assessed, e.g., the specific number of clusters to be defined, and, in many cases, this will require multiple applications of the algorithm to fully determine these. Even then, the outcome has to be validated, either by rerunning the analysis on subsets of the data and/or using some particular measure of quality. Finally, once the procedure is understood well enough, the results can be interpreted and it can be applied to further and wider data samples. An important aspect of data mining is the incorporation of appropriate prior knowledge. Statistical inferencing (see section \ref{sec:2}) is one approach to this but it builds its arguments on probabilistic models of the data and not on the actual observed values. Thus its interpretations rest on the assumption that the model is a good description of reality and not on the observations. Folding the knowledge into the data mining algorithm at least means that any interpretations are data-based, even if the knowledge might be model-derived. From semantic constructs, such as ontologies, similarity metrics can be defined which encode the degree to which two concepts share information. These quantitative measures of conceptual similarity can then be incorporated into standard data mining algorithm formulations, giving knowledge-driven data mining. Of all the patterns discerned in the heavens by mediaeval scholars, the most vital was the {\em computus}, which allowed the determination of the date of Easter. The utility of the patterns that we have discovered in astronomical data has led to the discovery of new objects, improved processing, object detection and classification, and better photometric redshifts \cite{kddguide}.
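The similarity-metric idea can be illustrated with a toy taxonomy (the concept hierarchy and the path-overlap score below are invented stand-ins for a real ontology-based measure):

```python
# Toy taxonomy: child -> parent (hypothetical astronomical concepts).
PARENT = {
    "SNIa": "supernova", "SNII": "supernova",
    "supernova": "transient", "microlensing": "transient",
    "transient": "object",
}

def ancestors(concept):
    """Path from a concept up to the taxonomy root, inclusive."""
    path = [concept]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def similarity(a, b):
    """Fraction of the two ancestor paths that is shared: a crude
    path-based stand-in for an ontology similarity metric."""
    pa, pb = ancestors(a), ancestors(b)
    shared = len(set(pa) & set(pb))
    return 2.0 * shared / (len(pa) + len(pb))
```

Scores like these can then weight distances inside a clustering or classification algorithm, steering it towards groupings that respect the domain's conceptual structure.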
\section{The scholasticism of data} \label{sec:9} The {\em trivium} and the {\em quadrivium} created a scholastic culture in which all phenomena, both natural and artificial, were subject to interrogation and symbolic interpretation. The liberal arts not only conferred the necessary skills to uncover the knowledge hidden throughout creation but provided a framework onto which these discoveries could be attached and understood. In particular, the properties and relationships of numbers, unchanging and endless, were a path to divine revelation. Our desire to reveal the inner workings of the universe is unchanged but we no longer require it to be numinous. The scientific method which arose out of the dialectic criticisms of the Middle Ages is founded on rational thought and logic, dealing with hard data and facts, rather than association and exegetical consistency. We have shown, however, how the same themes run through our contemporary approach. In our vast data sets, we are still concerned with the structures that we employ to represent our knowledge, communicating them well and correctly, and how we can meaningfully design and make them. We still need to understand what it is that we are studying and what rules apply. And we still need to know how to look for the meaningful patterns that we want to uncover. Only with this grounding can we hope to manage the overwhelming volumes and complexities of data that are facing us. Finally, this has to be a community effort, both international and interdisciplinary. The challenges for astronomy are the same for climate science, for genomics, for any 21$^{st}$ century enterprise. Efforts such as the International Virtual Observatory Alliance \cite{ivoa} are a step in the right direction but we need something that is truly universal, educating at all levels and in all subjects. Data, like its mediaeval counterpart, number, must be a first-class entity in our worldview, and not just from a technological standpoint. 
From a future vantage point, today will be regarded as the point from which we emerged from the Dark Ages of data and initiated a truly modern perspective. \subsubsection*{Acknowledgments} We would like to thank Norman Gray and Helen Angove for useful feedback and discussion about this paper.
\section{Introduction} Gaussian states, \emph{i.e.} states with Gaussian Wigner functions, are the key ingredient of many continuous variable (CV) Quantum Information protocols.\cite{RMP:12,oli:rev} However, in order to achieve some relevant tasks, non-Gaussianity (nonG) in the form of non-Gaussian states (states endowed with a non-Gaussian Wigner function) or non-Gaussian operations is either required or desirable. For instance, it has been recently demonstrated that nonG can be used to improve teleportation, cloning and storage; in addition, non-Gaussian operations are interesting for the realization of entanglement distillation and noiseless amplification.\cite{barbieri10} \par Several implementations of non-Gaussian states have been reported so far, in particular from squeezed light,\cite{lvovsky01,zavatta04} close-to-threshold parametric oscillators\cite{dauria05} and in superconducting circuits.\cite{hofheinz08} Such states have been mainly achieved in the low energy regime\cite{ourjoumtsev07,takahashi10,braczyk10} by using single-photon detectors, visible light photon counters\cite{waks06} and time-multiplexed photo-resolving detectors.\cite{osullivan08} More recently, we extended the investigation to the mesoscopic regime by exploiting the linear response of hybrid photodetectors\cite{allevi10a,allevi10b,spie12a} and Si-photomultipliers.\cite{spie12b} \par Since nonG is recognized as a resource for CV Quantum Information, the need to quantify the non-Gaussian character of states and operations naturally arises, and different non-Gaussianity measures have been proposed.\cite{genoni10} However, not all of them have an operational meaning. Moreover, in realistic situations, non-Gaussianity measures based only on experimentally accessible quantities are desirable.
To this aim, here we present an experimental work in which we compare the two measures introduced in Refs.~[18] and~[19] by testing them on phase-randomized coherent states or phase-averaged coherent states (PHAVs), a class of states exploited in communication channels and in decoy-state-based quantum key distribution protocols.\cite{OE12} The two measures, which are in perfect agreement with each other, can be considered a useful tool to quantify the non-Gaussian nature exhibited by the Wigner functions and confirmed by the experimental results. Moreover, the sensitivity of such non-Gaussian measures to small changes in the mean number of photons suggests their possible exploitation to test decoy states based on the class of PHAVs. \section{Quantifying nonG amount} \label{sec:theory} A single-mode PHAV $\varrho_\beta$ is a classical state obtainable by randomizing the phase $\phi$ of a coherent state $| \beta \rangle$, $\beta = |\beta|\,e^{i\phi}$. It is characterized by a density matrix diagonal in the photon-number basis and by Poissonian photon statistics. The state is obviously phase-insensitive and its Wigner function reads:\cite{bondani09a} \begin{equation} W_{\rm PHAV}(\alpha;\beta) = \int_{0}^{2 \pi} \frac{d \phi}{2 \pi}\, \frac{2}{\pi}\, e^{-2|\alpha - \beta|^2}= \frac{2}{\pi} \, I_0(4|\alpha||\beta|)\, \exp[-2(|\alpha|^2+|\beta|^2)]\,, \label{eq:wignerPHAV} \end{equation} where $I_0(z)$ is the modified Bessel function. This function, endowed with a dip at the origin of phase space, is clearly non-Gaussian. \par Another state exhibiting a nonG nature, which can be useful for application to passive decoy state quantum key distribution,\cite{curty09} can be obtained from the interference of two PHAVs $\varrho_{\beta}$ and $\varrho_{\tilde{\beta}}$ (2-PHAV hereafter) at a beam splitter (BS) with transmissivity $t$.
The states outgoing the BS are still diagonal in the photon number basis and the transmitted one is characterized by the following Wigner function: \begin{equation} \label{eq:wigner2PHAV:B} W_{\rm 2-PHAV}(\alpha;\beta,\tilde{\beta},t) = \int_{0}^{2 \pi} \frac{d\tilde{\phi}}{2\pi}\, W_{\rm PHAV}(\alpha-\tilde{\beta}\sqrt{1-t};\beta\sqrt{t}) \end{equation} where $\tilde{\beta} = |\tilde{\beta}|\,e^{i \tilde{\phi}}$ and the function in the integral is given by Eq.~(\ref{eq:wignerPHAV}). Obviously, the Wigner function of the reflected mode can be obtained by replacing $t$ with the reflectivity $(1-t)$. In both cases, the amount of nonG can be quantified by means of two recently introduced measures. One of them is based on the Hilbert-Schmidt distance from a Gaussian reference state, namely: \begin{equation}\label{nonGA} \delta_{\rm A}[\varrho_{\beta}] = \frac{D^2_{\rm HS}[\varrho_{\beta}, \sigma]}{\mu[\varrho_{\beta}]}=\frac{\mu[\varrho_{\beta}]+\mu[\sigma]-2\kappa[\varrho_{\beta}, \sigma]}{2\mu[\varrho_{\beta}]}, \end{equation} where $\mu[\varrho]$ is the purity of the state $\varrho$, $\sigma$ is a reference Gaussian state with the same covariance matrix as the state $\varrho_{\beta}$ under investigation and $\kappa[\varrho_{\beta}, \sigma]=$Tr$[\varrho_{\beta} \sigma]$ denotes the overlap between $\varrho_{\beta}$ and $\sigma$.\cite{genoni07} \par The second measure we address is given by: \begin{equation}\label{nonGB} \delta_{\rm B}[\varrho_{\beta}] = S(\sigma) - S(\varrho_{\beta}), \end{equation} where $S(\varrho) = -\hbox{Tr} [\varrho \ln \varrho]$ is the von Neumann entropy of the state $\varrho$.\cite{genoni08} As both PHAV and 2-PHAV are diagonal in the photon-number basis, their reference state is a thermal equilibrium state, with the same mean number of photons $N=|\beta|^2$.
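For states diagonal in the photon-number basis, the traces in the two measures above reduce to sums over photon-number statistics, so both can be evaluated numerically. A minimal sketch for a single PHAV (Poissonian statistics) against its thermal reference, with an illustrative mean photon number of 2 (not a value from the experiment):

```python
import math

def poisson(n, N):
    # Photon-number statistics of a PHAV with mean photon number N.
    return math.exp(-N) * N**n / math.factorial(n)

def thermal(n, N):
    # Photon-number statistics of the thermal reference state.
    return N**n / (1.0 + N)**(n + 1)

def delta_A(N, nmax=80):
    # Hilbert-Schmidt measure D_HS^2 / purity; all traces reduce to sums
    # because both states are diagonal in the number basis.
    p = [poisson(n, N) for n in range(nmax)]
    t = [thermal(n, N) for n in range(nmax)]
    d_hs2 = 0.5 * sum((pn - tn) ** 2 for pn, tn in zip(p, t))
    purity = sum(pn * pn for pn in p)
    return d_hs2 / purity

def delta_B(N, nmax=80):
    # Entropic measure S(reference) - S(state); for a diagonal state the
    # von Neumann entropy equals the Shannon entropy of its statistics.
    s_ref = (N + 1.0) * math.log(N + 1.0) - N * math.log(N)
    s_state = -sum(poisson(n, N) * math.log(poisson(n, N)) for n in range(nmax))
    return s_ref - s_state

dA, dB = delta_A(2.0), delta_B(2.0)
```

Both quantities are strictly positive here, reflecting the non-Gaussian character of the PHAV relative to its Gaussian (thermal) reference.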
Moreover, both measures turn out to depend only on quantities that can be accessed experimentally by direct detection, since both Eqs.~(\ref{nonGA}) and ~(\ref{nonGB}) can be expressed in terms of photon-number distributions.\cite{OE12} In particular, Eq.~(\ref{nonGA}) reduces to \begin{equation}\label{nonGA1} \delta_{\rm A}[\varrho_\beta] = \frac{1}{2}\left[1-\frac{\sum_n \tau_n (2 p_n- \tau_n)}{\sum_n {p_n}^2}\right], \end{equation} where $\tau_n = N^n/(1+N)^{n+1}$ is the photon-number distribution of a single-mode thermal state having $N$ mean number of photons and $p_n$ is the statistics of $\varrho_{\beta}$. In particular, the PHAV is described by a Poissonian distribution \begin{equation} p^{\rm PHAV}_n = \exp{(-|\beta|^2)}\, |\beta|^{2n}/n!,\nonumber \end{equation} whereas the 2-PHAV is characterized by a non-trivial two-peaked distribution:\cite{zambra07,bondani09b} \begin{eqnarray} \label{eq:phaseaver} p^{\rm 2-PHAV}_{n} &=& \frac{A^{n}}{n!}e^{-A}\sum_{k=0}^n \left( \begin{array}{c} n \\ k \\ \end{array} \right) \frac{\left(-1\right)^k }{2\pi}\left(\frac{B}{A}\right)^k\frac{\Gamma\left(1/2+ k/2\right) \Gamma\left(1/2\right)}{\Gamma\left(1+ k/2\right)} \\ \nonumber &\mbox{}&\times {}_1F_2 \left[\left\{1/2+k/2\right\},\left\{1/2,1+k/2\right\},B^2/4 \right]\; , \end{eqnarray} in which $A = |\beta|^2+|\tilde{\beta}|^2$, $B=2 |\beta||\tilde{\beta}|$ and $_1F_2(a,b,z)$ is the generalized hypergeometric function.\\ On the other hand, Eq.~(\ref{nonGB}) becomes \begin{equation}\label{nonGB1} \delta_{\rm B}[\varrho_\beta] = (N+1) \ln (N+1)-N \ln N +\sum_n p_n \ln p_n. \end{equation} \section{Experimental results and discussion} \label{sec:experiment} We generated the class of PHAVs by exploiting the second harmonics (@ 523 nm, 5-ps pulses) of a mode-locked Nd:YLF laser amplified at 500 Hz (High-Q Laser Production). \begin{figure}[htbp] \centering\includegraphics[width=0.65\textwidth]{fig1} \caption{Experimental setup.
F$_{\rm j}$: variable neutral density filter; BS: 50/50 beam splitter; Pz: piezoelectric movement; MF: multimode fiber (600~$\mu$m core).} \label{setup} \end{figure} According to the experimental setup sketched in Fig.~\ref{setup}, we obtained the single PHAV by sending the light pulses to a mirror mounted on a piezo-electric movement. Its displacement, controlled by a function generator, was operated at a frequency of 100~Hz and covered a 12~$\mu$m span. Moreover, we produced the 2-PHAV from the interference of two single PHAVs at a BS. A continuous variable density filter F$_1$ allowed us to change the total energy of the states, whereas a second filter F$_2$, inserted in the path of one of the two PHAVs, was used to change the balancing between the two components of the 2-PHAV. As the states to be characterized can be fully described by their photon-number distributions, we implemented a direct detection scheme involving a photon-counting detector, i.e.\ a hybrid photodetector (HPD, R10467U-40, maximum quantum efficiency $\sim$0.5 at 500 nm, Hamamatsu). This detector is characterized not only by a partial photon-counting capability, but also by a linear response up to 100 photons. Thanks to these properties, the HPD can operate in the mesoscopic domain, where the states are robust with respect to losses. The output of the detector was amplified (preamplifier A250 plus amplifier A275, Amptek), synchronously integrated (SGI, SR250, Stanford) and digitized (AT-MIO-16E-1, National Instruments). The gain of the detection apparatus was obtained in a self-consistent way without any \emph{a priori} calibration.\cite{bondani09c,andreoni09} This method allows us to reconstruct the detected-photon distributions of the states, which represent the basic element to retrieve their Wigner function.
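The statistics of the 2-PHAV obtained by interfering two PHAVs can also be generated numerically. The sketch below assumes the integral representation $p_n=\int_0^{2\pi}\frac{d\phi}{2\pi}\,e^{-(A+B\cos\phi)}(A+B\cos\phi)^n/n!$ underlying Eq.~(\ref{eq:phaseaver}), i.e.\ a Poissonian whose mean oscillates with the relative phase; the grid size and cutoff are illustrative choices:

```python
import math
import numpy as np

def p_2phav(n_max, b2, bt2, n_phi=2000):
    """Photon-number statistics of a 2-PHAV, obtained by numerically averaging
    a Poissonian of mean A + B*cos(phi) over the uniformly distributed
    relative phase phi. b2, bt2 are the mean photon numbers |beta|^2 and
    |beta~|^2 of the two components."""
    A = b2 + bt2
    B = 2.0 * math.sqrt(b2 * bt2)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    mu = A + B * np.cos(phi)                      # phase-dependent Poissonian mean
    n = np.arange(n_max)[:, None]
    fact = np.array([math.factorial(k) for k in range(n_max)])[:, None]
    return np.mean(np.exp(-mu) * mu**n / fact, axis=1)

# Example close to the measured state: |beta|^2 = 1.0, |beta~|^2 = 0.9
p = p_2phav(60, 1.0, 0.9)
print(p.sum(), (np.arange(60) * p).sum())  # normalization ~1, mean ~A = 1.9
```

Since the cosine averages to zero, the mean photon number of the resulting distribution is simply $A=|\beta|^2+|\tilde\beta|^2$, which provides a quick sanity check on the numerics.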
Such a goal can be achieved by mixing at a BS the state to be characterized with a coherent probe field whose amplitude and phase are continuously varied.\cite{cahill69,banas96,vogel96} In this case, as both the states are phase-insensitive, we actually reconstruct only a section of their Wigner distribution at fixed phase. \par \begin{figure}[htbp] \centering\includegraphics[width=0.49\textwidth]{fig2a} \centering\includegraphics[width=0.49\textwidth]{fig2b} \caption{Experimental reconstruction of a section of the phase-insensitive Wigner function of a PHAV (left panel), with $|\beta|^2=1.97$ and $\xi=0.999$, and of a 2-PHAV (right panel), with $|\beta|^2=1.03$, $|\tilde{\beta}|^2=0.91$, $\xi_{\rm P}=0.95$ and $\xi_{\rm S}=1$. Blue dots: experimental data; mesh: theoretical surface.} \label{wig-comp} \end{figure} In Fig.~\ref{wig-comp} we plot the experimental data (blue dots) of a single PHAV (left panel) and of a balanced 2-PHAV (right panel), endowed with nearly the same mean number of detected photons, $M_{\rm T}=1.97$ and $M_{\rm T}=1.94$, respectively. In each panel we also show the 3D theoretical expectations (mesh) for the PHAV and 2-PHAV, respectively:\cite{bondani09a} \begin{equation} \tilde{W}_{\rm PHAV}(\sqrt{\xi} \alpha) = W_{\rm PHAV}(\sqrt{\xi} \alpha) e^{-\sqrt{1-\xi}(|\alpha|+|\beta|)}, \label{eq:wignerPHAVoverlap} \end{equation} $\xi$ being the overall (spatial and temporal) overlap between the probe and the PHAV, and: \begin{equation} \tilde{W}_{\rm 2-PHAV}(\sqrt{\xi_{\rm P}} \alpha) = W_{\rm 2-PHAV}(\sqrt{\xi_{\rm P}} \alpha) e^{-[\sqrt{1-\xi_{\rm P}}|\alpha|+\sqrt{1-\xi_{\rm S}}(|\beta|+|\tilde{\beta}|)]}, \label{eq:wigner2PHAVoverlap} \end{equation} where $\xi_{\rm P}$ describes the overall overlap between the probe and the 2-PHAV and $\xi_{\rm S}$ the overall overlap between the two components of the 2-PHAV.
In Eqs.~(\ref{eq:wignerPHAVoverlap}) and (\ref{eq:wigner2PHAVoverlap}), $|\beta|^2$ and $|\tilde{\beta}|^2$ are now the mean numbers of photons we measured (see Fig.~\ref{wig-comp}), thus including the quantum efficiency. In fact, it is worth noting that for classical states the functional form of the Wigner function is preserved also in the presence of losses and its expression, given in terms of detected photons, reads $ \tilde{W}(\alpha) =\frac{2}{\pi} \sum_{m=0}^\infty (-1)^m p_{m,\alpha}^{\ el}$, where $p_{m,\alpha}^{\ el}$ represent the detected-photon distributions of the state to be measured displaced by the probe field.\cite{bondani09a} As evidenced by the very high values of the overlaps $\xi$, $\xi_{\rm P}$ and $\xi_{\rm S}$ reported in the caption of Fig.~\ref{wig-comp}, we achieved a very good superposition when aligning the system. From a direct comparison between the two panels it emerges that the states under investigation are non-Gaussian. In fact, the Wigner function of a single PHAV is characterized by a dip at the origin of the phase space, whereas that of a 2-PHAV with almost the same mean value exhibits a peak at the origin followed by a ``shoulder''. Moreover, it is worth noting that the measurements were actually performed in the mesoscopic photon-number domain, as the reconstruction of the Wigner functions was achieved by displacing either the PHAV or the 2-PHAV with a coherent field whose intensity was changed from zero up to four times the mean value of the states themselves. \par To quantify the nonG amount, we considered the measures introduced in Sec.~\ref{sec:theory}.
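The alternating-sum reconstruction formula can be checked on the simplest case: for a coherent state displaced by the probe, the detected statistics are Poissonian and the sum must reproduce the Gaussian Wigner function. The sketch below assumes real amplitudes and a phase convention such that the displaced statistics have mean $|\beta-\alpha|^2$; the probe scan values are illustrative:

```python
import math
import numpy as np

def wigner_from_counts(p):
    """Wigner reconstruction from the photon statistics of the displaced state:
    W(alpha) = (2/pi) * sum_m (-1)^m p_{m,alpha}."""
    m = np.arange(len(p))
    return (2.0 / np.pi) * np.sum((-1.0)**m * p)

# Check on a coherent state |beta>: displacing it by the probe alpha gives
# Poissonian statistics with mean |beta - alpha|^2, and the alternating sum
# must reproduce the Gaussian Wigner function (2/pi) exp(-2|beta - alpha|^2).
beta = 1.2
for alpha in [0.0, 0.5, 1.2, 2.0]:
    mu = abs(beta - alpha)**2
    p = np.array([math.exp(-mu) * mu**m / math.factorial(m) for m in range(80)])
    assert abs(wigner_from_counts(p) - (2/np.pi)*math.exp(-2*mu)) < 1e-12
```

The identity $\sum_m(-1)^m e^{-\mu}\mu^m/m!=e^{-2\mu}$ makes the agreement exact up to truncation of the photon-number cutoff.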
As from the experimental point of view we have access to detected photons rather than to photons, we calculated analogous expressions, $\epsilon_{\rm A}$ and $\epsilon_{\rm B}$, for detected photons, which represent lower bounds to nonG.\cite{genoni10,allevi10a} In particular, we found $\epsilon_{\rm A} = 0.207 \pm 0.004$ and $\epsilon_{\rm B} = 0.156 \pm 0.020$ for the single PHAV, whereas we obtained $\epsilon_{\rm A} = 0.036 \pm 0.005$ and $\epsilon_{\rm B} = 0.012 \pm 0.025$ for the 2-PHAV. The consistency between the two measures, together with the fact that measure $\epsilon_{\rm A}$ can be directly expressed in terms of Wigner functions\cite{genoni07}, demonstrates that a Wigner function exhibiting a dip at the origin of the phase space is more non-Gaussian than one characterized by a peak at the origin followed by a ``shoulder''. Moreover, the results prove that combining two non-Gaussian states does not necessarily lead to an increase of the overall nonG. \begin{figure}[htbp] \centering\includegraphics[width=0.49\textwidth]{fig3a} \centering\includegraphics[width=0.49\textwidth]{fig3b} \centering\includegraphics[width=0.49\textwidth]{fig3c} \caption{Upper left panel: nonG measures $\epsilon_{\rm A}$ and $\epsilon_{\rm B}$ as functions of the mean number of detected photons of almost balanced 2-PHAVs ($R=1.24$). Upper right panel: $\epsilon_{\rm A}$ and $\epsilon_{\rm B}$ as functions of the balancing between the two components of the 2-PHAV, at fixed mean number of detected photons ($M_{\rm T} = 4.12$) of the overall state. Lower panel: $\epsilon_{\rm A}$ and $\epsilon_{\rm B}$ as functions of the mean number of detected photons of single PHAVs.
Empty symbols: experimental data; full circles: theoretical expectations.} \label{nonGunbal} \end{figure} As the 2-PHAV is a state described by two parameters, namely the mean value $M_{\rm T}$ and the balancing $R$ between its two components, we investigated its nonG nature as a function of one of these variables while keeping the other one fixed. In the upper left panel of Fig.~\ref{nonGunbal} we show $\epsilon_{\rm A}$ and $\epsilon_{\rm B}$ as functions of the mean total energy of the 2-PHAV for a fixed choice of the balancing, namely $R=1.24$: the values of both measures increase as the mean number of detected photons increases. Moreover, in the upper right panel of the same figure we plot the lower bounds of the nonG measures as functions of the ratio between the two components at a fixed mean number of detected photons, $M_{\rm T}=4.12$. As one may expect, they monotonically decrease as the balancing improves. In fact, the most unbalanced condition reduces to the case of a single PHAV, whereas the most balanced one corresponds to a balanced 2-PHAV. \par For the sake of completeness, in the lower panel of the same figure we show the results obtained for the single PHAV as a function of the mean value. Also in this case the experimental data, which are superimposed on the theoretical expectations, confirm the accordance between the two measures, since both increase as the energy of the state increases. \par As in all the cases presented in Fig.~\ref{nonGunbal} the behavior of the two measures is very similar except for the absolute values, we decided to test their monotonicity by following the suggestion of Ref.~[17]. In the left panel of Fig.~\ref{comparison}, we plot the experimental values of measure B as a function of those of measure A for the three cases presented in Fig.~\ref{nonGunbal}.
It is evident that the two measures are monotonic functions of each other, even if their absolute values differ. In particular, measure B takes higher values and is thus more sensitive to small differences in the choice of the parameters. This property, together with the fact that $\epsilon_{\rm B}$ is characterized by smaller error bars than the other measure, can be considered a good criterion to choose one measure instead of the other to quantify the nonG amount and thus discriminate the states under investigation for possible applications. Nevertheless, it is interesting to note that the behavior of one measure with respect to the other is not described by a unique curve. In fact, in the case of the 2-PHAV at fixed ratio and variable total energy (blue dots) the curve is slightly different from those corresponding to the other two cases, namely the 2-PHAV at fixed total energy and variable ratio (red dots) and the single PHAV with variable energy (black dots). In order to prove that this difference does not depend on the reliability of the experimental data, in the right panel of Fig.~\ref{comparison} we present the results obtained by a simulation. \begin{figure}[htbp] \centering\includegraphics[width=0.49\textwidth]{fig4a} \centering\includegraphics[width=0.49\textwidth]{fig4b} \caption{Left panel: $\epsilon_{\rm B}$ as a function of $\epsilon_{\rm A}$ for the three experimental cases presented in Fig.~\ref{nonGunbal}.
Right panel: simulated behavior of $\epsilon_{\rm B}$ as a function of $\epsilon_{\rm A}$ for different choices of the parameters describing PHAVs and 2-PHAVs (see the text for details).} \label{comparison} \end{figure} We plot the theoretical behavior of a single PHAV at different mean numbers of detected photons as a black line, whereas colored squares + line indicate the 2-PHAV at fixed ratio ($R=0.2, 0.5, 0.8$) and variable total energy, and colored dots + line indicate the 2-PHAV at fixed total energy ($M_{\rm T}=2, 4, 6$) and variable ratio. It is evident that there is not a unique curve, as already shown by the experimental data. Nevertheless, we note that there are some limits in which the curves are superimposed (this happens either when the 2-PHAV is almost unbalanced or when it is weakly populated, as in both cases it reduces to a single PHAV) or intersect each other (such as when the 2-PHAV is characterized by a particular choice of total energy and ratio). \section{Concluding remarks}\label{s:remarks} In conclusion, we have presented an experimental investigation of the non-Gaussian nature of the class of PHAVs by reconstructing their Wigner function and by using two different measures, both based on quantities experimentally accessible by direct detection, to quantify the nonG amount. We proved the consistency of the two approaches and tested the monotonicity of the two measures. Nevertheless, the comparison performed on diagonal states belonging to the class of PHAVs for different parameter settings showed that there is not a unique curve describing the behavior of one measure with respect to the other. In addition, we discussed the choice of the best measure between the two proposed. According to the data, $\epsilon_{\rm B}$ seems preferable because it takes higher absolute values and shows a reduced sensitivity to experimental errors.
\section*{Acknowledgments} This work has been supported by MIUR (FIRB ``LiCHIS'' - RBFR10YQ3H).
\section{Introduction} \label{sec:intro} Software engineering data sets are often a key ingredient for performing empirical software engineering by testing a hypothesis through an experiment run on such data~\citep{Cuk05}. They can be used to empirically evaluate software product quality and development process attributes and also to create or verify estimation models~\citep{LS12}. In addition, publicly available data sets can help researchers perform so-called \emph{exact} replications of existing studies and thus address potential internal validity problems~\citep{SCVJ08}. These, in contrast to \emph{conceptual} replications, which follow an independently developed experimental procedure, attempt to control as many factors of the original study as possible, varying almost no (in \emph{dependent} replications) or only some (in \emph{independent} replications) conditions of the experiment~\citep{SCVJ08}. Yet, at least in the past, data sets for software engineering research were small in size and difficult to obtain~\citep{KPPJ02}. The situation has improved over the past decades with the emergence of open source software~\citep{KH06}, and the growing interest in sharing artifacts, including data sets, along with research publications. Such efforts are encouraged by initiatives such as the \textsc{acm sigsoft} Artifact Evaluation Working Group,\footnote{\url{https://github.com/acmsigsoft/artifact-evaluation}} which aims to integrate the artifact evaluation in the publication process, or the \emph{Recognizing and Rewarding Open Science in Software Engineering} (\textsc{rose}) festival, which salutes replication and reproducibility in software engineering. 
For these reasons researchers have collaborated~\citep{Cuk05} through various initiatives to develop data set repositories, such as the \emph{International Conference on Predictive Models and Data Analytics for Software Engineering} (\textsc{promise})~\citep{SM05}, or to promote the sharing and publication of data, as through the \textsc{us} National Institute of Standards and Technology's ``Error, Fault, and Failure Data Collection and Analysis Project''~\citep{Wal98}, the Mining Software Repositories (\textsc{msr}) conference data showcase track~\citep{MSR13}, or the \emph{awesome-msr} GitHub project.\footnote{\url{https://github.com/dspinellis/awesome-msr}} The \textsc{msr}\ data showcase track, established in 2013, aims at encouraging the research community to develop, share, and document software engineering research data sets. In the words of the 2013 \textsc{msr}\ conference chairs~\citep{ZMSD13}: \begin{quote} ``rather than describing research achievements, data papers describe datasets curated by their authors and made available to others. Such papers provide description of the data, including its source; methodology used to gather it; description of the schema used to store it, and any limitations and/or challenges of this data set.'' \end{quote} In the past decade tens of data set papers have been published in the \textsc{msr}\ conference. Given the effort that went into creating the data sets and publishing the corresponding papers, it is reasonable to investigate what the outcome has been. This study aims to answer the question by examining the usage of the data papers published in the \textsc{msr}\ proceedings in terms of use frequency --- to evaluate the data track's actual impact, users --- to examine researchers' potential reluctance to work with data coming outside their organization, and use purpose --- to identify the most used types of data papers, and the types of studies that mainly use them. 
The study's contributions are: \begin{itemize} \item the systematic collection of research that has been based on \textsc{msr}\ data papers, \item the categorization of the subjects tackled using \textsc{msr}\ data papers, \item the quantitative analysis of the \textsc{msr}\ data papers' impact, and \item the analysis of the community's opinion regarding data paper publication and use. \end{itemize} In the following Section~\ref{sec:related} we present an overview of related work. We then describe our study's methods in Section~\ref{sec:methods}, present our results in Section~\ref{sec:results}, discuss the findings, and identify our study's implications in Section~\ref{sec:discussion}. The study is complemented by the associated validity threats in Section~\ref{sec:validity}, followed by our conclusions in Section~\ref{sec:conclusions}. The data sets associated with our study (data papers, citing papers, categorizations, \textsc{msr}\ papers, citations, survey questionnaire and responses) are made available online.\footnote{\url{http://doi.org/10.5281/zenodo.3709219}} A shorter version of this study appeared in the 2019 \textsc{ieee}/\textsc{acm} 16th International Conference on Mining Software Repositories (\textsc{msr}\ '19)~\citep{KS19}. This work extends the conference paper by using multiple raters and established research methods for the manual clustering of data papers and the classification of strong citations. Furthermore, this work introduces a questionnaire survey study on all identified primary authors and users of data papers, and an analysis of weak citations (defined in Section~\ref{sec:citation-methods}) to data papers. \section{Related Work} \label{sec:related} A variety of evaluations have been conducted through research analysis. We recognize two major fields of evaluations: surveys and bibliometrics. Surveys review and summarize previously published studies of a particular topic through qualitative analysis. 
Webster and Watson~\citeyearpar{WW02} have authored a thorough guide on writing high quality literature reviews. On the other hand, bibliometric studies are statistical analyses of publication data. We consider our work part of the bibliometric research strand, and to the best of our knowledge, we are the first to conduct a quantitative review of data paper usage. A first step toward the assistance of bibliometric research in the field of software engineering models was made in 2004 by the organizers of the \textsc{promise} workshop, in their attempt ``to strengthen the community's faith in software engineering models''~\citep{Cuk05}. Authors of such models were asked to submit, along with their work, a related data set to the \textsc{promise} repository. Many individuals have carried out interesting quantitative bibliometric research on various topics. Robles~\citeyearpar{Rob10} conducted bibliometric research on papers that contained experimental analyses of software projects and were published in the \textsc{msr}\ proceedings from 2004--2009. His objective was to review their potential replicability. The outcome proves that \textsc{msr}\ authors prefer publicly available data sources from free software repositories. However, the amount of publicly available processed data collections was very low at the time, a fact we also state in Section~\ref{sec:rq1}. Concerning replicability, Robles found that only a limited number of publications are replication friendly. Liebchen and Shepperd~\citeyearpar{LS08} performed a different quantitative analysis on data sets. Their aim was to assess quality management techniques used by authors when producing data collections. They found that a surprisingly small percentage of studies take data quality into consideration. The authors of that work stress the need for more quality data rather than quantity data. 
To achieve this, they advise researchers to provide a clear description of the procedures they follow prior to their analysis and data archiving. They also encourage the use of automated tools for assessing quality and the use of sensitivity analysis. Another related publication is Cheikhi and Abran's~\citeyearpar{CA13} survey on data repositories. They noticed that the lack of structured documentation of \textsc{promise} and \textsc{isbsg} repositories hampered researchers' attempts to find specific types of data collection. To address this problem, they supplemented these data collections with additional information, such as the subject of the study, the availability of data files and of further descriptions, and also their usefulness for benchmarking studies. Information on each study's subject was analyzed following the corresponding classification of the data studies, reflecting the classification we subsequently perform on the {\sc msr} data papers (Section~\ref{sec:rq1}). In the field of Systems and Software Engineering, the 13-part study series by Glass et al.~\citeyearpar{Gla94,WTGB11} assesses scholars and institutions based on the number of papers they have published in related journals. The majority of the studies span a five-year period covering overall the years 1993--2008. The progress of the study results indicates that the top three institutions up to 2003 were mainly from the \textsc{usa} and involved an equal number of industry research centers and academic institutions. Since 2004, the top three institutions are mainly academic from Korea, Taiwan, and, lastly, Norway. This change is also observed in the entire list of the 15 top-ranked institutions presented in the studies. \textsc{usa} was first in number of top-ranked institutions up to 2002, followed by Asia-Pacific, Europe, and, lastly, Middle East. Middle East has disappeared from the list since 2001. 
Additionally, the Asia-Pacific institutions have surpassed the \textsc{usa}'s since 2003, leaving Europe in last place. Regarding the type of the top-ranked institutions on the list, during the years 1993--2008 an average of 82\% were academic institutions, as opposed to the remaining 18\%, which were industry research centers. A second evaluation of the \textsc{isbsg} software project repository was carried out by Almakadmeh and Abran~\citeyearpar{AA17}. Their purpose was to assess the repository from the Six Sigma measurement perspective and to correlate this assessment with software defect estimation. They found that the \textsc{isbsg} Microsoft Excel data extract contains a high ratio of missing data within the fields related to the total number of defects. They consider this outcome a serious challenge, especially for studies that use the particular data set for software defect estimation purposes. The analysis of the Search Based Software Engineering (\textsc{sbse}) publications~\citep{FS11} is the first bibliometric research of this community, covering a ten-year list of studies, from 2001--2010. The evaluation focuses on the categories of \emph{Publication}, \emph{Sources}, \emph{Authorship}, and \emph{Collaboration}. Estimations of various publication metrics are included for the following years. Along with the metric forecasting, the authors also studied the applicability of bibliometric laws in \textsc{sbse}, such as those by Bradford~\citeyearpar{Bra85} and Lotka~\citeyearpar{Lot26}. In the same context, Harman et al.~\citeyearpar{HMZ09} assessed research trends, techniques, and their applications in \textsc{sbse}. They classified the \textsc{sbse} literature in order to extract specific knowledge on distinct areas of study. Then, they performed a trend analysis, which supplied them with information on activity in these areas.
Finally, for each area of study, they recognized and presented opportunities for further improvement, and avenues for supplementary research. The work of Gu~\citeyearpar{Gu04} is another interesting bibliometric analysis. The main point of evaluation in this study is the productivity of authors in the field of Knowledge Management (\textsc{km}). To conduct the analysis, Gu collected articles published in the (former) \textsc{isi} Web of Science\footnote{\url{https://www.webofknowledge.com}} from 1975--2004. He then recorded all unique productive authors, along with their contribution and authorship position, in order to examine their productivity and degree of involvement in their research publications. The results indicate that 86\% of the authors have only written one publication. As far as citation frequency is concerned, Gu demonstrates its significant correlation with the reputation of the journal it has been published in. On the other hand, his findings reveal no correlation between \textsc{r\&d} expenditures and research productivity or citation counts. In the field of Requirements Engineering, Zogaan et al.~\citeyearpar{ZSMA17} conducted a systematic literature review on 73 data sets used for software traceability over a fifteen-year period, between 2000--2016. Using both manual and automated methods they selected studies that have used data sets, case studies, or empirical data, to develop, validate, train, or test traceability techniques. Analyzing these studies they identified that healthcare and aerospace are the two most frequent domains represented by traceability data sets. The majority of the data sets are \textsc{oss}, followed by academic and industrial. Concerning availability, almost 40\% of the data sets examined are not available for reuse, originating mainly from industry and academia. On the contrary, almost all \textsc{oss} data sets are available. 
To assess the quality of traceability data sets, the authors designed a framework consisting of quality characteristics such as availability, licensing, completeness, trustworthiness, and interpretability. \section{Methods} \label{sec:methods} We framed our investigation on the usage of \textsc{msr}\ data papers in terms of the following research questions. \begin{description} \item[RQ1] \emph{What data papers have been published?} We answer this by finding all data papers published in the \textsc{msr}\ proceedings by hand, and further elaborate by manually clustering them based on the year of publication and the content of the data sets. \item[RQ2] \emph{How are data papers used?} We answer this by collecting all citations to \textsc{msr}\ data papers by hand and manually classifying them according to their subject and authors. \item[RQ3] \emph{What is the impact of published data papers?} We answer this through the statistical analysis and visualization of the citations and their slicing according to their type. \item[RQ4] \emph{What is the community's opinion regarding data paper publication and use?} We answer the following subquestions through a web-based survey study on 108 authors and users of data papers. \begin{description} \item[Q4.1] \emph{What motivates people to produce a data paper?} \item[Q4.2] \emph{How much effort is required to produce a data paper?} \item[Q4.3] \emph{What are the characteristics of a useful data set?} \item[Q4.4] \emph{What characteristics prevent a data set from being used?} \item[Q4.5] \emph{What direction should data papers follow in the future?} \end{description} \end{description} To answer the above research questions, we performed a mixed methods study. 
In order to ensure consistency of the manual processes that we performed for the analysis of the research usage of the data papers (Section~\ref{sec:citation-methods}), the clustering of them (Section~\ref{sec:data-paper-methods}), and the classification of their strong citations (Section~\ref{sec:citation-methods}), we employed certain guidelines for systematic literature reviews and systematic mapping studies. Furthermore, to answer RQ4, we performed a survey study employing survey research principles (Section~\ref{sec:survey}). \subsection{RQ1: Data Paper Collection and Clustering} \label{sec:data-paper-methods} We first obtained all data papers of the proceedings of the International (Working) Conference on Mining Software Repositories (\textsc{msr}). By the term \emph{data papers} we refer to all papers included in the data showcase track of the \textsc{msr}\ proceedings, as well as other papers from older proceedings that primarily provide a data set --- consider e.g. Conklin et al.'s~\citeyearpar{CHC05} collection of \textsc{floss} data and analyses. To acquire the aforementioned papers, we searched through the programs of the \textsc{msr}\ conferences on their respective websites. Programs that contained an explicit \emph{Data Showcase} section immediately informed us about the data papers of the particular year. In programs that did not include the aforementioned section, we manually searched for potential research offering data sets. From the gathered studies, those which genuinely offered complete data sets were included in our data paper archive. Following the collection, we sorted the data papers into distinct clusters according to their topic by combining methods of two prominent studies. From the systematic mapping studies in software engineering~\citep{PFMM08}, we applied the classification scheme of abstract keywording. 
In addition, we followed two data extraction approaches suggested in the work of~\citet{BKBT07} on systematic literature review within the software engineering domain. The first approach introduces the use of two reviewers performing individually the data extraction process and discussing their disagreements, while the second approach proposes the use of a data extractor and a data checker. According to the above methods, the first and third authors of this paper individually labeled all data papers with keywords. For each data paper, the two authors read the abstract and extracted keywords related to the content of the data set provided by the particular data paper. For ambiguous abstracts that hampered the extraction of meaningful keywords, the two authors also studied the introduction or conclusion sections of the paper~\citep{PFMM08}. The keywords were either \emph{in vivo}~\citep{GS67} --- when representative phrases could be extracted as is from the aforementioned sections, or otherwise constructed by the authors. Following the individual keyword-labeling process, the two authors met, discussed their keywords and agreed on a final set of keywords by refining, merging, and renaming the initial ones. In this way, a structured keyword set was formed consisting of the general keywords \emph{code} and \emph{people} (after observing that all individual keywords could be divided into these two groups), and more specialized keywords. \emph{Code} and \emph{people} were used to signify whether a data paper mostly targeted the software development process or the human factor respectively, while multiple specialized keywords could be assigned to it. Using the latter keyword set, the same authors repeated the labeling process for the data papers together. Supplementary keywords that appeared during the second round of keyword assignment were added to the final keyword set. 
Once the second round was completed, the two authors grouped together the conceptually related keywords. Through this process, the clusters of data papers were formed and then named accordingly. Finally, the first and third authors assigned each data paper to the most conceptually relevant cluster (i.e., in case a paper could be assigned to more than one cluster, the authors selected the one they considered most descriptive of its content). To ensure the correct mapping of the papers to the clusters, the second author also assessed the cluster assignments, and then discussed and resolved his objections with the first and third authors. The agreement rate of the second author with the first and third authors was 91\%. From the 9\% of the disagreements, 57\% were resolved in favor of the second author, while the remaining 43\% were resolved in favor of the other two authors. \subsection{RQ2: Data Paper Use Identification and Classification} \label{sec:citation-methods} To conduct the analysis on the research usage of the data papers, we implemented the \emph{Identification of Research} and \emph{Study Selection} processes, as proposed in Kitchenham's~\citeyearpar{Kit04} work on procedures for performing systematic reviews. The identification of research was made through widely used and established platforms that provide citation data: \emph{Google Scholar},\footnote{\url{https://scholar.google.com/}} \emph{Scopus} --- Elsevier's abstract and citation database\footnote{\url{https://www.scopus.com/}} and the \emph{ACM Digital Library}.\footnote{\url{https://dl.acm.org/}} Most research papers that were not publicly available were provided to us through personal communication with the authors. After collecting the citations of a particular data paper, we followed the study selection process. Specific criteria were applied to the collected research, in order to ensure quality and validity of our analysis. 
First, we applied the whitelisting practice and kept conference publications, journal articles, master's and doctoral theses, books, and technical reports. Studies published in multiple venues, such as conference publications that are later extended into journal articles --- e.g. Krueger et al.'s \citeyearpar{KSAB18, KSAB19} study on the usage of cryptographic \textsc{api}s --- were only listed once. Priority was given sequentially to books, journal articles, conference proceedings, reports, and, lastly, theses. We additionally decided to retain only studies written in English, due to its widespread adoption for scientific communication. The main criterion for retaining citing studies was their actual use of the data sets of the papers they had cited. We term these \emph{strong} citations. Research that solely referred to a data paper without using its data set was not taken into account in our study. A representative example of a non-strong (\emph{weak}) citation is the study of repository badges in the npm ecosystem~\citep{TZKV18}, which has cited the collection of social diversity attributes of programmers~\citep{VSF15a} without using its data. Weak citations were manually analyzed to determine the most common ways in which data papers are used in such cases. The process of citation collection and segregation into strong and weak was carried out from July to November 2018. For data papers that had been strongly cited by at least one of their authors, we divided their strong citations into three categories. The first category contains references made by the data paper's first author. The second category includes strong citations made by at least one co-author of the respective data paper. The remaining references, made by none of the authors of the particular data paper, were placed in the third category.
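As an illustration, the three-way categorization of strong citations described above amounts to the following rule; the sketch below is ours, with hypothetical author names, and assumes that first-author citations take precedence over co-author ones.

```python
def citation_category(data_paper_authors, citing_authors):
    """Classify a strong citation of a data paper by author overlap.

    data_paper_authors: the data paper's author list, first author first.
    citing_authors: the authors of the citing study.
    """
    first, *coauthors = data_paper_authors
    if first in citing_authors:
        return "first author"   # first category
    if any(a in citing_authors for a in coauthors):
        return "co-author"      # second category
    return "unrelated"          # third category

# Hypothetical example: a data paper by Alice and Bob,
# cited by a study whose authors include Bob.
print(citation_category(["Alice", "Bob"], ["Bob", "Carol"]))  # co-author
```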
Furthermore, we classified the collected strong citations according to the knowledge areas of the Guide to the Software Engineering Body of Knowledge~\citep{SWEBOK14} (\textsc{swebok}). Again, we followed a combination of the two data extraction approaches of \citet{BKBT07} described in Section~\ref{sec:data-paper-methods}. The first and second authors of this paper individually assigned each strong citation to a particular \textsc{swebok} knowledge area after reading its abstract. Similarly to Section~\ref{sec:data-paper-methods}, in cases of ambiguous abstracts, they also read the introduction or conclusion sections. Next, the two authors met, discussed, and partially resolved their disagreements. After that, 16\% of the total citation categorizations remained conflicting. These disagreements were resolved by the paper's last author, who made his own selection among all knowledge areas. The selections of the first two authors were not divulged to him to avoid bias. Through this process, 30\% of the pending disagreements were resolved in favor of the first author, and 15\% in favor of the second author; hence, for 45\% of the pending disagreements the last author agreed with one of the first two authors. The last author's opinion prevailed in the remaining 55\% of the disagreements, where all three assigned knowledge areas were conflicting, due to his long experience in software engineering and his extended familiarity with the \textsc{swebok} knowledge areas. \subsection{RQ3: Citation Analysis} \label{sec:citation-analysis} To assess in an objective manner the impact of \textsc{msr}\ data papers compared to other \textsc{msr}\ papers, we collected all \textsc{msr}\ papers and coupled them with citation data provided by Scopus. This process differs from the one described in the preceding Section~\ref{sec:citation-methods}, because citations are not manually evaluated regarding actual use, and are retrieved only from a single source (Scopus).
Consequently, the collected metrics are only appropriate for assessing relative rather than absolute impact. We first created a data set of all 1267 \textsc{msr}\ papers by downloading the complete \textsc{dblp} computer science bibliography database,\footnote{\url{https://dblp.org/}} and filtering its \textsc{xml} records to obtain only those whose \emph{booktitle} tag contained \emph{MSR}. We split the \textsc{msr}\ papers at hand into two sets: data papers (as determined in Section~\ref{sec:data-paper-methods}) and the rest. We also split the \textsc{msr}\ papers by year to simplify the selection of samples. Furthermore, we created a collection mirroring the yearly distribution of data papers in order to compare in a fair manner citations to data papers against citations to other \textsc{msr}\ papers. We created this collection as follows. For each year in which $N$ data papers were published, we randomly chose $N$ non-data papers from the \textsc{msr}\ papers published in the same year. To assess research building on data papers, we also created a set of \textsc{msr}\ papers that cite \textsc{msr}\ data papers. We did this by calculating the intersection between all \textsc{msr}\ papers and the papers that use them (as determined in Section~\ref{sec:citation-methods}). Although this new set of papers citing data papers is not exhaustive (it only contains \textsc{msr}\ papers), it allows us to compare the citation metrics of these papers against those of a known tractable population, namely \textsc{msr}\ papers as a whole. We then used the Scopus \textsc{rest api} to obtain the number of times each \textsc{msr}\ paper was cited. The citation data obtained in this step are not comparable with those we obtained through the widespread search and manual filtering described in Section~\ref{sec:citation-methods}, because they may be associated with false positives and false negatives. 
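The yearly-matched sampling described above is, in effect, stratified random sampling by publication year. A minimal Python sketch follows; the paper records and seed are illustrative, not our actual scripts.

```python
import random
from collections import Counter, defaultdict

def sample_mirroring(data_papers, non_data_papers, seed=0):
    """For each year with N data papers, draw N random non-data papers
    published in the same year. Papers are (title, year) pairs."""
    rng = random.Random(seed)
    pool_by_year = defaultdict(list)
    for paper in non_data_papers:
        pool_by_year[paper[1]].append(paper)
    sample = []
    for year, n in Counter(year for _, year in data_papers).items():
        sample.extend(rng.sample(pool_by_year[year], n))
    return sample

# Illustrative input: two data papers from 2013, one from 2014.
data = [("dp1", 2013), ("dp2", 2013), ("dp3", 2014)]
pool = [(f"p{i}", 2013) for i in range(5)] + [(f"q{i}", 2014) for i in range(5)]
mirror = sample_mirroring(data, pool)
# The sample mirrors the yearly distribution of the data papers.
```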
These counts nevertheless allow comparisons between the different \textsc{msr}\ sets, because all citation metrics are obtained through the same methods employed by Scopus and thus probably suffer from the same types of bias. Finally, we joined the Scopus citation data with the sets obtained in the previous steps. We then calculated simple descriptive statistics for the citation counts of the following sets: \begin{itemize} \item all \textsc{msr}\ data papers, \item a sample of \textsc{msr}\ non-data papers mirroring the yearly distribution of data papers, \item all \textsc{msr}\ non-data papers for years in which data papers were published, and \item \textsc{msr}\ papers citing \textsc{msr}\ data papers. \end{itemize} \subsection{RQ4: Survey Planning, Execution, and Analysis} \label{sec:survey} To conduct our survey study on authors and users of data papers, and thereby explore our community's view regarding data paper publication and use, we followed the set of ten activities introduced in Kitchenham and Pfleeger's~\citeyearpar{PK01,KP02a,KP02b,KP02c,KP02d,KP03} six-part series on the principles of survey research. \textbf{Survey design.} We adopted a cross-sectional, case-control observational study design (i.e., participants were surveyed about their past experiences at a particular fixed point in time), which is typical of surveys in software engineering~\citep{KP02a}. The goal of this study was \emph{to obtain further insights on the production, use, and desired future direction of data papers}. Hence, we framed the objectives of our survey in terms of the following questions.
\begin{description} \item[Q1] \emph{What motivates people to produce a data paper?} \item[Q2] \emph{How much effort is required to produce a data paper?} \item[Q3] \emph{What are the characteristics of a useful data set?} \item[Q4] \emph{What characteristics prevent a data set from being used?} \item[Q5] \emph{What direction should data papers follow in the future?} \end{description} To prevent a low response rate due to the summer vacation period that coincided with the study preparation, the survey was scheduled to run in early September 2019. \textbf{Survey sample.} The survey was conducted on two different samples in two different time periods --- September 2019 and January 2020. The reason behind the second round was to include more authors of strong citations. In the first round, our sample comprised all primary authors of the 81 data papers, along with an equally sized set of primary authors of strong citations. The unique primary authors of data papers were 71 in total. For the selection of the 71 (out of 419) primary authors of strong citations, we implemented the probabilistic method of simple random sampling~\citep{KP02d}. For that purpose, we used the random number generator of the Python 3.7 \emph{random} library with the default seed value, sampling without replacement. From the collected candidate respondents, 14 were primary authors of both data papers and strong citations, leading to a total of 128 unique candidates (instead of 142). In the second round, our sample comprised the remaining primary authors of strong citations, excluding those already included in the first sample. In this manner, our second sample was composed of 189 unique candidates, while the overall sample size was 317. \textbf{Survey instrument.} The survey questionnaire was organized into eight sections.
In the first section participants specified whether they were authors of data papers, along with the number of their data paper publications in \textsc{msr}\ and in any other venue. The next five sections were organized according to the objective questions, and were composed of both mandatory and optional open-ended, multiple-choice, and Likert-scale questions. We intentionally used even Likert scales to force participants to make a choice~\citep{GSB16}. To address Q1, data paper authors rated on a four-level Likert scale the extent to which a set of specific attributes motivates them to produce a data paper. There was also an open-ended question for additional motivational factors. For Q2, data paper authors specified through a multiple-choice question the effort-months they need on average to produce a data paper. For Q3 and Q4, both authors and users of data papers evaluated on a four-level Likert scale the importance of a set of particular characteristics for the selection or avoidance of a data set for their research. The same characteristics were included in both questions: in our view, an attribute considered useful to some extent is not necessarily discouraging to the exact same extent, and this can affect the overall ranking of the characteristics. An open-ended question was included for additional useful or discouraging attributes of data sets. To address Q5, respondents listed through open-ended questions what data papers they would like to see published in the future, and from what sources new data could be derived. The following section included demographic questions aiming to assess the diversity of the responses. Through the final section of the survey we collected feedback regarding the completeness of the questionnaire; respondents assessed whether the objective questions were sufficiently or weakly addressed, and left their comments. Finally, they could leave their e-mail address to receive a report with the survey results.
To ensure anonymity, the collected e-mail addresses are not publicly distributed within the online available data set of survey responses. \textbf{Survey evaluation.} To evaluate and refine the questions of the survey, and to calculate the average time required to complete it, we initially performed a pilot study. The sample of the pilot study was composed of 23 members of our laboratory (two faculty members, three senior, six associate, and twelve junior researchers), four external faculty members, and one more senior researcher. In total, 28 potential respondents constituted the sample of the pilot study. The pilot study ran from July 25th to August 1st, 2019, and nine responses were received (32\% response rate). Respondents of the pilot study were asked at the beginning of the questionnaire to enter their current local time. By subtracting it from the completion timestamp (which, unlike the starting timestamp, was recorded automatically), we calculated the average time required at 18 minutes. \textbf{Survey operation.} Both the pilot and the final survey were distributed as a \emph{Google form}, which the candidate participants were invited to complete through an invitational mail. Although the mailing process was automated, it was personalized by addressing the candidates by name, and by including details on how they were selected and which of their research papers (data papers and/or strong citation papers) the survey involved. The candidates were informed of the average time required for the questionnaire completion (rounded up to 20 minutes), along with the goal and the objectives of the survey study. \\ From the total of 317 mails sent as part of the final survey, 39 failed to be delivered. These failures involved twelve e-mail addresses no longer in use, 26 rejected recipients, and one wrong e-mail address. We thus consider our final sample to total 278 potential participants.
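The sample bookkeeping above can be double-checked with elementary arithmetic; the figures below are taken from the numbers reported in this section.

```python
# First round: 71 data-paper authors plus 71 sampled strong-citation
# authors, 14 of whom belong to both groups.
first_round = 71 + 71 - 14            # 128 unique candidates
second_round = 189                    # remaining strong-citation authors
invited = first_round + second_round  # 317 mails sent

# Delivery failures: 12 dead addresses, 26 rejected recipients, 1 wrong address.
failures = 12 + 26 + 1
final_sample = invited - failures
print(final_sample)  # 278

# 108 completed questionnaires correspond to the reported 39% response rate.
print(round(100 * 108 / final_sample))  # 39
```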
\\ The final survey ran from September 2nd to 24th, 2019, and from January 24th to February 16th, 2020. In both rounds we aimed for a three-week duration, but we would briefly reopen the survey when candidate participants requested it. \\ A reminder mail was distributed to potential respondents ten days after the invitational mail in both rounds. Verified respondents who had either answered our initial mail or had left their e-mail address in their survey response were excluded from the recipient list of the reminder mail. The final survey received 108 responses (39\% response rate, calculated on the basis of the final sample size of 278). \textbf{Survey analysis.} Q3--Q5 were analyzed from the perspective of authors and users combined, and of users independently. We applied manual pair coding~\citep{SPP08} to summarize the results of the six open-ended questions. For the first survey round, the first and second authors of this paper together applied codes to all open-ended responses, following a mixed approach of line-by-line and sentence-by-sentence coding~\citep{Cha16}. Next, they combined conceptually related codes by generalizing or specializing them, and integrated them into distinct groups. For the second survey round, the same authors used the first group of codes to annotate the new responses~\citep{GSB16}. In case a response was not connected to any group, or was connected but also introduced further ideas, the authors would apply new codes to it. At the end, the generalization-specialization and grouping process was repeated for the new codes. \subsection{RQ4: Survey Participants} \label{sec:participants} The questionnaire was completed by 108 respondents of various age groups. Concerning their current occupation, 37\% (24) were academic staff, 24\% (15) worked in industry, 19\% (11) were post-doctoral researchers, and 19\% (5) were doctoral students.
From the 108 respondents of the survey, 30\% (32) were primary authors only of strong citation papers, as opposed to the remaining 70\% (76) who were primary authors of data papers, a portion of them also being authors of strong citation papers. Of the 76 data paper authors, one author's data papers have not been published in any venue, 42\% (32) have published only one data paper, 26\% (20) have published two, 11\% (8) have published three, 5\% (4) have published four, and three authors have five, six, and seven publications respectively. Furthermore, 3\% (2) of the authors have published eight data papers, followed by another 3\% (2) with ten publications. Lastly, four authors have 11, 20, 25, and 30 data paper publications respectively. Concerning data paper publications in the \textsc{msr}\ conference, 22\% (17) of the authors have no \textsc{msr}\ data papers, 55\% (42) have one, 15\% (11) have two \textsc{msr}\ publications, 3\% (2) have three publications in \textsc{msr}, 4\% (3) have four, and one is the primary author of five \textsc{msr}\ data papers. \section{Results} \label{sec:results} The findings of our study are framed in respect to the four research questions. \subsection{RQ1: What data papers have been published?} \label{sec:rq1} \begin{table}[t] \renewcommand{\arraystretch}{1.4} \caption{\label{tab:data-papers-by-year}MSR Data Papers by Year\newline There has been a significant rise in the number of data papers since 2013, the year that the MSR data showcase track was established.} \centering \begin{tabular}{lL{10.5cm}} \hline Year & Data Papers \\ \hline \input{data-papers-by-year} \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth, keepaspectratio]{Fig1.pdf} \caption{Timeline of the data papers and the strong citations.
Each year depicts the number of data papers published in the particular year and the number of studies published in the particular year that are based on any data paper. The number of strong citations to data papers is constantly rising, indicating that the concept of data papers has long-term value.} \label{fig:yearly-papers} \end{figure} \begin{table*}[t] \renewcommand{\arraystretch}{1.3} \caption{\label{tab:data-paper-clusters}Data Paper Clusters and Strong Citations\newline Publications providing VCS primary and derived data are the most frequent data papers and the most often strongly cited ones.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l r r r r r} \hline Cluster & Data Papers & Str. cited DPs & Non-cited DPs & Str. Citation Ratio (\%) & Str. Citations \\ \hline VCS Primary and Derived Data & 29 & 20 & 9 & 69 & 312 \\ Software Faults, Failures, Smells & 17 & 11 & 6 & 65 & 42 \\ Software Evolution & 11 & 6 & 5 & 55 & 17 \\ Group Dynamics & 9 & 7 & 2 & 78 & 41 \\ Computational Linguistics & 7 & 5 & 2 & 71 & 20 \\ Software Models & 3 & 2 & 1 & 67 & 5 \\ Computing Education and Programming Practices & 3 & 1 & 2 & 33 & 1 \\ Enhanced Developer Data & 2 & 1 & 1 & 50 & 2 \\ \hline \textbf{Total} & 81 & 53 & 28 & 65 & 440 \\ \hline \end{tabular} } \end{table*} We identified the 81 data papers presented in Table~\ref{tab:data-papers-by-year}. These comprise about 15\% of the 507 papers published in the \textsc{msr}\ conference in the years when data papers appeared. The timeline of the data papers and the research based on them is depicted in Figure~\ref{fig:yearly-papers}. For each year, the number of published data papers is showcased, along with the number of studies published in the particular year, which have been based on any of these data papers. (It should be noted that this is not a cumulative graph; each year's outcome is independent of the previous.) 
There has been a significant rise in the number of data papers since 2013, which is the year when the data showcase track was founded~\citep{ZMSD13}. Until then, 2005 was the year with the most data papers. The smallest number of data showcase papers --- seven --- was published in 2016 and 2017. Nevertheless, in 2018 the number of data publications doubled to 15 (see Table~\ref{tab:data-papers-by-year}). From the clustering of the data papers, as described in Section~\ref{sec:data-paper-methods}, eight data clusters emerged. Table~\ref{tab:data-paper-clusters} shows for each cluster the number of data papers it comprises, the number of strongly cited and non-cited data papers, and the strong citations that have been made to them. We consider as \textit{non-cited} the data papers with either weak citations or no citations at all. The clusters are sorted in descending order according to the number of data papers they contain. \emph{VCS Primary and Derived Data} preponderates. This cluster consists of 29 studies that provide Version Control System (\textsc{vcs}) raw or processed data, along with descriptive statistics and analyses. The collection of Java source code of the Merobase Component Finder project~\citep{JHSA13} is part of this cluster. \emph{Software Faults, Failures, Smells} concerns 17 data sets of security failures, software inconsistencies, and bad programming practices detected in a variety of software applications and ecosystems. For instance, Vulin\textsc{oss} offers a data set of security vulnerabilities in open-source systems~\citep{GMS18}. \emph{Software Evolution} involves eleven collections with information on the evolution of artifacts, such as operating systems~\citep{Spi18}, software architectures~\citep{WY15}, and software packages~\citep{BAKO14a}.
The cluster of \emph{Group Dynamics} is composed of nine data papers that focus on social networks~\citep{MK13}, code reviewing~\citep{MBR13}, project roles~\citep{Squ13b}, and social diversity in programming teams~\citep{VSF15a}. Seven data papers were grouped together due to their common theme of facilitating studies related to natural language processing and information extraction~\citep{PML15, BLPH13}. These papers constitute the \emph{Computational Linguistics} cluster. \emph{Software Models} comprises papers providing simplified visual representations of software, such as simplified syntax trees~\citep{PANM16a} and \textsc{uml} models~\citep{RHHC17}. Papers that share records regarding novices' and experts' programming practices and abilities --- e.g. the list of Scratch programs of students~\citep{AHMR17} --- were classified in \emph{Computing Education and Programming Practices}. The aim of this cluster is to facilitate studies on computing education. The last defined cluster contains papers offering \emph{Enhanced Developer Data}, such as screen and real names extracted from Twitter~\citep{Squ13a} and personal characteristics (e.g. gender, age, civil status, nationality), education and level of English, and professional status~\citep{RASV14}. Only two papers represent this cluster; however, the uniqueness of their data sets sets them apart from the other clusters. \subsection{RQ2: How are data papers used?} \label{sec:rq2} \begin{table*}[t] \renewcommand{\arraystretch}{1.3} \caption{\label{tab:top-five-data-papers}Top Five Data Papers in Number of Strong Citations\newline The most strongly cited data paper offers a collection of primary and derived data extracted from GitHub.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{m{6cm} l r l r} \hline Title & Data Paper & Year & Cluster & Str.
Citations \\ \hline The GHTorrent Dataset and Tool Suite & \citep{Gou13} & 2013 & VCS Primary and Derived Data & 165 \\ AndroZoo: Collecting Millions of Android Apps for the Research Community & \citep{ABKL16} & 2016 & VCS Primary and Derived Data & 57 \\ Lean GHTorrent: GitHub Data on Demand & \citep{GVSZ14a} & 2014 & VCS Primary and Derived Data & 24 \\ Who Does What During a Code Review? Datasets of OSS Peer Review Repositories & \citep{HKYC13} & 2013 & Group Dynamics & 16 \\ The Maven Repository Dataset of Metrics, Changes, and Dependencies & \citep{RVV13} & 2013 & VCS Primary and Derived Data & 12 \\ The Eclipse and Mozilla Defect Tracking Dataset: A Genuine Dataset for Mining Bug Information & \citep{LPD13} & 2013 & Software Faults, Failures, Smells & 12 \\ The Emotional Side of Software Developers in JIRA & \citep{OMDT16} & 2016 & Computational Linguistics & 12 \\ \hline \end{tabular} } \end{table*} The 81 \textsc{msr}\ data papers are associated with 1169 citations to them, coming from 982 distinct studies (some studies cite multiple data papers). Out of the 1169 citations, 440 (419 distinct studies) use the data sets provided by the data papers (\emph{strong} citations). The remaining 729 citations (610 distinct studies) refer to data papers without utilizing the particular data sets (\emph{weak} citations). We were able to obtain most citations from digital libraries and the web. Six citations that were publicly unavailable were received from their respective authors through personal communication, as stated in Section~\ref{sec:citation-methods}, but no access was obtained for another three. (These three studies have been excluded from the total citations.) Table~\ref{tab:top-five-data-papers} depicts the most strongly cited data papers. 
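As a consistency check, the citation counts above can be reconciled by inclusion-exclusion; since a study may cite one data paper strongly and another weakly, the distinct-study counts of the two groups overlap.

```python
strong, weak = 440, 729                 # citations by type
strong_studies, weak_studies = 419, 610  # distinct studies per type
distinct_studies = 982                  # distinct citing studies overall

assert strong + weak == 1169            # total citations add up

# Studies that appear in both the strong and the weak group:
overlap = strong_studies + weak_studies - distinct_studies
print(overlap)  # 47
```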
Through manual analysis we found that the most common uses of weak citations were mentioning the work as an example ($n=524$, 77\%), attributing a work's statement ($n=59$, 9\%), using the work's methods ($n=47$, 7\%), reporting obtained statistics ($n=13$, 2\%), and presenting the study as related work ($n=12$, 2\%). \renewcommand{\sidecaptionsep}{0.6cm} \begin{SCtable}[][b] \renewcommand{\arraystretch}{1.2} \caption{\label{tab:strong-citation-categories}Areas of Strong Citing Studies\newline The studies that strongly cite data papers span the SWEBOK knowledge areas quite unevenly.} \resizebox{0.68\columnwidth}{!}{ \begin{tabular}{l r r} \hline SWEBOK Knowledge Area & Studies & Percentage\\ \hline Software Maintenance & 89 & 21.2 \\ Software Engineering Management & 63 & 15.0 \\ Software Engineering Professional Practice & 57 & 13.6 \\ Software Quality & 55 & 13.1 \\ Software Configuration Management & 46 & 11.0 \\ Software Construction & 43 & 10.3 \\ Software Design & 20 & 4.8 \\ Software Engineering Process & 19 & 4.5 \\ Software Testing & 15 & 3.6 \\ Software Engineering Economics & 6 & 1.4 \\ Software Engineering Models and Methods & 5 & 1.2 \\ Software Requirements & 1 & 0.2 \\ \hline \end{tabular} } \end{SCtable} Table~\ref{tab:strong-citation-categories} depicts the classification of the studies based on data papers according to the knowledge areas of the \textsc{swebok}. This suggests that research on \emph{Software Maintenance}, \emph{Software Engineering Management}, and \emph{Software Engineering Professional Practice} uses data papers to a considerable extent. On the other hand, only a small portion of research on \emph{Software Requirements}, \emph{Software Engineering Models and Methods}, and \emph{Software Engineering Economics} is facilitated by data showcase papers. \begin{figure*}[t] \centering \includegraphics[width=\textwidth, keepaspectratio]{Fig2.pdf} \caption{Use of data papers by their authors (\%).
Data papers used at least once by the same first author or any of his/her co-authors are represented by the number of strong citations made by the first author, the co-authors, and other unrelated teams. Of the total of 81 data papers, 37 have been used by the teams that authored them.} \label{fig:self-citations} \end{figure*} Furthermore, concerning the use of data papers by their respective authors, our findings show that 37 out of the 81 papers have been used by the teams that authored them. Specifically, 15 data papers have been used solely by their first author or his/her co-authors. Figure~\ref{fig:self-citations} depicts, for each data paper strongly cited at least once by the first author or the co-authors, the percentage of the uses that stem from the first author, the co-authors, and other unrelated teams. The data papers are sorted top to bottom in ascending order of the combined percentage of strong citations made by the first author and the co-authors. For instance, 67\% of the strong citations to the collection of \textsc{api}s usage information~\citep{SB15} were made by the first author. \subsection{RQ3: What is the impact of published data papers?} \label{sec:rq3} The relative impact of published data papers can be deduced from Table~\ref{tab:citation-metrics}, which compares citations to data and non-data papers, collected in the way described in Section~\ref{sec:citation-analysis}. (The three data papers missing from the table are those published in \textsc{msr}\ '05, which are not tracked by Scopus.) The table shows that data papers are typically cited less often than other papers of the \textsc{msr}\ conference, in terms of both the median and the average number of references. This holds both for the yearly-weighted sample and as a whole.
Also, \textsc{msr}\ papers that cite data papers appear to be cited about the same ($\tilde{x}=10$, $\bar{x}=15.7$) as other \textsc{msr}\ papers ($\tilde{x}=8$, $\bar{x}=17.0$), meaning that citing an \textsc{msr}\ data paper does not in itself promise greater popularity in terms of incoming citations. \renewcommand{\sidecaptionsep}{0.6cm} \begin{SCtable}[][t] \renewcommand{\arraystretch}{1.1} \caption{\label{tab:citation-metrics}Citation Metrics by Paper Type\newline Data papers are typically cited less often, compared to other papers of the MSR conference, in terms of the median and average number of references.} \resizebox{0.6\columnwidth}{!}{ \begin{tabular}{l rrrr} \hline Metric & Data Papers & Non-DP & Non-DP & Citing DP \\ & & (Sample) & (All) & \\ \hline \input{citation-metrics} \hline \end{tabular} } \end{SCtable} \renewcommand{\sidecaptionsep}{2.2cm} \begin{SCtable}[][t] \renewcommand{\arraystretch}{1.2} \caption{\label{tab:venues}Venues With Research Based on Data Papers \newline The majority are top-notch venues, indicating the high quality of studies that can be performed through data papers.} \resizebox{0.4\columnwidth}{!}{ \begin{tabular}{l r r} \hline Venue & Papers & Percentage \\ \hline \input{venues} \hline \end{tabular} } \end{SCtable} Table~\ref{tab:venues} shows the venues where research based on data papers has been published. We see that more than a third of the corresponding papers are published in top-tier conferences and journals. This showcases the high quality of research that is conducted based on data papers. We examined by hand the papers published in the Computing Research Repository (CoRR),\footnote{\url{https://arxiv.org/corr}} and found that almost all of them (19) are fairly recent (published in 2017 or 2018). This indicates that they are probably preprint versions of material that will eventually also appear in a conference or journal. The timeline of the data paper uses is depicted in Figure~\ref{fig:yearly-papers}.
The strong citations of all data papers were summed up and illustrated as yearly records. We see that strong citations have risen since 2014, which was expected after the data showcase track's introduction in 2013. Only six studies were identified before the track's establishment. In addition, we studied the growth of data paper use in a five-year window after the data papers' publication, and depicted it in Figure~\ref{fig:five-year-citations}. The five-year limit was chosen because it provided us with sufficient insights, without excluding too many papers that were less than five years old. Consequently, we included data studies published in the years 2005--2014. The majority of them reveal a peak in the number of strong citations during the second year of their existence, but appear to have a significant decrease of uses in the following year. Research based on data papers seems to plateau after the third year of a data paper's life. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth, keepaspectratio]{Fig3.pdf} \caption{Timeline of strong citations to data papers published in 2005--2014 over a five-year window. Each strongly cited data paper is represented with the same color along the years. The height of each color bar is relative to the number of strong citations. Data papers with the most strong citations during the second year of their existence seem to retain their citation number in the following years, or to obtain even more strong citations.} \label{fig:five-year-citations} \end{figure} \subsection{RQ4: What is the community's opinion regarding data paper publication and use?} \label{sec:rq4} In this section we present the answers of the survey respondents to the questionnaire in respect to the objective questions, and their feedback on the survey study.
\textbf{Q4.1: What motivates people to produce a data paper?} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Fig4.pdf} \caption{Motivational impact of specific attributes for data paper production. According to the responses of the primary data paper authors, the motivational impact of a set of eight predefined characteristics for data paper production is depicted on a four-level Likert scale and on a percentile basis of the total data paper authors. The majority of the participants think that particular data sets should be made available, and feel that publishing data papers is a worthwhile way to contribute to our community.} \label{fig:motivational-factors} \end{figure} All 76 data paper authors were requested to assess on a four-level Likert scale the motivational impact of a set of eight predefined attributes for data paper production. The collected answers to this question are illustrated in Figure~\ref{fig:motivational-factors}, where the attributes are sorted downwards in descending order of the \emph{Extremely} level. The majority of participants claim to a significant or extreme degree that publishing data papers is \emph{a worthwhile way to contribute to our community}, and that their \emph{contribution will be appreciated by the community}. In addition, they emphasize significantly that \emph{certain data sets should be made available to the community}, while others \emph{want to cover the lack of data in a particular research area}. To a similarly high degree, the responding authors of data papers mention that they usually \emph{have gathered the data in the context of other research}, or \emph{plan to use their published data in future research}.
Apart from the predefined set of attributes, in the related open-ended question regarding additional motivational aspects some participants also stated the expectation of obtaining a high number of citations, the challenging process of data paper production, validity, transparency, research reusability, and quality improvement as another set of motivating characteristics. Others recognized that data papers provide a more thorough data set representation and description, promote open science and potential partnerships, and reveal new research trends. A few respondents identified their skillfulness in data paper production as another valuable factor. \textbf{Q4.2: How much effort is required to produce a data paper?} From the 76 data paper authors who responded to the survey, 8\% (6) stated that they need less than an effort-month, 36\% (27) need from one to three effort-months, 34\% (26) require a total of four to six effort-months, 15\% (11) of the authors need from seven to nine effort-months, 7\% (5) need from ten to twelve effort-months, and one needs more than a year of effort to produce a data paper. \textbf{Q4.3: What are the characteristics of a useful data set?} In Figure~\ref{fig:useful-characteristics}, the assessment of a set of 15 characteristics regarding their importance in data set selection for research purposes is illustrated. The characteristics are sorted in descending order of the \emph{Very Important} level. All 108 respondents evaluated the particular attributes on a four-level Likert scale. The most important characteristics were found to be \emph{ease of use} (i.e., the ability of users to effortlessly obtain the information they are after), \emph{high data quality}, \emph{data freshness}, \emph{replicability of data set construction}, \emph{data schema documentation}, and \emph{documentation of the data collection methods}.
Less important characteristics included \emph{personal connection with the data set curators}, \emph{data set having been published as a data paper}, and \emph{data set having been highly cited}. Separating data paper users from authors in Figure~\ref{fig:useful-characteristics}, we observe a similar ranking of the characteristics. The main difference is in the first place, which for users is \emph{high data quality}, as opposed to \emph{ease of use} selected by authors and users combined. \begin{figure}[h!] \centering \includegraphics[width=\linewidth, height=0.45\textheight]{Fig5.pdf} \includegraphics[width=\linewidth, height=0.45\textheight]{Fig6.pdf} \caption{Encouraging characteristics in data set selection for data paper authors and users (top), and users isolated (bottom). Respondents assessed a set of 15 predefined characteristics on a four-level Likert scale. Ease of use and high data quality appear to be the most important characteristics in data set selection.} \label{fig:useful-characteristics} \end{figure} In addition to the above characteristics, through the complementary open-ended question, respondents stressed the importance of existing application examples, compatibility and extensibility with other data sets, data balance and integrity, data completeness, reproducibility, updatability, and ease of access and filtering. Participants also valued modification traceability, data novelty and diversity, data validation, continuous maintenance of data and support by the curators, and enhancement of existing data sets. Furthermore, documented threats and flaws, curation of duplicate data (e.g. repository forks) and code clones, inclusion of timestamps, goal-oriented data, anonymity of subjects, contextuality (i.e., the data set captures its context), duality (i.e., the data set contains both positive and negative samples where applicable), and usage metadata of the data set are also appreciated.
For software engineering fields with ongoing radical changes, such as malware detection, some remarked that the shelf life of a data set is short. \textbf{Q4.4: What characteristics prevent a data set from being used?} \begin{figure}[h!] \centering \includegraphics[width=\linewidth, height=0.45\textheight]{Fig7.pdf} \includegraphics[width=\linewidth, height=0.45\textheight]{Fig8.pdf} \caption{Discouraging characteristics in data set selection for data paper authors and users (top), and users isolated (bottom). Respondents assessed a set of 15 predefined characteristics on a four-level Likert scale. Among the most discouraging characteristics, low data quality stands out.} \label{fig:preventing-characteristics} \end{figure} For this question, all 108 respondents evaluated the degree to which a set comprising the negations of the attributes presented in the previous question discourages them from using a data set. The evaluation was again done on a four-level Likert scale and the results are presented in Figure~\ref{fig:preventing-characteristics}. Again, the attributes are sorted in descending order of the \emph{Extremely} level. Among the most discouraging characteristics, we discerned \emph{low data quality}, \emph{difficulty of data set use}, \emph{lack of documentation of the data collection methods}, \emph{lack of replicability of data set construction}, \emph{restrictive license}, and \emph{lack of data schema documentation}. Less discouraging characteristics included \emph{less known data set curators}, \emph{lack of personal connection with the data set curators}, and \emph{data set not having been published as a data paper}. Isolating data paper users from authors in Figure~\ref{fig:preventing-characteristics}, we notice the same characteristics in the last places and in the first position, with the intermediate attributes varying.
Users placed \emph{lack of replicability of data set construction}, \emph{stale data}, and \emph{no data verification methods employed in the construction} higher than authors and users combined. Beyond the above characteristics, respondents further recognized, through the related open-ended question, the discouragement derived from troubled access (download issues, difficult access, and tool incompatibility/dependency), data isolation, lack of feature variety, novelty, support by the curators, or extensibility, and the limited scope of a data set. \textbf{Q4.5: What direction should data papers follow in the future?} \emph{Future data papers.} According to the 108 responses, future data papers could draw from artificial intelligence and machine learning, alternative and evolving software engineering, logs analysis (e.g. build logs, test failure logs), continuous integration and DevOps (particularly operations of DevOps), collaborative software development, code metrics and analyses, and cross-disciplinary or domain specific processes. Furthermore, the respondents would appreciate shared data regarding health, fitness and performance, repository duplication and code cloning, human-centered and human-assessed data, data concerning new technologies, and data referring to security, social media, online computing courses, programming competitions, video material, and video games. Data from software services, industrial ecosystems, grey literature, databases, Internet of things firmware, voluntarily collected data, and data on hyper-parameter optimization are also desired. Lastly, some participants suggested conducting systematic literature reviews on published data sets, and producing metadata after curating them. \emph{Future data sources.} As far as data sources are concerned, many responses similar to the previous set of answers were observed.
Authors and users of data papers would like to see data extracted from sources of entertainment, such as music and video streaming, smart devices, surveys, sources with documented and valid data, as well as data from the sectors of education, health, energy, defense and security, manufacturing and retail, blockchain, finance, and autonomous driving. Moreover, participants suggested exploiting Alexa Rank, execution logs, code review systems, integrated development environments, activity sensors, domain specific data sources, and sources complementary to software repositories. Industrial cloud systems, safety critical software systems, software industries, human resources, and industries with security breaches are also proposed as prospective sources of data. Separating data paper users from authors, no variation was observed in their responses, since the majority of the above ideas regarding future data papers and data sources were also included in their answers. \textbf{Feedback} \emph{Overall.} 95\% (103) of the respondents assessed the completeness of the questionnaire as \emph{sufficient}, whereas the remaining 5\% (5) reviewed it as \emph{weak}. Overall, the survey was characterized as good and concise, with interesting aspects and sufficient mapping to the objective questions. Still, a respondent was not sure whether the open-ended feedback question was enough for assessing completeness. \emph{Objective questions.} Q1 was characterized as answerable, as opposed to Q2, which was characterized as ambiguous. A few participants separated the process of data retrieval as part of another research paper requiring different effort from the data paper composition. This could be the reason we observe such a wide spread in the responses. Q3 and Q4 were considered as mirroring questions by some respondents. However, the results of both the pilot and the final study partially contradict this opinion due to the asymmetry observed in the answers on an individual level.
For instance, although \emph{ease of use} is ranked first in Figure~\ref{fig:useful-characteristics} with 63\% of respondents considering it \emph{very important}, it is placed second in Figure~\ref{fig:preventing-characteristics} with 46\% of respondents considering it \emph{extremely} discouraging. Similarly, \emph{high data set curator reputation} was evaluated as \emph{important} by 48\% of participants, but only 8\% evaluated it as \emph{significantly} discouraging. We also observe that \emph{data freshness} is placed considerably higher than the equivalent \emph{stale data}. Regarding Q5, a few respondents considered it enjoyable, as opposed to some others who characterized it as ambiguous and unanswerable. Moreover, one participant remarked on the lack of a neutral option in the Likert scale questions. \emph{General comments.} Apart from comments on the questionnaire, some participants highlighted the significant effort required to produce a data paper, especially a highly citable one. According to them, such papers are a combination of good tooling architecture, documentation, novelty, generality, reproducibility, and clarity. Nevertheless, others expressed their concerns regarding data paper practical issues and troubled data sharing guidelines, such as the General Data Protection Regulation. Finally, an additional question was suggested concerning the number of existing data sets that researchers have used in their research. \section{Discussion and Implications} \label{sec:discussion} As evidenced by the large increase in the published data papers since the \textsc{msr}\ data showcase track was formalized in 2013, it is apparent that the track has catalyzed the publication of data papers. With data papers being more than 15\% of the \textsc{msr}\ publications in 2019, it is clear that the \textsc{msr}\ data showcase track has spurred a new type of publication, yielding each year a notable number of studies.
More generally, the data showcase track's success in driving the publication of data papers indicates that a suitably themed conference track can in some cases drive research toward a given direction. The categories of data papers (Table~\ref{tab:data-paper-clusters}) span both product and process, but product-oriented papers outnumber the process ones. This can be explained by the preponderance of publicly available product data, which are associated with open source software projects, over process data, which are more difficult to come by. Although past experience with calling for the publication of particular data types has not been encouraging~\citep{Wal98}, many years have passed since then and it might be worth trying to focus the \textsc{msr}\ call for data papers on specific topics each year, with an emphasis on software processes, in order to overcome the previous bias. The studies that strongly cite data papers span the \textsc{swebok} knowledge areas rather unequally. It seems that software maintenance and engineering management can be profitably studied using materials from \textsc{msr}\ data papers, but software requirements, economics, and engineering models and methods less so. Given the, by definition, primary importance of all \textsc{swebok} areas, it would seem that the \textsc{msr}\ data showcase track chairs could promote studies associated with the less covered areas by adjusting the track's call for papers to specifically invite data sets targeting them. We acknowledge, however, that for certain \textsc{swebok} areas, such as software economics, the release of data sets is hard due to the often proprietary nature of the corresponding data, while in others, such as software requirements, there is an established tradition to publish data sets together with research papers~\citep{ZSMA17}. Nevertheless, data sets for underrepresented \textsc{swebok} areas might have lasting impact in their subfield despite being less popular.
\begin{implication} The \textsc{msr}\ data showcase track chairs could target the call for data papers on process-oriented topics and less covered \textsc{swebok} areas, to possibly improve their footprint. \end{implication} Although one might expect that a data paper is typically cited mainly when it is actually used, our findings do not support this assertion. We manually identified 440 strong citations; far fewer than half of the 1169 total citations that were made to data papers according to our results. This demonstrates that citations to any kind of published studies (including data research) can be made for a variety of reasons. According to the manual analysis of the weak citations, the most prominent reasons are mentioning the work as an example, attributing a work's statement, and using the work's methods. Be that as it may, based on the difference between the citations to data papers and to other studies, there seems to be room for improving the data papers' use. \begin{implication} The actual use of data papers could potentially be increased through the promotion of open science initiatives by journal editors and conference program committees, such as the \textsc{acm} Artifact Review and Badging policy~\citep{Boi16}. \end{implication} With each data paper strongly cited on average 5.4 times, it appears that data papers are in general useful for conducting other empirical studies. Many of these studies are published in top-notch venues (see Table~\ref{tab:venues}), indicating the high quality of studies that can be performed through data papers. On the other hand, at least for \textsc{msr}\ papers that cite data papers, their basis on published empirical data does not seem to increase their impact in terms of citations to them (see last column of Table~\ref{tab:citation-metrics}). 
Regarding impact, the number of strong citations to data papers is constantly rising (Figure~\ref{fig:yearly-papers}), indicating that the concept of data papers has long-term value. The enduring usefulness of specific papers is also apparent by looking at the timeline of strong citations to specific \textsc{msr}\ data showcase papers over a five-year period (Figure~\ref{fig:five-year-citations}). While the majority of papers indicate a short shelf life, the trend of the most strongly cited data papers retaining their citation number, or obtaining even more strong citations, is yet another manifestation of the Matthew effect in science~\citep{Mer68}. However, this results in a constant need for new data papers, which was also expressed by some respondents of the survey stating that fields with ongoing changes result in a short shelf life for the corresponding data sets. \begin{implication} The short shelf life of data sets implies a need for a constant stream of new data papers. \end{implication} \begin{figure*}[t] \centering \includegraphics[width=0.49\textwidth]{Fig9.pdf} \includegraphics[width=0.49\textwidth]{Fig10.pdf} \caption{Distribution in the number of citations to MSR data papers (left) and MSR non-data papers (right). The similar shape of the two distributions indicates that the reason for the lower citation count of MSR data papers is the overall lower number of citations to each data paper compared to the citations to each non-data paper.} \label{fig:citation-distribution} \end{figure*} Yet, surprisingly for an artifact whose main purpose is for others to build on, data papers are cited less than other \textsc{msr}\ papers. One might think that this is due to the 28 out of 81 (35\%) of the data papers that are never used. The citation distribution's long tail --- just 9\% of the data papers are strongly cited by 67\% of all citing studies --- could be another reason.
However, by comparing the distribution of citations to data papers (according to Scopus) with that of citations to non-data papers (Figure~\ref{fig:citation-distribution}), we see that the two distributions are similar in shape. It is apparent that the reason for the lower citation count of \textsc{msr}\ data papers is the overall lower number of citations to each data paper compared to the citations to each non-data paper. There are three reasons that could explain this phenomenon. First, data papers may not publish data that are actually useful for conducting other studies. Our author survey suggests that high data quality, ease of use, and data freshness are the most valuable characteristics in data set selection. To address this problem, authors of new data papers can draw from these findings and ensure the satisfaction of the particular criteria in their work. In addition, the \textsc{msr}\ program committee could adopt more stringent criteria for accepting data papers, though this will certainly lead to a decline in the number of accepted papers, and there is no guarantee that a more selective track will still select the papers that will be most frequently cited. The track's toughening effect on data sharing can again be counterbalanced by promoting open science initiatives. \begin{implication} Program committees could consider adopting more stringent criteria for accepting data papers, to potentially improve their usage. \end{implication} Second, existing data papers may not satisfy the needs and interests of software researchers. The responses propose exploiting data related to topics such as artificial intelligence and machine learning, collaborative software development, health, fitness and performance, online computing courses, video material and video games. Suggested future data sources include the sectors of education, health, energy, manufacturing, and autonomous driving, entertainment, and smart devices.
\begin{implication} Prospective authors of data papers can exploit the survey's insights to produce quality work that will meet the community's expectations, needs, and interests. \end{implication} Third, researchers may be reluctant to work with data coming from outside their organization --- also known as the \emph{not invented here syndrome}~\citep{PD15} --- or fear that working with publicly available data is less likely to yield original results. Although respondents of the survey reported that personal connections with data paper authors play little role in selecting data papers, the high number of data papers used by their own authors (Figure~\ref{fig:self-citations}) contradicts this. This practice suggests the possibility of adopting a workflow similar to that of pre-registered studies~\citep{HI18}: publishing a data paper and then employing it for empirical software engineering research. Such a workflow may further strengthen the safeguards promoted by pre-registered studies against p-hacking and publication bias~\citep{Kup18}. In addition, encouraging the advance publication of a study's data would level the playing field between the scientists with access to rich empirical data and those without. \begin{implication} Methodology researchers, conference program committees, and journal editorial boards could examine the opportunities and implications associated with a research paradigm where the data employed in empirical studies are published before the studies that analyze them. \end{implication} Looking at the most used data paper, GHTorrent~\citep{Gou13}, we observe that it is characterized by the majority of the attributes considered useful by the respondents~(Figure~\ref{fig:useful-characteristics}). Particularly, one respondent highlighted that GHTorrent's updatability through its available source code ``lends credence to the construct validity of the data set, since the instrument used to curate it is open for review''.
Adding to that the continuous human effort and attention by the curators for its regular maintenance, through daily and bi-monthly database dumps accessible from its web site,\footnote{\url{https://ghtorrent.org}} and through addressing users' suggestions and bug reports submitted to the GitHub project,\footnote{\url{https://github.com/ghtorrent/ghtorrent.org}} this seems to create a self-reinforcing feedback loop between the curators' efforts and the continuing citations to it. \begin{implication} Self-reinforcing feedback loops could positively affect a data set's citations over time, starting from the curators' regular data set updates, maintenance, and support. \end{implication} Overall, data paper authors tend to publish work with data they have gathered in the context of other research. Still, at the same time they seem motivated to benefit the community with new data, and they consider publishing data papers a worthwhile way of doing so. Encouragingly, our author survey paints a picture of an open and meritocratic community, with authors failing to agree that drafting a data paper is an easy way to pad a {\sc cv} with more publications. Moreover, they seem notable supporters of open science, as shown in various open-ended responses where they expressed their desire for furthering open science goals. Hence, the suggestion regarding the promotion of open science initiatives is reinforced by this observation. \section{Threats to Validity} \label{sec:validity} The study's external validity, in terms of generalizability, obviously suffers from studying only data papers that have been published within the framework of the \textsc{msr}\ conference, ignoring venues such as the \textsc{promise} conference --- consider e.g. the work by \citet{FTLS18} --- or the \emph{Empirical Software Engineering} journal --- e.g. the paper by \citet{Squ18}.
However, studying the \textsc{msr}\ conference in isolation allowed us to analyze the effect of establishing the \textsc{msr}\ data showcase track, and to compare citation counts among different groups of papers (Section~\ref{sec:citation-analysis}), without the bias associated with a paper's publication venue. Furthermore, external threats are also related to our ability to generalize the author survey results. Again, the sample selection only within the \textsc{msr}\ boundaries prevents us from generalizing our conclusions to other venues. Still, it allowed meaningful insights to be derived, which could be enhanced and generalized through replication of the study at other venues. The major threats to the study's internal validity stem from the steps during which we followed manual processes involving subjective judgment: the selection of data papers before the showcase track was introduced, the filtering of studies that actually use data papers, the analysis of the weak citations, the clustering of data papers, the classification of studies using data papers, and the pair coding of open-ended responses. In particular, the clustering of data papers introduced in Table~\ref{tab:data-paper-clusters} holds another serious threat associated with the establishment of the clusters themselves. As elaborated in Section~\ref{sec:data-paper-methods}, the clusters resulted from a conceptual analysis of the corresponding data studies. The risk stemming from the pair coding process is related to the loss of accuracy of the original response due to an increased level of categorization. The trustworthiness of the processes of clustering, classification, and pair coding was enhanced through the use of multiple raters and coders, and by grounding them on established research methods. However, we acknowledge that validity risks derived from manual processes requiring human judgment cannot be completely eliminated~\citep{PVK15}.
Another threat related to the survey responses is social desirability bias~\citep{Fur86} (i.e., a respondent's possible tendency to appear in a positive light, such as by showing they are fair or rational). Particularly, the answers presented in Figure~\ref{fig:motivational-factors} may lack some truthfulness. For instance, one should not over-interpret that few respondents consider the \textsc{msr}\ data showcase track \emph{a straightforward way to publish in the MSR conference}. To mitigate this bias, participants were informed that responses would be anonymous. Question-order effect~\citep{Sig81} (e.g. one question may have provided context for the next one) may have led respondents to a specific answer, especially in the answers presented in Figures~\ref{fig:useful-characteristics} and \ref{fig:preventing-characteristics}. One approach to mitigate this bias could have been randomizing the order of questions. In our case, we decided to order the questions in a convenient manner for respondents to easily recall and understand the context of the questions asked. \section{Conclusions} \label{sec:conclusions} The \textsc{msr}\ data showcase track has been successful in encouraging the publication of data papers. Data papers are generally used by other empirical studies, though not as much as one might expect or hope for. The gatekeepers of science, such as journal editors and program committees, can address this by setting a higher bar for the publication of data papers, by encouraging their constant stream and use, and by promoting open science initiatives. An additional policy to improve the use and impact of data papers might be to provide incentives for researchers to enrich existing collections of data instead of reproducing similar data sets from scratch. Such incentives could involve awarding a most influential data paper award or inviting papers where researchers describe how they expanded upon a data track study. 
\begin{acknowledgements} Panos Louridas provided insightful comments on this manuscript. Furthermore, Georgios Gousios's suggestions regarding the refinement of the questionnaire were crucial for the survey attainment. This work has received funding from: the European Union's Horizon 2020 research and innovation programme under grant agreement No 825328; the {\sc gsrt} 2016--2017 Research Support (EP-2844-01); and the Research Centre of the Athens University of Economics and Business, under the Original Scientific Publications framework 2019. \end{acknowledgements} \section*{Conflict of Interest} The authors declare that they have no conflict of interest. \bibliographystyle{spbasic}
\section{Introduction} \label{sec:introduction} Reinforcement Learning (RL) is one of the three machine learning paradigms, besides supervised learning and unsupervised learning. It uses agents that, like human experts in a domain, take actions. RL does not require labeled data; instead, it learns from experience by interacting with the environment, observing, and responding to the results. RL can be expressed as a Markov Decision Process (MDP), as shown in Figure \ref{fig:mdp-arch}. Each environment is represented by a state that reflects what is happening in the environment. The RL agent takes actions in the environment, which cause a change in the environment's current state, generating a new state, and receives a reward based on the results. The agent receives a positive reward for good actions and a negative reward for bad actions, which helps the agent evaluate the performed action in a given state and learn from experience. \begin{figure}[!t] \centering \includegraphics[width=.7\linewidth]{DRL-Architecture.png} \caption{Markov Decision Process} \label{fig:mdp-arch} \end{figure} Video games have been one of the most popular RL applications, and RL algorithms have mainly been tested and evaluated on video games. However, RL has other applications and can be used in different domains such as self-driving cars, natural language processing (NLP), autonomous robotics, delivery drones, and many others. Furthermore, there are many diverse RL algorithms with different variations. Therefore, it is imperative to understand the differences between RL algorithms and to select an algorithm suitable for the environment type and the task at hand. The most widely used algorithm is Deep Q-Network (DQN), together with its variations, because of its simplicity and efficiency. Nevertheless, DQN is suitable only for environments with discrete actions.
For example, in autonomous UAV navigation (self-flying drones), many papers tend to simplify the environment to enable the use of DQN \cite{okuyama2018autonomous, chishti2018self}. However, in complex real-life environments, DQN would not be suitable if the environment is dynamic or the required actions are continuous. Therefore, to assist in matching the RL algorithm with the task, a classification of RL algorithms based on the environment type is needed. Consequently, this study provides an overview of different RL algorithms, classifies them based on the environment type, and explains their primary principles and characteristics. Additionally, relationships among different RL algorithms are also identified and described. The paper provides a perspective on the domain and helps researchers and practitioners to select appropriate algorithms for their use cases. Moreover, it provides options for selecting a suitable algorithm for the environment, rather than attempting to simplify the environment for the algorithm \cite{okuyama2018autonomous, chishti2018self}. The remainder of the paper is organized as follows: Section \ref{sec:background} introduces RL and discusses RL's main principles. Section \ref{sec:rl-algorithms} classifies RL algorithms and provides their overview. Finally, Section \ref{sec:conclusion} concludes the paper. \section{Background} \label{sec:background} This section first introduces reinforcement learning. Next, the concepts of policy and value functions are described, and finally, experience replay is explained as a commonly used technique in different RL algorithms. \subsection{Reinforcement Learning} The RL agent learns by taking actions in the environment; each action causes a change in the environment's current state and generates a reward, as expressed in the Markov Decision Process (MDP).
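To make the interaction loop concrete, the sketch below runs one episode of the agent--environment loop against a toy two-state environment. The environment, its reset/step interface (modeled on the common Gym-style convention), and the random stand-in policy are illustrative assumptions of ours, not part of any surveyed algorithm:

```python
import random

# Hypothetical two-state environment, used only for illustration;
# the reset/step interface mirrors the common Gym-style convention.
class ToyEnv:
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Action 1 in state 0 yields reward +1; every action ends the episode.
        if self.state == 0 and action == 1:
            self.state = 1
            return self.state, 1.0, True  # next state, reward, episode done
        return self.state, 0.0, True

# The MDP interaction loop: observe state, act, receive reward and next state.
env = ToyEnv()
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = random.choice([0, 1])  # a random policy stands in for the agent
    state, reward, done = env.step(action)
    total_reward += reward
```

Replacing the random action choice with a learned policy turns this loop into the core of every RL algorithm discussed below.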
We define the probability of the transition to state $\mathbf{s^{\prime}}$ with reward $\bm{r}$ from taking action $\bm{a}$ in state $\bm{s}$ at time $\bm{t}$, for all $s^{\prime} \in S, \; s \in S, \; r \in R, \; a \in A(s)$, as: \begin{equation} P(s^{\prime},r|s,a) = Pr\{S_t = s^{\prime}, R_t=r| S_{t-1}=s, A_{t-1}=a\} \end{equation} The agent receives rewards for performing actions and uses them to measure the action's success or failure. The reward $R$ can be expressed in different forms, as a function of the action $R(a)$, or as a function of action-state pairs $R(a,s)$. The agent's objective is to maximize the expected summation of the discounted rewards, which drives the agent to take the selected actions. The return is computed by adding all the rewards generated from executing an episode. The \textit{episode} (trajectory) represents a finite number of actions and ends when the agent reaches a final state, for example, when a collision occurs in a simulated navigation environment. However, in some cases, the actions can be continuous and cannot be broken into episodes. The discounted reward, as shown in equation \ref{equ:sum-disc-reward}, uses a multiplier $\bm{\gamma}$ to the power $\bm{k}$, where $\bm{\gamma \in [0,1]}$. The value of $k$ increases by one at each time step to emphasize the current reward and to reduce the impact of future rewards, hence the term discounted reward. \begin{equation} \label{equ:sum-disc-reward} G_t = E \left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}\right] \end{equation} Emphasizing the current action's immediate reward and reducing the impact of future actions' rewards help the expected summation of discounted rewards to converge. \subsection{Policy and Value Function} The agent's behavior is defined by following a policy $\bm{\pi}$, where the policy $\bm{\pi}$ defines the probability of taking action $\bm{a}$, given a state $\bm{s}$, which is denoted as $\bm{\pi(a|s)}$.
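Before turning to value functions, the discounted return of equation \ref{equ:sum-disc-reward} can be illustrated concretely for a finite episode. The function below is a minimal sketch of ours, not taken from a library:

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted return G_t = sum over k of gamma**k * R_{t+k+1}."""
    g = 0.0
    # Iterating backwards folds each step's reward into the discounted
    # return of everything that follows it.
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# With gamma = 0.5, rewards [1, 1, 1] give 1 + 0.5 + 0.25 = 1.75.
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # prints 1.75
```

The backward recursion used here, $G_t = R_{t+1} + \gamma G_{t+1}$, is what makes the infinite sum tractable in practice.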
Once the agent takes an action, the agent uses a value function to evaluate the action. The agent either uses: 1) a \textit{state-value function} to estimate how good it is for the agent to be in state $\bm{s}$, or 2) an \textit{action-value function} to measure how good it is for the agent to perform an action $\bm{a}$ in a given state $\bm{s}$. The action-value function is defined in terms of the expected summation of the discounted rewards and represents the target Q-value: \begin{equation} \label{equ:action-value-func} Q_\pi(s,a) = E_\pi \left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \; | \; S_t = s, A_t = a \right] \end{equation} The agent performs the action with the highest Q-value, which might not be the optimal Q-value. Finding the optimal Q-value requires selecting the best actions that maximize the expected summation of discounted rewards under the optimal policy $\bm{\pi}$. The optimal Q-value $\bm{Q_*(s,a)}$, as described in equation \ref{equ:optimal-policy}, must satisfy the \textit{Bellman optimality equation}, which is equal to the expected reward $\bm{R_{t+1}}$ plus the maximum expected discounted return that can be achieved for any possible next state-action pair $\bm{(s^\prime,a^\prime)}$. \begin{equation} \label{equ:optimal-policy} Q_*(s,a) = \underset{\pi}{max} \; Q(s,a) \end{equation} \vspace{-10pt} \begin{equation} \label{equ:bellman-optimality} Q_*(s,a) = E \left[ R_{t+1} + \gamma \; \underset{a^\prime}{max} \; Q_*(s^\prime,a^\prime) \right] \end{equation} This optimal Q-value $\bm{Q_*(s,a)}$ is used to train the neural network. The Q-value $\bm{Q(s,a)}$ predicted by the network is subtracted from the optimal Q-value $\bm{Q_*(s,a)}$ estimated using the Bellman equation, and the error is backpropagated through the network.
The loss function is defined as follows: \begin{equation} \label{equ:loss-func} \overset{Target}{\overbrace{E \left[ R_{t+1} + \gamma \; \underset{a^\prime}{max} \; Q_*(s^\prime,a^\prime) \right]}} \; - \; \overset{Predicted}{\overbrace{E \left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}\right]}} \end{equation} \subsection{Experience Replay} In RL, an experience $\bm{e}$ can be described as the knowledge produced from the agent performing an action $\bm{a}$ in a state $\bm{s}$, causing a new state $\bm{s^\prime}$ and a generated reward $\bm{r}$. The experience can be expressed as the tuple $\bm{e(s,a,s^\prime,r)}$. Lin \cite{lin1992self} proposed a technique called \textit{Experience Replay}, where experiences are stored in a replay memory $\bm{D}$ and used to train the agent. Since experiences are stored in the memory, and some experiences might be of high importance, they can repeatedly be reused to train the agent, which improves convergence. Although experience replay should, in theory, help the agent learn from previous important experiences, it entails sampling experiences uniformly from the replay memory $\bm{D}$ regardless of their significance. Schaul \textit{et al.} \cite{schaul2015prioritized} suggested the use of \textit{Prioritized Experience Replay}, which prioritizes experiences using the Temporal Difference error (TD-error) and replays experiences with a higher TD-error more frequently. \section{Reinforcement Learning Algorithms Classification} \label{sec:rl-algorithms} While most reinforcement learning algorithms use deep neural networks, different algorithms are suited for different environment types. We classify RL algorithms according to the number of states and the action types available in the environment into three main categories: 1) a limited number of states and discrete actions, 2) an unlimited number of states and discrete actions, and 3) an unlimited number of states and continuous actions.
The three categories, together with algorithms belonging to those categories, are shown in Figure \ref{fig:RL-Algorthims} and discussed in the following subsections. \begin{figure*}[!t] \centering \includegraphics[width=.7\linewidth]{RL-Algorthims.png} \caption{Reinforcement Algorithms classification based on the environment type} \label{fig:RL-Algorthims} \end{figure*} \subsection{Environments with Limited States and Discrete Actions} \label{LimitedStatesDiscreteActions} Environments with discrete actions and limited states are relatively simple environments where the agent can select from pre-defined actions and be in pre-defined, known states. For example, when an agent is playing a tic-tac-toe game, the nine boxes represent the states, and the agent can choose between two actions, X or O, updating the available states. The Q-learning algorithm \cite{watkins1992q} is commonly used to solve problems in such environments. This algorithm finds the optimal policy in a Markov Decision Process (MDP) by maintaining a Q-table with all possible states and actions and iteratively updating the Q-value for each state-action pair using the Bellman equation until the Q-function converges to the optimal Q-value. State–Action–Reward–State–Action (SARSA) \cite{rummery1994line} is another algorithm from this category: it is similar to Q-learning except that it updates the current $\bm{Q(s,a)}$ value in a different way. In Q-learning, in order to update the current $\bm{Q(s,a)}$ value, we need the next state-action value $\bm{Q(s^\prime,a^\prime)}$, and since the next action is unknown, Q-learning takes a greedy action to maximize the reward \cite{zhao2016deep}. In contrast, when SARSA updates the current state-action value $\bm{Q(s,a)}$, it performs the next action $\bm{a^\prime}$ and uses its value \cite{zhao2016deep}.
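The difference between the two update rules can be sketched in a few lines over a dictionary-of-dictionaries Q-table; the learning rate $\bm{\alpha}$ and discount $\bm{\gamma}$ are added hyperparameters chosen for illustration, not values from the cited works.

```python
# Tabular updates for limited-state environments (illustrative sketch).
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # off-policy (greedy) bootstrap: max over the next state's actions
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.9):
    # on-policy bootstrap: uses the action a' actually performed next
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])
```

Starting from the same table, the two rules diverge whenever the action taken next is not the greedy one.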
\subsection{Environments with Unlimited States and Discrete Actions} In some environments, such as playing a complex game, the states can be limitless; however, the agent's choice is limited to a finite set of actions. In such environments, the agent mainly consists of a Deep Neural Network (DNN), usually a Convolutional Neural Network (CNN), responsible for processing and extracting features from the state of the environment and outputting the available actions. Different algorithms can be used with this environment type, such as Deep Q-Networks (DQN), Deep SARSA, and their variants. \vspace{6 pt} \subsubsection{\textbf{Deep Q-Networks (DQN)}} \hfill \break Deep Q-Learning, also referred to as Deep Q-Networks (DQN), is considered the main algorithm used in environments with unlimited states and discrete actions, and it inspires other algorithms used for a similar purpose. DQN usually combines convolutional and pooling layers, followed by fully connected layers that produce Q-values corresponding to the number of actions. Figure \ref{fig:dqn} \cite{anwar2020autonomous} shows AlexNet CNN followed by two fully connected layers to produce Q-values. The current scene from the environment represents the environment's current state; once it is passed to the network, the network produces Q-values from which the best action is taken. The agent acts and then captures the changes in the environment's current state and the reward generated from the action. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{DQN_Architecture.png} \caption{DQN using AlexNet CNN} \label{fig:dqn} \end{figure} A significant drawback of the DQN algorithm is overestimating the action-value (Q-value), where the agent tends to choose a non-optimal action because it has the highest Q-value \cite{VanHasselt2010}. \vspace{3 pt} \paragraph{Double and Dueling DQN (DD-DQN)} \hfill \break Double DQN uses two networks to solve this overestimation problem in DQN.
The first network, called the Policy Network, optimizes the Q-value, and the second network, the Target Network, is a replica of the policy network and is used to produce the estimated Q-value \cite{VanHasselt2016}. The target network parameters are updated after a certain number of time steps by copying the policy network parameters rather than using backpropagation. Another improvement on DQN is Dueling DQN, illustrated in Figure \ref{fig:Dueling-DQN} \cite{Wang2016}. Dueling DQN tries to define a better way to evaluate the Q-value by explicitly decomposing the Q-value function into two functions: \begin{itemize} \item A State-Value function $\bm{V(s)}$, which measures how good it is for the agent to be in state $\bm{s}$. \item An Advantage-Value function $\bm{A(s, a)}$, which captures how good an action is compared to other actions at a given state. \end{itemize} The two functions, shown in Figure \ref{fig:Dueling-DQN} \cite{Wang2016}, are combined via a special aggregation layer to produce an estimate of the state-action value function \cite{Wang2016}. The value of this function is equal to the summation of the two values produced by the two functions: \begin{equation} \label{eq:dueling-dqn} Q(s,a) = V(s) + \big( A(s,a) -\frac{1}{|\mathcal{A}|} \sum_{a^\prime} A(s,a^\prime) \big) \end{equation} The subtracted term $\bm{\frac{1}{|\mathcal{A}|} \sum_{a^\prime} A(s, a^\prime)}$ represents the mean, where $\bm{|\mathcal{A}|}$ represents the size of the action space. This term helps with identifiability, and it does not change the relative rank of the A (and hence Q) values. Additionally, it increases the stability of the optimization, as the advantage function only needs to change as fast as the mean \cite{Wang2016}. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{Dueling_DQN.png} \caption{DQN vs.
Dueling DQN} \label{fig:Dueling-DQN} \end{figure} Double Dueling DQN (DD-DQN) is another DQN algorithm: it combines Dueling DQN with Double DQN to find the optimal Q-value, as suggested originally by Wang \textit{et al.} \cite{Wang2016}, where the output from the Dueling DQN is passed to Double DQN. \vspace{3 pt} \paragraph{Deep Recurrent Q-Networks (DRQN)} \hfill \break Deep Recurrent Q-Network (DRQN) \cite{hausknecht2015deep} is an extension of the DQN algorithm, replacing the first fully connected layer with a recurrent LSTM layer of the same size. Adding the LSTM layer requires changing the input from a single state of the environment to multiple states (frames) as a single input, which helps to integrate information through time \cite{hausknecht2015deep}. \vspace{6 pt} \subsubsection{\textbf{Deep SARSA}} \hfill \break Basic SARSA, like Q-learning, is a tabular method suitable for environments with limited states and discrete actions, as described in subsection \ref{LimitedStatesDiscreteActions}. On the other hand, Deep SARSA for unlimited states uses a deep neural network similar to DQN: the main difference is that SARSA computes $\bm{Q(s^\prime,a^\prime)}$ by performing the next action $\bm{a^\prime}$, which is required to calculate the current state-action value $\bm{Q(s,a)}$. As shown in Figure \ref{fig:RL-Algorthims}, extensions of Deep SARSA are the same as extensions of DQN, with the main difference being how the next action-state value $\bm{Q(s^\prime,a^\prime)}$ is calculated. \subsection{Environments with Unlimited States and Continuous Actions} Although discrete actions are sufficient to move a car or UAV in a virtual environment, such actions do not provide realistic object movement in real-life scenarios. Continuous actions describe the quantity of movement in different directions, where the agent does not choose from a list of predefined actions.
For example, a realistic UAV movement specifies the quantity of required change in roll, pitch, yaw, and throttle values to navigate the environment while avoiding obstacles, rather than moving the UAV one step at a time in predefined directions: up, down, left, right, and forward. A continuous action space requires the agent to learn a parameterized policy $\bm{\pi_\theta}$ that maximizes the expected summation of the discounted rewards, because it is impossible to calculate the action-value for all continuous actions at different states. The problem is a maximization problem and can be solved using gradient ascent algorithms to find the optimal $\bm{\theta}$. The value of $\bm{\theta}$ is updated as follows: \begin{equation} \label{eq:gradient-ascent} \theta_{t+1} = \theta_{t} + \alpha\nabla J(\theta_{t}) \end{equation} \noindent where $\bm{\alpha}$ is the learning rate and $\bm{\nabla}$ is the gradient. The objective of the reward function $\bm{J}$ is to maximize the expected reward using a parameterized policy $\pi_\theta$ as follows \cite{sutton2018reinforcement}: \begin{equation} \label{eq:policy-gradient} \begin{split} J(\pi_\theta) &= \sum_{s \in S} \rho_{\pi_\theta}(s) \; V^{\pi_\theta}(s) \\ &= \sum_{s \in S} \rho_{\pi_\theta}(s) \; \sum_{a\in A} Q^{\pi_\theta}(s, a) \ \pi_{\theta}(a|s) \end{split} \end{equation} Here $\bm{\rho_{\pi_\theta}(s)}$ defines the stationary probability of $\bm{\pi_\theta}$ starting from state $\bm{s_0}$ and transitioning to future states following the policy $\bm{\pi_\theta}$ for $\bm{t}$ time steps.
Finding the optimal $\bm{\theta}$ that maximizes the function $\bm{J(\pi_\theta)}$ requires finding the gradient $\bm{\nabla_{\theta} J(\theta)}$: \begin{equation} \label{eq:policy-gradient-theorem} \begin{split} \nabla_{\theta} J(\theta) &= \nabla_{\theta} \biggl( \sum_{s \in S} \rho_{\pi_\theta}(s) \; \sum_{a\in A} Q^{\pi_\theta}(s, a) \ \pi_{\theta}(a|s) \biggr) \\ &\propto \sum_{s \in S} \mu(s) \; \sum_{a\in A} Q^{\pi_\theta}(s, a) \ \nabla \pi_{\theta}(a|s)\ \end{split} \end{equation} Equation \ref{eq:policy-gradient-theorem} can be further rewritten for continuing episodes, since $\bm{\sum_{s \in S} \eta(s) = 1}$, as: \begin{equation} \label{eq:policy-gradient-theorem2} \nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\pi_\theta} , a \sim \pi_\theta} \Big [ Q^{\pi_\theta}(s, a) \; \nabla_{\theta} \ln \pi_\theta (a_t|s_t) \Big ] \end{equation} When the training sample is collected according to the target policy $\bm{s \sim \rho^{\pi_\theta}}$ and the expected return is generated for the same policy $\bm{\pi_\theta}$, the algorithm is referred to as an \textit{on-policy algorithm}. On the other hand, in \textit{off-policy algorithms}, the training sample follows a behavior policy $\bm{\beta(a|s)}$, which is different from the target policy $\bm{\pi_\theta(a|s)}$ \cite{silver2014deterministic}, while the expected reward is generated using the target policy $\bm{\pi_\theta}$. Off-policy algorithms do not require full trajectories (episodes) for the training sample, and they can reuse past trajectories. Equation \ref{eq:off-policy-gradient-theorem} \cite{silver2014deterministic} shows how the policy gradient is adjusted by the ratio between the target policy $\bm{\pi_\theta(a|s)}$ and the behaviour policy $\bm{\beta(a|s)}$.
\begin{equation} \label{eq:off-policy-gradient-theorem} \nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\beta} , a \sim \beta} \Big [ \frac{\pi_\theta(a|s)}{\beta_{\theta}(a|s)} Q^{\pi_\theta}(s, a) \; \nabla_{\theta} \ln \pi_\theta (a_t|s_t) \Big ] \end{equation} The policy gradient theorem shown in equation \ref{eq:policy-gradient} \cite{sutton2000policy} is considered the foundation of distinct Policy Gradient (PG) algorithms such as REINFORCE \cite{williams1992simple}, Actor-Critic algorithms \cite{konda2000actor}, Trust Region Policy Optimization (TRPO) \cite{schulman2015trust}, Phasic Policy Gradient \cite{raileanu2021decoupling}, Stein Variational Policy Gradient \cite{liu2017stein}, Proximal Policy Optimization (PPO) \cite{schulman2017proximal}, and many others. \vspace{6 pt} \subsubsection{\textbf{REINFORCE}} \hfill \break REINFORCE is an acronym for \textbf{RE}ward \textbf{I}ncrement $=$ \textbf{N}onnegative \textbf{F}actor $\times$ \textbf{O}ffset \textbf{R}einforcement $\times$ \textbf{C}haracteristic \textbf{E}ligibility \cite{williams1992simple}. REINFORCE is a Monte-Carlo policy gradient algorithm that works with the episodic case. It requires a complete episode to obtain a sample proportional to the gradient, and it updates the policy parameter $\bm{\theta}$ with the step size $\bm{\alpha}$. Because $\bm{\mathbb{E}_{\pi}[G_t|S_t, A_t] = Q^{\pi}(s, a)} $, REINFORCE can be defined as \cite{sutton2018reinforcement}: \begin{equation} \label{eq:REINFORCE} \nabla_{\theta} J(\theta) = \mathbb{E}_{\pi} \Big [G_t \; \nabla_{\theta} \ln \pi_\theta (A_t|S_t) \Big ] \end{equation} REINFORCE uses the Monte Carlo method, which suffers from high variance and, consequently, slow learning \cite{williams1992simple}. Adding a baseline to REINFORCE reduces the variance and speeds up learning while keeping the bias unchanged, by subtracting the baseline value from the expected return $\bm{G_t}$ \cite{sutton2018reinforcement}.
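As a toy illustration of equation \ref{eq:REINFORCE}, the following sketch applies the REINFORCE update to a single-parameter logistic (softmax over two actions) policy; the parameterization, step size, and episode format are assumptions made for the example, and $\nabla_\theta \ln \pi_\theta$ is computed analytically for this simple policy.

```python
import math

def grad_log_pi(theta, action):
    # two-action policy: pi(1|s) = sigmoid(theta), pi(0|s) = 1 - sigmoid(theta)
    p1 = 1.0 / (1.0 + math.exp(-theta))
    # d/dtheta ln pi(a): (1 - p1) for action 1, -p1 for action 0
    return (1.0 - p1) if action == 1 else -p1

def reinforce_update(theta, episode, alpha=0.1, gamma=0.99):
    # episode: list of (action, reward) pairs; g accumulates the
    # discounted return G_t from each step to the end of the episode
    g = 0.0
    for action, reward in reversed(episode):
        g = reward + gamma * g
        theta += alpha * g * grad_log_pi(theta, action)  # ascent step
    return theta
```

A positive return pushes $\theta$ toward the action that was taken; subtracting a baseline from `g` would reduce the variance of this update without changing its expectation.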
\vspace{6 pt} \subsubsection{\textbf{Trust Region Policy Optimization (TRPO)}} \hfill \break Trust Region Policy Optimization (TRPO) \cite{schulman2015trust} is a PG algorithm that improves the performance of gradient ascent by taking larger steps within trust regions defined by a KL-divergence constraint, and it performs the policy update after each trajectory rather than after each state. Proximal Policy Optimization (PPO) \cite{schulman2017proximal} can be considered an extension of TRPO; it imposes the constraint as a penalty and clips the objective to ensure that the optimization is carried out within the predefined range \cite{shin2019obstacle}. Phasic Policy Gradient (PPG) \cite{cobbe2020phasic} extends PPO by including a periodic auxiliary phase which distills features from the value function into the policy network to improve training. This auxiliary phase enables feature sharing between the policy and value function while decoupling their training. \vspace{6 pt} \subsubsection{\textbf{Stein Variational Policy Gradient (SVPG)}} \hfill \break Stein Variational Policy Gradient (SVPG) \cite{liu2017stein} applies Stein variational gradient descent (SVGD) \cite{liu2016stein} to update the policy parameterized by $\bm{\theta}$, which reduces variance and improves convergence. SVPG improves the average return and data efficiency when used on top of REINFORCE and advantage actor-critic algorithms \cite{liu2017stein}.
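All of the policy gradient methods above share the ascent update of equation \ref{eq:gradient-ascent}. The following toy sketch runs that update on a surrogate objective $J(\theta) = -(\theta - 2)^2$, whose maximizer is known in closed form; the objective, step size, and step count are purely illustrative.

```python
# Gradient ascent on the toy objective J(theta) = -(theta - 2)^2,
# maximized at theta = 2; its gradient is dJ/dtheta = -2 * (theta - 2).
def ascend(theta, alpha=0.1, steps=100):
    for _ in range(steps):
        grad = -2.0 * (theta - 2.0)
        theta = theta + alpha * grad  # theta_{t+1} = theta_t + alpha * grad J
    return theta
```

After enough steps the iterate approaches the maximizer, just as the policy parameters approach a (local) optimum of $J(\pi_\theta)$; in the RL setting the gradient is estimated from sampled trajectories rather than computed exactly.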
\vspace{6 pt} \subsubsection{\textbf{Actor-Critic}} \hfill \break Actor-Critic algorithms are a set of algorithms based on the policy gradient theorem that consist of two components: \begin{enumerate} \item An Actor responsible for adjusting the parameter $\bm{\theta}$ of the policy $\bm{\pi_\theta}$ \item A Critic which employs a parameterized vector $\bm{w}$ to estimate the value function $\bm{Q^{w}(s_t,a_t) \approx Q^{\pi}(s_t,a_t)}$ using a policy evaluation algorithm such as temporal-difference learning \cite{silver2014deterministic} \end{enumerate} The actor can be described as the network that finds the probabilities of all available actions and selects the action with the highest value, while the critic can be described as a network that evaluates the selected action by estimating the value of the new state resulting from performing the action. Different algorithms fall under the actor-critic category; the main ones are described in the following subsections. \vspace{3 pt} \paragraph{Deterministic Policy Gradients (DPG) Algorithms} \hfill \break All deterministic policy gradient algorithms model the policy as a deterministic policy $\bm{\mu(s)}$, rather than a stochastic policy $\bm{\pi(s,a)}$ modeled over the actions' probability distribution. We described earlier, in Equation \ref{eq:policy-gradient}, the objective function under a selected policy $\bm{J(\pi_\theta)}$ to be $\bm{\sum_{s \in S} \rho_{\pi_\theta}(s) \; V^{\pi_\theta}(s)}$; however, a deterministic policy is a special case of a stochastic policy, where the objective function of the target policy is averaged over the state distribution of the behaviour policy, as described in equation \ref{eq:deterministic-policy-gradient} \cite{silver2014deterministic}.
\begin{equation} \label{eq:deterministic-policy-gradient} \begin{split} J_{\beta}(\mu_{\theta}) &= \int_{S} \rho^{\beta}(s) \ V^{\mu}(s) \ ds \\ &= \int_{S} \rho^{\beta}(s) \ Q^{\mu}(s,\mu_{\theta}(s)) \ ds \end{split} \end{equation} In the off-policy approach with a stochastic policy, importance sampling is often used to correct the mismatch between the behaviour and target policies. However, because the deterministic policy gradient removes the integral over actions, we can avoid importance sampling, and the gradient becomes: \begin{equation} \label{eq:deterministic-policy-gradient-theorem} \begin{split} \nabla_{\theta} J_{\beta}(\mu_{\theta}) &\approx \int_{S} \rho^{\beta}(s) \ \nabla_{\theta} \ \mu_{\theta}(a|s) \ Q^{\mu}(s,\mu_{\theta}(s)) \ ds \\ &= \mathbb{E}_{s \sim \rho^{\beta}} \ \Big [ \nabla_{\theta} \ \mu_{\theta}(s) \nabla_{a} Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)} \Big ] \end{split} \end{equation} Different algorithms build on DPG with improvements; for example, Deep Deterministic Policy Gradient (DDPG) \cite{lillicrap2015continuous} adapts DQN to work with continuous action spaces and combines it with DPG. On the other hand, Distributed Distributional DDPG (D4PG) \cite{barth2018distributed} adopts distributed settings for DDPG with additional improvements, such as using N-step returns and prioritized experience replay \cite{barth2018distributed}. Multi-agent DDPG (MADDPG) \cite{lowe2017multi} is another algorithm that extends DDPG to work with multiple agents; it considers the action policies of other agents and learns policies that require multi-agent coordination \cite{lowe2017multi}. Twin Delayed Deep Deterministic (TD3) \cite{fujimoto2018addressing} builds on Double DQN and applies it to DDPG to prevent the overestimation of the value function by taking the minimum value between a pair of critics \cite{fujimoto2018addressing}.
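Two of these improvements are simple enough to sketch directly: TD3's clipped double-Q target, which bootstraps from the minimum of a pair of critic estimates, and the soft ("Polyak") target-network update used by DDPG-style methods; the `gamma` and `tau` values are illustrative hyperparameters, not values from the cited papers.

```python
# Sketch of two DDPG/TD3-style tricks (illustrative values).
def clipped_double_q_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    # TD3: bootstrap from the minimum of the two critics' estimates
    # of the next state-action value to curb overestimation
    if done:
        return reward
    return reward + gamma * min(q1_next, q2_next)

def soft_update(target_params, online_params, tau=0.005):
    # Polyak averaging: target parameters slowly track the online ones,
    # instead of being copied wholesale every N steps as in DQN
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]
```

With a small `tau`, the target network changes slowly, which stabilizes the bootstrapped targets used to train the critics.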
\vspace{3 pt} \paragraph{Advantage Actor-Critic (A3C)} \hfill \break Asynchronous Advantage Actor-Critic (A3C) \cite{mnih2016asynchronous} is a policy gradient algorithm that uses multiple threads, also known as agents or workers, for parallel training. Each agent maintains a local policy $\bm{\pi_\theta(a_t|s_t)}$ and an estimate of the value function $\bm{V_\theta(s_t)}$. The agent synchronizes its parameters with a global network having the same structure. The agents work asynchronously, and the values of the network parameters flow in both directions between the agents and the global network. The policy and the value function are updated after $t_{max}$ actions or when a final state is reached \cite{mnih2016asynchronous}. Advantage Actor-Critic (A2C) \cite{mnih2016asynchronous} is another policy gradient algorithm similar to A3C, except that it has a coordinator responsible for synchronizing all agents. The coordinator waits for all agents to finish their work, either by reaching a final state or by performing $\bm{t_{max}}$ actions, before it updates the policy and the value function in both directions between the agents and the global network. Actor-Critic with Experience Replay (ACER) is an off-policy actor-critic algorithm with experience replay that uses a single deep neural network to estimate the policy $\pi_\theta(a_t|s_t)$ and the value function $V_{\theta_v}^{\pi}(s_t)$ \cite{wang2016sample}. The three main advantages of ACER over A3C are \cite{wang2016sample}: 1) it improves truncated importance sampling with bias correction, 2) it uses stochastic dueling network architectures, and 3) it applies a new \textit{trust region policy optimization} method \cite{wang2016sample}.
ACER uses an improved Retrace algorithm, as described in Equation \ref{eq:retrace-algorithm} \cite{munos2016safe}, by applying a truncated importance sampling with bias correction technique and using the value $\bm{Q^{ret}}$ as the target value to train the critic by minimizing the L2 error term \cite{wang2016sample}. In ACER, the gradient $\bm{\hat{g}_{t}^{acer}}$ is computed by truncating the importance weights by a constant $\bm{c}$ and subtracting $\bm{V_{\theta_v}(s_t)}$ to reduce variance, as denoted in Equation \ref{eq:acer-gradiant} \cite{wang2016sample}. \begin{equation} \label{eq:retrace-algorithm} \begin{split} Q^{ret}(s_t,a_t) &= r_t + \gamma \bar{\rho}_{t+1} \big [ Q^{ret}(s_{t+1},a_{t+1}) - Q(s_{t+1},a_{t+1}) \big ] \\ & \;\; + \gamma V(s_{t+1}) \end{split} \end{equation} \begin{equation} \label{eq:acer-gradiant} \begin{split} \hat{g}_{t}^{acer} & = \bar{\rho}_t \nabla_{\theta} \ln \pi_\theta(a_t|s_t) \big [Q^{ret}(s_t , a_t) - V_{\theta_v}(s_t) \big ] \\ & \;\; + \underset{a \sim \pi}{\mathbb{E}} \Big ( \big [ \frac{\rho_t(a) - c}{\rho_t(a)} \big ] \nabla_{\theta} \ln \pi_\theta(a_t|s_t) \\ & \;\;\;\;\;\; \big [ Q_{\theta_v}(s_t,a_t) - V_{\theta_v}(s_t) \big ] \Big ) \end{split} \end{equation} Actor-Critic using Kronecker-Factored Trust Region (ACKTR) \cite{wu2017scalable} is another extension of A3C \cite{mnih2016asynchronous}, which optimizes both the actor and the critic by using Kronecker-factored approximate curvature (K-FAC) \cite{martens2015optimizing}. It provides an improved computation of the natural gradients by allowing the covariance matrix of the gradient to be efficiently inverted \cite{wu2017scalable}. \vspace{3 pt} \paragraph{Soft Actor-Critic (SAC)} \hfill \break Soft Actor-Critic (SAC) aims to maximize the expected reward while also maximizing the entropy \cite{haarnoja2018soft}.
SAC augments the maximum expected sum of rewards, defined by accumulating the reward over state transitions $J(\pi) = \sum_{t=1}^{T} \mathbb{E}_{s \sim \rho^{\pi} , a \sim \pi} \Big [ r(s_t,a_t) \Big ]$, with the expected entropy of the policy over $\rho_\pi(s_t)$ \cite{haarnoja2018soft}. Equation \ref{eq:sac-entropy} shows the generalized entropy objective, where the temperature parameter $\alpha$ controls the stochasticity of the optimal policy by defining the relevance of the entropy term $\mathcal{H}(\pi(.|s_t))$ to the reward \cite{haarnoja2018soft}. \begin{equation} \label{eq:sac-entropy} J(\pi) = \sum_{t=1}^{T} \mathbb{E}_{s \sim \rho^{\pi} , a \sim \pi} \Big [ r(s_t,a_t) + \alpha \mathcal{H}(\pi(.|s_t)) \Big ] \end{equation} SAC uses two separate neural networks for the actor and the critic, and it applies function approximators to estimate a soft Q-function $\bm{Q_\theta(s_t,a_t)}$ parameterized by $\bm{\theta}$, a state value function $\bm{V_\psi(s_t)}$ parameterized by $\bm{\psi}$, and an adjustable policy $\bm{\pi_\phi(a_t|s_t)}$ parameterized by $\bm{\phi}$. \vspace{3 pt} \paragraph{Importance Weighted Actor-Learner Architecture (IMPALA)} \hfill \break Importance Weighted Actor-Learner Architecture (IMPALA) \cite{espeholt2018impala} is an off-policy learning algorithm that decouples acting from learning and can be used in two different setups: 1) a single learner and 2) multiple synchronous learners. In the single-learner, multiple-actor setup, each actor generates trajectories, sends each trajectory to the learner, and receives the updated policy before starting a new trajectory. The learner learns from the actors simultaneously by saving the received trajectories in a queue and generating the updated policy. Nevertheless, actors might act on an older policy because actors are not aware of each other and because of the lag between the actors and the learner.
To resolve this issue, IMPALA uses a novel V-trace correction method that applies truncated importance sampling (IS), the ratio between the learner policy $\bm{\pi}$ and the actor's current policy $\bm{\mu}$. Similarly, with multiple synchronous learners, the policy parameters are distributed across multiple learners that work synchronously through a master learner \cite{espeholt2018impala}. \section{Conclusion} \label{sec:conclusion} Deep Reinforcement Learning has shown advances in solving sophisticated problems in real-life scenarios. The environment type of the application has a vital role in selecting an appropriate RL algorithm that provides good results and performance. In this work, we have identified three environment types based on the number of states and the action type: 1) limited states and discrete actions, 2) unlimited states and discrete actions, and 3) unlimited states and continuous actions. Environments with a limited number of states and limited actions are relatively simple and can be solved using Q-learning and SARSA. Complex environments have unlimited states representing the environment, and choosing the appropriate algorithm depends on the type of actions. If the actions are limited (discrete), value-based algorithms such as DQN and its variations would be the choice. However, if the actions are continuous, policy gradient algorithms are appropriate, as they can learn a parameterized policy that approximates the solution. This classification helps researchers and practitioners select appropriate RL algorithms for their studies and applications. Further investigation of the algorithms' performance in different use-case scenarios is needed: the algorithms should be compared with respect to accuracy, convergence, computational resources, and ease of use. Moreover, diverse use cases and requirements should be considered in the evaluation.
\bibliographystyle{IEEEtran} \section{Introduction} \label{sec:introduction} Reinforcement Learning (RL) is one of the three machine learning paradigms besides supervised learning and unsupervised learning. It uses agents acting as human experts in a domain to take actions. RL does not require data with labels; instead, it learns from experiences by interacting with the environment, observing, and responding to results. RL can be expressed with Markov Decision Process (MDP) as shown in Figure \ref{fig:mdp-arch}. Each environment is represented with a state that reflects what is happening in the environment. The RL agent takes actions in the environment, that causes a change in the environment's current state generating a new state and receives a reward based on the results. The agent receives a positive reward for good actions and a negative reward for bad actions, which helps the agent evaluate the performed action in a given state and learn from experiences. \begin{figure}[!t] \centering \includegraphics[width=.7\linewidth]{DRL-Architecture.png} \caption{Markov Decision Process} \label{fig:mdp-arch} \end{figure} Video games have been one of the most popular RL applications, and RL algorithms have been mainly tested and evaluated on video games. However, RL has other applications and can be used in different domains such as self-driving cars, natural language processing (NLP), autonomous robotics, delivery drones, and many others. Furthermore, there are many diverse RL algorithms with different variations. Therefore, it is imperative to understand the differences between RL algorithms, select the appropriate algorithm suitable for the environment type and the task on hand. The most widely used algorithm is Deep Q-Network (DQN) with its variations because of its simplicity and efficiency. Nevertheless, DQN is suitable only for environments with discrete actions. 
For example, in autonomous UAV navigation (self-flying drones), many papers tend to simplify the environment to enable the use of DQN \cite{okuyama2018autonomous, chishti2018self}. However, in complex real-life environments, DQN would not be suitable if the environment is dynamic or the required actions are continuous. Therefore, to assist in matching the RL algorithm with the task, the classification of RL algorithms based on the environment type is needed. Consequently, this study provides an overview of different RL algorithms, classifies them based on the environment type, and explains their primary principles and characteristics. Additionally, relationships among different RL algorithms are also identified and described. The paper provides a perspective on the domain and helps researchers and practitioners to select appropriate algorithms for their use cases. Moreover, it provides options for selecting a suitable algorithm for the environment, rather than attempting to simplify the environment for the algorithm \cite{okuyama2018autonomous, chishti2018self}. The remainder of the paper is organized as follows: Section \ref{sec:background} introduces RL and discusses RL's main principles. Section \ref{sec:rl-algorithms} classifies RL algorithm and provides their overview. Finally, Section \ref{sec:conclusion} concludes the paper. \section{Background} \label{sec:background} This section first introduces reinforcement learning. Next, concepts of policy and value functions are described, and finally, experience replay is explained as a commonly used technique in different RL algorithms. \subsection{Reinforcement Learning} The RL agent learns from taking actions in the environment, which causes a change in the environment's current state and generates a reward based on the action taken as expressed in the Markov Decision Process (MDP). 
We define the probability of the transition to state $\mathbf{s^{\prime}}$ with reward $\bm{r}$ from taking action $\bm{a}$ in state $\bm{s}$ at time $\bm{t}$, for all $s^{\prime} \in S, \; s \in S, \; r \in R, \; a \in A(s)$, as: \begin{equation} P(s^{\prime},r|s,a) = Pr\{S_t = s^{\prime}, R_t=r| S_{t-1}=s, A_{t-1}=a\} \end{equation} The agent receives rewards for performing actions and uses them to measure the action's success or failure. The reward $R$ can be expressed in different forms, as a function of the action $R(a)$, or as a function of action-state pairs $R(a,s)$. The agent's objective is to maximize the expected summation of the discounted rewards, which drives the agent's choice of actions. The return is obtained by adding all the rewards generated while executing an episode. The \textit{episode} (trajectory) represents a finite number of actions and ends when the agent reaches a final state, for example, when a collision occurs in a simulated navigation environment. However, in some cases, the actions can be continuous and cannot be broken into episodes. The discounted return, as shown in equation \ref{equ:sum-disc-reward}, uses a multiplier $\bm{\gamma}$ to the power $\bm{k}$, where $\bm{\gamma \in [0,1]}$. The value of $k$ increases by one at each time step to emphasize the current reward and to reduce the impact of future rewards, hence the term discounted reward. \begin{equation} \label{equ:sum-disc-reward} G_t = E \left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}\right] \end{equation} Emphasizing the current action’s immediate reward and reducing the impact of future actions’ rewards helps the expected summation of discounted rewards converge. \subsection{Policy and Value Function} The agent’s behavior is defined by following a policy $\bm{\pi}$, where the policy $\bm{\pi}$ defines the probability of taking action $\bm{a}$, given a state $\bm{s}$, which is denoted as $\bm{\pi(a|s)}$.
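As a brief illustrative sketch (ours, not part of the original formulation), the discounted return of equation \ref{equ:sum-disc-reward} for a finite episode can be computed with a single backward pass over the observed rewards:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute G_0 = sum_k gamma^k * R_{k+1} for one finite episode.

    `rewards` is the list of rewards observed during the episode;
    iterating backwards folds each reward into the discounted future
    return, so the loop runs in O(len(rewards)).
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# three rewards of 1.0 with gamma = 0.5: 1 + 0.5 + 0.25
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```

With $\gamma < 1$ the sum stays bounded even for long episodes, mirroring the convergence argument above.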
Once the agent takes an action, it uses a value function to evaluate that action. The agent either uses: 1) a \textit{state-value function} to estimate how good it is for the agent to be in state $\bm{s}$, or 2) an \textit{action-value function} to measure how good it is for the agent to perform an action $\bm{a}$ in a given state $\bm{s}$. The action-value function is defined in terms of the expected summation of the discounted rewards and represents the target Q-value: \begin{equation} \label{equ:action-value-func} Q_\pi(s,a) = E_\pi \left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \; | \; S_t = s, A_t = a \right] \end{equation} The agent performs the action with the highest Q-value, which might not be the optimal Q-value. Finding the optimal Q-value requires selecting the best actions that maximize the expected summation of discounted rewards under the optimal policy $\bm{\pi}$. The optimal Q-value $\bm{Q_*(s,a)}$, as described in equation \ref{equ:optimal-policy}, must satisfy the \textit{Bellman optimality equation}, which is equal to the expected reward $\bm{R_{t+1}}$ plus the maximum expected discounted return that can be achieved for any possible next state-action pair $\bm{(s^\prime,a^\prime)}$. \begin{equation} \label{equ:optimal-policy} Q_*(s,a) = \underset{\pi}{max} \; Q_\pi(s,a) \end{equation} \vspace{-10pt} \begin{equation} \label{equ:bellman-optimality} Q_*(s,a) = E \left[ R_{t+1} + \gamma \; \underset{a^\prime}{max} \; Q_*(s^\prime,a^\prime) \right] \end{equation} This optimal Q-value $\bm{Q_*(s,a)}$ is used to train the neural network. The Q-value $\bm{Q(s,a)}$ predicted by the network is subtracted from the optimal Q-value $\bm{Q_*(s,a)}$ estimated using the Bellman equation, and the difference is backpropagated through the network.
The loss function is defined as follows: \begin{equation} \label{equ:loss-func} \overset{Target}{\overbrace{E \left[ R_{t+1} + \gamma \; \underset{a^\prime}{max} \; Q_*(s^\prime,a^\prime) \right]}} \; - \; \overset{Predicted}{\overbrace{E \left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}\right]}} \end{equation} \subsection{Experience Replay} In RL, an experience $\bm{e}$ can be described as the knowledge produced from the agent performing an action $\bm{a}$ in a state $\bm{s}$, causing a new state $\bm{s^\prime}$ and a generated reward $\bm{r}$. The experience can be expressed as a tuple $\bm{e(s,a,s^\prime,r)}$. Lin \cite{lin1992self} proposed a technique called \textit{Experience Replay}, where experiences are stored in a replay memory $\bm{D}$ and used to train the agent. Since experiences are stored in the memory, and some experiences might be of high importance, they can repeatedly be reused to train the agent, which improves convergence. Although experience replay should theoretically help the agent learn from previous important experiences, it entails sampling experiences uniformly from the replay memory $\bm{D}$ regardless of their significance. Schaul \textit{et al.} \cite{schaul2015prioritized} suggested the use of \textit{Prioritized Experience Replay}, which aims to prioritize experiences using the Temporal Difference error (TD-error) and replay more frequently the experiences that have a higher TD-error. \section{Reinforcement Learning Algorithms Classification} \label{sec:rl-algorithms} While most reinforcement learning algorithms use deep neural networks, different algorithms are suited for different environment types. We classify RL algorithms according to the number of states and the action types available in the environment into three main categories: 1) a limited number of states and discrete actions, 2) an unlimited number of states and discrete actions, and 3) an unlimited number of states and continuous actions.
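A minimal sketch of the uniform replay memory $D$ described in the Experience Replay subsection above (the class name and interface are our own illustration, not from a specific library):

```python
import random
from collections import deque

class ReplayBuffer:
    """Bounded replay memory D storing experiences e = (s, a, s', r)."""

    def __init__(self, capacity):
        # a deque with maxlen evicts the oldest experience when full
        self.memory = deque(maxlen=capacity)

    def push(self, s, a, s_next, r):
        self.memory.append((s, a, s_next, r))

    def sample(self, batch_size):
        # uniform sampling, regardless of significance (plain experience replay)
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)

buf = ReplayBuffer(capacity=1000)
for t in range(5):
    buf.push(t, 0, t + 1, 1.0)
batch = buf.sample(3)  # three experiences drawn uniformly at random
```

Prioritized experience replay would replace `sample` with sampling proportional to a priority derived from the TD-error, instead of uniform sampling.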
The three categories, together with algorithms belonging to them, are shown in Figure \ref{fig:RL-Algorthims} and discussed in the following subsections. \begin{figure*}[!t] \centering \includegraphics[width=.7\linewidth]{RL-Algorthims.png} \caption{Reinforcement Algorithms classification based on the environment type} \label{fig:RL-Algorthims} \end{figure*} \subsection{Environments with Limited States and Discrete Actions} \label{LimitedStatesDiscreteActions} Environments with discrete actions and limited states are relatively simple environments where the agent can select from pre-defined actions and be in pre-defined, known states. For example, when an agent is playing a tic-tac-toe game, the nine boxes represent the states, and the agent can choose from two actions, X or O, and update the available states. The Q-learning algorithm \cite{watkins1992q} is commonly used to solve problems in such environments. This algorithm finds the optimal policy in a Markov Decision Process (MDP) by maintaining a Q-table with all possible states and actions and iteratively updating the Q-values for each state-action pair using the Bellman equation until the Q-function converges to the optimal Q-value. State–Action–Reward–State–Action (SARSA) \cite{rummery1994line} is another algorithm from this category: it is similar to Q-learning except that it updates the current $\bm{Q(s,a)}$ value in a different way. In Q-learning, in order to update the current $\bm{Q(s,a)}$ value, we need to compute the next state-action value $\bm{Q(s^\prime,a^\prime)}$, and since the next action is unknown, Q-learning assumes a greedy action to maximize the reward \cite{zhao2016deep}. In contrast, when SARSA updates the current state-action value $\bm{Q(s,a)}$, it actually performs the next action $\bm{a^\prime}$ \cite{zhao2016deep}.
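To make the contrast between the two update rules concrete, here is a minimal tabular sketch (our own illustration; `alpha` is the learning rate, `gamma` the discount factor):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Q-learning bootstraps with the greedy next action: max_a' Q(s', a')."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """SARSA bootstraps with the action a' actually taken in state s'."""
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])

Q = np.zeros((3, 2))                              # 3 states, 2 actions
q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)   # Q[0, 1] becomes 0.1
```

The only difference between the two functions is which next-state Q-value appears in the TD target, exactly as described above.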
\subsection{Environments with Unlimited States and Discrete Actions} In some environments, such as playing a complex game, the states can be limitless; however, the agent's choice is limited to a finite set of actions. In such environments, the agent mainly consists of a Deep Neural Network (DNN), usually a Convolutional Neural Network (CNN), responsible for processing and extracting features from the state of the environment and outputting the available actions. Different algorithms can be used with this environment type, such as Deep Q-Networks (DQN), Deep SARSA, and their variants. \vspace{6 pt} \subsubsection{\textbf{Deep Q-Networks (DQN)}} \hfill \break Deep Q-Learning, also referred to as Deep Q-Networks (DQN), is considered the main algorithm used in environments with unlimited states and discrete actions, and it inspires other algorithms used for a similar purpose. DQN usually combines convolutional and pooling layers, followed by fully connected layers that produce Q-values corresponding to the number of actions. Figure \ref{fig:dqn} \cite{anwar2020autonomous} shows an AlexNet CNN followed by two fully connected layers to produce Q-values. The current scene from the environment represents the environment's current state; once it is passed to the network, the network produces Q-values used to select the best action to take. The agent acts and then captures the changes in the environment's current state and the reward generated by the action. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{DQN_Architecture.png} \caption{DQN using AlexNet CNN} \label{fig:dqn} \end{figure} A significant drawback of the DQN algorithm is overestimating the action-value (Q-value), where the agent tends to choose a non-optimal action because it has the highest Q-value \cite{VanHasselt2010}. \vspace{3 pt} \paragraph{Double and Dueling DQN (DD-DQN)} \hfill \break Double DQN uses two networks to solve this overestimation problem in DQN.
The first network, called the Policy Network, optimizes the Q-value, and the second network, the Target Network, is a replica of the policy network and is used to produce the estimated Q-value \cite{VanHasselt2016}. The target network parameters are updated after a certain number of time steps by copying the policy network parameters rather than using backpropagation. Another improvement on DQN is Dueling DQN, illustrated in Figure \ref{fig:Dueling-DQN} \cite{Wang2016}. Dueling DQN tries to define a better way to evaluate the Q-value by explicitly decomposing the Q-value function into two functions: \begin{itemize} \item A State-Value function $\bm{V(s)}$, which measures how good it is for the agent to be in state $\bm{s}$. \item An Advantage-Value function $\bm{A(s, a)}$, which captures how good an action is compared to other actions at a given state. \end{itemize} The two functions, shown in Figure \ref{fig:Dueling-DQN} \cite{Wang2016}, are combined via a special aggregation layer to produce an estimate of the state-action value function \cite{Wang2016}. The value of this function is equal to the summation of the two values produced by the two functions: \begin{equation} \label{eq:dueling-dqn} Q(s,a) = V(s) + \big( A(s,a) -\frac{1}{|\mathcal{A}|} \sum_{a^\prime} A(s,a^\prime) \big) \end{equation} The subtracted term $\bm{\frac{1}{|\mathcal{A}|} \sum_{a^\prime} A(s, a^\prime)}$ represents the mean, where $\bm{|\mathcal{A}|}$ represents the size of the vector $\bm{A}$. This term helps with identifiability, and it does not change the relative rank of the A (and hence Q) values. Additionally, it increases the stability of the optimization, as the advantage function only needs to change as fast as the mean \cite{Wang2016}. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{Dueling_DQN.png} \caption{DQN vs.
Dueling DQN} \label{fig:Dueling-DQN} \end{figure} Double Dueling DQN (DD-DQN) is another DQN algorithm: it combines Dueling DQN with Double DQN to find the optimal Q-value, as suggested originally by Wang \textit{et al.} \cite{Wang2016}, where the output from the Dueling DQN is passed to Double DQN. \vspace{3 pt} \paragraph{Deep Recurrent Q-Networks (DRQN)} \hfill \break Deep Recurrent Q-Network (DRQN) \cite{hausknecht2015deep} is an extension of the DQN algorithm, replacing the first fully connected layer with a recurrent LSTM layer of the same size. Adding the LSTM layer requires changing the input from a single state of the environment to multiple states (frames) as a single input, which helps to integrate information through time \cite{hausknecht2015deep}. \vspace{6 pt} \subsubsection{\textbf{Deep SARSA}} \hfill \break Basic SARSA, like Q-learning, relies on a table of Q-values and is suitable for environments with limited states and discrete actions, as described in subsection \ref{LimitedStatesDiscreteActions}. On the other hand, Deep SARSA for unlimited states uses a deep neural network similar to DQN: the main difference is that SARSA computes $\bm{Q(s^\prime,a^\prime)}$ by performing the next action $\bm{a^\prime}$, which is required to calculate the current state-action value $\bm{Q(s,a)}$. As shown in Figure \ref{fig:RL-Algorthims}, the extensions of Deep SARSA are the same as the extensions of DQN, with the main difference being how to calculate the next action-state value $\bm{Q(s^\prime,a^\prime)}$. \subsection{Environments with Unlimited States and Continuous Actions} Although discrete actions are sufficient to move a car or UAV in a virtual environment, such actions do not provide realistic object movement in real-life scenarios. Continuous actions describe the quantity of movement in different directions, where the agent does not choose from a list of predefined actions.
For example, a realistic UAV movement specifies the quantity of the required change in roll, pitch, yaw, and throttle values to navigate the environment while avoiding obstacles, rather than moving the UAV one step in predefined directions: up, down, left, right, and forward. A continuous action space requires the agent to learn a parameterized policy $\bm{\pi_\theta}$ to maximize the expected summation of the discounted rewards, because it is impossible to calculate the action-value for all continuous actions at different states. The problem is a maximization problem and can be solved using gradient ascent algorithms to find the optimal $\bm{\theta}$. The value of $\bm{\theta}$ is updated as follows: \begin{equation} \label{eq:gradient-ascent} \theta_{t+1} = \theta_{t} + \alpha\nabla J(\theta_{t}) \end{equation} \noindent where $\bm{\alpha}$ is the learning rate and $\bm{\nabla}$ is the gradient. The objective of the reward function $\bm{J}$ is to maximize the expected reward using a parameterized policy $\pi_\theta$ as follows \cite{sutton2018reinforcement}: \begin{equation} \label{eq:policy-gradient} \begin{split} J(\pi_\theta) &= \sum_{s \in S} \rho_{\pi_\theta}(s) \; V^{\pi_\theta}(s) \\ &= \sum_{s \in S} \rho_{\pi_\theta}(s) \; \sum_{a\in A} Q^{\pi_\theta}(s, a) \ \pi_{\theta}(a|s) \end{split} \end{equation} Here $\bm{\rho_{\pi_\theta}(s)}$ defines the stationary probability of $\bm{\pi_\theta}$, starting from state $\bm{s_0}$ and transitioning to future states following the policy $\bm{\pi_\theta}$ for $\bm{t}$ time steps.
Finding the optimal $\bm{\theta}$ that maximizes the function $\bm{J(\pi_\theta)}$ requires finding the gradient $\bm{\nabla_{\theta} J(\theta)}$: \begin{equation} \label{eq:policy-gradient-theorem} \begin{split} \nabla_{\theta} J(\theta) &= \nabla_{\theta} \biggl( \sum_{s \in S} \rho_{\pi_\theta}(s) \; \sum_{a\in A} Q^{\pi_\theta}(s, a) \ \pi_{\theta}(a|s) \biggr) \\ &\propto \sum_{s \in S} \mu(s) \; \sum_{a\in A} Q^{\pi_\theta}(s, a) \ \nabla \pi_{\theta}(a|s)\ \end{split} \end{equation} Equation \ref{eq:policy-gradient-theorem} can be further rewritten as an expectation over the on-policy state and action distributions: \begin{equation} \label{eq:policy-gradient-theorem2} \nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\pi_\theta} , a \sim \pi_\theta} \Big [ Q^{\pi_\theta}(s, a) \; \nabla_{\theta} \ln \pi_\theta (a_t|s_t) \Big ] \end{equation} When the training sample is collected according to the target policy $\bm{s \sim \rho^{\pi_\theta}}$ and the expected return is generated for the same policy $\bm{\pi_\theta}$, the algorithm is referred to as an \textit{on-policy algorithm}. On the other hand, in \textit{off-policy algorithms}, the training sample follows a behavior policy $\bm{\beta(a|s)}$, which is different from the target policy $\bm{\pi_\theta(a|s)}$ \cite{silver2014deterministic}, while the expected reward is generated using the target policy $\bm{\pi_\theta}$. Off-policy algorithms do not require full trajectories (episodes) for the training sample, and they can reuse past trajectories. Equation \ref{eq:off-policy-gradient-theorem} \cite{silver2014deterministic} shows how the gradient is weighted by the ratio between the target policy $\bm{\pi_\theta(a|s)}$ and the behaviour policy $\bm{\beta(a|s)}$.
\begin{equation} \label{eq:off-policy-gradient-theorem} \nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\beta} , a \sim \beta} \Big [ \frac{\pi_\theta(a|s)}{\beta_{\theta}(a|s)} Q^{\pi_\theta}(s, a) \; \nabla_{\theta} \ln \pi_\theta (a_t|s_t) \Big ] \end{equation} The policy gradient theorem shown in equation \ref{eq:policy-gradient} \cite{sutton2000policy} is considered the fundamental basis of various Policy Gradient (PG) algorithms such as REINFORCE \cite{williams1992simple}, Actor-Critic algorithms \cite{konda2000actor}, Trust Region Policy Optimization (TRPO) \cite{schulman2015trust}, Phasic Policy Gradient \cite{raileanu2021decoupling}, Stein Variational Policy Gradient \cite{liu2017stein}, Proximal Policy Optimization (PPO) \cite{schulman2017proximal}, and many others. \vspace{6 pt} \subsubsection{\textbf{REINFORCE}} \hfill \break REINFORCE is an acronym for \textbf{RE}ward \textbf{I}ncrement $=$ \textbf{N}onnegative \textbf{F}actor $\times$ \textbf{O}ffset \textbf{R}einforcement $\times$ \textbf{C}haracteristic \textbf{E}ligibility \cite{williams1992simple}. REINFORCE is a Monte-Carlo policy gradient algorithm that works in the episodic case. It requires a complete episode to obtain a sample proportional to the gradient, and it updates the policy parameter $\bm{\theta}$ with the step size $\bm{\alpha}$. Because $\bm{\mathbb{E}_{\pi}[G_t|S_t, A_t] = Q^{\pi}(s, a)} $, REINFORCE can be defined as \cite{sutton2018reinforcement}: \begin{equation} \label{eq:REINFORCE} \nabla_{\theta} J(\theta) = \mathbb{E}_{\pi} \Big [G_t \; \nabla_{\theta} \ln \pi_\theta (A_t|S_t) \Big ] \end{equation} REINFORCE uses the Monte Carlo method, which suffers from high variance and, consequently, learns slowly \cite{williams1992simple}. Adding a baseline to REINFORCE reduces the variance and speeds up learning, while keeping the bias unchanged, by subtracting the baseline value from the expected return $\bm{G_t}$ \cite{sutton2018reinforcement}.
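A minimal sketch of the REINFORCE gradient estimate of equation \ref{eq:REINFORCE} for a one-state problem with a softmax policy over logits $\theta$ (our own illustration; a real implementation would rely on automatic differentiation):

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_gradient(theta, episode, gamma=1.0, baseline=0.0):
    """Monte-Carlo estimate of grad J = sum_t (G_t - b) * grad log pi(a_t).

    `episode` is a list of (action, reward) pairs; the policy is
    pi(a) = softmax(theta), so grad log pi(a) = one_hot(a) - pi.
    """
    # compute returns G_t by iterating backwards over the episode
    returns, g = [], 0.0
    for _, r in reversed(episode):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()

    grad = np.zeros_like(theta)
    probs = softmax(theta)
    for (a, _), g_t in zip(episode, returns):
        grad_log = -probs.copy()
        grad_log[a] += 1.0                 # one_hot(a) - pi
        grad += (g_t - baseline) * grad_log
    return grad

theta = np.zeros(2)                        # uniform policy over 2 actions
g = reinforce_gradient(theta, [(0, 1.0)])  # reward observed for action 0
# the gradient pushes probability mass toward the rewarded action
```

Subtracting a nonzero `baseline` from each $G_t$ leaves the gradient unbiased but reduces its variance, as noted above.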
\vspace{6 pt} \subsubsection{\textbf{Trust Region Policy Optimization (TRPO)}} \hfill \break Trust Region Policy Optimization (TRPO) \cite{schulman2015trust} is a PG algorithm that improves the performance of gradient ascent by taking larger steps within trust regions defined by a KL-divergence constraint, and it performs the policy update after each trajectory rather than after each state. Proximal Policy Optimization (PPO) \cite{schulman2017proximal} can be considered an extension of TRPO; it imposes the constraint as a penalty and clips the objective to ensure that the optimization is carried out within the predefined range \cite{shin2019obstacle}. Phasic Policy Gradient (PPG) \cite{cobbe2020phasic} extends PPO by including a periodic auxiliary phase, which distills features from the value function into the policy network to improve training. This auxiliary phase enables feature sharing between the policy and the value function while decoupling their training. \vspace{6 pt} \subsubsection{\textbf{Stein Variational Policy Gradient (SVPG)}} \hfill \break Stein Variational Policy Gradient (SVPG) \cite{liu2017stein} applies Stein variational gradient descent (SVGD) \cite{liu2016stein} to update the policy parameterized by $\bm{\theta}$, which reduces variance and improves convergence. SVPG improves the average return and data efficiency when used on top of REINFORCE and advantage actor-critic algorithms \cite{liu2017stein}.
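The objective clipping used by PPO, described above, can be sketched as follows (our own illustration of the clipped surrogate; `eps` is the clip range):

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the probability ratio pi_theta(a|s) / pi_theta_old(a|s)."""
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

# a ratio far above 1 with positive advantage is clipped to (1 + eps) * A
print(ppo_clipped_objective(ratio=2.0, advantage=1.0))  # 1.2
```

Taking the minimum of the clipped and unclipped terms removes the incentive for the new policy to move far from the old one, which is the "predefined range" mentioned above.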
\vspace{6 pt} \subsubsection{\textbf{Actor-Critic}} \hfill \break Actor-Critic algorithms are a set of algorithms based on the policy gradient theorem that consist of two components: \begin{enumerate} \item An Actor, responsible for adjusting the parameter $\bm{\theta}$ of the policy $\bm{\pi_\theta}$ \item A Critic, which employs a parameterized vector $\bm{w}$ to estimate the value function $\bm{Q^{w}(s_t,a_t) \approx Q^{\pi}(s_t,a_t)}$ using a policy evaluation algorithm such as temporal-difference learning \cite{silver2014deterministic} \end{enumerate} The actor can be described as the network that computes the probability of all available actions and selects the action with the highest value, while the critic can be described as the network that evaluates the selected action by estimating the value of the new state resulting from performing the action. Different algorithms fall under the actor-critic category; the main ones are described in the following subsections. \vspace{3 pt} \paragraph{Deterministic Policy Gradients (DPG) Algorithms} \hfill \break All deterministic policy gradient algorithms model the policy as a deterministic policy $\bm{\mu(s)}$, rather than a stochastic policy $\bm{\pi(a|s)}$ modeled over the actions' probability distribution. We described earlier, in Equation \ref{eq:policy-gradient}, the objective function under a selected policy $\bm{J(\pi_\theta)}$ to be $\bm{\sum_{s \in S} \rho_{\pi_\theta}(s) \; V^{\pi_\theta}(s)}$; a deterministic policy is a special case of a stochastic policy, where the objective function of the target policy is averaged over the state distribution of the behaviour policy, as described in equation \ref{eq:deterministic-policy-gradient} \cite{silver2014deterministic}.
\begin{equation} \label{eq:deterministic-policy-gradient} \begin{split} J_{\beta}(\mu_{\theta}) &= \int_{S} \rho^{\beta}(s) \ V^{\mu}(s) \ ds \\ &= \int_{S} \rho^{\beta}(s) \ Q^{\mu}(s,\mu_{\theta}(s)) \ ds \end{split} \end{equation} In the off-policy approach with a stochastic policy, importance sampling is often used to correct the mismatch between the behaviour and target policies. However, because the deterministic policy gradient removes the integral over actions, we can avoid importance sampling, and the gradient becomes: \begin{equation} \label{eq:deterministic-policy-gradient-theorem} \begin{split} \nabla_{\theta} J_{\beta}(\mu_{\theta}) &\approx \int_{S} \rho^{\beta}(s) \ \nabla_{\theta} \ Q^{\mu}(s,\mu_{\theta}(s)) \ ds \\ &= \mathbb{E}_{s \sim \rho^{\beta}} \ \Big [ \nabla_{\theta} \ \mu_{\theta}(s) \nabla_{a} Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)} \Big ] \end{split} \end{equation} Different algorithms build on DPG with improvements; for example, Deep Deterministic Policy Gradient (DDPG) \cite{lillicrap2015continuous} adapts DQN to work with continuous action spaces and combines it with DPG. On the other hand, Distributed Distributional DDPG (D4PG) \cite{barth2018distributed} adopts distributed settings for DDPG, with additional improvements such as using N-step returns and prioritized experience replay \cite{barth2018distributed}. Multi-agent DDPG (MADDPG) \cite{lowe2017multi} is another algorithm that extends DDPG to work with multiple agents; it considers the action policies of other agents and learns policies that require multi-agent coordination \cite{lowe2017multi}. Twin Delayed Deep Deterministic (TD3) \cite{fujimoto2018addressing} builds on Double DQN and applies it to DDPG to prevent the overestimation of the value function by taking the minimum value between a pair of critics \cite{fujimoto2018addressing}.
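A toy numerical sketch of the chain rule in equation \ref{eq:deterministic-policy-gradient-theorem} (entirely our own illustration): a scalar linear actor $\mu_\theta(s) = \theta s$ and a toy critic with a known action gradient.

```python
import numpy as np

def dpg_gradient(theta, states, grad_a_q):
    """Estimate grad_theta J = E_s[ grad_theta mu(s) * grad_a Q(s, a)|_{a=mu(s)} ]
    for the scalar linear actor mu_theta(s) = theta * s (so d mu / d theta = s)."""
    grads = []
    for s in states:
        a = theta * s                     # deterministic action
        grads.append(s * grad_a_q(s, a))  # chain rule
    return float(np.mean(grads))

# toy critic Q(s, a) = -(a - 1)^2, hence grad_a Q = -2 (a - 1);
# at theta = 0 the gradient pushes theta toward the optimum theta = 1
grad = dpg_gradient(theta=0.0, states=[1.0, 1.0],
                    grad_a_q=lambda s, a: -2.0 * (a - 1.0))
print(grad)  # 2.0
```

No integral over actions appears, which is why DPG methods avoid the importance-sampling correction discussed above.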
\vspace{3 pt} \paragraph{Advantage Actor-Critic (A3C)} \hfill \break Asynchronous Advantage Actor-Critic (A3C) \cite{mnih2016asynchronous} is a policy gradient algorithm that uses multiple threads, also known as agents or workers, for parallel training. Each agent maintains a local policy $\bm{\pi_\theta(a_t|s_t)}$ and an estimate of the value function $\bm{V_\theta(s_t)}$. The agent synchronizes its parameters with a global network having the same structure. The agents work asynchronously, and the values of the network parameters flow in both directions between the agents and the global network. The policy and the value function are updated after $t_{max}$ actions or when a final state is reached \cite{mnih2016asynchronous}. Advantage Actor-Critic (A2C) \cite{mnih2016asynchronous} is another policy gradient algorithm similar to A3C, except that it has a coordinator responsible for synchronizing all agents. The coordinator waits for all agents to finish their work, either by reaching a final state or by performing $\bm{t_{max}}$ actions, before it updates the policy and the value function, with parameters again flowing in both directions between the agents and the global network. Actor-Critic with Experience Replay (ACER) is an off-policy actor-critic algorithm with experience replay that uses a single deep neural network to estimate the policy $\pi_\theta(a_t|s_t)$ and the value function $V_{\theta_v}^{\pi}(s_t)$ \cite{wang2016sample}. The three main advantages of ACER over A3C are: 1) it improves the truncated importance sampling with bias correction, 2) it uses stochastic dueling network architectures, and 3) it applies a new \textit{trust region policy optimization} method \cite{wang2016sample}.
ACER uses an improved Retrace algorithm, as described in Equation \ref{eq:retrace-algorithm} \cite{munos2016safe}, by applying truncated importance sampling with a bias correction technique and using the value $\bm{Q^{ret}}$ as the target value to train the critic by minimizing the L2 error term \cite{wang2016sample}. In ACER, the gradient $\bm{\hat{g}_{t}^{acer}}$ is computed by truncating the importance weights by a constant $\bm{c}$ and subtracting $\bm{V_{\theta_v}(s_t)}$ to reduce variance, as denoted in Equation \ref{eq:acer-gradiant} \cite{wang2016sample}. \begin{equation} \label{eq:retrace-algorithm} \begin{split} Q^{ret}(s_t,a_t) &= r_t + \gamma \bar{\rho}_{t+1} \big [ Q^{ret}(s_{t+1},a_{t+1}) - Q(s_{t+1},a_{t+1}) \big ] \\ & \;\; + \gamma V(s_{t+1}) \end{split} \end{equation} \begin{equation} \label{eq:acer-gradiant} \begin{split} \hat{g}_{t}^{acer} & = \bar{\rho}_t \nabla_{\theta} \ln \pi_\theta(a_t|s_t) \big [Q^{ret}(s_t , a_t) - V_{\theta_v}(s_t) \big ] \\ & \;\; + \underset{a \sim \pi}{\mathbb{E}} \Big ( \big [ \frac{\rho_t(a) - c}{\rho_t(a)} \big ] \nabla_{\theta} \ln \pi_\theta(a_t|s_t) \\ & \;\;\;\;\;\; \big [ Q_{\theta_v}(s_t,a_t) - V_{\theta_v}(s_t) \big ] \Big ) \end{split} \end{equation} Actor-Critic using Kronecker-Factored Trust Region (ACKTR) \cite{wu2017scalable} is another extension of A3C \cite{mnih2016asynchronous}, which optimizes both the actor and the critic by using Kronecker-factored approximate curvature (K-FAC) \cite{martens2015optimizing}. It provides an improved computation of the natural gradients by allowing the covariance matrix of the gradient to be efficiently inverted \cite{wu2017scalable}. \vspace{3 pt} \paragraph{Soft Actor-Critic (SAC)} \hfill \break Soft Actor-Critic (SAC) aims to maximize the expected reward while also maximizing the entropy \cite{haarnoja2018soft}.
SAC augments the maximum expected sum of rewards, defined by accumulating the reward over state transitions $J(\pi) = \sum_{t=1}^{T} \mathbb{E}_{s \sim \rho^{\pi} , a \sim \pi} \Big [ r(s_t,a_t) \Big ]$, with the expected entropy of the policy over $\rho_\pi(s_t)$ \cite{haarnoja2018soft}. Equation \ref{eq:sac-entropy} shows the generalized entropy objective, where the temperature parameter $\alpha$ controls the stochasticity of the optimal policy by defining the relevance of the entropy term $\mathcal{H}(\pi(.|s_t))$ to the reward \cite{haarnoja2018soft}. \begin{equation} \label{eq:sac-entropy} J(\pi) = \sum_{t=1}^{T} \mathbb{E}_{s \sim \rho^{\pi} , a \sim \pi} \Big [ r(s_t,a_t) + \alpha \mathcal{H}(\pi(.|s_t)) \Big ] \end{equation} SAC uses two separate neural networks for the actor and the critic, and applies function approximators to estimate a soft Q-function $\bm{Q_\theta(s_t,a_t)}$ parameterized by $\bm{\theta}$, a state value function $\bm{V_\psi(s_t)}$ parameterized by $\bm{\psi}$, and an adjustable policy $\bm{\pi_\phi(a_t|s_t)}$ parameterized by $\bm{\phi}$. \vspace{3 pt} \paragraph{Importance Weighted Actor-Learner Architecture (IMPALA)} \hfill \break Importance Weighted Actor-Learner Architecture (IMPALA) \cite{espeholt2018impala} is an off-policy learning algorithm that decouples acting from learning and can be used in two different setups: 1) a single learner and 2) multiple synchronous learners. In the single-learner, multiple-actor setup, each actor generates trajectories, sends each trajectory to the learner, and receives the updated policy before starting a new trajectory. The learner learns from the actors simultaneously by saving the trajectories received from the actors in a queue and generating the updated policy. Nevertheless, actors might learn from an older model because actors are not aware of each other and because of the lag between the actors and the learner.
To resolve this issue, IMPALA uses a novel V-trace correction method based on truncated importance sampling (IS), i.e., the ratio between the learner policy $\bm{\pi}$ and the actor's current policy $\bm{\mu}$. Similarly, with multiple synchronous learners, the policy parameters are distributed across multiple learners that work synchronously through a master learner \cite{espeholt2018impala}. \section{Conclusion} \label{sec:conclusion} Deep Reinforcement Learning has shown advances in solving sophisticated problems in real-life scenarios. The environment type of the application has a vital role in selecting an appropriate RL algorithm that provides good results and performance. In this work, we have identified three environment types based on the number of actions and states: 1) limited states and discrete actions, 2) unlimited states and discrete actions, and 3) unlimited states and continuous actions. Environments with a limited number of states and limited actions are considered simple environments and can be handled using Q-learning and SARSA. Complex environments have unlimited states representing the environment, and the appropriate algorithm depends on the number of actions. If the actions are limited (discrete), value-based algorithms such as DQN and its variations would be the choice. However, if the actions are continuous, policy gradient algorithms are appropriate, as they can learn a parameterized policy that approximates the solution. This classification helps researchers and practitioners select appropriate RL algorithms for their studies and applications. Further investigation of the algorithms' performance in different use case scenarios is needed: the algorithms should be compared with respect to accuracy, convergence, computational resources, and ease of use. Moreover, diverse use cases and requirements should be considered in the evaluation. \bibliographystyle{IEEEtran}
\section{Introduction} \label{section:intro} Data observed on a two-dimensional domain arise naturally across several disciplines, motivating an increasing demand for dedicated analysis techniques. Functional Data Analysis (FDA) (\citealt{ramsay2005functional}) is thus accruing interest as a valid option for representing such complex data, since it allows preserving their continuous nature, and provides a rigorous mathematical framework. Among the others, \citet{Zhou} analyzed temperature tracking of specific areas, presenting and comparing two approaches for performing Functional Principal Component Analysis (FPCA) on functions defined on a non-rectangular domain, \citet{Munoz} gave a detailed description of the entire imaging process using the FDA approach, proposing also a representation of iris images through functional data, while a novel regularization technique for Gaussian random fields on a rectangular domain has been proposed by \citet{Raket} and applied to 2D electrophoresis images. Another bivariate smoothing approach in a penalized regression framework is introduced by \citet{Ivanescu}, allowing for the estimation of multiple functional parameters of completely or incompletely sampled two-dimensional functional data. As shown by \citet{Gervini}, mortality rates can also be interpreted as two-dimensional functional data, where one dimension is the temporal one and the other one refers to age. Whereas in all the work reviewed above functions are assumed to be realizations of \textit{iid} or at least \textit{exchangeable} random objects, to the best of our knowledge there is no literature focusing on forecasting of time-dependent two-dimensional functional data. In this work, we will focus on time series of surfaces, representing them as two-dimensional functional time series (FTS).
A two-dimensional functional time series is an ordered sequence $Y_1, \dots, Y_T$ of random variables with values in a functional Hilbert space $\mathbb{H}$, characterized by some sort of temporal dependency. More formally, we consider a probability space $(\Omega, \mathcal{F},\mathbb{P})$, and define a random function at time $t$ as $Y_t: \Omega \rightarrow \mathbb{H}$, measurable with respect to the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{H})$. In the rest of the article we will consider functions belonging to $\mathbb{H} = \mathcal{L}^2([c,d] \times [e,f])$, the space of measurable square integrable real-valued functions defined on the rectangle $[c,d] \times [e,f] \subset \mathbb{R}^2$, with $c,d,e,f \in \mathbb{R}$, $c<d$, $e<f$\footnote{Such a choice is motivated by many reasons, first among them the fact that, by considering functions in $\mathcal{L}^2([c,d] \times [e,f])$, the usual Fréchet mean for functional data coincides with the pointwise mean and the covariance kernel coincides with the pointwise covariance. Moreover, $\mathbb{H}$ is a separable Hilbert space, with the usual inner product: $ \langle x,y \rangle := \int_c^d \int_e^f x(u,v) y(u,v) du dv \qquad \forall x,y \in \mathbb{H} $ }. We stress the fact that, from a theoretical point of view, our methodology can be applied to functions defined on a generic subset of $\mathbb{R}^2$; however, for simplicity and without loss of generality, we will only consider rectangular domains. Given a realization of such a stochastic process $\{Y_t\}_{t=1, \dots, T}$, we aim to forecast the next realization and, at the same time, quantify the uncertainty around the predicted function. Whereas uncertainty quantification in the context of \textit{univariate} FTS forecasting has received increasing attention in the statistical community in recent decades, no attempts have been made to extend such methods to functions defined on a bidimensional domain.
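In practice, surfaces are observed on a finite grid, and the $\mathcal{L}^2$ inner product above is approximated numerically. The following is a minimal sketch (in Python, used here purely for illustration; all names are our own) based on composite trapezoidal weights on a uniform grid:

```python
import numpy as np

def l2_inner_product(x, y, du, dv):
    """Trapezoidal approximation of <x, y> = int int x(u,v) y(u,v) du dv
    for surfaces x, y sampled on a common uniform rectangular grid."""
    wu = np.ones(x.shape[0]); wu[[0, -1]] = 0.5   # trapezoid weights along u
    wv = np.ones(x.shape[1]); wv[[0, -1]] = 0.5   # trapezoid weights along v
    return du * dv * float(np.sum(wu[:, None] * wv[None, :] * x * y))

# Example: x(u,v) = u*v and y(u,v) = 1 on [0,1]x[0,1]; the exact value is 1/4
n = 101
u = np.linspace(0.0, 1.0, n)
v = np.linspace(0.0, 1.0, n)
U, V = np.meshgrid(u, v, indexing="ij")
print(l2_inner_product(U * V, np.ones((n, n)), u[1] - u[0], v[1] - v[0]))  # approx 0.25
```

Since the integrand in the example is bilinear, the trapezoidal rule recovers the exact value up to floating-point roundoff.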
Specifically, most of the research has focused on adapting the Bootstrap to the functional setting (see e.g. \citealt{Hyndman_Shang} and \citealt{Rossini}). However, the aforementioned Bootstrap procedure is very computationally intensive, especially in the infinite-dimensional context of functional data. In this work, we will instead focus on Conformal Prediction (CP), a more recent and versatile nonparametric approach to prediction. The first appearance of such a technique dates back to \citet{gammerman}, and it has later been presented in great detail in the book by \citet{Vovk2005AlgorithmicLI} and in \citet{Balasubramanian}. A recent review of the theory of Conformal inference can be found in \citet{Zeni2020ConformalPA}. In a nutshell, the CP approach is based on the idea of assigning scores to new candidate realizations in order to assess their nonconformity with respect to other observed data. Prediction sets are then derived by inverting the hypothesis test obtained using such scores, including only candidate realizations with a conformity level higher than a properly selected threshold. The attractiveness of Conformal Prediction lies in its great generality and versatility, which allows it to be coupled with potentially any predictive algorithm in order to obtain distribution-free prediction sets. We will consider only the Inductive Conformal Prediction, also known as Split Conformal Prediction, method (\citealt{Papadopoulos2002InductiveCM}). This modification of the original Transductive Conformal method is not only computationally efficient, but also necessary in high-dimensional frameworks like that of functional data. Indeed, the main drawback of the Full Conformal approach is that the prediction algorithm needs to be trained again for every possible candidate realization $y$. In practice, in multivariate problems, where $y$ lies in $\mathbb{R}^p$, one runs the above routine for candidates $y$ over a $p$-dimensional grid.
While such an approach is prohibitive for high-dimensional spaces, since computational times grow exponentially with $p$, it is actually infeasible in a functional setting, in which $y$ lies in an infinite-dimensional space. On the contrary, employing Split Conformal inference along with a specific family of nonconformity scores, introduced by \citet{diquigiovanni2021importance} and explicitly tailored to functional data, permits deriving prediction sets in closed form. It is important to notice that the theory of CP is developed under the sole assumption of exchangeable data. Such a mild hypothesis, despite being one of the strengths of CP, is clearly not suitable for the time series context, in which one has to deal with temporal dependence between data. Adapting CP beyond exchangeable data has recently gathered attention in the statistical community. \citet{chernozhukov2018exact} rephrased the CP framework in the context of randomization inference, proving approximate validity of the resulting prediction sets under weak assumptions on the conformity scores and on the ergodicity of the time series. Later, \citet{diquigiovanni2021_FTS} adapted such methodology to allow for functional time series in a Split Conformal setting. We will extend such method to two-dimensional functional data. At the same time, we will propose different point predictors for 2D FTS, eventually comparing them in order to assess how forecasting performance influences the width of the resulting prediction band. The rest of this paper is organized as follows: we start by introducing the Conformal Prediction methodology for two-dimensional functional data in Section \ref{section:CP}, providing theoretical guarantees on the resulting prediction bands.
In Section \ref{section:point_prediction}, we introduce different point-prediction algorithms for two-dimensional functional time series, proposing for the first time an extension of the Functional Autoregressive Model for two-dimensional functional data. Such forecasting algorithms are then adapted to the Conformal inference setting and consequently compared by means of the resulting prediction bands in a simulation study in Section \ref{section:4}. Finally, in Section \ref{section:BlackSea} we employ the developed techniques to obtain forecasts and prediction bands in a real scenario, predicting day by day the sea level over the Black Sea. Section \ref{section:conclusions} concludes. \section{Methodology: Conformal Prediction for 2D Functional Data} \label{section:CP} Consider a time series $Z_1, \dots, Z_T$ of regression pairs $Z_t = (X_t, Y_t)$, with $t=1, \dots, T$. Let $Y_t$ be a random variable with values in $\mathbb{H}$, while $X_t$ is a set of random covariates at time $t$ belonging to a measurable space. Notice that $X_t$ is a generic set of regressors, which may contain both exogenous and endogenous variables. In particular, later in the manuscript, we will consider $X_t$ to contain only the lagged version of the function $Y_t$, namely $Y_{t-1}$. Given a significance level $\alpha \in [0,1]$, we aim to design a procedure that outputs a prediction set $\mathcal{C}_{T,1-\alpha}(X_{T+1})$ for $Y_{T+1}$ based on $Z_1, \dots, Z_{T}$ and $X_{T+1}$, with unconditional coverage probability close to $1-\alpha$. 
More formally, we define $\mathcal{C}_{T,1-\alpha}(X_{T+1})$ to be a \textit{valid} prediction set if: \begin{equation} \label{eqn:valid_set} \mathbb{P}(Y_{T+1} \in \mathcal{C}_{T,1-\alpha}(X_{T+1})) \geq 1 - \alpha \end{equation} Specifically, we would like to construct a particular type of prediction set, commonly known as a \textit{prediction band}, formally defined as: \begin{equation} \label{eq:bands} \{ y \in \mathbb{H}: y(u,v) \in B_{n}(u,v) \quad \forall (u,v) \in [c,d] \times [e,f] \} \end{equation} where $B_n(u,v) \subseteq \mathbb{R}$ is an interval for each $(u,v) \in [c,d] \times [e,f]$. The convenience of this type of prediction set in applications is extensively motivated in the literature (see e.g. \citealt{Pintado}, \citealt{Lei1} and \citealt{diquigiovanni2021importance}), since a prediction set of this type can be visualized easily, a property that is instead not guaranteed if the prediction region is a generic subset of $\mathbb{H}$. Let $z_1, \dots, z_T$ be realizations of $Z_1, \dots, Z_T$. As the name suggests, Split Conformal inference is based on a random split of the data into two disjoint sets: let $\mathcal{I}_1$, $\mathcal{I}_2$ be a random partition of $\{1, \dots, T\}$, such that $|\mathcal{I}_1|=m$, $|\mathcal{I}_2|=l$, $m,l \in \mathbb{N}$, $m,l > 0$, $m+l=T$. Historical observations $z_1, \dots, z_T$ are divided into a \textit{training set} $\{z_h,\, h \in \mathcal{I}_1\}$, from which we will estimate the model, and a \textit{calibration set} $\{z_h,\, h \in \mathcal{I}_2\}$, which will be used in an out-of-sample context to measure the nonconformity of a new candidate function. The choice of the split ratio and of the type of split is non-trivial and has motivated discussion in the statistical community. We will fix the training-calibration ratio equal to 1 and perform a random split, but we refer to \ref{appendix_split} for a more extensive discussion of these issues.
We then introduce a \textit{nonconformity measure} $\mathcal{A}$, which is a measurable function with values in $\mathbb{R} \cup \{+\infty\}$. The role of $\mathcal{A}(\{z_h, \, h \in \mathcal{I}_1\}, z)$ is to quantify the nonconformity of a new datum $z$ with respect to the training set $\{z_h, \, h \in \mathcal{I}_1\}$. The choice of the nonconformity measure is crucial if we aim to find prediction bands \eqref{eq:bands} and if we want to find them in closed form. Moreover, as will be discussed later, this choice will also affect the size of the resulting prediction bands, and thus their efficiency. Motivated by such considerations, we will employ the following nonconformity score, introduced by \citet{diquigiovanni2021importance}, extended here to two-dimensional functional data: \begin{equation} \label{eqn:nonconformitymeasure} \mathcal{A}(\{z_h: h \in \mathcal{I}_1\}, z) = \esssup_{(u,v) \in [c,d] \times [e,f]} \frac{| y(u,v) - g_{\mathcal{I}_1}(u,v;x_{T+1}) |}{s_{\mathcal{I}_1}(u,v)} \end{equation} where $z=(x_{T+1}, y)$, $g_{\mathcal{I}_1}$ is a point predictor built from the training set $\mathcal{I}_1$, and $s_{\mathcal{I}_1}$ is a \textit{modulation function}, which is a positive function depending on $\mathcal{I}_1$ itself that allows for prediction bands with non-constant width along the domain. The estimation of $g_{\mathcal{I}_1}$ is discussed in Section \ref{section:4}, while the functional standard deviation will be employed as modulation function $s_{\mathcal{I}_1}$, allowing for wider bands in the parts of the domain where data show high variability and narrower and more informative prediction bands in those parts characterized by low variability. For an extensive discussion on the optimal choice of modulation function, we refer to \citet{diquigiovanni2021importance}. 
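On gridded data, evaluating the score \eqref{eqn:nonconformitymeasure} amounts to a maximum over grid points. A minimal sketch (in Python, with the arrays standing for the candidate surface, the point prediction $g_{\mathcal{I}_1}$ and the modulation function $s_{\mathcal{I}_1}$ evaluated on the grid; all names are our own):

```python
import numpy as np

def nonconformity_score(y, g, s):
    """Discretized sup-type nonconformity score: the maximum over the grid
    of |y(u,v) - g(u,v)| / s(u,v)."""
    return float(np.max(np.abs(y - g) / s))

y = np.array([[1.0, 2.0], [3.0, 4.0]])   # candidate surface on a 2x2 grid
g = np.zeros((2, 2))                     # point prediction on the same grid
s = np.array([[1.0, 1.0], [1.0, 2.0]])   # modulation function (positive)
print(nonconformity_score(y, g, s))      # max(1, 2, 3, 2) = 3.0
```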
\begin{comment} Notice that, from a theoretical point of view, the nonconformity measure $\mathcal{A}$ may be $+\infty$, since we are embedding functions in $\mathcal{L}^2([c,d] \times [e,f])$ and we have thus no guarantee on their boundedness. To overcome this issue, one can instead consider the functional space $\mathcal{L}^{\infty}([c,d] \times [e,f])$, as done by \citet{diquigiovanni2021importance}, however, such space equipped with the usual $\mathcal{L}^2$ scalar product, is not closed, and is therefore not a Hilbert space. For such reason, we decided to settle anyway the analysis in $\mathcal{L}^2([c,d] \times [e,f])$, resorting to the fact that, in practical applications, \eqref{eqn:nonconformitymeasure} will only assume finite values, given the finite nature of observed data. \end{comment} Consider now a candidate function $y \in \mathbb{H}$ and define the augmented dataset as $Z_{(y)} = \{Z_t\}_{t=1}^{T+1}$, where: \begin{equation} Z_t = \begin{cases} (X_t, Y_t),& \text{if } 1 \leq t \leq T \\ (X_{T+1}, y) ,& \text{if } t = T+1 \end{cases} \end{equation} The key idea of the methodology proposed by \citet{chernozhukov2018exact} and extended by \citet{diquigiovanni2021_FTS} is to generate several randomized versions of $Z_{(y)}$ through a specifically tailored permutation scheme, and compute nonconformity scores on each of them. We will then decide whether to include $y$ in the prediction region by comparing the nonconformity score of $Z_{(y)}$ with that of its permuted replicas. In order to do so, we aim to define a family $\Pi$ of index permutations $\pi: \{1,\dots,T+1\} \rightarrow \{1,\dots,T+1\}$ that leaves the indices of the training set unchanged and modifies only $\{\mathcal{I}_2, T+1\}$, namely the indices of the calibration set and the next time step.
In order to do so, let us first introduce a function $\lambda: \{\mathcal{I}_2, T+1\} \rightarrow \{1,\dots,l+1\}$ such that $\lambda(t)$ returns the position of $t$ in the ordered set $\{\mathcal{I}_2,T+1\}$. Fix now a positive integer $b \in \{1, \dots, l+1\}$ such that $\frac{l+1}{b} \in \mathbb{N}$ and define a family $\tilde{\Pi}$ of index permutations that acts on the set $\{1,\dots,l+1\}$. Each $\tilde{\pi}_i \in \tilde{\Pi}$ is required to be a bijection $\tilde{\pi}_i : \{1,\dots,l+1\} \rightarrow \{1,\dots,l+1\}$, for $i = 1, \dots, \frac{l+1}{b}$. In particular, we will consider non-overlapping blocking permutations, with $b$ representing the size of the blocking scheme: \begin{equation} \tilde{\pi}_i(j) = \begin{cases} j+(i-1)b & \text{if } 1 \leq j \leq l - (i-1)b+1 \\ j+(i-1)b-l-1 & \text{if } l - (i-1)b+2 \leq j \leq l+1 \end{cases} \end{equation} By definition, we have that $|\tilde{\Pi}|= \frac{l+1}{b}$. Moreover, such a family of transformations forms an algebraic group, containing among others the identity transformation $\tilde{\pi}_1$. It is then straightforward to introduce the family $\Pi$ of index permutations acting on $\{1,\dots,T+1\}$. Each $\pi_i \in \Pi$, with $i=1,\dots,\frac{l+1}{b}$, is defined as: \begin{equation} \label{eqn:blocking_scheme} \pi_i(t) = \begin{cases} t & \text{if } t \in \mathcal{I}_1 \\ \lambda^{-1}(\tilde{\pi}_i(\lambda(t))) & \text{if } t \in \mathcal{I}_2 \cup \{T+1\} \end{cases} \end{equation} Figure \ref{fig:permutations} reports a simple example of the permutation families $\Pi$ and $\tilde{\Pi}$. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{Diagrams/Permutations_1.png} \caption[Example of permutation families $\tilde{\Pi}$ and $\Pi$]{Example of permutation families $\tilde{\Pi}$ and $\Pi$, with sample size $T=6$, training set $\mathcal{I}_1=\{1,3,6\}$, calibration set $\mathcal{I}_2=\{2,4,5\}$, $l=m=3$, size of blocking scheme $b=2$. 
In this case $\lambda:\{\mathcal{I}_2,T+1\}\equiv\{2,4,5,7\}\rightarrow \{1,2,3,4\}$, $\tilde{\Pi}=\{\tilde{\pi}_1,\tilde{\pi}_2\}$ and $\Pi=\{\pi_1,\pi_2\}$.} \label{fig:permutations} \end{figure} We refer to $Z^{\pi}_{(y)} = \{Z_{\pi(t)}\}_{t=1}^{T+1}$ as the randomized version of $Z_{(y)}=\{Z_t\}_{t=1}^{T+1}$. We then define the randomization p-value as: \begin{equation} \label{eqn:p_value_function} p(y) = \frac{1}{|\Pi|} \sum_{\pi \in \Pi} \bm{1} (S(Z^{\pi}_{(y)}) \geq S(Z_{(y)})) \end{equation} where the nonconformity scores $S(Z_{(y)})$ and $S(Z^{\pi}_{(y)})$ are defined as: \begin{gather} S(Z_{(y)}) = \mathcal{A}(\{Z_h\!: h \in \mathcal{I}_1\}, Z_{T+1}) \\ S(Z^{\pi}_{(y)}) = \mathcal{A}(\{Z_h\!: h \in \mathcal{I}_1\}, Z_{\pi(T+1)}) \end{gather} The idea is to apply permutations, modifying the order of observations in the calibration set while, at the same time, preserving the dependence between them thanks to the block structure of $\Pi$. For each $\pi$, we compute the nonconformity score of $Z^{\pi}_{(y)}$. The p-value of a candidate value $y$ is then determined as the proportion of randomized versions $Z^{\pi}_{(y)}$ with a nonconformity score greater than or equal to that of the original augmented dataset $Z_{(y)}$. Notice that $p(y)$ is a measure of the \textit{conformity} of the candidate function $y$ with respect to the permutation family $\Pi$. It is then natural to include in the prediction set only functions $y$ with a ``high'' conformity level.
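For the simplest case $b=1$, the permutation family reduces to cyclic shifts of the calibration scores together with the candidate's score, and the randomization p-value \eqref{eqn:p_value_function} can be computed directly. A sketch, assuming the nonconformity scores have already been evaluated (function names are our own):

```python
import numpy as np

def block_shifts(n, b):
    """Non-overlapping blocking permutations: cyclic shifts of {0, ..., n-1}
    by multiples of the block size b (n must be divisible by b). The first
    permutation is the identity."""
    assert n % b == 0
    return [np.roll(np.arange(n), -i * b) for i in range(n // b)]

def randomization_p_value(cal_scores, new_score, b=1):
    """Fraction of permutations whose score at the 'T+1' slot is greater than
    or equal to the candidate's score (the identity always contributes 1)."""
    scores = np.append(np.asarray(cal_scores, dtype=float), new_score)
    last_slot = [perm[-1] for perm in block_shifts(len(scores), b)]
    return float(np.mean(scores[last_slot] >= new_score))

# Calibration scores 1, 2, 3 and a candidate scoring 2.5:
print(randomization_p_value([1.0, 2.0, 3.0], 2.5))   # 2 of 4 shifts -> 0.5
```

Note that the p-value is bounded below by $1/|\Pi|$, since the identity permutation always counts itself.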
Therefore, in accordance with Conformal Prediction, given a significance level $\alpha \in [b/(l+1),1]$\footnote{If $\alpha \in (0,b/(l+1))$ the resulting prediction set will coincide with the entire space $\mathbb{H}$.}, we define the prediction bands by test inversion: \begin{equation} \mathcal{C}_{T,1-\alpha}(X_{T+1}) := \{y \in \mathbb{H}\!: \, p(y) > \alpha\} \end{equation} As mentioned before, the advantage of using the Split Conformal method along with the conformity measure \eqref{eqn:nonconformitymeasure} relies on the possibility of finding the prediction set in closed form. By defining $k^s$ as the $\lceil (|\Pi|+1)(1-\alpha) \rceil$th smallest value of $\{S(Z_{(y)}^{\pi}), \pi \in \Pi \setminus \pi_1\}$, we can derive a prediction band in closed form: \begin{comment} By defining $k^s$ as the $\lceil (|\Pi|+1)(1-\alpha) \rceil$th smallest value of the set $\mathcal{W}:=\{S(Z_{(y)}^{\pi}), \pi \in \Pi \setminus \pi_1\}$. Notice that, at a first glance, it may appear that $\mathcal{W}$ and $k^s$ depend on $y$. However, thanks to the chosen permutation scheme, we can notice that: \begin{equation} Z_{\pi(T+1)}=Z_{T+1}=y \iff \pi(T+1)=T+1 \iff \pi=\pi_1 \end{equation} Therefore, having excluded $S(Z_{(y)}^{\pi_1})$ from the set $\mathcal{W}$, both $\mathcal{W}$ and $k^s$, are actually independent of $y$. This fact is peculiar to the Split Conformal approach, which is here reflected by the choice of the employed permutation scheme $\Pi$. 
We can exploit this property, together with the non-conformity measure introduced in \eqref{eqn:nonconformitymeasure}, to derive a prediction band in closed form: \end{comment} \begin{gather*} y \in \mathcal{C}_{T,1-\alpha}(X_{T+1}) \iff p(y) > \alpha \\ \iff S(Z_{(y)}) \leq k^s \\ \iff \esssup_{(u,v) \in [c,d] \times [e,f]} \frac{| y(u,v) - g_{\mathcal{I}_1}(u,v;x_{T+1}) |}{s_{\mathcal{I}_1}(u,v)} \leq k^s \\ \iff | y(u,v) - g_{\mathcal{I}_1}(u,v;x_{T+1}) | \leq k^s s_{\mathcal{I}_1}(u,v) \qquad \forall (u,v) \in [c,d] \times [e,f] \\ \iff y(u,v) \in \left[ g_{\mathcal{I}_1}(u,v;x_{T+1}) \pm k^s s_{\mathcal{I}_1}(u,v) \right] \qquad \forall (u,v) \in [c,d] \times [e,f] \end{gather*} The prediction band is therefore: \begin{multline} \label{eqn:prediction_set_explicit} \mathcal{C}_{T,1-\alpha}(X_{T+1}) := \left\{ y \in \mathbb{H}: y(u,v) \in \left[ g_{\mathcal{I}_1}(u,v;x_{T+1}) \pm k^s s_{\mathcal{I}_1}(u,v) \right] \forall (u,v) \in [c,d] \times [e,f] \right\} \end{multline} In the case in which regression pairs are exchangeable, the proposed method retains exact, model-free validity (\citealt{chernozhukov2018exact}). However, when such assumption is not met, one can guarantee only approximate validity (in the sense of Theorem \ref{my_theorem}) of the proposed approach under weak assumptions on the conformity score and the ergodicity of the time series. More formally, let $\mathcal{A}^*$ be an oracle nonconformity measure, inducing the oracle nonconformity score $S^*$. Define $F$ to be the cumulative (unconditional) distribution function of the oracle nonconformity scores, namely $F(x)=\mathbb{P}(S^*(Z_{(y)}^{\pi})<x)$ and $\hat{F}$ the empirical counterpart, obtained by applying permutations $\pi \in \Pi$: $\hat{F}(x) := \frac{1}{|\Pi|} \sum_{\pi \in \Pi} \mathbbm{1}\{S^*(Z_{(y)}^{\pi}) < x\}$. Let $\{\delta_{1\bar{l}},\delta_{2\bar{m}},\gamma_{1\bar{l}},\gamma_{2\bar{m}}\}$ be sequences of numbers converging to zero. 
Moving from \citet{chernozhukov2018exact}, Theorem 1 of \citet{diquigiovanni2021_FTS} prescribes sufficient conditions in order to guarantee asymptotic exactness of the prediction set. We report this result with a slightly modified notation. Let here $Z = Z_{(Y_{T+1})}$, where the candidate function $y$ is now substituted by the random function $Y_{T+1}$.\\ \begin{thm} \label{my_theorem} If the following conditions hold: \begin{itemize} \item $\sup_{a \in \mathbb{R}} |\hat{F}(a)-F(a)| \leq \delta_{1\bar{l}}$ with probability $1-\gamma_{1\bar{l}}$ \item $\frac{1}{|\Pi|} \sum_{\pi \in \Pi}\left[S(Z^{\pi})-S^*(Z^{\pi})\right]^2 \leq \delta_{2\bar{m}}^2$ with probability $1-\gamma_{2\bar{m}}$ \item $|S(Z^{\pi}) - S^*(Z^{\pi})| \leq \delta_{2\bar{m}}$ with probability $1-\gamma_{2\bar{m}}$ \item With probability $1-\gamma_{2\bar{m}}$ the pdf of $S^*(Z^{\pi})$ is bounded above by a constant $D$ \end{itemize} then the Conformal confidence set has approximate coverage $1-\alpha$: \begin{equation} |\mathbb{P}(Y_{T+1}\in \mathcal{C}_{T,1-\alpha}(X_{T+1}))-(1-\alpha)| \leq 6 \delta_{1\bar{l}}+ 2 \delta_{2\bar{m}} + 2 D \left( \delta_{2\bar{m}} + 2 \sqrt{\delta_{2\bar{m}}} \right) + \gamma_{1\bar{l}} + \gamma_{2\bar{m}} \end{equation} \end{thm} The first condition concerns the approximate ergodicity of $\hat{F}$ for $F$, a condition which holds for strongly mixing time series using the blocking permutations $\Pi$ defined in \eqref{eqn:blocking_scheme} (\citealt{chernozhukov2018exact}). The other conditions are requirements on the quality of approximating the oracle $S^*(Z^{\pi})$ with $S(Z^{\pi})$. Intuitively, $\delta_{2\bar{m}}^2$ bounds the discrepancy between the nonconformity scores and their oracle counterparts. Such a condition is related to the quality of the point prediction and to the choice of the employed nonconformity measure. 
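Putting things together, in the $b=1$ case, where the scores of the non-identity permutations are just the $l$ calibration scores, the construction reduces to the familiar split-conformal recipe: $k^s$ becomes an empirical quantile of the calibration scores and the band \eqref{eqn:prediction_set_explicit} follows pointwise. A sketch (using the standard $\lceil (l+1)(1-\alpha) \rceil$ quantile index over the $l$ calibration scores; arrays stand for the point prediction and the modulation function on the grid):

```python
import numpy as np

def conformal_band(cal_scores, g_next, s, alpha=0.1):
    """Split-conformal band in the b = 1 case: k_s is the
    ceil((l+1)(1-alpha))-th smallest calibration score and the band is
    g_next +/- k_s * s pointwise. Returns None when alpha is so small that
    the band degenerates to the whole space."""
    l = len(cal_scores)
    k = int(np.ceil((l + 1) * (1 - alpha)))
    if k > l:
        return None
    ks = float(np.sort(np.asarray(cal_scores))[k - 1])
    return g_next - ks * s, g_next + ks * s

cal_scores = np.arange(1.0, 20.0)   # 19 calibration scores: 1, ..., 19
lower, upper = conformal_band(cal_scores, np.zeros((2, 2)), np.ones((2, 2)))
print(upper[0, 0])                  # ceil(20 * 0.9) = 18 -> k_s = 18.0
```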
\section{Point prediction} \label{section:point_prediction} In order to obtain CP bands with empirical coverage close to the nominal one while maintaining a small width, the choice of an accurate point predictor is crucial. As mentioned before, whereas in the typical i.i.d. case finite-sample unconditional coverage still holds when the model is heavily misspecified (\citealt{diquigiovanni2021importance}), in the time series context a strong model misspecification may compromise the coverage guarantees and not only the efficiency of the resulting prediction bands (\citealt{chernozhukov2018exact}; \citealt{diquigiovanni2021_FTS}). For this reason, it is important to consider models that are consistent with the functional nature of the observations and that can adequately deal with their infinite dimensionality. We hence extend for the first time the theory of autoregressive processes in Hilbert space, in order to allow for temporally evolving surfaces, i.e. for time series of functions with a bivariate domain. Given the novelty of the subject, we introduce the Functional Autoregressive model (FAR) in Section \ref{sec:FAR1} and propose estimation techniques for it in Section \ref{sec:FAR1_estimation}. \subsection{FAR(1) Model} \label{sec:FAR1} One of the most popular statistical models used to capture temporal dependence between functional observations is the functional autoregressive process (FAR). The theory of functional autoregressive processes in Hilbert spaces is developed in the pioneering monograph of \citet{bosq} and a comprehensive collection of statistical advancements for the FAR model can be found in the book by \citet{horvath2012inference}. 
A sequence of mean zero random functions $\{Y_t\}_{t =1}^{T} \subset \mathbb{H}$ follows a non-concurrent functional autoregressive process of order 1 if: \begin{equation} \label{eq:FAR_equation} Y_{t} = \Psi Y_{t-1} + \varepsilon_{t} \qquad t=2,\dots,T \end{equation} where $\{ \varepsilon_{t} \}_{t \in \mathbb{N}}$ is a sequence of iid mean-zero innovation errors with values in $\mathbb{H}$ satisfying $\mathbb{E}[||\varepsilon_{t}||^2] < +\infty$ and $\Psi$ is a linear bounded operator from $\mathbb{H}$ to itself. In particular, we will consider $\Psi$ to be a Hilbert-Schmidt operator with kernel $\psi$, in such a way that: \begin{equation} (\Psi x) (u,v) = \int_c^d \int_e^f \psi(u,v;w,z) x(w,z) dw dz \quad \forall x \in \mathbb{H}, \, \forall (u,v) \in [c,d] \times [e,f] \end{equation} In order to ensure the existence of a stationary solution to the FAR(1) equation \eqref{eq:FAR_equation}, one has to require the existence of an integer $j_0 \in \mathbb{N}$ such that $||\Psi^{j_0}|| < 1$ (\citealt{bosq}, Lemma 3.1). \subsection{FAR(1) Estimation} \label{sec:FAR1_estimation} We first introduce a model that may appear simplistic, since it does not exploit the possible time dependence between the values of the function at different points of the functional domain, but that in practical applications provides satisfactory results. The prediction method assumes an autoregressive structure in each location $(u,v)$ of the domain, ignoring the dependencies between different points. More formally, we define the \textit{concurrent} FAR(1) as: \begin{equation} \label{eqn:FAR_concurrent} Y_{t}(u,v) = \psi_{u,v} Y_{t-1}(u,v) + \varepsilon_{t}(u,v) \qquad \forall (u,v) \in [c,d] \times [e,f] \end{equation} where $\psi_{u,v} \in \mathbb{R}$ and $t=2,\dots,T$. 
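On gridded data, fitting the concurrent model reduces to an independent scalar least-squares AR(1) fit at every grid point. A possible sketch (array shapes and names are our own; the synthetic check simply verifies that the pointwise coefficients are recovered):

```python
import numpy as np

def fit_concurrent_far1(Y):
    """Concurrent FAR(1): at each grid point (u_i, v_j), estimate the scalar
    AR(1) coefficient by least squares on the lag-1 pairs.
    Y: array of shape (T, N1, N2); returns psi_hat of shape (N1, N2)."""
    num = np.sum(Y[1:] * Y[:-1], axis=0)   # sum_t Y_t(u,v) * Y_{t-1}(u,v)
    den = np.sum(Y[:-1] ** 2, axis=0)      # sum_t Y_{t-1}(u,v)^2
    return num / den

# Synthetic check on a 2x2 grid with known pointwise coefficients
rng = np.random.default_rng(0)
psi = np.array([[0.5, -0.3], [0.8, 0.0]])
Y = np.zeros((5000, 2, 2))
for t in range(1, 5000):
    Y[t] = psi * Y[t - 1] + 0.1 * rng.standard_normal((2, 2))
print(np.round(fit_concurrent_far1(Y), 2))   # approximately recovers psi
```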
Suppose we have observed functional data $y_1, \dots, y_T$ on a common two-dimensional grid $\{(u_i,v_j)\}$ with $i=1,\dots,N_1$ and $j=1,\dots,N_2$; the goal then becomes to estimate $\psi_{u_i,v_j}$ for each location $(u_i,v_j)$ in the grid. We will later refer to this model as \textbf{FAR(1)-Concurrent}. Following a procedure similar to the Yule-Walker estimation in the scalar setting, we now propose a more sophisticated estimator of $\Psi$. Proceeding similarly to \citet{horvath2012inference}, we derive the following estimator: \begin{equation} \label{eq:EK_estimator} \hat{\Psi}_{M} x = \frac{1}{T-1} \sum_{i,j = 1}^{M} \sum_{t=2}^{T} \hat{\lambda}_j^{-1} \langle x, \hat{\xi}_j \rangle \langle Y_{t-1}, \hat{\xi}_j \rangle \langle Y_{t}, \hat{\xi}_i \rangle \hat{\xi}_i \qquad \forall x \in \mathbb{H} \end{equation} where $\xi_1, \dots ,\xi_M$ are the first $M$ normalized functional principal components (FPCs), $\lambda_1, \dots, \lambda_M$ are the corresponding eigenvalues and $\langle x, \xi_1 \rangle, \dots, \langle x, \xi_M \rangle$ are the scores of $x$ along the FPCs. \ref{appendix_FPCA} is dedicated to the illustration of two different estimation techniques for $\xi_i$ and $\lambda_i$, one based on a discretization of functions on a fine grid and the other designed starting from an expansion of data on a finite basis system. We further refer to \ref{appendix_EK} for more details on the derivation of estimator \eqref{eq:EK_estimator} and for an extensive discussion on how to adapt it to the Conformal Prediction setting, in order to estimate $\Psi$ from the training set $\mathcal{I}_1$ only. A different yet simpler forecasting procedure has been proposed by \citet{Aue} for one-dimensional functional data and will be here extended to the two-dimensional setting. 
Calling once again $\xi_1, \dots, \xi_M$ the first $M$ functional principal components, we decompose the functional time series as follows: \begin{align} Y_t(u,v) &= \sum_{j=1}^{M} \langle Y_t, \xi_j \rangle \xi_j(u,v) + e_t(u,v) \\ &= \bm{Y}_t^T \bm{\xi}(u,v) + e_t(u,v) \end{align} where $\bm{Y}_t=[\langle Y_t, \xi_1 \rangle, \dots, \langle Y_t, \xi_M \rangle]^T$ contains the scores of the projection, $\bm{\xi}(u,v) = [\xi_1(u,v),\dots,\xi_M(u,v)]^T$ and $e_t(u,v)$ is the approximation error due to the truncation of the expansion on the first $M$ principal components. Neglecting the approximation error $e_t$, one can prove that the vector $\bm{Y}_t$ follows a multivariate autoregressive process of order 1. Plugging in the estimated FPCs $\hat{\xi}_1, \dots, \hat{\xi}_M$, we can estimate the parameters of the resulting model using standard techniques of multivariate statistics and forecast $\hat{\bm{Y}}_{T+1}$ based on historical data $Y_1,\dots,Y_T$. The predicted function $\hat{Y}_{T+1}$ can be simply reconstructed as: \begin{equation} \label{eqn:VAR_efpc_hat} \hat{Y}_{T+1}(u,v) = \hat{\bm{Y}}_{T+1}^T\bm{\xi}(u,v) \end{equation} Throughout the rest of the work, we will employ all the estimators presented above, comparing them in terms of prediction performance in Section \ref{section:4}. \section{Simulation study} \label{section:4} \subsection{Study Design} In this section, we will evaluate the procedure presented above through a simulation study. The goal is twofold: we aim to assess the quality of the proposed Conformal Prediction bands and, at the same time, evaluate different point predictors in terms of the resulting prediction regions. We will employ a FAR(1) model as data generating process in order to compare the different estimation routines presented in Section \ref{sec:FAR1}. To fix a benchmark for the forecasting performances, we will employ the following naive algorithm as a reference predictor: $\hat{Y}_{T+1}=Y_T$. 
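The score-based predictor described above can be sketched compactly: a discretized FPCA via an SVD of the centered data matrix, a lag-1 least-squares fit on the score vectors, and a reconstruction on the grid. The function name and the variance threshold below are our own choices, not part of the paper's implementation:

```python
import numpy as np

def fpca_var1_forecast(Y, var_expl=0.8):
    """One-step forecast of a 2D FTS via scores: flatten the surfaces,
    extract principal components from the SVD of the centered data matrix,
    fit a VAR(1) on the score vectors by least squares, and reconstruct.
    Y: array of shape (T, N1, N2)."""
    T, N1, N2 = Y.shape
    X = Y.reshape(T, -1)
    mu = X.mean(axis=0)
    Xc = X - mu
    _, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
    lam = sv**2 / T                                      # empirical eigenvalues
    M = int(np.searchsorted(np.cumsum(lam) / lam.sum(), var_expl)) + 1
    Phi = Vt[:M].T                                       # discretized FPCs
    S = Xc @ Phi                                         # (T, M) score vectors
    A, *_ = np.linalg.lstsq(S[:-1], S[1:], rcond=None)   # S_t ~ S_{t-1} A
    return (mu + Phi @ (S[-1] @ A)).reshape(N1, N2)

# Demo on synthetic data: the forecast has the same grid shape as the inputs
rng = np.random.default_rng(0)
Y = rng.standard_normal((40, 3, 4))
print(fpca_var1_forecast(Y).shape)   # (3, 4)
```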
Notice that, by including a forecasting algorithm that is not coherent with the data generating process, we can illustrate how the presented CP procedure performs when a good point predictor $g_{\mathcal{I}_1}$ is not available. Although, as reported in Section \ref{section:point_prediction}, an accurate forecasting algorithm is necessary to guarantee asymptotic validity, we will see that in the performed simulations CP bands will be valid even when such an assumption does not hold. Since this work is focused on uncertainty quantification of predictions in the context of two-dimensional functional data, we will compare forecasting performances by means of the resulting Conformal Prediction bands. First and foremost, we will estimate the unconditional coverage by computing the \textit{empirical unconditional coverage} in order to compare it with the nominal confidence level $1-\alpha$. Secondly, we will consider the size of the obtained prediction bands since, intuitively, a small prediction band is preferable because it includes subregions of the sample space where the probability mass is highly concentrated (\citealt{Lei1}) and it is typically more informative in practical applications. In each scenario, we will compare the performances of five selected prediction algorithms, one of which does not exploit the autoregressive structure. To obtain further insights, we also include the errors obtained by assuming perfect knowledge of the operator $\Psi$. For ease of reference, we briefly describe these methods and introduce some convenient notation. \begin{itemize} \item \textbf{FAR(1)-Concurrent} refers to the forecasting algorithm based on the estimation of the concurrent FAR(1) model \eqref{eqn:FAR_concurrent}. \item \textbf{FAR(1)-EK} (Estimated Kernel) denotes the first estimation procedure presented in Section \ref{sec:FAR1}, where we explicitly compute $\hat{\Psi}_M$ as prescribed by \eqref{eq:EK_estimator} and then set $\hat{Y}_{T+1} = \hat{\Psi}_M Y_T$. 
\item \textbf{FAR(1)-EK+} (Estimated Kernel improved) is a modification of the above method, where eigenvalues $\hat{\lambda}_i$ are replaced by $\hat{\lambda}_i + 1.5(\hat{\lambda}_1+\hat{\lambda}_2)$, as recommended by \citet{didericksen} and further discussed in \ref{appendix_EK}. \item \textbf{FAR(1)-VAR} denotes the forecasting procedure \eqref{eqn:VAR_efpc_hat}, where we exploit the expansion on estimated functional principal components and forecast $Y_{T+1}$ using the underlying VAR(1) model. \item \textbf{Naive}: we just set $\hat{Y}_{T+1}=Y_T$. This method does not attempt to model temporal evolution; it is included to see how much can be gained by exploiting the autoregressive structure of the data. \item \textbf{Oracle}: we set $\hat{Y}_{T+1}=\Psi Y_T$, using the actual $\Psi$ from which the data are simulated. This point predictor is clearly not available in practical applications, but it is interesting to include it in order to see whether poor predictions might be due to poor estimation of $\Psi$. \end{itemize} When required (namely in FAR(1)-EK, FAR(1)-EK+, FAR(1)-VAR), FPCA is performed using the discretization approach, as motivated in \ref{appendix_FPCA}. The number of principal components is selected by the cumulative proportion of variance criterion. Calling $\hat{\lambda}_1,\dots,\hat{\lambda}_M$ the $M$ largest estimated eigenvalues, we choose $M\in \mathbb{N}$ such that $\sum_{j=1}^{M}\hat{\lambda}_j / \sum_{j=1}^{\infty}\hat{\lambda}_j$ exceeds a predetermined percentage value, in this case fixed at 0.8. We noticed that, on average, this entails selecting a number of harmonics equal to 5. Throughout the whole simulation study, we set the significance level $\alpha=0.1$. In the first simulation, in Section \ref{subsec:sample_size}, we will fix the size $b$ of the blocking scheme \eqref{eqn:blocking_scheme} equal to 1 and the sample size $T$ will take values $19,49,99,499$. 
Secondly, in Section \ref{subsec:block_size}, we will instead keep the sample size fixed at $119$ and repeat the simulations with $b=1,3,6$. As usually done in the time series setting, the first observation is used as a covariate only and belongs to neither the training set nor the calibration set. The training and calibration sets hence each contain one half of the remaining observations: $m=l=(T-1)/2$. For each value of $T$, we repeat the procedure over $N=1000$ simulations. The simulations are performed using the R Programming Language (\citealt{R}). Extending the implementation of \citet{freqdomftsa}, in order to simulate a sequence of functions $\{Y_t\}_{t=1,\dots,T}$ from a functional autoregressive process of order one, we assume that observations lie in a finite dimensional subspace of the function space $\mathbb{H}$\footnote{Without loss of generality, throughout this section we will consider functions in $\mathbb{H}=\mathcal{L}^2([0,1]\times[0,1])$.}, spanned by orthonormal basis functions $\phi_1, \dots, \phi_M$, with $M\in \mathbb{N}$ representing the dimension of such subspace. Therefore, we have: \begin{gather} Y_{t}(u,v) = \bm{\phi}(u,v)^T \bm{Y}_t \\ \varepsilon_t(u,v) = \bm{\phi}(u,v)^T \bm{\varepsilon}_t \\ \label{eqn:kernel_basis} \psi(u,v;w,z) = \bm{\phi}(u,v)^T \bm{\Psi} \bm{W} \bm{\phi}(w,z) \end{gather} where $\bm{\phi}(u,v)=[\phi_1(u,v), \dots, \phi_M(u,v)]^T \in \mathbb{R}^M, \, \forall (u,v) \in [0,1]\times[0,1]$, $\bm{Y}_t, \,\bm{\varepsilon}_t \in \mathbb{R}^M, \, \forall t = 1, \dots, T$, $\bm{\Psi} \in \mathbb{R}^{M \times M}$ and $\bm{W} \in \mathbb{R}^{M \times M}$, defined as $\bm{W} := \int_0^1 \int_0^1 \bm{\phi}(u,v) \bm{\phi}(u,v)^T \, du \, dv$.
It follows that: \begin{equation} \bm{Y}_{t} = \bm{\Psi} \bm{Y}_{t-1} \bm{W} + \bm{\varepsilon}_t \qquad t=1,\dots,T \end{equation} The basis system $\phi_1, \dots, \phi_M$ is constructed as the tensor product basis of two cubic B-spline systems $\{g_i\}_{i = 1,\dots, M_1}$, $\{h_j\}_{j = 1,\dots, M_2}$, both defined on $[0,1]$. We set $M_1=M_2=5$, so that $M=25$. Notice that, by including more basis functions, we would better approximate the space $\mathbb{H}$, though inevitably producing rougher curves. On the other hand, by reducing the size of the basis system, one renounces a good representation of $\mathbb{H}$, but obtains smoother functions. The proposed choice of $M$ is arbitrary, but provides a good compromise between the two outlined extremes. For an exhaustive discussion of the tensor product basis system, we refer to \ref{sec:FPCA_basis}. The matrix $\bm{\Psi}$ is defined as $\bm{\Psi} :=0.9 \frac{\tilde{\bm{\Psi}}}{||\tilde{\bm{\Psi}}||_F}$, with $\tilde{\bm{\Psi}}$ having diagonal values equal to $0.8$ and off-diagonal elements equal to 0.3\footnote{One can easily prove that, if relation \eqref{eqn:kernel_basis} holds, then $||\Psi||=||\bm{\Psi}||_F$, where $||.||$ is the usual operator norm and $||.||_F$ denotes the Frobenius norm.}. Innovation errors $\bm{\varepsilon}_t$ are independently sampled from a multivariate Student's $t$-distribution, with 4 degrees of freedom and scale matrix having diagonal elements equal to 0.5 and off-diagonal entries equal to 0.3. We report in \autoref{fig:evolution} an example of the first three realizations of a simulated Functional Autoregressive Process of order one, represented on a grid of $10^4$ points\footnote{A GIF of the FAR(1) process evolution can be found in the \href{https://github.com/Niccolo-Ajroldi/ARMA-Surfaces/blob/main/Pics/FAR.gif}{GitHub repository} ``Functional-Autoregressive-Process-2D".}.
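To make the simulation recipe concrete, here is a minimal Python sketch of the coefficient recursion (the paper's implementation is in R, extending freqdomftsa; since the tensor-product basis is orthonormal, the Gram matrix W reduces to the identity, and the multivariate Student's t draw is obtained by rescaling a Gaussian vector):

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 25, 100                                # basis dimension, series length

# Psi as in the text: diagonal 0.8, off-diagonal 0.3, rescaled to
# Frobenius norm 0.9 so that the FAR(1) process is stationary
Psi_tilde = np.full((M, M), 0.3) + np.diag(np.full(M, 0.5))
Psi = 0.9 * Psi_tilde / np.linalg.norm(Psi_tilde, "fro")

# innovation scale matrix: diagonal 0.5, off-diagonal 0.3
Sigma = np.full((M, M), 0.3) + np.diag(np.full(M, 0.2))

def rmvt(rng, scale, df):
    """Multivariate Student's t draw: Gaussian vector scaled by sqrt(df / chi2)."""
    z = rng.multivariate_normal(np.zeros(scale.shape[0]), scale)
    return z * np.sqrt(df / rng.chisquare(df))

# coefficient recursion Y_t = Psi Y_{t-1} + eps_t (W = identity here)
Y = np.zeros((T, M))
for t in range(1, T):
    Y[t] = Psi @ Y[t - 1] + rmvt(rng, Sigma, df=4)
```

Each row of `Y` holds the 25 basis coefficients of one surface; evaluating the tensor-product basis on a grid and contracting it with a row would give back the simulated surfaces.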
\begin{figure} \centering \includegraphics[width=1\linewidth]{Pics/Evolution.png} \caption[Representation of a simulated FAR(1)]{Example of the first three realizations ($Y_1,Y_2,Y_3$) of a simulated Functional Autoregressive Process of order one.} \label{fig:evolution} \end{figure} \subsection{Increasing the Sample Size} \label{subsec:sample_size} As mentioned before, we first fix the size $b$ of the blocking scheme equal to 1 and let the sample size $T$ take values $19,49,99,499$. \autoref{fig:simulation_1_coverage} shows the empirical coverage, together with the related 99\% confidence interval. Specifically, the empirical coverage is computed as the fraction of the $N=1000$ replications in which $y_{T+1}$ belongs to $\mathcal{C}_{T,1-\alpha}(x_{T+1})$, and the confidence interval is reported in order to provide insight into the variability of the phenomenon in various scenarios, rather than to draw inferential conclusions about the unconditional coverage. Notice that different point predictors will intrinsically have dissimilar coverages; consequently, this analysis aims to compare forecasting algorithms in terms of their predictive performances. We can appreciate that, in all cases, the 99\% confidence interval for the empirical coverage includes the nominal confidence level, regardless of the sample size at disposal. Moreover, it is interesting to notice that, even when an accurate forecasting algorithm $g_{\mathcal{I}_1}$ is not available (namely with FAR(1)-Concurrent and Naive), the proposed CP procedure still outputs prediction regions with an appropriate unconditional coverage. \begin{figure}[] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/Coverage_CI_Test_4.png} \caption{Empirical coverage of CP bands.
The dashed line represents nominal coverage $1-\alpha$.} \label{fig:simulation_1_coverage} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/Width_Test_4.png} \caption{Size of CP bands.} \label{fig:simulation_1_width} \end{subfigure} \caption[Results of first simulation study]{Results of first simulation study.} \label{fig:simulation_1} \end{figure} Similarly to \citet{diquigiovanni2021importance}, we define the size of a two-dimensional prediction band as the \textit{volume} between the upper and the lower surfaces that define the prediction band: \begin{equation} \mathcal{Q}(s_{\mathcal{I}_1}) := \int_0^1 \int_0^1 2 k s_{\mathcal{I}_1}(u,v) \, du \, dv = 2 k \end{equation} By measuring the size of the corresponding prediction bands, we can compare the efficiency of different forecasting routines. We stress the fact that distinct point predictors may guarantee potentially different coverages. For this reason, it is crucial to first evaluate the empirical coverage of the resulting prediction bands and only afterwards investigate their size. \begin{comment} \autoref{fig:simulation_1_width} reports the boxplots concerning the size of the $N=1000$ prediction bands, while in \autoref{table:sim_1_size} we collected mean sizes to allow for easier comparison.
\begin{table} \begin{subtable}[c]{0.5\linewidth} \centering \begin{tabular}[t]{lcccc} \toprule Method & \multicolumn{4}{c}{$T$} \\ \midrule & 19 & 49 & 99 & 499 \\ \midrule FAR(1)-C & 0.895 & 0.902 & 0.892 & 0.909 \\ FAR(1)-EK & 0.895 & 0.903 & 0.891 & 0.897 \\ FAR(1)-EK+ & 0.883 & 0.901 & 0.891 & 0.906 \\ Naive & 0.892 & 0.903 & 0.893 & 0.895 \\ Oracle & 0.895 & 0.909 & 0.901 & 0.913 \\ FAR(1)-VAR & 0.908 & 0.894 & 0.887 & 0.894 \\ \bottomrule \end{tabular} \subcaption{Empirical coverage of CP bands} \label{table:sim_1_coverage} \end{subtable} \begin{subtable}[c]{0.5\linewidth} \centering \begin{tabular}[t]{lcccc} \toprule Method & \multicolumn{4}{c}{$T$} \\ \midrule & 19 & 49 & 99 & 499 \\ \midrule concurrent & 44.66 & 28.33 & 24.59 & 22.84 \\ EK & 41.08 & 26.98 & 23.42 & 21.64 \\ EK+ & 39.65 & 26.63 & 23.23 & 21.58 \\ Naive & 44.98 & 31.75 & 28.14 & 26.58 \\ Oracle & 37.75 & 25.99 & 22.87 & 21.50 \\ VAR-efpc & 49.30 & 27.05 & 23.17 & 21.54 \\ \bottomrule \end{tabular} \subcaption{Average size of CP bands} \label{table:sim_1_size} \end{subtable} \caption[Results of first simulation study]{Results of first simulation study.} \end{table} \end{comment} \autoref{fig:simulation_1_width} reports the boxplots concerning the size of the $N=1000$ prediction bands. One can notice that the size tends to decrease as the number of observations $T$ increases, hence improving the efficiency of the prediction sets. As expected, the Naive predictor provides larger prediction bands: while the difference is mild with small sample sizes, as $T$ grows the Naive prediction regions become systematically wider than those of the other methods. On the other hand, FAR(1)-EK and FAR(1)-EK+, which are both based on the estimation of the autoregressive operator $\Psi$, provide the tightest prediction bands, not only when numerous observations are available, but also in small sample size scenarios.
Moreover, one can notice that FAR(1)-EK+ does not significantly improve on FAR(1)-EK, either in terms of coverage or of band size. We acknowledge that, when $T=19$, FAR(1)-VAR performs remarkably worse than the other methods. Indeed, it produces wider prediction bands, which also explain its higher empirical coverage in \autoref{fig:simulation_1_coverage}. However, when the sample size increases, this forecasting algorithm performs comparably to FAR(1)-EK and FAR(1)-EK+. Finally, although the Conformal Prediction bands produced by the Oracle predictor are obviously the best performing, we can appreciate that both FAR(1)-EK and FAR(1)-EK+ provide CP bands with coverage and size comparable to those of the theoretically perfect Oracle forecasting method. \subsection{Increasing the Blocking Scheme Size} \label{subsec:block_size} We repeat here the previous simulation with fixed sample size $T$ and a blocking scheme of increasing size $b$, in order to determine how the validity and efficiency of the resulting prediction bands are influenced by this parameter. The value of $T$ is fixed at 99, which in our opinion provides a good balance between scenarios with small and large sample sizes; moreover, analogous results have been found for other values of $T$. Results are reported in \autoref{fig:simulation_2}. \begin{figure}[] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/Coverage_CI_Test_5.png} \caption{Empirical coverage of CP bands.
The dashed line represents nominal coverage $1-\alpha$.} \label{fig:simulation_2_coverage} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/Width_Test_5.png} \caption{Size of CP bands.} \label{fig:simulation_2_width} \end{subfigure} \caption[Results of second simulation study]{Results of second simulation study.} \label{fig:simulation_2} \end{figure} Once again, in all circumstances the 99\% confidence interval for the empirical coverage includes the target level $1-\alpha$, hence confirming the validity of the CP bands even for higher values of $b$. Moreover, one can notice that, as already pointed out by \citet{diquigiovanni2021_FTS} in the one-dimensional functional setting, the band size tends to decrease when $b$ decreases, thus providing more efficient prediction regions. We conjecture that this behaviour is related to the inverse proportionality between the blocking scheme size $b$ and the cardinality of the permutation family $|\Pi|$. A comparison of the performances of the different forecasting algorithms confirms the considerations of Section \ref{subsec:sample_size}. \section{Case Study: Forecasting Black Sea Level Anomaly} \label{section:BlackSea} \subsection{Dataset} In this section, we illustrate the application potential of the proposed methodology on a real case study. We will consider data from Copernicus, the European Union's Earth observation programme, which collects vast amounts of global data from satellites and ground-based, airborne, and seaborne measurement systems. More specifically, we will analyze a data set from the Copernicus Climate Change Service (\href{https://climate.copernicus.eu/}{C3S}), a project operated by the European Centre for Medium-Range Weather Forecasts (\href{https://www.ecmwf.int/}{ECMWF}), collecting daily sea level anomalies of the Black Sea over the last twenty years (\citealt{BS_data}).
Sea level anomalies are measured as the height of water above the mean sea surface at a given time and location. Anomalies are computed with respect to a twenty-year mean reference period (1993-2012). Satellite altimeters are used to estimate the sea level anomalies with a mapping algorithm dedicated to the Black Sea region. Observations are collected on a spatial raster, with a $0.125^{\circ}$ resolution on both the longitude and the latitude axis. Since observations are collected on a geoid, the domain actually lies on a two-dimensional manifold; however, because both the longitude and latitude ranges are very small ($14^{\circ}$ and $7^{\circ}$ respectively), we will ignore this detail and assume data to be observed on a bidimensional grid. The resulting lattice can hence be considered as the Cartesian product of a grid on the longitude axis made of $N_1 = 120$ points and a latitude grid of $N_2 = 56$ points. We will refer to $(u_i,v_j)$, with $i=1,\dots,N_1$ and $j=1,\dots,N_2$, as the $(i,j)$-th point of such a two-dimensional mesh. Since the Black Sea does not have a rectangular shape, we will model the data as realizations of random surfaces defined on the rectangle circumscribing the perimeter of the sea, but identically equal to zero outside of it. As a consequence, letting $\mathcal{B}$ denote the set of points internal to the perimeter of the Black Sea, we slightly redefine the nonconformity measure \eqref{eqn:nonconformitymeasure} to become: \begin{equation} \label{eqn:nonconformitymeasure_application} \mathcal{A}(\{z_h: h \in \mathcal{I}_1\}, z) = \esssup_{(u,v) \in [c,d] \times [e,f]} \mathcal{R}(u,v), \end{equation} where $\mathcal{R}(u,v)$ is defined as: \begin{equation} \mathcal{R}(u,v) = \begin{cases} \frac{| y(u,v) - g_{\mathcal{I}_1}(u,v;x_{T+1}) |}{s_{\mathcal{I}_1}(u,v)} & \text{if } (u,v) \in \mathcal{B}\\ 0 & \text{otherwise}.
\end{cases} \end{equation} Altimetry instruments give access to the sea surface height (SSH) above the reference ellipsoid (see \autoref{fig:Satellite}), which is calculated as the difference between the orbital altitude of the satellite and the measured altimetric distance of the satellite from the sea. Starting from this information, the Sea Level Anomaly (SLA) is defined as the anomaly of the signal around the Mean Sea Surface component (MSS), which is computed with respect to a 20-year reference period (1993-2012). Since we are setting the study in a functional data analysis framework, we will consider the time series $\{SLA_t\}$, without making explicit the dependence on the bivariate domain. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{Pics/Satellite.png} \caption[Data measurement process]{Data measurement process. The image is an author's replica of Figure 1 in Copernicus' \href{https://datastore.copernicus-climate.eu/documents/satellite-sea-level/D3.SL.1-v1.2_PUGS_of_v1DT2018_SeaLevel_products_v2.4.pdf}{Product User Guide and Specification v2.4}.} \label{fig:Satellite} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Pics/BS_raster.png} \caption[Sea Level Anomaly on 01/01/2018]{Raster representation of the Sea Level Anomaly (in meters) on 01/01/2018.} \label{fig:BS_plot} \end{figure} \subsection{Data Preprocessing} Ideally, one would directly forecast the time series of Sea Level Anomalies ($SLA_t$). However, given the nature of the dataset, we expect anomalies to exhibit strong seasonality as well as a linear trend (\citealt{black_sea_linear_trend}). Moreover, both tide gauge and altimetry observations show that sea level trends in the Black Sea vary over time (\citealt{black_sea_changes}).
\citet{Tsimplis} estimated a rise in the mean sea level of $2.2$ mm/year from 1960 to the early 1990s, while along-track altimetry data indicate that sea level rose at a rate of $13.4 \pm 0.11$ mm/year over 1993–2008 (\citealt{black_sea_Ginzburg}). In order to further investigate this issue, we should proceed by testing the functional time series $\{SLA_t\}_t$ for stationarity. However, although for one-dimensional functional time series one could resort to the test proposed by \citet{FTS_stationarity}, to the best of our knowledge an \textit{ad-hoc} stationarity test for two-dimensional functional time series has yet to be developed. For this reason, and aware of the limits of this approach, we will resort to analyzing the stationarity of the univariate time series $SLA_t(u_i,v_j)$ at some randomly fixed locations $(u_i,v_j)$. We stress the fact that stationarity is indeed not necessary to obtain valid CP bands, but, as proved by \citet{chernozhukov2018exact}, it is a sufficient condition for the first assumption of Theorem \ref{my_theorem}, which we would hence like to be satisfied. By analyzing the univariate time series $SLA_t(u_i,v_j)$ at fixed locations $(u_i,v_j)$, we observe non-negligible partial autocorrelation up to lag 2 or 3, depending on the location, and the evident presence of a cyclical behaviour. Moreover, the Augmented Dickey-Fuller (ADF) stationarity test fails to reject the null hypothesis of a unit root against the alternative of a stationary process. As usually done in time series analysis, we proceed by differencing $\{SLA_t\}_t$, hence considering the time series of first differences $\{\Delta SLA_t\}_t$, defined as $\Delta SLA_t := SLA_t - SLA_{t-1}$. However, the differenced data still exhibit high partial autocorrelation at lags greater than one.
A similar behaviour is also found after seasonal differencing, where we employed as seasonal lag both a delay of 29 days, namely the moon phase cycle, and a lag of 365 days, coinciding with the Earth's revolution period. Since we aim to eventually fit a Functional Autoregressive Process of order one, we proceed with a second differencing, obtaining stationary time series with negligible partial autocorrelation at lags greater than one, as confirmed by ADF tests and Partial Autocorrelation Function plots. Therefore, we will finally consider the time series of second differences $\{Y_t\}_t$, formally defined as: \begin{equation} Y_t := \Delta^2 SLA_t = (SLA_{t}-SLA_{t-1}) - (SLA_{t-1}-SLA_{t-2}) \end{equation} We will forecast the differenced time series $Y_t$, and obtain prediction bands for it, using the methodology presented in Section \ref{section:CP}. However, in order to provide better insight into the prediction problem, we will then retrieve results pertaining to the original time series $SLA_t$.
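The second differencing above, and the inversion needed to map results back to the SLA scale, reduce to two short array operations; a minimal Python sketch (function names are ours), with surfaces stored as an array of shape (T, n_lon, n_lat):

```python
import numpy as np

def second_difference(sla):
    """Y_t = SLA_t - 2*SLA_{t-1} + SLA_{t-2}, applied along the time axis."""
    return sla[2:] - 2.0 * sla[1:-1] + sla[:-2]

def undo_second_difference(y_next, sla_T, sla_Tm1):
    """Recover the original scale: SLA_{T+1} = Y_{T+1} + 2*SLA_T - SLA_{T-1}."""
    return y_next + 2.0 * sla_T - sla_Tm1

# toy round trip on a short series of 2x2 "surfaces"
sla = np.arange(24, dtype=float).reshape(6, 2, 2) ** 2
y = second_difference(sla)
recon = undo_second_difference(y[-1], sla[-2], sla[-3])
```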
Specifically, we will apply the conformal machinery to the differenced time series, calling $\hat{Y}_{T+1}$ the forecasted function, and obtaining the prediction band for $Y_{T+1}$: \begin{gather*} \mathcal{C}_{T,1-\alpha} := \left\{ y \in \mathbb{H}: y(u,v) \in \left[ \hat{Y}_{T+1}(u,v) \pm k^s s_{\mathcal{I}_1}(u,v) \right] \, \forall (u,v)\right\} \end{gather*} Exploiting the fact that: \begin{gather*} Y_{T+1}(u,v) \in \left[ \hat{Y}_{T+1}(u,v) \pm k^s s_{\mathcal{I}_1}(u,v) \right] \quad \forall (u,v) \\ \iff \\ SLA_{T+1}(u,v) \in \left[ \hat{Y}_{T+1}(u,v)+ 2 SLA_T(u,v) - SLA_{T-1}(u,v) \pm k^s s_{\mathcal{I}_1}(u,v) \right] \quad \forall (u,v) \end{gather*} we define the prediction band for $SLA_{T+1}$ as: \begin{gather*} \tilde{\mathcal{C}}_{T,1-\alpha} := \left\{ y \in \mathbb{H}: y(u,v) \in \left[ \hat{Y}_{T+1}(u,v)+ 2 SLA_T(u,v) - SLA_{T-1}(u,v) \pm k^s s_{\mathcal{I}_1}(u,v) \right] \, \forall (u,v)\right\} \end{gather*} \subsection{Study Design} The case study employs a rolling estimation framework, which recalculates the model parameters on a daily basis, shifting and recomputing the entire training, calibration and test windows by 24 hours at each step, as shown in \autoref{fig:rolling_bs}. As before, we will use a random split of the data into training and calibration sets, with split proportion equal to 50\%. The significance level $\alpha$ is once again fixed at 0.1. For each of the 1000 days we aim to predict, we will build the corresponding prediction band based on the information provided by the previous 99 days, thereby fixing $T=99$. This value provides accurate forecasts, and thus small prediction bands, while maintaining reasonable computational times. The size of the blocking scheme will be fixed at 1 since, as motivated in Section \ref{subsec:block_size}, this choice produces the narrowest prediction bands while maintaining satisfactory performance in terms of empirical coverage.
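Under the same conventions as in the simulation study (first observation used as covariate only, remaining T-1 observations split at random 50/50 into training and calibration), the window bookkeeping can be sketched as follows; a minimal Python illustration with the invented helper `rolling_splits`:

```python
import numpy as np

def rolling_splits(n_windows, T=99, seed=0):
    """For each daily shift, yield (training idx, calibration idx, test idx):
    index `start` acts as covariate only, indices start+1 .. start+T-1 are
    randomly halved, and start+T is the day to be predicted."""
    rng = np.random.default_rng(seed)
    for start in range(n_windows):
        usable = np.arange(start + 1, start + T)   # the T-1 usable past days
        perm = rng.permutation(usable)
        half = (T - 1) // 2
        yield perm[:half], perm[half:], start + T

train, cal, test_day = next(rolling_splits(1000))
```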
The rolling window is shifted 1000 times, thus iterating for almost three years the forecasting of the next day based on the last $T$ observations. More specifically, and to allow for reproducibility of subsequent results, we will consider a rolling window ranging from 01/01/2017 to 04/01/2020. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{Diagrams/Split_random.png} \caption[Training-calibration-test split in a rolling window scenario]{Example of the training-calibration-test split in a rolling window scenario.} \label{fig:rolling_bs} \end{figure} The point predictors used throughout this application are those described in Section \ref{section:point_prediction}. The number of Functional Principal Components is once again chosen by means of the cumulative proportion of variance criterion, setting the target percentage of variance to 0.8. For each shift of the rolling window and for each forecasting algorithm, we check whether $SLA_{T+1}$ belongs to $\tilde{\mathcal{C}}_{T,1-\alpha}(x_{T+1})$, saving also the size of the corresponding prediction band. After having collected such results, we can compute the average coverage and use it to compare the performances of the different point predictors in this scenario. \subsection{Results} \label{subsec:BS_results} In order to visualize the results and show the potential applications of our method, we outline in \autoref{fig:panel} the observed and forecasted surfaces obtained for one of the days we aim to predict, as well as the lower and upper bounds defining the prediction band. For the sake of simplicity, we display only the results obtained using the FAR(1)-EK \eqref{eq:EK_estimator} estimator since, as discussed below, it provides on average the narrowest prediction bands.
In order to allow for a more insightful analysis, and to further investigate the evolution of such surfaces, we also implemented a dedicated \href{https://niccolo-ajroldi.shinyapps.io/Black-Sea-Forecasting/}{Shiny App}, available online, where results can be explored. \begin{figure}[] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Pics/observed.png} \caption{Observed Surface} \vspace{4ex} \end{minipage}% \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Pics/predicted.png} \caption{Predicted Surface} \vspace{4ex} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Pics/lower.png} \caption{Prediction Band Lower Bound} \vspace{4ex} \end{minipage}% \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Pics/upper.png} \caption{Prediction Band Upper Bound} \vspace{4ex} \end{minipage} \caption{Observed surface, predicted surface, prediction band lower bound, prediction band upper bound on 14/05/2017. Results are obtained using a sample size $T$ equal to 99, $b=1$, $l=48$, $\alpha=0.1$. The employed forecasting algorithm $g_{\mathcal{I}_1}$ is FAR(1)-EK \eqref{eq:EK_estimator}.} \label{fig:panel} \end{figure} We report in \autoref{fig:BS_coverage} the average coverage of the Conformal Prediction bands obtained across the 1000 predictions. As in Section \ref{section:4}, we pair this quantity with a 99\% confidence interval. Notice that in this case the confidence interval may be biased, due to the inevitable correlation between the data used to construct it; however, we still decided to include it in order to assess the dispersion of the average coverage around the mean. Coherently with the results of the simulation study, we can appreciate that in all cases the prediction bands capture the observed surface $y_{T+1}$ approximately $100(1-\alpha)\%$ of the times, regardless of the forecasting algorithm used.
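The average coverage and the indicative confidence interval paired with it can be computed with a few lines; a minimal Python sketch using a normal approximation for a binomial proportion (with the caveat discussed above: overlapping rolling windows make the hits correlated, so the interval is only indicative):

```python
import numpy as np

def empirical_coverage(hits, z=2.576):
    """Fraction of days on which the observed surface fell inside the band,
    with a normal-approximation interval (z = 2.576 for a 99% level)."""
    hits = np.asarray(hits, dtype=float)
    p = hits.mean()
    half = z * np.sqrt(p * (1.0 - p) / hits.size)
    return p, (p - half, p + half)

# e.g. 905 hits out of 1000 rolling-window days at alpha = 0.1
p, ci = empirical_coverage([1] * 905 + [0] * 95)
```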
For what concerns the size of the prediction bands, the Naive predictor produces by far the widest ones (see \autoref{fig:BS_width}), yet this is not reflected in a greater coverage compared to the other methods. On the other hand, forecasting algorithms modelling the autoregressive structure provide narrower prediction regions. Among these, we can see that the non-concurrent FAR(1) is the best performing, regardless of the way in which it is estimated (namely with FAR(1)-EK, FAR(1)-EK+ or FAR(1)-VAR). Nevertheless, the concurrent FAR(1) model also provides very tight prediction bands, almost comparable with the ones produced by the non-concurrent prediction algorithm. We are also interested in analyzing the \textit{pointwise} properties of CP bands in this scenario. Therefore, we display in \autoref{fig:map_cov} a map of the pointwise coverage of the prediction bands, obtained using FAR(1)-EK. We can appreciate, as expected, a high empirical coverage across the entire domain, emphasizing once again the peculiarity of our approach, which guarantees global coverage of the prediction surfaces, implying a pointwise coverage higher than the global one. We report in \autoref{fig:map_width} also the average width of such bands, which exhibits a peculiar pattern, probably due to data collection routines. Indeed, we can see from \autoref{fig:map_std} that a behaviour similar to the one observed in the map of pointwise width can be found by plotting the standard deviation of the original data. This is coherent with the employed CP framework, since the size of the prediction bands depends on the amplitude of the functional standard deviation.
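The pointwise coverage map is a per-grid-point tally over the rolling-window days; a minimal Python sketch (the (n_days, n_lon, n_lat) array layout is an assumption of ours):

```python
import numpy as np

def pointwise_coverage(observed, lower, upper):
    """For each grid point, the fraction of days on which the observed value
    fell between the lower and upper band surfaces."""
    inside = (observed >= lower) & (observed <= upper)
    return inside.mean(axis=0)

# toy example: 4 days on a 2x2 grid, with a single miss at grid point (0, 0)
obs = np.zeros((4, 2, 2))
obs[0, 0, 0] = 2.0
cov_map = pointwise_coverage(obs, -np.ones_like(obs), np.ones_like(obs))
```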
\begin{figure}[] \centering \includegraphics[width=.7\linewidth]{Pics/pointwise_coverage.png} \caption{Pointwise coverage of the prediction bands across the domain, obtained by counting the number of times that each point falls between prediction bands, and dividing by the total number of time steps in the rolling window framework.} \label{fig:map_cov} \end{figure} \begin{figure}[] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/pointwise_width.png} \caption{Pointwise average width of CP bands.} \label{fig:map_width} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/pointwise_std.png} \caption{Functional standard deviation of original data.} \label{fig:map_std} \end{subfigure} \caption{Comparison between CP bands' pointwise average width map and functional standard deviation of original data. The figure on the right denotes a peculiar pattern in the data collection process, which is retrieved in the left panel, due to the choice of using the standard deviation as a modulation function in the CP framework.} \end{figure} \begin{figure}[] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/Coverage_CI.png} \caption{Empirical coverage of CP bands. The dashed line represents nominal coverage $1-\alpha$.} \label{fig:BS_coverage} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Pics/Width_Boxplot.png} \caption{Size of CP bands.} \label{fig:BS_width} \end{subfigure} \caption{Results of the forecasting procedure in a rolling window setting. All the methods produce prediction bands with average coverages close to the nominal one. 
For what concerns predictive efficiency, the Naive predictor produces the widest bands, whereas the other methods produce more efficient bands, with sizes similar to each other.} \label{fig:BS_results} \end{figure} In conclusion, the case study confirms the validity of our procedure and shows how a FAR(1) model significantly improves predictive efficiency even in this more complex scenario. We conjecture that the accuracy of the forecasted surface (and thus the efficiency of the resulting CP bands) may be further enhanced by considering a more refined predictor. \section{Conclusions and Further Developments} \label{section:conclusions} In this work, we proposed a mathematical framework for probabilistic forecasting of two-dimensional functional time series. Building on the work of \citet{chernozhukov2018exact} and \citet{diquigiovanni2021_FTS}, we presented in Section \ref{section:CP} a randomization inference procedure, called Conformal Prediction, adapting it to our specific setting and thus allowing for uncertainty quantification. Given the novelty of the subject, we have also extended classical forecasting procedures from the one-dimensional functional framework. In particular, we have focused on the non-concurrent Functional Autoregressive process of order one, which represents the state of the art of functional time series modelling, proposing different estimation techniques. Whereas Theorem \ref{my_theorem} provides theoretical performance guarantees for the uncertainty quantification algorithm, we were also interested in verifying the empirical properties of CP bands and, at the same time, in testing and comparing different forecasting algorithms in terms of the resulting prediction regions. We demonstrated the strength of the proposed method through a simulation study, emphasizing the advantages of using a correctly specified forecasting model.
We have finally applied the proposed technique to a real case study, employing a novel time series dataset consisting of daily observations of the Sea Level Anomaly over the Black Sea during the last 20 years. Empirical results proved the validity of our approach even on non-synthetic data. Throughout this work, we modeled time series of surfaces as two-dimensional functional time series. This approach is very appealing, since it permits embedding the analysis in a Hilbert space, thus providing a solid mathematical framework. An interesting further development would be to exploit more advanced Machine Learning forecasting algorithms in the prediction pipeline. For instance, given the temporal dependence of our data, one might predict the next realization of the time series through a Recurrent Neural Network (RNN) or one of its improved versions, such as Long Short-Term Memory (LSTM) or Gated Recurrent Units (GRU). Moreover, since surfaces can be represented as bidimensional images, one may also consider adding convolutional layers to the network. We stress the fact that this approach is not antithetical to our work. In fact, given the high flexibility of the employed uncertainty quantification technique, namely Conformal Prediction, we would just have to substitute the point predictor $g_{\mathcal{I}_1}$ with a user-defined neural network, keeping the rest of the procedure unchanged, in order to pair the forecasted surfaces with probabilistic prediction bands. As shown in the simulation study, we expect that an improvement in the quality of the point predictor will result in narrower and thus more efficient prediction bands. \section*{Acknowledgments} This work has been partially supported by ACCORDO Attuativo ASI-POLIMI ``Attività di Ricerca e Innovazione" n. 2018-5-HH.0, collaboration agreement between the Italian Space Agency and Politecnico di Milano. M.F.
acknowledges the support of the JRC Centre of Advanced Studies CSS4P - ``Computational Social Science for Policy". \bibliographystyle{elsarticle-harv}
\section{Introduction} \label{sec:intro} Free fermion systems are trivially integrable and thus are described by an extensive number of conserved quantities. The corresponding conservation laws dictate a particular structure to their energy and entanglement spectra \cite{Haldane,Peschel2}. Moreover, these spectra are given in terms of a number of parameters that grows only polynomially with system size. When interactions are introduced, these conserved quantities cease to exist, giving rise to energy or entanglement spectra that, in general, are described by an exponential number of parameters \cite{PachosD}. This complexity makes interacting systems hard to investigate and understand qualitatively. Despite their complexity, interacting systems are responsible for a wide variety of interesting phenomena, such as the fractional quantum Hall effect~\cite{Tsui,Laughlin,QuantumHall}, the emergence of anyonic quasiparticles~\cite{PachosBook,KitaevAnyons}, many-body localisation~\cite{Nandkishore} and quantum many-body scars~\cite{Papic}. Many of these phenomena can be efficiently described in terms of a small number of emerging degrees of freedom. The simplest such scenario is the case where the presence of interactions transforms a system into a free or nearly free system \cite{PachosD}. Identifying the free degrees of freedom enables the efficient description of the system in terms of very few parameters that grow only polynomially with its size. Moreover, the emergence of freedom in interacting systems determines their thermalisation properties, the ballistic/diffusive propagation of quenches and the nature of their quasiparticle excitations~\cite{PachosD}. Surprisingly, there are many interacting systems that, even if they appear to be strongly interacting, behave effectively as almost free in the thermodynamic limit \cite{AlmostFree}, such as the Ising model in transverse and longitudinal fields~\cite{IntDis} or the XYZ model~\cite{Matos}.
In order to identify the ``freeness'' of interacting systems we need a measurable quantity that can reveal whether an interacting system is effectively behaving as free, either theoretically or in experiments. To this end, several measures have been put forward to identify the emergence of free behaviour~\cite{IntDis}. Here, we propose to employ the violation of Wick's theorem~\cite{Wick}, which is written in terms of expectation values \cite{Peschelmonos} that can in principle be measured experimentally, see \cite{ZollerMeas}. If the violation is very small then the system behaves as almost free. The violation of Wick's theorem was previously observed even in non-interacting systems \cite{Wicknonequilibrium}; therein, the authors study the non-equilibrium dynamics of many-body systems and witness the violation of Wick's theorem due to connected density correlations in the initial state. In our work, by contrast, Wick's violation is employed to provide diagnostics for the ground state of a static system. We also express the violation of Wick's theorem in terms of a few low-lying entanglement energies of the ground state. This provides a simple physical way to interpret the distribution of the few lowest entanglement energies of a system in terms of the applicability of Wick's theorem. In a different vein, various quantities from quantum information and information geometry have been used in the study of many-body systems in different contexts: from detecting the presence of phase transitions \cite{KitaevQPTent, NikolaFid,SanperaQPTs,VlachouUhlmann,VlachouDQPT} to characterizing many-body correlations \cite{EntvsCor,KitaevModel,CorrelationsvonNeumann,SanperaReview, Swingle,FMMV} and describing the dynamics of such systems \cite{Calabrese, Moitra, Hamma}. 
Along these lines, we also relate the violation of Wick's theorem to the interaction distance~\cite{IntDis}, a quantum information-theoretical quantity that manifests the extent of interactions in the quantum correlations of a system. The interaction distance is the trace distance between the reduced density matrix of the system's ground state and the density matrix of the closest possible free system \cite{StateDiscr}. While the interaction distance is an optimal theoretical way to infer emergent gaussianity, its relation to the violation of Wick's theorem provides an intuitive way to understand its properties and to experimentally estimate its value through measurements of quantum correlations (e.g. \cite{GaussianCorExp,localdensboson,highordecorbosons,PHMott}), as one can relate the operators of the original and the entanglement Hamiltonian \cite{Peschel, Zoller}. As an illustration, in \cite{Matos} one can find an example of applying our approach to the XYZ model in the simple case of a system with only two fermionic modes. Here, we go further by presenting also the more physically relevant case of systems with any number of fermionic modes. Moreover, the relation between Wick's violation and the interaction distance was only sketched in the Supplemental Material of \cite{Matos}, while here we present it in full detail and with rigorous proofs. In other words, the current article is the theoretical backbone that underpins the numerical work in \cite{Matos}. This paper is organised as follows: In Section \ref{sec:freefermions} we consider systems of free fermions. In particular, in Subsection \ref{subsec:expvalspecfreef} we present their entanglement spectra, and with respect to these we calculate the expectation value of the density operator of a fermionic mode. In Subsection \ref{subsec:wickfreef} we present Wick's theorem and derive its form for such systems without interactions. In Section \ref{sec:interf} we consider systems of interacting fermions. 
In Subsection \ref{subsec:expvalspecinterf}, we employ their entanglement spectra to calculate the expectation values of density operators involving one and two fermionic modes. Furthermore, we define a quantity that evaluates the violation of Wick's theorem in interacting systems, and depends on the expectation values of density operators involving one and two fermionic modes. To illustrate our method, in Subsection \ref{subsubsec:twomodes} we consider as an example the simple case of a system of interacting fermions with two fermionic modes, while in Subsection \ref{subsubsec:manymodes} we study the general case of a system with $N$ fermionic modes. In Subsection \ref{subsec:wickinterf}, we show that the quantity we defined to indicate the violation of Wick's decomposition can be bounded from above by the interaction distance. Finally, in Section \ref{sec:conclusions} we present a summary of our results and point out directions of future work. \section{Free fermions} \label{sec:freefermions} We start by studying the behaviour of free fermions. We identify the pattern of quantum correlations exhibited by the ground state of free fermion systems expressed in terms of its entanglement spectrum. In particular, we review the applicability of Wick's theorem for determining two-point correlation functions. This presentation will help us to subsequently define measures that quantify the deviation from free-spectra patterns when interactions are present. \subsection{Expectation values and entanglement spectra of free fermions} \label{subsec:expvalspecfreef} Consider a free fermion system in its ground state, $\ket{\psi_0}$, and its bipartition in subsystems $A$ and $B$. 
The reduced density matrix $\rho =\text{tr}_B(\ket{\psi_0}\!\bra{\psi_0})$ can be expressed as a thermal state \begin{equation} \rho = {e^{-H_E} \over Z}, \label{eq:rho} \end{equation} where $H_E$ is the entanglement Hamiltonian and $Z$ the corresponding partition function, given as $Z=\text{Tr} (e^{-H_E})$. Since the fermionic model is free, its entanglement Hamiltonian is also free~\cite{Peschel}, and can be given in terms of $N$ \emph{fermionic eigenoperators} $a_i$, $a_i^\dagger$ associated to $N$ \emph{fermionic modes}, as \begin{equation} H_E = \sum_{i=1}^N \epsilon_i a^\dagger_i a_i, \label{eqn:entHam} \end{equation} where $\epsilon_i$ are the \emph{single-mode entanglement energies} for $i=1,\ldots, N$. Note that we can absorb the partition function $Z$ in $H_E$ by shifting the overall energy by $E_0 \neq 0$, i.e. $H_E=E_0+\sum_{i=1}^N \epsilon_i a^\dagger_i a_i$, as in Ref.~\cite{IntDis}. Equivalently, we can write \eqref{eqn:entHam} in terms of the corresponding \emph{single-mode density operators} $\hat{n}_i=a^\dagger_i a_i$ for $i=1,\ldots, N$, as \begin{equation} H_E = \sum_{i=1}^N \epsilon_i \hat{n}_i. \label{eqn:entHamn} \end{equation} The entanglement spectrum of $H_E$ is given by \begin{equation} E_k = \sum_{i=1}^N \epsilon_i n_i, \label{eqn:free} \end{equation} where $n_i=0,1$ are the eigenvalues of $\hat{n}_i$ and the index $k$ labels the occupation configuration $(n_1,\ldots,n_N)$. For a density matrix $\rho$ as in \eqref{eq:rho}, the expectation value of the single-mode density operator $\hat{n}_k$ for some mode $k$ is \begin{equation} \langle \hat{n}_k\rangle_\rho = \text{Tr} (\hat{n}_k \rho). \label{eqn:2point} \end{equation} To introduce our notation and techniques employed in later sections, let us explicitly calculate $\langle \hat{n}_k \rangle_{\rho}$ following well-established steps. Let us start with the partition function for the state $\rho$ as given in \eqref{eq:rho}. 
As the terms of $H_E$ in \eqref{eqn:entHam} commute with each other we can write the partition function as \begin{align} Z &= \text{Tr}\left(\prod_{i=1}^N e^{-\epsilon_i \hat{n}_i}\right)= \prod_{i=1}^N \left(\sum_{n_i=0}^{1}e^{-\epsilon_in_i}\right)\nonumber\\ &=\prod_{i=1}^N\left(1+e^{-\epsilon_i }\right), \label{en:Z} \end{align} where the trace is calculated with respect to the two possible values of the occupation number, $n_i=0,1$, of each fermionic mode $i$. The expectation value of the single-mode density operator for some mode $k$ becomes \begin{align} \langle \hat{n}_k\rangle_\rho&=\text{Tr}\left(\hat{n}_k\frac{e^{-H_E}}{Z}\right)=\frac{1}{Z}\text{Tr}\left(\hat{n}_ke^{-\sum_{i=1}^{N}\epsilon_i \hat{n}_i}\right)\nonumber\\ &=\frac{1}{Z}\text{Tr}\left[-\frac{\partial}{\partial \epsilon_k}\left(e^{-H_E}\right)\right] =-\frac{1}{Z}\frac{\partial Z}{\partial \epsilon_k}. \end{align} By employing \eqref{en:Z} we obtain \begin{equation} \langle \hat{n}_k\rangle_\rho ={1 \over 1+ e^{\epsilon_k}}, \end{equation} i.e., the expectation value of the density for the $k-$th eigenmode in terms of the single-mode entanglement energy, $\epsilon_k$. \subsection{Wick's theorem for free fermions} \label{subsec:wickfreef} In the case of free-fermion systems Wick's theorem provides the means to calculate the expectation values of many-mode density operators in terms of the expectation values of fewer-mode density operators. Let $i,j$ be two fermionic modes. Then, the general form of Wick's theorem for the \textit{two-mode density operator} $\hat{n}_i\hat{n}_j=a_i^\dagger a_i a_j^\dagger a_j $ with respect to the reduced density matrix, $\rho$, of the ground state is \begin{widetext} \begin{equation} \langle a_i^\dagger a_i a_j^\dagger a_j \rangle_\rho = \langle a_i^\dagger a_i \rangle_\rho \langle a_j^\dagger a_j \rangle_\rho- \langle a_i^\dagger a_j^\dagger \rangle_\rho \langle a_i a_j \rangle_\rho+\langle a_i^\dagger a_j \rangle_\rho \langle a_i a_j^\dagger \rangle_\rho. 
\label{eqn:Wicksfull} \end{equation} \end{widetext} If we choose $a_i$ to be the eigenoperators of the entanglement Hamiltonian $H_E$ from \eqref{eqn:entHam} the above equation simplifies to $\langle a_i^\dagger a_i a_j^\dagger a_j \rangle_\rho = \langle a_i^\dagger a_i \rangle_\rho \langle a_j^\dagger a_j \rangle_\rho$, as the last two expectation values in the RHS of \eqref{eqn:Wicksfull} are necessarily zero with respect to the ground state of the diagonalised $H_E$. In terms of the corresponding density operators this can be written as \begin{equation} \langle \hat{n}_i \hat{n}_j \rangle_\rho = \langle \hat{n}_i \rangle_\rho \langle \hat{n}_j \rangle_\rho. \label{eqn:Wicksn} \end{equation} Note that Wick's theorem, when applied to non-interacting systems, is usually expressed in terms of operators with respect to the original Hamiltonian of the system, while here we apply it to the mode-density operators of the entanglement Hamiltonian. One can show that the density operators of the entanglement Hamiltonian can be expressed in terms of the density operators of the original Hamiltonian (see \cite{Zoller,Peschel}), and thus verify that the form of Wick's theorem that we use is also valid. This decomposition of the two-mode density operators in the respective single-mode density operators is a result of the absence of interactions, dictating a trivial pattern of quantum correlations between the two fermionic modes. \section{Interacting fermionic systems} \label{sec:interf} We now investigate the behaviour of quantum correlations for interacting fermionic systems. Initially, we want to express the expectation values of density operators for a single mode and for two modes as a function of the entanglement spectra following the same methodology as in the free case. This will help us determine the violation of Wick's theorem, and employ it, in turn, to quantify the effect of interactions in terms of the interaction distance. 
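The free-fermion relations of this section are easy to check numerically. The following sketch (ours, not from the paper; the mode energies are arbitrary illustrative values) builds the Gibbs weights of a diagonal free entanglement Hamiltonian by enumerating occupation bitstrings and verifies both $\langle \hat{n}_k\rangle_\rho = (1+e^{\epsilon_k})^{-1}$ and the Wick factorisation \eqref{eqn:Wicksn}:

```python
# Sketch (not from the paper): check <n_k> = 1/(1+e^{eps_k}) and the Wick
# factorisation <n_i n_j> = <n_i><n_j> for a diagonal free entanglement
# Hamiltonian, by summing Gibbs weights over all occupation bitstrings.
from itertools import product
from math import exp

def free_expectations(eps):
    """<n_k> and <n_i n_j> for rho = e^{-sum_i eps_i n_i} / Z."""
    N = len(eps)
    Z = 0.0
    n_avg = [0.0] * N
    nn_avg = [[0.0] * N for _ in range(N)]
    for occ in product((0, 1), repeat=N):        # occupation bitstrings
        w = exp(-sum(e * n for e, n in zip(eps, occ)))
        Z += w
        for i in range(N):
            n_avg[i] += w * occ[i]
            for j in range(N):
                nn_avg[i][j] += w * occ[i] * occ[j]
    return [v / Z for v in n_avg], [[v / Z for v in row] for row in nn_avg]

eps = [0.3, 1.1, 2.4]                            # illustrative mode energies
n_avg, nn_avg = free_expectations(eps)
for k, e in enumerate(eps):
    assert abs(n_avg[k] - 1.0 / (1.0 + exp(e))) < 1e-12   # Fermi factor
assert abs(nn_avg[0][1] - n_avg[0] * n_avg[1]) < 1e-12    # Wick holds
```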
\subsection{Expectation values and entanglement spectra of interacting fermions} \label{subsec:expvalspecinterf} For an interacting fermionic system we expect that in general the entanglement Hamiltonian of its ground state is also interacting. We assume that $H_E^{\text{int}}$ is diagonal in some basis of eigenoperators $a_i$ and $a_i^\dagger$. As $H_E^{\text{int}}$ is diagonal it is necessarily expressed in terms of density operators $\hat n_i = a_i^\dagger a_i$. The simplest first term we can write down is the free term given by \eqref{eqn:entHam}. The simplest interaction is the Coulomb-like two-mode density operator. More complicated higher order interactions are also possible, but we expect the corresponding correlators to be negligible for generic Hamiltonians. Hence, the simplest diagonal entanglement Hamiltonian, $H^\text{int}_E$, can be expressed as \begin{equation} H^\text{int}_E = \sum_{i=1}^N \epsilon_i \hat{n}_i + \sum_{i=1}^{N-1}\sum_{j=2;j>i}^N \epsilon_{ij} \hat{n}_i\hat{n}_j + \dotsm, \label{eqn:entHamint1} \end{equation} where $\epsilon_{ij}$ are the \emph{two-mode entanglement energies} of $H^\text{int}_E$. The ellipsis in \eqref{eqn:entHamint1} refers to terms comprising more than two modes. As a result, the entanglement spectrum of $H^\text{int}_E$ is given by \begin{equation} E_k = \sum_{i=1}^N \epsilon_i n_i + \sum_{i=1}^{N-1}\sum_{j=2;j>i}^N \epsilon_{ij} n_in_j. \end{equation} Note that $\epsilon_{ij}$ contributes to $E_k$ only if both modes $i$ and $j$ are populated, signalling the interaction between them. For weak interactions it is expected that the two-mode energies $\epsilon_{ij}$ are much smaller than the single-mode energies, i.e. $|\epsilon_{ij}|\ll |\epsilon_i|$. 
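The interacting entanglement spectrum of \eqref{eqn:entHamint1} can be generated by the same bitstring enumeration; a minimal sketch (ours, with illustrative energies):

```python
# Sketch (ours): entanglement spectrum E(n) = sum_i eps_i n_i
# + sum_{i<j} eps_ij n_i n_j, by enumerating occupation bitstrings;
# eps2 holds the two-mode energies eps_ij keyed by (i, j) with i < j.
from itertools import product

def spectrum(eps, eps2):
    levels = []
    for occ in product((0, 1), repeat=len(eps)):
        E = sum(e * n for e, n in zip(eps, occ))
        E += sum(v * occ[i] * occ[j] for (i, j), v in eps2.items())
        levels.append(E)
    return sorted(levels)

# two-mode check: spectrum must be {0, eps1, eps2, eps1 + eps2 + eps12}
lv = spectrum([0.5, 1.2], {(0, 1): 0.1})
assert len(lv) == 4 and lv[0] == 0.0
assert abs(lv[-1] - (0.5 + 1.2 + 0.1)) < 1e-12
```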
We can express the expectation value of the single-mode density operator $\hat{n}_k$ as a function of the corresponding single-mode entanglement energy $\epsilon_{k}$, as \begin{align} \langle \hat{n}_k\rangle_\rho&=\frac{1}{Z}\text{Tr}\left(\hat{n}_ke^{-\sum_{i=1}^{N}\epsilon_i \hat{n}_i-\sum_{i=1}^{N-1}\sum_{j=2;j>i}^N \epsilon_{ij} \hat{n}_i\hat{n}_j}\right)\nonumber\\&=-\frac{1}{Z}\frac{\partial Z}{\partial \epsilon_k}. \label{eqn:kmode} \end{align} Furthermore, the expectation value of the two-mode density operator $\hat{n}_k\hat{n}_l$ can be analogously expressed as a partial derivative with respect to the corresponding two-mode entanglement energy $\epsilon_{kl}$, as \begin{eqnarray} \langle \hat{n}_k\hat{n}_l\rangle_\rho=-\frac{1}{Z}\frac{\partial Z}{\partial \epsilon_{kl}}. \label{eqn:klmodes} \end{eqnarray} For a free system an equivalent expression to \eqref{eqn:klmodes} does not exist, as Wick's theorem given by \eqref{eqn:Wicksn} directly provides the two-mode expectation value in terms of single-mode densities. For the ground state of an interacting system the expectation values of single and two-mode density operators are in general unrelated. Hence, we define the violation of Wick's theorem, ${\cal W}(\rho)$ as \begin{equation} {\cal W}(\rho) := |\langle \hat{n}_i \hat{n}_j \rangle_\rho - \langle \hat{n}_i \rangle_\rho \langle \hat{n}_j \rangle_\rho|, \label{eqn:W} \end{equation} in order to quantify the effect interactions have on the ground state quantum correlations of the system. In the following, we will first determine its value in terms of the entanglement spectrum for the simple entanglement Hamiltonian of two interacting fermionic modes. Second, we will consider a system with any number of fermionic modes and we will employ adequate assumptions to derive approximate expressions for $\langle \hat n_k\rangle_{\rho}$ and $\langle \hat n_k \hat n_l \rangle_{\rho}$, which can be used to evaluate ${\cal W}(\rho)$. 
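Equations \eqref{eqn:kmode} and \eqref{eqn:klmodes} can be verified with central finite differences of $Z$; the following sketch (ours, with arbitrary illustrative energies) does so for a three-mode example:

```python
# Sketch (ours): verify <n_k> = -(1/Z) dZ/d eps_k and
# <n_k n_l> = -(1/Z) dZ/d eps_kl via central finite differences.
from itertools import product
from math import exp

def gibbs(eps, eps2):
    """Partition function and expectation values for the diagonal H_E^int."""
    N = len(eps)
    Z = 0.0
    n = [0.0] * N
    nn = {key: 0.0 for key in eps2}
    for occ in product((0, 1), repeat=N):
        E = sum(e * o for e, o in zip(eps, occ))
        E += sum(v * occ[i] * occ[j] for (i, j), v in eps2.items())
        w = exp(-E)
        Z += w
        for i in range(N):
            n[i] += w * occ[i]
        for (i, j) in eps2:
            nn[(i, j)] += w * occ[i] * occ[j]
    return Z, [v / Z for v in n], {k: v / Z for k, v in nn.items()}

eps = [0.4, 0.9, 1.5]                                  # illustrative values
eps2 = {(0, 1): 0.2, (0, 2): -0.1, (1, 2): 0.05}
Z, n, nn = gibbs(eps, eps2)
h = 1e-6

ep, em = list(eps), list(eps)
ep[0] += h; em[0] -= h
Zp, Zm = gibbs(ep, eps2)[0], gibbs(em, eps2)[0]
assert abs(n[0] + (Zp - Zm) / (2 * h * Z)) < 1e-6      # Eq. (kmode)

e2p, e2m = dict(eps2), dict(eps2)
e2p[(0, 1)] += h; e2m[(0, 1)] -= h
Zp, Zm = gibbs(eps, e2p)[0], gibbs(eps, e2m)[0]
assert abs(nn[(0, 1)] + (Zp - Zm) / (2 * h * Z)) < 1e-6  # Eq. (klmodes)
```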
\subsubsection{ The two-mode case} \label{subsubsec:twomodes} Using \eqref{eqn:kmode} and \eqref{eqn:klmodes} we can calculate the violation of Wick's decomposition as defined in \eqref{eqn:W} for an interacting system of fermions with any number of modes. Here, we present a simple example illustrating it for the case where the entanglement Hamiltonian has only two interacting fermionic modes. In this case \eqref{eqn:entHamint1} becomes \begin{equation} H_E^\text{int}=\epsilon_1\hat{n}_1+\epsilon_2\hat{n}_2+\epsilon_{12}\hat{n}_1\hat{n}_2. \end{equation} The entanglement spectrum of $H_E^\text{int}$ is given by $E_0=0$, $E_1=\epsilon_1$, $E_2 =\epsilon_2$, $E_{12}=\epsilon_1+\epsilon_2+\epsilon_{12}$. The partition function is \begin{align} Z&=\text{Tr} \left(e^{-H_E^\text{int}}\right)=\sum_{n_1=0}^{1}\sum_{n_2=0}^{1}e^{-\epsilon_1n_1-\epsilon_2n_2-\epsilon_{12}n_1n_2}\nonumber\\ &=1+e^{-\epsilon_{1}}+e^{-\epsilon_{2}}+e^{-\epsilon_{1}-\epsilon_{2}-\epsilon_{12}},\label{eqn:2partition} \end{align} or, written in terms of the entanglement energies, $Z=1+e^{-E_1}+e^{-E_2}+e^{-E_{12}}$. By employing \eqref{eqn:kmode} we have that the expectation value $\langle \hat{n}_1\rangle_\rho$ is given by \begin{align} \langle \hat{n}_1\rangle_\rho&=-\frac{1}{Z} \frac{\partial Z}{\partial \epsilon_1}\nonumber\\ &= \frac{1+e^{E_{12} -E_1}}{ 1 + e^{E_{12} -E_1} + e^{E_{12} -E_2} +e^{E_{12}}}, \label{eqn:n1} \end{align} and analogously for $\langle \hat{n}_2\rangle_\rho$. By employing \eqref{eqn:klmodes} we find that the expectation value $\langle \hat{n}_1\hat{n}_2\rangle_\rho$ is given by \begin{align} \langle \hat{n}_1\hat{n}_2\rangle_\rho & = -\frac{1}{Z}\frac{\partial Z}{\partial\epsilon_{12}}\nonumber\\ & = \frac{1}{1+e^{E_{12}-E_{1}}+e^{E_{12}-E_{2}}+e^{E_{12}}}. 
\label{eqn:bothmodes} \end{align} Using \eqref{eqn:n1} and \eqref{eqn:bothmodes} we can determine the violation of Wick's decomposition, ${\cal W}(\rho)$, given by \eqref{eqn:W}, as \begin{equation} {\cal W}(\rho)=\frac{e^{E_{12}}\left|1-e^{E_{12}-E_1-E_2}\right|}{(1+e^{E_{12} - E_{1}}+e^{E_{12} - E_{2}}+e^{E_{12}})^2}. \label{eq:Two} \end{equation} Clearly, for non-interacting systems $E_{12}=E_1+E_2$, which gives ${\cal W}(\rho)=0$. \subsubsection{ The many-mode case} \label{subsubsec:manymodes} In the general case of a system with $N$ fermionic modes, finding closed formulas for $\langle \hat n_k\rangle_{\rho}$ and $\langle \hat n_k \hat n_l \rangle_{\rho}$ (and, in turn, for ${\cal W}(\rho)$) is a tedious task. One way to simplify the calculation is to assume that the interactions are weak, i.e. the two-mode energies $\epsilon_{ij}$ are much smaller in magnitude than the single-mode energies $\epsilon_{k}$ for any modes $i,j,k\in\{1,2,\ldots, N\}$. We can, then, approximate $\langle \hat n_k\rangle_{\rho}$ and $\langle \hat n_k \hat n_l \rangle_{\rho}$ keeping only terms up to first order in $\epsilon_{ij}$. The expectation value of the single-mode density operator is given by \begin{widetext} \begin{equation} \langle n_k\rangle_\rho=\frac{e^{-\epsilon_{k}}}{Z}\sum_{n_i=0}^1\sum_{n_j=0}^1\left(\prod_{i=1;i\neq k}^Ne^{-\epsilon_i n_i}\right)\left(1-\sum_{i=1;i\neq k}^{N-1}\sum_{j=2;\{j>i, j\neq k\}}^{N}\epsilon_{ij}n_in_j-\sum_{i=1;i<k}^{N-1}\epsilon_{ik}n_i-\sum_{i=2;i>k}^{N}\epsilon_{ki}n_i \right),\label{eqn:nkmain} \end{equation} \end{widetext} where $Z$ is given by \eqref{eqn:Zsingle}, and that of the two-mode density operator by \begin{equation} \langle n_kn_l\rangle=\frac{e^{-\epsilon_{k}-\epsilon_{l}}}{Z}\prod_{i=1;i\neq k,l}^N(1+e^{-\epsilon_i}),\label{eqn:nknlmain} \end{equation} where $Z$ is given by \eqref{eqn:Ztwomode} (for a detailed derivation see Appendix \ref{sec:Appendix}). The above expressions can, then, be used along with \eqref{eqn:W} to compute ${\cal W}(\rho)$. 
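The two-mode result can be cross-checked against direct enumeration. Note that rederiving ${\cal W}(\rho)$ from \eqref{eqn:n1} and \eqref{eqn:bothmodes} produces an overall factor $e^{E_{12}}$ in the numerator of the closed form; the sketch below (ours, with illustrative energies) verifies this prefactor numerically and confirms that ${\cal W}$ vanishes for a free spectrum:

```python
# Sketch (ours): two-mode Wick violation from direct Gibbs weights, compared
# with the closed form; note the e^{E12} prefactor in the numerator.
from math import exp

def two_mode_W(e1, e2, e12):
    """W = |<n1 n2> - <n1><n2>| for H_E = e1 n1 + e2 n2 + e12 n1 n2."""
    E1, E2, E12 = e1, e2, e1 + e2 + e12
    Z = 1 + exp(-E1) + exp(-E2) + exp(-E12)
    n1 = (exp(-E1) + exp(-E12)) / Z
    n2 = (exp(-E2) + exp(-E12)) / Z
    n12 = exp(-E12) / Z
    return abs(n12 - n1 * n2), (E1, E2, E12)

W, (E1, E2, E12) = two_mode_W(0.7, 1.3, 0.4)
D = 1 + exp(E12 - E1) + exp(E12 - E2) + exp(E12)
closed = exp(E12) * abs(1 - exp(E12 - E1 - E2)) / D**2
assert abs(W - closed) < 1e-12
assert two_mode_W(0.7, 1.3, 0.0)[0] < 1e-12     # free spectrum: W = 0
```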
In the following we will relate the violation of Wick's decomposition with the interaction distance, which optimally identifies the interactiveness of a fermionic system. \subsection{Violation of Wick's theorem for interacting fermions and its relationship with the interaction distance} \label{subsec:wickinterf} Inspired by quantum information, a measure of the effect of interactions on the ground state correlations of a system is the \emph{interaction distance}, $D_{\cal F}(\rho)$~\cite{IntDis}. The interaction distance measures the distance between the ground state quantum correlations and the closest pattern of correlations of a system of free fermions. To define $D_{\cal F}(\rho)$ we consider the ground state $\ket{\psi}$ and the reduced density matrix $\rho = \text{Tr}_B(\ket{\psi}\!\bra{\psi})$ obtained from a bipartition of the system in $A$ and its complement $B$. The interaction distance of a state $\rho$ is then given by \begin{equation} D_{\mathcal{F}}(\rho)=\min_{\sigma\in\mathcal{F}} \frac{1}{2}\text{Tr}|\rho-\sigma|, \label{eqn:Df} \end{equation} where $\frac{1}{2}\text{Tr}|\rho-\sigma|$ is the trace distance between the two quantum states $\rho$ and $\sigma$. The minimisation is over all states $\sigma$ in the manifold $\mathcal{F}$ of all possible free density matrices. In the presence of interactions, the spectrum of the entanglement Hamiltonian that determines the quantum correlations between $A$ and $B$ can deviate from the pattern of the spectra of free-fermion systems, given in \eqref{eqn:free}. Hence, $D_{\mathcal{F}}(\rho)$ quantifies how interacting a state $\rho$ is by means of how far it is from the closest possible free state. Similar to the violation of Wick's decomposition, the interaction distance can successfully identify if free-fermion behaviour can emerge out of a strongly interacting system by means of the corresponding entanglement spectra~\cite{IntDis, Meichanetzidis, PachosD,Patrick, Vincent, Matos}. 
However, there is no direct way to measure $D_{\cal F}$ in the laboratory. On the other hand, the violation of Wick's theorem, ${\cal W}$, is given in terms of expectation values of observables that can, in principle, be determined in an experiment. Below, we relate the violation of Wick's theorem for an interacting system with the interaction distance of its ground state. This relation will facilitate the physical interpretation of the interaction distance as well as its estimation, without the need for the optimisation procedure required in \eqref{eqn:Df}. We now demonstrate that the violation of Wick's decomposition ${\cal W}(\rho)$ is upper bounded by the interaction distance, $D_{\cal F}(\rho)$. To begin with, note that for a free state $\sigma$, we know that $\langle \hat{n}_i \hat{n}_j \rangle_\sigma - \langle \hat{n}_i \rangle_\sigma \langle \hat{n}_j \rangle_\sigma =0$. We then have \begin{widetext} \begin{equation} {\cal W}(\rho) = \big|\langle \hat{n}_i \hat{n}_j \rangle_\rho - \langle \hat{n}_i \hat{n}_j \rangle_\sigma -\langle \hat{n}_i \rangle_\rho \langle \hat{n}_j \rangle_\rho + \langle \hat{n}_i \rangle_\sigma \langle \hat{n}_j \rangle_\sigma \big|. \end{equation} \end{widetext} By employing the triangle inequality, we have \begin{widetext} \begin{align} {\cal W}(\rho)& \leq \big|\langle \hat{n}_i \hat{n}_j \rangle_\rho - \langle \hat{n}_i \hat{n}_j \rangle_\sigma \big| + \big| \langle \hat{n}_i \rangle_\rho \left(\langle \hat{n}_j \rangle_\rho - \langle \hat{n}_j \rangle_\sigma \right)\big| +\big| \left(\langle \hat{n}_i \rangle_\rho - \langle \hat{n}_i \rangle_\sigma\right)\langle \hat{n}_j \rangle_\sigma \big| \nonumber \\ & \leq \big| \text{Tr}\left[\hat{n}_i \hat{n}_j(\rho-\sigma)\right]\big| + \big|\langle \hat{n}_i \rangle_\rho\big| \cdot \big|\text{Tr} \left[\hat{n}_j (\rho-\sigma)\right]\big| + \big|\text{Tr}\left[\hat{n}_i (\rho-\sigma)\right]\big| \cdot \big|\langle \hat{n}_j\rangle_\sigma \big|. 
\label{eqn:ineq1} \end{align} \end{widetext} Writing the state $\rho-\sigma$ in its diagonal basis as $\rho-\sigma=\sum_zs_z\ket{s_z}\bra{s_z}$,\footnote{In general, the states $\rho$ and $\sigma$ are not necessarily diagonal in the same basis. We consider, though, such states, because in what follows we will use the interaction distance, which is the \emph{minimum} distance between the interacting state and the manifold of free states. In other words the state $\sigma$ is the free state \emph{closest} to $\rho$. For this optimization the two states must commute, i.e., they are indeed diagonal in the same basis (for details, see \cite{IntDis} and \cite{PachosD}).} we have that for any mode $k$ \begin{align} \big|\text{Tr}\left[\hat{n}_k (\rho-\sigma)\right]\big| & = \Big|\sum_zs_z\braket{s_z|\hat{n}_k|s_z}\Big|\nonumber\\ & \leq \max_z \braket{s_z|\hat{n}_k|s_z}\sum_z|s_z|\nonumber\\ & =\|\hat n_k\|\sum_z|s_z|=\|\hat n_k\| \,\text{Tr} |\rho-\sigma|, \label{eqn:maxeigen} \end{align} where we define $ \| \hat n_k\|$ to be the largest eigenvalue of $\hat{n}_k$, i.e. $ \| \hat n_k\| = 1$, the maximum population of the fermionic mode. Equation \eqref{eqn:maxeigen} holds for any free state $\sigma$ that commutes with $\rho$, thus it also holds for the optimal free state determined by the optimisation procedure in the evaluation of $D_{\mathcal{F}}(\rho)$. Therefore, from \eqref{eqn:maxeigen} and \eqref{eqn:Df} we get \begin{equation} \big|\text{Tr}\left[\hat{n}_k (\rho-\sigma)\right]\big|\leq 2 D_{\mathcal{F}}(\rho), \end{equation} see also~\cite{Vincent}. Hence, from \eqref{eqn:ineq1}, \eqref{eqn:maxeigen} and using $|\langle \hat{n}_k\rangle_{\rho,\sigma}|\leq 1$, for all $\sigma$, $\rho$ and $k$, we obtain \begin{equation} {\cal W}(\rho) \leq 6 D_{\cal F}(\rho). \label{eq:DfandW} \end{equation} Thus, the violation of Wick's decomposition, ${\cal W}(\rho)$, is bounded from above by the interaction distance, $D_{\cal F}(\rho)$. 
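The bound \eqref{eq:DfandW} can be tested numerically for two modes, where the free manifold is parametrised by two single-mode energies. A coarse grid search (ours, with illustrative energies) only upper-bounds $D_{\cal F}$, which is sufficient for checking the inequality:

```python
# Sketch (ours): numerical test of W(rho) <= 6 D_F(rho) for two modes.
# The grid search upper-bounds D_F, which suffices to test the bound.
from math import exp

def probs(E):
    """Gibbs probabilities of a four-level entanglement spectrum."""
    Z = sum(exp(-e) for e in E)
    return [exp(-e) / Z for e in E]

e1, e2, e12 = 0.6, 1.1, 0.8                      # illustrative energies
p = probs([0.0, e1, e2, e1 + e2 + e12])          # interacting state
W = abs(p[3] - (p[1] + p[3]) * (p[2] + p[3]))    # |<n1 n2> - <n1><n2>|

# free states have spectra {0, f1, f2, f1 + f2}; for commuting (diagonal)
# states the trace distance is 0.5 * sum |p - q|
DF_upper = min(
    0.5 * sum(abs(a - b) for a, b in zip(p, probs([0.0, f1, f2, f1 + f2])))
    for f1 in (0.02 * k for k in range(200))
    for f2 in (0.02 * k for k in range(200))
)
assert 0 < W <= 6 * DF_upper
```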
Recent investigations have shown that the numerical values of ${\cal W}(\rho)$ and $D_{\cal F}(\rho)$ are often almost identical~\cite{Matos}. This tight relation provides a practical way to estimate the interaction distance in terms of simple expectation values that can be measured in the laboratory, see for example \cite{ZollerMeas}. \section{Conclusions and future work} \label{sec:conclusions} In this paper we investigated systems of interacting fermions and the effect interactions have on their quantum correlations. The coupling strength of the interaction terms gives only a very crude measure of a system's ``interactiveness''. To overcome this we considered the effect interactions have on the quantum correlations of a system. We analysed the violation of Wick's decomposition, a common tool used in free systems to decompose high-order correlations in terms of low-order ones. Specifically, Wick's theorem can be applied to systems of free fermions to decompose ground state expectation values of two-mode density operators into expectation values of single-mode density operators. For systems of interacting fermions such a decomposition does not hold. It is exactly the extent of this violation that we used here to quantify the effect of interactions between the fermionic modes. Our analytic investigation was carried out in terms of the entanglement spectra, thus offering the possibility to translate our findings to quantum information language. In particular, we expressed the violation of Wick's decomposition in terms of a few low-lying entanglement energies that faithfully reproduce the dominant correlations in the system. In addition, we related the violation of Wick's decomposition to the interaction distance, which is an optimal measure of ``interactiveness'' in terms of quantum correlations. 
While both the violation of Wick's decomposition ${\cal W}(\rho)$ and the interaction distance $D_{\cal F}(\rho)$ can theoretically be seen as serving the same purpose, the relationship we derived can be very useful in practice. The violation ${\cal W}(\rho)$ is given in terms of expectation values of observables that can in principle be measured in the laboratory, see for instance \cite{ZollerMeas}. Nevertheless, we do not know if ${\cal W}(\rho)$ is optimal in identifying the ``interactiveness'' of a system. On the other hand, $D_{\cal F}(\rho)$ is defined as the optimal measure of ``interactiveness''. However, it lacks a relation to observables, making it hard to connect to experiments. Here we establish a tight relation between these two quantities that can facilitate the theoretical and experimental investigation of physically relevant models, as was numerically verified in \cite{Matos} for the XYZ model in the simple case of two fermionic modes. Having linked the violation of Wick's decomposition, ${\cal W}$, to the entanglement energies, one could go further and relate it to other quantities associated to the entanglement spectrum, such as the entanglement entropies \cite{Calabrese,Swingle,Moitra} or the alternatives presented in \cite{Hamma}. Along these lines, ${\cal W}$ could possibly be useful to infer other features of many-body systems, such as their integrability or the lack of it. Besides the interaction distance, one could try to relate ${\cal W}$ with different quantities in order to find different -- possibly tighter -- bounds. Further theoretical work could consist of applying our approach to interacting bosons or spin systems. Also, one could use our study to experimentally infer the effect of interactions in the ground state correlations of fermionic systems. \begin{acknowledgements} The authors would like to thank A. Deger, A. Hallam, G. Matos, K. Meichanetzidis, Z. Papi\'c and C. J. Turner for fruitful discussions. C. V. 
further acknowledges the hospitality of the Theoretical Physics Research Group at the University of Leeds. J.K.P. acknowledges support from EPSRC Grant No. EP/R020612/1. This project was partially funded by the EPSRC (Grant No. EP/R020612/1). The data that support the findings of this study are available from the authors upon request. C. V. acknowledges support from the Security and Quantum Information Group (SQIG) in Instituto de Telecomunica\c c\~oes, Lisbon. This work is funded by the FCT (Funda\c c\~ao para a Ci\^encia e a Tecnologia) through national funds FCT I.P. and, when eligible, by COMPETE 2020 FEDER funds, under Award UIDB/50008/2020 and the Scientific Employment Stimulus -- Individual Call (CEEC Individual) -- 2020.03274.CEECIND/CP1621/CT0003. \end{acknowledgements}
\section{Introduction} \label{sect:intro} Our understanding of the accretion processes at low accretion rates suggests that magnetic field may be entangled with hot ions at virial temperatures and could be sheared and amplified to the local equipartition value (Rees 1984). If so, dissipation of this field, albeit small, should produce micro-flares from time to time, and they could be detectable especially if the object is located nearby. In the case of AGNs and QSOs, the flares are common and the energy release could carry information about the accretion rates in those systems. We present here an application of this understanding of the accretion process in the context of the galactic black hole transient A0620-00 (Pal \& Chakrabarti, 2004). A0620-00 was discovered in 1975 through the Ariel V sky survey (Elvis et al. 1975). It is located at a distance of $D=1.05$ kpc (Shahbaz, Naylor \& Charles 1994). A0620-00 is in a binary system and its mass is estimated to be around $10M_\odot$ (Gelino, Harrison \& Orosz 2001). A0620-00 is not particularly well known for its activity in radio wavelengths. It was last reported to have radio outbursts in 1975 at $962$ and $151$ MHz (Davis et al. 1975; Owen et al. 1976). A few years after these observations, Duldig et al. (1979) reported a low level activity at $2$ cm ($14.7$ GHz). More recent re-analysis of the $1975$ data revealed that it underwent multiple jet ejection events (Kuulkers et al. 1999). There are no other reports of radio observations of this object. The outbursts and quiescent states are thought to be due to some form of thermal-viscous instability in the accretion disk. In the quiescent state, the accretion rate becomes very low (e.g. Lasota 2001). 
Assuming that there is a Keplerian disk, from the observations in the optical and the X-ray, the accretion rate was estimated to vary from a few times the Eddington rate in outbursts to less than $ 10^{-11} M_\odot$ yr$^{-1}$ in quiescence (de Kool 1988; McClintock \& Remillard 1986). Assuming a low-efficiency flow model, McClintock \& Remillard (2000) obtained the accretion rate to be $\sim 10^{-10} M_\odot$ yr$^{-1}$ using X-ray observations. A0620-00 has been in a quiescent state for quite some time. In the present paper, we report the observation of a micro-flare at radio wavelengths (frequency $1.28$ GHz) coming from this object. \section{Observations and results} \label{sect:Obs} On Sept. 29th, 2002, during UT 00:45-02:03 we observed A0620-00 with the Giant Metrewave Radio Telescope (GMRT) located in Pune, India. GMRT has $30$ parabolic reflector antennae on altazimuth mounts, each of $45$ meter diameter, placed in a nearly `Y'-shaped array. It has a tracking and pointing accuracy of $1'$ for wind speeds less than $20$ km/h. GMRT is capable of observing at six frequencies from $151$ MHz to $1420$ MHz. On the higher side, $608-614$ MHz and $1400-1420$ MHz are protected frequency bands by the International Telecommunication Union (ITU). During our observation, 28 out of 30 antennae were working and the observational conditions were stable. The observed frequency is $\nu_{obs} \sim 1280$ MHz, which is far away from the ITU bands. The bandwidth is $16$ MHz. There were 128 channels with a channel separation of $125$ kHz. We used 3C147 as the flux calibrator and 0521+166 as the phase calibrator. No other source was found within the field of view. The primary beam width was $0.5$ degree and the synthesized beam width was $3$ arc second. \begin{figure} \begin{center} \mbox{\epsfxsize=0.8\textwidth\epsfysize=0.8\textwidth\epsfbox{SPA0620fig1rot.ps}} \caption{Light curve of A0620-00 without background subtraction on Sep. 
29th, 2002 as obtained by the GMRT radio observation at $1.28$ GHz. Subtracting the background reveals a micro-flare of mean flux $3.84$ mJy and duration $192 \pm 32$ s.} \end{center} \end{figure} Data analysis was carried out using the AIPS package. The data for A0620-00 were bandpass- and self-calibrated. The light-curve without the background subtraction is shown in Fig. 1. The data are integrated over $16$-second intervals. The background is due to two side lobes and is found to be constant in time. The UV coverage was very good and the background was found to be constant within the field of view, with an rms noise of $8.6 \times 10^{-4}$ Jy as tested by the task IMAGR in AIPS. The background subtraction reveals that a micro-flare of average flux density $F_\nu=3.84$ mJy occurred and that it lasted for about $t_{\mu f} = 192 \pm 32$ seconds. We found that each of the antennae independently showed this event and the synthesized image of the field of view showed no significant signal from any other source. This confirms the presence of this micro-flare very convincingly. \section{Interpretation of the micro-flaring event} \label{sect:disc} Fast variabilities occur on time scales of the order of the light crossing time $t_l = r_g/c \sim 0.1\frac{M}{10M_\odot}$ ms (where $r_g=2GM/c^2$ is the Schwarzschild radius) in the vicinity of a black hole. Shot noise on this time scale is observed during X-ray observations. Since the duration $t_{\mu f}$ of the micro-flare that we observe is much larger ($t_{\mu f} \gg t_l$), we rule out the possibility that it is a shot noise type event.
Assuming that the flare is due to magnetic dissipation, with an energy density of $B^2/8\pi$, the expression for the total energy release (fluence) is: $$ E_{mag} = \frac{B^2}{8\pi} V = 4 \pi D^2 \nu_{obs} F_\nu t_{\mu f} , \eqno{(1)} $$ where $V \sim r_g^3$ is the lower limit of the volume in the accretion flow that released the energy, $D$ is the distance of the source from us, $\nu_{obs}$ is the frequency at which the observation is made and $F_\nu$ is the specific intensity of radiation. Here, $B$ is the average magnetic field in the inflow where the flare forms. Re-writing Eqn. (1) using the equipartition law, $$ \frac{B^2}{8\pi} \sim \frac{GM\rho}{r} =\frac{GM{\dot M}}{4 \pi v r^3} , \eqno{(2)} $$ where $\rho$ is the density of matter in the accretion flow, ${\dot M}$ is the accretion rate and $v$ is the velocity of inflow. Since there is no signature of a Keplerian disk in the quiescent state, one may assume the inflow to be generally like a low-angular-momentum flow (Chakrabarti 1990), especially close to the black hole. The estimation of McClintock \& Remillard (2000) was carried out with a low-efficiency radial flow model. Thus, we use the definition of the accretion rate, ${\dot M}=4\pi \rho r^2 v$. More specifically, we assume the free-fall velocity, $v \sim (2GM/r)^{1/2}$. Introducing pressure and rotation effects does not change the result, since the gas is tenuous, i.e., hot, and since the Keplerian flow is absent, i.e., the angular momentum is very low. These simple but realistic assumptions allow us to obtain the upper limit of the accretion rate of the flow to be $$ {\dot M} \sim (3.5 \pm 0.58) \times 10^{14} x^{5/2} {\rm \ gm/s} = (5.5 \pm 0.91) \times 10^{-12} x^{5/2} M_\odot {\rm yr}^{-1}. \eqno{(3)} $$ Here $x=\frac{r}{r_g}$ is the dimensionless distance of the flaring region from the center.
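The algebra leading from Eqns. (1)-(2) to Eqn. (3) can be made explicit using only the relations already quoted above. Substituting $B^2/8\pi$ from Eqn. (2) into Eqn. (1), with $V \sim r_g^3$, $r = x r_g$, $v=(2GM/r)^{1/2}$ and $r_g = 2GM/c^2$,
$$
4\pi D^2 \nu_{obs} F_\nu t_{\mu f} = \frac{GM{\dot M}}{4\pi v r^3}\, r_g^3
= \frac{{\dot M}\,(GM r_g)^{1/2}}{4\sqrt{2}\,\pi\, x^{5/2}}
= \frac{GM{\dot M}}{4\pi c\, x^{5/2}},
$$
so that
$$
{\dot M} = \frac{16\pi^2 D^2 \nu_{obs} F_\nu t_{\mu f}\, c}{GM}\, x^{5/2} .
$$
Inserting $M=10M_\odot$, $D=1.05$ kpc, $\nu_{obs}=1.28$ GHz, $F_\nu=3.84$ mJy and $t_{\mu f}=192$ s gives ${\dot M} \approx 3.5 \times 10^{14}\, x^{5/2}$ gm/s, as in Eqn. (3).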
From the transonic flow models (Chakrabarti 1990), the flow is expected to be supersonic only around $x_c\sim 2-3$ before disappearing into the black hole. Ideally, in a subsonic flow ($x>x_c$), the chance of flaring is higher, as the residence time of matter becomes larger, or comparable with the reconnection time scale. For $x<x_c$ there is little possibility of flaring. We thus estimate the accretion rate of A0620-00 in the quiescent state to be $$ {\dot M} = (8.5 \pm 1.4) \times 10^{-11} (\frac{x}{3})^{5/2} M_\odot {\rm yr}^{-1}. \eqno{(4)} $$ In the case of a low angular momentum flow, there are possibilities of shock formation at around $x\sim 10$ (Chakrabarti 1990). So it is likely that the flare forms in the immediate vicinity of the post-shock (subsonic) region, where the density of matter as well as the magnetic pressure are very high. In any case, the rate we get is consistent with that reported by McClintock \& Remillard (2000) on the basis of X-ray observations. It is to be noted that Duldig et al. (1979) found a flux of $44\pm14$ mJy well after the outburst in 1975 and concluded that intermittent emissions are possible and that mass transfer continues even in quiescent states. Our result also verifies such an assertion. The procedure we have suggested here is sufficiently general. For instance, it is generally believed that in the hard state of a black hole, the hot, sub-Keplerian matter plays an important role in producing the so-called Compton cloud, and this would be an ideal location for flaring activities if some entangled magnetic fields are present. In case the mass of the black hole and its distance are known, as in the present case, the mass accretion rate can be calculated. If instead the accretion rate and the distance were known, then the mass could be estimated by inverting the logical steps given above. One of our assumptions is to estimate the magnetic field by assuming it to be in equipartition with the gravitational energy density.
In reality, the magnetic field could be less than the equipartition value. On the other hand, we assumed the flow to be freely falling; in the presence of angular momentum the flow would slow down, and both the density and the magnetic field energy would be higher. These two opposite effects should keep our estimate sufficiently realistic. \acknowledgements We thank the staff of the GMRT who helped us to make this observation possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. SP acknowledges a CSIR Fellowship, which supported his work at the Centre for Space Physics.
\section{Introduction} Hot subdwarf stars (i.e., spectral types sdB, sdO and related objects) are low-mass stars in a core or shell helium (He) burning stage (Heber 2009, 2016). These stars lose nearly their whole hydrogen (H) envelopes during the evolution on the red giant branch (RGB), and therefore present very high effective temperatures ($T_\mathrm{eff}$ $\geq$ 20\,000 K) on reaching the horizontal branch (HB) stage. Hot subdwarf stars are considered to be the main source of the UV-excess found in elliptical galaxies (O'Connell 1999; Han et al. 2007). These stars also turned out to be important objects for studying close binary interactions, since many hot subdwarf stars are found in close binaries (Maxted et al. 2001; Napiwotzki et al. 2004; Copperwheat et al. 2011). The most common types of companion stars in hot subdwarf binaries are main-sequence (MS) stars, white dwarfs (WDs), brown dwarfs and planets. Hot subdwarf stars with massive WD companions are considered to be progenitors of type Ia supernovae (Wang et al. 2009; Geier et al. 2011; Geier 2015). The atmospheres of hot subdwarf stars are good places to study diffusion processes, such as gravitational settling and radiative levitation. Moreover, pulsating sdB/O stars are extensively used in asteroseismology to study stellar interiors and rotation. For a recent review on hot subdwarf stars see Heber (2016). The formation mechanism of hot subdwarf stars is still unclear. Since about half of the hot subdwarf B type (sdB) stars are found in close binaries, Han et al. (2002, 2003) carried out a detailed binary population synthesis to study the formation of sdB stars. They found that common envelope (CE) ejection, mass transfer through Roche lobe overflow (RLOF) or the merger of two helium core white dwarfs (He-WDs) could produce sdB stars in close binary, wide binary and single systems, respectively. Based on these results, Chen et al.
(2013) predicted that the orbital period of sdB binaries formed from RLOF mass transfer could be up to 1200 days, if atmospheric RLOF and a different angular momentum loss are considered in the binary evolution. This result could explain the formation of sdB stars found in wide binaries. Furthermore, Xiong et al. (2017) found that two distinct groups of sdB stars could be formed through the detailed CE ejection channel. One group is flash-mixed sdB stars without H-rich envelopes, and the other is canonical sdB stars with H-rich envelopes. In addition, Zhang et al. (2012, 2017) studied the formation channel in detail for single sdB stars through the merger of two He-WDs or the merger of a He-WD with a low-mass MS companion. Their results could account for some He-rich sdB stars found in the field. The counterparts of hot subdwarf stars in globular clusters (GCs) are known as extreme horizontal branch (EHB) stars. Some of these stars with particularly high effective temperatures (e.g., $T_\mathrm{eff}$ $\geq$ 32\,000 K) form a blue hook in the ultraviolet (UV) color-magnitude diagram (CMD) of GCs (Brown et al. 2016), and they are known as blue hook stars in GCs. Lei et al. (2015, 2016) proposed that a tidally-enhanced stellar wind during binary evolution may lead to huge mass loss of the primary stars on the RGB and could produce blue hook stars in GCs after a late core He flash. Thanks to large surveys over the past decade, a significant number of previously unknown hot subdwarfs have been catalogued, e.g., by Kepler ({\O}stensen et al. 2010), the Galaxy Evolution Explorer (GALEX, Vennes et al. 2011; N\'emeth et al. 2012; Kawka et al. 2015), the Sloan Digital Sky Survey (SDSS, Geier et al. 2015; Kepler et al. 2015, 2016) and the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) survey (Luo et al. 2016).
{\O}stensen (2006) compiled a widely used hot subdwarf database by an extensive literature search, in which more than 2300 hot subdwarf stars are archived. Furthermore, Geier et al. (2017) compiled a catalogue of known hot subdwarf stars and candidates retrieved from the literature and unpublished databases. This catalogue contains 5613 objects with multi-band photometry, proper motions, classifications, atmospheric parameters, radial velocities and information on light curve variability. Using the first data release (DR1) of the LAMOST survey, Luo et al. (2016) identified 166 hot subdwarf stars, among which 122 objects are single-lined, while the other 44 objects present double-lined composite spectra (e.g., Mg I triplet lines at 5170 $\mathrm{\AA}$ or Ca II triplet lines at 8650 $\mathrm{\AA}$), which demonstrates the binary nature of these stars. We need even more spectroscopically identified hot subdwarf stars and candidates to improve our understanding of their formation and evolution. Fortunately, large spectroscopic surveys provide us with a good opportunity to search for new hot subdwarf stars, e.g., SDSS (York et al. 2000) and LAMOST (Cui et al. 2012; Zhao et al. 2006, 2012). The traditional method extensively used to search for hot subdwarf stars in large spectroscopic surveys is based on color cuts, followed by visual inspection. However, this method requires homogeneous photometry for the spectra to obtain their colors in different bands (e.g., \textit{u-g} and \textit{g-r}, Geier et al. 2011), and thus it might not work well on spectral databases without homogeneous photometric information, such as the database of LAMOST. Employing the Hierarchical Extreme Learning Machine (HELM) algorithm, Bu et al. (2017, hereafter Paper I) explored a machine learning method to search for hot subdwarf stars in LAMOST spectra.
The Extreme Learning Machine (ELM) is a special type of single hidden-layer feed-forward network, while HELM is the hierarchical framework of the ELM algorithm (Huang et al. 2006). It is inspired by deep learning algorithms and is built in a multilayer manner. HELM has been frequently used in many fields, such as image-quality assessment (Mao et al. 2014), human action recognition (Minhas et al. 2010) and hyper-spectral image classification (Li et al. 2015). Using the HELM algorithm in Paper I, we obtained an accuracy and efficiency of classifying single-lined hot subdwarf stars in LAMOST spectra of up to 92\% and 96\%, respectively, which demonstrated the reliability of the method to search for hot subdwarf stars in the LAMOST survey spectral database. Following Paper I, we applied the HELM algorithm to LAMOST DR1 and identified 56 hot subdwarf stars. We obtained the atmospheric parameters of these stars by fitting their spectra with synthetic spectra calculated from NLTE model atmospheres (N\'emeth et al. 2012, 2014). The structure of the paper is as follows. In Section 2, we briefly introduce the LAMOST spectral survey and the sample filtering method based on the HELM algorithm. In Section 3, we introduce the selection criteria used to sort out hot subdwarf stars from the candidates delivered by the HELM algorithm. We give our results in Section 4. Finally, a discussion and a summary of this study are presented in Sections 5 and 6, respectively. \section{The LAMOST survey and sample filtering with the HELM algorithm} \label{sec:LAMOST and sample} \subsection{The LAMOST survey and database DR1} LAMOST is a special reflecting Schmidt telescope designed with both a large aperture (effective aperture of 3.6 - 4.9 m) and a wide field of view (FOV, 5$^{\circ}$, Cui et al. 2012). LAMOST is equipped with 16 low resolution spectrographs connected to 4000 optical fibres, which are precisely positioned on the focal surface.
As the telescope with the highest rate of spectral acquisition in the world, LAMOST can obtain the spectra of 4000 objects simultaneously. LAMOST conducted its pilot survey between October 2011 and June 2012, while the regular survey started in September 2012 and finished its first year's operation in June 2013. The data from both the pilot survey and the first year of the regular survey make up the database of LAMOST DR1 (Luo et al. 2015). DR1 contains a total of 2\,204\,696 spectra with a resolution ($\lambda/\Delta\lambda$) of 1800 in the wavelength range 3690-9100 $\mathrm{\AA}$, among which 1\,790\,879 spectra have a signal-to-noise ratio (SNR) $\geq$ 10, and 1\,944\,329 spectra are classified as stellar spectra. Although the number of stellar spectra in LAMOST DR1 is large, many of them lack photometric measurements in certain bands, such as the \textit{u} band, which prevents the use of colors for object classification. Therefore, LAMOST DR1 provides us with an appropriate database to test our new method (the HELM algorithm) in searching for hot subdwarf stars directly from observed spectra, without a need for color information (also see the discussion in Section 5). \subsection{The HELM algorithm and our training sample} HELM stands for the hierarchical framework of the ELM algorithm (see Paper I for more details), which was proposed by Tang et al. (2015). It usually contains two parts: an unsupervised learning part and a supervised part. The unsupervised part in HELM can include many layers. To give higher-level features of the training sample, the input of each layer is the output of the previous layer. On the other hand, the supervised part contains only one layer, and it takes the output of the last unsupervised layer as its input. In the experiments of Paper I, the HELM algorithm could filter out single-lined hot subdwarf stars from LAMOST spectra with an accuracy of 0.92 and an efficiency of 0.96, respectively.
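To illustrate the kind of supervised layer at the core of this approach, the following minimal NumPy sketch trains a basic (single-layer, non-hierarchical) ELM: the hidden weights are random and fixed, and only the output weights are solved for in closed form. This is not the HELM implementation of Paper I; all function names, the toy data and the parameter values are illustrative assumptions.

```python
import numpy as np

def elm_train(X, y, n_hidden=200, seed=0, reg=1e-3):
    """Train a basic Extreme Learning Machine.
    Hidden-layer weights are drawn at random and never updated;
    the output weights beta are obtained from a ridge-regularized
    least-squares solution (the pseudo-inverse step of ELM)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained ELM."""
    return np.tanh(X @ W + b) @ beta

# Toy example: separate two well-separated Gaussian "spectral classes".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(2.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100, dtype=float)
W, b, beta = elm_train(X, y)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Because only `beta` is fitted, training reduces to one linear solve, which is why ELM variants are attractive for filtering millions of survey spectra.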
When applied to the selection of double-lined hot subdwarfs, HELM achieved an accuracy and efficiency of 0.80 and 0.71, respectively. These results compare favorably with other popular algorithms (see section 4.2 in Paper I), which demonstrates that the HELM algorithm is an accurate and efficient new method to search for hot subdwarf stars in large spectroscopic surveys. The training sample used in the experiments of Paper I consists of the spectra of hot subdwarf stars identified in Luo et al. (2016) combined with 4600 LAMOST DR1 spectra of various types of objects, including stars of different spectral types, galaxies, quasars and objects with ambiguous spectral features. There are a total of 166 hot subdwarf spectra in our training sample, among which 122 stars are single-lined hot subdwarfs, while 44 spectra show strong Mg I triplet lines at 5170 $\mathrm{\AA}$ or Ca II triplet lines at 8650 $\mathrm{\AA}$, indicating the binary nature of these stars. According to Table 2 in Luo et al. (2016), the 122 single-lined hot subdwarf stars consist of 77 sdB stars, 15 He-sdO stars, 12 sdO stars, 10 He-sdB stars and 8 blue horizontal branch (BHB) stars. All the sample spectra were divided into three groups to carry out the experiments with HELM and other popular algorithms (see Paper I for details). \section{Target selection} \begin{figure*} \centering \begin{minipage}[c]{0.3\textwidth} \includegraphics[width=45mm]{sdB_d02_fm.pdf} \centerline{(a)} \end{minipage}% \centering \begin{minipage}[c]{0.3\textwidth} \includegraphics[width=45mm]{bhb_d02_fm.pdf} \centerline{(b)} \end{minipage}% \centering \begin{minipage}[c]{0.3\textwidth} \includegraphics[width=45mm]{wd_d02_fm.pdf} \centerline{(c)} \end{minipage} \caption{Normalized spectra near the H$_\delta$ line in three different types of stars. The blue dashed curve is the fitting profile of the $\mathrm{H}_{\delta}$ line.
The values of $D_{0.2}$ and $f_\mathrm{m}$ for each star are shown.} \end{figure*} By applying the HELM algorithm outlined in Paper I, we obtained more than 7000 hot subdwarf candidates from LAMOST DR1, among which 1034 spectra have a \textit{u}-band SNR larger than 10. We have selected our final hot subdwarf sample from these candidates. Blue horizontal branch (BHB) stars, B-type main-sequence (B-MS) stars and WDs show very similar features (e.g., strong H Balmer lines) in their spectra to hot subdwarf stars (Moehler et al. 1990). Some of these stars have similar temperatures to hot subdwarf stars, especially to He-poor sdB stars. Therefore, the hot subdwarf candidate sample selected by the HELM algorithm is contaminated by the above mentioned object types. Three steps are used to select hot subdwarf stars from our candidates. \subsection{Excluding BHB stars and WDs from our sample} BHB stars are horizontal branch stars bluer than the RR Lyrae instability strip in the color-magnitude diagram (CMD). These stars present effective temperatures in the range of about 7\,000 - 20\,000 K and surface gravities in the range of $\log\ {g}\ =\ 2.5-4.5$ cm\,s$^{-2}$ (Catelan 2009). Xue et al. (2008) used the $D_{0.2}$ and $f_\mathrm{m}$ method to discriminate BHB stars from blue straggler (BS) and B-MS stars. In this method, $D_{0.2}$ is the full width of the H$_\delta$ line at 20\% below the local continuum, while $f_\mathrm{m}$ is the flux relative to the continuum at the line core (Beers et al. 1992; Sirko et al. 2004). Xue et al. (2008) used the criteria $17 \mathrm{\AA} \leq D_{0.2} \leq 28.5 \mathrm{\AA}$ and $0.1 \leq f_\mathrm{m}\leq 0.3$ to select BHB stars from their samples. Both $D_{0.2}$ and $f_\mathrm{m}$ are sensitive to effective temperature and gravity in hot stars (Xue et al. 2008), which makes them a suitable measure to distinguish our sample spectra in the $D_{\rm 0.2}$ - $f_{\rm m}$ diagram.
Since BHB stars have lower temperatures and gravities than hot subdwarf stars, while regular WDs present higher temperatures and gravities than hot subdwarf stars, these spectral classes can be clearly separated according to their $D_{\rm 0.2}$ and $f_{\rm m}$ values (Greenstein \& Sargent 1974). We use \textit{the scale width versus shape method} (Clewley et al. 2002; Xue et al. 2008) to fit the $\mathrm{H}_{\delta}$ line and obtain the values of $D_{0.2}$ and $f_\mathrm{m}$ for each spectrum in our sample. This method is based on a S\'ersic profile fit (S\'ersic 1968) to the Balmer lines in the following form: \begin{equation} y=1.0-a \, \exp\left[-\left(\frac{|\lambda-\lambda_{0}|}{b}\right)^{c}\right], \end{equation} where $y$ is the normalized flux, $\lambda$ is the wavelength and $\lambda_0$ is the nominal central wavelength of the Balmer line. The coefficients $a$, $b$ and $c$ are free parameters. As described in Xue et al. (2008), to account for the imperfect normalization of the spectra, we used five free parameters, $a$, $b$, $c$, $\lambda_0$ and $n$, to fit the normalized spectrum with the S\'ersic profile: \begin{equation} y=n-a \, \exp\left[-\left(\frac{|\lambda-\lambda_{0}|}{b}\right)^{c}\right]. \end{equation} The three panels in Fig 1 show the results of fitting the $\mathrm{H}_{\delta}$ profile of an sdB star, a BHB star and a WD, respectively. In each panel, solid curves represent an extracted spectrum near the $\mathrm{H}_{\delta}$ line, while blue dashed curves denote our best fitting line profiles. Panel (a) shows the spectrum of the sdB star PG\,1605+072 taken from Luo et al. (2016) with $T_\mathrm{eff}$\ =\ 32\,550$\pm$370 K and $\mathrm{log}\ g$\ =\ 5.29$\pm$0.07 cm\,s$^{-2}$. By adopting the fitting method described above, we obtained $D_{0.2}$\ =\ 9.37 $\mathrm{\AA}$ and $f_\mathrm{m}$\ =\ 0.63. Panel (b) shows the spectrum of the BHB star SDSSJ171935.27+262234.9 from Xue et al.
(2008) with $T_\mathrm{eff}$ = 7846 K and $\mathrm{log}\ g$\ =\ 3.46 cm\,s$^{-2}$ (no error bars for this star are presented in Xue et al. 2008), while its $D_{0.2}$ and $f_\mathrm{m}$ are 22.53 $\mathrm{\AA}$ and 0.28, respectively. One can clearly see that the BHB star presents a much deeper $\mathrm{H}_{\delta}$ line (i.e., a smaller value of $f_\mathrm{m}$) and a much wider $D_{0.2}$ than the sdB star in Panel (a), due to its significantly lower effective temperature and gravity. The spectrum of the WD SDSS\,J094126.79+294503.4 in Panel (c) is taken from the catalogue of Eisenstein et al. (2006), with $T_\mathrm{eff}$\ =\ 20\,818 K and $\mathrm{log} \ g$\ =\ 8.0 cm\,s$^{-2}$. Although this WD shows a similar depth of the $\mathrm{H}_{\delta}$ line (i.e., $f_\mathrm{m}$\ =\ 0.55) to the sdB star shown in Panel (a), it presents a much larger $D_{0.2}$ (39.42 $\mathrm{\AA}$) than the sdB star (9.37 $\mathrm{\AA}$) due to its higher gravity. \begin{figure*} \centering \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=80mm]{d02_fm_delta_5.pdf} \centerline{(a)} \end{minipage}% \centering \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=80mm]{lamost_dr1_d02_fm_snu_10_delta_3.pdf} \centerline{(b)} \end{minipage}% \caption{Panel (a): the distribution of BHB stars, hot subdwarfs and WDs in the $D_{\rm 0.2}-f_{\rm m}$ diagram. Panel (b): our hot subdwarf candidates selected by the HELM algorithm in the $D_{\rm 0.2}-f_{\rm m}$ diagram. The red dashed line is a clear boundary between BHB stars and hot subdwarfs at $D_{0.2}$\ =\ 17.0 \AA. } \end{figure*} To better demonstrate the differences in $D_{0.2}$ and $f_\mathrm{m}$ among BHB stars, hot subdwarfs and WDs, we selected some known BHB stars, hot subdwarfs and WDs from published catalogues and placed them in the $D_{0.2}$ - $f_\mathrm{m}$ diagram in Panel (a) of Fig 2. Black solid triangles denote BHB stars identified by Xue et al.
(2008), blue open circles represent hot subdwarfs selected from the catalogue of Geier et al. (2017), and green open squares are WDs from Eisenstein et al. (2006). BHB stars are concentrated quite well in the upper left corner of Panel (a), and subdwarfs are distributed in a strip from the middle center to the bottom right, while WDs are located in the upper right and middle area of the panel (note that most of the selected WDs have $D_{\rm 0.2}$ values larger than 35 \AA\ and are off the panel). As expected, there is a remarkable gap between BHB stars and hot subdwarf stars near $D_{0.2}$\ = 17.0 \AA, which is marked by the red dashed horizontal line in Panel (a). Since WDs present much larger values of $D_{0.2}$ than BHB and hot subdwarf stars, $D_{0.2}$\ = 17.0 \AA\ can be used as a criterion to distinguish hot subdwarf stars from BHB stars and WDs in our sample. Panel (b) of Fig 2 shows the values of $D_{0.2}$ and $f_\mathrm{m}$ for the 1034 sample spectra selected by HELM (see Section 2 and Paper I). For a clear comparison with Panel (a), we plot a dashed horizontal line at $D_{0.2}$ = 17.0 \AA\ in Panel (b) as well, which denotes the gap between BHB stars and hot subdwarf stars in Panel (a). Our sample in Panel (b) shows a distribution analogous to the stars in Panel (a), with the notable exception that the obvious gap at $D_{\rm 0.2}\ =\ 17.0$ \AA\ is not seen in Panel (b). This is due to the fact that the selected BHB stars in Panel (a) are stars with temperatures in the range of $T_{\rm eff}\ =\ 7000 - 10\,000$ K and surface gravities in the range of $\log{g}\ =\ 2.5-4.0$ cm\,s$^{-2}$ (Xue et al. 2008), which are much lower than the temperatures and gravities of hot subdwarf stars (e.g., $T_\mathrm{eff}\geq$\ 20\,000 K and log$\ g\geq 5.0 \ \mathrm{cm\,s}^{-2}$, Heber 2016), while the stars selected by HELM form a more evenly distributed mix of stars and the gap in the $D_{\rm 0.2}-f_{\rm m}$ diagram is filled up.
Thus, our sample contains not only low-temperature BHB stars, hot subdwarf stars and WDs, but also includes high temperature BHB stars (e.g., 10\,000 - 20\,000 K) and B-MS stars, because these stars present temperatures similar to the cooler hot subdwarf stars (e.g., He-poor sdB stars). These high temperature BHB stars and B-MS stars fill the gap present in Panel (a) and make a continuous distribution for our sample in the $D_{0.2}$ - $f_\mathrm{m}$ diagram. Note that there are only a few stars in the upper right and middle area of Panel (b), which are the regions typically occupied by WDs in Panel (a). This demonstrates that only a few WDs are in our sample, and that HELM is very efficient at distinguishing hot subdwarf stars from WDs. Nevertheless, the criterion of $D_{0.2}$ = 17.0 \AA\ still excludes most BHB stars with low temperatures and WDs, while preserving hot subdwarf stars in our sample. After applying the selection criterion of $D_{0.2}<17.0$ \AA, we obtained 578 hot subdwarf candidate spectra, among which 161 spectra present obvious Mg I triplet lines at 5170 $\mathrm{\AA}$ or Ca II triplet lines at 8650 $\mathrm{\AA}$. These lines are characteristic of cool stars and such subdwarfs are double-lined composite spectrum binary candidates, which will be studied in a forthcoming publication. Therefore, our hot subdwarf sample selected by the $D_{0.2}$-$f_\mathrm{m}$ method consists of 417 spectra, for which the atmospheric parameters were determined by fitting their H Balmer and He lines. The $D_{0.2}$-$f_\mathrm{m}$ method is able to exclude most of the BHB stars and WDs in our sample. However, as the method is based on measuring the width and depth of the H$_\delta$ line, some hot subdwarfs with weak or no obvious H$_\delta$ lines (e.g., He-sdO, He-sdB) could also be removed from our sample. Furthermore, the values of $D_{0.2}$ and $f_\mathrm{m}$ for some spectra are difficult to obtain from poor quality spectra near the H$_\delta$ line.
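The $D_{0.2}$-$f_\mathrm{m}$ measurement described above can be sketched in a few lines: fit the five-parameter S\'ersic profile of Eq. (2) and read $f_\mathrm{m}$ and $D_{0.2}$ off the fitted parameters analytically. This is an illustrative sketch, not the pipeline used in this work; it assumes a continuum-normalized spectrum ($n\approx 1$) and uses a synthetic noiseless line as input.

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(lam, a, b, c, lam0, n):
    """Five-parameter Sersic line profile (Eq. 2):
    n is the local continuum level, a the line depth at the core."""
    return n - a * np.exp(-(np.abs(lam - lam0) / b) ** c)

def d02_fm(lam, flux, lam0_guess=4101.7):
    """Fit the H-delta region and return (D_0.2, f_m).
    Assumes a continuum-normalized spectrum, so that '20% below
    the local continuum' corresponds to an absolute depth of 0.2."""
    p0 = [0.5, 10.0, 1.2, lam0_guess, 1.0]     # rough starting values
    (a, b, c, lam0, n), _ = curve_fit(sersic, lam, flux, p0=p0)
    f_m = (n - a) / n                          # core flux relative to continuum
    # Full width where the profile sits 0.2 below the continuum:
    # a * exp(-(d/b)**c) = 0.2  =>  d = b * (ln(a/0.2))**(1/c)
    d02 = 2.0 * b * np.log(a / 0.2) ** (1.0 / c)
    return d02, f_m

# Synthetic test line around H-delta with known parameters.
lam = np.linspace(4060.0, 4140.0, 400)
flux = sersic(lam, 0.3, 8.0, 1.5, 4101.7, 1.0)
d02, fm = d02_fm(lam, flux)
```

For lines deeper than 20\% of the continuum ($a>0.2$), $D_{0.2}$ follows in closed form from the fitted $a$, $b$ and $c$, which is why the profile fit (rather than a direct width measurement on noisy pixels) is the robust way to compute it.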
To assess the completeness of our sample, we used {\sc XTgrid} (N\'emeth et al. 2012; Vennes et al. 2011, see the next section for details) to make a spectral classification for the 456 spectra which were removed by the $D_{0.2}$-$f_\mathrm{m}$ method. With this procedure we could recover a further 48 hot subdwarf candidates from low quality spectra. The atmospheric parameters of these 48 spectra, together with the 417 spectra selected by the $D_{0.2}$-$f_\mathrm{m}$ method (i.e., 465 spectra in total), were determined by fitting their LAMOST optical spectra with synthetic spectra (see the next section). All objects with atmospheric parameters characteristic of hot subdwarfs were selected as hot subdwarf candidates. \subsection{Atmospheric parameters of hot subdwarf candidates} \begin{figure} \centering \includegraphics [width=100mm]{represent_spec_fig5.pdf} \caption{Four normalized spectra of hot subdwarf stars with different spectral types identified in this study. Best-fitting synthetic spectra are overplotted with a red dashed line on each spectrum. From top to bottom, a He-sdOB, sdOB, sdB and sdO star is presented, respectively. Some H Balmer lines and important He I and He II lines are marked at the bottom of the figure. } \end{figure} To determine the atmospheric parameters of the final hot subdwarf sample, we fitted NLTE models to the observations. We used the NLTE model atmosphere code {\sc Tlusty} (version 204; Hubeny \& Lanz 2017) to calculate models with H and He composition and corresponding synthetic spectra with {\sc Synspec} (version 49; Lanz et al. 2007). Details of the model calculations are described by N\'emeth et al. (2014). The spectral analysis was done by a steepest-descent iterative $\chi^2$ minimization procedure, which is implemented in the fitting program {\sc XTgrid} (N\'emeth et al. 2012; Vennes et al. 2011). This algorithm fits the entire optical range and attempts to reproduce the observed line profiles simultaneously.
Final parameter errors are determined by departing from the best fitting parameters in one dimension until the statistical limit for the 60\% confidence level of a single parameter is reached, separately for positive and negative error bars. To match the resolution of the LAMOST spectra, we convolved the synthetic spectra with a Gaussian profile at a constant resolution ($R\ =\ 1800$). Fig 3 shows the best fitting models for four representative hot subdwarf spectra from our sample. In this figure, gray solid curves denote the normalized stellar spectra\footnote{The continuum for each spectrum was fitted automatically in {\sc XTgrid}.}, while red dashed curves represent the best fitting synthetic spectra. The positions of the strongest H Balmer lines, He I and He II lines are marked in Fig 3 as well. The label `He' plus an integer for each spectrum is the helium class following the hot subdwarf classification scheme of Drilling et al. (2013), which is based on the He line strength (see Sect 4 for details). The top spectrum is a He-sdOB star with dominant He I lines and weak H Balmer lines, while the second spectrum from the top is a sdOB star, which shows dominant H Balmer lines with both weak He I and He II lines. The third spectrum from the top is a typical sdB star, which presents broad H Balmer lines with weak He I lines. Finally, the spectrum at the bottom of the figure is classified as a sdO star, because of its dominant H Balmer lines with a weak He II line at 4686 $\mathrm{\AA}$, while no He I lines can be detected. By employing {\sc XTgrid}, we obtained the atmospheric parameters (e.g., $T_\mathrm{eff}$, $\mathrm{log}\ g$ and He abundance) for the 465 spectra selected in Section 3.
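The constant-resolution degradation mentioned above (a Gaussian whose FWHM scales as $\lambda/R$, so the kernel widens toward the red) can be sketched as follows. This brute-force per-pixel version is only illustrative of the technique, not the {\sc XTgrid} implementation; the example line and all parameter values are assumptions.

```python
import numpy as np

def convolve_to_resolution(lam, flux, R=1800.0):
    """Degrade a model spectrum to constant resolving power R.
    Each output pixel is a Gaussian-weighted average of the input,
    with FWHM = lambda / R at that pixel (so sigma varies with
    wavelength, unlike a fixed-kernel convolution)."""
    sig = lam / R / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    out = np.empty_like(flux)
    for i, (l0, s) in enumerate(zip(lam, sig)):
        w = np.exp(-0.5 * ((lam - l0) / s) ** 2)
        out[i] = np.sum(w * flux) / np.sum(w)           # normalized kernel
    return out

# Example: smooth a narrow absorption line near H-beta (4861 A).
lam = np.linspace(4800.0, 4920.0, 1200)
flux = 1.0 - 0.8 * np.exp(-0.5 * ((lam - 4861.0) / 0.3) ** 2)
smoothed = convolve_to_resolution(lam, flux)
```

At $R=1800$ the kernel FWHM near 4861 \AA\ is about 2.7 \AA, so an intrinsically narrow line becomes much shallower and broader while its equivalent width is approximately preserved, which is exactly what matching model to survey resolution requires before a $\chi^2$ comparison.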
We classified stars with $T_\mathrm{eff} \ge$ 20\,000 K and $\mathrm{log}\ g \ge$ 5.0 as hot subdwarf stars, stars with $T_\mathrm{eff} <$ 20\,000 K and $\mathrm{log}\ g <$ 5.0 as hot BHB stars, and stars with $\mathrm{log}\ g <$ 4.5 as B-MS stars, following the classification scheme of N\'emeth et al. (2012). After this procedure, we selected 76 hot subdwarf candidates based on their atmospheric parameters. We checked our results against the Gaia Hertzsprung-Russell diagram (HRD) in the next section. \subsection{Cross matching our results with the HRD of Gaia DR2} \begin{figure} \centering \includegraphics [width=90mm]{sd_cmd1.png} \caption{The distribution of the 74 selected subdwarf candidates in the HRD of Gaia DR2. 56 stars (marked with blue triangles) are located in the subdwarf region, and 12 stars (denoted by yellow squares) are distributed along the MS region, while the positions of 6 stars (represented by red circles) correspond to the WD sequence. } \end{figure} The second data release (DR2) of Gaia (Gaia Collaboration et al. 2018a) provides high-precision astrometry and photometry for about 1.3 billion sources over the full sky. Based on this huge database, Gaia Collaboration et al. (2018b) built the Gaia DR2 HRD by using the most precise parallaxes and photometry (see Sect 2 in Gaia Collaboration et al. 2018b for their detailed selection filters). To check our final results, we cross matched the 76 hot subdwarf candidates with the database of Gaia DR2, and found 75 common objects within a radius of five arcseconds, among which one object had a negative parallax, and it was removed from our sample. Fig 4 shows the HRD from Gaia Collaboration et al. (2018b) together with the 74 stars in common with this study. Gray dots denote the objects from Gaia DR2 selected by the Gaia Collaboration (65\,921\,112 stars in total, see Fig 1 of Gaia Collaboration et al. 2018b), while blue triangles, yellow squares and red circles are the common stars in our sample.
We found 56 stars (i.e., the blue triangles) to be located in the hot subdwarf region of the HRD. Therefore, these 56 objects are finally identified as hot subdwarf stars in this study. On the other hand, we found 12 stars (the yellow squares) distributed along the wide MS\footnote{Extinction is not considered in the HRD of Fig 1 in Gaia Collaboration et al. (2018b), therefore the MS is wider and cannot be distinguished very clearly from the RGB. The WD and hot subdwarf sequences, however, are presented more clearly in this HRD.}, and 6 stars (the red circles) are along the WD sequence. \section{Results} Using the method described in Section 3, we identified 56 hot subdwarf stars. We followed the spectral classification scheme in Moehler et al. (1990) and Geier et al. (2017) to classify hot subdwarf stars: stars showing strong H Balmer lines with weak or no He I lines are classified as sdB stars; stars showing strong H Balmer lines accompanied by He II absorption are classified as sdO stars; stars having H Balmer lines accompanied by both weak He I and He II lines are classified as sdOB stars and stars with dominant He I lines and weak H Balmer lines are He-sdOB stars, while stars with dominant He II lines are He-sdO stars. Based on this simple classification scheme, we identified 31 sdB stars, 11 sdO stars, 9 sdOB stars, 4 He-sdOB stars and 1 He-sdO star. Drilling et al. (2013) designed an MK (Morgan-Keenan)-like system of spectral classification for hot subdwarf stars, in which a spectral class, a luminosity class and a helium class are used to classify hot subdwarf stars. The spectral class is based on the MK standards of spectral classes O and B stars, and the luminosity class is based on the H and He line widths (see Sect 3 in Drilling et al. 2013). 
On the other hand, the helium class is described by an integer from 0 to 40 denoting the strengths of the He lines relative to the H Balmer lines, and it is roughly equal to the following function of the relative line depths: \begin{equation} 20\ \frac{\mathrm{HeI}\ \lambda4471+\mathrm{HeII}\ \lambda4541} {\mathrm{H}_{\gamma}-0.83\ \mathrm{HeII}\ \lambda4541} \end{equation} for helium classes 0-20, and \begin{equation} 40-20\ \frac{\mathrm{H}_{\gamma}-0.83\ \mathrm{HeII}\ \lambda4541} {\mathrm{HeI}\ \lambda4471+\mathrm{HeII}\ \lambda4541} \end{equation} for helium classes 20-40. We also assigned this helium class to our hot subdwarf stars (see Table 1). The atmospheric parameters of the 56 identified hot subdwarf stars, together with basic information for the 12 MS stars and 6 WDs, are listed in Table 1. Atmospheric parameters are not presented for the MS stars and WDs. Columns 1-11 of Table 1 give the LAMOST designation, right ascension, declination, effective temperature, surface gravity and He abundance obtained in this study, spectral classification type, SNR in the \textit{u} band, apparent magnitudes in the \textit{u} and \textit{g} bands of SDSS and apparent magnitude in the \textit{G} band of Gaia DR2, respectively. We also cross-matched our hot subdwarf stars with the hot subdwarf lists of Geier et al. (2017) and N\'emeth et al. (2012). In Table 1, the hot subdwarf stars in common with Geier et al. (2017) are labeled with $^{*}$, and those in common with N\'emeth et al. (2012) are marked with $^{\dagger}$. \begin{table*} \scriptsize \begin{minipage}{180mm} \caption{Information for the 74 stars analyzed in this study. 
From left to right of the table, it gives the LAMOST designation of the objects, right ascension, declination, effective temperature, gravity, helium abundance, spectral classification type, SNR in \textit{u} band, apparent magnitude in \textit{u} and \textit{g} band from SDSS and apparent magnitude in \textit{G} band from Gaia DR2, respectively. } \end{minipage}\\ \centering \begin{tabularx}{18.0cm}{lllcccccccccccX} \hline\noalign{\smallskip} Designation$^ a$ & ra$^ b$ & dec & $T_\mathrm{eff}$ & $\mathrm{log}\ g$ & $\mathrm{log}(n\mathrm{He}/n\mathrm{H})^c$ &Sptype & SNR &uSDSS &gSDSS &G\ GaiaDR2 & \\ LAMOST & LAMOST & LAMOST&$(K)$&($\mathrm{cm\ s^{-2}}$)& & &\textit{u}-band &(mag) &(mag) &(mag) &\\ \hline\noalign{\smallskip} J002124.79+402857.1 & 5.3532989 & 40.482537 & 25850$\pm$\ 580 & 5.42$\pm$0.11 & -2.57$\pm$0.18 & sdB He4 & 18.7 & \ \ - & 15.19 & 15.51 & \\ J002355.23+420905.5$^{*}$ & 5.9801396 & 42.151544 & 30150$\pm$\ 280 & 5.47$\pm$0.06 & -2.31$\pm$0.14 & sdB He6 & 18.6 & \ \ - & 15.46 & 15.79 & \\ J003627.19+271000.7 & 9.113308 & 27.166863 & \ \ - & \ \ - & \ \ - & MS & 45.0 & 14.93 & 14.67 & 14.64 & \\ J003801.72+343156.2 & 9.5071771 & 34.53228 & 40850$\pm$\ 610 & 5.49$\pm$0.10 & -0.23$\pm$0.09 & He-sdOB He33 & 17.9 & \ \ - & 13.66 & 13.90 & \\ J004949.26+352200.9$^{*}$ & 12.455266 & 35.366938 & 34960$\pm$\ 690 & 5.83$\pm$0.12 & -1.49$\pm$0.10 & sdOB He13 & 25.3 & \ \ - & 14.54 & 14.82 & \\ J010448.81+362742.4$^{*}$ & 16.203409 & 36.461784 & 32260$\pm$\ 60 & 5.74$\pm$0.02 & -1.63$\pm$0.03 & sdOB He11 & 90.6 & 12.55 & 12.95 & 12.40 & \\ J010945.73+374538.5$^{*}$ & 17.440552 & 37.760704 & 29980$\pm$\ 100 & 5.49$\pm$0.03 & -3.54$\pm$0.26 & sdB He2 & 25.6 & 13.96 & 14.61 & 13.87 & \\ J011857.19-002545.5$^{*}$ & 19.738333 & -0.429333 & 29060$\pm$\ 140 & 5.48$\pm$0.04 & -3.16$\pm$0.25 & sdB He2 & 15.7 & 14.49 & 14.60 & 14.82 & \\ J013134.51+323723.7 & 22.893792 & 32.623252 & 60390$\pm$\ 720 & 5.48$\pm$0.05 & -1.40$\pm$0.10 & sdO He8 & 10.9 & \ \ - & 15.00 & 
15.30 & \\ J014710.62+303213.2 & 26.794254 & 30.537002 & 22110$\pm$\ 210 & 5.00$\pm$0.07 & -2.05$\pm$0.12 & sdB He6 & 18.8 & \ \ - & 14.10 & 14.35 & \\ J015054.28+310746.7 & 27.72618 & 31.129651 & 28540$\pm$\ 180 & 5.70$\pm$0.04 & -1.69$\pm$0.05 & sdB He10 & 16.9 & \ \ - & 13.97 & 14.32 & \\ J020932.45+430712.5$^{*}$ & 32.385219 & 43.12014 & 27580$\pm$\ 500 & 5.42$\pm$0.03 & -2.73$\pm$0.16 & sdB He5 & 11.8 & 14.42 & 14.86 & 14.34 & \\ J022517.07+031218.2 & 36.3211422 & 3.2050785 & \ \ - & \ \ - & \ \ - & WD & 15.1 & 16.24 & 16.70 & 16.95 & \\ J023551.35+011845.1 & 38.963972 & 1.312544 & \ \ - & \ \ - & \ \ - & WD & 10.4 & 16.97 & 16.41 & 16.17 & \\ J030025.22+003224.3 & 45.10512 & 0.54009 & \ \ - & \ \ - & \ \ - & MS & 13.1 & 23.89 & 21.76 & 20.36 & \\ J031756.92+322950.4 & 49.487181 & 32.497341 & 33860$\pm$\ 430 & 6.07$\pm$0.15 & -1.62$\pm$0.12 & sdB He13 & 15.9 & \ \ - & 15.58 & 15.72 & \\ J035926.96+270508.6 & 59.862336 & 27.08573 & 35160$\pm$\ 380 & 5.51$\pm$0.04 & -2.74$\pm$0.35 & sdOB He2 & 14.0 & \ \ - & 14.97 & 15.10 & \\ J040613.24+465133.6 & 61.555205 & 46.859349 & \ \ - & \ \ - & \ \ - & MS & 15.2 & 14.77 & \ \ - & 14.59 & \\ J051425.36+332344.3 & 78.605685 & 33.395662 & \ \ - & \ \ - & \ \ - & MS & 10.4 & \ \ - & 15.04 & 13.38 & \\ J053656.48+395518.7$^{*}$ & 84.235335 & 39.92188 & 38490$\pm$\ 350 & 5.54$\pm$0.07 & -0.65$\pm$0.07 & sdOB He16 & 14.7 & \ \ - & 13.67 & 13.92 & \\ J054447.48+272032.0 & 86.197835 & 27.342228 & \ \ - & \ \ - & \ \ - & WD & 10.3 & \ \ - & 17.08 & 16.93 & \\ J055151.32+220437.0 & 87.96384 & 22.076954 & 29610$\pm$\ 110 & 5.66$\pm$0.03 & -2.22$\pm$0.05 & sdB He5 & 24.4 & \ \ - & 12.85 & 13.17 & \\ J055227.67+155311.4 & 88.115311 & 15.886516 & \ \ - & \ \ - & \ \ - & WD & 23.1 & \ \ - & 12.52 & 13.03 & \\ J055348.85+325601.7 & 88.453581 & 32.93382 & 30490$\pm$\ 110 & 5.68$\pm$0.02 & -2.15$\pm$0.04 & sdB He5 & 44.0 & \ \ - & 14.02 & 14.17 & \\ J055411.88+220459.7 & 88.549534 & 22.083273 & \ \ - & \ \ - & \ \ - & MS & 10.2 & \ \ - & 
13.28 & 13.17 & \\ J055926.92+271321.0 & 89.862203 & 27.222502 & \ \ - & \ \ - & \ \ - & MS & 10.9 & \ \ - & 19.20 & 17.99 & \\ J062704.91+345809.5$^{*}$ & 96.770481 & 34.969325 & 25080$\pm$\ 380 & 5.26$\pm$0.08 & -3.57$\pm$0.62 & sdB He1 & 10.8 & \ \ - & 14.19 & 14.43 & \\ J062836.51+325031.5 & 97.152155 & 32.842084 & 42740$\pm$\ 570 & 5.30$\pm$0.12 & 0.20$\pm$0.10 & He-sdOB He37 & 21.5 & \ \ - & 14.51 & 14.71 & \\ J063210.36+281041.7 & 98.043207 & 28.178276 & 45130$\pm$\ 330 & 5.51$\pm$0.12 & 0.33$\pm$0.06 & He-sdOB He40 & 17.7 & \ \ - & 14.82 & 15.10 & \\ J063526.61+323109.8 & 98.86089 & 32.519401 & \ \ - & \ \ - & \ \ - & MS & 11.6 & \ \ - & 15.95 & 15.15 & \\ J063952.15+515700.9 & 99.967315 & 51.950267 & 29720$\pm$\ 110 & 5.37$\pm$0.04 & -3.00$\pm$0.73 & sdB He1 & 35.8 & \ \ - & \ \ - & 11.96 & \\ J064618.36+292013.2$^{*}$ & 101.57652 & 29.337016 & 38740$\pm$\ 450 & 5.90$\pm$0.05 & $-4.00>$ & sdO He0 & 73.4 & \ \ - & \ \ - & 13.59 & \\ J064814.13+171056.2 & 102.05891 & 17.182305 & \ \ - & \ \ - & \ \ - & MS & 10.6 & \ \ - & 14.96 & 13.23 & \\ J065446.63+244926.8 & 103.69431 & 24.82412 & 58700$\pm$3600 & 5.17$\pm$0.05 & -2.04$\pm$0.10 & sdO He2 & 55.7 & \ \ - & 13.65 & 13.99 & \\ J065532.98+220349.6 & 103.88743 & 22.063784 & 45090$\pm$\ 890 & 5.62$\pm$0.05 & -1.71$\pm$0.08 & sdO He6 & 30.5 & \ \ - & \ \ - & 13.70 & \\ J065647.77+242958.8 & 104.19908 & 24.499685 & \ \ - & \ \ - & \ \ - & MS & 18.7 & \ \ - & \ \ - & 10.19 & \\ J065748.42+253251.1 & 104.45177 & 25.547541 & 44930$\pm$1160 & 6.48$\pm$0.10 & $-4.00>$ & sdB He19 & 16.1 & \ \ - & 15.89 & 16.05 & \\ J065816.71+094343.1 & 104.56965 & 9.7286415 & 36270$\pm$\ 320 & 5.03$\pm$0.03 & -1.70$\pm$0.08 & sdOB He11 & 17.1 & \ \ - & 13.27 & 13.59 & \\ J070619.19+242910.5 & 106.57996 & 24.486267 & 61820$\pm$6030 & 5.30$\pm$0.04 & -2.00$\pm$0.13 & sdO He4 & 15.0 & \ \ - & 15.77 & 15.81 & \\ J071202.40+113332.4 & 108.01 & 11.559014 & 24720$\pm$\ 180 & 5.10$\pm$0.04 & -2.63$\pm$0.07 & sdB He5 & 33.0 & \ \ - & \ \ - & 
12.46 & \\ \hline\noalign{\smallskip} \end{tabularx}\\ {$^a$ Stars labeled with $\ast$ also appear in the hot subdwarf catalogue of Geier et al. (2017).}\\ {$^b$ Stars labeled with $\dagger$ also appear in N\'emeth et al. (2012).} \\ {$^c$ ``$>$'' denotes an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ for the object.}\\ \end{table*} \setcounter{table}{0} \begin{table*} \scriptsize \begin{minipage}{180mm} \caption{Continued } \end{minipage}\\ \centering \begin{tabularx}{18.0cm}{lllcccccccccccX} \hline\noalign{\smallskip} Designation$^ a$ & ra$^ b$ & dec & $T_\mathrm{eff}$ & $\mathrm{log}\ g$ & $\mathrm{log}(n\mathrm{He}/n\mathrm{H})^c$ &Sptype & SNR &uSDSS &gSDSS &G\ GaiaDR2 & \\ LAMOST & LAMOST & LAMOST&$(K)$&($\mathrm{cm\ s^{-2}}$)& & &\textit{u}-band &(mag) &(mag) &(mag) &\\ \hline\noalign{\smallskip} J072835.11+280239.1 & 112.1463 & 28.044199 & 86250$\pm$16170 & 5.77$\pm$0.16 & 0.04$\pm$0.12 & He-sdO He40 & 10.1 & \ \ - & 15.45 & 15.78 & \\ J073446.14+342120.8 & 113.69226 & 34.355805 & 25510$\pm$\ 680 & 5.15$\pm$0.07 & -2.42$\pm$0.09 & sdB He6 & 20.6 & \ \ - & 15.20 & 15.46 & \\ J073756.25+311646.5 & 114.48439 & 31.279597 & 30600$\pm$\ 130 & 5.45$\pm$0.03 & -2.47$\pm$0.12 & sdB He5 & 11.2 & \ \ - & \ \ - & 13.58 & \\ J074121.90+265425.8$^{*}$ & 115.34127 & 26.907168 & 29530$\pm$\ 460 & 5.30$\pm$0.07 & $-4.00>$ & sdB & 11.6 & \ \ - & 15.52 & 15.59 & \\ J074435.14+302108.7$^{*}$ & 116.14643$^{\dagger}$ & 30.352421 & 28980$\pm$\ 200 & 5.51$\pm$0.03 & -2.95$\pm$0.10 & sdB He3 & 30.6 & \ \ - & 14.39 & 14.74 & \\ J074855.82+304247.0$^{*}$ & 117.23262$^{\dagger}$ & 30.713059 & 30910$\pm$\ 110 & 5.80$\pm$0.03 & -2.02$\pm$0.04 & sdB He4 & 31.8 & \ \ - & 13.76 & 14.06 & \\ J075139.26+064604.8 & 117.91362 & 6.7680011 & 39850$\pm$\ 180 & 5.61$\pm$0.04 & -0.16$\pm$0.03 & He-sdOB He30 & 39.9 & \ \ - & 13.21 & 13.50 & \\ J075412.37+294957.0$^{*}$ & 118.55157 & 29.832504 & 30910$\pm$1230 & 5.77$\pm$0.28 & -1.87$\pm$0.32 & sdB He7 & 14.5 & \ \ - & 14.24 & 14.57 & \\ 
J075922.99+164601.6 & 119.845827 & 16.767125 & 37930$\pm$\ 920 & 5.25$\pm$0.05 & -2.89$\pm$0.27 & sdO He1 & 23.9 & 13.84 & 14.94 & 14.42 & \\ J080327.92+342140.6$^{*}$ & 120.86637 & 34.361297 & 38130$\pm$1350 & 5.58$\pm$0.11 & -3.28$\pm$0.60 & sdO He3 & 26.1 & \ \ - & 14.75 & 15.06 & \\ J080611.66+334425.6 & 121.5486 & 33.740449 & \ \ - & \ \ - & \ \ - & WD & 10.5 & \ \ - & 16.13 & 16.33 & \\ J080628.65+242057.4$^{*}$ & 121.61938 & 24.349293 & 27990$\pm$\ 350 & 5.48$\pm$0.04 & -2.50$\pm$0.14 & sdB He4 & 14.4 & \ \ - & 14.70 & 15.00 & \\ J080758.25+272434.3 & 121.99274 & 27.409538 & 38370$\pm$1190 & 5.58$\pm$0.08 & -3.41$\pm$0.66 & sdO He2 & 50.4 & \ \ - & 13.76 & 14.11 & \\ J084535.66+194150.2 & 131.3986$^{\dagger}$ & 19.697288 & 22070$\pm$\ 420 & 5.00$\pm$0.06 & -1.80$\pm$0.06 & sdB He7 & 18.4 & 13.13 & 13.49 & 13.26 & \\ J085649.36+170116.0$^{*}$ & 134.2056708$^{\dagger}$ & 17.021125 & 28810$\pm$\ 150 & 5.65$\pm$0.01 & -3.19$\pm$0.17 & sdB He2 & 56.3 & 14.67 & 12.73 & 12.81 & \\ J085851.11+021012.9$^{*}$ & 134.71299 & 2.1702667 & 48580$\pm$1150 & 5.61$\pm$0.07 & -1.83$\pm$0.09 & sdO He6 & 16.8 & \ \ - & 13.30 & 13.63 & \\ J093512.20+310959.3$^{*}$ & 143.8008625 & 31.166475 & 33870$\pm$\ 110 & 5.62$\pm$0.04 & -1.47$\pm$0.07 & sdOB He11 & 13.4 & 15.06 & 15.34 & 15.63 & \\ J112350.68+233645.8$^{*}$ & 170.961175 & 23.6127333 & 27560$\pm$\ 350 & 5.32$\pm$0.04 & -2.39$\pm$0.11 & sdB He5 & 15.8 & 13.76 & 13.90 & 14.15 & \\ J120624.36+570935.7$^{*}$ & 181.6015083$^{\dagger}$ & 57.1599222 & 34960$\pm$\ 230 & 5.70$\pm$0.04 & -1.81$\pm$0.06 & sdOB He9 & 18.4 & 14.28 & 14.60 & 14.85 & \\ J123652.66+501513.8$^{*}$ & 189.219429 & 50.253856 & 43250$\pm$2210 & 5.40$\pm$0.12 & -2.42$\pm$0.30 & sdO He2 & 22.7 & 13.96 & 14.38 & 14.65 & \\ J125229.60-030129.6$^{*}$ & 193.12335 & -3.0248924 & 30790$\pm$\ 480 & 5.59$\pm$0.09 & $-3.36>$ & sdB He0 & 13.4 & 15.46 & 15.71 & 15.65 & \\ J133640.95+515449.4 & 204.170631 & 51.913729 & 88450$\pm$21230 & 5.13$\pm$1.00 & -2.77$\pm$1.04 & sdOB - 
& 53.5 & 12.79 & 12.76 & 12.97 & \\ J135153.11-012946.6 & 207.9713167 & -1.4962778 & 31040$\pm$\ 560 & 6.03$\pm$0.12 & $-2.77>$ & sdB He0 & 11.2 & 15.31 & 15.45 & 15.66 & \\ J141736.40-043429.0 & 214.401671 & -4.574742 & 37750$\pm$\ 380 & 5.82$\pm$0.06 & -1.53$\pm$0.05 & sdOB He12 & 24.6 & 13.52 & 13.96 & 13.71 & \\ J144052.82-030852.6$^{*}$ & 220.220106 & -3.147965 & 29320$\pm$\ 40 & 5.44$\pm$0.03 & -2.74$\pm$0.05 & sdB He0 & 45.2 & 13.60 & 14.02 & 13.82 & \\ J161200.65+514943.5$^{*}$ & 243.0027458 & 51.82875 & 45130$\pm$1610 & 5.09$\pm$0.13 & -3.31$\pm$0.29 & sdB He2 & 10.9 & 13.26 & 13.54 & 13.67 & \\ J164718.35+322832.9 & 251.826491 & 32.475813 & \ \ - & \ \ - & \ \ - & WD & 38.9 & 13.47 & 13.83 & 13.59 & \\ J171013.21+532646.0 & 257.555047 & 53.446121 & 28120$\pm$\ 340 & 5.83$\pm$0.03 & -2.42$\pm$0.12 & sdB He3 & 13.1 & 12.28 & 12.87 & 12.60 & \\ J171718.79+422609.2 & 259.32832 & 42.435913 & 55490$\pm$2130 & 5.78$\pm$0.03 & -3.01$\pm$0.29 & sdO He0 & 30.4 & 12.26 & 12.77 & 12.48 & \\ J175311.46+062541.5 & 268.2977592 & 6.4282084 & \ \ - & \ \ - & \ \ - & MS & 11.6 & 14.68 & 13.66 & 14.58 & \\ J192216.18+405757.4 & 290.567417 & 40.965972 & \ \ - & \ \ - & \ \ - & MS & 20.7 & 13.54 & \ \ - & 13.51 & \\ J192609.46+372008.1$^{*}$ & 291.539417 & 37.335611 & 31060$\pm$\ 240 & 5.97$\pm$0.04 & -1.65$\pm$0.04 & sdB He11 & 23.8 & 13.45 & \ \ - & 13.61 & \\ J213406.74+033415.4 & 323.528123 & 3.570953 & 40310$\pm$1390 & 6.12$\pm$0.12 & -1.60$\pm$0.18 & sdB - & 21.2 & 11.50 & 11.78 & 11.55 & \\ J223419.15+091620.5 & 338.57981 & 9.272378 & \ \ - & \ \ - & \ \ - & MS & 13.2 & 13.89 & 13.93 & 13.87 & \\ \hline\noalign{\smallskip} \end{tabularx}\\ \end{table*} \subsection{Comparison with other studies} Among the 56 hot subdwarf stars in our study, 25 stars have already been catalogued by Geier et al. (2017), and 5 stars are listed in N\'emeth et al. (2012). 
To check the results presented in our study, we compared the atmospheric parameters obtained here with those from Geier et al. (2017) and N\'emeth et al. (2012) where available. We have 25 hot subdwarf stars in common with Geier et al. (2017), but $T_\mathrm{eff}$ and $\mathrm{log}\ g$ are available in the catalogue for only 11 of them, and He abundances for only 10. The subplots from left to right of Panel (a) in Fig 5 present the comparison of $T_\mathrm{eff}$, $\mathrm{log}\ g$ and $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$, respectively. Both $T_\mathrm{eff}$ and $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ obtained in this study match well with the values from Geier et al. (2017). Although the comparison of $\mathrm{log}\ g$ in the middle subplot of Panel (a) shows a larger scatter than the other two parameters, our results are still comparable with the literature values. We also have 5 hot subdwarf stars in common with N\'emeth et al. (2012), which are marked in Table 1. These stars are from the GALEX survey with low-resolution spectra. As in Panel (a), both $T_\mathrm{eff}$ and $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ from this study match very well with the values from N\'emeth et al. (2012, see the left and right subplots in Panel (b)). However, most of the $\mathrm{log}\ g$ values obtained in this study are slightly larger than those from N\'emeth et al. (2012, see the middle subplot in Panel (b)). This could be due to the fact that the synthetic spectra used to fit the observed spectra in our study were calculated from model atmospheres composed only of H and He (N\'emeth et al. 2014), while the synthetic spectra used in N\'emeth et al. (2012) were calculated from model atmospheres that also include C, N and O. 
Furthermore, the observed spectra in our sample (obtained in the LAMOST survey) are different from the spectra in N\'emeth et al. (2012, obtained in the GALEX survey), and the quality (e.g., SNR) of the spectra also differs. Beyond these effects, the major reason for the differences in surface gravity is the inclusion of the H Stark broadening tables of Tremblay \& Bergeron (2009) directly in the model atmosphere calculation in {\sc Tlusty} version 204, unlike in version 200 used by N\'emeth et al. (2012). \begin{figure*} \centering \begin{minipage}[c]{0.8\textwidth} \includegraphics[width=140mm]{compare_with_geier17.pdf} \centerline{(a)} \end{minipage}\\ \centering \begin{minipage}[c]{0.8\textwidth} \includegraphics[width=140mm]{compare_with_nemeth12.pdf} \centerline{(b)} \end{minipage} \caption{Panel (a): Comparisons between the atmospheric parameters obtained in this study and the ones from Geier et al. (2017). Panel (b): Comparisons between the atmospheric parameters obtained in this study and the ones from N\'emeth et al. (2012). } \end{figure*} \subsection{Parameter diagrams} Fig 6 shows the distribution of all hot subdwarf stars from our study in the $T_{\rm eff}-\log\ {g}$ diagram. The thick solid line denotes the He main-sequence (He MS) from Paczy\'nski (1971), while the two dashed lines represent the zero-age HB (ZAHB) and terminal-age HB (TAHB) for hot subdwarf stars with [Fe/H] = -1.48 from Dorman et al. (1993). The thin solid, dot-dashed and dotted curves are the sdB evolution tracks from Han et al. (2002). From right to left, these sdB evolution tracks have masses of 0.5, 0.6 and 0.7\ $\mathrm{M}_{\odot}$, respectively. The thin solid curves represent a H-rich envelope mass of 0.0\ $\mathrm{M}_{\odot}$, the dot-dashed curves 0.001\ $\mathrm{M}_{\odot}$, and the dotted curves 0.005\ $\mathrm{M}_{\odot}$. 
\begin{figure} \centering \includegraphics [width=100mm]{teff_logg1.pdf} \caption{$T_\mathrm{eff}$-$\mathrm{log}\ g$ diagram for the 56 hot subdwarf stars identified in this study. Stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\leq-2.2$ are marked with filled circles, stars with $-2.2< \mathrm{log}(n\mathrm{He}/n\mathrm{H})<-1.0$ are represented by open triangles, while stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\geq -1.0$ are shown by open squares. The thick solid line denotes the He-MS from Paczy\'nski (1971), and the two dashed lines represent the ZAHB and TAHB from Dorman et al. (1993) with [Fe/H] = -1.48, while the thin solid, dot-dashed, and dotted curves represent the evolution tracks of hot subdwarf stars from Han et al. (2002). See text for the details on these evolution tracks.} \end{figure} We split our sample into three groups based on their He abundance following the scheme of N\'emeth et al. (2012). In Fig 6, filled circles denote hot subdwarf stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\leq-2.2$. Most of these stars are He-poor sdB stars, and they are located near $T_\mathrm{eff}$ = 29\,000 K and $\mathrm{log}\ g$ = 5.5. A few of the stars in this He abundance range show very high temperatures (e.g., $T_\mathrm{eff}\geq$ 50\,000 K), which suggests that they have already finished their core helium burning stage and now evolve towards the WD cooling tracks. Open triangles in Fig 6 represent hot subdwarf stars with $-2.2< \mathrm{log}(n\mathrm{He}/n\mathrm{H})<-1.0$. Most of these stars are found near $T_\mathrm{eff}$ = 32\,000 K and $\mathrm{log}\ g$ = 5.75. These stars show higher gravities than the previous group and their temperatures show a large dispersion. The third group contains stars with He abundances in the range $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\geq-1.0$, which are denoted by open squares in Fig 6. 
We found only five hot subdwarf stars in this He abundance range: four of them are classified as He-sdOB stars and one as a He-sdO star based on our classification scheme. \begin{figure} \centering \includegraphics [width=100mm]{teff_loghe1.pdf} \caption{$T_\mathrm{eff}$-$\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ diagram for the 56 hot subdwarf stars identified in this study. The red dashed line denotes the solar He abundance. The dotted and solid lines are the linear regressions fitted by Edelmann et al. (2003), while the dot-dashed line is the best-fitting line for the He-poor sequence in N\'emeth et al. (2012). Diamonds denote the stars for which we obtained only an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ (see Table 1). } \end{figure} Fig 7 shows the $T_\mathrm{eff}$-$\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ diagram for our hot subdwarf stars. The solar He abundance is marked by a horizontal red dashed line. The diamonds represent the stars for which only an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ could be obtained. Analyzing spectra of hot subdwarf stars from the Hamburg Quasar Survey, Edelmann et al. (2003) found two He sequences (a He-rich and a He-weak sequence), each showing a positive correlation between effective temperature and He abundance. The He-rich sequence of their sample follows the fitting formula: \begin{equation} \mathrm{log}(n\mathrm{He}/n\mathrm{H})=-3.53+1.35\left(\frac{T_\mathrm{eff}}{10^{4}K}-2.00\right), \end{equation} while the He-weak sequence in their study follows the formula: \begin{equation} \mathrm{log}(n\mathrm{He}/n\mathrm{H})=-4.79+1.26\left(\frac{T_\mathrm{eff}}{10^{4}K}-2.00\right). \end{equation} These two lines are shown by the dotted and the solid lines in Fig 7, respectively. We found results similar to those described by Edelmann et al. (2003): the two He sequences of hot subdwarf stars are also present in our sample. 
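For illustration, the two fitting formulae above can be evaluated directly. The following sketch (our own, not code from the original analysis) returns the He abundance predicted by each Edelmann et al. (2003) sequence at a given effective temperature:

```python
def edelmann_sequences(teff):
    """Predicted log(nHe/nH) of the He-rich and He-weak sequences
    of Edelmann et al. (2003) at effective temperature teff (in K)."""
    x = teff / 1.0e4 - 2.0
    he_rich = -3.53 + 1.35 * x  # He-rich sequence
    he_weak = -4.79 + 1.26 * x  # He-weak sequence
    return he_rich, he_weak

# At Teff = 30 000 K the sequences give roughly -2.18 and -3.53,
# i.e. the two sequences are separated by more than 1 dex.
rich, weak = edelmann_sequences(30000.0)
```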
Moreover, the He-rich sequence in Fig 7 is fitted well by the line described in equation (5), which is from Edelmann et al. (2003). However, the He-weak sequence in our sample follows a different trend from the He-weak sequence of Edelmann et al. (2003). Instead, the He-weak sequence in our sample is consistent with the one presented in N\'emeth et al. (2012), who used another line to fit the He-weak sequence in their study: \begin{equation} \mathrm{log}(n\mathrm{He}/n\mathrm{H})=-4.26+0.69\left(\frac{T_\mathrm{eff}}{10^{4}K}-2.00\right). \end{equation} We also plot the linear regression given by equation (7), denoted by a dot-dashed line in Fig 7; its trend is consistent with our He-weak sequence. Furthermore, Edelmann et al. (2003) also found two less clear sequences of hot subdwarf stars in the $\mathrm{log}$ \textit{g}-$\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ plane. However, we did not find a similar result in our sample (see Fig 8). \begin{figure} \centering \includegraphics [width=100mm]{logg_loghe1.pdf} \caption{The $\log{g}-\log{(n{\rm He}/n{\rm H})}$ plane for the 56 hot subdwarf stars identified in this study; the red dashed line marks the solar He abundance for reference. Diamonds denote the stars for which we obtained only an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ (see Table 1).} \end{figure} \section{Discussion} The traditional method to search for hot subdwarf stars in large spectroscopic surveys is to make color cuts followed by visual inspections. This method requires homogeneous photometric information to obtain the colors of the stars (e.g., \textit{u-g} and \textit{g-r}; Geier et al. 2011). Therefore, the traditional method is not suitable for large spectral databases without supplementary photometric measurements, such as the spectral database of LAMOST. 
The HELM algorithm, as described in Paper I and in this study, does not need color information to filter out spectra with certain spectral properties. This makes HELM a suitable method to screen large spectroscopic surveys for hot subdwarf stars, or any other objects with distinct spectral features. One may note that He-rich hot subdwarf stars are under-represented in our sample (only 5 stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})>-1.0$; see Fig 7 in this paper). This could be due to the fact that the number of He-rich hot subdwarf stars in the training sample is small. Our training spectra were the hot subdwarfs from Luo et al. (2016), consisting of 77 sdB stars, 12 sdO stars, 10 He-sdB stars and 15 He-sdO stars. According to the classification scheme of Luo et al. (2016), both sdB and sdO stars are He-poor hot subdwarf stars with dominant H Balmer lines, while both He-sdB and He-sdO stars are He-rich stars with dominant He I or He II lines. That is, there are many more hot subdwarf stars with dominant H Balmer lines (He-poor stars) than stars with dominant He lines (He-rich stars) in our training sample: 89 versus 25. In addition to this, we did not separate these different types of subdwarf stars during the experiments. Instead, we trained HELM with all the sample spectra together, thus the larger the number of stars of a particular type in the training sample, the greater the precision with which this stellar type may be identified in the science sample. These factors could account for the lack of He-rich hot subdwarf stars in our results. The quantity and quality of the training spectra are both very important factors in the HELM method, and have a direct influence on the results (Tang et al. 2015). Before we started this work, only 166 hot subdwarf stars (including 122 single-lined stars and 44 double-lined stars) with LAMOST spectra were published in Luo et al. (2016). 
Therefore, the number of hot subdwarf stars in our training spectra is limited. Moreover, among the 122 single-lined hot subdwarf stars, 8 are classified as BHB stars in Luo et al. (2016), and only about 50 have an SNR larger than 10. As a result, although the initial candidate set selected by the HELM algorithm contains more than 7000 spectra, nearly 6000 of them have a \textit{u}-band SNR below 10, which is too low for a reliable follow-up study. These spectra have been discarded from our analysis, as mentioned in Section 3. With these considerations, the total number of hot subdwarfs in the LAMOST target list is likely much higher. Having used machine learning tools to search for hot subdwarf stars in LAMOST, we can outline some future improvements that will be required to increase the efficiency and accuracy of the method. For example, we plan to add the standard hot subdwarf stars listed in Drilling et al. (2013) to our training sample, since it provides detailed classifications for all types of hot subdwarf stars, which will be quite useful for classifying hot subdwarf stars with the HELM algorithm. We also plan to cross match the LAMOST database with the newest hot subdwarf catalogue (e.g., Geier et al. 2017), so that we will be able to add many high-quality hot subdwarf spectra to our training sample. From these improvements we expect a large number of new subdwarfs to be uncovered from the LAMOST survey in the near future. This work is already under way and will make important contributions to the study of the formation and evolution of hot subdwarf stars. \section{Summary} We have applied the HELM algorithm to search for hot subdwarf stars in LAMOST DR1. We identified 56 hot subdwarf stars among 465 candidates with single-lined spectra, and obtained their atmospheric parameters by fitting the profiles of the H Balmer and He lines with synthetic spectra calculated from NLTE model atmospheres. 
We found 31 sdB stars, 11 sdO stars, 9 sdOB stars, 4 He-sdOB stars and 1 He-sdO star. These stars confirm the two He sequences of hot subdwarf stars in the $T_\mathrm{eff}$-$\log{(n{\rm He}/n{\rm H})}$ diagram, which were first found by Edelmann et al. (2003). Our study has shown the strength of the HELM algorithm to filter out targets with specific spectral properties from large sets of spectroscopic data directly, without the need for any photometric observations or pre-selection. Though the total number of hot subdwarf stars identified may seem low compared to the sample size, this is mainly due to the limited quantity and quality of the training spectra. We expect that many more hot subdwarf stars will be found in the LAMOST database using machine learning methods in the future, after our experiences are implemented in the algorithm. We used the HELM algorithm for the first time to search for hot subdwarf stars in a large spectroscopic survey, and the results presented in our study demonstrate that this method could be applied to search for other types of objects with distinct features in their spectra or images. \begin{ack} We thank the referee, A. E. Lynas-Gray, for his valuable suggestions and comments, which improved the manuscript considerably. This work was supported by the National Natural Science Foundation of China Grant Nos. 11390371, 11503016, 11873037, 11603012 and U1731111, Natural Science Foundation of Hunan province Grant No. 2017JJ3283, the Youth Fund project of Hunan Provincial Education Department Grant No. 15B214, the Astronomical Big Data Joint Research Center, co-founded by the National Astronomical Observatories, Chinese Academy of Sciences and the Alibaba Cloud, Young Scholars Program of Shandong University, Weihai 2016WHWLJH09, Natural Science Foundation of Shandong Province ZR2015AQ011, and China post-doctoral Science Foundation 2015M571124. This research has used the services of \mbox{www.Astroserver.org} under reference W00QEL. P.N. 
acknowledges support from the Grant Agency of the Czech Republic (GA\v{C}R 18-20083S). The LAMOST Fellowship is supported by Special Funding for Advanced Users, budgeted and administered by the Center for Astronomical Mega-Science, Chinese Academy of Sciences (CAMS). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. \end{ack} \section{Introduction} Hot subdwarf stars (spectral types sdB, sdO and related objects) are low-mass stars in a core or shell helium (He) burning stage (Heber 2009, 2016). These stars lose nearly their whole hydrogen (H) envelopes during the evolution on the red giant branch (RGB); therefore they present very high effective temperatures ($T_\mathrm{eff}$ $\geq$ 20\,000 K) on reaching the horizontal branch (HB) stage. Hot subdwarf stars are considered to be the main source of the UV excess found in elliptical galaxies (O'Connell 1999; Han et al. 2007). These stars have also turned out to be important objects in studying close binary interactions, since many hot subdwarf stars are found in close binaries (Maxted et al. 2001; Napiwotzki et al. 2004; Copperwheat et al. 2011). The most common types of companion stars in hot subdwarf binaries are main-sequence (MS) stars, white dwarfs (WDs), brown dwarfs and planets. Hot subdwarf stars with massive WD companions are considered to be progenitors of type Ia supernovae (Wang et al. 2009; Geier et al. 2011; Geier 2015). The atmospheres of hot subdwarf stars are good places to study diffusion processes, such as gravitational settling and radiative levitation. Moreover, pulsating sdB/O stars are extensively used in asteroseismology to study stellar interiors and rotation. 
For a recent review on hot subdwarf stars, see Heber (2016). The formation mechanism of hot subdwarf stars is still unclear. Since about half of the hot subdwarf B type (sdB) stars are found in close binaries, Han et al. (2002, 2003) carried out a detailed binary population synthesis to study the formation of sdB stars. They found that common envelope (CE) ejection, mass transfer through Roche lobe overflow (RLOF) or the merger of two helium core white dwarfs (He-WDs) could produce sdB stars in close binary, wide binary and single systems, respectively. Based on these results, Chen et al. (2013) predicted that the orbital period of sdB binaries formed from RLOF mass transfer could be up to 1200 days, if atmospheric RLOF and a different angular momentum loss are considered in binary evolution. This result could explain the formation of sdB stars found in wide binaries. Furthermore, Xiong et al. (2017) found that two distinct groups of sdB stars could be formed through the detailed CE ejection channel. One group is flash-mixed sdB stars without H-rich envelopes, and the other is canonical sdB stars with H-rich envelopes. In addition, Zhang et al. (2012, 2017) studied in detail the formation channel for single sdB stars through the merger of two He-WDs or the merger of a He-WD with a low-mass MS companion. Their results could account for some He-rich sdB stars found in the field. The counterparts of hot subdwarf stars in globular clusters (GCs) are known as extreme horizontal branch (EHB) stars. Some of these stars with particularly high effective temperatures (e.g., $T_\mathrm{eff}$ $\geq$ 32 000 K) form a blue hook in the ultraviolet (UV) color-magnitude diagram (CMD) of GCs (Brown et al. 2016), and they are known as blue hook stars in GCs. Lei et al. (2015, 2016) proposed that a tidally-enhanced stellar wind during binary evolution may lead to huge mass loss of the primary stars on the RGB and could produce blue hook stars in GCs after a late core He flash.
Thanks to large surveys over the past decade, a significant number of previously unknown hot subdwarfs have been catalogued, e.g., by Kepler ({\O}stensen et al. 2010), the Galaxy Evolution Explorer (GALEX; Vennes et al. 2011; N\'emeth et al. 2012; Kawka et al. 2015), the Sloan Digital Sky Survey (SDSS; Geier et al. 2015; Kepler et al. 2015, 2016) and the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) survey (Luo et al. 2016). {\O}stensen (2006) compiled a widely used hot subdwarf database through an extensive literature search, in which more than 2300 hot subdwarf stars are archived. Furthermore, Geier et al. (2017) compiled a catalogue of known hot subdwarf stars and candidates retrieved from the literature and unpublished databases. This catalogue contains 5613 objects with multi-band photometry, proper motions, classifications, atmospheric parameters, radial velocities and information on light curve variability. Using the first data release (DR1) of the LAMOST survey, Luo et al. (2016) identified 166 hot subdwarf stars, among which 122 objects are single-lined, while the other 44 objects present double-lined composite spectra (e.g., Mg I triplet lines at 5170 $\mathrm{\AA}$ or Ca II triplet lines at 8650 $\mathrm{\AA}$), which demonstrates the binary nature of these stars. More spectroscopically identified hot subdwarf stars and candidates are needed to improve our understanding of their formation and evolution. Fortunately, large spectroscopic surveys provide a good opportunity to search for new hot subdwarf stars, e.g., SDSS (York et al. 2000) and LAMOST (Cui et al. 2012; Zhao et al. 2006, 2012). The traditional method extensively used to search for hot subdwarf stars in large spectroscopic surveys is based on color cuts, followed by visual inspection. However, this method requires homogeneous photometry for the spectra to obtain their colors in different bands (e.g., \textit{u-g} and \textit{g-r}; Geier et al.
2011), thus it might not work well for spectral databases that lack homogeneous photometric information, such as the database of LAMOST. Employing the Hierarchical Extreme Learning Machine (HELM) algorithm, Bu et al. (2017, hereafter Paper I) explored a machine learning method to search for hot subdwarf stars in LAMOST spectra. The Extreme Learning Machine (ELM) is a special type of single hidden-layer feed-forward network (Huang et al. 2006), while HELM is the hierarchical framework of the ELM algorithm. It is inspired by deep learning algorithms and is built in a multilayer manner. HELM has been frequently used in many fields, such as image-quality assessment (Mao et al. 2014), human action recognition (Minhas et al. 2010) and hyper-spectral image classification (Li et al. 2015). Using the HELM algorithm in Paper I, we achieved an accuracy and efficiency of 92\% and 96\%, respectively, in classifying single-lined hot subdwarf stars in LAMOST spectra, which demonstrated the reliability of the method to search for hot subdwarf stars in the LAMOST survey spectral database. Following Paper I, we applied the HELM algorithm to LAMOST DR1 and identified 56 hot subdwarf stars. We obtained the atmospheric parameters of these stars by fitting their spectra with synthetic spectra calculated from NLTE model atmospheres (N\'emeth et al. 2012, 2014). The structure of the paper is as follows. In Section 2, we briefly introduce the LAMOST spectral survey and the sample filtering method based on the HELM algorithm. In Section 3, we introduce the selection criteria used to sort out hot subdwarf stars from the candidates delivered by the HELM algorithm. We give our results in Section 4. Finally, a discussion and a summary of this study are presented in Sections 5 and 6, respectively.
\section{The LAMOST survey and sample filtering with the HELM algorithm} \label{sec:LAMOST and sample} \subsection{The LAMOST survey and database DR1} LAMOST is a special reflecting Schmidt telescope designed with both a large aperture (effective aperture of 3.6 - 4.9 m) and a wide field of view (FOV, 5$^{\circ}$; Cui et al. 2012). LAMOST is equipped with 16 low resolution spectrographs connected to 4000 optical fibres, which are precisely positioned on the focal surface. As the telescope with the highest rate of spectral acquisition in the world, LAMOST can obtain the spectra of 4000 objects simultaneously. LAMOST conducted its pilot survey between October 2011 and June 2012, while the regular survey started in September 2012 and finished its first year of operation in June 2013. The data from both the pilot survey and the first year of the regular survey make up the database of LAMOST DR1 (Luo et al. 2015). DR1 contains a total of 2\,204\,696 spectra with a resolution ($\lambda/\Delta\lambda$) of 1800 in the wavelength range 3690-9100 $\mathrm{\AA}$, among which 1\,790\,879 spectra have a signal-to-noise ratio (SNR) $\geq$10, and 1\,944\,329 spectra are classified as stellar spectra. Although the number of stellar spectra in LAMOST DR1 is large, many of them lack photometric measurements in certain bands, such as the \textit{u} band, which prevents the use of colors for object classification. Therefore, LAMOST DR1 provides an appropriate database to test our new method (the HELM algorithm) in searching for hot subdwarf stars directly from observed spectra, without a need for color information (also see the discussion in Section 5). \subsection{The HELM algorithm and our training sample} HELM stands for the hierarchical framework of the ELM algorithm (see Paper I for more details), which was proposed by Tang et al. (2015). It usually contains two parts: an unsupervised learning part and a supervised part. The unsupervised part in HELM can include many layers.
To give higher-level features of the training sample, the input of each layer is the output of the previous layer. On the other hand, the supervised part contains only one layer, and it takes the output of the last unsupervised layer as its input. In the experiments of Paper I, the HELM algorithm filtered out single-lined hot subdwarf stars from LAMOST spectra with an accuracy and efficiency of 0.92 and 0.96, respectively. When applied to the selection of double-lined hot subdwarfs, HELM presented an accuracy and efficiency of 0.80 and 0.71, respectively. These results compare favorably with other popular algorithms (see Section 4.2 in Paper I), which demonstrates that the HELM algorithm is an accurate and efficient new method to search for hot subdwarf stars in large spectroscopic surveys. The training sample used in the experiments of Paper I consists of the spectra of hot subdwarf stars identified in Luo et al. (2016) combined with 4600 LAMOST DR1 spectra of various types of objects, including stars of different spectral types, galaxies, quasars and objects with ambiguous spectral features. There are a total of 166 hot subdwarf spectra in our training sample, among which 122 stars are single-lined hot subdwarfs, while 44 spectra show strong Mg I triplet lines at 5170 $\mathrm{\AA}$ or Ca II triplet lines at 8650 $\mathrm{\AA}$, indicating the binary nature of these stars. According to Table 2 in Luo et al. (2016), the 122 single-lined hot subdwarf stars consist of 77 sdB stars, 15 He-sdO stars, 12 sdO stars, 10 He-sdB stars and 8 blue horizontal branch (BHB) stars. All the sample spectra are divided into three groups to carry out the experiments with HELM and other popular algorithms (see Paper I for details).
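As a schematic illustration of the kind of layer stacked in HELM (this is not the Paper I implementation; the array shapes, the tanh activation and the pseudo-inverse solver are our assumptions), a single ELM layer assigns random, fixed input weights and solves the output weights in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Train one ELM layer: random, fixed input weights; output weights
    obtained in closed form by least squares (Moore-Penrose pseudo-inverse),
    so no iterative back-propagation is needed."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ T    # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Propagate new inputs through the trained layer."""
    return np.tanh(X @ W + b) @ beta
```

In HELM, several such layers are stacked so that the output of one unsupervised layer becomes the input of the next, with a single supervised layer of this form on top.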
\section{Target selection} \begin{figure*} \centering \begin{minipage}[c]{0.3\textwidth} \includegraphics[width=45mm]{sdB_d02_fm.pdf} \centerline{(a)} \end{minipage}% \centering \begin{minipage}[c]{0.3\textwidth} \includegraphics[width=45mm]{bhb_d02_fm.pdf} \centerline{(b)} \end{minipage}% \centering \begin{minipage}[c]{0.3\textwidth} \includegraphics[width=45mm]{wd_d02_fm.pdf} \centerline{(c)} \end{minipage} \caption{Normalized spectra near the H$_\delta$ line in three different types of stars. The blue dashed curve is the fitted profile of the $\mathrm{H}_{\delta}$ line. The values of $D_{0.2}$ and $f_\mathrm{m}$ for each star are shown.} \end{figure*} By applying the HELM algorithm outlined in Paper I, we obtained more than 7000 hot subdwarf candidates from LAMOST DR1, among which 1034 spectra have a $u$-band SNR larger than 10. We have selected our final hot subdwarf sample from these candidates. Blue horizontal branch (BHB) stars, B-type main-sequence (B-MS) stars and WDs show features in their spectra (e.g., strong H Balmer lines) very similar to those of hot subdwarf stars (Moehler et al. 1990). Some of these stars have similar temperatures to hot subdwarf stars, especially to He-poor sdB stars. Therefore, the hot subdwarf candidate sample selected by the HELM algorithm is contaminated by the above-mentioned object types. Three steps are used to select hot subdwarf stars from our candidates. \subsection{Excluding BHB stars and WDs from our sample} BHB stars are horizontal branch stars bluer than the RR Lyrae instability strip in the color-magnitude diagram (CMD). These stars present effective temperatures in the range of about 7\,000 - 20\,000 K and surface gravities in the range of $\log\ {g}\ =\ 2.5-4.5$ cm\,s$^{-2}$ (Catelan 2009). Xue et al. (2008) used the $D_{0.2}$ and $f_\mathrm{m}$ method to discriminate BHB stars from blue straggler (BS) and B-MS stars.
In this method, $D_{0.2}$ is the full width of the H$_\delta$ line at 20\% below the local continuum, while $f_\mathrm{m}$ is the flux relative to the continuum at the line core (Beers et al. 1992; Sirko et al. 2004). Xue et al. (2008) used the criteria $17\mathrm {\AA} \leq D_{0.2} \leq 28.5 \mathrm {\AA}$ and $0.1 \leq f_\mathrm{m}\leq 0.3$ to select BHB stars from their samples. Both $D_{0.2}$ and $f_\mathrm{m}$ are sensitive to effective temperature and gravity in hot stars (Xue et al. 2008), which makes them suitable measures to distinguish our sample spectra in the $D_{\rm 0.2}$ - $f_{\rm m}$ diagram. Since BHB stars have lower temperatures and gravities than hot subdwarf stars, and regular WDs present higher temperatures and gravities than hot subdwarf stars, these spectral classes can be clearly separated according to their $D_{\rm 0.2}$ and $f_{\rm m}$ values (Greenstein \& Sargent 1974). We use \textit{the scale width versus shape method} (Clewley et al. 2002; Xue et al. 2008) to fit the $\mathrm{H}_{\delta}$ line and obtain the values of $D_{0.2}$ and $f_\mathrm{m}$ for each spectrum in our sample. This method is based on a S\'ersic profile fit (S\'ersic 1968) to Balmer lines in the following form: \begin{equation} y=1.0-a\,\exp\left[-\left(\frac{|\lambda-\lambda_{0}|}{b}\right)^{c}\right], \end{equation} where $y$ is the normalized flux, $\lambda$ is the wavelength and $\lambda_0$ is the nominal central wavelength of the Balmer line. The coefficients $a$, $b$ and $c$ are free parameters. As described in Xue et al. (2008), to account for the imperfect normalization of spectra, we used five free parameters, $a$, $b$, $c$, $\lambda_0$ and $n$, to fit the normalized spectrum to the S\'ersic profile: \begin{equation} y=n-a\,\exp\left[-\left(\frac{|\lambda-\lambda_{0}|}{b}\right)^{c}\right]. \end{equation} The three panels in Fig 1 show the results of fitting the $\mathrm{H}_{\delta}$ profile of an sdB star, a BHB star and a WD, respectively.
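Once the five Sérsic parameters are fitted, $D_{0.2}$ and $f_\mathrm{m}$ follow analytically from Equation (2). A minimal sketch (the function name is ours; we assume a normalized spectrum whose local continuum level is the fitted $n$, and we read $f_\mathrm{m}$ as the core flux relative to that continuum):

```python
import math

def line_measures(a, b, c, n=1.0):
    """D0.2 and f_m from a fitted Sersic profile
    y = n - a*exp(-(|lam - lam0|/b)**c)   (Eq. 2).

    f_m: flux relative to the continuum at the line core (lam = lam0),
         where the profile reaches its minimum n - a.
    D0.2: full width where the profile lies 20% below the local continuum,
          i.e. where y = 0.8*n; inverting Eq. 2 for |lam - lam0| and
          doubling gives the width."""
    f_m = (n - a) / n
    if a <= 0.2 * n:
        raise ValueError("line core shallower than 20% of the continuum; "
                         "D0.2 is undefined")
    d02 = 2.0 * b * math.log(a / (0.2 * n)) ** (1.0 / c)
    return d02, f_m
```

For a pure exponential profile ($c=1$, $n=1$) the width reduces to $D_{0.2} = 2b\,\ln(a/0.2)$, which provides a quick sanity check of the inversion.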
In each panel, solid curves represent an extracted spectrum near the $\mathrm{H}_{\delta}$ line, while blue dashed curves denote our best-fitting line profiles. Panel (a) shows the spectrum of the sdB star PG\,1605+072 taken from Luo et al. (2016) with $T_\mathrm{eff}$\ =\ 32\,550$\pm$370 K and $\mathrm{log}\ g$\ =\ 5.29$\pm$0.07 cm\,s$^{-2}$. By adopting the fitting method described above, we obtained $D_{0.2}$\ =\ 9.37 $\mathrm{\AA}$ and $f_\mathrm{m}$\ =\ 0.63. Panel (b) shows the spectrum of the BHB star SDSSJ171935.27+262234.9 from Xue et al. (2008) with $T_\mathrm{eff}$ = 7846 K and $\mathrm{log}\ g$\ =\ 3.46 cm\,s$^{-2}$ (no error bars for this star are presented in Xue et al. 2008), while its $D_{0.2}$ and $f_\mathrm{m}$ are 22.53 $\mathrm{\AA}$ and 0.28, respectively. One can clearly see that the BHB star presents a much deeper $\mathrm{H}_{\delta}$ line (i.e., a smaller value of $f_\mathrm{m}$) and a much wider $D_{0.2}$ than the sdB star in Panel (a) due to its significantly lower effective temperature and gravity. The spectrum of the WD SDSS\,J094126.79+294503.4 in Panel (c) is taken from the catalogue of Eisenstein et al. (2006) with $T_\mathrm{eff}$ =\ 20\,818 K and $\mathrm{log} \ g$\ =\ 8.0 cm\,s$^{-2}$. Although this WD shows a similar depth of the $\mathrm{H}_{\delta}$ line (i.e., $f_\mathrm{m}$\ =\ 0.55) to the sdB star shown in Panel (a), it presents a much larger $D_{0.2}$ (39.42 $\mathrm{\AA}$) than the sdB star (9.37 $\mathrm{\AA}$) due to its higher gravity. \begin{figure*} \centering \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=80mm]{d02_fm_delta_5.pdf} \centerline{(a)} \end{minipage}% \centering \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=80mm]{lamost_dr1_d02_fm_snu_10_delta_3.pdf} \centerline{(b)} \end{minipage}% \caption{Panel (a): the distribution of BHB stars, hot subdwarfs and WDs in the $D_{\rm 0.2}-f_{\rm m}$ diagram.
Panel (b): Our hot subdwarf candidates selected by the HELM algorithm in the $D_{\rm 0.2}-f_{\rm m}$ diagram. The red dashed line is a clear boundary between BHB stars and hot subdwarfs at $D_{0.2}$\ = 17.0 \AA. } \end{figure*} To better demonstrate the differences in $D_{0.2}$ and $f_\mathrm{m}$ among BHB stars, hot subdwarfs and WDs, we selected some known BHB stars, hot subdwarfs and WDs from published catalogues and plotted them in the $D_{0.2}$ - $f_\mathrm{m}$ diagram in Panel (a) of Fig 2. Black solid triangles denote BHB stars identified by Xue et al. (2008), blue open circles represent hot subdwarfs selected from the catalogue of Geier et al. (2017), and green open squares are WDs from Eisenstein et al. (2006). BHB stars are concentrated quite well in the upper left corner of Panel (a), hot subdwarfs are distributed in a strip from the center to the bottom right, while WDs are located in the upper right and middle areas of the panel (note that most of the selected WDs have $D_{\rm 0.2}$ values larger than 35 \AA\ and are off the panel). As expected, there is a remarkable gap between BHB stars and hot subdwarf stars near $D_{0.2}$\ = 17.0 \AA\, which is marked by the red dashed horizontal line in Panel (a). Since WDs present much larger values of $D_{0.2}$ than BHB and hot subdwarf stars, $D_{0.2}$\ = 17.0 \AA\ can be used as a criterion to distinguish hot subdwarf stars from BHB stars and WDs in our sample. Panel (b) of Fig 2 shows the values of $D_{0.2}$ and $f_\mathrm{m}$ for the 1034 sample spectra selected by HELM (see Section 2 and Paper I). For a clear comparison with Panel (a), we plot a dashed horizontal line at $D_{0.2}$ = 17.0 \AA\ in Panel (b) as well, which denotes the gap between BHB stars and hot subdwarf stars in Panel (a). Our sample in Panel (b) shows an analogous distribution to the stars in Panel (a), with the notable exception that the obvious gap at $D_{\rm 0.2}\ =\ 17.0$ \AA\ is not seen in Panel (b).
This is due to the fact that the selected BHB stars in Panel (a) have temperatures in the range of $T_{\rm eff}\ =\ 7000 - 10\,000$ K and surface gravities in the range of $\log{g}\ =\ 2.5-4.0$ cm\,s$^{-2}$ (Xue et al. 2008), which are much lower than the temperatures and gravities of hot subdwarf stars (e.g., $T_\mathrm{eff}\geq$\ 20\,000 K and $\log\ g\geq 5.0 \ \mathrm{cm\,s}^{-2}$; Heber 2016), while the stars selected by HELM form a more evenly distributed mix of stars and the gap in the $D_{\rm 0.2}-f_{\rm m}$ diagram is filled up. Therefore, our sample contains not only low-temperature BHB stars, hot subdwarf stars and WDs, but also high-temperature BHB stars (e.g., 10\,000 - 20\,000 K) and B-MS stars, because these stars have temperatures similar to the cooler hot subdwarf stars (e.g., He-poor sdB stars). Thus, high-temperature BHB stars and B-MS stars fill the gap present in Panel (a) and make the distribution of our sample in the $D_{0.2}$ - $f_\mathrm{m}$ diagram continuous. Note that there are only a few stars in the upper right and middle areas of Panel (b), which are typically occupied by WDs in Panel (a). This demonstrates that only a few WDs are in our sample, i.e., HELM is very efficient at distinguishing hot subdwarf stars from WDs. Nevertheless, the criterion of $D_{0.2}$ = 17.0 \AA\ still excludes most low-temperature BHB stars and WDs, while preserving the hot subdwarf stars in our sample. After applying the selection criterion of $D_{0.2}<17.0$ \AA\, we obtained 578 hot subdwarf candidate spectra, among which 161 spectra present obvious Mg I triplet lines at 5170 $\mathrm{\AA}$ or Ca II triplet lines at 8650 $\mathrm{\AA}$. These lines are characteristic of cool stars, and such subdwarfs are double-lined composite-spectrum binary candidates, which will be studied in a forthcoming publication.
Therefore, our hot subdwarf sample selected by the $D_{0.2}$-$f_\mathrm{m}$ method consists of 417 spectra, for which the atmospheric parameters were determined by fitting their H Balmer and He lines. The $D_{0.2}$-$f_\mathrm{m}$ method is able to exclude most of the BHB stars and WDs in our sample. However, as the method is based on measuring the width and depth of the H$_\delta$ line, some hot subdwarfs with weak or no obvious H$_\delta$ lines (e.g., He-sdO, He-sdB) could also be removed from our sample. Furthermore, the values of $D_{0.2}$ and $f_\mathrm{m}$ are difficult to obtain for spectra of poor quality near the H$_\delta$ line. To assess the completeness of our sample, we used {\sc XTgrid} (N\'emeth et al. 2012; Vennes et al. 2011; see the next section for details) to make a spectral classification for the 456 spectra which were removed by the $D_{0.2}$-$f_\mathrm{m}$ method. With this procedure we could recover a further 48 hot subdwarf candidates from low quality spectra. The atmospheric parameters of these 48 spectra, together with the 417 spectra selected by the $D_{0.2}$-$f_\mathrm{m}$ method (i.e., 465 spectra in total), were determined by fitting their LAMOST optical spectra with synthetic spectra (see the next section). All objects with atmospheric parameters characteristic of hot subdwarfs were selected as hot subdwarf candidates. \subsection{Atmospheric parameters of hot subdwarf candidates} \begin{figure} \centering \includegraphics [width=100mm]{represent_spec_fig5.pdf} \caption{Four normalized spectra of hot subdwarf stars with different spectral types identified in this study. Best-fitting synthetic spectra are overplotted as red dashed lines on each spectrum. From top to bottom, a He-sdOB, an sdOB, an sdB and an sdO star are presented, respectively. Some H Balmer lines and important He I and He II lines are marked at the bottom of the figure.
} \end{figure} To determine the atmospheric parameters of the final hot subdwarf sample, we fitted NLTE models to the observations. We used the NLTE model atmosphere code {\sc Tlusty} (version 204; Hubeny \& Lanz 2017) to calculate models with H and He composition and corresponding synthetic spectra with {\sc Synspec} (version 49; Lanz et al. 2007). Details of the model calculations are described by N\'emeth et al. (2014). The spectral analysis was done by a steepest-descent iterative $\chi^2$ minimization procedure, which is implemented in the fitting program {\sc XTgrid} (N\'emeth et al. 2012; Vennes et al. 2011). This algorithm fits the entire optical range and attempts to reproduce the observed line profiles simultaneously. Final parameter errors are determined by departing from the best-fitting parameters in one dimension until the statistical limit for the 60\% confidence level of a single parameter is reached, separately for positive and negative error bars. To match the resolution of the LAMOST spectra, we convolved the synthetic spectra with a Gaussian profile at a constant resolution ($R\ =\ 1800$). Fig 3 shows the best-fitting models for four representative hot subdwarf spectra from our sample. In this figure, gray solid curves denote the normalized stellar spectra\footnote{The continuum for each spectrum was fitted automatically in {\sc XTgrid}.}, while red dashed curves represent the best-fitting synthetic spectra. The positions of the strongest H Balmer lines, He I and He II lines are marked in Fig 3 as well. The label `He' plus an integer for each spectrum is the helium class following the hot subdwarf classification scheme of Drilling et al. (2013), which is based on He line strength (see Sect 4 for details). The top spectrum is a He-sdOB star with dominant He I lines and weak H Balmer lines, while the second spectrum from the top is an sdOB star, which shows dominant H Balmer lines with both weak He I and He II lines.
The third spectrum from the top is a typical sdB star, which presents broad H Balmer lines with weak He I lines. Finally, the spectrum at the bottom of the figure is classified as an sdO star, because of its dominant H Balmer lines with a weak He II line at 4686 $\mathrm{\AA}$, while no He I lines can be detected. By employing {\sc XTgrid}, we obtained the atmospheric parameters (e.g., $T_\mathrm{eff}$, $\mathrm{log}\ g$ and He abundance) for the 465 spectra selected in Section 3. We classified stars with $T_\mathrm{eff} \ge$ 20\,000 K and $\mathrm{log}\ g\ge$ 5.0 as hot subdwarf stars, stars with $T_\mathrm{eff} <$ 20\,000 K and $\mathrm{log}\ g <$ 5.0 as hot BHB stars, and stars with $\mathrm{log}\ g <$ 4.5 as B-MS stars, following the classification scheme of N\'emeth et al. (2012). After this procedure, we selected 76 hot subdwarf candidates based on their atmospheric parameters. We checked our results against the Gaia Hertzsprung-Russell diagram (HRD) in the next section. \subsection{Cross matching our results with the HRD of Gaia DR2} \begin{figure} \centering \includegraphics [width=90mm]{sd_cmd1.png} \caption{The distribution of the 74 selected subdwarf candidates in the HRD of Gaia DR2. 56 stars (marked with blue triangles) are located in the subdwarf region, and 12 stars (denoted by yellow squares) are distributed along the MS region, while the positions of 6 stars (represented by red circles) correspond to the WD sequence. } \end{figure} The second data release (DR2) from Gaia (Gaia Collaboration et al. 2018a) provides high-precision astrometry and photometry for about 1.3 billion sources over the full sky. Based on this huge database, Gaia Collaboration et al. (2018b) built the Gaia DR2 HRD by using the most precise parallax and photometry (see Sect 2 in Gaia Collaboration et al. 2018b for their detailed selection filters).
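Matching our candidates to such a catalogue reduces to an angular-separation test against a fixed radius. A minimal sketch (the function names and the choice of the Vincenty formula are ours; a production cross-match would use a library such as astropy with proper catalogue indexing):

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions (input in degrees,
    output in arcseconds), using the Vincenty formula, which is
    numerically stable at both small and large separations."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2)
        - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = (math.sin(dec1) * math.sin(dec2)
           + math.cos(dec1) * math.cos(dec2) * math.cos(dra))
    return math.degrees(math.atan2(num, den)) * 3600.0

def within_match_radius(cand, src, radius_arcsec=5.0):
    """True if a candidate (ra, dec) pair and a catalogue source lie
    within the adopted cross-match radius."""
    return angular_sep_arcsec(*cand, *src) <= radius_arcsec
```

A simple plane-sky approximation would also work at five arcseconds, but the full formula avoids errors near the poles and at large declinations.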
To check our final results, we cross-matched the 76 hot subdwarf candidates with the database of Gaia DR2, and obtained 75 common objects within a radius of five arcseconds, among which one object had a negative parallax and was removed from our sample. Fig 4 shows the HRD from Gaia Collaboration et al. (2018b) together with the 74 stars in common with this study. Gray dots denote the objects from Gaia DR2 selected by the Gaia Collaboration (65\,921\,112 stars in total; see Fig 1 of Gaia Collaboration et al. 2018b), while blue triangles, yellow squares and red circles are the common stars in our sample. We found 56 stars (i.e., blue triangles) to be located in the hot subdwarf region of the HRD. Therefore, these 56 objects are finally identified as hot subdwarf stars in this study. On the other hand, we found 12 stars (i.e., yellow squares) distributed along the wide MS\footnote{Extinction is not considered in the HRD of Fig 1 in Gaia Collaboration et al. (2018b), therefore the MS is wider and cannot be distinguished very clearly from the RGB. However, the WD and hot subdwarf sequences are presented more clearly in this HRD.}, and 6 stars (i.e., red circles) along the WD sequence. \section{Results} Using the method described in Section 3, we identified 56 hot subdwarf stars. We followed the spectral classification scheme of Moehler et al. (1990) and Geier et al. (2017) to classify hot subdwarf stars: stars showing strong H Balmer lines with weak or no He I lines are classified as sdB stars; stars showing strong H Balmer lines accompanied by He II absorption are considered sdO stars; stars having H Balmer lines accompanied by both weak He I and He II lines are classified as sdOB stars; stars with dominant He I lines and weak H Balmer lines are He-sdOB stars, while stars with dominant He II lines are He-sdO stars. Based on this simple classification scheme, we identified 31 sdB stars, 11 sdO stars, 9 sdOB stars, 4 He-sdOB and 1 He-sdO stars. Drilling et al.
(2013) designed an MK (Morgan-Keenan)-like system of spectral classification for hot subdwarf stars, in which a spectral class, a luminosity class and a helium class are used to classify hot subdwarf stars. The spectral class is based on the MK standards of spectral classes O and B stars, and the luminosity class is based on the H and He line widths (see Sect 3 in Drilling et al. 2013). On the other hand, the helium class is described by an integer from 0 to 40 denoting the strengths of the He lines relative to the H Balmer lines, and it is roughly equal to the following function of the relative line depths: \begin{equation} 20\ \frac{\mathrm{HeI}\ \lambda4471+\mathrm{HeII}\ \lambda4541} {\mathrm{H}_{\gamma}-0.83\ \mathrm{HeII}\ \lambda4541} \end{equation} for helium classes 0-20, and \begin{equation} 40-20\ \frac{\mathrm{H}_{\gamma}-0.83\ \mathrm{HeII}\ \lambda4541} {\mathrm{HeI}\ \lambda4471+\mathrm{HeII}\ \lambda4541} \end{equation} for helium classes 20-40. We also assigned this helium class to our hot subdwarf stars (see Table 1). The atmospheric parameters of the 56 identified hot subdwarf stars, together with the information on the 12 MS stars and 6 WDs, are listed in Table 1. The atmospheric parameters of the MS stars and WDs are not presented. In columns 1-11 of Table 1, we present the LAMOST designation, right ascension, declination, effective temperature, surface gravity and He abundance obtained in this study, spectral classification type, SNR in the \textit{u} band, apparent magnitudes in the \textit{u} and \textit{g} bands of SDSS, and the apparent magnitude in the \textit{G} band of Gaia DR2, respectively. We also cross-matched our hot subdwarf stars with the hot subdwarf lists of Geier et al. (2017) and N\'emeth et al. (2012). In Table 1, the hot subdwarf stars in common with Geier et al. (2017) are labeled by $^{*}$, and the hot subdwarf stars in common with N\'emeth et al. (2012) are marked by $^{\dagger}$.
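The helium classes quoted in Table 1 follow Equations (3) and (4). As an illustration (the function name is ours, and the switch between the two formulae at a depth ratio of unity, where both give class 20, is our reading of the scheme):

```python
def helium_class(he1_4471, he2_4541, h_gamma):
    """Helium class of Drilling et al. (2013) from relative line depths,
    following Eqs. (3) and (4). The two expressions agree at class 20,
    where (HeI 4471 + HeII 4541) equals (Hgamma - 0.83 * HeII 4541)."""
    he = he1_4471 + he2_4541
    h = h_gamma - 0.83 * he2_4541
    if he <= h:                      # helium classes 0-20, Eq. (3)
        return 20.0 * he / h
    return 40.0 - 20.0 * h / he      # helium classes 20-40, Eq. (4)
```

A He-poor sdB with shallow He lines thus lands near class 0, while a He-dominated spectrum with a vanishing H$_\gamma$ depth approaches class 40.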
\begin{table*} \scriptsize \begin{minipage}{180mm} \caption{Information for the 74 stars analyzed in this study. From left to right, the table gives the LAMOST designation of the objects, right ascension, declination, effective temperature, gravity, helium abundance, spectral classification type, SNR in the \textit{u} band, apparent magnitudes in the \textit{u} and \textit{g} bands from SDSS and the apparent magnitude in the \textit{G} band from Gaia DR2, respectively. } \end{minipage}\\ \centering \begin{tabularx}{18.0cm}{lllcccccccccccX} \hline\noalign{\smallskip} Designation$^ a$ & ra$^ b$ & dec & $T_\mathrm{eff}$ & $\mathrm{log}\ g$ & $\mathrm{log}(n\mathrm{He}/n\mathrm{H})^c$ &Sptype & SNR &uSDSS &gSDSS &G\ GaiaDR2 & \\ LAMOST & LAMOST & LAMOST&$(K)$&($\mathrm{cm\ s^{-2}}$)& & &\textit{u}-band &(mag) &(mag) &(mag) &\\ \hline\noalign{\smallskip} J002124.79+402857.1 & 5.3532989 & 40.482537 & 25850$\pm$\ 580 & 5.42$\pm$0.11 & -2.57$\pm$0.18 & sdB He4 & 18.7 & \ \ - & 15.19 & 15.51 & \\ J002355.23+420905.5$^{*}$ & 5.9801396 & 42.151544 & 30150$\pm$\ 280 & 5.47$\pm$0.06 & -2.31$\pm$0.14 & sdB He6 & 18.6 & \ \ - & 15.46 & 15.79 & \\ J003627.19+271000.7 & 9.113308 & 27.166863 & \ \ - & \ \ - & \ \ - & MS & 45.0 & 14.93 & 14.67 & 14.64 & \\ J003801.72+343156.2 & 9.5071771 & 34.53228 & 40850$\pm$\ 610 & 5.49$\pm$0.10 & -0.23$\pm$0.09 & He-sdOB He33 & 17.9 & \ \ - & 13.66 & 13.90 & \\ J004949.26+352200.9$^{*}$ & 12.455266 & 35.366938 & 34960$\pm$\ 690 & 5.83$\pm$0.12 & -1.49$\pm$0.10 & sdOB He13 & 25.3 & \ \ - & 14.54 & 14.82 & \\ J010448.81+362742.4$^{*}$ & 16.203409 & 36.461784 & 32260$\pm$\ 60 & 5.74$\pm$0.02 & -1.63$\pm$0.03 & sdOB He11 & 90.6 & 12.55 & 12.95 & 12.40 & \\ J010945.73+374538.5$^{*}$ & 17.440552 & 37.760704 & 29980$\pm$\ 100 & 5.49$\pm$0.03 & -3.54$\pm$0.26 & sdB He2 & 25.6 & 13.96 & 14.61 & 13.87 & \\ J011857.19-002545.5$^{*}$ & 19.738333 & -0.429333 & 29060$\pm$\ 140 & 5.48$\pm$0.04 & -3.16$\pm$0.25 & sdB He2 & 15.7 & 14.49 & 14.60 & 14.82 & \\
J013134.51+323723.7 & 22.893792 & 32.623252 & 60390$\pm$\ 720 & 5.48$\pm$0.05 & -1.40$\pm$0.10 & sdO He8 & 10.9 & \ \ - & 15.00 & 15.30 & \\ J014710.62+303213.2 & 26.794254 & 30.537002 & 22110$\pm$\ 210 & 5.00$\pm$0.07 & -2.05$\pm$0.12 & sdB He6 & 18.8 & \ \ - & 14.10 & 14.35 & \\ J015054.28+310746.7 & 27.72618 & 31.129651 & 28540$\pm$\ 180 & 5.70$\pm$0.04 & -1.69$\pm$0.05 & sdB He10 & 16.9 & \ \ - & 13.97 & 14.32 & \\ J020932.45+430712.5$^{*}$ & 32.385219 & 43.12014 & 27580$\pm$\ 500 & 5.42$\pm$0.03 & -2.73$\pm$0.16 & sdB He5 & 11.8 & 14.42 & 14.86 & 14.34 & \\ J022517.07+031218.2 & 36.3211422 & 3.2050785 & \ \ - & \ \ - & \ \ - & WD & 15.1 & 16.24 & 16.70 & 16.95 & \\ J023551.35+011845.1 & 38.963972 & 1.312544 & \ \ - & \ \ - & \ \ - & WD & 10.4 & 16.97 & 16.41 & 16.17 & \\ J030025.22+003224.3 & 45.10512 & 0.54009 & \ \ - & \ \ - & \ \ - & MS & 13.1 & 23.89 & 21.76 & 20.36 & \\ J031756.92+322950.4 & 49.487181 & 32.497341 & 33860$\pm$\ 430 & 6.07$\pm$0.15 & -1.62$\pm$0.12 & sdB He13 & 15.9 & \ \ - & 15.58 & 15.72 & \\ J035926.96+270508.6 & 59.862336 & 27.08573 & 35160$\pm$\ 380 & 5.51$\pm$0.04 & -2.74$\pm$0.35 & sdOB He2 & 14.0 & \ \ - & 14.97 & 15.10 & \\ J040613.24+465133.6 & 61.555205 & 46.859349 & \ \ - & \ \ - & \ \ - & MS & 15.2 & 14.77 & \ \ - & 14.59 & \\ J051425.36+332344.3 & 78.605685 & 33.395662 & \ \ - & \ \ - & \ \ - & MS & 10.4 & \ \ - & 15.04 & 13.38 & \\ J053656.48+395518.7$^{*}$ & 84.235335 & 39.92188 & 38490$\pm$\ 350 & 5.54$\pm$0.07 & -0.65$\pm$0.07 & sdOB He16 & 14.7 & \ \ - & 13.67 & 13.92 & \\ J054447.48+272032.0 & 86.197835 & 27.342228 & \ \ - & \ \ - & \ \ - & WD & 10.3 & \ \ - & 17.08 & 16.93 & \\ J055151.32+220437.0 & 87.96384 & 22.076954 & 29610$\pm$\ 110 & 5.66$\pm$0.03 & -2.22$\pm$0.05 & sdB He5 & 24.4 & \ \ - & 12.85 & 13.17 & \\ J055227.67+155311.4 & 88.115311 & 15.886516 & \ \ - & \ \ - & \ \ - & WD & 23.1 & \ \ - & 12.52 & 13.03 & \\ J055348.85+325601.7 & 88.453581 & 32.93382 & 30490$\pm$\ 110 & 5.68$\pm$0.02 & -2.15$\pm$0.04 & sdB 
He5 & 44.0 & \ \ - & 14.02 & 14.17 & \\ J055411.88+220459.7 & 88.549534 & 22.083273 & \ \ - & \ \ - & \ \ - & MS & 10.2 & \ \ - & 13.28 & 13.17 & \\ J055926.92+271321.0 & 89.862203 & 27.222502 & \ \ - & \ \ - & \ \ - & MS & 10.9 & \ \ - & 19.20 & 17.99 & \\ J062704.91+345809.5$^{*}$ & 96.770481 & 34.969325 & 25080$\pm$\ 380 & 5.26$\pm$0.08 & -3.57$\pm$0.62 & sdB He1 & 10.8 & \ \ - & 14.19 & 14.43 & \\ J062836.51+325031.5 & 97.152155 & 32.842084 & 42740$\pm$\ 570 & 5.30$\pm$0.12 & 0.20$\pm$0.10 & He-sdOB He37 & 21.5 & \ \ - & 14.51 & 14.71 & \\ J063210.36+281041.7 & 98.043207 & 28.178276 & 45130$\pm$\ 330 & 5.51$\pm$0.12 & 0.33$\pm$0.06 & He-sdOB He40 & 17.7 & \ \ - & 14.82 & 15.10 & \\ J063526.61+323109.8 & 98.86089 & 32.519401 & \ \ - & \ \ - & \ \ - & MS & 11.6 & \ \ - & 15.95 & 15.15 & \\ J063952.15+515700.9 & 99.967315 & 51.950267 & 29720$\pm$\ 110 & 5.37$\pm$0.04 & -3.00$\pm$0.73 & sdB He1 & 35.8 & \ \ - & \ \ - & 11.96 & \\ J064618.36+292013.2$^{*}$ & 101.57652 & 29.337016 & 38740$\pm$\ 450 & 5.90$\pm$0.05 & $-4.00>$ & sdO He0 & 73.4 & \ \ - & \ \ - & 13.59 & \\ J064814.13+171056.2 & 102.05891 & 17.182305 & \ \ - & \ \ - & \ \ - & MS & 10.6 & \ \ - & 14.96 & 13.23 & \\ J065446.63+244926.8 & 103.69431 & 24.82412 & 58700$\pm$3600 & 5.17$\pm$0.05 & -2.04$\pm$0.10 & sdO He2 & 55.7 & \ \ - & 13.65 & 13.99 & \\ J065532.98+220349.6 & 103.88743 & 22.063784 & 45090$\pm$\ 890 & 5.62$\pm$0.05 & -1.71$\pm$0.08 & sdO He6 & 30.5 & \ \ - & \ \ - & 13.70 & \\ J065647.77+242958.8 & 104.19908 & 24.499685 & \ \ - & \ \ - & \ \ - & MS & 18.7 & \ \ - & \ \ - & 10.19 & \\ J065748.42+253251.1 & 104.45177 & 25.547541 & 44930$\pm$1160 & 6.48$\pm$0.10 & $-4.00>$ & sdB He19 & 16.1 & \ \ - & 15.89 & 16.05 & \\ J065816.71+094343.1 & 104.56965 & 9.7286415 & 36270$\pm$\ 320 & 5.03$\pm$0.03 & -1.70$\pm$0.08 & sdOB He11 & 17.1 & \ \ - & 13.27 & 13.59 & \\ J070619.19+242910.5 & 106.57996 & 24.486267 & 61820$\pm$6030 & 5.30$\pm$0.04 & -2.00$\pm$0.13 & sdO He4 & 15.0 & \ \ - & 15.77 & 15.81 & 
\\ J071202.40+113332.4 & 108.01 & 11.559014 & 24720$\pm$\ 180 & 5.10$\pm$0.04 & -2.63$\pm$0.07 & sdB He5 & 33.0 & \ \ - & \ \ - & 12.46 & \\ \hline\noalign{\smallskip} \end{tabularx}\\ {$^a$ Stars labeled with $\ast$ also appear in the hot subdwarf catalogue of Geier et al. (2017).}\\ {$^b$ Stars labeled with $\dagger$ also appear in N\'emeth et al. (2012).} \\ {$^c$ ``$>$'' denotes an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ for the object.}\\ \end{table*} \setcounter{table}{0} \begin{table*} \scriptsize \begin{minipage}{180mm} \caption{Continued} \end{minipage}\\ \centering \begin{tabularx}{18.0cm}{lllcccccccccccX} \hline\noalign{\smallskip} Designation$^ a$ & ra$^ b$ & dec & $T_\mathrm{eff}$ & $\mathrm{log}\ g$ & $\mathrm{log}(n\mathrm{He}/n\mathrm{H})^c$ &Sptype & SNR &uSDSS &gSDSS &G\ GaiaDR2 & \\ LAMOST & LAMOST & LAMOST&$(K)$&($\mathrm{cm\ s^{-2}}$)& & &\textit{u}-band &(mag) &(mag) &(mag) &\\ \hline\noalign{\smallskip} J072835.11+280239.1 & 112.1463 & 28.044199 & 86250$\pm$16170 & 5.77$\pm$0.16 & 0.04$\pm$0.12 & He-sdO He40 & 10.1 & \ \ - & 15.45 & 15.78 & \\ J073446.14+342120.8 & 113.69226 & 34.355805 & 25510$\pm$\ 680 & 5.15$\pm$0.07 & -2.42$\pm$0.09 & sdB He6 & 20.6 & \ \ - & 15.20 & 15.46 & \\ J073756.25+311646.5 & 114.48439 & 31.279597 & 30600$\pm$\ 130 & 5.45$\pm$0.03 & -2.47$\pm$0.12 & sdB He5 & 11.2 & \ \ - & \ \ - & 13.58 & \\ J074121.90+265425.8$^{*}$ & 115.34127 & 26.907168 & 29530$\pm$\ 460 & 5.30$\pm$0.07 & $-4.00>$ & sdB & 11.6 & \ \ - & 15.52 & 15.59 & \\ J074435.14+302108.7$^{*}$ & 116.14643$^{\dagger}$ & 30.352421 & 28980$\pm$\ 200 & 5.51$\pm$0.03 & -2.95$\pm$0.10 & sdB He3 & 30.6 & \ \ - & 14.39 & 14.74 & \\ J074855.82+304247.0$^{*}$ & 117.23262$^{\dagger}$ & 30.713059 & 30910$\pm$\ 110 & 5.80$\pm$0.03 & -2.02$\pm$0.04 & sdB He4 & 31.8 & \ \ - & 13.76 & 14.06 & \\ J075139.26+064604.8 & 117.91362 & 6.7680011 & 39850$\pm$\ 180 & 5.61$\pm$0.04 & -0.16$\pm$0.03 & He-sdOB He30 & 39.9 & \ \ - & 13.21 & 13.50 & \\
J075412.37+294957.0$^{*}$ & 118.55157 & 29.832504 & 30910$\pm$1230 & 5.77$\pm$0.28 & -1.87$\pm$0.32 & sdB He7 & 14.5 & \ \ - & 14.24 & 14.57 & \\ J075922.99+164601.6 & 119.845827 & 16.767125 & 37930$\pm$\ 920 & 5.25$\pm$0.05 & -2.89$\pm$0.27 & sdO He1 & 23.9 & 13.84 & 14.94 & 14.42 & \\ J080327.92+342140.6$^{*}$ & 120.86637 & 34.361297 & 38130$\pm$1350 & 5.58$\pm$0.11 & -3.28$\pm$0.60 & sdO He3 & 26.1 & \ \ - & 14.75 & 15.06 & \\ J080611.66+334425.6 & 121.5486 & 33.740449 & \ \ - & \ \ - & \ \ - & WD & 10.5 & \ \ - & 16.13 & 16.33 & \\ J080628.65+242057.4$^{*}$ & 121.61938 & 24.349293 & 27990$\pm$\ 350 & 5.48$\pm$0.04 & -2.50$\pm$0.14 & sdB He4 & 14.4 & \ \ - & 14.70 & 15.00 & \\ J080758.25+272434.3 & 121.99274 & 27.409538 & 38370$\pm$1190 & 5.58$\pm$0.08 & -3.41$\pm$0.66 & sdO He2 & 50.4 & \ \ - & 13.76 & 14.11 & \\ J084535.66+194150.2 & 131.3986$^{\dagger}$ & 19.697288 & 22070$\pm$\ 420 & 5.00$\pm$0.06 & -1.80$\pm$0.06 & sdB He7 & 18.4 & 13.13 & 13.49 & 13.26 & \\ J085649.36+170116.0$^{*}$ & 134.2056708$^{\dagger}$ & 17.021125 & 28810$\pm$\ 150 & 5.65$\pm$0.01 & -3.19$\pm$0.17 & sdB He2 & 56.3 & 14.67 & 12.73 & 12.81 & \\ J085851.11+021012.9$^{*}$ & 134.71299 & 2.1702667 & 48580$\pm$1150 & 5.61$\pm$0.07 & -1.83$\pm$0.09 & sdO He6 & 16.8 & \ \ - & 13.30 & 13.63 & \\ J093512.20+310959.3$^{*}$ & 143.8008625 & 31.166475 & 33870$\pm$\ 110 & 5.62$\pm$0.04 & -1.47$\pm$0.07 & sdOB He11 & 13.4 & 15.06 & 15.34 & 15.63 & \\ J112350.68+233645.8$^{*}$ & 170.961175 & 23.6127333 & 27560$\pm$\ 350 & 5.32$\pm$0.04 & -2.39$\pm$0.11 & sdB He5 & 15.8 & 13.76 & 13.90 & 14.15 & \\ J120624.36+570935.7$^{*}$ & 181.6015083$^{\dagger}$ & 57.1599222 & 34960$\pm$\ 230 & 5.70$\pm$0.04 & -1.81$\pm$0.06 & sdOB He9 & 18.4 & 14.28 & 14.60 & 14.85 & \\ J123652.66+501513.8$^{*}$ & 189.219429 & 50.253856 & 43250$\pm$2210 & 5.40$\pm$0.12 & -2.42$\pm$0.30 & sdO He2 & 22.7 & 13.96 & 14.38 & 14.65 & \\ J125229.60-030129.6$^{*}$ & 193.12335 & -3.0248924 & 30790$\pm$\ 480 & 5.59$\pm$0.09 & $-3.36>$ & sdB 
He0 & 13.4 & 15.46 & 15.71 & 15.65 & \\ J133640.95+515449.4 & 204.170631 & 51.913729 & 88450$\pm$21230 & 5.13$\pm$1.00 & -2.77$\pm$1.04 & sdOB - & 53.5 & 12.79 & 12.76 & 12.97 & \\ J135153.11-012946.6 & 207.9713167 & -1.4962778 & 31040$\pm$\ 560 & 6.03$\pm$0.12 & $-2.77>$ & sdB He0 & 11.2 & 15.31 & 15.45 & 15.66 & \\ J141736.40-043429.0 & 214.401671 & -4.574742 & 37750$\pm$\ 380 & 5.82$\pm$0.06 & -1.53$\pm$0.05 & sdOB He12 & 24.6 & 13.52 & 13.96 & 13.71 & \\ J144052.82-030852.6$^{*}$ & 220.220106 & -3.147965 & 29320$\pm$\ 40 & 5.44$\pm$0.03 & -2.74$\pm$0.05 & sdB He0 & 45.2 & 13.60 & 14.02 & 13.82 & \\ J161200.65+514943.5$^{*}$ & 243.0027458 & 51.82875 & 45130$\pm$1610 & 5.09$\pm$0.13 & -3.31$\pm$0.29 & sdB He2 & 10.9 & 13.26 & 13.54 & 13.67 & \\ J164718.35+322832.9 & 251.826491 & 32.475813 & \ \ - & \ \ - & \ \ - & WD & 38.9 & 13.47 & 13.83 & 13.59 & \\ J171013.21+532646.0 & 257.555047 & 53.446121 & 28120$\pm$\ 340 & 5.83$\pm$0.03 & -2.42$\pm$0.12 & sdB He3 & 13.1 & 12.28 & 12.87 & 12.60 & \\ J171718.79+422609.2 & 259.32832 & 42.435913 & 55490$\pm$2130 & 5.78$\pm$0.03 & -3.01$\pm$0.29 & sdO He0 & 30.4 & 12.26 & 12.77 & 12.48 & \\ J175311.46+062541.5 & 268.2977592 & 6.4282084 & \ \ - & \ \ - & \ \ - & MS & 11.6 & 14.68 & 13.66 & 14.58 & \\ J192216.18+405757.4 & 290.567417 & 40.965972 & \ \ - & \ \ - & \ \ - & MS & 20.7 & 13.54 & \ \ - & 13.51 & \\ J192609.46+372008.1$^{*}$ & 291.539417 & 37.335611 & 31060$\pm$\ 240 & 5.97$\pm$0.04 & -1.65$\pm$0.04 & sdB He11 & 23.8 & 13.45 & \ \ - & 13.61 & \\ J213406.74+033415.4 & 323.528123 & 3.570953 & 40310$\pm$1390 & 6.12$\pm$0.12 & -1.60$\pm$0.18 & sdB - & 21.2 & 11.50 & 11.78 & 11.55 & \\ J223419.15+091620.5 & 338.57981 & 9.272378 & \ \ - & \ \ - & \ \ - & MS & 13.2 & 13.89 & 13.93 & 13.87 & \\ \hline\noalign{\smallskip} \end{tabularx}\\ \end{table*} \subsection{Comparison with other studies} Among the 56 hot subdwarf stars in our study, 25 stars have already been catalogued by Geier et al.
(2017), and 5 stars are listed in N\'emeth et al. (2012). To check the results presented in our study, we compared the atmospheric parameters obtained here with those from Geier et al. (2017) and N\'emeth et al. (2012) where available. Of the 25 hot subdwarf stars in common with Geier et al. (2017), the catalogue lists $T_\mathrm{eff}$ and $\mathrm{log}\ g$ for only 11 stars and He abundances for only 10 stars. The subplots from left to right of Panel (a) in Fig 5 present the comparison of $T_\mathrm{eff}$, $\mathrm{log}\ g$ and $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$, respectively. Both $T_\mathrm{eff}$ and $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ obtained in this study match well with the values from Geier et al. (2017). Although the comparison of $\mathrm{log}\ g$ in the middle subplot of Panel (a) shows a more dispersed distribution than the other two parameters, our results are still comparable with the values from the literature. We also have 5 hot subdwarf stars in common with N\'emeth et al. (2012), which are marked in Table 1. These stars are from the GALEX survey and have low-resolution spectra. As in Panel (a), both $T_\mathrm{eff}$ and $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ from this study match very well with the values from N\'emeth et al. (2012, see the left and right subplots in Panel (b)). However, most of the $\mathrm{log}\ g$ values obtained in this study are slightly larger than those from N\'emeth et al. (2012, see the middle subplot in Panel (b)). This could be due to the fact that the synthetic spectra used to fit the observed spectra in our study are calculated from atmospheric models with H and He composition only (N\'emeth et al. 2014), while the synthetic spectra used in N\'emeth et al. (2012) are calculated from atmospheric models that include not only H and He but also C, N and O.
Furthermore, the observed spectra in our sample (obtained in the LAMOST survey) are different from the spectra in N\'emeth et al. (2012, obtained in the GALEX survey), and the spectral qualities (e.g., SNR) also differ. Beyond these effects, the major reason for the differences in the surface gravity is the inclusion of H Stark broadening tables from Tremblay \& Bergeron (2009) directly in the model atmosphere calculation in {\sc Tlusty} version 204, unlike in version 200 used by N\'emeth et al. (2012). \begin{figure*} \centering \begin{minipage}[c]{0.8\textwidth} \includegraphics[width=140mm]{compare_with_geier17.pdf} \centerline{(a)} \end{minipage}\\ \centering \begin{minipage}[c]{0.8\textwidth} \includegraphics[width=140mm]{compare_with_nemeth12.pdf} \centerline{(b)} \end{minipage} \caption{Panel (a): Comparisons between the atmospheric parameters obtained in this study and the ones from Geier et al. (2017). Panel (b): Comparisons between the atmospheric parameters obtained in this study and the ones from N\'emeth et al. (2012). } \end{figure*} \subsection{Parameter diagrams} Fig 6 shows the distribution of all hot subdwarf stars from our study in the $T_{\rm eff}-\log\ {g}$ diagram. The thick solid line denotes the He main-sequence (He MS) from Paczy\'nski (1971), while the two dashed lines represent the zero-age HB (ZAHB) and terminal-age HB (TAHB) for hot subdwarf stars with [Fe/H] = -1.48 from Dorman et al. (1993). The thin solid, dot-dashed and dotted curves are the sdB evolution tracks from Han et al. (2002). From right to left, these sdB evolution tracks have masses of 0.5, 0.6 and 0.7\ $\mathrm{M}_{\odot}$, respectively. The thin solid curves correspond to an H-rich envelope mass of 0.0\ $\mathrm{M}_{\odot}$, the dot-dashed curves to 0.001\ $\mathrm{M}_{\odot}$, and the dotted curves to 0.005\ $\mathrm{M}_{\odot}$.
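Fig 6 distinguishes three He-abundance groups following the scheme of N\'emeth et al. (2012). As a minimal illustration (assuming $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ values like those listed in Table 1), the binning behind the marker scheme can be sketched as:

```python
# Bin stars into the three He-abundance groups used for the Fig 6 markers
# (scheme of Nemeth et al. 2012); input is log(nHe/nH) as listed in Table 1.

def he_group(log_nhe_nh):
    if log_nhe_nh <= -2.2:
        return "He-poor"        # filled circles in Fig 6
    elif log_nhe_nh < -1.0:
        return "intermediate"   # open triangles
    else:
        return "He-rich"        # open squares

# e.g. the sdB J055151.32+220437.0 with log(nHe/nH) = -2.22 falls in the
# first group
print(he_group(-2.22))  # -> He-poor
```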
\begin{figure} \centering \includegraphics [width=100mm]{teff_logg1.pdf} \caption{$T_\mathrm{eff}$-$\mathrm{log}\ g$ diagram for the 56 hot subdwarf stars identified in this study. Stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\leq-2.2$ are marked with filled circles, stars with $-2.2< \mathrm{log}(n\mathrm{He}/n\mathrm{H})<-1.0$ are represented by open triangles, while stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\geq -1.0$ are shown by open squares. The thick solid line denotes the He-MS from Paczy\'nski (1971), and the two dashed lines represent the ZAHB and TAHB from Dorman et al. (1993) with [Fe/H] = -1.48, while the thin solid, dot-dashed, and dotted curves represent the evolution tracks of hot subdwarf stars from Han et al. (2002). See text for the details on these evolution tracks.} \end{figure} We split our sample into three groups based on their He abundance following the scheme of N\'emeth et al. (2012). In Fig 6, filled circles denote hot subdwarf stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\leq-2.2$. Most of these stars are He-poor sdB stars, and they are located near $T_\mathrm{eff}$ = 29 000 K and $\mathrm{log}\ g$ = 5.5 cm\,s$^{-2}$. A few of the stars in this He abundance range show very high temperatures (e.g., $T_\mathrm{eff}\geq$50 000 K), which suggests that they have already finished their core helium burning stage and now evolve towards the WD cooling tracks. Open triangles in Fig 6 represent hot subdwarf stars with $-2.2< \mathrm{log}(n\mathrm{He}/n\mathrm{H})<-1.0$. Most of these stars are found near $T_\mathrm{eff}$ = 32 000 K and $\mathrm{log}\ g$ = 5.75 cm\,s$^{-2}$. These stars show higher gravities than the previous group, and their temperatures show a large dispersion. The third group contains stars with He abundances in the range $\mathrm{log}(n\mathrm{He}/n\mathrm{H})\geq-1.0$, which are denoted by open squares in Fig 6.
We found only five hot subdwarf stars in this He abundance range: four of them are classified as He-sdOB stars and one as a He-sdO star based on our classification scheme. \begin{figure} \centering \includegraphics [width=100mm]{teff_loghe1.pdf} \caption{$T_\mathrm{eff}$-$\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ diagram for the 56 hot subdwarf stars identified in this study. The red dashed line denotes the solar He abundance. The dotted and solid lines are the linear regression lines fitted by Edelmann et al. (2003), while the dot-dashed line is the best-fitting line for the He-poor sequence in N\'emeth et al. (2012). Diamonds denote the stars for which we only obtained an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ (see Table 1). } \end{figure} Fig 7 shows the $T_\mathrm{eff}$-$\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ diagram for our hot subdwarf stars. The solar He abundance is marked by a horizontal red dashed line. The diamonds represent the stars for which only an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ could be obtained. Analyzing spectra of hot subdwarf stars from the Hamburg Quasar Survey, Edelmann et al. (2003) found two He sequences (a He-rich sequence and a He-weak sequence), each showing a positive correlation between effective temperature and He abundance. The He-rich sequence of their sample follows the fitting formula: \begin{equation} \mathrm{log}(n\mathrm{He}/n\mathrm{H})=-3.53+1.35\left(\frac{T_\mathrm{eff}}{10^{4}K}-2.00\right), \end{equation} while the He-weak sequence in their study follows the formula: \begin{equation} \mathrm{log}(n\mathrm{He}/n\mathrm{H})=-4.79+1.26\left(\frac{T_\mathrm{eff}}{10^{4}K}-2.00\right). \end{equation} These two lines are shown by the dotted and the solid lines in Fig 7, respectively. We found results similar to those described by Edelmann et al. (2003): the two He sequences of hot subdwarf stars are also present in our sample.
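For reference, the two Edelmann et al. (2003) relations above, together with the alternative He-weak fit of N\'emeth et al. (2012) quoted below, can be evaluated with a few lines of code ($T_\mathrm{eff}$ in K):

```python
# He-sequence relations in the Teff - log(nHe/nH) plane: the He-rich and
# He-weak fits of Edelmann et al. (2003) and the alternative He-weak fit
# of Nemeth et al. (2012).  Teff is in Kelvin.

def he_rich_edelmann(teff):
    return -3.53 + 1.35 * (teff / 1e4 - 2.00)

def he_weak_edelmann(teff):
    return -4.79 + 1.26 * (teff / 1e4 - 2.00)

def he_weak_nemeth(teff):
    return -4.26 + 0.69 * (teff / 1e4 - 2.00)

for teff in (25000.0, 30000.0, 35000.0):
    print(teff, he_rich_edelmann(teff),
          he_weak_edelmann(teff), he_weak_nemeth(teff))
```

At $T_\mathrm{eff}=30\,000$ K, for instance, the two He-weak fits give $-3.53$ and $-3.57$ dex respectively, so the fits cross near that temperature while their slopes (1.26 versus 0.69) differ markedly.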
Moreover, the He-rich sequence in Fig 7 is fitted well by the line described in equation (5), which is from Edelmann et al. (2003). However, the He-weak sequence in our sample follows a different trend from the He-weak sequence of Edelmann et al. (2003). On the other hand, the He-weak sequence in our sample is consistent with the one presented in N\'emeth et al. (2012), who used another line to fit the He-weak sequence in their study: \begin{equation} \mathrm{log}(n\mathrm{He}/n\mathrm{H})=-4.26+0.69\left(\frac{T_\mathrm{eff}}{10^{4}K}-2.00\right). \end{equation} We also plot the linear regression of equation (7), denoted by a dot-dashed line in Fig 7; the trend of this line is consistent with our He-weak sequence. Furthermore, Edelmann et al. (2003) also found two less clear sequences of hot subdwarf stars in the $\mathrm{log}$ \textit{g}-$\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ plane. However, we did not find a similar result in our sample (see Fig 8). \begin{figure} \centering \includegraphics [width=100mm]{logg_loghe1.pdf} \caption{The $\log{g}-\log{(n{\rm He}/n{\rm H})}$ plane for the 56 hot subdwarf stars identified in this study; the red dashed line marks the solar He abundance for reference. Diamonds denote the stars for which we only obtained an upper limit of $\mathrm{log}(n\mathrm{He}/n\mathrm{H})$ (see Table 1).} \end{figure} \section{Discussion} The traditional method to search for hot subdwarf stars in large spectroscopic surveys is to make color cuts followed by visual inspection. This method requires homogeneous photometric information to obtain the colors of the stars (e.g., \textit{u-g} and \textit{g-r}; Geier et al. 2011). Therefore, the traditional method is not suitable for large spectral databases without supplementary photometric measurements, such as the spectral database of LAMOST.
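For contrast, such a color-based pre-selection can be sketched in a few lines. The numerical thresholds below are purely illustrative placeholders, not the actual cuts of Geier et al. (2011):

```python
# Illustrative color-cut pre-selection of hot-subdwarf candidates.
# Hot subdwarfs are blue, so candidates sit at small (negative) u-g and
# g-r; the thresholds here are placeholders, NOT the published cuts.

def is_candidate(u, g, r, u_g_max=-0.2, g_r_max=0.0):
    return (u - g) < u_g_max and (g - r) < g_r_max

stars = [
    ("blue star", 13.0, 13.4, 13.6),  # u-g = -0.4, g-r = -0.2 -> selected
    ("red star", 16.0, 14.8, 14.1),   # u-g = +1.2 -> rejected
]
for name, u, g, r in stars:
    print(name, is_candidate(u, g, r))
```

Any star lacking one of the magnitudes (as for many LAMOST targets) simply cannot be run through such a cut, which is the limitation the next paragraph addresses.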
The HELM algorithm, as described in Paper I and in this study, does not need color information to filter out spectra with certain spectral properties. This makes HELM a suitable method to screen large spectroscopic surveys for hot subdwarf stars, or any other objects with distinct spectral features. One may note that He-rich hot subdwarf stars are under-represented in our sample (e.g., only 5 stars with $\mathrm{log}(n\mathrm{He}/n\mathrm{H})>-1.0$, see Fig 7 in this paper); this could be due to the fact that the number of He-rich hot subdwarf stars in the training sample is small. Our training spectra were the hot subdwarfs from Luo et al. (2016), consisting of 77 sdB stars, 12 sdO stars, 10 He-sdB stars and 15 He-sdO stars. According to the classification scheme of Luo et al. (2016), both sdB and sdO stars are He-poor hot subdwarf stars with dominant H Balmer lines, while both He-sdB and He-sdO stars are He-rich stars with dominant He I or He II lines. That is, there are many more hot subdwarf stars with dominant H Balmer lines (He-poor stars) than stars with dominant He lines (He-rich stars) in our training sample, i.e., 77 versus 25. In addition, we did not separate these different types of subdwarf stars during the experiments. Instead, we trained HELM with all the sample spectra together; thus the larger the number of stars of a particular type in the training sample, the greater the precision with which this stellar type may be identified in the science sample. These factors could account for the lack of He-rich hot subdwarf stars in our results. The quantity and quality of the training spectra are both very important factors in the HELM algorithm and have a direct influence on the results (Tang et al. 2015). Before we started this work, only 166 hot subdwarf stars (including 122 single-lined stars and 44 double-lined stars) with LAMOST spectra had been published in Luo et al. (2016).
Therefore, the number of hot subdwarf stars in our training spectra is limited. Moreover, among the 122 single-lined hot subdwarf stars, 8 stars are classified as BHB stars in Luo et al. (2016), and only about 50 have an SNR larger than 10. As a result, although the initial candidates selected by the HELM algorithm contain more than 7000 spectra, nearly 6000 of these spectra have a $u$-band SNR below 10, too poor a quality for a follow-up study. These spectra have been discarded from our analysis, as mentioned in Section 3. With these considerations, the total number of hot subdwarfs in the LAMOST target list is likely much higher. Having used machine learning tools to search for hot subdwarf stars in LAMOST, we can outline some future improvements that will be required for a better efficiency and accuracy of the method. For example, we plan to add the standard hot subdwarf stars listed in Drilling et al. (2013) to our training sample, since it provides a detailed classification for all types of hot subdwarf stars, which will be quite useful for classifying hot subdwarf stars with the HELM algorithm. We also plan to cross-match the LAMOST database with the newest hot subdwarf catalogue (e.g., Geier et al. 2017), which will allow us to add many high-quality hot subdwarf spectra to our training sample. From these improvements we expect a large number of new subdwarfs to be uncovered in the LAMOST survey in the near future. This work is already under way and will make important contributions to the study of the formation and evolution of hot subdwarf stars. \section{Summary} We have applied the HELM algorithm to search for hot subdwarf stars in LAMOST DR1. 56 hot subdwarf stars were identified among 465 candidates with single-lined spectra, and their atmospheric parameters have been obtained by fitting the profiles of the H Balmer and He lines with synthetic spectra calculated from NLTE model atmospheres.
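The parameter-determination step can be caricatured as a grid search that picks the synthetic spectrum minimizing chi-square against the observed line profiles; this toy sketch uses invented flux arrays, not the actual NLTE fitting procedure:

```python
# Toy chi-square grid search for (Teff, log g): choose the synthetic
# spectrum that best matches an observed line profile.  The real analysis
# fits NLTE model spectra; the "model" fluxes below are invented numbers.

def chi_square(obs, model):
    return sum((o - m) ** 2 for o, m in zip(obs, model))

def best_fit(obs, model_grid):
    """model_grid maps (teff, logg) -> list of model fluxes."""
    return min(model_grid, key=lambda p: chi_square(obs, model_grid[p]))

grid = {
    (28000, 5.5): [1.0, 0.60, 0.40, 0.60, 1.0],
    (32000, 5.8): [1.0, 0.70, 0.50, 0.70, 1.0],
}
observed = [1.0, 0.69, 0.52, 0.71, 1.0]
print(best_fit(observed, grid))  # -> (32000, 5.8)
```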
In total, 31 sdB stars, 11 sdO stars, 9 sdOB stars, 4 He-sdOB stars and 1 He-sdO star were found in our study. These stars confirm the two He sequences of hot subdwarf stars in the $T_\mathrm{eff}$-$\log{(n{\rm He}/n{\rm H})}$ diagram, which were first found by Edelmann et al. (2003). Our study has shown the strength of the HELM algorithm in filtering out targets with specific spectral properties from large sets of spectroscopic data directly, without the need for any photometric observations or pre-selection. Though the total number of hot subdwarf stars identified may seem low compared to the sample size, this is mainly due to the limited quantity and quality of the training spectra. We expect that many more hot subdwarf stars will be found in the LAMOST database using machine learning methods in the future, once our experiences are implemented in the algorithm. We used the HELM algorithm for the first time to search for hot subdwarf stars in a large spectroscopic survey, and the results presented in our study demonstrate that this method could be applied to search for other types of objects with distinct features in their spectra or images. \begin{ack} We thank the referee, A. E. Lynas-Gray, for his valuable suggestions and comments, which greatly improved the manuscript. This work was supported by the National Natural Science Foundation of China Grant Nos. 11390371, 11503016, 11873037, 11603012 and U1731111, Natural Science Foundation of Hunan province Grant No. 2017JJ3283, the Youth Fund project of Hunan Provincial Education Department Grant No. 15B214, the Astronomical Big Data Joint Research Center, co-founded by the National Astronomical Observatories, Chinese Academy of Sciences and the Alibaba Cloud, Young Scholars Program of Shandong University, Weihai 2016WHWLJH09, Natural Science Foundation of Shandong Province ZR2015AQ011, and China post-doctoral Science Foundation 2015M571124. This research has used the services of \mbox{www.Astroserver.org} under reference W00QEL. P.N.
acknowledges support from the Grant Agency of the Czech Republic (GA\v{C}R 18-20083S). The LAMOST Fellowship is supported by Special Funding for Advanced Users, budgeted and administered by the Center for Astronomical Mega-Science, Chinese Academy of Sciences (CAMS). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. \end{ack}
\section{Introduction} Ever since the fundamental work of S. Novikov and D. Quillen \cite{novikov},\cite{Q} the theory of formal groups is firmly rooted in stable homotopy theory. In particular, the simple geometric structure of the moduli space of formal groups has been a constant source of inspiration. This moduli space is stratified according to the height of the formal group. For many spaces $X$, ${\mathrm{MU}}_*(X)$ can canonically be considered as a flat sheaf on the moduli space and the stratification defines a resolution of ${\mathrm{MU}}_*(X)$ (the Cousin-complex) which is well-known to be the chromatic resolution of ${\mathrm{MU}}_*(X)$ and which is a central tool in the actual computation of the stable homotopy of $X$.\\ In fact, much deeper homotopy theoretic results have been suggested by this point of view and we mention two of them. All thick subcategories of the derived category of sheaves on the moduli space are rather easily determined by using the above stratification. This simple structure persists to determine all the thick subcategories of the category of finite spectra, see \cite{R2}, Theorem 3.4.3. Similarly, any coherent sheaf on the moduli space can be reconstructed from its restriction to the various strata (corresponding roughly to ${\mathrm{K}}(n)$- localisation in stable homotopy). Again, this result persists to homotopy theory as the chromatic convergence theorem, \cite{R2}, Theorem 7.5.7.\\ In conclusion, the derived category of sheaves on the moduli space of formal groups has turned out to be an excellent algebraic approximation to the homotopy category of (finite) spectra.\\ It may thus seem a little surprising that the central notion of a stack of formal groups has not yet been given a solid foundation, and the chief purpose of this paper is to do so. 
We hasten to point out to the knowledgeable reader that to this end there is something to do beyond just copying existing literature as already the following simple remark demonstrates. Defining (as usual) a formal group to be a group structure on the formal affine line, one is guaranteed to {\em not} obtain a stack just because the formal affine line in general does admit non-trivial flat forms. We thus spend some effort in the construction of the stack of formal groups and the derivation of its basic properties. This may also be useful in the multiplicative ring spectrum project of P. Goerss and M. Hopkins, c.f. \cite{G}.\\ In fact, we start out more generally by making precise the relation between flat Hopf algebroids and a certain class of stacks. Roughly, the datum of a flat Hopf algebroid is equivalent to the datum of the stack with a specific presentation. Now, the category of comodules of the flat Hopf algebroid only depends on the stack. We will demonstrate the gain in conceptual clarity provided by this point of view by reconsidering the following remarkable recent result of M. Hovey and N. Strickland. For two Landweber exact ${\mathrm{BP}}_*$-algebras $R$ and $S$ of the same height the categories of comodules of the flat Hopf algebroids $(R,\Gamma_R:=R\otimes_{{\mathrm{BP}}_*}{\mathrm{BP}}_*{\mathrm{BP}}\otimes_{{\mathrm{BP}}_*} R)$ and $(S,\Gamma_S:=S\otimes_{{\mathrm{BP}}_*}{\mathrm{BP}}_*{\mathrm{BP}}\otimes_{{\mathrm{BP}}_*} S)$ are equivalent. As an immediate consequence one obtains the computationally important change-of-rings isomorphism $\mathrm{Ext}_{\Gamma_R}^*(R,R)\simeq\mathrm{Ext}_{\Gamma_S}^*(S,S)$ which had been established previously by G. Laures \cite{coco}, 4.3.3.\\ From our point of view, this result has the following simple explanation. Let ${\mathfrak{X}}$ be the stack associated with $({\mathrm{BP}}_*,{\mathrm{BP}}_*{\mathrm{BP}})$ and $f:\mathrm{Spec}\,(R)\longrightarrow{\mathfrak{X}}$ the canonical map. 
As we will explain, ${\mathfrak{X}}$ is closely related to the stack of formal groups and is thus stratified by closed substacks \[ {\mathfrak{X}}={\mathfrak{Z}}^0\supseteq{\mathfrak{Z}}^1\supseteq\ldots \,.\] We will show that the induced Hopf algebroid $(R,\Gamma_R)$ is simply a presentation of the {\em stack}-theoretic image of $f$ and that $R$ being Landweber exact of height $n$ implies that this image is ${\mathfrak{X}}-{\mathfrak{Z}}^{n+1}$. We conclude that $(R,\Gamma_R)$ and $(S,\Gamma_S)$ are presentations of {\em the same} stack which implies the result of \cite{H1} but more is true: The comodule categories under consideration are in fact equivalent as {\em tensor} abelian categories (\cite{H1} treats their structure of abelian categories only) and we easily generalise the above proof to apply to all the stacks ${\mathfrak{Z}}^{n}-{\mathfrak{Z}}^{n+k}$ (with $n\geq 1$ allowed).\\ Returning to the stack of formal groups, we show that the stack associated with $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}})$ is closely related to this stack. Note, however, that this requires an {\em a priori} construction of the stack of formal groups, the problem being the following. The objects of a stack associated with a flat Hopf algebroid are only {\em flat locally} given in terms of the Hopf algebroid and it is in general difficult to decide what additional objects the stack contains. Given the central role of the stack of formal groups in stable homotopy theory, we believe that it is important to have a genuinely geometric understanding of it rather than just as the stack associated to some Hopf algebroid, so we solve this problem here.\\ Finally, we point out that for many stacks appearing in algebraic topology it is surprisingly easy to compute their Picard groups. For example, the Picard group of the stack of formal groups is isomorphic to ${\mathbb{Z}}$, generated by the canonical line bundle. 
In keeping with the philosophy that the stack of formal groups provides a good algebraic approximation to stable homotopy theory one may try to use this in the current investigations of the Picard groups of various categories of spectra (e.g. \cite{invertible}) and we hope to return to this in the future.\\ We review the individual sections in more detail. In section \ref{prelim} we review the stack theoretic notions we will have to use in the following. In section \ref{stacksandhopf} we give the relation between flat Hopf algebroids and algebraic stacks. In section \ref{morphisms} we collect a number of technical results on algebraic stacks centring around the problem to relate the properties of a morphism between algebraic stacks with properties of the functors it induces on the categories of quasi-coherent sheaves. The main result is proved in section \ref{ringchange}. In the final section \ref{stackformalgroups} we construct the stack of formal groups and show that the algebraic stack associated with the flat Hopf algebroid $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}})$ is the stack of (one dimensional, commutative, connected, formally smooth) formal groups together with a trivialization of the canonical line bundle and explain its basic geometric properties.\\ To conclude the introduction we would like to acknowledge the influence of M. Hopkins on the present circle of ideas. We understand that he was the first to insist that numerous results on (comodules over) flat Hopf algebroids should be understood from a geometric, i.e. stack theoretic, point of view, c.f. \cite{hopkinslecture}.\\ \begin{acknowledgements} I would like to thank E. Pribble for making a copy of \cite{P} available to me, J. Heinloth, J. Hornbostel, M. Hovey, G. Kings and N. Strickland for useful conversation and the referee for suggesting substantial improvements of the exposition. 
\end{acknowledgements} \section{Preliminaries on algebraic stacks}\label{prelim} In this section we will recall those concepts from the theory of stacks which will be used in the sequel.\\ Fix an affine scheme $S$ and denote by $\mathrm{Aff}_S$ the category of affine $S$-schemes with some cardinality bound to make it small. We may write $\mathrm{Aff}$ for $\mathrm{Aff}_S$ if $S$ is understood. \begin{defn}\label{def1} A category fibred in groupoids (understood: over $\mathrm{Aff}$) is a category ${\mathfrak{X}}$ together with a functor $a:{\mathfrak{X}}\longrightarrow\mathrm{Aff}$ such that\\ i) ("existence of pull-backs") For every morphism $\phi:V\longrightarrow U$ in $\mathrm{Aff}$ and $x\in\mathrm{Ob}({\mathfrak{X}})$ with $a(x)=U$ there is a morphism $f:y\longrightarrow x$ with $a(f)=\phi$.\\ ii) ("uniqueness of pull-backs up to unique isomorphism") For every diagram in ${\mathfrak{X}}$ \[ \xymatrix{& z \ar[d]^h \\ y \ar[r]^f & x} \] lying via $a$ over a diagram \[ \xymatrix{& W \ar[d]^{\chi} \ar[dl]_{\psi}\\ V \ar[r]^{\phi} & U} \] in $\mathrm{Aff}$ there is a unique morphism $g:z\longrightarrow y$ in ${\mathfrak{X}}$ such that $f\circ g=h$ and $a(g)=\psi$. \end{defn} As an example, consider the category $\mathrm{Ell}$ of elliptic curves having objects $E/U$ consisting of an affine $S$-scheme $U$ and an elliptic curve $E$ over $U$. Morphisms in $\mathrm{Ell}$ are cartesian diagrams \begin{equation}\label{z1} \xymatrix{E' \ar[r] \ar[d] & E \ar[d] \\ U' \ar[r]^f & U,} \end{equation} equivalently isomorphisms of elliptic curves over $U'$ from $E'$ to $E\times_U U'$.
For an explicit account of $\mathrm{Aut}\,_{\mathrm{Ell}}(E/U)$ see \cite{strickell}, section 5.\\ There is a functor \[ a:\mathrm{Ell}\longrightarrow\mathrm{Aff}\] sending $E/U$ to $U$ and a morphism in $\mathrm{Ell}$ as in (\ref{z1}) to $f$.\\ Checking that $a$ makes $\mathrm{Ell}$ a category fibred in groupoids reveals that the main subtlety in Definition \ref{def1} lies in the non-uniqueness of cartesian products. A similar example can be given using vector bundles on topological spaces \cite{hollander}, Example B.2.\\ Let $a:{\mathfrak{X}}\longrightarrow\mathrm{Aff}$ be a category fibred in groupoids. For $U\in\mathrm{Ob}(\mathrm{Aff})$ the fibre category ${\mathfrak{X}}_U\subseteq{\mathfrak{X}}$ is defined as the subcategory having objects $x\in\mathrm{Ob}({\mathfrak{X}})$ with $a(x)=U$ and morphisms $f\in\mathrm{Mor}\;({\mathfrak{X}})$ with $a(f)=\mathrm{id}_U$. The category ${\mathfrak{X}}_U$ is a groupoid. Choosing a pull-back as in Definition \ref{def1}, i) for every $\phi:V\longrightarrow U$ in $\mathrm{Aff}$ one can define functors $\phi^*:{\mathfrak{X}}_U\longrightarrow{\mathfrak{X}}_V$ and, for composable $\phi,\psi\in\mathrm{Mor}\;(\mathrm{Aff})$, isomorphisms $\psi^*\circ\phi^*\simeq (\phi\circ\psi)^*$ satisfying a cocycle condition. Sometimes $\phi^*(x)$ will be denoted as $x|V$. This connects Definition \ref{def1} with the concept of fibred category as in \cite{SGA1}, VI as well as with the notion of lax/pseudo functor/presheaf on $\mathrm{Aff}$ with values in groupoids; see \cite{hollander} and \cite{vistoli} for more details.\\ Categories fibred in groupoids constitute a $2$-category in which $1$-morphisms from $a:{\mathfrak{X}}\longrightarrow\mathrm{Aff}$ to $b:{\mathfrak{Y}}\longrightarrow\mathrm{Aff}$ are functors $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ with $b\circ f=a$ (sic !) and $2$-morphisms are isomorphisms between $1$-morphisms. A $1$-morphism $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ is called a monomorphism (resp.
isomorphism) if for all $U\in\mathrm{Ob}(\mathrm{Aff})$ the induced functor $f_U:{\mathfrak{X}}_U\longrightarrow{\mathfrak{Y}}_U$ between fibre categories is fully faithful (resp. an equivalence of categories).\\ The next point is to explain what a sheaf, rather than a presheaf, of groupoids should be. This makes sense for any topology on $\mathrm{Aff}$ but we fix the $fpqc$ topology for definiteness: It is the Grothendieck topology on $\mathrm{Aff}$ generated by the pretopology which as covers of an $U\in\mathrm{Aff}$ has the finite families of flat morphisms $U_i\longrightarrow U$ in $\mathrm{Aff}$ such that $\coprod_i U_i\longrightarrow U$ is faithfully flat, c.f. \cite{vistoli}, 2.3. \begin{defn}\label{def2} A stack (understood: over $\mathrm{Aff}$ for the $fpqc$ topology) is a category fibred in groupoids ${\mathfrak{X}}$ such that\\ i) ("descent of morphisms") For $U\in\mathrm{Ob}(\mathrm{Aff})$ and $x,y\in\mathrm{Ob}({\mathfrak{X}}_U)$ the presheaf \[ \mathrm{Aff}/U\longrightarrow\mathrm{Sets}\, , \, (V\stackrel{\phi}{\longrightarrow}U)\mapsto \mathrm{Hom}_{{\mathfrak{X}}_V}(x|V,y|V) \] is a sheaf.\\ ii) ("glueing of objects") If $\{ U_i \stackrel{\phi_i}{\longrightarrow}U\}$ is a covering in $\mathrm{Aff}$, $x_i\in\mathrm{Ob}({\mathfrak{X}}_{U_i})$ and $f_{ji}:(x_i|U_i\times_U U_j)\stackrel{\sim}{\longrightarrow} (x_j|U_i\times_U U_j)$ are isomorphisms satisfying a cocycle condition then there are $x\in\mathrm{Ob}({\mathfrak{X}}_U)$ and isomorphisms $f_i:(x|U_i)\stackrel{\sim}{\longrightarrow}x_i$ such that $f_j|U_i\times_U U_j=f_{ji}\circ f_i|U_i\times_U U_j$. \end{defn} The category fibred in groupoids $\mathrm{Ell}$ is a stack: Condition i) of Definition \ref{def2} for $\mathrm{Ell}$ is a consequence of faithfully flat descent \cite{BLR}, 6.1, Theorem 6, and condition ii) relies on the fact that elliptic curves canonically admit ample line bundles, see \cite{vistoli}, 4.3.3.\\ \begin{defn}\label{def3} Let ${\mathfrak{X}}$ be a stack. 
A substack of ${\mathfrak{X}}$ is a strictly full subcategory ${\mathfrak{Y}}\subseteq{\mathfrak{X}}$ such that\\ i) For any $\phi:U\longrightarrow V$ in $\mathrm{Aff}$ one has $\phi^*(\mathrm{Ob}({\mathfrak{Y}}_V))\subseteq\mathrm{Ob}({\mathfrak{Y}}_U)$.\\ ii) If $\{ U_i\longrightarrow U\}$ is a covering in $\mathrm{Aff}$ and $x\in\mathrm{Ob}({\mathfrak{X}}_U)$ then we have $x\in\mathrm{Ob}({\mathfrak{Y}}_U)$ if and only if $x|U_i\in\mathrm{Ob}({\mathfrak{Y}}_{U_i})$ for all $i$. \end{defn} As an example, consider the stack $\overline{\mathrm{Ell}}$ of generalised elliptic curves in the sense of \cite{delignerap}. Then $\mathrm{Ell}\subseteq\overline{\mathrm{Ell}}$ is a substack: Since a generalised elliptic curve is an elliptic curve if and only if it is smooth, condition i) of Definition \ref{def3} holds because smoothness is stable under base change and condition ii) holds because smoothness is $fpqc$ local on the base.\\ \begin{defn}\label{def4} A $1$-morphism $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ of stacks is an epimorphism if for every $U\in\mathrm{Ob}(\mathrm{Aff})$ and $y\in\mathrm{Ob}({\mathfrak{Y}}_U)$ there exist a covering $\{ U_i\longrightarrow U\}$ in $\mathrm{Aff}$ and $x_i\in\mathrm{Ob}({\mathfrak{X}}_{U_i})$ such that $f_{U_i}(x_i)\simeq y|U_i$ for all $i$. \end{defn} A $1$-morphism of stacks is an isomorphism if and only if it is both a monomorphism and an epimorphism \cite{CA}, Corollaire 3.7.1. This fact can also be understood from a homotopy theoretic point of view \cite{hollander}, Corollary 8.16.\\ A fundamental insight is that many of the methods of algebraic geometry can be generalised to apply to a suitable class of stacks. In order to define this class, we first have to explain the concept of representable $1$-morphisms of stacks which in turn needs the notion of algebraic spaces:\\ Algebraic spaces are a generalisation of schemes.
The reader unfamiliar with them can, for the purpose of reading this paper, safely replace algebraic spaces by schemes throughout. We have to mention them anyway in order to conform to our main reference \cite{CA}. Algebraic spaces were invented by M. Artin and we decided not to try to give any short account of the main ideas underlying this masterpiece of algebraic geometry but rather refer the reader to \cite{artin69} for an introduction and to \cite{knutson} as the standard technical reference.\\ We can now proceed on our way towards defining algebraic stacks. \begin{defn}\label{def5} A $1$-morphism $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ of stacks is representable if for any $U\in\mathrm{Aff}$ with a $1$-morphism $U\longrightarrow{\mathfrak{Y}}$ the fibre product ${\mathfrak{X}}\times_{{\mathfrak{Y}}} U$ is an algebraic space. \end{defn} Here, we refer the reader to \cite{CA}, 3.3 for the notion of finite limit for stacks.\\ Now let $P$ be a suitable property of morphisms of algebraic spaces, e.g. being an open or closed immersion, being affine or being (faithfully) flat, see \cite{CA}, 3.10 for a more exhaustive list. We say that a representable $1$-morphism $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ of stacks has the property $P$ if for every $U\in\mathrm{Aff}$ with a $1$-morphism $g:U\longrightarrow{\mathfrak{Y}}$, forming the cartesian diagram \[ \xymatrix{ {\mathfrak{X}}\ar[r]^f& {\mathfrak{Y}}\\ {\mathfrak{X}}\times_{{\mathfrak{Y}}} U\ar[u]\ar[r]^(.6){f'}& U \ar[u] ,} \] the resulting morphism $f'$ between algebraic spaces has the property $P$.\\ As an example, let us check that the inclusion $\mathrm{Ell}\subseteq\overline{\mathrm{Ell}}$ is an open immersion: To give $U\in\mathrm{Aff}$ and a morphism $U\longrightarrow \overline{\mathrm{Ell}}$ is the same as to give a generalised elliptic curve $\pi:E\longrightarrow U$.
Then $\mathrm{Ell}\times_{\overline{\mathrm{Ell}}} U\longrightarrow U$ is the inclusion of the complement of the image under $\pi$ of the non-smooth locus of $\pi$ and hence is an open subscheme of $U$.\\ \begin{defn}\label{def6} A stack ${\mathfrak{X}}$ is algebraic if the diagonal $1$-morphism ${\mathfrak{X}}\longrightarrow{\mathfrak{X}}\times{\mathfrak{X}}$ is representable and affine and there is an affine scheme $U$ and a faithfully flat $1$-morphism $P: U\longrightarrow{\mathfrak{X}}$. \end{defn} See section \ref{rigidstax} for further discussion.\\ A convenient way of constructing stacks is by means of groupoid objects. Let $(X_0,X_1)$ be a groupoid object in $\mathrm{Aff}$, i.e. a Hopf algebroid, see section \ref{stacksandhopf}. Then $(X_0,X_1)$ determines a presheaf of groupoids on $\mathrm{Aff}$ and the corresponding category fibred in groupoids ${\mathfrak{X}}'$ is easily seen to satisfy condition i) of Definition \ref{def2} for being a stack but not, in general, condition ii). There is a canonical way to pass from ${\mathfrak{X}}'$ to a stack ${\mathfrak{X}}$ \cite{CA}, Lemme 3.2 which can also be interpreted as a fibrant replacement in a suitable model structure on presheaves of groupoids \cite{hollander}.\\ We provisionally define the stack of formal groups ${\mathfrak{X}}_{FG}$ to be the stack associated with the Hopf algebroid $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}}[u^{\pm 1}])$. Then ${\mathfrak{X}}_{FG,U}'$ is the groupoid of formal group laws over $U$ and their (not necessarily strict) isomorphisms. A priori, it is unclear what the fibre categories ${\mathfrak{X}}_{FG,U}$ are and in fact we will have to proceed differently in section \ref{stackformalgroups}: We first construct a stack ${\mathfrak{X}}_{FG}$ directly and then prove that it is the stack associated with $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}}[u^{\pm 1}])$. 
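\\ For orientation we recall, without proof, the classical algebraic formulation which the next result translates into geometry: an ${\mathrm{MU}}_*$-algebra $R$ is Landweber exact if and only if for every prime $p$ the elements $v_0:=p,v_1,v_2,\ldots$ act regularly on $R$, i.e. multiplication \[ v_n:R/(v_0,\ldots,v_{n-1})R\longrightarrow R/(v_0,\ldots,v_{n-1})R \] is injective for all $n\geq 0$. Here the $v_n\in{\mathrm{MU}}_*$ denote lifts of the usual generators; the ideals $(v_0,\ldots,v_{n-1})$, and hence the condition, are independent of the choices made.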
\\ Note that there is a canonical $1$-morphism $\mathrm{Spec}\,({\mathrm{MU}}_*)\longrightarrow{\mathfrak{X}}_{FG}$. The following is a special case of Proposition \ref{landweberflat}. \begin{prop} An ${\mathrm{MU}}_*$-algebra $R$ is Landweber exact if and only if the composition $\mathrm{Spec}\,(R)\longrightarrow\mathrm{Spec}\,({\mathrm{MU}}_*)\longrightarrow{\mathfrak{X}}_{FG}$ is flat. \end{prop} \section{Algebraic stacks and flat Hopf algebroids}\label{stacksandhopf} In this section we explain the relation between flat Hopf algebroids and their categories of comodules and a certain class of stacks and their categories of quasi-coherent sheaves of modules. \subsection{The $2$-category of flat Hopf algebroids}\label{flatha} We refer to \cite{R1}, Appendix A for the notion of a (flat) Hopf algebroid. To give a Hopf algebroid $(A,\Gamma)$ is equivalent to giving $(X_0:=\mathrm{Spec}\,(A), X_1:=\mathrm{Spec}\,(\Gamma))$ as a groupoid in affine schemes \cite{CA}, 2.4.3 and we will formulate most results involving Hopf algebroids this way.\\ Recall that this means that $X_0$ and $X_1$ are affine schemes and that we are given morphisms $s,t:X_1\longrightarrow X_0$ (source and target), $\epsilon:X_0 \longrightarrow X_1$ (identity), $\delta:\cart{X_1}{s,X_0,t}{X_1}\longrightarrow X_1$ (composition) and $i:X_1\longrightarrow X_1$ (inverse) verifying suitable identities. The corresponding maps of rings are denoted $\eta_L,\eta_R$ (left and right unit), $\epsilon$ (augmentation), $\Delta$ (comultiplication) and $c$ (antipode).\\ The $2$-category of flat Hopf algebroids ${\cal H}$ is defined as follows. Objects are Hopf algebroids $(X_0,X_1)$ such that $s$ and $t$ are flat (and thus faithfully flat because they admit $\epsilon$ as a right inverse). A $1$-morphism of flat Hopf algebroids from $(X_0,X_1)$ to $(Y_0,Y_1)$ is a pair of morphisms of affine schemes $f_i:X_i\longrightarrow Y_i$ ($i=0,1$) commuting with all the structure.
The composition of $1$-morphisms is componentwise. Given two $1$-morphisms $(f_0,f_1),(g_0,g_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$, a $2$-morphism $c:(f_0,f_1)\longrightarrow (g_0,g_1)$ is a morphism of affine schemes $c:X_0\longrightarrow Y_1$ such that $sc=f_0, tc=g_0$ and the diagram \[ \xymatrix{ X_1\ar[r]^-{(g_1,cs)}\ar[d]_{(ct,f_1)} & Y_1{\times\atop s,Y_0,t}Y_1\ar[d]^{\delta} \\ Y_1{\times\atop s,Y_0,t}Y_1\ar[r]^-{\delta} & Y_1} \] commutes. For $(f_0,f_1)=(g_0,g_1)$ the identity $2$-morphism is given by $c:=\epsilon f_0$. Given two $2$-morphisms $\xymatrix{(f_0,f_1) \ar[r]^c & (g_0,g_1)\ar[r]^{c^{'}} & (h_0,h_1)}$ their composition is defined as \[ c^{'}\circ c:\xymatrix{X_0\ar[r]^-{(c^{'},c)} & Y_1{\times\atop s,Y_0,t}Y_1 \ar[r]^-{\delta} & Y_1}. \] One checks that the above definitions make ${\cal H}$ a $2$-category which is in fact clear because, except for the flatness of $s$ and $t$, they are merely a functorial way of stating the axioms of a groupoid, a functor and a natural transformation. For technical reasons we will sometimes consider Hopf algebroids for which $s$ and $t$ are not flat.\\ \subsection{The $2$-category of rigidified algebraic stacks}\label{rigidstax} From Definition \ref{def6} one sees that any $1$-morphism of algebraic stacks from an algebraic space to an algebraic stack is representable and affine, c.f. the proof of \cite{CA}, Corollaire 3.13. In particular, the condition in Definition \ref{def6} that $P$ be faithfully flat makes sense. By definition, every algebraic stack is quasi-compact, hence so is any $1$-morphism between algebraic stacks \cite{CA}, D\'efinition 4.16, Remarques 4.17. One can check that finite limits and colimits of algebraic stacks are again algebraic stacks.
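\\ As a basic example of Definition \ref{def6}, let $G$ be a flat affine group scheme over $S$ and let $BG$ denote the stack of $G$-torsors for the $fpqc$ topology. One checks that $BG$ is algebraic: its diagonal is representable and affine because $G$ is affine, and the canonical $1$-morphism \[ P:S\longrightarrow BG \] classifying the trivial torsor is a presentation because $G$ is faithfully flat over $S$. The fibre product $S\times_{BG}S$ is $G$ itself, so the groupoid in affine schemes one obtains from this presentation is $(S,G)$, i.e. the Hopf algebra of functions on $G$.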
If ${\mathfrak{U}}\stackrel{i}{\hookrightarrow}{\mathfrak{X}}$ is a quasi-compact open immersion of stacks and ${\mathfrak{X}}$ is algebraic then the stack ${\mathfrak{U}}$ is algebraic as one easily checks. In general, an open substack of an algebraic stack need not be algebraic, see the introduction of section \ref{ringchange}.\\ A morphism $P$ as in Definition \ref{def6} is called a presentation of ${\mathfrak{X}}$. As far as we are aware, the above definition of ``algebraic'' is due to P. Goerss \cite{G} and is certainly motivated by the equivalence given in subsection \ref{equivalence} below. We point out that the notion of ``algebraic stack'' well-established in algebraic geometry \cite{CA}, D\'efinition 4.1 is different from the above. For example, the stack associated with $({\mathrm{BP}}_*,{\mathrm{BP}}_*{\mathrm{BP}})$ in section \ref{ringchange} is algebraic in the above sense but not in the sense of algebraic geometry because its diagonal is not of finite type \cite{CA}, Lemme 4.2. Of course, in the following we will use the term ``algebraic stack'' in the sense defined above.\\ The $2$-category ${\cal S}$ of rigidified algebraic stacks is defined as follows. Objects are presentations $P: X_0\longrightarrow {\mathfrak{X}}$ as in Definition \ref{def6}. A $1$-morphism from $P: X_0\longrightarrow {\mathfrak{X}}$ to $Q: Y_0\longrightarrow {\mathfrak{Y}}$ is a pair consisting of $f_0:X_0\longrightarrow Y_0$ in $\mathrm{Aff}$ and a $1$-morphism of stacks $f:{\mathfrak{X}}\longrightarrow {\mathfrak{Y}}$ such that the diagram \[ \xymatrix{ X_0\ar[r]^{f_0}\ar[d]_P & Y_0\ar[d]^Q \\ {\mathfrak{X}} \ar[r]_f &{\mathfrak{Y}} }\] is $2$-commutative. The composition of $1$-morphisms is componentwise.
Given $1$-morphisms $(f_0,f), (g_0,g):(X_0\longrightarrow {\mathfrak{X}})\longrightarrow (Y_0\longrightarrow {\mathfrak{Y}})$ a $2$-morphism in ${\cal S}$ from $(f_0,f)$ to $(g_0,g)$ is by definition a $2$-morphism from $f$ to $g$ in the $2$-category of stacks \cite{CA}, 3. \hspace{.5 cm} \subsection{The equivalence of ${\cal H}$ and ${\cal S}$}\label{equivalence} We now establish an equivalence of $2$-categories between ${\cal H}$ and ${\cal S}$. We define a functor $K:{\cal S}\longrightarrow {\cal H}$ as follows.\\ \[ K(\xymatrix{X_0\ar[r]^P & {\mathfrak{X}}}):=(X_0, X_1:=\cart{X_0}{P,{\mathfrak{X}},P}{X_0}) \] has a canonical structure of groupoid \cite{CA}, Proposition 3.8, $X_1$ is affine because $X_0$ is affine and $P$ is representable and affine and the projections $s,t:\xymatrix{X_1\ar@<.5ex>[r]\ar@<-.5ex>[r] & X_0}$ are flat because $P$ is. Thus $(X_0,X_1)$ is a flat Hopf algebroid. If $(f_0,f): \xymatrix{(X_0\ar[r]^P & {\mathfrak{X}})}\longrightarrow \xymatrix{(Y_0\ar[r]^Q & {\mathfrak{Y}})}$ is a $1$-morphism in ${\cal S}$ we define $K((f_0,f)):=(f_0,f_0\times f_0)$. If we have $1$-morphisms $(f_0,f),(g_0,g): \xymatrix{(X_0\ar[r]^P & {\mathfrak{X}})}\longrightarrow \xymatrix{(Y_0\ar[r]^Q & {\mathfrak{Y}})}$ in ${\cal S}$ and a $2$-morphism $(f_0,f)\longrightarrow (g_0,g)$ then we have by definition a $2$-morphism $\xymatrix{f \ar[r]^{\Theta} & g}: {\mathfrak{X}}\longrightarrow {\mathfrak{Y}}$. In particular, we have $\Theta_{X_0}:\mathrm{Ob}({\mathfrak{X}}_{X_0})\longrightarrow\mathrm{Mor}\;({\mathfrak{Y}}_{X_0})=\mathrm{Hom}_{\mathrm{Aff}}(X_0,Y_1)$ and we define $K(\Theta):=\Theta_{X_0}(\mathrm{id}_{X_0})$. One checks that $K:{\cal S}\longrightarrow {\cal H}$ is a $2$-functor.\\ We define a $2$-functor $G:{\cal H}\longrightarrow {\cal S}$ as follows.
On objects we put $G((X_0,X_1)):=(X_0\stackrel{can}{\longrightarrow} {\mathfrak{X}}:=[ \xymatrix{ X_1\ar@<.5ex>[r]\ar@<-.5ex>[r] & X_0} ] )$, the stack associated with the groupoid $(X_0,X_1)$ together with its canonical presentation \cite{CA}, 3.4.3; identify the $X_i$ with the flat sheaves they represent to consider them as ``S-espaces'', see also subsection \ref{epimonic}. Then $G((X_0,X_1))$ is a rigidified algebraic stack: Saying that the diagonal of ${\mathfrak{X}}$ is representable and affine means that for any algebraic space $X$ and morphisms $x_1,x_2:X\longrightarrow{\mathfrak{X}}$ the sheaf $\underline{\mathrm{Isom}}_X(x_1,x_2)$ on $X$ is representable by an affine $X$-scheme. This problem is local in the {\em fpqc} topology on $X$ because affine morphisms satisfy effective descent in the {\em fpqc} topology \cite{SGA1}, expos\'e VIII, Th\'eor\`eme 2.1. So we can assume that the $x_i$ lift to $X_0$ and the assertion follows because $(s,t): X_1\longrightarrow\cart{X_0}{S}{X_0}$ is affine.
A similar argument shows that $P:X_0\longrightarrow{\mathfrak{X}}$ is representable and faithfully flat because $s$ and $t$ are faithfully flat.\\ Given a $1$-morphism $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ in ${\cal H}$ there is a unique $1$-morphism $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ making \[ \xymatrix{ X_1\ar@<-.5ex>[r]\ar@<.5ex>[r]\ar[d]_{f_1} & X_0 \ar[r]^-P\ar[d]_{f_0} & {\mathfrak{X}} \ar@{.>}[d]^f \\ Y_1\ar@<-.5ex>[r]\ar@<.5ex>[r] & Y_0 \ar[r]^-Q & {\mathfrak{Y}} } \] $2$-commutative \cite{CA}, proof of Proposition 4.18 and we define $G((f_0,f_1)):=f$.\\ Given a $2$-morphism $c:X_0\longrightarrow Y_1$ from the $1$-morphism $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ to the $1$-morphism $(g_0,g_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ in ${\cal H}$ we have a diagram \[ \xymatrix{ X_1\ar@<-.5ex>[r]\ar@<.5ex>[r]\ar@<-.5ex>[d]_{f_1}\ar@<.5ex>[d]^{g_1} & X_0 \ar[r]^-P\ar@<-.5ex>[d]_{f_0}\ar@<.5ex>[d]^{g_0} & {\mathfrak{X}} \ar@/^/[d]^-g\ar@/_/[d]_-f \\ Y_1\ar@<-.5ex>[r]\ar@<.5ex>[r] & Y_0 \ar[r]^-Q & {\mathfrak{Y}} } \] and need to construct a $2$-morphism $\Theta=G(c):f\longrightarrow g$ in the $2$-category of stacks. We will do this in some detail because we omit numerous similar arguments.\\ Fix $U\in\mathrm{Aff}$, $x\in\mathrm{Ob}({\mathfrak{X}}_U)$ and a representation of $x$ as in \cite{CA}, proof of Lemme 3.2 \[ (U^{'}\longrightarrow U,\, x^{'}:U^{'}\longrightarrow X_0,\, U^{''}:=\xymatrix{ U^{'} \ar@{}[r]|{\times}_-U & U^{'}}\stackrel{\sigma}{\longrightarrow} X_1), \] i.e. $U^{'}\longrightarrow U$ is a cover in $\mathrm{Aff}$, $x^{'}\in X_0(U^{'})=\mathrm{Hom}_{\mathrm{Aff}}(U^{'}, X_0)$ and $\sigma$ is a descent datum for $x^{'}$ with respect to the cover $U^{'}\longrightarrow U$.
Hence, denoting by $\pi_1,\pi_2:U^{''}\longrightarrow U^{'}$ and $\pi: U^{'}\longrightarrow U$ the projections, we have $\sigma:\pi_1^*x^{'}\stackrel{\sim}{\longrightarrow}\pi_2^*x^{'}$ in ${\mathfrak{X}}_{U^{''}}$, i.e. $x^{'}\pi_1=s\sigma$ and $x^{'}\pi_2=t\sigma$. Furthermore, $\sigma$ satisfies a cocycle condition which we do not spell out.\\ We have to construct a morphism \[ \Theta_x:f(x)\longrightarrow g(x) \mbox{ in }{\mathfrak{Y}}_U\] which we do by descent from $U^{'}$ as follows. We have a morphism \[ \pi^*(f(x)) =f(\pi^*(x)=x^{'})=f_0x^{'}\stackrel{\phi^{'}}{\longrightarrow}\pi^*(g(x))=g_0x^{'} \mbox{ in }{\mathfrak{Y}}_{U^{'}}\] given by $\phi^{'}:=cx^{'}:U^{'}\longrightarrow Y_1$. We also have a diagram \[ \xymatrix{ \pi_1^*(\pi^*(f(x)))=f_0x^{'}\pi_1 \ar[r]^-{\pi_1^*(\phi^{'})} \ar[d]_-{\sigma_f} & \pi_1^*(\pi^*(g(x)))=g_0x^{'}\pi_1 \ar[d]^{\sigma_g} \\ \pi_2^*(\pi^*(f(x)))=f_0x^{'}\pi_2 \ar[r]^-{\pi_2^*(\phi^{'})} & \pi_2^*(\pi^*(g(x)))=g_0x^{'}\pi_2} \] in ${\mathfrak{Y}}_{U^{''}}$ where $\sigma_f$ and $\sigma_g$ are descent isomorphisms for $f(x^{'})$ and $g(x^{'})$ given by $\sigma_f=f_1\sigma$ and $\sigma_g=g_1\sigma$. We check that this diagram commutes by computing in $\mathrm{Mor}\;({\mathfrak{Y}}_{U^{''}})$: \[ \sigma_g\circ \pi_1^*(\phi^{'})= \delta_Y(g_1\sigma,cx^{'}\pi_1)=\delta_Y(g_1\sigma,cs\sigma)=\delta_Y(g_1,cs)\sigma \stackrel{(*)}{=}\] \[ =\delta_Y(ct,f_1)\sigma=\delta_Y(ct\sigma,f_1\sigma)=\delta_Y(cx^{'}\pi_2,f_1\sigma)=\pi_2^*(\phi^{'})\circ\sigma_f.\] Here $\delta_Y$ is the composition of $(Y_0,Y_1)$ and in $(*)$ we used the commutative square in the definition of $2$-morphisms in ${\cal H}$.\\ So $\phi^{'}$ is compatible with descent data and thus descends to the desired $\Theta_x: f(x)\longrightarrow g(x)$. We omit the verification that $\Theta_x$ is independent of the chosen representation of $x$ and natural in $x$ and $U$.
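\\ For completeness we spell out the cocycle condition imposed on $\sigma$ above: denoting by $U^{'''}$ the triple fibre product of $U^{'}$ over $U$ and by $\pi_{ij}:U^{'''}\longrightarrow U^{''}$ the partial projections, it reads \[ \delta(\sigma\pi_{23},\sigma\pi_{12})=\sigma\pi_{13}:U^{'''}\longrightarrow X_1, \] i.e. $\pi_{13}^*\sigma$ is the composition of $\pi_{12}^*\sigma$ and $\pi_{23}^*\sigma$, with the same convention for the composition $\delta$ of $(X_0,X_1)$ as was used for $\delta_Y$ in the computation above.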
One checks that $G:{\cal H}\longrightarrow{\cal S}$ is a $2$-functor.\\ \begin{theorem}\label{equivofSandH} The above $2$-functors $K:{\cal S}\longrightarrow {\cal H}$ and $G:{\cal H}\longrightarrow {\cal S}$ are inverse equivalences. \end{theorem} \begin{proof} We have $G\circ K(X_0\stackrel{P}{\longrightarrow}{\mathfrak{X}})=(\xymatrix{ X_0 \ar[r]^-{can} & [ X_0{\times\atop{\mathfrak{X}}}X_0 \ar@<-.5ex>[r]\ar@<.5ex>[r] & X_0 ]})$ and there is a unique $1$-isomorphism $\nu_P:\xymatrix{[ X_0{\times\atop{\mathfrak{X}}}X_0 \ar@<-.5ex>[r]\ar@<.5ex>[r] & X_0 ]}\longrightarrow{\mathfrak{X}}$ with $\nu_P\circ can=P$ \cite{CA}, Proposition 3.8. One checks that this defines an isomorphism of $2$-functors $G\circ K\stackrel{\simeq}{\longrightarrow}\mathrm{id}_{\cal S}$.\\ Next we have $K\circ G(X_0,X_1)=(X_0,\cart{X_0}{P,{\mathfrak{X}},P}{X_0})$, where $(X_0\stackrel{P}{\longrightarrow}{\mathfrak{X}})=G(X_0,X_1)$, and $X_1\simeq\cart{X_0}{P,{\mathfrak{X}},P}{X_0}$ \cite{CA}, 3.4.3 and one checks that this defines an isomorphism of $2$-functors $\mathrm{id}_{\cal H}\stackrel{\simeq}{\longrightarrow} K\circ G$. \end{proof} In the following, given a flat Hopf algebroid $(X_0,X_1)$, we will refer to $G((X_0,X_1))$ simply as the (rigidified) algebraic stack associated with $(X_0,X_1)$.\\ The forgetful functor from rigidified algebraic stacks to algebraic stacks is not full but we have the following. \begin{prop}\label{chain} If $(X_0,X_1)$ and $(Y_0,Y_1)$ are flat Hopf algebroids with associated rigidified algebraic stacks $P:X_0\longrightarrow{\mathfrak{X}}$ and $Q:Y_0\longrightarrow{\mathfrak{Y}}$ and ${\mathfrak{X}}$ and ${\mathfrak{Y}}$ are $1$-isomorphic as stacks then there is a chain of $1$-morphisms of flat Hopf algebroids from $(X_0,X_1)$ to $(Y_0,Y_1)$ such that every morphism in this chain induces a $1$-isomorphism on the associated algebraic stacks.
\end{prop} \begin{remark} This result implies Theorem 6.5 of \cite{H1}: By Theorem \ref{image} below, the assumptions of {\em loc. cit.} imply that the flat Hopf algebroids $(B,\Gamma_B)$ and $(B^{'},\Gamma_{B^{'}})$ considered there have the same open substack of the stack of formal groups as their associated stack. So they are connected by a chain of weak equivalences by Proposition \ref{chain}, see Remark \ref{tannakabla} for the notion of weak equivalence. \end{remark} \begin{proof} Let $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ be a $1$-isomorphism of stacks and form the cartesian diagram \[ \xymatrix{ X_1^{'}\ar[r]_-{f_1} \ar@<-.5ex>[d] \ar@<.5ex>[d] & Y_1 \ar@<-.5ex>[d] \ar@<.5ex>[d] \\ X_0^{'} \ar[r]_-{f_0} \ar[d]_-{P^{'}} & Y_0 \ar[d]^-{Q} \\ {\mathfrak{X}} \ar[r]_-{f} & {\mathfrak{Y}}. } \] To be precise, the upper square is cartesian for either both source or both target morphisms. Then $(f_0,f_1)$ is a $1$-isomorphism of flat Hopf algebroids. Next, $Z:=\cart{X_0^{'}}{P^{'},{\mathfrak{X}},P}{X_0}$ is an affine scheme because $X_0^{'}$ is and $P$ is representable and affine. The obvious $1$-morphism $Z\longrightarrow{\mathfrak{X}}$ is representable, affine and faithfully flat because $P$ and $P^{'}$ are. Writing $W:=\cart{Z}{{\mathfrak{X}}}{Z}\simeq\cart{X_1^{'}}{{\mathfrak{X}}}{X_1}$ we have that ${\mathfrak{X}}\simeq [ \xymatrix{ W \ar@<.5ex>[r]\ar@<-.5ex>[r] & Z} ]$ by the flat version of \cite{CA}, Proposition 4.3.2. There are obvious $1$-morphisms of flat Hopf algebroids $(Z,W)\longrightarrow (X_0^{'},X_1^{'})$ and $(Z,W)\longrightarrow (X_0,X_1)$ covering $\mathrm{id}_{{\mathfrak{X}}}$ (in particular inducing an isomorphism on stacks) and we get the sought-for chain as $(Y_0,Y_1)\longleftarrow (X_0^{'},X_1^{'})\longleftarrow (Z,W)\longrightarrow (X_0,X_1)$.
\end{proof} \vspace{1.5cm} \subsection{Comodules and quasi-coherent sheaves of modules}\label{modules} Let $(A,\Gamma)$ be a flat Hopf algebroid with associated rigidified algebraic stack $X_0=\mathrm{Spec}\,(A)\longrightarrow{\mathfrak{X}}$. From Theorem \ref{equivofSandH} one would certainly expect that the category of $\Gamma$-comodules has a description in terms of $X_0\longrightarrow{\mathfrak{X}}$. In this subsection we prove the key observation that this category does in fact only depend on ${\mathfrak{X}}$ and not on the particular presentation $X_0\longrightarrow{\mathfrak{X}}$, c.f. (\ref{boing}) below.\\ For basic results concerning the category $\Modq{{\mathfrak{X}}}$ of quasi-coherent sheaves of modules on an algebraic stack ${\mathfrak{X}}$ we refer the reader to \cite{CA}, 13.\\ Fix a rigidified algebraic stack $X_0\stackrel{P}{\longrightarrow}{\mathfrak{X}}$ corresponding by Theorem \ref{equivofSandH} to the flat Hopf algebroid $(X_0=\mathrm{Spec}\,(A),X_1=\mathrm{Spec}\,(\Gamma))$ with structure morphisms $s,t:X_1\longrightarrow X_0$. As $P$ is affine it is in particular quasi-compact, hence {\em fpqc}, and thus of effective cohomological descent for quasi-coherent modules \cite{CA}, Th\'eor\`eme 13.5.5,i). In particular, $P^*$ induces an equivalence \[ P^*:\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})\stackrel{\simeq}{\longrightarrow}\{F\in\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{X_0})+\mbox{ descent data}\},\] c.f. \cite{BLR}, Chapter 6 for similar examples of descent. A descent datum on $F\in\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{X_0})$ is an isomorphism $\alpha:s^*F\longrightarrow t^*F$ in $\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{X_1})$ satisfying a cocycle condition. Giving $\alpha$ is equivalent to giving either its adjoint $\psi_l:F\longrightarrow s_*t^*F$ or the adjoint of $\alpha^{-1}$, $\psi_r: F\longrightarrow t_*s^*F$. 
Writing $M$ for the $A$-module corresponding to $F$, $\alpha$ corresponds to an isomorphism $\ocart{\Gamma}{\eta_L,A}{M}\longrightarrow\ocart{\Gamma}{\eta_R,A}{M}$ of $\Gamma$-modules and $\psi_r$ and $\psi_l$ correspond respectively to morphisms $M\longrightarrow \Gamma\otimes_{\eta_R,A} M$ and $M\longrightarrow M\otimes_{A,\eta_L}\Gamma$ of $A$-modules. One checks that this is a $1$-$1$ correspondence between descent data on $F$ and left- (respectively right-)$\Gamma$-comodule structures on $M$. For example, the cocycle condition for $\alpha$ corresponds to the coassociativity of the coaction. In the following we will work with left-$\Gamma$-comodules exclusively and simply call them $\Gamma$-comodules. The above construction then provides an explicit equivalence \begin{equation}\label{boing} \mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})\stackrel{\simeq}{\longrightarrow} \Gamma\mbox{-comodules.} \end{equation} This can also be proved using the Barr-Beck theorem, \cite{P}, 3.22.\\ The identification of $\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})$ with $\Gamma$-comodules allows one to (re)understand a number of results on $\Gamma$-comodules from the stack theoretic point of view and we now give a short list of such applications which we will use later.\\ The adjunction $(P^*,P_*):\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})\longrightarrow\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{X_0})$ corresponds to the forgetful functor from $\Gamma$-comodules to $A$-modules, respectively to the functor ``induced/extended comodule''. The structure sheaf ${\cal O}_{{\mathfrak{X}}}$ corresponds to the trivial $\Gamma$-comodule $A$, hence taking the primitives of a $\Gamma$-comodule (i.e.
the functor $\mathrm{Hom}_{\Gamma}(A,\cdot)$ from $\Gamma$-comodules to abelian groups) corresponds to $\mathrm{Hom}_{{\cal O}_{\mathfrak{X}}}({\cal O}_{\mathfrak{X}},\cdot)=H^0({\mathfrak{X}},\cdot)$ and thus $\mathrm{Ext}^*_{\Gamma}(A,\cdot)$ corresponds to quasi-coherent cohomology $H^*({\mathfrak{X}},\cdot)$. Another application of (\ref{boing}) is the following correspondence between closed substacks and invariant ideals:\\ By \cite{CA}, Application 14.2.7 there is a $1$-$1$ correspondence between closed substacks ${\mathfrak{Z}}\subseteq{\mathfrak{X}}$ and quasi-coherent ideal sheaves ${\mathcal I}\subseteq{\mathcal O}_{{\mathfrak{X}}}$ under which ${\mathcal O}_{{\mathfrak{Z}}}\simeq {\mathcal O}_{{\mathfrak{X}}}/{\mathcal I}$ and by (\ref{boing}) these ${\mathcal I}$ correspond to $\Gamma$-subcomodules $I\subseteq A$, i.e. invariant ideals. In this situation, the diagram \[ \xymatrix{ \mathrm{Spec}\,(\Gamma/I\Gamma)\ar[r]\ar@<-.5ex>[d]\ar@<.5ex>[d] & \mathrm{Spec}\,(\Gamma)\ar@<-.5ex>[d]\ar@<.5ex>[d] \\ \mathrm{Spec}\,(A/I) \ar[r]\ar[d] & \mathrm{Spec}\,(A)\ar[d] \\ {\mathfrak{Z}}\ar[r] & {\mathfrak{X}} }\] is cartesian. Note that the Hopf algebroid $(A/I,\Gamma/I\Gamma)$ is induced from $(A,\Gamma)$ by the map $A\longrightarrow A/I$ because $A/I\otimes_A\Gamma\otimes_A A/I\simeq \Gamma/(\eta_LI+\eta_RI)\Gamma=\Gamma/I\Gamma$ since $I$ is invariant.\\ We conclude this subsection by giving a finiteness result for quasi-coherent sheaves of modules. Let ${\mathfrak{X}}$ be an algebraic stack. We say that ${\mathcal F}\in\Modq{{\mathfrak{X}}}$ is {\em finitely generated} if there is a presentation $P:X_0=\mathrm{Spec}\,(A)\longrightarrow{\mathfrak{X}}$ such that the $A$-module corresponding to $P^*{\mathcal F}$ is finitely generated.
If ${\mathcal F}$ is finitely generated then for any presentation $P^{'}:X_0^{'}=\mathrm{Spec}\,(A^{'})\longrightarrow{\mathfrak{X}}$ the $A^{'}$-module corresponding to $P^{'*}{\mathcal F}$ is finitely generated as one sees using \cite{Bou}, I, \S3, Proposition 11. \begin{prop}\label{fgenha} Let $(A,\Gamma)$ be a flat Hopf algebroid, $M$ a $\Gamma$-comodule and $M^{'}\subseteq M$ a finitely generated $A$-submodule. Then $M^{'}$ is contained in a $\Gamma$-subcomodule of $M$ which is finitely generated as an $A$-module. \end{prop} \begin{proof} \cite{Thorsten}, Proposition 5.7. \end{proof} Note that in this result, ``finitely generated'' cannot be strengthened to ``coherent'' as is shown by the example of the simple ${\mathrm{BP}}_*{\mathrm{BP}}$-comodule ${\mathrm{BP}}_*/(v_0,\ldots)$ which is not coherent as a ${\mathrm{BP}}_*$-module. \begin{prop}\label{fgen} Let ${\mathfrak{X}}$ be an algebraic stack. Then any ${\mathcal F}\in\Modq{{\mathfrak{X}}}$ is the filtering union of its finitely generated quasi-coherent subsheaves. \end{prop} \begin{proof} Choose a presentation of ${\mathfrak{X}}$ and apply Proposition \ref{fgenha} to the resulting flat Hopf algebroid. \end{proof} This result may be compared with \cite{CA}, Proposition 15.4. \section{Tannakian results}\label{morphisms} In \cite{Lurie}, J. Lurie considers a Tannakian correspondence for ``geometric'' stacks which are exactly those stacks that are algebraic {\em both} in the sense of \cite{CA}, D\'efinition 4.1 and in the sense of Definition \ref{def6}. He shows that associating to such a stack ${\mathfrak{X}}$ the category $\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})$ is a fully faithful $2$-functor. The recognition problem, i.e.
giving an intrinsic characterisation of the categories $\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})$, remains open, but see \cite{tannaka} for a special case.\\ The usefulness of a Tannakian correspondence stems from being able to relate notions of linear algebra, pertaining to the categories $\mathrm{Mod}_{\mathrm{qcoh}}({\cal O}_{\mathfrak{X}})$ and their morphisms, to geometric notions, pertaining to the stacks and their morphisms. See \cite{DMOS}, Propositions 2.20--29 for examples of this in the special case that ${\mathfrak{X}}=BG$ is the classifying stack of a linear algebraic group $G$. This relation can be studied without having solved the recognition problem and we do so in the present section, i.e. we relate properties of $1$-morphisms $(f_0,f_1)$ of flat Hopf algebroids to properties of the induced morphism $f:{\mathfrak{X}} \longrightarrow{\mathfrak{Y}}$ of algebraic stacks and the adjoint pair of functors $(f^*,f_*)$ between $\Modq{{\mathfrak{Y}}}$ and $\Modq{{\mathfrak{X}}}$. \subsection{The epi/monic factorisation}\label{epimonic} Every $1$-morphism of stacks factors canonically into an epimorphism followed by a monomorphism and in this subsection we explain the analogous result for (flat) Hopf algebroids. In particular, this will explain the stack theoretic meaning of the construction of an induced Hopf algebroid, cf. \cite{H1}, beginning of section 2.\\ By a {\em flat sheaf} we will mean a set-valued sheaf on the site $\mathrm{Aff}$. The topology of $\mathrm{Aff}$ is subcanonical, i.e. every representable presheaf is a sheaf. We can thus identify the category underlying $\mathrm{Aff}$ with a full subcategory of the category of flat sheaves.\\ Every $1$-morphism $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ of stacks factors canonically ${\mathfrak{X}}\longrightarrow{\mathfrak{X}}^{'}\longrightarrow{\mathfrak{Y}}$ into an epimorphism followed by a monomorphism (\cite{CA}, Proposition 3.7).
The stack ${\mathfrak{X}}^{'}$ is determined up to unique $1$-isomorphism and is called the image of $f$.\\ For a $1$-morphism $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ of flat Hopf algebroids we introduce \begin{eqnarray}\label{ab} & \alpha:=t\pi_2 : \cart{X_0}{f_0,Y_0,s}{Y_1}\longrightarrow Y_0\mbox{ and } & \\ & \beta:=(s,f_1,t): X_1\longrightarrow\cartt{X_0}{f_0,Y_0,s}{Y_1}{t,Y_0,f_0}{X_0}. &\nonumber \end{eqnarray} The $1$-morphism $f:{\mathfrak{X}}\longrightarrow {\mathfrak{Y}}$ induced by $(f_0,f_1)$ on algebraic stacks is an epimorphism if and only if $\alpha$ is an epimorphism of flat sheaves, as is clear from Definition \ref{def4}. On the other hand, $f$ is a monomorphism if and only if $\beta$ is an isomorphism, as is easily checked.\\ Writing $X_1^{'}:=\cartt{X_0}{f_0,Y_0,s}{Y_1}{t,Y_0,f_0}{X_0}$, $(f_0,f_1)$ factors as \[ \xymatrix{ X_1\ar[r]^{f_1^{'}:=\beta}\ar@<-.5ex>[d]\ar@<.5ex>[d] & X_1^{'}\ar[r]^{\pi_2}\ar@<-.5ex>[d]_-{\pi_1}\ar@<.5ex>[d]^-{\pi_3} & Y_1\ar@<-.5ex>[d]\ar@<.5ex>[d] \\ X_0 \ar[r]^{f_0^{'}:=\mathrm{id}_{X_0}} & X_0 \ar[r]^{f_0} & Y_0 } \] and the factorisation of $f$ induced by this is the epi/monic factorisation. Note that even if $(X_0,X_1)$ and $(Y_0,Y_1)$ are flat Hopf algebroids, $(X_0,X_1^{'})$ does not have to be flat.\\ We refer to $(X_0,X_1^{'})$ as the Hopf algebroid induced from $(Y_0,Y_1)$ by $f_0$. \subsection{Flatness and isomorphisms}\label{flatandiso} The proof of the next result will be given at the end of this subsection. The equivalence of $ii)$ and $iii)$ is equivalent to Theorem 6.2 of \cite{H1}, but we will obtain refinements of it below, see Proposition \ref{epiofstack} and Proposition \ref{monothm}.
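In ring-theoretic terms the induced Hopf algebroid is easily described (this reformulation follows the construction in \cite{H1}): if $(Y_0,Y_1)$ corresponds to the flat Hopf algebroid $(B,\Sigma)$ and $f_0$ to a ring map $B\longrightarrow A$, then $(X_0,X_1^{'})$ corresponds to \[ (A,\; A\otimes_B\Sigma\otimes_B A), \] the left and right units being the inclusions of the two outer tensor factors.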
\begin{theorem}\label{isothm} Let $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ be a $1$-morphism of flat Hopf algebroids with associated morphisms $\alpha$ and $\beta$ as in (\ref{ab}) and inducing $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ on algebraic stacks. Then the following are equivalent:\\ i) $f$ is a $1$-isomorphism of stacks.\\ ii) $f^*:\Modq{{\mathfrak{Y}}}\longrightarrow\Modq{{\mathfrak{X}}}$ is an equivalence.\\ iii) $\alpha$ is faithfully flat and $\beta$ is an isomorphism. \end{theorem} \begin{remark}\label{tannakabla} This result shows that weak equivalences as defined in \cite{H2}, Definition 1.1.4 are exactly those $1$-morphisms of flat Hopf algebroids which induce $1$-isomorphisms on the associated algebraic stacks.\\ \end{remark} We next give two results about the flatness of morphisms. \begin{prop}\label{flatness1} Let $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ be a $1$-morphism of flat Hopf algebroids, $P:X_0\longrightarrow{\mathfrak{X}}$ and $Q:Y_0\longrightarrow{\mathfrak{Y}}$ the associated rigidified algebraic stacks and $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ the induced $1$-morphism of algebraic stacks. Then the following are equivalent:\\ i) $f$ is (faithfully) flat.\\ ii) $f^*:\Modq{{\mathfrak{Y}}}\longrightarrow\Modq{{\mathfrak{X}}}$ is exact (and faithful).\\ iii) $\alpha:=t\pi_2 :\cart{X_0}{f_0,Y_0,s}{Y_1}\longrightarrow Y_0$ is (faithfully) flat.\\ iv) The composition $X_0\stackrel{P}{\longrightarrow}{\mathfrak{X}}\stackrel{f}{\longrightarrow}{\mathfrak{Y}}$ is (faithfully) flat. \end{prop} \begin{proof} The equivalence of i) and ii) holds by definition, the one of i) and iv) holds because $P$ is $fpqc$ and being (faithfully) flat is a local property for the {\em fpqc} topology.
Abbreviating $Z:=\cart{X_0}{f_0,Y_0,s}{Y_1}$ we have a cartesian diagram \[ \xymatrix{ Z \ar[rr]^-{\alpha}\ar[d]_-{\pi_1} & & Y_0 \ar[d]^-Q \\ X_0 \ar[r]^-P \ar@{.>}[rru]^{f_0} & {\mathfrak{X}} \ar[r]^-f & {\mathfrak{Y}} } \] which, as $Q$ is $fpqc$, shows that iii) and iv) are equivalent. To check that this diagram is indeed cartesian, we compute: \[ \cart{X_0}{fP,{\mathfrak{Y}},Q}{Y_0}=\cart{X_0}{Qf_0,{\mathfrak{Y}},Q}{Y_0}\simeq \] \[ \simeq\cartt{X_0}{f_0,Y_0,\mathrm{id}}{Y_0}{Q,{\mathfrak{Y}},Q}{Y_0}\simeq\cart{X_0}{f_0,Y_0,s}{Y_1}=Z,\] and under this isomorphism the projection onto the second factor corresponds to $\alpha$. \end{proof} \begin{prop}\label{flatness2} Let $(Y_0,Y_1)$ be a flat Hopf algebroid, $f_0: X_0\longrightarrow Y_0$ a morphism in $\mathrm{Aff}$, $(f_0,f_1):(X_0, X_1:=\cartt{X_0}{f_0,Y_0,s}{Y_1}{t,Y_0,f_0}{X_0})\longrightarrow (Y_0,Y_1)$ the canonical $1$-morphism of Hopf algebroids from the induced Hopf algebroid, and $Q:Y_0\longrightarrow{\mathfrak{Y}}$ the rigidified algebraic stack associated with $(Y_0,Y_1)$. Then the following are equivalent:\\ i) The composition $X_0\stackrel{f_0}{\longrightarrow}Y_0\stackrel{Q}{\longrightarrow}{\mathfrak{Y}}$ is (faithfully) flat.\\ ii) $\alpha:=t\pi_2 :\cart{X_0}{f_0,Y_0,s}{Y_1}\longrightarrow Y_0$ is (faithfully) flat.\\ If either of these maps is flat, then $(X_0,X_1)$ is a {\em flat} Hopf algebroid. \end{prop} The last assertion of this Proposition does not admit a converse: For $(Y_0,Y_1)=(\mathrm{Spec}\,({\mathrm{BP}}_*),\mathrm{Spec}\,({\mathrm{BP}}_*{\mathrm{BP}}))$ and $X_0:=\mathrm{Spec}\,({\mathrm{BP}}_*/I_n)\longrightarrow Y_0$, the induced Hopf algebroid is flat but $X_0\longrightarrow{\mathfrak{Y}}$ is not, cf. subsection \ref{landweber}. \begin{proof} The proof of the equivalence of i) and ii) is the same as in Proposition \ref{flatness1}, using that $Q$ is $fpqc$.
Again denoting $Z:=\cart{X_0}{f_0,Y_0,s}{Y_1}$ one checks that the diagram \[ \xymatrix{ Z \ar[r]^-{\alpha} & Y_0\\ X_1 \ar[u] \ar[r]^-t & X_0 \ar[u]_-{f_0}} \] is cartesian, which implies the final assertion of the proposition because flatness is stable under base change. \end{proof} \begin{prop}\label{inducediso} Let $(Y_0,Y_1)$ be a flat Hopf algebroid, $f_0: X_0\longrightarrow Y_0$ a morphism in $\mathrm{Aff}$ such that the composition $X_0\stackrel{f_0}{\longrightarrow}Y_0\stackrel{Q}{\longrightarrow}{\mathfrak{Y}}$ is faithfully flat, where $Q:Y_0\longrightarrow{\mathfrak{Y}}$ is the rigidified algebraic stack associated with $(Y_0,Y_1)$. Let $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ be the canonical $1$-morphism with $(X_0,X_1)$ the Hopf algebroid induced from $(Y_0,Y_1)$ by $f_0$. Then $(X_0,X_1)$ is a flat Hopf algebroid and $(f_0,f_1)$ induces a $1$-isomorphism on the associated algebraic stacks. \end{prop} \begin{proof} The $1$-morphism $f$ induced on the associated algebraic stacks is a monomorphism as explained in subsection \ref{epimonic}. Proposition \ref{flatness2} shows that $(X_0,X_1)$ is a flat Hopf algebroid and that $\alpha$ is faithfully flat, hence an epimorphism of flat sheaves. Thus $f$ is an epimorphism of stacks as noted in subsection \ref{epimonic} and, finally, $f$ is a $1$-isomorphism by \cite{CA}, Corollaire 3.7.1. \end{proof} We now start to take the module categories into consideration. Given $f:X\longrightarrow Y$ in $\mathrm{Aff}$ we have an adjunction $\psi_f: \mathrm{id}_{\Modq{Y}}\longrightarrow f_*f^*$. We recognise the epimorphisms of representable flat sheaves as follows. \begin{prop}\label{epiofrep} Let $f:X\longrightarrow Y$ be a morphism in $\mathrm{Aff}$.
Then the following are equivalent:\\ i) $f$ is an epimorphism of flat sheaves.\\ ii) There is some $\phi:Z\longrightarrow X$ in $\mathrm{Aff}$ such that $f\phi$ is faithfully flat.\\ If i) and ii) hold, then $\psi_f$ is injective.\\ If $f$ is flat, the conditions i) and ii) are equivalent to $f$ being faithfully flat. \end{prop} As an example of a morphism satisfying the conditions of Proposition \ref{epiofrep} without being flat one may take the unique morphism $\mathrm{Spec}\,({\mathbb{Z}})\sqcup\mathrm{Spec}\,({\mathbb{F}}_p)\longrightarrow\mathrm{Spec}\,({\mathbb{Z}})$. \begin{proof} That i) implies ii) is seen by lifting $\mathrm{id}_Y\in Y(Y)$ after a suitable faithfully flat cover $Z\longrightarrow Y$ to some $\phi\in X(Z)$.\\ To see that ii) implies i), fix some $U\in\mathrm{Aff}$ and $u\in Y(U)$ and form the cartesian diagram \[ \xymatrix{ Z\ar[r]^-{\phi} & X\ar[r]^-f & Y \\ W\ar[u]_-v \ar[rr] & & U\ar[u]_-u . } \] Then $W\longrightarrow U$ is faithfully flat and $u$ lifts to $v\in Z(W)$ and hence to $\phi v\in X(W)$.\\ To see the assertion about flat $f$, note first that a faithfully flat map is trivially an epimorphism of flat sheaves. Secondly, if $f$ is flat and an epimorphism of flat sheaves, then there is some $\phi:Z\longrightarrow X$ as in ii) and the composition $f\phi$ is surjective (on the topological spaces underlying these affine schemes), hence so is $f$, i.e. $f$ is faithfully flat, \cite{Bou}, ch. II, \S2, no.\ 5, Corollary 4, ii). The injectivity of $\psi_f$ is a special case of \cite{Bou}, I, \S3, Proposition 8, i). \end{proof} We have a similar result for epimorphisms of algebraic stacks. \begin{prop}\label{epiofstack} Let $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ be a $1$-morphism of flat Hopf algebroids inducing $f:{\mathfrak{X}} \longrightarrow{\mathfrak{Y}}$ on associated algebraic stacks and write $\alpha:=t\pi_2:\cart{X_0}{f_0,Y_0,s}{Y_1}\longrightarrow Y_0$.
Then the following are equivalent:\\ i) $f$ is an epimorphism.\\ ii) $\alpha$ is an epimorphism of flat sheaves.\\ iii) There is some $\phi: Z\longrightarrow \cart{X_0}{f_0,Y_0,s}{Y_1}$ in $\mathrm{Aff}$ such that $\alpha\phi$ is faithfully flat.\\ If these conditions hold then $\mathrm{id}_{\Modq{{\mathfrak{Y}}}}\longrightarrow f_*f^*$ is injective. \end{prop} \begin{proof} The equivalence of i) and ii) is ``mise pour m\'emoire'', the one of ii) and iii) has been proved in Proposition \ref{epiofrep}. Assume that these conditions hold and let $g:{\mathfrak{X}}^{'}\longrightarrow{\mathfrak{X}}$ be any morphism of algebraic stacks such that $\mathrm{id}_{\Modq{{\mathfrak{Y}}}}\longrightarrow(fg)_*(fg)^*$ is injective. Then the composition $\mathrm{id}_{\Modq{{\mathfrak{Y}}}}\longrightarrow f_*f^*\longrightarrow f_*g_*g^*f^*=(fg)_*(fg)^*$ is injective and hence so is $\mathrm{id}_{\Modq{{\mathfrak{Y}}}}\longrightarrow f_*f^*$. Taking $g:=P:X_0\longrightarrow{\mathfrak{X}}$ the canonical presentation we see that we can assume that ${\mathfrak{X}}=X_0$; in particular $f:X_0\longrightarrow{\mathfrak{Y}}$ is representable and affine (and an epimorphism).
Now let $Q:Y_0\longrightarrow{\mathfrak{Y}}$ be the canonical presentation and form the cartesian diagram \begin{equation}\label{diagram} \xymatrix{ Z_0\ar[r]^{g_0}\ar[d]_-P & Y_0\ar[d]^-Q \\ X_0 \ar[r]^-f & {\mathfrak{Y}}.} \end{equation} As $Q$ is $fpqc$ we have that $\mathrm{id}_{\Modq{{\mathfrak{Y}}}}\longrightarrow f_*f^*$ is injective if and only if $Q^*\longrightarrow Q^*f_*f^*\simeq g_{0,*}P^*f^*\simeq g_{0,*}g_0^*Q^*$ is injective (we used flat base change, \cite{CA}, Proposition 13.1.9), and this will follow from the injectivity of $\mathrm{id}_{\Modq{Y_0}}\longrightarrow g_{0,*}g_0^*$ because $Q$ is flat.\\ As $f$ is representable and affine, $Z_0$ is an affine scheme hence, by Proposition \ref{epiofrep}, we are done because $g_0$ is an epimorphism of flat sheaves (\cite{CA}, Proposition 3.8.1). \end{proof} There is an analogous result for monomorphisms of algebraic stacks. \begin{prop}\label{monothm} Let $(f_0,f_1):(X_0,X_1)\longrightarrow (Y_0,Y_1)$ be a $1$-morphism of flat Hopf algebroids, $P:X_0\longrightarrow{\mathfrak{X}}$ the rigidified algebraic stack associated with $(X_0,X_1)$, $f:{\mathfrak{X}}\longrightarrow{\mathfrak{Y}}$ the associated $1$-morphism of algebraic stacks, $\Theta: f^*f_*\longrightarrow\mathrm{id}_{\Modq{{\mathfrak{X}}}}$ the adjunction and $\beta=(s,f_1,t):X_1\longrightarrow\cartt{X_0}{f_0,Y_0,s}{Y_1}{t,Y_0,f_0}{X_0}$. Then the following are equivalent:\\ i) $f$ is a monomorphism.\\ ii) $\beta$ is an isomorphism.\\ iii) $\Theta_{P_*{\cal O}_{X_0}}$ is an isomorphism.\\ If $f$ is representable then these conditions are equivalent to:\\ iiia) $\Theta$ is an isomorphism.\\ iiib) $f_*$ is fully faithful. \end{prop} \begin{remark} This result may be compared to the first assertion of Theorem 2.5 of \cite{H1}.
There it is proved that $\Theta$ is an isomorphism if $f$ is a {\em flat} monomorphism.\\ In the situation of Proposition \ref{monothm}, iiib), it is natural to ask for the essential image of $f_*$, see Proposition \ref{essimage}.\\ I do not know whether every monomorphism of algebraic stacks is representable, cf. \cite{CA}, Corollaire 8.1.3. \end{remark} \begin{proof} We already know that $i)$ and $ii)$ are equivalent. Consider the diagram \[ \xymatrix{ X_0 \ar@/_/[d]_-{\Delta^{'}}\ar[r]^-P & {\mathfrak{X}} \ar@/_/[d]_-{\Delta_f} \ar[r]^-f & {\mathfrak{Y}}\\ \pi: {\mathfrak{Z}} \ar[u]_-{\pi_1^{'}} \ar[r]_-{P^{'}} & {\mathfrak{X}} {\times\atop f,{\mathfrak{Y}},f} {\mathfrak{X}} \ar[u]_-{\pi_1} \ar[r]_-{\pi_2} & {\mathfrak{X}} \ar[u]_-f } \] in which the squares made of straight arrows are cartesian. As $fP$ is representable and affine, we have $fP=\underline{\mathrm{Spec}\,}(f_*P_*{\mathcal O}_{X_0})$, cf. \cite{CA}, 14.2, and $\pi=\underline{\mathrm{Spec}\,}(f^*f_*P_*{\mathcal O}_{X_0})$. We know that $i)$ is equivalent to the diagonal of $f$, $\Delta_f$, being an isomorphism (\cite{CA}, Remarque 2.3.1). As $\Delta_f$ is a section of $\pi_1$ this is equivalent to $\pi_1$ being an isomorphism. As $P$ is an epimorphism, this is equivalent to $\pi_1^{'}$ being an isomorphism by \cite{CA}, Proposition 3.8.1. Of course, $\pi_1^{'}$ admits $\Delta^{'}:=(\mathrm{id}_{X_0},\Delta_f P)$ as a section so, finally, $i)$ is equivalent to $\Delta^{'}$ being an isomorphism. One checks that $\Delta^{'}=\underline{\mathrm{Spec}\,}(\Theta_{P_*{\mathcal O}_{X_0}})$ and this proves the equivalence of $i)$ and $iii)$.\\ Now assume that $f$ is representable and a monomorphism. We will show that $iiia)$ holds.
Consider the cartesian diagram \[ \xymatrix{ Z \ar[r]^-{f^{'}}\ar[d]_-P & Y_0 \ar[d]^-Q\\ {\mathfrak{X}}\ar[r]^-f & {\mathfrak{Y}}.} \] We have \[ P^*f^*f_* \simeq f^{'*}Q^*f_* \simeq f^{'*}f^{'}_* P^* . \] As $P^*$ reflects isomorphisms, $iiia)$ will hold if the adjunction $f^{'*}f_*^{'}\longrightarrow\mathrm{id}_{\Modq{Z}}$ is an isomorphism. As $f$ is representable, this can be checked at the stalks at points $z\in Z$, and we can replace $f^{'}$ by the induced morphism $\mathrm{Spec}\,({\mathcal O}_{Z,z})\longrightarrow \mathrm{Spec}\,({\mathcal O}_{Y_0,y})$ ($y:=f^{'}(z)$) which is a monomorphism. In particular, we have reduced the proof of $iiia)$ to the case of affine schemes, i.e. to the following assertion: If $\phi: A\longrightarrow B$ is a ring homomorphism such that $\mathrm{Spec}\,(\phi)$ is a monomorphism, i.e. the ring homomorphism corresponding to the diagonal $B\otimes_A B\longrightarrow B,\; b_1\otimes b_2\mapsto b_1b_2$, is an isomorphism, then, for any $B$-module $M$, the canonical homomorphism of $B$-modules $M\otimes_A B\longrightarrow M$ is an isomorphism. This is however easy: \[ M\otimes_A B\simeq (M\otimes_B B)\otimes_A B\simeq M\otimes_B (B \otimes_A B) \simeq M\otimes_B B\simeq M,\] and we leave it to the reader to check that the composition of these isomorphisms is the natural map $M\otimes_A B\longrightarrow M$.\\ Finally, the proof that $iiia)$ and $iiib)$ are equivalent is a formal manipulation with adjunctions which we leave to the reader, and trivially $iiia)$ implies $iii)$.
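As a special case of the ring-theoretic assertion just proved (an illustration, not needed for the argument): for a quotient map $\phi:A\longrightarrow B=A/I$ the morphism $\mathrm{Spec}\,(\phi)$ is a monomorphism, and the assertion reduces to the familiar isomorphism $M\otimes_A A/I\simeq M$ for any $A/I$-module $M$.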
\end{proof} \begin{prop}\label{essimage} In the situation of Proposition \ref{monothm} assume that $f$ is representable and a monomorphism, let $Q:Y_0\longrightarrow{\mathfrak{Y}}$ be the rigidified algebraic stack associated with $(Y_0,Y_1)$ and form the cartesian diagram \begin{equation}\label{wurst} \xymatrix{ Z_0\ar[r]^{g_0}\ar[d]_-P & Y_0\ar[d]^-Q \\ {\mathfrak{X}}\ar[r]^-f & {\mathfrak{Y}}.} \end{equation} Then $Z_0$ is an algebraic space and a given ${\mathcal F}\in \Modq{{\mathfrak{Y}}}$ is in the essential image of $f_*$ if and only if $Q^*{\mathcal F}$ is in the essential image of $g_{0,*}$. Consequently, $f_*$ induces an equivalence between $\Modq{{\mathfrak{X}}}$ and the full subcategory of $\Modq{{\mathfrak{Y}}}$ consisting of such ${\mathcal F}$. \end{prop} \begin{proof} Firstly, $Z_0$ is an algebraic space because $f$ is representable. We know that $f_*$ is fully faithful by Proposition \ref{monothm}, $iiib)$ and need to show that the above description of its essential image is correct. If ${\mathcal F}\simeq f_*{\mathcal G}$ then $Q^*{\mathcal F}\simeq Q^*f_*{\mathcal G}\simeq g_{0,*}P^*{\mathcal G}$ so $Q^*{\mathcal F}$ lies in the essential image of $g_{0,*}$. To see the converse, extend (\ref{wurst}) to a cartesian diagram \[ \xymatrix{ Z_1\ar@<-.5ex>[d]\ar@<.5ex>[d]\ar[r]^-{g_1} & Y_1\ar@<-.5ex>[d]\ar@<.5ex>[d] \\ Z_0\ar[r]^{g_0}\ar[d]_-P & Y_0\ar[d]^-Q \\ {\mathfrak{X}}\ar[r]^-f & {\mathfrak{Y}}.} \] Note that ${\mathfrak{X}}\simeq [ \xymatrix{Z_1\ar@<-.5ex>[r]\ar@<.5ex>[r] & Z_0 } ]$, hence $(Z_0,Z_1)$ is a flat groupoid (in algebraic spaces) representing ${\mathfrak{X}}$. Now let there be given ${\mathcal F}\in\Modq{{\mathfrak{Y}}}$ and $G\in\Modq{Z_0}$ with $Q^*{\mathcal F}\simeq g_{0,*}G$.
We define $\sigma$ to make the following diagram commutative: \[ \xymatrix{ s^*Q^*{\mathcal F}\ar[r]^-{can}_-{\sim}\ar[d]_-{\sim} & t^*Q^*{\mathcal F}\ar[d]_-{\sim}\\ s^*g_{0,*}G\ar[d]_-{\sim} & t^*g_{0,*}G\ar[d]_-{\sim} \\ g_{1,*}s^*G \ar[r]^-{\sim}_-{\sigma} & g_{1,*}t^*G. }\] As $f$ is representable and a monomorphism, so is $g_1$ and thus $g_1^*g_{1,*}\stackrel{\sim}{\longrightarrow}\mathrm{id}_{\Modq{Z_1}}$ and $g_{1,*}$ is fully faithful by Proposition \ref{monothm}, $iiia)$, $iiib)$. We define $\tau$ to make the following diagram commutative: \[ \xymatrix{ g_1^*g_{1,*}s^*G \ar[r]^-{g_1^*(\sigma)}_-{\sim} \ar[d]_-{\sim} & g_1^*g_{1,*}t^*G\ar[d]_-{\sim} \\ s^*G\ar[r]^-{\tau} & t^*G.} \] Then $\tau$ satisfies the cocycle condition because it does so after applying the faithful functor $g_{1,*}$. So $\tau$ is a descent datum on $G$, and $G$ descends to ${\mathcal G}\in\Modq{{\mathfrak{X}}}$ with $P^*{\mathcal G}\simeq G$ and we have $Q^*f_*{\mathcal G}\simeq g_{0,*}P^*{\mathcal G}\simeq Q^*{\mathcal F}$, hence $f_*{\mathcal G}\simeq{\mathcal F}$, i.e. ${\mathcal F}$ lies in the essential image of $f_*$ as was to be shown. \end{proof} To conclude this subsection we give the proof of Theorem \ref{isothm}, whose notation and assumptions we now resume. \begin{proofof} Theorem \ref{isothm}. If iii) holds then $f$ is an epimorphism and a monomorphism by Proposition \ref{epiofstack}, $iii)\Rightarrow i)$ and Proposition \ref{monothm}, $ii)\Rightarrow i)$, hence i) holds by \cite{CA}, Corollaire 3.7.1. The proof that i) implies ii) is left to the reader and we assume that ii) holds. Since $(f^*,f_*)$ is an adjoint pair of functors, $f_*$ is a quasi-inverse for $f^*$ and $\Theta: f^*f_*\longrightarrow\mathrm{id}_{\Modq{{\mathfrak{X}}}}$ is an isomorphism, so $\beta$ is an isomorphism by Proposition \ref{monothm}, $iii)\Rightarrow ii)$.
As $f^*$ is in particular exact and faithful, $\alpha$ is faithfully flat by Proposition \ref{flatness1}, $ii)\Rightarrow iii)$, and iii) holds. \end{proofof} \vspace{1.5cm} \section{Landweber exactness and change of rings}\label{ringchange} In this section we will use the techniques from section \ref{morphisms} to give a short and conceptual proof of the fact that Landweber exact ${\mathrm{BP}}_*$-algebras of the same height have equivalent categories of comodules. In fact, we will show that the relevant algebraic stacks are $1$-isomorphic.\\ Let $p$ be a prime number. We will study the algebraic stack associated with the flat Hopf algebroid $({\mathrm{BP}}_*,{\mathrm{BP}}_*{\mathrm{BP}})$ where ${\mathrm{BP}}$ denotes Brown-Peterson homology at $p$.\\ We will work over $S:=\mathrm{Spec}\,({\mathbb{Z}}_{(p)})$, i.e. $\mathrm{Aff}$ will be the category of ${\mathbb{Z}}_{(p)}$-algebras with its {\em fpqc} topology. We refer the reader to \cite{R1}, Chapter 4 for basic facts about ${\mathrm{BP}}$, e.g. ${\mathrm{BP}}_*={\mathbb{Z}}_{(p)}[v_1,\ldots]$ where the $v_i$ denote either the Hazewinkel- or the Araki-generators; it does not matter, but the reader is free to make a definite choice at this point if she feels like doing so.\\ $(V:=\mathrm{Spec}\,({\mathrm{BP}}_*),W:=\mathrm{Spec}\,({\mathrm{BP}}_*{\mathrm{BP}}))$ is a flat Hopf algebroid and we denote by $P:V\longrightarrow{\mathfrak{X}}_{FG}$ the corresponding rigidified algebraic stack.
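Recall also from \cite{R1}, Chapter 4 that ${\mathrm{BP}}_*{\mathrm{BP}}\simeq{\mathrm{BP}}_*[t_1,t_2,\ldots]$ is a polynomial ring, so that ${\mathrm{BP}}_*{\mathrm{BP}}$ is free, in particular faithfully flat, as a left and (using the antipode) as a right ${\mathrm{BP}}_*$-module; this is the flatness just asserted.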
We refer the reader to section \ref{stackformalgroups} for an intrinsic description of the stack ${\mathfrak{X}}_{FG}$.\\ For $n\geq 1$ the ideal $I_n:=(v_0,\ldots,v_{n-1})\subseteq {\mathrm{BP}}_*$ is an invariant prime ideal, where we agree that $v_0:=p$, $I_0:=(0)$ and $I_{\infty}:=(v_0,v_1,\ldots)$.\\ As explained in subsection \ref{modules}, corresponding to these invariant ideals there is a sequence of closed substacks \[ {\mathfrak{X}}_{FG}={\mathfrak{Z}}^0\supseteq{\mathfrak{Z}}^1\supseteq\ldots\supseteq{\mathfrak{Z}}^{\infty}.\] We denote by ${\mathfrak{U}}^n:={\mathfrak{X}}_{FG}-{\mathfrak{Z}}^n$ ($0\leq n \leq \infty$) the open substack complementary to ${\mathfrak{Z}}^n$ and have an ascending chain \[ \emptyset={\mathfrak{U}}^0\subseteq{\mathfrak{U}}^1\subseteq\ldots\subseteq{\mathfrak{U}}^{\infty}\subseteq{\mathfrak{X}}_{FG}.\] For $0\leq n<\infty$, $I_n$ is finitely generated, hence the open immersion ${\mathfrak{U}}^n\subseteq{\mathfrak{X}}_{FG}$ is quasi-compact and ${\mathfrak{U}}^n$ is an algebraic stack. However, ${\mathfrak{U}}^{\infty}$ is not algebraic: If it were, it could be covered by an affine (hence quasi-compact) scheme and the open covering ${\mathfrak{U}}^{\infty}=\cup_{n\geq 0,n\neq\infty}{\mathfrak{U}}^n$ would admit a finite subcover, which it does not.\\ \subsection{The algebraic stacks associated with Landweber exact ${\mathrm{BP}}_*$-algebras}\label{landweber} In this subsection we prove our main result, Theorem \ref{image}, which determines the stack theoretic image of a morphism $X_0\longrightarrow{\mathfrak{X}}_{FG}$ corresponding to a Landweber exact ${\mathrm{BP}}_*$-algebra. It turns out that the same arguments apply more generally to morphisms $X_0\longrightarrow{\mathfrak{Z}}^n$ for any $n\geq 0$ and we work in this generality from the very beginning.\\ Fix some $0\leq n<\infty$.
The stack ${\mathfrak{Z}}^n$ is associated with the flat Hopf algebroid $(V_n,W_n)$ where $V_n:=\mathrm{Spec}\,({\mathrm{BP}}_*/I_n)$ and $W_n:=\mathrm{Spec}\,({\mathrm{BP}}_*{\mathrm{BP}}/I_n{\mathrm{BP}}_*{\mathrm{BP}})$, the flatness of this Hopf algebroid being established by direct inspection, and we have a cartesian diagram \begin{equation}\label{diagram3} \xymatrix{ W_n\ar@<-.5ex>[d]\ar@<.5ex>[d]\ar@{^{(}->}[r] & W=W_0\ar@<-.5ex>[d]\ar@<.5ex>[d]\\ V_n \ar[d]_-{Q_n}\ar@{^{(}->}[r]^-{i_n} & V=V_0 \ar[d]^-Q \\ {\mathfrak{Z}}^n \ar@{^{(}->}[r] & {\mathfrak{X}}_{FG} } \end{equation} in which the horizontal arrows are closed immersions.\\ We have an ascending chain of open substacks \[ \emptyset={\mathfrak{Z}}^n\cap{\mathfrak{U}}^n\subseteq{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{n+1}\subseteq\ldots\subseteq{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{\infty}\subseteq{\mathfrak{Z}}^n. \] Let $X_0\stackrel{\phi}{\longrightarrow}V_n$ be a morphism in $\mathrm{Aff}$ corresponding to a morphism of rings ${\mathrm{BP}}_*/I_n\longrightarrow R:=\Gamma(X_0,{\mathcal O}_{X_0})$. Slightly generalising Definition 4.1 of \cite{H1} we define the height of $\phi$ as \[ \mathrm{ht}(\phi):=\max \{ N\geq 0 | R/I_NR \neq 0 \} \] which may be $\infty$ and we agree to put $\mathrm{ht}(\phi):=-1$ in case $R=0$, i.e. $X_0=\emptyset$. For instance, for $n=0$ and $R={\mathbb{Z}}_{(p)}$ with all $v_i$ mapping to zero we have $R/I_NR={\mathbb{F}}_p\neq 0$ for all $N\geq 1$, so $\mathrm{ht}(\phi)=\infty$, whereas sending $v_1$ to a unit and all $v_i$, $i\geq 2$, to zero gives $R/I_2R=0$ and hence $\mathrm{ht}(\phi)=1$. Recall that a geometric point of $X_0$ is a morphism $\Omega\stackrel{\alpha}{\longrightarrow}X_0$ in $\mathrm{Aff}$ where $\Omega=\mathrm{Spec}\,(K)$ is the spectrum of an algebraically closed field $K$. The composition $\Omega\stackrel{\alpha}{\longrightarrow}X_0\stackrel{\phi}{\longrightarrow}V_n\stackrel{i_n}{\hookrightarrow} V$ specifies a $p$-typical formal group law over $K$ and $\mathrm{ht}(i_n\phi\alpha)$ is the height of this formal group law. The relation between $\mathrm{ht}(\phi)$ and the height of formal group laws is the following.
\begin{prop}\label{heightfibre} In the above situation we have \[ \mathrm{ht}(\phi)=\max\{\mathrm{ht}(i_n\phi\alpha)|\alpha:\Omega\longrightarrow X_0\mbox{ a geometric point} \}, \] with the convention that $\max\;\emptyset=-1$. \end{prop} This Proposition means that $\mathrm{ht}(\phi)$ is the maximum height in a geometric fibre of the formal group law over $X_0$ parametrised by $i_n\phi$. \begin{proof} Clearly, $\mathrm{ht}(i_n\phi\psi)\leq\mathrm{ht}(\phi)$ for any morphism $\psi:Y\longrightarrow X_0$ in $\mathrm{Aff}$. For any $0\leq N^{'}\leq\mathrm{ht}(\phi)$ we have that $I_{N^{'}}R\neq R$, so there is a maximal ideal of $R$ containing $I_{N^{'}}R$ and a geometric point $\alpha$ of $X_0$ supported at this maximal ideal will satisfy $\mathrm{ht}(i_n\phi\alpha)\geq N^{'}$. \end{proof} Another geometric interpretation of $\mathrm{ht}(\phi)$ is given by considering the composition $f:X_0\stackrel{\phi}{\longrightarrow}V_n\stackrel{Q_n}{\longrightarrow}{\mathfrak{Z}}^n$. \begin{prop}\label{heightfactor} In this situation we have \[ \mathrm{ht}(\phi)+1=\min\{ N\geq 0|f \mbox{ factors through } {\mathfrak{Z}}^n\cap{\mathfrak{U}}^N\hookrightarrow{\mathfrak{Z}}^n \} \] with the convention that $\min\;\emptyset=\infty$ and $\infty+1=\infty$. \end{prop} \begin{proof} For any $\infty>N\geq n$ we have a cartesian square \begin{equation}\label{diagram2} \xymatrix{ V_n^N\ar[r]^-j\ar[d] & V_n\ar[d]^-{Q_n} \\ {\mathfrak{Z}}^n\cap{\mathfrak{U}}^N\ar[r]^-i & {\mathfrak{Z}}^n} \end{equation} where $V_n^N=V_n-\mathrm{Spec}\,({\mathrm{BP}}_*/I_N)=\bigcup_{i=n}^{N-1}\mathrm{Spec}\,(({\mathrm{BP}}_*/I_n)[v_i^{-1}])$, hence $f$ factors through $i$ if and only if $\phi:X_0\longrightarrow V_n$ factors through $j$. As $j$ is an open immersion, this is equivalent to $|\phi|(|X_0|)\subseteq |V_n^N| \subseteq |V_n|$ where $| \cdot |$ denotes the topological space underlying a scheme.
But this condition can be checked using geometric points and the rest is easy, using Proposition \ref{heightfibre}. \end{proof} Recall from \cite{H1}, Definition 2.1 that, if $(A,\Gamma)$ is a flat Hopf algebroid, an $A$-algebra $f: A\longrightarrow B$ is said to be {\em Landweber exact} over $(A,\Gamma)$ if the functor $M\mapsto M\otimes_A B$ from $\Gamma$-comodules to $B$-modules is exact. The prototypical example is the Johnson--Wilson theory $E(n)_*={\mathbb{Z}}_{(p)}[v_1,\ldots,v_{n-1},v_n^{\pm 1}]$, which is Landweber exact over $({\mathrm{BP}}_*,{\mathrm{BP}}_*{\mathrm{BP}})$ of height $n$. For $(X_0:=\mathrm{Spec}\,(A),X_1:=\mathrm{Spec}\,(\Gamma))$, $\phi:=\mathrm{Spec}\,(f): Y_0:=\mathrm{Spec}\,(B)\longrightarrow X_0$ and $P: X_0\longrightarrow{\mathfrak{X}}$ the rigidified algebraic stack associated with $(X_0,X_1)$, this exactness is equivalent to the flatness of the composition $Y_0\stackrel{\phi}{\longrightarrow}X_0\stackrel{P}{\longrightarrow}{\mathfrak{X}}$ because the following square of functors commutes up to natural isomorphism \[ \xymatrix{ (P\phi)^*:\Modq{{\mathfrak{X}}} \ar[r] \ar[d]^{\simeq} & \Modq{Y_0} \ar[d]^{\simeq} \\ \Gamma\mbox{-comodules} \ar[r]^{M\mapsto M\otimes_A B} & B\mbox{-modules,}} \] where the horizontal equivalences are those given by (\ref{boing}).\\ In case ${\mathfrak{X}}={\mathfrak{Z}}^n$ this flatness has the following decisive consequence, which paraphrases the fact that the image of a flat morphism is stable under generalisation. \begin{prop}\label{allheights} Assume that $n\geq 0$ and that $\phi:\emptyset\neq X_0\longrightarrow V_n$ is Landweber exact of height $N:=\mathrm{ht}(\phi)$ (hence $n\leq N\leq\infty$). Then for any $n\leq j\leq N$ there is a geometric point $\alpha:\Omega\longrightarrow X_0$ such that $\mathrm{ht}(i_n\phi\alpha)=j$. \end{prop} \begin{proof} Let $\phi$ correspond to ${\mathrm{BP}}_*/I_n\longrightarrow R$. We first note that $v_n,v_{n+1},\ldots\in R$ is a regular sequence by Proposition \ref{landweberflat} below. Now assume that $N<\infty$ and fix $n\leq j\leq N$.
Then $v_j$ is not a zero divisor in $R/I_jR\neq 0$ and thus there is a minimal prime ideal of $R/I_jR$ not containing $v_j$. A geometric point supported at this prime ideal solves the problem. In the remaining case $j=N=\infty$ we have $R/I_{\infty}R\neq 0$ and any geometric point of $\mathrm{Spec}\,(R/I_{\infty}R)$ solves the problem. \end{proof} The main result of this paper is the following. \begin{theorem}\label{image} Assume that $n\geq 0$ and that $\emptyset\neq X_0\longrightarrow V_n$ is Landweber exact of height $N$ (hence $n\leq N\leq\infty$). Let $(X_0,X_1)$ be the Hopf algebroid induced from $(V,W)$ by the composition $X_0\stackrel{\phi}{\longrightarrow}V_n \stackrel{i_n}{\hookrightarrow} V$. Then $(X_0,X_1)$ is a flat Hopf algebroid and its associated algebraic stack is given as \[ [ \xymatrix{ X_1\ar@<-.5ex>[r]\ar@<.5ex>[r] & X_0 } ] \simeq {\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1} \mbox{ if }N\neq\infty\mbox{ and} \] \[ [ \xymatrix{ X_1\ar@<-.5ex>[r]\ar@<.5ex>[r] & X_0 } ] \simeq {\mathfrak{Z}}^n\mbox{ if }N=\infty. \] \end{theorem} \begin{proof} Note that $(X_0,X_1)$ is also induced from the flat Hopf algebroid $(V_n,W_n)$ along $\phi$ and thus is a flat Hopf algebroid using the final statement of Proposition \ref{flatness2} and the Landweber exactness of $\phi$. We first assume that $N\neq\infty$. Then by Proposition \ref{heightfactor} the composition $X_0\stackrel{\phi}{\longrightarrow}V_n\longrightarrow{\mathfrak{Z}}^n$ factors as $X_0\stackrel{\psi}{\longrightarrow}{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}\stackrel{i}{\longrightarrow}{\mathfrak{Z}}^n$ and $\psi$ is flat because $i$ is an open immersion and $X_0\longrightarrow{\mathfrak{Z}}^n$ is flat by assumption. By Proposition \ref{inducediso} we will be done if we can show that $\psi$ is in fact faithfully flat.
For this we consider the presentation ${\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}\simeq [ \xymatrix{ W_n^{N+1}\ar@<-.5ex>[r]\ar@<.5ex>[r] & V_n^{N+1} } ]$ given by the cartesian diagram \[ \xymatrix{ W_n^{N+1} \ar[r]\ar@<-.5ex>[d]\ar@<.5ex>[d] & W_n \ar@<-.5ex>[d]\ar@<.5ex>[d] \\ V_n^{N+1} \ar[r]\ar[d] & V_n \ar[d]^-{Q_n} \\ {\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1} \ar[r] & {\mathfrak{Z}}^n} \] and note that $\psi$ lifts to $\rho: X_0\longrightarrow V_n^{N+1}$ and induces $\alpha:=t\pi_2: \cart{X_0}{\rho,V_n^{N+1},s}{W_n^{N+1}}\longrightarrow V_n^{N+1}$ which is flat and we need it to be faithfully flat to apply Proposition \ref{flatness1}, $iii)\Rightarrow iv)$ and conclude that $\psi$ is faithfully flat. So we have to prove that $\alpha$ is surjective on the topological spaces underlying the schemes involved.\\ This surjectivity can be checked on geometric points and for any such geometric point $\Omega\stackrel{\mu}{\longrightarrow}V_n^{N+1}$ we have that $j:=\mathrm{ht}(\Omega\stackrel{\mu}{\longrightarrow}V_n^{N+1}\longrightarrow V_n\stackrel{i_n}{\hookrightarrow}V)$ satisfies $n\leq j\leq N$. By Proposition \ref{allheights} there is a geometric point $\Omega^{'}\stackrel{\nu}{\longrightarrow}X_0$ with $\mathrm{ht}(\Omega^{'}\stackrel{\nu}{\longrightarrow}X_0\longrightarrow V_n\stackrel{i_n}{\hookrightarrow}V)=j$ and we can assume that $\Omega=\Omega^{'}$ because the corresponding fields have the same characteristic, namely $0$ if $j=0$ and $p$ otherwise. As any two formal group laws over an algebraically closed field having the same height are isomorphic we find some $\sigma:\Omega\longrightarrow W_n^{N+1}$ fitting into a commutative diagram \[ \xymatrix{ X_0{\times\atop \rho,V_n^{N+1},s} W_n^{N+1}\ar[r]^-{\alpha} & V_n^{N+1}\\ \Omega.\ar[u]^-{(\nu,\sigma)}\ar[ur]_-{\mu} & } \] As $\mu$ was arbitrary this shows that $\alpha$ is surjective.
We leave the obvious modifications for the case $N=\infty$ to the reader. \end{proof} To conclude this subsection we explain the relation between Landweber exactness and Landweber's regularity condition. This is well-known to experts and has in fact been worked out in detail in \cite{franke}, section 3, Theorem 8, but we include it here anyway. Fix some $n\ge 0$ and let $\phi:{\mathrm{BP}}_*/I_n \longrightarrow R$ be a ${\mathrm{BP}}_* / I_n$-algebra. Then Landweber's condition is \begin{equation} \label{landreg} \mbox{ The sequence }\phi(v_n),\phi(v_{n+1}),\ldots\,\in R\mbox{ is regular.} \end{equation} \begin{prop}\label{landweberflat} In the above situation, (\ref{landreg}) holds if and only if the composition $f: \mathrm{Spec}\,(R)\longrightarrow\mathrm{Spec}\,({\mathrm{BP}}_*/I_n)\longrightarrow{\mathfrak{Z}}^n$ is flat. \end{prop} \begin{proof} From \cite{millerrav}, Proposition 2.2 we know that the restriction of $f^*: \Modq{{\mathfrak{Z}}^n}\longrightarrow\Modq{\mathrm{Spec}\,(R)}$ to finitely presented comodules is exact if and only if (\ref{landreg}) holds. But $f^*$ itself is exact, and hence $f$ is flat, if and only if its above restriction is exact because any ${\mathrm{BP}}_*{\mathrm{BP}}/I_n$-comodule is the filtered direct limit of finitely presented comodules. This was pointed out to me by N. Strickland. In case $n=0$ this is \cite{millerrav}, Lemma 2.11 and the general case follows from \cite{H2}, Proposition 1.4.1,e), Proposition 1.4.4, Lemma 1.4.6 and Proposition 1.4.8. \end{proof} \subsection{Equivalence of comodule categories and change of rings} In this subsection we will spell out some consequences of the above results in the language of comodules but need some elementary preliminaries first.\\ Let $A$ be a ring, $I=(f_1,\ldots, f_n)\subseteq A$ ($n\geq 1$) a finitely generated ideal and $M$ an $A$-module.
We have a canonical map \[ \bigoplus_i M_{f_i}\longrightarrow\bigoplus_{i<j}M_{f_if_j},\; (x_i)_i\mapsto \left( \frac{x_i}{1}-\frac{x_j}{1} \right)_{i,j} \] and a canonical map \[ \alpha_M: M\longrightarrow\ker(\bigoplus_i M_{f_i}\longrightarrow\bigoplus_{i<j}M_{f_if_j}).\] For $X:=\mathrm{Spec}\,(A)$, $Z:=\mathrm{Spec}\,(A/I)$, $j: U:=X-Z\hookrightarrow X$ the open immersion and ${\mathcal F}$ the quasi-coherent ${\mathcal O}_X$-module corresponding to $M$, $\alpha_M$ corresponds to the adjunction ${\mathcal F}\longrightarrow j_*j^*{\mathcal F}$. Note that $\ker(\alpha_M)$ is the $I$-torsion submodule of $M$. The cokernel of $\alpha_M$ corresponds to the local cohomology $H^1_Z(X,{\mathcal F})$, c.f. \cite{local}. We say that $M$ is $I$-local if $\alpha_M$ is an isomorphism. A quasi-coherent ${\mathcal O}_X$-module ${\mathcal F}$ is in the essential image of $j_*$ if and only if ${\mathcal F}\longrightarrow j_*j^*{\mathcal F}$ is an isomorphism if and only if the $A$-module corresponding to ${\mathcal F}$ is $I$-local. If $n=1$ then $M$ is $I=(f_1)$-local if and only if $f_1$ acts invertibly on $M$.\\ We now formulate a special case of Proposition \ref{essimage} in terms of comodules. \begin{prop}\label{subcomod} For any $n\geq 0$ the category $\Modq{{\mathfrak{Z}}^n}$ is equivalent to the full subcategory of ${\mathrm{BP}}_*{\mathrm{BP}}$-comodules $M$ such that $I_nM=0$.\\ For any $0\leq n\leq N<\infty$ the category $\Modq{{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}}$ is equivalent to the full subcategory of ${\mathrm{BP}}_*{\mathrm{BP}}$-comodules $M$ such that $I_nM=0$ and $M$ is $I_{N+1}/I_n$-local as a ${\mathrm{BP}}_*/I_n$-module. \end{prop} \begin{remark} We know from (\ref{boing}) that $\Modq{{\mathfrak{Z}}^n}$ is equivalent to the category of ${\mathrm{BP}}_*{\mathrm{BP}}/I_n$-comodules. The alert reader will have noticed that we have not yet mentioned any graded comodules. 
This is not sloppy terminology; we really mean comodules without any grading even though the flat Hopf algebroids are all graded. However, it is easy to take the grading into account; in particular all results of this subsection have analogues for graded comodules, c.f. Remark \ref{grading}. \end{remark} \begin{proof} Fix $0\leq n<\infty$. The $1$-morphism ${\mathfrak{Z}}^n\hookrightarrow{\mathfrak{X}}_{FG}$ is representable and a closed immersion (in particular a monomorphism) because its base change along $V\longrightarrow{\mathfrak{X}}_{FG}$ is a closed immersion and being a closed immersion is $fpqc$-local on the base, \cite{ega42}, 2.7.1, $xii)$. Proposition \ref{essimage} identifies $\Modq{{\mathfrak{Z}}^n}$ with the full subcategory of $\Modq{{\mathfrak{X}}_{FG}}$ consisting of those ${\mathcal F}$ such that $Q^*{\mathcal F}\simeq i_{n,*}G$ for some $G\in\Modq{V_n}$ (with notations as in (\ref{diagram3})). Identifying, as in subsection \ref{modules}, $\Modq{{\mathfrak{X}}_{FG}}$ with the category of ${\mathrm{BP}}_*{\mathrm{BP}}$-comodules, ${\mathcal F}$ corresponds to some ${\mathrm{BP}}_*{\mathrm{BP}}$-comodule $M$ and $Q^*{\mathcal F}$ corresponds to the ${\mathrm{BP}}_*$-module underlying $M$. So the condition of Proposition \ref{essimage} is that the ${\mathrm{BP}}_*$-module $M$ is in the essential image of $i_{n,*}$, i.e. $M$ is a ${\mathrm{BP}}_*/I_n$-module, i.e. $I_nM=0$.\\ Now fix $0\leq n \leq N<\infty$. We apply Proposition \ref{essimage} to $i:{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}\longrightarrow{\mathfrak{X}}_{FG}$ which is representable and a quasi-compact immersion (in particular a monomorphism) because it sits in a cartesian diagram \[ \xymatrix{ V_n^{N+1} \ar[d]\ar[r]^-j & V \ar[d]^-Q \\ {\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1} \ar[r]^-i & {\mathfrak{X}}_{FG}, } \] c.f. (\ref{diagram2}), in which $j$ is a quasi-compact immersion and one uses \cite{ega42}, 2.7.1, $xi)$ as above.
Arguing as above, we are left with identifying the essential image of $j_*$ which, as explained at the beginning of this subsection, corresponds to the ${\mathrm{BP}}_*$-modules $M$ such that $I_nM=0$ and $M$ is $I_{N+1}/I_n$-local as a ${\mathrm{BP}}_*/I_n$-module. \end{proof} \begin{cor}\label{comodcat} Let $n\geq 0$ and let ${\mathrm{BP}}_*/I_n\longrightarrow R\neq 0$ be Landweber exact of height $N$ (hence $n\leq N \leq\infty$). Then $(R,\Gamma):=(R,R\otimes_{{\mathrm{BP}}_*}{\mathrm{BP}}_*{\mathrm{BP}}\otimes_{{\mathrm{BP}}_*}R)$ is a flat Hopf algebroid and its category of comodules is equivalent to the full subcategory of ${\mathrm{BP}}_*{\mathrm{BP}}$-comodules $M$ such that $I_nM=0$ and $M$ is $I_{N+1}/I_n$-local as a ${\mathrm{BP}}_*/I_n$-module. The last condition is to be ignored in case $N=\infty$. \end{cor} \begin{proof} By Theorem \ref{image}, $(R,\Gamma)$ is a flat Hopf algebroid with associated algebraic stack ${\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}$ (resp. ${\mathfrak{Z}}^n$ if $N=\infty$). So the category of $(R,\Gamma)$-comodules is equivalent to $\Modq{{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}}$ (resp. $\Modq{{\mathfrak{Z}}^n}$). Now use Proposition \ref{subcomod}. \end{proof} \begin{remark} The case $n=0$ of Corollary \ref{comodcat} corresponds to the situation treated in \cite{H1} where, translated into the present terminology, $\Modq{{\mathfrak{U}}^{N+1}}$ is identified as a {\em localisation} of $\Modq{{\mathfrak{X}}_{FG}}$. This can be done because $f:{\mathfrak{U}}^{N+1}\longrightarrow{\mathfrak{X}}_{FG}$ is flat, hence $f^*$ exact. To relate more generally $\Modq{{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}}$ to $\Modq{{\mathfrak{X}}_{FG}}$ it seems more appropriate to identify the former as a full subcategory of the latter as we did above. However, using Proposition 1.4 of {\em loc. 
cit.} and Proposition \ref{monothm} one sees that $\Modq{{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}}$ is equivalent to the localisation of $\Modq{{\mathfrak{X}}_{FG}}$ with respect to all morphisms $\alpha$ such that $f^*(\alpha)$ is an isomorphism where $f:{\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}\longrightarrow{\mathfrak{X}}_{FG}$ is the immersion. As $f$ is not flat for $n\geq 1$ this condition seems less tractable than the one in Corollary \ref{comodcat}. \end{remark} Of course, equivalences of comodule categories give rise to change of rings theorems and we refer to \cite{H1} for numerous examples (in the case $n=0$) and only point out the following, c.f. \cite{R2}, Theorem B.8.8 for the notation and a special case: If $n\geq 1$ and $M$ is a ${\mathrm{BP}}_*{\mathrm{BP}}$-comodule such that $I_nM=0$ and $v_n$ acts invertibly on $M$ then \[ \mathrm{Ext}^*_{{\mathrm{BP}}_*{\mathrm{BP}}}({\mathrm{BP}}_*,M)\simeq\mathrm{Ext}^*_{\Sigma(n)}({\mathbb{F}}_p[v_n,v_n^{-1}],M\otimes_{{\mathrm{BP}}_*} {\mathbb{F}}_p[v_n,v_n^{-1}]) . \] In fact, this is clear from the case $n=N$ of Corollary \ref{comodcat} applied to the obvious map ${\mathrm{BP}}_*/I_n\longrightarrow{\mathbb{F}}_p[v_n,v_n^{-1}]$ which is Landweber exact of height $n$.\\ To make a final point, in \cite{H1} we also find many of the fundamental results of \cite{L} generalised to Landweber exact algebras whose induced Hopf algebroids are presentations of our ${\mathfrak{U}}^{N+1}$. One may generalise these results further to the present case, i.e. 
to ${\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}$ for $n\geq 1$, but again we leave this to the reader and only point out an example: In the situation of Corollary \ref{comodcat} every non-zero graded $(R,\Gamma)$-comodule has a non-zero primitive.\\ To prove this, consider the comodule as a quasi-coherent sheaf ${\mathcal F}$ on ${\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1}$ and use that the primitives in question are $H^0({\mathfrak{Z}}^n\cap{\mathfrak{U}}^{N+1},{\mathcal F})\simeq H^0({\mathfrak{X}}_{FG},f_*{\mathcal F})\neq 0$; the non-vanishing follows because $f_*$ is faithful, together with the result of P. Landweber that every non-zero graded ${\mathrm{BP}}_*{\mathrm{BP}}$-comodule has a non-zero primitive. \section{The stack of formal groups}\label{stackformalgroups} In this section we take a closer look at the algebraic stacks associated with the flat Hopf algebroids $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}})$ and $({\mathrm{BP}}_*,{\mathrm{BP}}_*{\mathrm{BP}})$.\\ A priori, these stacks are given by the abstract procedure of stackification and in many instances one can work with this definition directly; the results of the previous sections are an example of this. For future investigations, e.g. those initiated in \cite{G}, it might be useful to have the genuinely geometric description of these stacks which we propose to establish in this section.\\ For this, we require a good notion of formal scheme over an arbitrary affine base as given by N. Strickland \cite{formalgrp} and we quickly recall some of his results now.\\ The category $X_{fs,{\mathbb{Z}}}$ of formal schemes over $\mathrm{Spec}\,({\mathbb{Z}})$ is defined to be the ind-category of $\mathrm{Aff}_{{\mathbb{Z}}}$ which we consider as usual as a full subcategory of the functor category $C:=\underline{\mathrm{Hom}}(\mathrm{Aff}_{{\mathbb{Z}}}^{op},\mathrm{Sets})$, c.f. \cite{formalgrp}, Definition 4.1 and \cite{SGA4}, expos\'e I, 8.
A formal ring is by definition a linearly topologised Hausdorff and complete ring and ${\mathrm{FRings}}$ denotes the category of formal rings with continuous ring homomorphisms. Any ring can be considered as a formal ring by giving it the discrete topology. There is a fully faithful functor ${\mathrm{Spf}}: {\mathrm{FRings}}^{{\mathrm{op}}}\longrightarrow X_{fs,{\mathbb{Z}}}\subset C$ \cite{formalgrp}, section 4.2 given by \[ {\mathrm{Spf}}(R)(S):=\mathrm{Hom}_{{\mathrm{FRings}}}(R,S)=\mbox{colim}_I \mathrm{Hom}_{{\mathrm{Rings}}}(R/I,S),\] the colimit being taken over the directed set of open ideals $I\subseteq R$.\\ In particular, any ring $R$ can be considered as a formal scheme over ${\mathbb{Z}}$ and we thus get the category $X_{fs,R}:=X_{fs,{\mathbb{Z}}}/{\mathrm{Spf}}(R)$ of formal schemes over $R$. For varying $R$, these categories assemble into an $fpqc$-stack $X_{fs}$ over $\mathrm{Spec}\,({\mathbb{Z}})$ which we call the stack of formal schemes \cite{formalgrp}, Remark 2.58, Proposition 4.51 and Remark 4.52.\\ Define $X_{fgr}$ to be the category of commutative group objects in $X_{fs}$. Then $X_{fgr}$ is canonically fibred over $\mathrm{Aff}_{{\mathbb{Z}}}$ and is in fact an $fpqc$-stack over $\mathrm{Spec}\,({\mathbb{Z}})$ because being a commutative group object can be expressed by the existence of suitable structure morphisms making appropriate diagrams commute. Finally, define $X\subseteq X_{fgr}$ to be the substack of those objects which are $fpqc$-locally isomorphic to $({\hat{\mathbb{A}}^1},0)$ as {\em pointed formal schemes} (of course, a formal group is considered as a pointed formal scheme via its zero section). It is clear that $X\subseteq X_{fgr}$ is in fact a substack and in particular is itself an $fpqc$-stack over $\mathrm{Spec}\,({\mathbb{Z}})$ which we will call the stack of formal groups.
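For orientation, we recall the two standard examples of objects of $X_R$ for an arbitrary ring $R$ (these are classical facts, included only for illustration): the formal additive and multiplicative groups \[ \hat{\mathbb{G}}_{a,R}={\mathrm{Spf}}(R[[t]]),\quad t\mapsto t\otimes 1+1\otimes t, \qquad \hat{\mathbb{G}}_{m,R}={\mathrm{Spf}}(R[[t]]),\quad t\mapsto t\otimes 1+1\otimes t+t\otimes t. \] Both are already isomorphic to $({\hat{\mathbb{A}}^1},0)$ as pointed formal schemes over $R$, so no $fpqc$-localisation is needed to see that they belong to $X_R$; over a ${\mathbb{Q}}$-algebra they even become isomorphic as formal groups via $t\mapsto\log(1+t)$.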
We will see in a minute that $X$ (unlike $X_{fgr}$) is in fact an algebraic stack.\\ Our first task will be to determine what formal schemes occur in the fibre category $X_R$ for a given ring $R$. This requires some notation:\\ For a locally free $R$-module $V$ of rank one we denote by $\hat{SV}$ the symmetric algebra of $V$ over $R$ completed with respect to its augmentation ideal. This $\hat{SV}$ is a formal ring. The diagonal morphism $V\longrightarrow V\oplus V$ induces a structure of formal group on ${\mathrm{Spf}}(\hat{SV})$. Indeed, for any faithfully flat extension $R\longrightarrow R^{'}$ with $V\otimes_R R^{'}\simeq R'$ we have ${\mathrm{Spf}}(\hat{SV})\times_{\mathrm{Spec}\,(R)} \mathrm{Spec}\,(R^{'})\simeq\hat{\mathbb{G}}_{a,R^{'}}$ in $X_{R^{'}}$. On the other hand, denote by $\Sigma(R)$ the set of isomorphism classes of pointed formal schemes in $X_R$. We have a map $\rho_R:{\mathrm{Pic}}(R)\longrightarrow\Sigma(R)\;,\; [V]\mapsto [{\mathrm{Spf}}(\hat{SV})]$. \begin{prop}\label{forms} For any ring $R$, the map $\rho_R:{\mathrm{Pic}}(R)\longrightarrow\Sigma(R)$ is bijective. \end{prop} \begin{remark} For suitable rings $R$ we can compare the above construction of the category of formal groups over $R$ to more traditional ones: If $R$ is pseudocompact and local \cite{conrad}, Definition 1.1.4 then, using Proposition \ref{forms}, one can check that $X_R$ is the groupoid of one dimensional commutative formal Lie groups over $R$ in the sense of \cite{conrad}, Definition 3.3.2. \end{remark} \begin{proofof} Proposition \ref{forms}. By definition, $\Sigma(R)$ is the set of $fpqc$-forms of the pointed formal scheme $({\hat{\mathbb{A}}^1},0)$ over $R$. 
We thus have a $\check{\mathrm{C}}$ech-cohomological description \[ \Sigma(R)\simeq{\check{H}}^1(R,\underline{\mathrm{Aut}\,}({\hat{\mathbb{A}}^1},0))=\mbox{colim}_{R\longrightarrow R^{'}} {\check{H}}^1(R^{'}/R,\underline{\mathrm{Aut}\,}({\hat{\mathbb{A}}^1},0)),\] where $G^0:=\underline{\mathrm{Aut}\,}({\hat{\mathbb{A}}^1},0)$ is the sheaf of automorphisms of the pointed formal scheme $({\hat{\mathbb{A}}^1},0)$ over $R$ and the colimit is taken over all faithfully flat extensions $R\longrightarrow R^{'}$. For an arbitrary $R$-algebra $R^{'}$ we can identify \[ G^{0}(R^{'})= \{ f\in R^{'}[[t]] \; | \; f(0)=0,\; f^{'}(0)\in (R^{'})^* \} \] with the multiplication of the right hand side being substitution of power series. We have a split epimorphism $\pi:G^{0}\longrightarrow\mathbb{G}_m$ given on points by $\pi(f):=f^{'}(0)$ with kernel $G^{1}:=\ker(\pi)$ and we define more generally for any $n\geq 1$, $G^n(R^{'}):= \{ f \in G^{0}(R^{'}) \; | \; f=t+ O(t^{n+1}) \}$. For any $n\geq 1$ we have an epimorphism $G^n\longrightarrow\mathbb{G}_a,\; f=t+\alpha t^{n+1}+O(t^{n+2})\mapsto\alpha$ with kernel $G^{n+1}$. One checks that the $G^n$ are a descending chain of normal subgroups in $G^0$ defining for every $R$-algebra $R^{'}$ a structure of complete Hausdorff topological group on $G^0(R^{'})$.\\ Using ${\check{H}}^1(R^{'}/R,\mathbb{G}_a)=0$, an approximation argument shows that ${\check{H}}^1(R^{'}/R,G^1)=0$ for any $R$-algebra $R^{'}$, hence the map $\phi:{\check{H}}^1(R,G^0)\longrightarrow{\check{H}}^1(R,\mathbb{G}_m)$ induced by $\pi$ is injective, and as $\pi$ is split we see that $\phi$ is a bijection. As ${\check{H}}^1(R,\mathbb{G}_m)\simeq{\mathrm{Pic}}(R)$ we have obtained a bijection $\Sigma(R)\simeq{\mathrm{Pic}}(R)$ and unwinding the definitions shows that it coincides with $\rho_R$.
\end{proofof} The stack $X$ carries a canonical line bundle:\\ For any ring $R$ and $G\in X_R$ we can construct the locally free rank one $R$-module $\omega_{G/R}$ as usual \cite{formalgrp}, Definition 7.1 and as its formation is compatible with base change it defines a line bundle $\omega$ on $X$. We remark without proof that ${\mathrm{Pic}}(X)\simeq{\mathbb{Z}}$, generated by the class of $\omega$.\\ We define a $\mathbb{G}_m$-torsor $\pi: {\mathfrak{X}}:=\underline{\mathrm{Spec}\,}(\oplus_{\nu\in{\mathbb{Z}}}\omega^{\otimes \nu})\longrightarrow X$ \cite{CA}, 14.2 and now check that ${\mathfrak{X}}$ is the algebraic stack associated with the flat Hopf algebroid $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}})$.\\ For any ring $R$, the category ${\mathfrak{X}}_R$ is the groupoid of pairs $(G/R,\; \omega_{G/R}\stackrel{\simeq}{\longrightarrow} R)$ consisting of a formal group $G/R$ together with a trivialization of the $R$-module $\omega_{G/R}$. The morphisms in ${\mathfrak{X}}_R$ are the isomorphisms of formal groups which respect the trivializations in an obvious sense. Since $\omega_{{\mathrm{Spf}}(\hat{SV})/R}\simeq V$ we see from Proposition \ref{forms} that any $G\in{\mathfrak{X}}_R$ is isomorphic to $({\hat{\mathbb{A}}^1},0)$ as a pointed formal scheme over $R$. This easily implies that the diagonal of ${\mathfrak{X}}$ is representable and affine. Now recall the affine scheme ${\mathrm{FGL}}\simeq\mathrm{Spec}\,({\mathrm{MU}}_*)$ \cite{formalgrp}, Example 2.6 parametrising formal group laws. We define $f:{\mathrm{FGL}}\longrightarrow{\mathfrak{X}}$ by specifying the corresponding object of ${\mathfrak{X}}_{{\mathrm{FGL}}}$ as follows: We take $G:={\hat{\mathbb{A}}^1}_{{\mathrm{FGL}}}={\mathrm{Spf}}({\mathrm{MU}}_*[[x]])$ with the group structure induced by a fixed choice of universal formal group law over ${\mathrm{MU}}_*$ together with the trivialization $\omega_{G/{\mathrm{MU}}_*}=(x)/(x^2)\stackrel{\simeq}{\longrightarrow} {\mathrm{MU}}_*$ determined by $x\mapsto 1$.
We then claim that $f$ is faithfully flat and thus ${\mathfrak{X}}$ is an algebraic stack with presentation $f$ (this will also imply that $X$ is an algebraic stack):\\ Given any $1$-morphism $\mathrm{Spec}\,(R)\longrightarrow{\mathfrak{X}}$ we can assume that the corresponding object of ${\mathfrak{X}}_R$ is given as $({\hat{\mathbb{A}}^1}_R,\; (x)/(x^2)\stackrel{\simeq}{\longrightarrow} R,\; x\mapsto u)$ with the group structure on $({\hat{\mathbb{A}}^1}_R,0)$ defined by some formal group law over $R$ and with some unit $u\in R^*$. Then $\mathrm{Spec}\,(R)\times_{{\mathfrak{X}}} {\mathrm{FGL}}$ parametrises isomorphisms of formal group laws with leading term $u$. This is well-known to be representable by a polynomial ring over $R$, hence it is faithfully flat.\\ The same argument shows that ${\mathrm{FGL}}\times_{{\mathfrak{X}}}{\mathrm{FGL}}\simeq{\mathrm{FGL}}\times_{\mathrm{Spec}\,({\mathbb{Z}})}\; {\mathrm{SI}}\simeq\mathrm{Spec}\,({\mathrm{MU}}_*{\mathrm{MU}})$ where ${\mathrm{SI}}$ parametrises strict isomorphisms of formal group laws \cite{R1}, Appendix A 2.1.4 and this establishes the first half of the following result. \begin{theorem}\label{mubp} 1) ${\mathfrak{X}}$ is the algebraic stack associated with the flat Hopf algebroid $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}})$.\\ 2) For any prime $p$, ${\mathfrak{X}}\times_{\mathrm{Spec}\,({\mathbb{Z}})}\mathrm{Spec}\,({\mathbb{Z}}_{(p)})$ is the algebraic stack associated with the flat Hopf algebroid $({\mathrm{BP}}_*,{\mathrm{BP}}_*{\mathrm{BP}})$.
\end{theorem} \begin{proof} The proof of 2) is identical to the proof of 1) given above except that to see that the obvious $1$-morphism $\mathrm{Spec}\,({\mathrm{BP}}_*)\longrightarrow{\mathfrak{X}}\times_{\mathrm{Spec}\,({\mathbb{Z}})}\mathrm{Spec}\,({\mathbb{Z}}_{(p)})$ is faithfully flat one has to use Cartier's theorem saying that any formal group law over a ${\mathbb{Z}}_{(p)}$-algebra is strictly isomorphic to a $p$-typical one, see for example \cite{R1}, Appendix A 2.1.18. \end{proof} \begin{remark}\label{grading} 1) We explain how the grading of ${\mathrm{MU}}_*$ fits into the above result. The stack ${\mathfrak{X}}$ carries a $\mathbb{G}_m$-action given on points by \[ \alpha\cdot(G/R,\; \phi:\omega_{G/R}\stackrel{\simeq}{\longrightarrow} R):= (G/R,\; \phi:\omega_{G/R}\stackrel{\simeq}{\longrightarrow} R\stackrel{\cdot\alpha}{\longrightarrow} R)\mbox{ for }\alpha\in R^* .\] This action can be lifted to the Hopf algebroid $({\mathrm{FGL}},{\mathrm{FGL}}\times{\mathrm{SI}})$ as in \cite{formalgrp}, Example 2.97 and thus determines a grading of the flat Hopf algebroid $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}})$. As observed in {\em loc.\ cit.} this is the usual (topological) grading except that all degrees are divided by 2.\\ 2) For any $n\geq 0$ we know from section \ref{stacksandhopf} and Theorem \ref{mubp}, 1) that \[ \mathrm{Ext}^n_{{\mathrm{MU}}_*{\mathrm{MU}}}({\mathrm{MU}}_*,{\mathrm{MU}}_*) \simeq {\mathrm{H}}^{n}({\mathfrak{X}},{\mathcal O}_{{\mathfrak{X}}}).\] As $\pi:{\mathfrak{X}}\longrightarrow X$ is affine its Leray spectral sequence collapses to an isomorphism ${\mathrm{H}}^n({\mathfrak{X}},{\mathcal O}_{{\mathfrak{X}}})\simeq{\mathrm{H}}^n(X,\pi_* {\mathcal O}_{{\mathfrak{X}}})\simeq\oplus_{k \in{\mathbb{Z}}}{\mathrm{H}}^n(X,\omega^{\otimes k})$.
The comparison of gradings given in 1) implies that this isomorphism restricts, for every $k\in{\mathbb{Z}}$, to an isomorphism \[ \mathrm{Ext}^{n,2k}_{{\mathrm{MU}}_*{\mathrm{MU}}}({\mathrm{MU}}_*,{\mathrm{MU}}_*)\simeq{\mathrm{H}}^n(X,\omega^{\otimes k}).\] In particular, we have ${\mathrm{H}}^*(X,\omega^{\otimes k})=0$ for all $k<0$.\\ 3) As $\pi:{\mathfrak{X}}\longrightarrow X$ is fpqc, the pull-back $\pi^*$ establishes an equivalence between $\Modq{X}$ and the category of quasi-coherent ${\mathcal O}_{{\mathfrak{X}}}$-modules equipped with a descent datum with respect to $\pi$, c.f. the beginning of subsection \ref{modules}. One checks that a descent datum on a given ${\mathcal F}\in\Modq{{\mathfrak{X}}}$ with respect to $\pi$ is the same as a $\mathbb{G}_m$-action on ${\mathcal F}$ compatible with the action on ${\mathfrak{X}}$ given in 1). Hence $\pi^*$ gives an equivalence between $\Modq{X}$ and the category of {\em evenly graded} ${\mathrm{MU}}_*{\mathrm{MU}}$-comodules.\\ 4) The referee suggests a different way of looking at 3): Since ${\mathfrak{X}}\longrightarrow X$ is a $\mathbb{G}_m$-torsor it is in particular $fpqc$ and hence the composition $\mathrm{Spec}\,({\mathrm{MU}}_*)\longrightarrow {\mathfrak{X}}\longrightarrow X$ is a presentation of $X$ and one checks that the corresponding flat Hopf algebroid is $({\mathrm{MU}}_*,{\mathrm{MU}}_*{\mathrm{MU}}[u^{\pm 1}])$, thereby justifying our ad hoc definition of $X$ in section \ref{prelim}. This again shows that $\Modq{X}$ is equivalent to the category of evenly graded ${\mathrm{MU}}_*{\mathrm{MU}}$-comodules, this time the grading being accounted for by the coaction of $u$.\\ 5) The analogues of 1)-4) above with $X$ (resp. ${\mathrm{MU}}$) replaced by $X\times_{\mathrm{Spec}\,({\mathbb{Z}})}\mathrm{Spec}\,({\mathbb{Z}}_{(p)})$ (resp. ${\mathrm{BP}}$) hold true.
\end{remark} The last issue we would like to address is the stratification of $X$ by the height of formal groups.\\ For every prime $p$ we put $Z_p^1:=X\times_{\mathrm{Spec}\,({\mathbb{Z}})} \mathrm{Spec}\,({\mathbb{F}}_p)\subseteq X$.\\ The universal formal group $G$ over $Z_p^1$ comes equipped with a relative Frobenius $F:G\longrightarrow G^{(p)}$ which can be iterated to $F^{(h)}:G\longrightarrow G^{(p^h)}$ for all $h\geq 1$.\\ For $h\geq 1$ we define $Z^h_p \subseteq Z^1_p$ to be the locus over which the $p$-multiplication of $G$ factors through $F^{(h)}$. Clearly, $Z_p^h\subseteq X$ is a closed substack, hence $Z^h_p$ is the stack of formal groups over $\mathrm{Spec}\,({\mathbb{F}}_p)$ which have height at least $h$. The stacks labeled ${\mathfrak{Z}}^n$ ($n\geq 1$) in section \ref{ringchange} are the preimages of $Z_p^n$ under $\pi\times\mathrm{id}:{\mathfrak{X}}\times_{\mathrm{Spec}\,({\mathbb{Z}})}\mathrm{Spec}\,({\mathbb{Z}}_{(p)})\longrightarrow X\times_{\mathrm{Spec}\,({\mathbb{Z}})}\mathrm{Spec}\,({\mathbb{Z}}_{(p)})$.\\ For any $n\geq 1$ we define the (non-closed) substack $Z^n:=\bigcup_{p\mbox{\small prime}}Z_p^n\subseteq X$ with complement $U^n:=X-Z^n$.\\ If ${\mathrm{MU}}_*\longrightarrow B$ is a Landweber exact ${\mathrm{MU}}_*$-algebra which has height $n\geq 1$ at every prime as in \cite{H1}, section 7 then the stack theoretic image of $\mathrm{Spec}\,(B)\longrightarrow\mathrm{Spec}\,({\mathrm{MU}}_*)\longrightarrow{\mathfrak{X}}$ is the preimage of $U^n$ under $\pi: {\mathfrak{X}}\longrightarrow X$ which we will write as ${\mathfrak{U}}^n:=\pi^{-1}(U^n)\subseteq{\mathfrak{X}}$. This can be checked as in section \ref{ringchange} and shows that the equivalences of comodule categories proved in {\em loc. cit.} are again a consequence of the fact that the relevant algebraic stacks are $1$-isomorphic. 
We leave the details to the reader.\\ To conclude we would like to point out the following curiosity:\\ As complex ${\mathrm{K}}$-theory is Landweber exact of height $1$ over ${\mathrm{MU}}_*$ we know that the flat Hopf algebroid $({\mathrm{K}}_*,{\mathrm{K}}_*{\mathrm{K}})$ has ${\mathfrak{U}}^1$ as its associated algebraic stack. So J. Adams' computation of $\mathrm{Ext}^1_{{\mathrm{K}}_*{\mathrm{K}}}({\mathrm{K}}_*,{\mathrm{K}}_*)$ implies that for any integer $k\geq 2$ we have \[ |{\mathrm{H}}^1(U^1,\omega^{\otimes k})|=2\cdot\mbox{denominator}\;(\zeta(1-k)),\] where $\zeta$ is the Riemann zeta function and we declare the denominator of $0$ to be $1$. To check this one uses Remark \ref{grading}, 2) with $X$ replaced by $U^1$, \cite{switzer}, Proposition 19.22 and \cite{neukirch}, VII, Theorem 1.8.\\ Unfortunately, the orders of the (known) groups ${\mathrm{H}}^2(U^1,\omega^{\otimes k})$ have nothing to do with the numerators of Bernoulli numbers.\\
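As a concrete instance of the displayed formula (a standard check, included for illustration): for $k=2$ we have $\zeta(-1)=-\frac{1}{12}$, so \[ |{\mathrm{H}}^1(U^1,\omega^{\otimes 2})|=2\cdot 12=24, \] in agreement with the order of the image of the $J$-homomorphism in the third stable stem.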
\section{Introduction} Young shell-type supernova remnants (SNRs) with strong non-thermal emission, such as RX J1713.7-3946, HESS J1731-347 and Vela Jr., have been in the spotlight in recent years due to their presumed close relation to the origin of Galactic cosmic rays (CRs). These remnants show high luminosities in TeV gamma-rays and synchrotron-dominated X-rays with no sign of thermal emission from the ejecta or shell. Vela Jr.\footnote{We note that the Vela Jr.~SNR is also referred to as G266.2-1.2.}~is a typical example of a TeV-bright SNR where emission in the X-ray band is strongly non-thermal. It lies along the line-of-sight of the crowded Vela complex, comprising the Vela SNR, the Vela-X pulsar wind nebula (PWN), the Vela pulsar, the Pencil Nebula, and the Puppis A SNR. While Vela Jr.~was first discovered by the \textit{ROSAT} satellite at X-ray energies above $\sim 1.3$\,keV \citep{Aschenbach1998}, it is coincident with the southeastern part of the more diffuse Vela SNR and is totally obscured by the latter in the soft X-ray band. The age of and distance to this remnant have been debated, with the currently most widely accepted values being $d_\mathrm{SNR} \gtrsim 750$~pc and $t_\mathrm{age} \sim 1700 - 4300$~yr \citep[e.g.,][]{Slane2001,Katsuda2008}. The progenitor event has been suggested to be the core collapse of a massive star, due to the discovery of the central compact object AX J0851.9-4617.4 near the center of the SNR \citep{Slane2001} with a consistent age and distance \citep{Kargaltsev2002}, although no pulsation has yet been detected. Detection of the 1.157~MeV radioactive decay line of the short-lived $^{44}$Ti along the same line-of-sight has been claimed \citep{Iyudin1998}, suggestive of an extremely short distance of about 200~pc and a young age of around 700~yr.
However, the line detection suffers from low statistical significance at the $2-4 \sigma$ level \citep{Schonfelder2000}, and from the non-detection of the associated Ca(Sc) K$\alpha$ line in X-rays by the \textit{Advanced Satellite for Cosmology and Astrophysics} (\textit{ASCA}) GIS \citep{Slane2001} and \textit{Suzaku} XIS \citep{Hiraga2009} imaging spectrometers. Moreover, using the distance and age deduced directly from proper-motion measurements \citep{Katsuda2008}, it can be shown that the initial mass of $^{44}$Ti would have to be $\sim$ 1~$\Msun$, orders of magnitude larger than expected from supernova nucleosynthesis models. Hence, we consider the association of the $^{44}$Ti flux with Vela Jr.~unlikely. Sharp synchrotron filamentary structures in X-rays have been resolved by \textit{Chandra} in the bright northwestern shell \citep{Bamba2005}. An explanation of the small effective width of these filaments by fast synchrotron cooling of CR electrons requires a local magnetic field strength of about 100~$\mu$G \citep[e.g.,][]{Berezhko2009}, probably indicating magnetic field amplification (MFA) induced by efficient CR ion acceleration \citep*[e.g.,][]{Bell2004,VBE2008,BOE2011}. The radio remnant \citep{Combi1999, Stupar2005} is relatively faint ($\sim 30-50$~Jy at 1~GHz) with a flat spectral index $\sim 0.3$. The morphology is ring-like and shows a good overall correlation with the X-ray remnant, suggesting a common synchrotron origin for the radio and X-ray emission. Spatially extended emission in the GeV and TeV bands has also been discovered by \textit{Fermi LAT} \citep{Tanaka2011} and \textit{H.E.S.S.} \citep{Aharonian2007b} along the line-of-sight. The gamma-ray spectra can be described by a single power-law with index $\sim 1.85$ and a cutoff energy at a few TeV.
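The order-of-magnitude argument against the $^{44}$Ti association can be sketched numerically. The following snippet is an illustrative back-of-the-envelope calculation, not part of our modeling code; the line flux and half-life are assumed values (the flux is the commonly quoted COMPTEL measurement), while the distance and age follow the proper-motion estimates discussed above.

```python
import math

# Back-of-the-envelope estimate of the initial 44Ti mass implied by the
# claimed 1.157 MeV line flux, IF Vela Jr. sits at the proper-motion
# distance.  All inputs below are illustrative assumptions.
F_LINE  = 3.8e-5        # line flux [photons cm^-2 s^-1] (assumed COMPTEL value)
T_HALF  = 60.0          # 44Ti half-life [yr] (assumed; older works used ~85 yr)
D_PC    = 750.0         # distance [pc], proper-motion estimate
AGE_YR  = 1700.0        # lower end of the quoted age range [yr]

PC_CM  = 3.086e18       # cm per parsec
YR_S   = 3.156e7        # seconds per year
AMU_G  = 1.6605e-24     # atomic mass unit [g]
MSUN_G = 1.989e33       # solar mass [g]

lam = math.log(2.0) / (T_HALF * YR_S)            # decay constant [s^-1]
d_cm = D_PC * PC_CM
decays_per_s = F_LINE * 4.0 * math.pi * d_cm**2  # one line photon per decay assumed
m_now = decays_per_s / lam * 44.0 * AMU_G / MSUN_G   # present-day 44Ti mass [Msun]
m_initial = m_now * math.exp(lam * AGE_YR * YR_S)    # mass at the explosion [Msun]
```

With these inputs the present-day mass is tiny ($\sim 10^{-7}~\Msun$), but the decay correction over $\gtrsim 1700$~yr inflates the initial mass to of order a solar mass or more, far above the $\sim 10^{-4}~\Msun$ yields of core-collapse nucleosynthesis models.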
As we will discuss in more detail below, Vela Jr.~shares some striking similarities with another well-studied non-thermal SNR RX J1713.7-3946\ in both spectral and morphological characteristics. We believe these similarities are important for understanding the physical nature of Vela Jr. Recently, \citet{Tanaka2011} have fitted the broadband spectra of Vela Jr.~using both hadronic ($\pi^0$-decay\ dominated) and leptonic (IC dominated) scenarios. They assumed single power-law spectra with exponential cut-offs for the underlying proton and electron distributions. To obtain the best fits, they varied the spectral indices of the particles and the environmental parameters including the downstream (DS) B-field strength and gas density. They concluded that reasonable spectral fits can be obtained under both scenarios by invoking different parameter choices. In some cases, however, the interpretation of their parameters can be non-trivial, such as an unrealistically large energy in the accelerated proton population ($\sim 10^{51}$~erg) in the hadronic model for a low-density ($n \sim 0.01$~cm$^{-3}$) ambient medium, and a small downstream B-field ($B_2 \sim 10~\mu$G) required in the leptonic case which seems to contradict the sharpness of the X-ray filaments found in the brighter NW rim (but see our arguments for such a possibility in Section~\ref{conclude}). While DSA models predict the acceleration of both electrons {\it and} ions, determining the total energy appearing in relativistic particles depends crucially on the correct interpretation of which particles are responsible for any observed gamma-ray emission. In addition, models for which the gamma-ray emission is produced predominantly by hadrons necessarily produce observable signatures in the luminosity and ionization state of the thermal X-ray emission.
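The energy-budget issue mentioned above follows from a one-line estimate: the $\pi^0$-decay luminosity is roughly $W_p/(3\,t_{pp})$ with $t_{pp} \simeq 5.3\times10^{7}\,(n/\mathrm{cm^{-3}})^{-1}$~yr, so the proton energy needed to supply a given gamma-ray luminosity scales as $1/n$. A hedged sketch (the gamma-ray energy flux adopted below is a representative assumed value, not a number taken from a specific fit):

```python
import numpy as np

PC, YR = 3.086e18, 3.156e7   # cm per parsec, seconds per year

def required_proton_energy(F_gamma, d_pc, n_gas):
    """CR proton energy [erg] needed for pi0-decay to supply an observed
    gamma-ray energy flux F_gamma [erg/cm^2/s] at distance d_pc [pc] in a
    medium of density n_gas [cm^-3].
    Uses L_gamma ~ W_p / (3 t_pp), with t_pp ~ 5.3e7 yr / n_gas."""
    L_gamma = 4.0 * np.pi * (d_pc * PC)**2 * F_gamma
    t_pp = 5.3e7 * YR / n_gas
    return 3.0 * L_gamma * t_pp

# Assumed representative values:
W_p = required_proton_energy(F_gamma=1e-11, d_pc=750, n_gas=0.01)
print(f"W_p ~ {W_p:.1e} erg")
```

For $n \sim 0.01$~cm$^{-3}$ the required proton energy comes within a factor of a few of the entire SN kinetic energy, which is the difficulty noted above.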
We believe that disentangling the emission origin requires a more self-consistent\ SNR model, as we have developed and used to model SNR RX J1713.7-3946\ \citep[e.g.,][and references therein]{ESPB2012,LEN2012}. In our {\it CR-hydro-NEI}\ model, the thermal X-ray emission is determined self-consistently\ using the dynamical information of the forward shock. Furthermore, the SNR dynamics is calculated self-consistently\ with DSA including the feedback effects from NL-DSA on the SNR evolution. We include spatial profiles of the multi-wavelength emission which provide more stringent constraints on the parameters and lead to a better differentiation between the $\pi^0$-decay\ and IC-dominated cases. Within the approximations of our one-dimensional model, we find that leptonic models clearly fit the broadband observations better than $\pi^0$-decay\ ones in a fashion strikingly similar to broadband models of SNR RX J1713.7-3946.\footnote{As in our models of SNR RX J1713.7-3946, we only calculate emission from the shocked region between the contact discontinuity\ (CD) and FS and neglect emission from the reverse shock. Including additional thermal X-ray emission from the shocked ejecta would strengthen the case for a leptonic dominated model for the $\gamma$-ray\ emission.} We emphasize that, regardless of the approximations intrinsic to our model, our results clearly show that including a self-consistent calculation of the thermal X-ray emission is essential for any broadband model of a young SNR. For Vela Jr., as for SNR RX J1713.7-3946, hadronic scenarios are excluded with high confidence in homogeneous models.
This paper is organized as follows: The first section provides a brief description of the latest version of the {\it CR-hydro-NEI}\ simulation code we use as our modeling platform; the second section describes our models for Vela Jr.~and the various constraints from the observational data; and in the last part, we discuss the implications of our models for our understanding of the environment of Vela Jr., the acceleration of CR particles at the blast wave, and the possible production mechanism(s) of the observed broadband emission from radio to the TeV band. Some concluding remarks follow. \section{The \textit{CR-hydro-NEI} Simulation Code} When it comes to modeling emission from young shell-type SNRs with strong forward shocks, it is very important to take into account the non-linear aspects of DSA and its coupling to the hydrodynamics and plasma conditions in the shocked gas. The high Mach number shocks in young SNRs are expected to be efficient particle accelerators and with efficient DSA, the CR production, shock structure, shocked gas temperature, and magnetic fields are coupled and feedback effects cannot be ignored. Here we employ a {\it CR-hydro-NEI}\ code which can effectively model young, shell-type SNRs like Vela Jr.~with these non-linear effects accounted for self-consistently. In this section, we briefly describe a generalized version of the {\it CR-hydro-NEI}\ code that includes several updates and additional functions not included in the description given in \citet{LEN2012}. \subsection{Coupled SNR hydrodynamics, NL-DSA, and NEI} The development of the {\it CR-hydro-NEI}\ code used here has been described in detail in a number of recent papers \citep[e.g.,][]{EPSBG2007,PES2009,PSRE2010,EPSR2010,ESPB2012}, with the most recent ``generalized" version given in \citet{LEN2012}. Briefly, the SNR hydrodynamics are modeled with a one-dimensional hydro simulation based on VH-1 \citep[e.g.,][]{BE2001}.
The SNR simulation provides the evolving shock speed, sonic and Alfv\'en\ Mach numbers, and other quantities necessary for the shock acceleration calculation. The NL-DSA calculation is done with a semi-analytic\ solution based largely on the work of P. Blasi and co-workers \citep[e.g.,][]{BGV2005,CBAV2009} \citep[see][for a full list of references]{LEN2012}. With suitable parameters, the semi-analytic\ calculation provides the shock compression ratio, amplified magnetic field, and full proton and electron spectra which are used to calculate the broadband continuum radiation from synchrotron, \brem, IC, and $\pi^0$-decay\ emission. The SNR evolution is coupled to the CR production by modifying the standard hydrodynamic equations through a change in the equation of state from the influence of relativistic\ CRs, energy loss from escaping CRs, and magnetic pressure.\footnote{We note that only a ``scalar" magnetic pressure is used in our 1D hydro simulation.} The changes produced in the hydrodynamics by NL-DSA also modify the evolution and ionization state of the shocked thermal plasma and these changes are self-consistently\ included in our non-equilibrium ionization\ (NEI) calculation of the thermal X-ray emission. We thus obtain a model of an evolving SNR where the non-thermal continuum and thermal line emission are calculated self-consistently. Treating thermal and non-thermal processes consistently is required when CR production is efficient because the production of relativistic\ CRs can strongly influence the density and temperature of the shock-heated thermal plasma \citep[e.g.,][]{Ellison2000}. In such a self-consistent calculation, it is not possible, for instance, to adjust parameters so that $\pi^0$-decay\ matches GeV-TeV observations without modifying the fit to thermal X-ray line emission. As we have demonstrated in our fits to SNR RX J1713.7-3946\ \citep[e.g.,][]{ESPB2012,LEN2012}, this coupling strongly constrains broadband emission models. 
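One way to see why efficient CR production feeds back on the hydrodynamics is through the effective adiabatic index: relativistic CRs behave as a $\gamma = 4/3$ fluid, and softening the equation of state raises the shock compression. A minimal illustration using the standard Rankine-Hugoniot ratio (this is a textbook single-fluid sketch, not the semi-analytic NL-DSA solution used in the code):

```python
def compression_ratio(mach, gamma):
    """Rankine-Hugoniot density compression ratio for sonic Mach
    number `mach` and adiabatic index `gamma` (single fluid, no
    energy escape)."""
    return (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)

# Strong-shock limits:
r_gas = compression_ratio(1e3, 5.0 / 3.0)   # pure thermal gas -> 4
r_cr  = compression_ratio(1e3, 4.0 / 3.0)   # CR-dominated     -> 7
print(r_gas, r_cr)
```

In the full NL-DSA treatment, energy escape through the free escape boundary pushes the total compression ratio even beyond the $\gamma = 4/3$ limit shown here.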
\subsection{Precursor CR populations and emission} To determine the CR proton distribution function $f^\mathrm{cr}(x,p)$ at a certain position $x$ in the precursor upstream of the forward shock, we follow the recipe described in \citet{LEN2012} [see equation~(17)] and references therein. As for the electrons, we implicitly assume that they do not affect the shock structure and dynamics due to their low energy density compared to the protons, which we have checked to be the case a posteriori for all models presented here. We also assume that radiative cooling of the electrons occurs mainly in the downstream region where the B-field is highest and the electrons spend most of their time scattering per shock-crossing. As a result, this allows us to calculate the electron distribution in the precursor easily using the precursor profile and the distribution function at the subshock in the same way as for the protons \textit{after} the DSA solution has been obtained at every time step. At each position $x$ in the precursor, we read off the local environment variables such as the gas density, the B-field strength and the temperature as provided by the DSA solution to calculate the local photon emissivity. The total emission spectrum from the CR precursor is thus obtained by an integration over the precursor volume. \subsection{Secondary particle production and emission in the SNR shell} Accelerated protons and heavier ions will interact with the background gas and produce charged pions through p-p interactions. These pions further decay into secondary particles such as e$^+$, e$^\mathrm{-}$ and $\nu$'s. The charged secondaries can accumulate in the SNR shell and radiate as the shock propagates into the upstream medium. We have implemented this process in the {\it CR-hydro-NEI}\ code by following the continuous production of the secondaries in the post-shock region, as well as the adiabatic and radiative losses of the e$^+$--e$^-$. 
We only consider secondary production and radiation in the post-shock region because the density and magnetic field are considerably higher than in the shock precursor. The relevant inclusive secondary cross-sections are obtained using the parameterized model by \citet{Kamae06}, as we do for the gamma-rays. The contribution from heavy-ion interactions such as p-He is taken into account via an effective enhancement factor. These secondaries contribute to the photon emissivity via synchrotron, IC and bremsstrahlung, and their contribution is estimated at the end of a simulation. We note that there is a possibility that these secondary e$^+$--e$^-$\ can be injected into DSA and be `reaccelerated' in the same manner as the primary electrons. This is especially true for those generated in the CR precursor. However, since most of the secondary particles are produced and accumulated in the DS region, and the probability of these populations traveling back to the shock from their production sites in order to be injected into acceleration is relatively low, we will ignore the process of reacceleration. We will show that, whilst the secondary particles do not contribute significantly to the volume-integrated broadband photon spectrum for all models presented in this work, the production of these particles in the shocked gas and their accumulation can have important effects on the spatial variation of synchrotron emission in some specific X-ray bands for a hadronic model. This result will be discussed in detail in Section~\ref{section:xray}. \subsection{Multi-wavelength emission profiles} Using the spatial information from the {\it CR-hydro-NEI}\ code, it is relatively straightforward to obtain model predictions for multi-wavelength emission profiles. Line-of-sight projection effects are calculated assuming spherical symmetry.
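For a spherically symmetric emissivity $\epsilon(r)$, the projection reduces to $S(b) = 2\int_b^{R} \epsilon(r)\, r\, \mathrm{d}r/\sqrt{r^2-b^2}$ at impact parameter $b$. A minimal numpy sketch of this step together with a Gaussian PSF smoothing (the thin emitting shell below is a toy emissivity, not a model output):

```python
import numpy as np

def project(radii, emissivity, impact):
    """Line-of-sight projection of a spherically symmetric emissivity:
    S(b) = 2 * integral_b^R eps(r) r dr / sqrt(r^2 - b^2)."""
    S = np.zeros_like(impact)
    for i, b in enumerate(impact):
        mask = radii > b
        r = radii[mask]
        integrand = emissivity[mask] * r / np.sqrt(r**2 - b**2)
        # trapezoidal rule over the (possibly empty) masked grid
        S[i] = 2.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return S

def smooth(profile, x, sigma):
    """Convolve a 1-D profile with a Gaussian PSF of width sigma."""
    kern = np.exp(-0.5 * ((x - x.mean()) / sigma)**2)
    kern /= kern.sum()
    return np.convolve(profile, kern, mode="same")

r = np.linspace(0.0, 1.0, 500)
eps = np.where(r > 0.85, 1.0, 0.0)      # toy emitting shell, 0.85 < r < 1
b = np.linspace(0.0, 1.2, 200)
S = project(r, eps, b)
S_psf = smooth(S, b, sigma=0.06)        # mimic a ~0.06 deg PSF

print("peak at b =", b[np.argmax(S)])
```

Even this toy example shows the characteristic limb brightening: the projected profile peaks near the inner edge of the emitting shell, where the chord through the shell is longest.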
The projected radial (i.e., outward from the SNR center) profiles, integrated over a set of chosen wavebands, are convolved with gaussian kernels to mimic the point-spread-functions of various instruments and facilitate direct comparison with data. \section{Models} \label{sec:models} The radio-to-TeV spectral and morphological features of Vela Jr.~largely resemble SNR {RX J1713.7-3946}. The observed similarities between Vela Jr.~and SNR RX J1713.7-3946\ include: (i) relatively dim radio emission with a flat spectral index; (ii) radio and X-ray emission which are predominately synchrotron in origin with ring-like morphologies, and with no evidence of thermal emission from either the shell or the ejecta (central) region; (iii) luminous in TeV energies, but relatively faint in the GeV band when compared to other gamma-ray SNRs, especially the middle-aged remnants like W51C, W44, and IC 443 \citep[e.g.,][]{AbdoEtalW51C2009, AbdoEtalW442010, AbdoEtalIC4432010}; and (iv) TeV emission is spatially resolved to a shell-like structure and correlates well with the X-ray and radio images. As might be expected, the observational similarities between Vela Jr.~and SNR RX J1713.7-3946\ lead naturally to a similar conclusion from our {\it CR-hydro-NEI}\ model that IC dominates the GeV-TeV emission process. \subsection{Input parameters} We have constructed two representative models: a ``hadronic" model where the GeV-TeV emission is $\pi^0$-decay\ dominated, and a ``leptonic" model where it is IC dominated. 
For both models we assume: (i) the associated supernova (SN) had a kinetic energy of $E_\mathrm{SN} = 10^{51}$~erg; (ii) the age of the remnant is 2500~yr to match the observed size of the SNR at the distances assumed in Table~\ref{table:param}; (iii) the injection parameter which determines the acceleration efficiency of DSA is set at $\chi_\mathrm{inj} \equiv p_\mathrm{inj}/p_\mathrm{th} + u_2/c = 3.6$, corresponding to a fraction $\eta \sim 2-3 \times 10^{-4}$ of thermal particles being injected in DSA at any time \citep[note that our $\chi_\mathrm{inj}$ is $\xi$ in][]{BGV2005}; (iv) we have conservatively assumed that the electron temperature equilibrates with the proton temperature through Coulomb collisions unless otherwise mentioned; (v) the spectrum of the background photon fields that are up-scattered by the relativistic electrons to gamma-ray energies (IC) is taken from \citet{PMS2006} at the position of the SNR, which contains the cosmic microwave background (CMB) and the local interstellar radiation fields in the infrared and optical bands; (vi) the upstream gas temperature is fixed at $T_0 = 10^4$\,K regardless of the gas density gradient; (vii) a number fraction of helium $f_\mathrm{He}=0.0977$ is specified for the plasma, which contributes additionally to the various photon emission. However, other than this scaling, we ignore the acceleration of heavy ions in this paper; (viii) we assume that the ambient B-field is oriented quasi-parallel to the shock normal over the whole surface, that is, we assume that the magnetic field geometry is unimportant as would be the case for efficient DSA where the self-generated magnetic turbulence mediating DSA is totally tangled in the CR precursor and downstream from the shock, i.e., the Bohm limit for CR diffusion is obtained. For full details of the formulation, see \citet{LEN2012}. 
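The quoted relation between $\chi_\mathrm{inj}$ and $\eta$ follows from the thermal-leakage prescription of \citet{BGV2005}, $\eta = \frac{4}{3\sqrt{\pi}}\,(R_\mathrm{sub}-1)\,\chi_\mathrm{inj}^{3}\,e^{-\chi_\mathrm{inj}^{2}}$. A quick numerical check (the subshock compression ratios are assumed representative values):

```python
import math

def injection_fraction(chi_inj, r_sub):
    """Fraction of thermal particles injected into DSA in the
    thermal-leakage prescription of Blasi et al. (2005):
    eta = 4/(3 sqrt(pi)) * (R_sub - 1) * chi^3 * exp(-chi^2)."""
    return (4.0 / (3.0 * math.sqrt(math.pi)) * (r_sub - 1.0)
            * chi_inj**3 * math.exp(-chi_inj**2))

for r_sub in (3.0, 3.5, 4.0):
    print(r_sub, f"{injection_fraction(3.6, r_sub):.1e}")
```

For $\chi_\mathrm{inj} = 3.6$ and subshock compression ratios in the range $3-4$, this reproduces the few $\times 10^{-4}$ injection fraction quoted above.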
An important difference between our hadronic and leptonic models is that the hadronic model evolves in a uniform upstream medium, typical of a Type Ia SN, and the leptonic model evolves in a wind cavity, typical of a core-collapse SN. While the observational evidence favors a core-collapse\ origin for the Vela Jr.~SNR, we could not find a set of core-collapse\ parameters that would produce a reasonable fit to the broadband data for a hadronic model radiating in a pre-SN wind. This is because the hadronic model needs a high ambient gas density and high B-field strength in order to simultaneously explain the radio/X-ray synchrotron emission and the gamma-ray flux. We thus used a uniform ISM model for our hadronic fit. On the other hand, a leptonic model requires a lower ambient gas density to suppress the $\pi^0$-decay\ gamma-rays, and this is naturally realized by a wind bubble environment for a remnant sitting close to the Galactic plane. More details will be given in Section~\ref{section: results} where we will further discuss this point. Consistent with the different SN scenarios, we assume an ejecta mass $M_\mathrm{ej}=1.4\,\Msun$ for our hadronic model and $M_\mathrm{ej}=3\,\Msun$ for our leptonic model. In Table~\ref{table:param}, we summarize the main input parameters and output quantities measured at the end of the simulation for each model. We constrain the input parameters via a satisfactory fit to the observed broadband spectrum from radio to TeV energies, as well as a general agreement of the dynamical variables (e.g., radius of the forward shock) with current observations. The distance to the SNR was recently constrained by \citet{Katsuda2008} using proper-motion measurements over a 7-year time period. They provided an estimate equivalent to $d_\mathrm{SNR} \gtrsim (750 \pm 205) \times (v_\mathrm{sk}/3000\ \mathrm{km}\ \mathrm{s}^{-1})$~pc. We adopt values of $d_\mathrm{SNR}$ for our models that are consistent with this limit.
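The adopted age can be cross-checked against the angular size and the proper-motion shock speed: for an expansion law $R \propto t^{m}$, the age is $t_\mathrm{age} = m\,R_\mathrm{FS}/v_\mathrm{sk}$, with $m$ between $\sim 0.4$ (Sedov) and 1 (free expansion). A quick estimate (the $\sim 1^{\circ}$ angular radius is taken as an assumption here):

```python
import math

PC = 3.086e18   # cm per parsec
YR = 3.156e7    # seconds per year

def age_estimate(theta_deg, d_pc, v_kms, m):
    """SNR age [yr] for angular radius theta_deg at distance d_pc,
    shock speed v_kms, and expansion index m (R ~ t^m)."""
    R = d_pc * PC * math.tan(math.radians(theta_deg))
    return m * R / (v_kms * 1e5) / YR

# Assumed: ~1 deg angular radius, d = 750 pc, v_sk = 3000 km/s
t_sedov = age_estimate(1.0, 750.0, 3000.0, 0.4)
t_free  = age_estimate(1.0, 750.0, 3000.0, 1.0)
print(f"{t_sedov:.0f} - {t_free:.0f} yr")
```

The resulting bracket is consistent with the $1700-4300$~yr range quoted in the Introduction and comfortably contains the 2500~yr adopted for the models.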
\section{Results and Discussion} \label{section: results} \subsection{Dynamics} \begin{figure} \centering \includegraphics[width=9cm]{fig1.eps} \caption{ Time evolution of output quantities for our leptonic model (solid lines) and hadronic model (dashed lines). From the top panel: (1) shock radius $R_\mathrm{FS}$; (2) shock velocity $V_\mathrm{sk}$; (3) sonic Mach number $M_\mathrm{S}$ (black) and Alfv\'{e}n Mach number $M_\mathrm{A}$ (red) of the forward shock; and (4) B-field strength in the far upstream $B_0$ (black), right in front of the subshock $B_1$ (green), and immediately behind it $B_2$ (red). } \label{fig:model_evo_1} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{fig2.eps} \caption{Continuation of Figure~\ref{fig:model_evo_1} where again, solid curves are our leptonic model and dashed curves are our hadronic model. From the top panel: (1) compression ratios at the subshock $R_\mathrm{sub}$ (red) and total $R_\mathrm{tot}$ (black); (2) fraction of $E_\mathrm{SN}$ converted into relativistic particles $f_\mathrm{SN}$ for the total accelerated (black) and escaping (red) populations; (3) acceleration efficiency $\epsilon _\mathrm{DSA}$; and (4) maximum momentum, $p_\mathrm{max}$, attained by the accelerated protons (black) and electrons (red). } \label{fig:model_evo_2} \end{figure} We plot the time evolution of selected important physical quantities for the hadronic and leptonic models in Figures~\ref{fig:model_evo_1} and \ref{fig:model_evo_2}. The very different undisturbed upstream environments give rise to large differences in the dynamical behavior of the two models. 
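Much of the dynamical contrast seen in these figures is a swept-up-mass effect: a uniform medium supplies mass $\propto R^{3}$, while a $\rho \propto r^{-2}$ wind supplies mass only $\propto R$. A rough comparison (all parameter values below are illustrative assumptions, not the model inputs of Table~\ref{table:param}):

```python
import math

PC   = 3.086e18   # cm per parsec
MSUN = 1.989e33   # g
M_P  = 1.673e-24  # proton mass [g]

def swept_mass_uniform(R_pc, n0):
    """Swept-up mass [Msun] inside radius R for a uniform medium of
    proton density n0 [cm^-3] (mean mass 1.4 m_p per proton)."""
    R = R_pc * PC
    return (4.0 / 3.0) * math.pi * R**3 * 1.4 * M_P * n0 / MSUN

def swept_mass_wind(R_pc, Mdot_msun_yr, v_w_kms):
    """Swept-up mass [Msun] inside radius R for a steady wind,
    rho(r) = Mdot / (4 pi r^2 v_w), i.e. M_sw = Mdot * R / v_w."""
    R = R_pc * PC
    return Mdot_msun_yr * (R / (v_w_kms * 1e5)) / 3.156e7

R = 13.0  # pc, illustrative current shock radius
print(swept_mass_uniform(R, 0.1))        # uniform ISM case
print(swept_mass_wind(R, 1e-5, 20.0))    # slow red-supergiant-like wind
```

Doubling the radius multiplies the swept-up mass by 8 in the uniform case but only by 2 in the wind, which is why the wind-cavity shock decelerates so much more slowly.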
With the lower B-field and gas density in the wind cavity for the leptonic model, the following are expected and observed compared to the hadronic model: (1) the shock sweeps up material at a much lower rate, so that the shock speed and Mach numbers decay more slowly, and the shock radius increases faster, with time; (2) the acceleration efficiency, and hence the fraction, $f_\mathrm{SN}$, of supernova explosion energy, $E_\mathrm{SN}$, converted into CR particles at the current age is much lower ($f_\mathrm{SN} \simeq 0.14$ vs 0.48) due to the slower injection rate of thermal particles from the shocked gas into DSA. This is also reflected in the much more moderate shock modification (i.e., smaller $R_\mathrm{tot}$) than for the hadronic model on average. The high $f_\mathrm{SN}$ fraction of the hadronic model is consistent with the result of \citet{Tanaka2011}; (3) the lower B-field in the DS region ($B_2$) results in a longer synchrotron loss time-scale, in this case longer than the acceleration time-scale throughout the age of 2500 yr, resulting in a common $p_\mathrm{max}$ for electrons and protons. In contrast, the hadronic model has a much higher ambient B-field strength in the uniform ISM, such that $p_\mathrm{max}$ of electrons becomes limited by the synchrotron loss (and IC loss to a lesser extent) starting from about 200~yr. These evolutionary differences relate directly to the broadband emission of the SNR, as we will discuss in the upcoming sections. \subsection{Broadband Photon Spectrum} \begin{figure} \centering \includegraphics[width=9cm]{fig3.eps} \caption{Lower panel: Broadband SED fit with parameters tuned so the gamma-rays are dominated by hadronic emission. The solid lines show emission spectra produced by the primary CRs in the post-shock region, with the exception of the yellow line which shows emission from the interaction of the escaped CRs with the upstream medium beyond the free escape boundary (FEB).
The dashed lines show emission in the post-shock region produced by the secondary e$^+$ and e$^\mathrm{-}$ originating from p-p interactions and the subsequent decay of charged pions. The dash-dotted lines show emission from the CR precursor region ahead of the forward shock. Upper panel: Same as the lower panel but with parameters tuned so that the gamma-rays are dominated by IC emission. All model spectra in the lower panel are boosted by a factor of $\sim 5$ to match the observed fluxes. (Data: black - radio \citep{Combi1999}; red band - \textit{ASCA} GIS \citep{Aharonian2007b}; red - \textit{Fermi} LAT \citep{Tanaka2011}; green - \textit{H.E.S.S.} \citep{Aharonian2007b}.) } \label{fig:model_spec} \end{figure} As mentioned in Section~\ref{sec:models}, to obtain reasonable fits to the observed photon spectra, the two models require that the SNR evolves within very different surrounding environments. For the hadronic model, in order for the $\pi^0$-decay\ photons to dominate over the IC component in the gamma-ray band, the gas density has to be high and the number ratio between electrons and protons ($K_\mathrm{ep}$) has to be particularly low ($K_\mathrm{ep} \sim 10^{-4}$). The high density requirement argues against the SNR propagating into a pre-SN wind with rapidly decreasing density and we are forced to assume the blast wave runs into an interstellar medium (ISM) with uniform gas density and magnetic field strength. In contrast, the leptonic model requires that the forward shock propagates into a magnetized pre-SN wind region. This introduces a descending gradient for the gas density and B-field. The low shocked gas density in the downstream photon-emitting region suppresses the $\pi^0$-decay\ and \brems, as well as the X-ray line emission, relative to the IC component. Figure~\ref{fig:model_spec} shows the corresponding fits of the broadband spectra to currently available observations.
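The downstream field largely controls where synchrotron cooling bites into the electron spectrum within the remnant's lifetime. Taking the standard normalization $t_\mathrm{syn} \approx 1250\,(E_e/\mathrm{TeV})^{-1}(B/100~\mu\mathrm{G})^{-2}$~yr, the energy above which electrons cool within the age follows directly (the two field values below are illustrative of the leptonic- and hadronic-like regimes):

```python
def cooling_break_TeV(B_muG, t_age_yr):
    """Electron energy [TeV] above which synchrotron losses matter
    within t_age, from t_syn ~ 1250 yr (E/TeV)^-1 (B/100 muG)^-2."""
    return 1250.0 * (100.0 / B_muG)**2 / t_age_yr

print(cooling_break_TeV(10.0, 2500.0))    # low-field (leptonic-like) case
print(cooling_break_TeV(100.0, 2500.0))   # high-field (hadronic-like) case
```

At $\sim 10~\mu$G only electrons above tens of TeV cool within 2500 yr, while at $\sim 100~\mu$G the break drops below a TeV, steepening the X-ray synchrotron spectrum.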
We see that both models can explain the radio and GeV-to-TeV gamma-ray data reasonably well under the respective choices of parameters. In the X-ray band, however, the hadronic model has difficulty reproducing the X-ray photon index observed by \textit{ASCA}, mainly due to the much faster synchrotron loss rate caused by its higher B-field. The higher ambient gas density required for $\pi^0$-decay\ dominance in the gamma-ray band is also accompanied by a high flux from thermal X-ray lines, which exceeds the X-ray flux observed by \textit{ASCA} in the soft X-ray energy range below a few keV, despite the adoption of a conservative model of electron heating via Coulomb collisions in the post-shock region. The self-consistently\ calculated thermal X-ray emission provides a strong additional model constraint. Another important difference between the two models is the overall normalization factor required for the broadband emission to match the observed flux. For the leptonic model this factor is close to unity, but the hadronic model has to be boosted by a factor $\sim 5$ to explain the observed flux level. This factor $\sim 5$ is included in all of the model spectra shown in the lower panel of Figure~\ref{fig:model_spec}. \subsection{X-ray Spectrum and Its Spatial Properties} \label{section:xray} \begin{figure} \centering \includegraphics[width=8cm]{fig4.eps} \caption{ X-ray spectra for our leptonic model (upper panel) and hadronic model (lower panel). In both panels, the blue and black solid lines correspond to the synchrotron (including secondary contribution if any) and thermal line emission respectively, while the dashed lines are the thermal continuum spectra. The red band is from the \textit{ASCA} GIS observation.
In the lower panel for the hadronic model, the light grey lines also show the thermal spectra corresponding to a model with instantaneous equilibration between the electron and proton temperatures right behind the subshock but which is otherwise identical. In the top panel for the leptonic model, the grey line shows the total spectrum. As in Figure~\ref{fig:model_spec}, the hadronic model spectra are multiplied by $\sim 5$ to match the observed broadband flux level. } \label{fig:model_xray} \end{figure} A more in-depth comparison with the currently available X-ray data is valuable for differentiating between SNR models. Firstly, to see the general agreement of the models with data, we plot the calculated non-thermal and thermal X-ray spectra against the \textit{ASCA} result \citep[e.g.,][]{Slane2001, Aharonian2007b} in Figure~\ref{fig:model_xray}. For the hadronic model, to show the possible range of variance of the thermal spectrum based on different equilibration models for the electron temperature, we also show the calculated thermal spectrum assuming that the electron temperature equilibrates instantaneously with the proton temperature behind the FS. Coulomb equilibration and instant equilibration are two extremes for electron heating and the actual heating is very likely to lie somewhere in between. For either extreme in heating, the hadronic model predicts a thermal X-ray flux substantially higher than the observations imply (especially below $\sim 2$~keV), in addition to its failure to reproduce the non-thermal spectral index. The leptonic model, on the other hand, shows a generally good fit to the data without an over-prediction of the thermal emission. In this regard, we notice that \citet{Berezhko2009} have proposed a hadronic model in which the shock is running into a wind bubble region created by the progenitor star embedded inside a high density gas cloud.
We do not consider such a situation in this work since, in such a high density environment ($n_\mathrm{gas} \gg 0.01$~cm$^{-3}$), the resultant thermal X-ray lines are expected to conflict with the observed spectra even more strongly than is the case for our hadronic model. Secondly, we consider the X-ray spectrum near the bright NW rim of the remnant. Vela Jr.~was observed with the \textit{Chandra X-ray Observatory} on 2003 January 5 and 6 (ObsIDs 3446 and 4414) using the Advanced CCD Imaging Spectrometer (ACIS). Standard cleaning and data reduction were performed using CIAO version 4.4. The merged observations yielded a total exposure time of 73.9 ks. Spectra, extracted using the {\sl specextract} software, were obtained from a narrow region encompassing the forward shock of the SNR (solid box in Figure~\ref{fig:chandra_image}) and from a background region (dashed box) used to account for projected emission from the Vela SNR as well as internal, Galactic, and extragalactic background emission. The predicted thermal and nonthermal emission from \textit{CR-hydro-NEI} for the projected spectral region indicated was converted into a table model for use in {\sl xspec} for both the hadronic and leptonic models. The {\sl tbabs} model for interstellar absorption was applied, and the model was convolved with the instrument response function and fit to the data. To account for possible excess foreground emission from the Vela SNR, a thermal nonequilibrium ionization model ({\sl nei}) with a distinct {\sl tbabs} absorption component was included. The best-fit results are shown in Figure~\ref{fig:model_chandra_spec}, where the leptonic model fit is shown as a black histogram, and the hadronic model is shown in red. The dashed histogram corresponds to the additional {\sl tbabs}$\times${\sl nei} model.
It is clear that the hadronic model provides a poor representation of the X-ray data, both at low energies, where the predicted line emission is not observed, and at high energies, where the model is much steeper than the observed spectrum. The leptonic model provides an excellent fit to the data ($\chi_\nu^2 = 1.05$ for 327 degrees of freedom). The column density is $n_H = (4.3 \pm 0.2) \times 10^{21}{\rm\ cm}^{-2}$, in good agreement with previous measurements \citep{Slane2001, Pannuti2010} and the accompanying thermal component has a temperature of $0.26^{+0.32}_{-0.08}$~keV with a column density of $(0 - 3.3) \times 10^{21}{\rm\ cm}^{-2}$, consistent with observed properties for soft thermal emission from the Vela SNR. The best-fit normalization for the \textit{CR-hydro-NEI} component is about 15\% higher than the ratio of the spectral extraction region length to the entire SNR circumference, which is reasonable given that the rim is somewhat brighter at this position (see Figure~\ref{fig:chandra_image}). \begin{figure} \centering \includegraphics[width=8.5cm]{fig5.eps} \caption{ \textit{Chandra} image of the NW region of Vela Jr. The regions used for source and background spectra are shown by the solid and dashed boxes respectively. A circle containing a faint point source falling within the source region was excluded in the spectral extraction. } \label{fig:chandra_image} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{fig6.eps} \caption{ \textit{Chandra} spectrum of the NW rim compared with the spectra from our leptonic (black) and hadronic (red) models. The model spectra are folded with the instrument response function and the fits reveal an interstellar absorption with a column density of $n_\mathrm{H} \approx 4 \times 10^{21}$~cm$^{-2}$. An additional thermal component (dashed histogram) with $T_e \sim 0.3$~keV and $n_\mathrm{H} \sim 2 \times 10^{21}$~cm$^{-2}$ is added to the leptonic model to explain the background residual at low energies.
} \label{fig:model_chandra_spec} \end{figure} Thanks to the high angular resolution of \textit{Chandra}, it is also possible to study spatial variation of the spectral properties near the FS \citep[see, e.g.,][]{Pannuti2010}. Correspondingly, we calculate the synchrotron photon index for a series of line-of-sight projected regions from the models which have different radial distances from the FS. The results are shown in Figure~\ref{fig:synch_index}. Besides the absolute difference in the index values, the hadronic and leptonic models show very distinct trends of the index variation as a function of distance from the FS. For our leptonic model, the synchrotron spectrum softens as we look inward from the FS, whereas our hadronic model exhibits a much weaker dependence on position. The major factors which determine the spectral index of the non-thermal X-rays at a certain distance from the SNR center are the following: (1) Energy loss history of the primary accelerated electrons, which is critical to the high-energy cut-off of the primary synchrotron component; and (2) the relative importance of the secondary e$^+$--e$^-$\ to the primary electrons at that position. These secondaries are produced by the trapped accelerated protons interacting with the shocked gas, and they lose energy radiatively like the primaries while accumulating in number over time. Unlike the primaries, which are accelerated by the shock and then advected downstream without replenishment, the secondaries at each position in the shocked gas are continuously generated by the advected high-energy protons whose spectra do not evolve significantly with time other than from adiabatic losses. The spectral shapes of the secondary and primary synchrotron components are hence typically different. \begin{figure} \centering \includegraphics[width=9cm]{fig7.eps} \caption{ Line-of-sight dependence of the non-thermal X-ray spectral index.
The synchrotron indices are extracted from regions with various radial distances from the SNR center and in the energy range of $2 - 10$~keV. The black thick lines are the result from our best-fit models. Red points are spectral fitting results from \citet{KHU2013} using archival \textit{XMM Newton} data in the same energy range. The thin grey lines correspond to the modified models described in Section~\ref{section:sensitivity} below. Note that the contact discontinuity lies at $\mathrm{R}/\mathrm{R}_\mathrm{FS} = 0.81 - 0.82$ for both hadronic and leptonic models. } \label{fig:synch_index} \end{figure} For our leptonic model, the effect from secondaries is obviously small due to the very low ambient gas density in the wind bubble. As a result, the energy loss of the primaries dominates the spatial behavior of the synchrotron radiation. Since primary electrons accelerated by the shock at an earlier phase and now residing further inward in the SNR shell have suffered from synchrotron/IC and adiabatic losses for a longer period of time than those just being freshly accelerated, the cutoffs of their spectra are found at lower energies. In addition, due to the falling B-field strength with radius in our wind model, those primary electrons accelerated earlier have experienced a higher synchrotron loss rate on average than those residing closer to the FS at the current age. These lead to the softening of the synchrotron spectrum as shown by our result. For the hadronic model, the secondaries cannot be neglected. In the volume integrated photon spectrum (Figure~\ref{fig:model_spec}), the primary component dominates over the secondary one until around 10~keV. 
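The contrast between burst-like primaries and continuously injected secondaries can be caricatured with the standard synchrotron cooling solution: for $\mathrm{d}E/\mathrm{d}t \propto -E^{2}$, a population injected a time $t$ ago retains no electrons above $E_\mathrm{max} \propto 1/(B^{2}t)$, whatever its initial cutoff. A toy estimate (the field value and elapsed time are illustrative assumptions):

```python
def e_max_after_cooling(t_yr, B_muG):
    """Maximum electron energy [TeV] surviving synchrotron cooling for
    a time t_yr in a field B_muG, using the burst solution
    E(t) = E0 / (1 + a E0 t) -> 1/(a t) for any E0, with the rate
    a set by t_syn = 1250 yr (E/TeV)^-1 (B/100 muG)^-2."""
    a = 1.0 / (1250.0 * (100.0 / B_muG)**2)   # [1 / (TeV yr)]
    return 1.0 / (a * t_yr)

# Plasma shocked 2000 yr ago in a ~100 muG (hadronic-like) field:
print(e_max_after_cooling(2000.0, 100.0))
```

At $\sim 100~\mu$G, electrons below a TeV radiate synchrotron far below the keV band, so any hard X-ray flux deep in the shell must be carried by freshly produced secondaries, which is the hardening effect described here.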
However, by looking deeper towards the SNR center, where the number of secondary particles is larger due to accumulation and the primaries have a lower cut-off energy from a longer period of radiative loss, the secondary spectrum can dominate over the primary spectrum at a few keV and harden the resultant total spectrum in the $1 - 5$~keV energy range. We find that this effect can compensate for the energy loss of the primaries in the SNR shell and, for the relatively moderate ambient gas density we assume, lead to a weak variation of the synchrotron index with radius after projecting our spherically symmetric model along the line-of-sight. With a higher ambient gas density, a spectral hardening can be expected as we look inward from the shock. Using \textit{Chandra} and \textit{XMM Newton} data respectively, \citet{Pannuti2010} and \citet{KHU2013} have recently discovered a softening trend of the non-thermal spectrum as the line-of-sight is moved inward from the FS towards the SNR center. This trend provides further support for our leptonic scenario. For a direct comparison, we overlaid in Figure~\ref{fig:synch_index} the fitted photon indices given by \citet{KHU2013} and found reasonable agreement with our result. Additionally, the phenomenological spectral evolution model invoked by \citet{KHU2013} shows that the typical downstream B-field should be around a few $\mu$G in order to explain the observed softening trend, which is highly consistent with the field strength obtained in our leptonic model.
Recent {\it HESS}\ observations of Vela Jr.~\citep{Aharonian2007b} have measured the radial surface brightness profiles of its shell-like emission from 300~GeV to 20~TeV with a resolution of about $0.06^\circ$. We compare our model profiles with the {\it HESS}\ measurements for the brightest northern part in Figure~\ref{fig:TeV_profile}. The profiles are extracted from the model gamma-ray image after convolution with the PSF of the observation. The results show that both models predict gamma-ray emission profiles reasonably compatible with the data, and unlike the X-ray result, closely resemble each other after PSF smoothing. This relative similarity in the gamma-ray band can be explained as follows. X-ray synchrotron emission produced by relativistic electrons is highly sensitive to the radiative loss rate. Electrons experience much faster energy loss in our hadronic model than in the leptonic model, as mentioned above, which results in a sharp X-ray profile close to the FS for the hadronic case, and a more spread-out profile behind the FS for the leptonic case. In the gamma-ray band, however, radiative loss plays a much less important role. In the hadronic model, the gamma-ray photons mainly originate from CR protons, for which losses can be ignored, while in the leptonic model the electrons responsible for the IC emission are only subject to minor losses due to the low magnetic fields. \begin{figure} \includegraphics[width=9cm]{fig8.eps} \caption{Radial profile of gamma-ray emission (300~GeV - 20~TeV) compared with {\it HESS}\ measurements of the northern shell. The black thick curves are model profiles smoothed by a Gaussian kernel with $\sigma = 0.06^\circ$ corresponding roughly to the PSF of the {\it HESS}\ observation. The red thin curves are the model profiles expected under the advertised best angular resolution of \textit{CTA} ($0.02^\circ$).
} \label{fig:TeV_profile} \end{figure} Nevertheless, the gamma-rays originate from totally different mechanisms for the hadronic and leptonic cases, so differences are predicted for more precise measurements. Future ground-based gamma-ray observatories, such as the upcoming Cherenkov Telescope Array (\textit{CTA}), will be able to differentiate the models with unprecedented spatial resolving power and sensitivity. For illustration, we include in Figure~\ref{fig:TeV_profile} our model profiles using the advertised best angular resolution of \textit{CTA} (about $0.02^\circ$). The red curves imply that future gamma-ray observations may be able to differentiate between the predicted radial profiles of Vela Jr. \subsection{Possible Thermal X-ray Line Detection by \textit{Astro-H}} TeV-bright SNRs like Vela Jr.~and RX J1713.7-3946 have shown no sign of strong thermal emission with current X-ray observations. However, with the advent of future instruments of superior spectral resolution, it is very possible that thermal X-ray lines will be discerned in the non-thermal-dominated spectrum. As a first look, we plot the calculated X-ray spectrum in Figure~\ref{fig:spec_astro_h} for our leptonic model in the energy range of 0.3 to 12~keV, the designed energy coverage of the \textit{Soft X-ray Spectrometer} (SXS) onboard the next-generation X-ray space observatory \textit{Astro-H}. The emission lines are filtered by a Gaussian kernel with a FWHM of 7~eV to mimic the target baseline spectral resolution of SXS.\footnote{We assume a pure spherical expansion of the SNR shell, such that near the FS (i.e., the SNR rim) the line broadening/shifting effect from expansion is negligible.} For comparison, we show the leptonic model spectrum with a resolution of 60~eV (a typical ballpark value for currently operating X-ray observatories).
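The instrumental-resolution filtering described here is a plain Gaussian convolution, and the FWHM-to-$\sigma$ conversion can be sketched in a few lines (a minimal illustration on a hypothetical uniform energy grid; the grid, continuum, and line below are placeholders, not our actual model spectrum):

```python
import numpy as np

def smooth_spectrum(energy_keV, flux, fwhm_keV):
    """Convolve a model spectrum with a Gaussian kernel of given FWHM.

    Assumes a uniform energy grid; mimics a constant instrumental
    resolution such as the 7 eV (0.007 keV) FWHM target of Astro-H SXS.
    """
    de = energy_keV[1] - energy_keV[0]                       # grid spacing (keV)
    sigma = fwhm_keV / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # FWHM -> sigma
    half = int(np.ceil(5 * sigma / de))                      # kernel half-width (bins)
    x = np.arange(-half, half + 1) * de
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                                   # flux-conserving kernel
    return np.convolve(flux, kernel, mode="same")

# Hypothetical example: a delta-like line on a flat continuum, 0.3-12 keV, 1 eV bins
e = np.arange(0.3, 12.0, 0.001)
spec = np.ones_like(e)
spec[np.argmin(np.abs(e - 1.0))] += 1000.0                   # narrow line at 1 keV
smoothed = smooth_spectrum(e, spec, fwhm_keV=0.007)          # SXS-like 7 eV FWHM
```

After smoothing, the delta-like line acquires a width of about 7 bins (i.e.\ 7 eV), as expected for the chosen FWHM.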
We see that using the resolving power of SXS, the stronger sub-keV lines can be clearly recognized on top of the synchrotron spectrum, and it is very possible that they will be detected by \textit{Astro-H}. Detection of such lines will be important to further constrain our model parameters such as the local gas compositions and the ion temperatures and densities. In a follow-up paper in preparation, we will perform a more detailed study of the X-ray spectrum by carrying out formal \textit{Astro-H} SXS spectral simulations based on our models, and take into account thermal Doppler broadening of the emission lines.\footnote{For the typical post-shock temperature in our leptonic model, the broadening by thermal motion of the ions is around $1\%$, assuming that the temperature ratios among the ion species are mass-proportional. Therefore, roughly speaking, SXS with a FWHM of at most 7~eV will be able to identify thermal broadening of the stronger lines above approximately 0.7~keV, where there are a few. We will perform a more detailed calculation of the ion temperatures in our follow-up paper to better assess this possibility.} \begin{figure} \centering \includegraphics[width=9.5cm]{fig9.eps} \caption{Thermal emission lines and non-thermal spectrum from 0.3 to 12~keV from the leptonic model. The model spectrum is filtered with a Gaussian kernel with $\Delta$E (FWHM) = 7~eV constant over the whole energy range to imitate the (minimum) target energy resolution of the SXS instrument onboard the next-generation X-ray observatory \textit{Astro-H}. A thick cyan line is overlaid to show the total spectrum under a fixed resolution of 60~eV for comparison.
} \label{fig:spec_astro_h} \end{figure} \subsection{Sensitivity of Results to Model Parameters} \label{section:sensitivity} We emphasize that while small variations in parameters in either of our ``best-fit'' models do not strongly modify the results for that model, the parameter sets for the two scenarios are well separated and it is not possible to go smoothly from one set to the other while maintaining a satisfactory fit to the observations. To illustrate this point and to see how our models respond to parameter changes, we create modified versions of each of our models in which we purposely tune a set of important parameters so that each is positioned closer to the parameter space of the competing best-fit model. Two of the most important parameters that distinguish our leptonic and hadronic scenarios are the electron-to-proton number ratio at relativistic energies $K_\mathrm{ep}$, and the upstream gas density $n_0$ (or the mass loss rate $\mathrm{d}M/\mathrm{d}t$ and wind speed $V_\mathrm{wind}$ when a pre-SN wind is present). For the leptonic case with a massive star as progenitor and a wind cavity, by recalling that the IC flux $f_\mathrm{IC} \propto n_0K_\mathrm{ep}$ \citep[e.g.][]{EPSR2010}, we explore an altered model where $K_\mathrm{ep}$ is lowered by a factor of 5 (i.e. $3 \times 10^{-3}$) and the wind matter density increased by the same factor to roughly maintain the level of $f_\mathrm{IC}$.\footnote{Decreasing $K_\mathrm{ep}$ alone by such a factor does not visibly change the broadband spectral shape, but requires an overall normalization factor substantially larger than 1, which excludes that option.} The wind magnetization is fixed so as to keep the synchrotron-to-IC flux ratio $f_\mathrm{syn}/f_\mathrm{IC}$ roughly unchanged.
On the other hand, we explore an altered model for the hadronic case where $K_\mathrm{ep}$ is boosted by a factor of 5 while $n_0$ is decreased by a factor of 1.7, so that the model is now leaning toward the leptonic parameter space. Note that since the $\pi^0$-decay\ flux $f_{\pi^0} \propto n_0^2$, this will decrease $f_{\pi^0}$ and increase $f_\mathrm{IC}$ by roughly the same factor. For both cases, we also adjust the assumed age of the remnant, within the allowable range of $1700 - 4000$~yr implied by observations, from the default age of 2500~yr in order to achieve the same SNR angular size as the original model. Apart from the changes described above, all other parameters are kept fixed. The effect on the total broadband spectrum is shown in Figure~\ref{fig:sed_sensitivity}. We can see that the modified leptonic model produces a non-thermal spectrum with an acceptable fit to the data, except for the highest-energy TeV points and an X-ray spectral index that is a little too soft. But most importantly, it already starts to face the difficulty of over-predicting the thermal emission in the X-ray band due to the enhanced ambient gas density, which is the same problem encountered by our hadronic model. Another emerging problem not shown in this plot is that the spatial dependence of the synchrotron index now fails to explain the \textit{XMM Newton} data by showing a softening trend that is too strong towards the SNR center (see Figure~\ref{fig:synch_index}). For the modified hadronic model, although the thermal X-ray flux falls back to a level more compatible with the data than the original model, the broadband spectral fit becomes unacceptable by failing to explain the non-thermal X-ray flux level simultaneously with the gamma-rays, due to the enhanced leptonic contributions which worsen the fit.
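As a quick cross-check, the flux-scaling bookkeeping behind these modified models can be verified with a few lines of arithmetic (an illustrative sketch only; the factors are those quoted in the text, and the proportionality constants are arbitrary):

```python
# Leptonic modification: K_ep lowered by 5, wind matter density raised by 5
f_ic_change_lep = (1 / 5) * 5          # f_IC scales as n0*K_ep -> level preserved

# Hadronic modification: K_ep boosted by 5, n0 decreased by a factor 1.7
f_pi0_change_had = (1 / 1.7) ** 2      # f_pi0 scales as n0^2   -> drops by ~2.9x
f_ic_change_had = 5 * (1 / 1.7)        # f_IC  scales as n0*K_ep -> rises by ~2.9x
```

The hadronic-case decrease in $f_{\pi^0}$ ($1.7^2 \simeq 2.9$) and increase in $f_\mathrm{IC}$ ($5/1.7 \simeq 2.9$) are indeed roughly equal, as stated above.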
To achieve a good fit again, we find that a much lower B-field must accompany the increased $K_\mathrm{ep}$ and decreased $n_0$, which is precisely what is realized in the wind model we employ for the leptonic case. The conclusion from this analysis is hence that it is impossible to maintain a good fit to the data by continuously tuning some influential parameters from the leptonic to the hadronic case (and vice versa) unless a drastic change of the underlying SNR environment is involved, i.e.~a rarefied wind cavity against a uniform ISM model for the ambient medium. \begin{figure} \centering \includegraphics[width=9cm]{fig10.eps} \caption{Comparison of the broadband spectra of our original best-fit models (dashed lines) with their respective modified versions (solid lines) to show the sensitivity of our models to parameter choices. In each panel, the black lines show the total non-thermal spectrum, and the grey lines show the accompanying thermal X-ray line emission. The data points are the same as in Figure~\ref{fig:model_spec}. See text for details about the modified models. } \label{fig:sed_sensitivity} \end{figure} \section{Conclusions} \label{conclude} We have investigated the TeV-bright SNR Vela Jr.~with a comprehensive modeling of its multi-wavelength emission from the radio to TeV band. Our one-dimensional, spherically symmetric NL-DSA model indicates that the SNR originated from the core-collapse explosion of a massive star inside a wind cavity with rarefied gas density and magnetic fields. Based on the broadband continuum emission, with the added constraint from the self-consistently\ determined thermal X-ray emission, we show that the GeV-TeV gamma-rays are predominantly produced by IC emission from shock-accelerated electrons with a much smaller hadronic contribution from shock-accelerated ions.
Despite the fact that the leptons radiate more efficiently, here, as in all self-consistent\ DSA models of SNRs we are aware of, far more energy is put into relativistic\ ions than electrons. In summary, these conclusions are supported by the following results: \begin{itemize} \item In order for $\pi^0$-decay\ gamma-rays from shock-accelerated ions to dominate IC emission from electrons, the ambient pre-shock density must be above some limit. Despite a wide range in possible thermal equilibration models for electron heating and the non-equilibrium ionization\ of the shocked plasma, the density required for $\pi^0$-decay\ dominance is accompanied by strong thermal line emission in the X-ray band which is well above \textit{ASCA GIS} observations (see the lower panel of Figure~\ref{fig:model_spec}). This conclusion is similar to what we found previously for another non-thermal SNR, RX J1713.7-3946. \item Even if the inconsistency with the thermal X-ray line emission is ignored, the hadronic model, with the SNR forward shock moving into a uniform ISM, fails to reproduce the spectral index of the observed X-ray synchrotron emission, as measured by \textit{ASCA} GIS and \textit{Chandra} ACIS. While the hadronic model can produce a reasonable match to the spectral shapes of the data at other wavelengths, the overall normalization of the broadband emission requires a boosting factor of $\sim 5$ to bring the model spectra to the observed flux levels. \item Another possible problem with the hadronic model is that our best-fit hadronic result predicts a spatial (i.e., radial) variation of the X-ray synchrotron photon index around the FS that is incompatible with the trend suggested by data from X-ray satellites (see Figure~\ref{fig:synch_index}). \item The Vela Jr.~SNR most likely originated from a core-collapse SN but the ``best-fit'' hadronic model requires a relatively high-density, high-magnetic field environment more typical of a thermonuclear Type Ia SN.
\end{itemize} In contrast to the above problems with the hadronic model, our pre-SN wind leptonic model can self-consistently\ explain the broadband continuum emission, the lack of thermal X-ray line emission, the overall normalization assuming $d_\mathrm{SNR} \sim 1$\,kpc, and the spatial variation of the non-thermal X-ray spectral index. It is important to note that while our calculations assume a uniform ambient ISM, any real SNR may have a highly inhomogeneous upstream medium containing clumped gas. Recently, radio observations of SNR RX J1713.7-3946 \citep{Sano2010, Fukui2012} have revealed clumpy atomic/molecular cloud distributions surrounding the SNR shell, indicating a possible interaction of the shock and the accelerated protons with the gas clouds. Concentrating the mass in small, dense clumps can considerably enhance the gamma-ray flux. \citet{Inoue2012} have investigated this possibility using 3-D MHD simulations of a shock running into an inhomogeneous medium. They pointed out the possibility that a hadronic model can avoid the problem of an over-prediction of the thermal X-ray flux, since the denser cores of the gas that carry most of the mass fraction can survive the shock passage without being ionized and hence will not contribute to the thermal emission, but still can interact with the accelerated protons and produce gamma-rays. While our 1-D model definitively rules out hadronic scenarios in homogeneous broadband models of Vela Jr., it cannot rule out hadronic-dominated GeV-TeV emission in more complex clumpy environments. In fact, although not pursued in this work, it is possible that the gamma-rays comprise a two-component mixture of IC and $\pi^0$-decay\ photons from the interaction of dense gas clumps and the accelerated CR protons.
Future high-resolution observations of the surrounding interstellar gas distribution and 3-D hydro simulations self-consistently coupled to DSA will provide a quantitative estimate of this possible $\pi^0$-decay\ component from clumps. To explain the broadband spectrum of the SNR as a whole, our leptonic model requires a low B-field ($\sim$ a few $\mu$G) on average in the SNR shell, which is one of the reasons why the presence of a wind cavity is necessary to provide a weak B-field in the upstream medium. The fact that magnetic field amplification in the shock precursor is taken into account in our models only strengthens this need. This low downstream B-field is highly consistent with the values inferred from measurements of the X-ray/TeV brightness ratio \citep{Aharonian2007b} and more recently the radial variation of X-ray synchrotron spectral indices \citep{Pannuti2010, KHU2013}. However, as briefly mentioned earlier, it disagrees with some previous claims of large post-shock B-fields ($\sim 0.1$~mG) based on the discovery of sharp X-ray filaments in the NW shell. The argument for such a large field strength is built on the assumption that the narrow widths of such filaments are realized by a short synchrotron loss time-scale for the electrons. Our broadband analysis and supporting observational data suggest that an alternative explanation for the sharp filamentary structures, one that does not invoke large B-field strengths, may be necessary \citep[see e.g.,][and references therein]{BykovDots2008,EV2008}. We leave this important point as an open question to be answered by future work.
For Vela Jr., at $t_\mathrm{SNR}=2500$\,yr, $f_\mathrm{SN} \simeq 0.14$ for our leptonic model and $f_\mathrm{SN} \simeq 0.48$ for our hadronic model. In both cases, of course, CR protons contain the large majority of the CR energy. The ratio of CR electron to CR proton energy density is given by $K_\mathrm{ep}$, with $K_\mathrm{ep} \simeq 0.015$ for our leptonic model and $K_\mathrm{ep} \simeq 1.5\xx{-4}$ for our hadronic model. The determination of the gamma-ray origin, therefore, relates directly to the long-postulated but unproven link between SNRs and Galactic CRs, that is, whether or not the Galactic ensemble of SNRs can meet the energy budget of the CR spectrum. Because the $\pi^0$-decay\ and IC mechanisms are so different, a misinterpretation of the multi-wavelength data including the gamma-rays can yield a badly incorrect prediction for the contribution of SNRs to the Galactic CRs. Of course, the CR energy budget depends ultimately on $f_\mathrm{SN}$ over the full age of the remnant, so it will be interesting to see how $f_\mathrm{SN}$ evolves from very young ages up to the radiative phase of a SNR. In particular, the recent discovery of gamma-ray-bright middle-aged SNRs by \textit{Fermi} LAT may imply active particle acceleration at radiative shocks running into molecular clouds despite their slow speeds. Another important point is that the evolution of a SNR in a changing ambient environment may shift the dominating gamma-ray production mechanism with age. It is hence critical to understand observations from SNRs of different ages and try to link them together into a self-consistent evolutionary picture. We will pursue this line of work in a series of follow-up papers. Finally, we discussed the implications of our models for Vela Jr.~on future observations by the next generation of observatories, including \textit{CTA} and \textit{Astro-H}.
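The energy split implied by the $f_\mathrm{SN}$ and $K_\mathrm{ep}$ values quoted above can be laid out explicitly (illustrative arithmetic only; the explosion energy $E_\mathrm{SN}=10^{51}$ erg is a fiducial assumption, and we take $K_\mathrm{ep}$ as the electron-to-proton energy ratio as stated above):

```python
E_SN = 1e51  # erg; fiducial SN explosion energy (an assumption, not a fitted value)

def cr_energy_split(f_SN, K_ep):
    """Split the CR energy budget between protons and electrons,
    assuming E_e / E_p = K_ep and E_p + E_e = f_SN * E_SN."""
    E_p = f_SN * E_SN / (1.0 + K_ep)
    return E_p, K_ep * E_p

# Values quoted above for t_SNR = 2500 yr
Ep_lep, Ee_lep = cr_energy_split(0.14, 0.015)    # leptonic model
Ep_had, Ee_had = cr_energy_split(0.48, 1.5e-4)   # hadronic model

# In both scenarios protons carry the large majority of the CR energy
assert Ep_lep / (Ep_lep + Ee_lep) > 0.98
assert Ep_had / (Ep_had + Ee_had) > 0.999
```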
We showed that \textit{CTA} may be able to distinguish broadband emission models by measuring TeV brightness profiles with its unprecedented angular resolving power. This is especially valuable for distinguishing leptonic and hadronic models for Vela Jr., as we have illustrated in Figure~\ref{fig:TeV_profile}. In the X-ray band, the soon-to-be-commissioned \textit{Astro-H} space telescope possesses an extremely powerful calorimeter for high-resolution spectral studies, and we have argued that SXS is likely to detect the more prominent thermal lines (see Figure~\ref{fig:spec_astro_h}) from non-thermal-dominated SNRs such as Vela Jr.~and RX J1713.7-3946. We expect that future multi-wavelength observations will place extremely precise constraints on our model parameters, allowing us to firmly pin down the origin of the high-energy emission and to gain further insight into the particle acceleration mechanism in young SNR shocks. \acknowledgments The authors acknowledge important discussions with Takaaki Tanaka and Hidetoshi Sano concerning this work. They also thank the anonymous referee for providing valuable suggestions on improving the quality of this manuscript. D.C.E. acknowledges support from NASA grants ATP02-0042-0006, NNH04Zss001N-LTSA, 06-ATP06-21, and NNX11AE03G. P.S. acknowledges support from NASA Contract NAS8-03060. S.N. acknowledges support from Ministry of Education, Culture, Sports, Science and Technology (No. 23105709), Japan Society for the Promotion of Science (No. 19104006 and No. 23340069), and the Global COE Program, The Next Generation of Physics, Spun from Universality and Emergence, from MEXT of Japan. \bibliographystyle{aa}
\section{Introduction} Two-dimensional crystals of carbon atoms (graphene) have recently been discovered~\cite{novoselov}. Graphene is a single, one-atom-thick sheet of carbon atoms arranged in a honeycomb lattice. High-quality graphene single crystals some thousands of $\mu m^2$ in size are sufficient for most fundamental physics studies~\cite{geim}. There are significant efforts to grow graphene epitaxially~\cite{berger} by thermal decomposition of SiC, or by vapor deposition of hydrocarbons on catalytic metallic surfaces which could later be etched away, leaving graphene on an insulating substrate. This stable crystal has attracted considerable attention because of its unusual effective many-body properties~\cite{yafis}, its quasi-particle properties and Landau Fermi-liquid picture~\cite{polini}, and the effect of electron-electron interactions on plasmon behavior and angle-resolved photoemission spectroscopy (ARPES)~\cite{polini2}, all of which follow from its chiral band states, as well as because of potential applications. The low-energy quasi-particle excitations in graphene are linearly dispersing, described by Dirac cones at the corners of the first Brillouin zone. It is very hard for foreign atoms to replace the carbon atoms in the graphene structure because of the robustness and specificity of the $\sigma$ bonding. As a result, the electron mean free path $l$ in graphene can be very large. One of the important issues in graphene is its quantum transport properties, in particular the apparent universal minimum conductivity at the Dirac point. Initially, it was believed that this universality is an intrinsic property~\cite{novoselov2}, but recent experimental\cite{tan,cho} and theoretical\cite{adam,ando2,pereira,katsnelson2,castro-neto} reports indicate that the transport properties are very sensitive to impurities and defects and that the minimum conductivity is not universal. The conventional two-dimensional electron gas (2DEG) has been a fertile source of surprising new physics for more than four decades.
Although the exploration of graphene is still at an early stage, it is already clear~\cite{novoselov2} that the strong-field properties of Dirac electrons in graphene are different from and as rich as those of a semiconductor heterojunction 2DEG. The Fermi-liquid phenomenology of Dirac electrons in graphene~\cite{polini, polini2} and the conventional 2DEG~\cite{asgari2} has the same structure, since both systems are isotropic and have a single circular Fermi surface. The strength of interaction effects in a conventional 2DEG increases with decreasing carrier density. At low densities, the quasiparticle weight $Z$ is small, the velocity is suppressed~\cite{asgari2}, the charge compressibility changes sign from positive to negative\cite{asgari}, and the spin-susceptibility is strongly enhanced~\cite{asgari2}. These effects emerge from an interplay between exchange interactions and quantum fluctuations of charge and spin in the 2DEG. For Dirac electrons in graphene, it was shown~\cite{yafis,polini,polini2} that interaction effects also become noticeable with decreasing density, although more slowly, that the quasiparticle weight $Z$ tends to larger values, that the velocity is enhanced rather than suppressed, and that the influence of interactions on the compressibility and the spin-susceptibility changes sign. These qualitative differences are due to exchange interactions between electrons near the Fermi surface and electrons in the negative energy sea and to interband contributions to Dirac electrons from charge and spin fluctuations. Compressibility measurements of the conventional 2DEG have been carried out~\cite{eisenstein}, and it is found qualitatively that Coulomb interactions affect the compressibility at sufficiently low electron density, i.e., in the strong-coupling regime.
Recently, the local compressibility of graphene has been measured~\cite{martin} using a scannable single-electron transistor; it is argued that the measured compressibility is well described by the kinetic energy contribution, and it is suggested that exchange and correlation effects have canceling contributions. From the theoretical point of view, the compressibility was first calculated by Peres {\it et al.}~\cite{peres} considering the exchange contribution to the noninteracting doped or undoped graphene flake. A related quantity $\partial\mu/\partial n$ (where $\mu$ is the chemical potential and $n$ is the electron density) was recently considered by Hwang {\it et al}.\cite{hwang_dmu} within the same approximation. Going beyond the exchange contribution, the correlation effects were taken into account by Barlas {\it et al}.~\cite{yafis} based on an evaluation of graphene's exchange and random phase approximation (RPA) correlation energies. Moreover, Sheehy and Schmalian~\cite{sheehy}, by exploiting the proximity to the relativistic-electron quantum critical point, derived explicit expressions for the temperature and density dependence of the compressibility properties of graphene. All these theoretical efforts have been carried out for clean systems. Since disorder is unavoidable in any material, there has been great interest in trying to understand how disorder affects the physics of electrons in materials science, and especially in graphene and its transport properties. Our aim in this work is to study the ground-state properties in the presence of electron-impurity and electron-electron interactions. For this purpose, we use the self-consistent theory of G{\"o}tze \cite{gotze} to calculate the scattering rate, ground-state energy and the compressibility of the system at the level of RPA including disorder effects.
Our calculation is in the same spirit as our earlier work on the conventional 2DEG.\cite{asgari} We note that the recent work of Adam {\it et al}.\cite{adam} also uses a self-consistent approach, where the impurity scattering by the charge carriers is treated self-consistently in the RPA and the static conductivity is calculated in the Boltzmann kinetic theory. Thus, the main difference between the present work and that of Adam {\it et al}.\cite{adam} is that we are interested in a thermodynamic quantity (compressibility) whereas the latter is aimed at calculating a transport property (conductivity). We also remark that the direct solution of the Dirac equation for Dirac-like electrons incorporating the charge impurities has been discussed by Novikov~\cite{novikov}, and the validity of the Born approximation is seriously questioned. Similar work has been carried out by Pereira {\it et al}.\cite{pereira}, in which they studied the problem of a Coulomb charge and calculated the local density of states and local charge by solving the Dirac equation. They found new characteristics of bound states and strong renormalization of the van Hove singularities in the lattice description that are beyond the Dirac equation. In this work, we consider the charged impurity and the surface-roughness potentials, which are established experimentally~\cite{meyer,ishigami} to be important. It has been demonstrated that a short-range scattering potential is irrelevant for electronic properties of graphene~\cite{katsnelson,adam}. We have used the same method~\cite{asgari, asgari3} to investigate some properties of the conventional 2DEG. In this paper, we point out the differences between graphene and the conventional 2DEG due to disorder effects. The scattering rate behavior within our self-consistent theory shows that impurity scattering cannot localize the carriers in graphene.
The effect of disorder on the spin susceptibility is similar to that on the compressibility, and accordingly we will not show any result for the spin susceptibility. The rest of this paper is organized as follows. In Sec.\,II, we introduce the models for the self-consistent calculation of impurity effects. We then outline the calculation of the compressibility. Section III contains our numerical calculations of ground-state properties and a comparison of the models with recent experimental measurements. We conclude in Sec.\,IV with a brief summary. \section{Theoretical Model} We consider a system of 2D Dirac-like electrons interacting via the Coulomb potential $e^2/\epsilon r$ and its Fourier transform $v_q=2\pi e^2/(\epsilon q)$ where $\epsilon$ is the background dielectric constant. The Dirac electron gas Hamiltonian on a graphene sheet is given by \begin{equation}\label{ham} {\hat {\cal H}} = v\sum_{\bf k, \alpha} {{\hat \psi}_{{\bf k}, \alpha}}^\dag \left[\tau^3\otimes {\bf \sigma \cdot k}\right] {\hat \psi}_{{\bf k}, \alpha} + \frac{1}{2A}\sum_{{\bf q}\neq 0}v_q ({\hat n}_{\bf q} {\hat n}_{-{\bf q}}-{\hat N}) \end{equation} where $v=3 t a/2$ is the Fermi velocity, $t$ is the tight-binding hopping integral, $a$ is the spacing of the honeycomb lattice, $A$ is the sample area and ${\hat N}$ is the total number operator. Here $\tau^3$ is a Pauli matrix that acts on the two degenerate valleys $K$ and $K'$ at which the $\pi$ and $\pi^*$ bands touch, and $\sigma^1$ and $\sigma^2$ are Pauli matrices that act on graphene's pseudospin degrees of freedom. A central quantity in the theoretical formulation of the many-body effects in Dirac fermions is the dynamical polarizability tensor $\chi^{(0)}({\bf q},i\Omega,\mu\neq 0)$ where $\mu$ is the chemical potential.
This is defined through the one-body noninteracting Green's functions.\cite{gonzalez_1994} The density-density response function $\chi^{(0)}({\bf q},\Omega,\mu)$ of the doped two-dimensional Dirac electron model was first considered by Shung~\cite{shung} as a step toward a theory of collective excitations in graphite. The Dirac electron $\chi^{(0)}({\bf q},\Omega,\mu)$ expression has been considered recently by us~\cite{yafis} and others.~\cite{others} Using the Green's function $G^{(0)}({\bf k},\omega,\mu)$ in the calculation, a closed-form expression for $\chi^{(0)}({\bf q},i\Omega,\mu\neq 0)$ is found.\cite{yafis} To describe the properties of Dirac electrons we define a dimensionless coupling constant $\alpha_{gr}=g{e^2/\upsilon \epsilon \hbar}$ where $g=g_vg_s=4$ is the valley and spin degeneracy. The effect of disorder is to dampen the charge-density fluctuations and thereby to modify the dynamical polarizability tensor. Within the relaxation-time approximation the modified $\chi^{(0)}({\bf q},i\Omega,\mu,\Gamma)$ is given by~\cite{mermin} \begin{equation} \chi^{(0)}({\bf q},i\Omega,\mu, \Gamma)= \frac{\chi^{(0)}({\bf q},i\Omega+i\Gamma,\mu)}{1- \frac{\Gamma}{\Omega+\Gamma}\left[1- \frac{\chi^{(0)}({\bf q},i\Omega+i\Gamma,\mu)}{\chi^{(0)}({\bf q})} \right]}~, \end{equation} in which the strength of damping is represented by $\Gamma$. To include the many-body effects, we consider the density-density correlation function within the RPA, \begin{equation} \chi_{\rho\rho}({\bf q},i\Omega,\mu,\Gamma)=\frac{\chi^{(0)} ({\bf q},i\Omega,\mu, \Gamma)} {1-v_q\chi^{(0)}({\bf q},i\Omega,\mu, \Gamma)}~. \end{equation} As short-range disorder is shown~\cite{adam} to have negligible effect on the transport properties of graphene, we consider long-range charged-impurity scattering and surface roughness as the main sources of disorder.
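The relaxation-time correction and the RPA dressing above are simple algebraic operations on the collisionless response, and can be sketched generically as follows (a minimal illustration for real-valued arguments; the single-pole chi0_toy below is a placeholder for testing, not the actual Dirac-electron polarizability):

```python
def chi_disordered(chi0, chi0_static, q, Omega, Gamma):
    """Relaxation-time (Mermin-type) polarizability: the collisionless
    chi0 evaluated at the damped frequency Omega + Gamma, dressed by a
    denominator that enforces the correct static (Omega -> 0) limit."""
    chi_shift = chi0(q, Omega + Gamma)
    denom = 1.0 - (Gamma / (Omega + Gamma)) * (1.0 - chi_shift / chi0_static(q))
    return chi_shift / denom

def chi_rpa(chi0_dis, v_q):
    """RPA density-density response built on the disorder-dressed chi0."""
    return chi0_dis / (1.0 - v_q * chi0_dis)

# Placeholder collisionless polarizability (single-pole toy model, testing only)
def chi0_toy(q, w):
    return -q / (q + w)

chi0_static = lambda q: chi0_toy(q, 0.0)

# Gamma -> 0 recovers the clean response; Omega -> 0 recovers the static chi0
assert abs(chi_disordered(chi0_toy, chi0_static, 1.0, 0.5, 1e-9)
           - chi0_toy(1.0, 0.5)) < 1e-6
assert abs(chi_disordered(chi0_toy, chi0_static, 1.0, 0.0, 0.3)
           - chi0_static(1.0)) < 1e-9
```

The two assertions check the limits that make this form of the damped response physically sensible: the clean result is recovered as the damping vanishes, and the static limit is independent of $\Gamma$.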
The latter mechanism, also known as ripples, arises either from thermal fluctuations or from interaction with the substrate.\cite{nima} The disorder-averaged surface roughness (ripples) potential (SRP) is modeled as \begin{equation} \langle |U_{surf}(q)|^2\rangle = \pi\Delta^2h^2 (2\pi e^2 n/\epsilon)^2 e^{-q^2\Delta^2/4} \, , \end{equation} where $h$ and $\Delta$ are parameters describing fluctuations in the height and width, respectively. We use the experimental results of Meyer {\it et al}.\cite{meyer} who estimate $\Delta\sim 10$\,nm and $h\sim 0.5$\,nm. It is important to point out that there are other models that take the surface-roughness potential into account. The effect of bending of the graphene sheet has been studied by Kim and Castro Neto~\cite{kim}. Bending has two main effects: first, a decrease of the distance between carbon atoms, and second, a rotation of the $p_z$ orbitals. Due to bending the electrons are subject to a potential which depends on the structure of the graphene sheet. Another possible model is described by Katsnelson and Geim~\cite{katsnelson}, considering the change of in-plane and out-of-plane displacements due to the local curvature of a graphene sheet. The resulting change of the atomic displacements modifies the nearest-neighbour hopping parameters, which is equivalent to the appearance of a random gauge field described by a vector potential. These different models would need to be implemented in our scheme and checked numerically to assess their validity against the available measurements. The charged disorder potential (CDP) is taken to be \begin{equation} \langle |U_{imp}(q)|^2\rangle=n_i v_q^2 e^{-2qd}\, , \end{equation} in which $n_i$ is the density of impurities and $d$ is the setback distance from the graphene sheet.
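For concreteness, the two disorder potentials [Eqs.~(4) and (5)] translate directly into code. The sketch below is illustrative only: units are lumped into the prefactors, with $2\pi e^2 n/\epsilon$ and $e^2/\epsilon$ supplied by the caller.

```python
import math

def u_surf_sq(q, h, delta, pref):
    """<|U_surf(q)|^2> = pi * Delta^2 * h^2 * pref^2 * exp(-q^2 Delta^2 / 4),
    Eq. (4); pref stands for 2*pi*e^2*n/epsilon (units chosen by the caller)."""
    return math.pi * delta**2 * h**2 * pref**2 * math.exp(-q**2 * delta**2 / 4.0)

def u_imp_sq(q, n_i, d, e2_over_eps):
    """<|U_imp(q)|^2> = n_i * v_q^2 * exp(-2 q d), Eq. (5),
    with v_q = 2*pi*e^2/(epsilon*q)."""
    v_q = 2.0 * math.pi * e2_over_eps / q
    return n_i * v_q**2 * math.exp(-2.0 * q * d)
```

Both potentials fall off with $q$: the SRP as a Gaussian controlled by $\Delta$, the CDP exponentially with the setback distance $d$.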
We use the mode-coupling approximation introduced by G{\"o}tze\cite{gotze} to express the total scattering rate in terms of the screened disorder potentials \begin{displaymath} i\Gamma=-\frac{v_F k_F }{2 \hbar n A}\sum_{\bf q} \left[\frac{\langle|U_{imp}(q)|^2\rangle}{\varepsilon^2(\bf q)}\right. \end{displaymath} \begin{equation} \left. +\frac{\langle|U_{surf}(q)|^2\rangle}{\varepsilon^2(\bf q)}\right]\frac{\varphi_0({\bf q},i\Gamma)}{1+i\Gamma\varphi_0({\bf q},i\Gamma)/\chi^{0}({\bf q})}~, \end{equation} where $\varepsilon({\bf q})=1-v_q \chi^{(0)}({\bf q})$ is the static screening function and the relaxation function for electrons scattering from disorder is given by $\varphi_0({\bf q},i\Gamma)= \left[\chi^{(0)}({\bf q},i\Gamma,\mu)-\chi^{(0)}({\bf q})\right]/i\Gamma$. Since the scattering rate $\Gamma$ depends on the relaxation function $\varphi_0({\bf q},i\Gamma)$, which itself is determined by the disorder-included response function, the above equation needs to be solved self-consistently to eventually yield the scattering rate as a function of the coupling constant. Note that at the present level of approximation (i.e., RPA) the static dielectric function $\varepsilon(q)$ does not depend on $\Gamma$. In the conventional 2DEG, correlation effects beyond the RPA (through the local-field factor) render $\varepsilon(q)$ also $\Gamma$ dependent.\cite{asgari} The ground-state energy is calculated using the coupling-constant integration technique, which gives the contributions $E^{tot}=E_{\rm kin}+E_{\rm x} +E_{\rm c}$. \begin{figure}[ht] \begin{center} \tabcolsep=0 cm \includegraphics[width=0.75\linewidth]{fig1.eps} \caption{(Color online) The scattering rate $\Gamma$ as a function of the coupling constant $\alpha_{gr}$ for both the charge-disorder potential (CDP) and surface roughness potential (SRP) contributions.
} \end{center} \end{figure} The first-order ``exchange'' contribution per particle is given by \begin{displaymath}\label{ex} \varepsilon_{\rm x}=\frac{E_{\rm x}}{N}=\frac{1}{2}\int \frac{d^2 {\bf q}}{(2\pi)^2}~v_q \end{displaymath} \begin{equation} \left[-\frac{1}{\pi n} \int_0^{+\infty}d \Omega ~\chi^{(0)}({\bf q},i\Omega,\mu,\Gamma)-1\right]\,. \end{equation} To evaluate the correlation energy in the RPA, we follow a standard strategy for uniform continuum models~\cite{Giuliani_and_Vignale} \begin{displaymath}\label{corr} \varepsilon^{\rm RPA}_{\rm c}=\frac{E_{\rm c}}{N}= \frac{1}{2\pi n}\int \frac{d^2 {\bf q}}{(2\pi)^2} \int_0^{+\infty}d\Omega\left\{v_q\chi^{(0)}({\bf q},i\Omega,\mu,\Gamma)\right. \end{displaymath} \begin{equation} \left. +\ln{\left[1-v_q\chi^{(0)}({\bf q},i\Omega,\mu,\Gamma)\right]}\right\}\,. \end{equation} Since $\chi^{(0)}({\bf q},\Omega,\mu,\Gamma)$ is linearly proportional to ${\bf q}$ at large ${\bf q}$ and decreases only like $\Omega^{-1}$ at large $\Omega$, the exchange and correlation energies given by Eqs.~(7) and (8) are divergent~\cite{yafis}. In order to improve convergence, it is convenient at this point to add and subtract $\chi^{(0)}({\bf q},i\Omega,\mu= 0,2\Gamma)$ inside the frequency integral and regularize~\cite{note} the exchange and correlation energies. Therefore, these ultraviolet divergences can be cured by calculating \begin{equation}\label{exchange_regularized} \delta \varepsilon_{\rm x}=-\frac{1}{2\pi n}\int \frac{d^2 {\bf q}}{(2\pi)^2}~v_q \int_0^{+\infty}d \Omega ~\delta \chi^{(0)}({\bf q},i\Omega,\mu,\Gamma) \end{equation} and \begin{displaymath}\label{eq:regularization} \delta \varepsilon^{\rm RPA}_{\rm c}=\frac{1}{2\pi n} \int \frac{d^2 {\bf q}}{(2\pi)^2} \int_0^{+\infty}d\Omega\left\{v_q \delta \chi^{(0)}({\bf q},i\Omega,\mu,\Gamma)\right. \end{displaymath} \begin{equation} \left.
+ \ln{\left[\frac{1-v_q\chi^{(0)}({\bf q},i\Omega,\mu,\Gamma)}{1- v_q\chi^{(0)}({\bf q},i\Omega,\mu=0,2\Gamma)}\right]}\right\} \end{equation} where $\delta \chi^{(0)}$ is the difference between the doped ($\mu \ne 0$) and undoped ($\mu=0$) polarizability functions. With this regularization the $q$ integrals still have logarithmic ultraviolet divergences~\cite{yafis}, so we introduce an ultraviolet cutoff for the wave-vector integrals, $k_c=\Lambda k_F$, which is of the order of the inverse lattice spacing, $\Lambda$ being a dimensionless quantity. The Fermi momentum is related to the density by $k_F=(4\pi n/g)^{1/2}$. Once the ground state is obtained, the compressibility $\kappa$ can easily be calculated from \begin{equation} \kappa^{-1}=n^2\frac{\partial^2 (n \delta\varepsilon_{\rm tot})} {\partial n^2}\,, \end{equation} where the total ground-state energy is given by $\delta \varepsilon_{\rm tot}=\delta \varepsilon_{\rm kin}+ \delta \varepsilon_{\rm x}+\delta \varepsilon^{\rm RPA}_{\rm c}$. Here the zeroth-order kinetic contribution to the ground-state energy is $\delta \varepsilon_{\rm kin} =\frac{2}{3}\varepsilon_{\rm F}$. We consider the dimensionless ratio $\kappa/\kappa_0$, where $\kappa_0=2/(n\varepsilon_{\rm F})$ is the compressibility of the noninteracting system. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\linewidth]{fig2a.eps} \includegraphics[width=0.7\linewidth]{fig2b.eps} \caption{(Color online) (a): The correlation energy $\delta \varepsilon_c$ as a function of the coupling constant $\alpha_{gr}$ for cut-off value $\Lambda=k_c/k_F=50$. (b): The exchange energy $\delta \varepsilon_x$ as a function of the coupling constant $\alpha_{gr}$ for cut-off value $\Lambda=50$.
Results for fixed $\Gamma$ values are compared to those calculated within the mode-coupling approximation.} \end{center} \end{figure} \section{Numerical results} In this section we present our calculations of the ground-state properties of graphene in the presence of impurities, modeled as described above. The inverse compressibility $1/(n^2 \kappa)$ is calculated using the theoretical models described above and compared with the recent experimental measurements. In all numerical calculations we consider $d=0.5$\,nm. The electron density is taken to be $1\times 10^{12}$\,cm$^{-2}$ for Figs.\,1-3. Increasing disorder (increasing $n_i$ or decreasing $d$ for the charge-disorder potential, or increasing $h$ for the surface roughness potential) decreases $\chi^{(0)}(q,\Omega,\mu,\Gamma)$ as the scattering rate $\Gamma$ gets bigger. Thus, a decreased $\chi^{(0)}(q,\Omega,\mu,\Gamma)$ (or increased correlation effects) results in a stronger disorder potential. Although $\Gamma$ increases with increasing $\alpha_{gr}$, it apparently approaches a saturation limit and does not diverge. This behavior is quite different from what is seen in the conventional 2DEG~\cite{asgari} when the many-body effects influence the scattering rate through the local-field factor. In the conventional 2DEG system, at a critical level of disorder this nonlinear feedback causes $\Gamma$ to increase rapidly and diverge, which is taken as an indication of the localization of carriers. In graphene, however, our calculations show that $\Gamma$ does not diverge; therefore, impurities cannot localize carriers, and we have an only weakly localized system in the presence of impurities, compatible with experimental observations~\cite{morozov}. We can understand the saturated behavior of $\Gamma$ qualitatively as follows. In the context of the conventional 2DEG, Mott's argument states that the mean-free path $l$ in a metal cannot be shorter than the wavelength $\lambda$.
Since $l$ is proportional to the inverse of $\Gamma$, for the large values of $\Gamma$ obtained in the 2DEG the electron mean-free path decreases and becomes less than or equal to $\lambda$. At this point we should have a metal-insulator phase transition. In the context of graphene, on the other hand, Mott's argument is analogous to the statement that light does not notice any roughness (one source of scattering) on a scale shorter than its wavelength. Consequently, there is a lower limit for the electron's mean-free path in graphene, and it turns out that we would have a maximum (saturation) value of $\Gamma$. The issue of localization in graphene has recently attracted some attention, and the chiral nature of electron behavior has been discussed in the literature.~\cite{suzuura,mccann} Suzuura and Ando~\cite{suzuura} claimed that the quantum correction to the conductivity in graphene can differ from what is observed in the normal 2DEG due to the nature of elastic scattering in graphene, possibly changing the sign of the localization correction and turning weak localization into weak antilocalization in the regime where the intervalley scattering time is much larger than the phase coherence time. Further consideration of the behavior of the quantum correction to the conductivity in graphene~\cite{mccann} concludes that this behavior is entirely suppressed due to time-reversal symmetry breaking of electronic states around each degenerate valley. We have found through our calculations that $\Gamma$ increases with increasing $n_i/n$ as a function of $\alpha_{gr}$. Figure~1 shows $\Gamma$ for various scattering mechanisms. As is clear, the CDP is the dominant mechanism for $\Gamma$ in our model. The effect of the SRP is mostly negligible, except at large values of the coupling constant. This finding is to be contrasted with the statement of Martin {\it et al}.\cite{martin} that both substrate-induced structural distortions (SRP) and chemical doping (CDP) are conceivable sources of density fluctuations.
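The self-consistent determination of $\Gamma$ from Eq.~(6) can be sketched as a damped fixed-point iteration. The right-hand side $F$ below is a toy stand-in for the screened-disorder $q$ sum (an assumption made purely for illustration), chosen to saturate at large $\Gamma$ as found for graphene.

```python
def solve_scattering_rate(F, gamma0=0.1, mix=0.5, tol=1e-10, max_iter=1000):
    """Fixed-point iteration with linear mixing for Gamma = F(Gamma).

    F represents the right-hand side of the mode-coupling equation
    (the q sum over screened disorder potentials); here it is any
    callable mapping a trial Gamma to an updated value.
    """
    gamma = gamma0
    for _ in range(max_iter):
        new = F(gamma)
        if abs(new - gamma) < tol:
            return new
        gamma = (1.0 - mix) * gamma + mix * new  # damped update for stability
    raise RuntimeError("scattering rate did not converge")

# toy right-hand side that saturates, mimicking the behavior described in the text
F_toy = lambda g: 0.2 + 0.5 * g / (1.0 + g)
```

Linear mixing keeps the iteration stable even when the feedback between $\Gamma$ and the relaxation function is strong.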
We stress that our model calculations indicate that at realistic coupling constant values (cf.\ Fig.\,1) only the charged impurity scattering dominates. \begin{figure}[ht] \begin{center} \includegraphics[width=0.75\linewidth]{fig3.eps} \caption{(Color online) The compressibility $\kappa/\kappa_0$ scaled by that of a noninteracting clean system as a function of the coupling constant $\alpha_{gr}$ for cut-off value $\Lambda=50$.} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.75\linewidth]{fig4.eps} \caption{(Color online) The inverse compressibility $[n^2\kappa]^{-1}=\partial \mu/\partial n$ (in units of ${\rm meV}~10^{-10}{\rm cm}^{2}$) as a function of the electron density (in units of $10^{12}~{\rm cm}^{-2}$). The filled squares are the experimental data by Martin {\it et al.}~\cite{martin}.} \end{center} \end{figure} We have calculated the exchange and correlation energies as a function of $\alpha_{gr}$ in the presence of disorder. It is found that the disorder effects become more appreciable at large coupling constants, within the mode-coupling approximation. The exchange energy is positive~\cite{yafis} because our regularization procedure implicitly selects the chemical potential of undoped graphene as the zero of energy; doping either occupies quasiparticle states with positive energies or empties quasiparticles with negative energies. Figure~2(a) shows the correlation energy $\delta \varepsilon_c$ as a function of $\alpha_{gr}$. Note that $\delta \varepsilon_c$ has the same density dependence as $\delta \varepsilon_x$ apart from the weak dependence on $\Lambda$. In contrast to the exchange energy [Fig.~2(b)], the correlation energy is negative~\cite{yafis}. Figure~3 shows the charge compressibility $\kappa/\kappa_0$, scaled by its noninteracting value, as a function of $\alpha_{gr}$ for various models of $\Gamma$.
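The second derivative in Eq.~(11) is straightforward to evaluate by central differences. The sketch below uses illustrative units, with $\varepsilon_{\rm F}=C\sqrt{n}$ and the constant $C$ lumping $\hbar v (4\pi/g)^{1/2}$, and recovers $\kappa/\kappa_0=1$ when only the noninteracting kinetic term is kept.

```python
def inverse_kappa(eps_per_particle, n, dn=1e-4):
    """kappa^{-1} = n^2 * d^2[n * eps(n)]/dn^2, Eq. (11), by central differences."""
    f = lambda x: x * eps_per_particle(x)
    second = (f(n + dn) - 2.0 * f(n) + f(n - dn)) / dn**2
    return n * n * second

# noninteracting Dirac kinetic energy per particle: eps_kin = (2/3) eps_F,
# with eps_F = C * sqrt(n); C absorbs hbar * v * sqrt(4*pi/g) (illustrative units)
C = 1.0
eps_F = lambda n: C * n ** 0.5
eps_kin = lambda n: (2.0 / 3.0) * eps_F(n)
```

In practice $\delta\varepsilon_{\rm x}$ and $\delta\varepsilon^{\rm RPA}_{\rm c}$ would be added to the kinetic term before differentiating.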
The behavior of $\kappa$ shows some novel physics, which is qualitatively different from the physics known in the conventional 2DEG. Exchange makes a positive contribution to the inverse compressibility and thus tends to reduce (rather than enhance) the compressibility. On the other hand, correlations make a negative contribution to the inverse compressibility and thus tend to enhance $\kappa$. In the conventional 2DEG both contributions tend to enhance the compressibility. In the case of graphene instead, apparently exchange and correlation compete with each other~\cite{martin} in determining the compressibility of the system. It is interesting to note that similar physics holds also for the spin susceptibility~\cite{yafis}. In Fig.~4 we compare our theoretical predictions for the inverse compressibility of doped graphene with the experimental results of Martin {\it et al.}~\cite{martin}. For definiteness we take $\Lambda=k_c/k_F$ to be such that $\pi (\Lambda k_F)^2=(2\pi)^2/{\cal A}_0$, where ${\cal A}_0=3\sqrt{3} a^2_0/2$ is the area of the unit cell in the honeycomb lattice, with $a_0 \simeq 1.42$~\AA~the carbon-carbon distance. With this choice $\Lambda\simeq{(g n^{-1}\sqrt{3}/9.09)^{1/2}} \times 10^2$, where $n$ is the electron density in units of $10^{12}~{\rm cm}^{-2}$. Martin {\it et al.}~\cite{martin} fitted the experimental inverse compressibility, $(n^2 \kappa)^{-1}$, to the kinetic term using a single fitting parameter, a Fermi velocity which is larger than the bare Fermi velocity. Note that the kinetic term in graphene has the same density dependence as the leading exchange and correlation terms. As is clear from Fig.~4, the inverse compressibility of the noninteracting system is below the experimental data. By increasing the interaction effects, i.e., increasing the coupling constant strength $\alpha_{gr}$, our theoretical results move up.
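The cutoff prescription can be checked numerically: solving $\pi(\Lambda k_F)^2=(2\pi)^2/{\cal A}_0$ with $k_F^2=4\pi n/g$ gives $\Lambda=\sqrt{g/({\cal A}_0 n)}$, which the sketch below compares with the quoted closed form (density in units of $10^{12}\,{\rm cm}^{-2}$; an illustrative check, not part of the production calculation).

```python
import math

def cutoff_lambda(n_e12, g=4, a0_cm=1.42e-8):
    """Lambda = k_c/k_F from pi*(Lambda*k_F)^2 = (2*pi)^2/A0 with
    k_F^2 = 4*pi*n/g and A0 = 3*sqrt(3)*a0^2/2 (honeycomb unit-cell area).
    n_e12 is the density in units of 1e12 cm^-2."""
    A0 = 3.0 * math.sqrt(3.0) * a0_cm**2 / 2.0
    n = n_e12 * 1e12                     # density in cm^-2
    return math.sqrt(g / (A0 * n))
```

At $n=1\times 10^{12}\,{\rm cm}^{-2}$ this gives $\Lambda\approx 87$, in agreement with the closed form quoted in the text.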
Unfortunately, the value of $\alpha_{gr}$ in the experimental sample is not specified, and we considered it to be $\approx 1$. Therefore, including the exchange-correlation effects in our RPA theory gives results very close to the experimental data. Furthermore, incorporating an impurity density $n_i= 10^{10}$\,cm$^{-2}$ in the system and solving the self-consistent equations to obtain the scattering rate yields very good agreement with the measured values in the large and intermediate electron density regions. We have also examined the inverse compressibility using the kinetic term contribution alone, with the Fermi velocity as a fitting parameter; our numerical results are well described by a fitted velocity of about $1.28\,v_F$. We stress here that this fitted velocity is different from the renormalized velocity defined within the Landau-Fermi liquid theory in graphene.~\cite{polini} In a recent calculation of $\partial\mu/\partial n$ within the Hartree-Fock approximation in graphene, where $\mu$ is the chemical potential and $n$ is the electron density, Hwang {\it et al}.\cite{hwang_dmu} stated that correlation and disorder effects would only introduce small corrections. This is not, in general, true, since it has been shown by Barlas {\it et al}.\cite{yafis} that the correlation effects are essential to the ground-state properties. Although these effects are not significant in the very weak interaction strength regime and at high electron density, including many-body exchange-correlation effects together with disorder effects is necessary to get agreement with the quantities measured in the experiments of Martin {\it et al}.\cite{martin} It would be useful to carry out further experimental work at larger interaction strengths to assess the role played by correlation effects. \section{Conclusion} We have studied the ground-state thermodynamic properties of a graphene sheet within the random phase approximation, incorporating the impurities in the system.
Our approach is based on a self-consistent calculation that couples the impurity effects to the many-body electron-electron interactions. We have used a model surface roughness potential together with the charged disorder potential in the system. Our calculations of the inverse compressibility, compared with the recent experimental results of Martin {\it et al}.,\cite{martin} demonstrate the importance of correctly including correlation effects together with disorder effects in the thermodynamic quantities. We remark that in the very low density region, where the experimental data tend to a constant, the system is highly inhomogeneous and the effect of the impurities is essential. A model going beyond the RPA is necessary to account for the increasing correlation effects at low density. To describe the experimental data in this region, more sophisticated theoretical methods which incorporate inhomogeneities are needed. One approach would be density-functional theory, where Dirac electrons in the presence of impurities are considered. \begin{acknowledgments} We thank J. Martin for providing us with their experimental data and M. Polini for useful discussions. B.\,T. is supported by TUBITAK (No. 106T052) and TUBA. \end{acknowledgments}
\section{Introduction} \label{sec:int} Kagome systems are a very attractive venue to study exotic states in condensed matter physics, due to the realization of several unusual features: topological electronic flat bands~\cite{ye.kang.18,lin.choi.18,sales.yan.19,meier.du.20,kang.fang.20}, anomalous Hall effect~\cite{chen.niu.14}, exotic types of superconductivity~\cite{ko.lee.09}, topological insulating phase~\cite{guo.franz.09}, frustrated magnetism~\cite{gardner.gingras.10,nussinov.vanderbrink.15}, chiral spin state~\cite{ohgushi.murakami.00}, quantum spin liquid state~\cite{han.hlton.12,zhou.kanoda.17}, or chiral phonons~\cite{chen.wu.19}. An example of this type of system is the recently discovered kagome-metal family $A$V$_{3}$Sb$_{5}$ \mbox{($A$ = K, Rb, and Cs)}~\cite{ortiz.gomes.19}, exhibiting the giant anomalous Hall effect~\cite{yang.wang.20,yu.wu.21,zheng.chen.21}. The $A$V$_{3}$Sb$_{5}$ compounds were identified as nonmagnetic~\cite{kenney.ortiz.21} $\mathbb{Z}_{2}$ topological good metals~\cite{ortiz.sarte.21,ortiz.teicher.21b,setty.hu.21}, with weak electronic correlation effects~\cite{luo.gao.21,zhao.wu.21}. As a consequence, the topological surface states can be observed, for instance in CsV$_{3}$Sb$_{5}$ just below the Fermi level~\cite{hu.teicher.21}. The transport experiments confirm the emergence of nontrivial band topology~\cite{fu.zhao.21}. This opens a way to envision new quantum devices based on the topological properties of $A$V$_{3}$Sb$_{5}$, e.g., Josephson junctions with extremely long coupling (of at least $6$~$\mu$m in the case of KV$_{3}$Sb$_{5}$~\cite{wang.yang.20}). One of the most interesting properties of the $A$V$_{3}$Sb$_{5}$ compounds is the coexistence of a charge density wave (CDW) and superconductivity at low temperatures.
The transition to the CDW phase is observed at $T_\text{CDW}$ equal to $78$~K, $102$~K, and $94$~K for KV$_{3}$Sb$_{5}$~\cite{ortiz.sarte.21,li.zhang.21,zhu.yang.21}, RbV$_{3}$Sb$_{5}$~\cite{yin.tu.21,zhu.yang.21}, and CsV$_{3}$Sb$_{5}$~\cite{ortiz.gomes.19,yang.wang.20,ortiz.sarte.21,ortiz.teicher.21b,zhao.wang.21,yu.wu.21,liang.hou.21,mu.yin.21,yin.tu.21,song.zheng.21,li.zhang.21,wang.jiang.21,yu.wang.21,gupta.das.21,wang.kong.21}, respectively. Further lowering of the temperature leads to the emergence of the superconducting state at the critical temperature $T_{c}$, which is approximately equal to $0.93$~K, $0.92$~K, and $2.5$~K for KV$_{3}$Sb$_{5}$~\cite{ortiz.sarte.21}, RbV$_{3}$Sb$_{5}$~\cite{yin.tu.21}, and CsV$_{3}$Sb$_{5}$~\cite{ortiz.teicher.21b,mu.yin.21,ni.ma.21}, respectively. Values of $T_{c}$ estimated from the electron-phonon coupling are lower than the experimentally established ones~\cite{tan.liu.21}, which indicates an unconventional pairing mechanism~\cite{wu.schwemmer.21}. Additionally, a relatively weak dependence of $T_{c}$ and $T_\text{CDW}$ on the sample thickness was reported for the $A$V$_{3}$Sb$_{5}$ compounds~\cite{song.kong.21,song.ying.21,wang.yu.21}. Applied external hydrostatic pressure leads to the occurrence of a double superconducting dome in the temperature--pressure phase diagram~\cite{zhu.yang.21}. This behavior was observed in each of the $A$V$_{3}$Sb$_{5}$ family members, i.e., KV$_{3}$Sb$_{5}$~\cite{du.luo.21}, RbV$_{3}$Sb$_{5}$~\cite{wang.chen.21,du.luo.21b}, and CsV$_{3}$Sb$_{5}$~\cite{zhao.wang.21,yu.ma.21,chen.wang.21,chen.zhan.21,qian.christensen.21,zhang.chen.21,wang.kong.21}. Generally, increasing pressure clearly has a negative impact on the charge order, causing a fast decrease of $T_\text{CDW}$, whereas the superconducting phase survives even at high pressures.
The three-dimensional (3D) CDW phase realized in $A$V$_{3}$Sb$_{5}$~\cite{ortiz.sarte.21,li.zhang.21,zhu.yang.21,yin.tu.21,zhu.yang.21,ortiz.gomes.19,yang.wang.20,ortiz.sarte.21,ortiz.teicher.21b,zhao.wang.21,yu.wu.21,liang.hou.21,mu.yin.21,yin.tu.21,song.zheng.21,li.zhang.21,wang.jiang.21,yu.wang.21,gupta.das.21} is associated with the emergence of a $2 \times 2$ pattern of the charge distribution on the surface, observed in scanning tunnelling microscope (STM) experiments~\cite{jiang.yin.20,li.wan.21,zhao.li.21,chen.yang.21,liang.hou.21,li.zhao.21,wang.kong.21}. However, it should be noted that a $4 \times 1$ stripe phase was also reported, e.g., in KV$_{3}$Sb$_{5}$~\cite{li.zhao.21}, RbV$_{3}$Sb$_{5}$~\cite{yu.xiao.21} or CsV$_{3}$Sb$_{5}$~\cite{li.wan.21,xu.yan.21,wang.kong.21}. Nevertheless, recent studies suggest that the $4 \times 1$ structure can be associated with mesoscopic structural phase separation and does not exist in the bulk~\cite{li.jiang.21}. Angle-resolved photoemission spectroscopy (ARPES) studies do not give unequivocal information about the gap opening due to the CDW: e.g., helium-lamp-based measurements report a strong anisotropy of the CDW gap opening around the K point~\cite{nakayama.li.21,luo.gao.21,wang.ma.21}, whereas a nearly isotropic gap behavior around the K point was found in synchrotron-based studies~\cite{lou.fedorov.21,kang.fang.21}. Additionally, a gapless structure is reported around the $\Gamma$ point~\cite{wang.ma.21}. Nevertheless, the shape of the Fermi surface (FS) reported in many ARPES~\cite{kang.fang.21,luo.gao.21,hu.wu.21,li.zhang.21,hu.teicher.21,luo.peng.21,cai.wang.21,cho.ma.21} and {\it ab initio}~(DFT)~\cite{uykur.ortiz.21b,ortiz.teicher.21,luo.gao.21,fu.zhao.21,cho.ma.21,ortiz.teicher.21b,labollita.botana.21} studies supports the idea that the emergence of the CDW is a consequence of the Peierls instability related to FS nesting near the saddle points at M~\cite{zhou.li.21,wang.liu.21}.
\paragraph*{Motivation.}--- The minimal model that describes the CDW realized in $A$V$_{3}$Sb$_{5}$ is focused only on the kagome lattice of V atoms~\cite{ortiz.teicher.21}. Many papers assume that the CDW in the form of the {\it Star of David} (SD) or inverse SD (tri-hexagonal) structure~\cite{tan.liu.21,ortiz.teicher.21,christensen.birol.21} is a consequence of the kagome lattice distortion. However, the observed CDW is not only associated with the kagome lattice~\cite{ortiz.teicher.21}, but also with the layer of Sb atoms~\cite{jiang.yin.20,liang.hou.21,shumiya.hossain.21}. For example, the electronic band structure study of CsV$_{3}$Sb$_{5}$ shows that Sb atoms can play a role in the realization of the CDW~\cite{tsirlin.fertey.21}. A study of the band structure of $A$V$_{3}$Sb$_{5}$ under pressure also shows the important role of the Sb bands~\cite{labollita.botana.21}. The nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR) experiments also suggest the emergence of a complex CDW phase. In Ref.~\cite{luo.zhao.21}, the authors discuss the NMR spectra of V in CsV$_{3}$Sb$_{5}$ and the NQR spectra of Sb. They observed that the CDW is accompanied by an additional charge modulation in the bulk below $\sim 40$~K. Additionally, the NMR studies of Cs and V atoms in CsV$_{3}$Sb$_{5}$ report a displacement of Cs atoms for temperatures below $T_\text{CDW}$~\cite{song.zheng.21}. Recent theoretical studies tried to explain the CDW formation by orbital current orders~\cite{denner.thomale.21,tan.liu.21,lin.nandkishore.21,wu.schwemmer.21,mielke.das.21,park.ye.21}, and by the realization of real and imaginary CDW scenarios~\cite{lin.nandkishore.21}. For example, a competition between the charge density order (CDO) and the charge bond order (CBO) was discussed with respect to the interaction within the kagome lattice~\cite{denner.thomale.21}.
However, these explanations completely ignore the interplay between the kagome net and the rest of the system, i.e., the additional Sb atom in the kagome plane. As mentioned in the previous paragraphs, the CDW emerging in the $A$V$_{3}$Sb$_{5}$ compound family can have a much more complex nature than has recently been assumed. In our paper, we discuss the emergence of the CDW phase from the dynamical point of view, i.e., in the context of the structural phase transition at $T_\text{CDW}$. Analysis of spectra measured using inelastic x-ray scattering (IXS) and Raman spectroscopy of CsV$_{3}$Sb$_{5}$ at different temperatures shows that the $P6/mmm$ structure probably transforms to a lower-symmetry structure with decreasing temperature~\cite{ratcliff.hallett.21, wang.wu.21,wulferding.lee.21}. The existence of a structural phase transition is also indicated by the theoretical studies of $A$V$_{3}$Sb$_{5}$~\cite{cho.ma.21,zhang.liu.21,tan.liu.21,subedi.21}, which report imaginary modes in the phonon dispersion relations. What is more, even a simple SD or inverse SD deformation introduced ``by hand'' can lead to a dynamically stable structure, i.e., without imaginary phonon frequencies~\cite{tan.liu.21}. However, there is no clear evidence that these phases, i.e., SD or inverse SD, are the true ground states of the $A$V$_{3}$Sb$_{5}$ compounds. Additionally, some theoretical studies show that the coupling between lattice distortions and the CDW induces a weak, first-order transition without a continuous phonon softening in $A$V$_{3}$Sb$_{5}$~\cite{miao.li.21}. The group theory analysis shows that there are seventeen different structural distortions possible, due to the phonon instabilities at the M and L points in the parent $P6/mmm$ phase of $A$V$_{3}$Sb$_{5}$~\cite{subedi.21}. The calculations for each distorted structure performed by Alaska Subedi~\cite{subedi.21} show that $Fmmm$ gives the lowest energy for all $A$V$_{3}$Sb$_{5}$ family members.
However, the crystal structure optimized within the $Fmmm$ symmetry is unstable, showing imaginary modes at the $\Gamma$ and Z points. Our calculations of the dynamical properties of the $A$V$_3$Sb$_5$ compounds, performed at two different temperatures $T=50$~K and $T=150$~K, revealed a softening of the phonon modes at the M and L points at the lower temperature for each compound. Although in KV$_3$Sb$_5$ all phonon dispersions are stable, without imaginary frequencies, a weak decrease of both mode frequencies is still clearly visible. RbV$_3$Sb$_5$ exhibits one imaginary mode at the L point, but one frequency at the M point is also reduced. In CsV$_3$Sb$_5$ two soft modes, at the L and M points, were found. Next, we show that the crystal distortion defined by the combination of the polarization vectors from these two points leads to a stable $C2/m$ structure, whose properties correspond well to the experimental results discussed above. The paper is organized as follows. The calculation methods are explained in Sec.~\ref{sec:met}. In Sec.~\ref{sec.num} we present the {\it ab initio} results -- we start with details of the numerical calculations, then we discuss the dynamical properties of the basic $P6/mmm$ structure (Sec.~\ref{sec.dyn_prop}), its structural phase transition to the $C2/m$ structure (Sec.~\ref{sec.transition}), and the phonon density of states and Raman modes (Sec.~\ref{sec.dosy}). Next, we discuss the emergence of the charge density wave using the STM simulations of the surface (Sec.~\ref{sec.cdw_surf}), and finally the electronic properties of both structures (Sec.~\ref{sec.el}). We conclude our study in Sec.~\ref{sec.sum}. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{ph_band_191.pdf} \caption{ Common properties of the $A$V$_{3}$Sb$_{5}$ compounds with the $P6/mmm$ symmetry: (a) primitive unit cell (solid line) and (b)~the corresponding Brillouin zone.
The remaining panels (c)-(e) compare the phonon dispersions of these compounds obtained from the distribution of displacements for high and low temperature, i.e., $150$~K (orange line) and $50$~K (blue line), respectively. In~the case of RbV$_{3}$Sb$_{5}$ and CsV$_{3}$Sb$_{5}$, soft modes are observed. \label{fig.band191} } \end{figure*} \section{Calculation methods} \label{sec:met} The first-principles density functional theory (DFT) calculations were performed using the projector augmented-wave (PAW) potentials~\cite{blochl.94} implemented in the Vienna Ab initio Simulation Package ({\sc Vasp}) code~\cite{kresse.hafner.94,kresse.furthmuller.96,kresse.joubert.99}. For the exchange-correlation energy the generalized gradient approximation (GGA) in the Perdew, Burke, and Ernzerhof (PBE) parametrization was used~\cite{pardew.burke.96}. The energy cutoff for the plane-wave expansion was set to $350$~eV. The van der Waals correction was included using the Grimme method~\cite{grimme.antony.10} implemented within {\sc Vasp}. Optimizations of the structural parameters (lattice constants and atomic positions) in the primitive unit cell were performed using a $16\times 16 \times 8$ ($8 \times 8\times 8$) {\bf k}--point grid in the case of the hexagonal $P6/mmm$ (monoclinic $C2/m$) symmetry, using the Monkhorst--Pack scheme~\cite{monkhorst.pack.76}. As a convergence condition of the optimization loop, we took energy changes below $10^{-6}$~eV and $10^{-8}$~eV for the ionic and electronic degrees of freedom, respectively. The symmetry of the structures was analyzed with the {\sc FindSym}~\cite{stokes.hatch.05} and {\sc Seek-path}~\cite{hinuma.pizzi.17,togo.tanaka.18} packages.
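For reference, the Monkhorst--Pack mesh used above follows the standard construction $u_r=(2r-n-1)/(2n)$ along each reciprocal axis; a minimal sketch in fractional coordinates (not an actual {\sc Vasp} input) is:

```python
def monkhorst_pack(n1, n2, n3):
    """Fractional k-points of an n1 x n2 x n3 Monkhorst-Pack mesh,
    u_r = (2r - n - 1)/(2n) along each axis, r = 1..n."""
    axis = lambda n: [(2 * r - n - 1) / (2 * n) for r in range(1, n + 1)]
    return [(u, v, w) for u in axis(n1) for v in axis(n2) for w in axis(n3)]
```

Note that for even subdivisions the mesh avoids the $\Gamma$ point, while for odd subdivisions it contains it.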
The interatomic force constants (IFC) were obtained with the {\sc Alamode} software~\cite{tadano.gohda.14}, using the supercell technique [details about supercell construction can be found in the Supplemental Material (SM)~\footnote{See Supplemental Material at [URL will be inserted by publisher] for details about the construction of the supercells and crystallographic data for the obtained structures with $P6/mmm$ and $C2/m$ symmetries. The Supplemental Material also contains additional numerical results and data}]. Calculations were performed for thermal distributions of atomic displacements at given finite temperatures~\cite{hellman.abrikosov.11}, generated within the {\sc hecss} procedure~\cite{jochym.lazewski.21}. The energy and the Hellmann-Feynman forces acting on all atoms were calculated with {\sc Vasp} for twenty-five different configurations of atomic displacements in the supercell. In the calculations of the dynamical properties, we included both harmonic and higher-order contributions to phonons. The frequencies of the optical modes at the $\Gamma$ point and their activities were obtained within the {\it Parlinski--Li--Kawazoe} direct method~\cite{phonon1} implemented in the {\sc Phonon} software~\cite{phonon2}. \section{Numerical results and discussion} \label{sec.num} \subsection{Dynamical properties of $P6/mmm$} \label{sec.dyn_prop} \begin{figure}[!b] \centering \includegraphics[width=\linewidth]{schemat.pdf} \caption{ Diagram of symmetries generated by the soft modes (see labels). At low temperature, the initial $P6/mmm$ symmetry can develop soft modes at the M and L points, which lead to the $Cmmm$ and $Immm$ symmetries, respectively. However, a soft mode at the X point could lead to a transition from the $Immm$ to the $C2/m$ symmetry. Similarly, the $C2/m$ symmetry could be induced jointly by both soft modes of the $P6/mmm$ symmetry (from the M and L points).
\label{fig.schemat} } \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{ph_band_12.pdf} \caption{Common properties of the $A$V$_{3}$Sb$_{5}$ compounds with the $C2/m$ symmetry: (a) primitive unit cell (solid line) and (b)~the corresponding Brillouin zone. The remaining panels show phonon dispersions obtained for: (c) KV$_{3}$Sb$_{5}$, (d)~RbV$_{3}$Sb$_{5}$, and (e) CsV$_{3}$Sb$_{5}$. Note~that none of them exhibits any soft modes. \label{fig.band12} } \end{figure*} At room temperature, the $A$V$_{3}$Sb$_{5}$ compounds crystallize in the hexagonal $P6/mmm$ symmetry. The crystal structure, shown in Fig.~\ref{fig.band191}(a), contains an ideal kagome net of V atoms (decorated by one Sb atom), sandwiched by two honeycomb nets of Sb atoms (details of the crystal structure are given in the SM~\cite{Note1}). Taking the generic common description presented in the SM~\cite{Note1}, the system possesses only three structural degrees of freedom (the $a$ and $c$ lattice constants, and the $z_\text{Sb}$-coordinate describing the position of the Sb atom in the honeycomb layer). The irreducible representations (IR) at the $\Gamma$ point are: $A_\text{2u} + E_\text{1u}$ for the acoustic modes, and $A_\text{1g} + 3A_\text{2u} + B_\text{1g} + B_\text{1u} + 2 B_\text{2u} + 2 E_\text{2u} + E_\text{2g} + 4 E_\text{1u} + E_\text{1g}$ for the optical modes. Only the $A_\text{1g}$, $E_\text{2g}$, and $E_\text{1g}$ modes are Raman active, while the $A_\text{2u}$ and $E_\text{1u}$ modes are infra-red active. To examine the relationship between the CDW phase and the lattice dynamics of the studied compounds, we calculated the phonon dispersion relations at two temperatures strictly related to the emergence of the CDW. The dispersion curves obtained for the $A$V$_{3}$Sb$_{5}$ systems at 150 K (above $T_\text{CDW}$) and 50 K (below $T_\text{CDW}$) are shown in Fig.~\ref{fig.band191} as orange and blue lines, respectively. In the case of heavy $A$ ions, i.e.
in RbV$_{3}$Sb$_{5}$ and CsV$_{3}$Sb$_{5}$, we observe imaginary modes for $T < T_\text{CDW}$. In the first case, an imaginary mode at the L point and a small frequency softening at the M point are observed, while in the second, two imaginary modes are found, at the M and L points. Imaginary frequency modes lead to a structural phase transition, which will be discussed in more detail in the next subsection. We did not observe any imaginary modes in the KV$_{3}$Sb$_{5}$ compound; however, the frequencies of both modes, at the M and L points, are slightly reduced at 50 K. The imaginary frequencies are present when the phonon dispersion relations are calculated at $T = 0$~K~\cite{zhang.liu.21,tan.liu.21,cho.ma.21}; thus we assume that all three compounds transform to the lower symmetry phase in the same way. \subsection{Structural phase transition to $C2/m$} \label{sec.transition} Symmetry analysis of the soft modes gives information about the possible stable structures (Fig.~\ref{fig.schemat}) of the studied systems. First of all, the soft mode at the M point can induce a transition to the $Cmmm$ symmetry (space group $65$), while the soft mode at the L point to the $Immm$ symmetry (space group $71$). It is important to note that the lowest soft mode typically leads to the most energetically favorable structure. In our case, this corresponds to the soft mode at the L point, which not only changes the atom positions, but also generates a distortion of the lattice. Moreover, a dynamical study of the obtained structure with the $Immm$ symmetry reveals another soft mode (at the X point), resulting in a stable $C2/m$ structure. This structure can also be found as a result of simultaneous condensation of both soft modes, from the M and L points.
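As a quick consistency check (ours, not part of the paper), the $\Gamma$-point irreducible representations quoted above for the $P6/mmm$ phase must account for all $3 \times 9 = 27$ vibrational degrees of freedom of the nine-atom cell. A short tally, using the fact that $A$- and $B$-type irreps are one-dimensional while $E$-type irreps are two-dimensional:

```python
# Tally of the Gamma-point irreps of the P6/mmm phase against the
# 3 * (number of atoms) vibrational degrees of freedom.
acoustic = {"A2u": 1, "E1u": 1}
optical = {"A1g": 1, "A2u": 3, "B1g": 1, "B1u": 1, "B2u": 2,
           "E2u": 2, "E2g": 1, "E1u": 4, "E1g": 1}

def count(irreps):
    # E-type irreps are doubly degenerate, A- and B-type are not
    return sum(mult * (2 if name.startswith("E") else 1)
               for name, mult in irreps.items())

n_modes = count(acoustic) + count(optical)
n_atoms = 9  # one A, three V, and five Sb atoms per P6/mmm cell
```

The same bookkeeping applied to the $C2/m$ irreps listed in the next subsection gives $3 + 105 = 108$ modes, i.e., a 36-atom primitive monoclinic cell.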
The structure with the $C2/m$ symmetry [presented in Fig.~\ref{fig.band12}(a)] does not exhibit any soft modes for any of the $A$V$_{3}$Sb$_{5}$ compounds~\footnote{Notice that the distortion introduced by hand in the form of the \textit{Star of David} or inverse \textit{Star of David} pattern can give positive phonon spectra, see Refs.~\cite{wang.kong.21} and~\cite{tan.liu.21}.}. The phonon dispersions [Figs.~\ref{fig.band12}(c)-(e)] are characterized by many nearly flat bands, a consequence of the Brillouin zone folding (in relation to the basic $P6/mmm$ symmetry). The IRs at the $\Gamma$ point are: $A_\text{u} + 2 B_\text{u}$ for the acoustic modes, and $26 A_\text{g} + 23 A_\text{u} + 22 B_\text{g} + 34 B_\text{u}$ for the optical modes. The $A_\text{g}$ and $B_\text{g}$ modes are Raman active, while the $A_\text{u}$ and $B_\text{u}$ modes are infra-red active. We now discuss the atomic displacements during the transition from the $P6/mmm$ to the $C2/m$ phase, because the analysis of the atomic rearrangement may shed new light on the CDW phase of $A$V$_{3}$Sb$_{5}$. Both phases have a layered framework and the corresponding layers can be easily compared [cf.~Fig.~\ref{fig.band191}(a) and Fig.~\ref{fig.band12}(a)]. We discuss this question using the example of CsV$_{3}$Sb$_{5}$. First, the $A = \text{Cs}$ atoms change position in the direction perpendicular to the layers (along the initial $c_{h}$ direction). The V atoms, on the contrary, move only within the plane of the layer. Finally, the Sb atoms exhibit rather complex behavior: some of them remain at their original positions, while the rest are shifted (mostly in the $c_{h}$ direction). The largest displacements are observed for the V atoms (up to $0.10$~\AA) and some Sb atoms (smaller than $0.05$~\AA). \begin{figure}[!b] \centering \includegraphics[width=.98\linewidth]{vplane.pdf} \caption{ Calculated distances between atoms in the kagome net layer for CsV$_{3}$Sb$_{5}$.
V and Sb atoms are represented by red and gray circles, respectively. Positions of atoms in the $P6/mmm$ structure are given by colored circles, while those in $C2/m$ by solid-line circles. The width of the blue lines corresponds to the distance between atoms, which is also given explicitly by its value (in~\AA). The ``ideal'' distance between atoms in $P6/mmm$ is $2.718$~\AA. Black arrows show the direction of the displacements. \label{fig.vpalne} } \end{figure} As far as the features of the CDW phase are concerned, the most important one is the layer of V atoms forming the kagome net in the $P6/mmm$ structure (or the distorted kagome-like net in $C2/m$). In Fig.~\ref{fig.vpalne} we present the differences between the atomic positions in the ideal and distorted layers. There, the colored circles correspond to atomic sites in the ideal kagome net layer (the distance between atoms is $2.718$~\AA), while solid-line circles denote the positions of atoms in the distorted lattice of the $C2/m$ phase. The Sb atoms in the distorted kagome-like layer essentially conserve their positions. In contrast, the V atoms change their positions significantly. We can define three centers of atomic ``attraction'' (marked by shaded blue triangles in Fig.~\ref{fig.vpalne}). Two of them form isosceles triangles of V atoms, with the distance between atoms equal to $\sim 2.58$~\AA. The third center is realized in the form of a hexagon of V atoms centered on an Sb atom. In this case, the distance between atoms is equal to $\sim 2.68$~\AA. As for the rest, the distances between atoms are unchanged or larger than in the initial $P6/mmm$ structure (detailed values are shown in Fig.~\ref{fig.vpalne}). Summarizing the above, the low-temperature $C2/m$ structure bears similarity to the inverse SD (tri-hexagonal) structure.
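The bond lengths above can be cross-checked against the quoted size of the V displacements with elementary geometry. The following sketch is our own idealization: it treats the contracted triangles as equilateral and the hexagon as regular, with the distances taken from the figure.

```python
import numpy as np

# Idealized consistency check (ours) between the reported V-V bond lengths
# and the quoted V displacements (values in Angstrom, from the figure).
d_ideal = 2.718  # V-V distance in the perfect kagome net
d_tri = 2.58     # side of the contracted V triangles in C2/m
d_hex = 2.68     # side of the contracted V hexagon in C2/m

# Equilateral triangle: circumradius R = s / sqrt(3), so shrinking the
# side from d_ideal to d_tri moves each V atom radially inward by
shift_tri = (d_ideal - d_tri) / np.sqrt(3.0)  # ~0.08 Angstrom

# Regular hexagon: the side equals the circumradius, hence
shift_hex = d_ideal - d_hex                    # ~0.04 Angstrom
```

Both shifts stay below the quoted maximal V displacement of $0.10$~\AA, so the bond lengths and the displacement amplitudes are mutually consistent.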
\subsection{The phonon density of states and Raman scattering} \label{sec.dosy} \begin{figure}[!t] \centering \includegraphics[width=0.91\linewidth]{ph_dos.pdf} \caption{ Phonon density of states (DOS) for the $A$V$_{3}$Sb$_{5}$ family members with different symmetries (as labeled). The grey background denotes the total density of states (for $T = 50$~K), whereas the red, green, and blue lines correspond to the partial DOS of $A$, V, and Sb atoms, respectively. Additionally, for the $P6/mmm$ symmetry, the black dashed line shows the total DOS at $T = 150$~K. Imaginary frequencies, shown as negative values in (c) and (f), were magnified 30 and 10 times, respectively, as a guide to the eye. \label{fig.dos} } \end{figure} Now, we briefly discuss the phonon density of states (DOS) shown in Fig.~\ref{fig.dos}. For each of the discussed structures (independently of the symmetry), the highest frequency modes (above $5$~THz) correspond mostly to vibrations of the V atoms (solid green line). Nearly flat bands, located mostly in the middle range of frequencies (from $2$~THz to $5$~THz), are mainly associated with Sb atoms (solid blue lines). Finally, vibrations of the alkali metals are located around $\sim 1.75$~THz and form a clear peak in the DOS (solid red line). In the case of the soft mode DOS, the spectral weights clearly show contributions from the Sb and V atoms [see Figs.~\ref{fig.dos}(c) and \ref{fig.dos}(e)]. Additionally, for the $C2/m$ structures, we observe a lifting of the band degeneracy in relation to the $P6/mmm$ symmetry, in the form of a multi-peak structure. In practice, this DOS modification is observed in the whole range of frequencies (especially for the contributions that originate from the V and Sb atoms). The frequencies of the Raman modes with their irreducible representations calculated for CsV$_3$Sb$_5$ with the $P6/mmm$ and $C2/m$ symmetries are presented in Tab.~\ref{tab.ir} in the SM~\cite{Note1}.
The Raman-active modes in the $P6/mmm$ structure are present only within the honeycomb Sb sublattices -- the $A_{1g}$ mode represents the out-of-plane vibrations, while $E_{1g}$ and $E_{2g}$ are the in-plane modes. The $A_{1g}$ mode is found at $4.2$~THz, in good agreement with the experiments~\cite{li.zhang.21,ratcliff.hallett.21,wang.wu.21}. The frequency of the $E_{2g}$ mode ($3.9$~THz) is overestimated relative to the experimental values, $3.63$~THz~\cite{li.zhang.21} and $3.55$~THz~\cite{wulferding.lee.21}, while the lowest $E_{1g}$ mode, at $2.27$~THz, lies below the measured value of $2.66$~THz~\cite{li.zhang.21}. Our calculations cannot explain the peak at $\sim 4.6$~THz observed close to $T_\text{CDW}$~\cite{li.zhang.21}, which may be connected with the CDW fluctuations induced by the laser pumping. After the phase transition to the $C2/m$ structure, the Raman-active modes are also associated with the other atoms (i.e., Cs and V). In the low-symmetry phase, besides the $A_g$ mode at $4.1$~THz, two additional modes, around $1.3$~THz and $3.1$~THz, were observed in the pump-probe experiments~\cite{ratcliff.hallett.21,wang.wu.21}. They can be identified as the symmetric $A_\text{g}$ modes found here at $1.4$~THz and $3.1$~THz (see Table~\ref{tab.ir} in the SM~\cite{Note1}). The lower mode appears just below $T_\text{CDW}$, while the higher mode is observed below $T^{\ast}\sim 60$~K. This may indicate the existence of an intermediate phase for $T^{\ast} < T < T_\text{CDW}$ with a different symmetry -- one of those allowed by the soft modes~\cite{subedi.21}. \begin{figure}[!b] \centering \includegraphics[width=\linewidth]{stm.pdf} \caption{ The DFT-generated STM image for the CsV$_{3}$Sb$_{5}$ surface: (a) terminated by the distorted kagome-like plane and (b)~terminated by the Sb plane. Positions of the V and Sb atoms are marked by red and cyan dots, respectively.
Marked orange triangles and hexagons correspond to the shortest distances between V atoms in the kagome-like plane (cf. Fig.~\ref{fig.vpalne}). Arrows indicate horizontal lines with identical stripe-like charge distribution. \label{fig.stm} } \end{figure} \subsection{Surface charge density wave} \label{sec.cdw_surf} The CDW phase in $A$V$_{3}$Sb$_{5}$ has been visualized many times by STM experiments~\cite{liang.hou.21,chen.yang.21,li.wan.21,zhao.li.21,wang.jiang.21,xu.yan.21,jiang.yin.20,li.zhao.21,shumiya.hossain.21}. In the case of KV$_{3}$Sb$_{5}$, the topography of the Sb layer shows the realization of a surface $2\times2$ modulation~\cite{jiang.yin.20,li.zhao.21}. Similar results were obtained for the Sb-terminated surface of RbV$_{3}$Sb$_{5}$~\cite{shumiya.hossain.21}; however, in this case an additional stripe-like surface structure can be observed in the STM image. Finally, the $2\times2$ CDW and $4\times1$ stripe superstructures were also reported for the CsV$_{3}$Sb$_{5}$ surface~\cite{liang.hou.21,chen.yang.21,wang.jiang.21,li.wan.21,zhao.li.21,xu.yan.21}. There may be a few reasons for the formation of the observed CDW structures. Firstly, the atomic displacements during the transition from $P6/mmm$ to $C2/m$ symmetry can generate the CDW phase, as shown in Sec.~\ref{sec.transition}. Secondly, the surface reconstruction can play an important role in the emergence of the CDW pattern in the STM experiments. To explore this issue, we performed DFT calculations to simulate STM images for the low-temperature monoclinic phase of CsV$_{3}$Sb$_{5}$ (Fig.~\ref{fig.stm}). In the figure, among the complex charge modulations, we can identify clear unidirectional stripes (pointed out by arrows) running along one of the lattice directions (horizontal lines).
For clarity, we also mark the positions of V and Sb atoms by red and cyan dots, respectively, and the positions of adjacent V atoms in the kagome-like plane by orange triangles and hexagons (cf.~Fig.~\ref{fig.vpalne}). In the case of the V kagome-like plane termination [shown in Fig.~\ref{fig.stm}(a)], we observe a stripe-like structure corresponding to the pattern realized by the adjacent V atoms (mostly the marked hexagonal shapes). This finding agrees with the previous observation that the CDW phase in this plane is realized by the bonding of vanadium $d_{xz}/d_{yz}$ orbitals~\cite{wang.liu.21}. A certain refinement also comes from the shifts of the Sb atoms out of this plane: the presence of vacuum on one side of this plane leads to a small displacement of the Sb atoms in the out-of-plane (vacuum) direction. The positions of the V atoms are then modified, reconstructing a nearly ideal kagome lattice. In the case of the Sb-terminated surface [see Fig.~\ref{fig.stm}(b)], we also observe the stripe-like structure. The emergence of the CDW on this surface corresponds indirectly to the inverted SD structure. Here, the surface charge distribution is governed mainly by the $p$ orbitals of the Sb atoms. However, as we showed earlier, the distorted kagome-like plane disturbs the positions of the Sb atoms of neighboring layers, so, consequently, the influence of the 3D (bulk) CDW is also noticeable on this surface. A similar effect was observed in studies of the surface states of Bi$_{2}$Se$_{3}$, where the surface charges were modified by vacancies or atom substitution in the interior of the bulk material~\cite{kim.ye.11,wang.xu.11,jurszyszyn.sikora.20}, due to the realization of long-range orbital hybridization~\cite{ptok.kapcia.20b}. Concluding, the observed stripe-like superstructure is related to subtle effects of the surface reconstruction and atomic orbital hybridization rather than to an exact 3D CDW emerging in $A$V$_{3}$Sb$_{5}$.
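Simulated images of this type are commonly generated in the Tersoff--Hamann picture, where the constant-current topography follows a contour of the surface charge density. The toy map below is our own illustration (model density, arbitrary parameters; the actual images in the figure come from the DFT charge density): it shows how a weak unidirectional modulation of the density translates into stripe-like corrugation.

```python
import numpy as np

# Toy constant-current map: assume rho(x, y, z) = rho0(x, y) * exp(-2*kappa*z),
# so the tip height at a fixed density setpoint rho_iso is
#   z(x, y) = ln(rho0(x, y) / rho_iso) / (2 * kappa).
kappa, rho_iso = 1.0, 1e-6          # illustrative decay constant and setpoint
x = np.linspace(0.0, 2.0 * np.pi, 64)
X, Y = np.meshgrid(x, x, indexing="ij")

# Model surface density with a unidirectional (stripe-like) component
rho0 = 1.0 + 0.3 * np.cos(X) + 0.1 * np.cos(X) * np.cos(Y)
height = np.log(rho0 / rho_iso) / (2.0 * kappa)

# Apparent corrugation across the stripes exceeds that along them
corr_across = float((height.max(axis=0) - height.min(axis=0)).mean())
corr_along = float((height.max(axis=1) - height.min(axis=1)).mean())
```

The logarithm makes the apparent corrugation depend on the relative, not absolute, density modulation, which is why even a weak unidirectional component can dominate the image.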
\subsection{Electronic properties} \label{sec.el} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{el_dos.pdf} \caption{ Comparison between the electronic density of states calculated for the $P6/mmm$ (red line) and $C2/m$ (blue line) phases of $A$V$_{3}$Sb$_{5}$. Rows from top to bottom correspond to $A$ = K, Rb, and Cs, respectively. \label{fig.el_dos} } \end{figure} The theoretical study of the optical conductivity in CsV$_{3}$Sb$_{5}$~\cite{uykur.ortiz.21,uykur.ortiz.21b} shows that the transition to the CDW phase is manifested as a transfer of spectral weight towards higher energies, indicating a partial gap opening and, moreover, a reduction of the density of states at the Fermi level. Indeed, the spectral weight suppression around the Fermi level was also observed experimentally, in particular in KV$_{3}$Sb$_{5}$~\cite{jiang.yin.20}, RbV$_{3}$Sb$_{5}$~\cite{yu.xiao.21}, and CsV$_{3}$Sb$_{5}$~\cite{liang.hou.21}. In this context, van Hove singularities (VHS) were also studied in many papers~\cite{cho.ma.21,hu.wu.21}. For example, in the case of CsV$_{3}$Sb$_{5}$, multiple VHS, emerging from the $d_{xz}$/$d_{yz}$ and $d_{xy}$/$d_{x^{2}-y^{2}}$ kagome bands~\cite{kang.fang.21}, coexist near the Fermi level. Additionally, doping and pressure can tune the VHS closer to the Fermi level, which can also result in different Fermi surface instabilities~\cite{labollita.botana.21}. Also, the band reconstruction due to zone folding and a surface-induced orbital-selective shift of the electronic bands were observed in ARPES experiments, e.g., in KV$_{3}$Sb$_{5}$~\cite{luo.gao.21} or CsV$_{3}$Sb$_{5}$~\cite{luo.peng.21,ortiz.teicher.21}. Now, we briefly discuss the electronic properties of $A$V$_{3}$Sb$_{5}$ in the context of the band structure (Fig.~\ref{fig.el_band} in the SM~\cite{Note1}) and the DOS (Fig.~\ref{fig.el_dos}).
In the case of the $P6/mmm$ symmetry (top panels in Fig.~\ref{fig.el_band} in the SM~\cite{Note1}), the electronic band structures obtained for the primitive cell are in good agreement with previous studies. In each case, the band structure is formed mostly by the V $3d$ orbitals~\cite{wang.liu.21,labollita.botana.21,zhao.wu.21,tsirlin.fertey.21}. The realization of the ideal kagome lattice by the V atoms leads to the occurrence of a nearly flat band around $1$~eV and a corresponding peak in the electronic DOS (Fig.~\ref{fig.el_dos}). Dirac points and saddle points are observed around the Fermi level at the K and M points, respectively. This result is consistent with previous {\it ab initio} studies of these compounds~\cite{ortiz.gomes.19,song.ying.21,chen.zhan.21,labollita.botana.21,zhou.li.21,wang.liu.21,hu.wu.21,kang.fang.21}. The emergence of the 3D CDW within the $2\times 2\times 2$ supercell of the $P6/mmm$ system leads to the above-mentioned folding of the Brillouin zone and thus induces a band structure reconstruction (middle panels in Fig.~\ref{fig.el_band} in the SM~\cite{Note1}). In the $C2/m$ band structure, new hole-like structures can be identified (marked by pink dotted lines). Indeed, this type of band structure reconstruction is observed in the ARPES experiments~\cite{luo.peng.21,luo.gao.21}. Furthermore, the emergence of similar structures can be observed for the $C2/m$ supercell (bottom panels in Fig.~\ref{fig.el_band} in the SM~\cite{Note1}). In this case, additional effects of band degeneracy lifting at high-symmetry points can be observed. As we can see, the band structures of the $P6/mmm$ and $C2/m$ supercells look similar. Remarkably, such an effect was reported previously~\cite{tang.ono.21,subedi.21} in the case of other supercells (e.g., with the SD or inverted SD pattern introduced by hand into the system).
Lowering the symmetry and lifting the band degeneracy (also at the saddle point) around the Fermi level generates a V-shaped gap in the DOS spectrum exactly at the Fermi level (see Fig.~\ref{fig.el_dos}). A similar effect, which influences the CDW phase, was reported previously. Here, we clearly show that it is a consequence of the structural phase transition. However, the emergence of an additional CDW order should also lead to a gap opening. Therefore, this particular aspect remains under debate. \section{Summary and conclusion} \label{sec.sum} In this paper we have discussed the origin of the charge density wave in the $A$V$_{3}$Sb$_{5}$ compounds from the microscopic point of view, using an {\it ab initio} study of their dynamical properties. First, we showed that the structural phase transition at $T_\text{CDW}$ can be realized via symmetry breaking from $P6/mmm$ to $C2/m$. Next, we demonstrated that the atomic displacements during this transition lead to the emergence of the inverse \textit{Star of David} pattern. This mechanism can be the source of the 3D $2\times2\times2$ charge density wave observed in these compounds. Additionally, the $4\times1$ stripe superstructure observed experimentally on the surface of the $A$V$_{3}$Sb$_{5}$ family can be a consequence of the surface reconstruction. In our study, we used a thermal multi-displacement technique to capture higher-order phonon contributions. Notably, the lowering of the symmetry and the stabilization of the system in the $C2/m$ symmetry are a consequence of the soft modes in the fundamental crystal structure with $P6/mmm$ symmetry. The first-order structural transition can occur directly due to the synergy between the soft modes at the M and L points. We have found that, in practice, all of the atoms are shifted from their original high-symmetry positions, which suggests that the emergence of a more complex charge density wave phase is more likely than previously assumed.
Stemming from this, analyses of the atomic displacements confirm the appearance of the inverse \textit{Star of David} pattern within the distorted kagome-like layer. The possible manifestation of the $C2/m$ phase in $A$V$_{3}$Sb$_{5}$ provides an opportunity to observe experimentally the emergence of the charge density wave phase via Raman-active modes in the phonon spectrum. Indeed, this was recently observed for $T < T_\text{CDW}$. Experimental observations of the folding of the Brillouin zone and the reconstruction of the electronic band structure were also reproduced. In addition, the lifting of the electronic band degeneracy leads to a modification of the DOS at the Fermi level, in agreement with experimental results. \begin{acknowledgments} Some figures in this work were rendered using {\sc Vesta}~\cite{momma.izumi.11}. This work was supported by National Science Centre (NCN, Poland) under Projects No. 2017/24/C/ST3/00276 (A.P.), 2018/31/N/ST3/01746 (A.K.), 2016/23/B/ST3/00839 (A.M.O.), and 2017/25/B/ST3/02586 (P.P.). In addition, A.P. appreciates funding in the frame of scholarships of the Minister of Science and Higher Education of Poland for outstanding young scientists (2019 edition, No.~818/STYP/14/2019). \end{acknowledgments}
\section{Introduction} \subsection{Montgomery's Lemma.} The lemma, which constitutes the main subject of our investigation, has its origins in the theory of {\it{irregularities of distribution}}. Let $\left\{ x_1, \dots, x_N\right\} \subset \mathbb{T}^2 \cong [0,1)^2$ be a set of $N$ points. Montgomery's theorem \cite{mont1} (see also Beck \cite{beck1, beck}) guarantees the existence of a disk $D \subset \mathbb{T}^2$ with radius $1/4$ or $1/2$ such that the proportion of points in the disk is either much larger or much smaller than what is predicted by the area \begin{equation}\label{e.montdisc} \left| \frac1{N} \cdot \# \left\{1 \leq i \leq N: x_i \in D\right\} - |D|\right| \gtrsim N^{-3/4}. \end{equation} A higher-dimensional version of this statement for sets in $\mathbb T^d$ holds with the right-hand side of the order $\displaystyle{N^{-\frac12 - \frac1{2d}}}$. Montgomery's argument proceeds as follows: we first bound the $L^{\infty}$-norm of the `discrepancy function' trivially from below by the $L^2-$norm and then use Parseval's identity to multiplicatively separate the Fourier transform of the characteristic function of the geometric shape (in the example above: a disk) and the Fourier coefficients of the Dirac measures located at $\left\{ x_1, \dots, x_N\right\} \subset \mathbb{T}^2$ $$ \widehat{ \left( \sum_{n=1}^{N}{\delta_{x_n}} \right) } \,\, (k) = \sum_{n=1}^{N}{ e^{-2 \pi i \left\langle k, x_n \right\rangle}} \,\,\, \textup{ for } \, k \in \mathbb{Z}^2 .$$ A fundamental ingredient of the method is the fact that the Fourier transform of a finite set of Dirac measures cannot be too small at low frequencies. \begin{lem*}[Montgomery \cite{mont1}] For any $\left\{ x_1, \dots, x_N\right\} \subset \mathbb{T}^2$ and $X \geq 0$ \begin{equation}\label{e.mont} \sum_{|k_1| \leq X} \sum_{|k_2| \leq X}{ \left| \sum_{n=1}^{N}{ e^{2 \pi i \left\langle k, x_n \right\rangle}}\right|^2} \geq N X^2.
\end{equation} \end{lem*} This inequality is a two-dimensional analogue of an earlier result of Cassels \cite{cassels} and is related to a result of Siegel \cite{siegel}. Montgomery's Lemma is essentially sharp; generalizations of the statement to $\mathbb{T}^d$ are straightforward. This discussion suggests that expressions akin to the left-hand side of \eqref{e.mont} can be used as measures of uniformity of discrete sets of points, much like the discrepancy \eqref{e.montdisc}, see \cite{LuSt}. \subsection{Related recent results} A slight sharpening of Montgomery's Lemma has recently been given by the third author in \cite{stein1} (we only describe the result on $\mathbb{T}^2$, but higher-dimensional versions also hold): for all $\left\{ x_1, \dots, x_N\right\} \subset \mathbb{T}^2$ and $X \geq 0$ \begin{equation}\label{e.St} \sum_{\|k\| \leq X}{ \left| \sum_{n=1}^{N}{ e^{2 \pi i \left\langle k, x_n \right\rangle}}\right|^2} \gtrsim \sum_{i,j=1}^{N}{ \frac{X^2}{1 + X^4 \|x_i -x_j\|^4}}. \end{equation} This quantifies the natural notion that any type of clustering of the points is going to decrease the orthogonality to trigonometric functions. Montgomery's Lemma has usually been regarded as an inequality on the torus as opposed to a more general principle. However, in the study of irregularities of distribution on the sphere $\mathbb{S}^{d}$, the natural analogue of Fourier series is given by harmonic polynomials, which are also well understood and allow for fairly explicit analysis. In \cite{bil2} the first and second author proved a generalization of \eqref{e.montdisc} on $\mathbb S^d$, which essentially boiled down to a spherical analogue of \eqref{e.mont}. Namely, denoting the eigenfunctions of the spherical Laplacian (i.e.
spherical harmonics) by $\phi_0, \dots, \phi_k, \dots$, this inequality states \begin{equation}\label{e.bd} \sum_{k=0}^{X}{ \left| \sum_{n=1}^{N}{ \phi_k(x_n)} \right|^2} \gtrsim_d N X. \end{equation} We observe that $\phi_0$ is constant, so the $k=0$ term alone is already of size $\sim N^2$. Exactly like on $\mathbb{T}^d$, the inequality is therefore only interesting when the number of eigenfunctions $X$ starts to outnumber the number of points, $X \gtrsim N$. This is also necessary because there are point sets that are orthogonal to the first $\sim N$ eigenfunctions (this is classical on $\mathbb{T}^d$ and a substantial result on $\mathbb{S}^{d}$, see \cite{ahrens, bond}; it is likely to hold at a much greater level of generality). \section{Main results}\label{s.main} In the present paper we further extend Montgomery's Lemma in two different directions. First, we extend and generalize the statement of Montgomery's Lemma \eqref{e.mont} to general manifolds (with a logarithmic loss). Second, in the case of the sphere $\mathbb S^d$, we combine the ideas of \eqref{e.St}-\eqref{e.bd} and prove a spherical analogue of \eqref{e.St}, which refines \eqref{e.bd}. We also provide several applications of this result to irregularities of distribution and energy minimization on the sphere: a notable example is a refinement of Beck's lower bound on the $L^2-$spherical cap discrepancy. \subsection{Montgomery Lemma on general manifolds.} We now phrase a general version of Montgomery's Lemma on compact manifolds. It relates to various natural questions and we believe that a sharper form would be quite desirable. \begin{theorem}\label{t.1} Let $(M,g)$ be a smooth compact $d-$dimensional manifold, let $(\phi_k)_{k=0}^{\infty}$ denote the $L^2-$normalized Laplacian eigenfunctions of $-\Delta_g$ with the corresponding eigenvalues arranged in increasing order.
Let $\left\{ x_1, \dots, x_N\right\} \subset M$, and let $(a_i)_{i=1}^{N}$ be a set of nonnegative weights. Then $$\sum_{k=0}^{X}{ \left| \sum_{n=1}^{N}{ a_n \phi_k(x_n)} \right|^2} \gtrsim_{(M,g)} \left(\sum_{i=1}^{N}{a_i^2} \right) \frac{ X}{(\log{X})^\frac{d}{2}}.$$ \end{theorem} It seems likely that the logarithm is an artifact of the method; the result is more general (but logarithmically worse) than the classical Montgomery Lemma on $\mathbb{T}^d$ and the version on the sphere \cite{bil2} since it allows for nonnegative weights: the classical proofs of Montgomery's Lemma, both on $\mathbb{T}^d$ and $\mathbb{S}^{d}$, fail in this more general setting. The third author has shown \cite{stein2} that, for $N$ sufficiently large, one of the summands for $X \leq c_d N$ is nonzero (where $c_d$ does not depend on the manifold).\\ Theorem 1 has various implications: one would naturally assume that as soon as $X \gtrsim N$, the eigenfunctions should be fairly decoupled from the set of points and each single summand should be roughly of order $\sim N$: the theorem shows this basic intuition to be true up to logarithmic factors. Another application concerns the limits of numerical integration: the Laplacian eigenfunctions $\phi_k$ have mean value 0 as soon as $k \geq 1$ and oscillate rather slowly. One would, of course, expect it to be possible for $N$ points to integrate $\sim N$ functions exactly but, simultaneously, one would not expect such a rule to do well on a larger set of (mutually orthogonal) functions. This was shown to hold in \cite{stein2}; the formulation of Theorem~1 leads to a more quantitative result (akin to an estimate on the size of the unavoidable error, see also \cite{LuSt}). \subsection{Spherical extensions of Montgomery's Lemma} We now restrict our attention to the case when $M = {\mathbb S}^d$ is the unit sphere in ${\mathbb R}^{d+1}$ equipped with the normalized Haar measure ${\sigma}$.
Denote by $\mathcal{H}_n$ the space of all spherical harmonics of degree $n$ on ${\mathbb S}^d$, and let $\{Y_{n,k}:~ k=1, 2,\cdots, d_n\}$ be a real orthonormal basis of $\mathcal{H}_n$ (recall $\mbox{dim}~ \mathcal{H}_n \sim n^{d-1}$). We have the following spherical analogue of \eqref{e.St}. \begin{theorem}\label{thm-1-1} For $\{x_1, \cdots, x_N\}\subset {\mathbb S}^d$, we have for all $L \in \mathbb{N}$ \begin{equation}\label{1-1} \sum_{n=0}^L \sum_{k=1}^{d_n} \Bigl|\sum_{j=1}^N Y_{n,k}(x_j)\Bigr|^2 \ge c_d L^d \sum_{i,j=1}^N \frac { \log ( 2 + L \|x_i-x_j\|) }{(1+ L\|x_i-x_j\|)^{d+1}}. \end{equation} \end{theorem} We observe that the left-hand side runs over $\sim L^d$ terms. Leaving just the diagonal terms ($i=j$) on the right-hand side, one finds that the right-hand side is at least of the order $\sim N L^d$, i.e. \eqref{1-1} is stronger than \eqref{e.bd}. Similar to the case of the torus, this result has immediate applications to irregularities of distribution on the sphere. We provide refinements of both classical \cite{Beck2} and recent \cite{bil2} discrepancy bounds. Moreover, with the help of the Stolarsky principle and its generalizations \cite{stol,bil2}, see \eqref{e.stol}-\eqref{e.gstol}, we obtain estimates on the difference between discrete energies and energy integrals. These corollaries are gathered and proved in \S \ref{s.cor}. \subsection{$L^2-$spherical cap discrepancy.} We wish to highlight a particular implication that refines a famous result of J. Beck \cite{Beck2}. The $L^2-$spherical cap discrepancy is defined as the $L^2-$norm of the spherical cap discrepancy (i.e.\ the difference between the empirical distribution of $N$ points and the uniform distribution) integrated over all radii (we refer to \S \ref{s.cor} for a more formal definition). The result of Beck states that for any set $Z$ of $N$ points on $\mathbb{S}^{d}$ $$ D_{L^2, \textup{cap}} (Z) \gtrsim_d N^{-\frac12-\frac{1}{2d}} $$ and this is sharp up to a logarithmic factor.
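As a computational aside (ours), sums of the type appearing on the left-hand side of \eqref{1-1} can be evaluated through the addition theorem: for a real basis orthonormal with respect to $\sigma$ on $\mathbb{S}^2$, $\sum_{k} Y_{n,k}(u) Y_{n,k}(v) = (2n+1) P_n(\langle u, v\rangle)$, so the harmonic sum reduces to pairwise inner products. A numerical sanity check for degree $n = 2$, with the basis coded explicitly:

```python
import numpy as np

# Degree-2 real spherical harmonics on S^2, orthonormal with respect to
# the normalized measure sigma (standard coefficients).
def y2(v):
    x, y, z = v
    s15 = np.sqrt(15.0)
    return np.array([
        s15 * x * y,
        s15 * y * z,
        s15 * x * z,
        0.5 * s15 * (x * x - y * y),
        0.5 * np.sqrt(5.0) * (3.0 * z * z - 1.0),
    ])

rng = np.random.default_rng(3)
pts = rng.normal(size=(6, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # 6 random points on S^2

# sum_k |sum_j Y_{2,k}(x_j)|^2, i.e., the n = 2 term of the left-hand side
lhs = float(np.sum(np.sum([y2(p) for p in pts], axis=0) ** 2))

# Addition theorem: equals 5 * sum_{i,j} P_2(<x_i, x_j>)
G = pts @ pts.T
rhs = float((5.0 * 0.5 * (3.0 * G**2 - 1.0)).sum())
```

The identity makes the nonnegativity of each term on the left-hand side manifest and is the standard route from harmonic sums to pairwise kernel expressions of the type appearing on the right-hand side of \eqref{1-1}.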
Our approach yields a slight refinement. \begin{theorem}\label{t.beck+} For any set of $N$ points $Z=\left\{z_1, \dots, z_N\right\} \subset \mathbb{S}^d$ $$D_{L^2, \textup{cap}} (Z) \gtrsim_d N^{-\frac12-\frac1{2d}} \left( \frac{1}{N} \sum_{i,j=1}^N \frac {\log \,( 2 + N^{1/d} \|z_i-z_j\|)}{(1+ N^{1/d}\|z_i-z_j\|)^{d+1}}\right)^{1/2}.$$ \end{theorem} We remark that summing over the diagonal $i=j$ shows that the additional factor is $\gtrsim 1$ implying Beck's original result. However, as soon as there is subtle clustering of points, the off-diagonal terms may actually contribute a nontrivial quantity. \section{Montgomery Lemma on general manifolds: proof of Theorem \ref{t.1}.} \begin{proof} We first observe that the eigenfunction $\phi_0 \equiv 1/\sqrt{|M|}$ is constant and thus $$ \sum_{k=0}^{X} \left| \sum_{i=1}^{N}{ a_i \phi_k(x_i)} \right|^2 \gtrsim_{(M,g)} \left(\sum_{i=1}^{N}{ a_i}\right)^2 = \frac{ \left(\sum_{i=1}^{N}{ a_i}\right)^2}{ \sum_{i=1}^{N}{ a_i^2}} \sum_{i=1}^{N}{ a_i^2}$$ and it thus suffices to prove the statement for $$ X \gtrsim \frac{ \left(\sum_{i=1}^{N}{ a_i}\right)^2}{ \sum_{i=1}^{N}{ a_i^2}}.$$ The proof starts by bounding the desired quantity from below; here, we let $t>0$ be an arbitrary number that will be fixed later. \begin{align*} \sum_{k=0}^{X} \left| \sum_{i=1}^{N}{ a_i \phi_k(x_i)} \right|^2 &\geq \sum_{k=0}^{X} e^{-\lambda_k t} \left| \sum_{i=1}^{N}{a_i \phi_k(x_i)} \right|^2 \\ &= \sum_{k=0}^{X} e^{-\lambda_k t} \sum_{i,j=1}^{N}{ a_i \phi_k(x_i) a_j \phi_k(x_j) } \\ &= \sum_{i,j=1}^{N}{ a_i a_j \sum_{k=0}^{X} e^{-\lambda_k t} \phi_k(x_i) \phi_k(x_j)}. \end{align*} Here and throughout the proof, the $\lambda_k$ denote the eigenvalues of $-\Delta_g$ such that $-\Delta_g \phi_k =\lambda_k \phi_k$ and $0=\lambda_0\leq \lambda_1\leq \lambda_2\leq \cdots$. 
The inner sum is now close to a classical expansion for the heat kernel $$ p_t(x,y) = \sum_{k=0}^{\infty} e^{-\lambda_k t} \phi_k(x) \phi_k(y).$$ This means that we can replace the inner sum by the heat kernel while incurring an error that only depends on the size of $X$. We will now make this precise: the main ingredients are Weyl's law $ \lambda_k \sim c_{M} k^{2/d},$ where $c_{M}$ only depends on the volume of the manifold $M$ and H\"ormander's estimate \cite{hor} $$ \|\phi_k\|_{L^{\infty}} \lesssim_{(M,g)} \lambda_k^{\frac{d-1}{4}}.$$ Combining these two inequalities, we can now estimate the tail: \begin{align*} \left| \sum_{k=X+1}^{\infty} e^{-\lambda_k t} \phi_k(x_i) \phi_k(x_j) \right| &\lesssim_{(M,g)} \sum_{k=X+1}^{\infty} \left|e^{- c k^{\frac{2}{d}} t} \phi_k(x_i) \phi_k(x_j) \right| \\ &\leq \sum_{k=X+1}^{\infty} e^{- c k^{\frac{2}{d}} t} \|\phi_k\|_{L^{\infty}}^2\\ &\lesssim_{(M,g)} \sum_{k=X+1}^{\infty} e^{- ck^{\frac{2}{d}} t} \lambda_k^{\frac{d-1}{2}} \\ &\lesssim_{(M,g)} \sum_{k=X+1}^{\infty} e^{-c k^{\frac{2}{d}} t} k^{1 - \frac{1}{d}} . \end{align*} This quantity can be bounded from above by an integral which, after substitution, reduces to the incomplete Gamma function: \begin{align*} \sum_{k=X+1}^{\infty} e^{- ck^{2/d} t} k^{1 - \frac{1}{d}} &\leq \int_{X}^{\infty} e^{- \left( \frac{y}{t^{-d/2}}\right)^{\frac{2}{d}} } y^{1 - \frac{1}{d}} dy \\ &=\frac{1}{t^{d-\frac{1}{2}}} \int_{cX t^{d/2}}^{\infty} e^{- z^{\frac{2}{d}} } z^{1 - \frac{1}{d}} dz \\ &=\frac d2 \frac{1}{t^{d-\frac{1}{2}}} \Gamma\left(d-\frac{1}{2}, cX^{\frac{2}{d}} t\right). \end{align*} We will end up working in the regime $X^{\frac{2}{d}} t \gg 1$. In this regime, there is a classical asymptotic (see e.g. 
Abramowitz \& Stegun \cite[\S 6.5]{abra}), valid for $a \gg 1$, $$ \Gamma\left(d-\frac{1}{2}, a \right) \lesssim_{d} a^{d-\frac{3}{2}} e^{-a}.$$ Altogether, this implies, since we may assume that $$ X \gtrsim \frac{ \left(\sum_{i=1}^{N}{ a_i}\right)^2}{ \sum_{i=1}^{N}{ a_i^2}},$$ the bound \begin{align*} \sum_{k=0}^{X} \left| \sum_{i=1}^{N}{ a_i \phi_k(x_i)} \right|^2 &\gtrsim \sum_{i,j=1}^{N}{a_i a_j p_t(x_i, x_j) } - C\sum_{i,j =1}^{N}{ \frac{ a_i a_j }{t^{d-\frac{1}{2}}} \left( X^{\frac{2}{d}} t\right)^{d - \frac{3}{2}} \exp\left(-c X^{\frac{2}{d}} t\right)} \\ &= \left( \sum_{i,j=1}^{N}{a_i a_j p_t(x_i, x_j) } \right) - C\frac{ \left( \sum_{i=1}^{N}{a_i}\right)^2 }{t^{d-\frac{1}{2}}} \left( X^{\frac{2}{d}} t\right)^{d - \frac{3}{2}} \exp\left(- cX^{\frac{2}{d}} t\right) \\ &\gtrsim \sum_{i,j=1}^{N}{a_i a_j p_t(x_i, x_j) } - C\frac{ X \sum_{i=1}^{N}{a_i^2} }{t^{d-\frac{1}{2}}} \left( X^{\frac{2}{d}} t\right)^{d - \frac{3}{2}} \exp\left(-c X^{\frac{2}{d}} t\right) \end{align*} We will end up working at time $t \sim X^{-\frac{2}{d}} \log{X} \lesssim 1$ which, for $X$ sufficiently large, enables us to make use of Varadhan's short-time asymptotics $$ p_t(x,y) \sim \frac{1}{(4 \pi t)^{d/2}} \exp\left( - \frac{\|x-y\|^2}{4t} \right)$$ to argue that $$ \sum_{i,j=1}^{N}{a_i a_j p_t(x_i, x_j) } \geq \sum_{i=1}^{N}{a_i^2 p_t(x_i, x_i) } \gtrsim t^{-\frac{d}{2}} \sum_{i=1}^{N}{a_i^2}.$$ Summarizing, we have $$ \sum_{k=0}^{X} \left| \sum_{i=1}^{N}{ a_i \phi_k(x_i)} \right|^2 \gtrsim_{(M,g)} \sum_{i=1}^{N}{a_i^2} \left[ t^{-\frac{d}{2}} - \frac{C X }{t^{d-\frac{1}{2}}} \left( X^{\frac{2}{d}} t\right)^{d - \frac{3}{2}} \exp\left(-c X^{\frac{2}{d}} t \right) \right].$$ Setting $t =A X^{-\frac{2}{d}} \log{X}$ with $A=\frac 1c (1-\frac 1d) +1$ now implies the result. 
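To indicate why this choice of $t$ suffices (a sketch added for the reader, with constants absorbed): since $X^{\frac{2}{d}}t = A\log X$, the exponential in the error term is a negative power of $X$, while the remaining factors grow only polynomially,

```latex
\exp\bigl(-c X^{\frac{2}{d}} t\bigr) = X^{-cA},
\qquad
\frac{X}{t^{d-\frac12}}\bigl(X^{\frac{2}{d}}t\bigr)^{d-\frac32}
  \;\lesssim_{A,d}\; X^{3-\frac1d}\,(\log X)^{-1},
```

so for $A$ large enough (depending only on $c$ and $d$) the error term is dominated by the main term $t^{-\frac{d}{2}} \sim_A X (\log X)^{-\frac{d}{2}}$.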
\end{proof} \section{An improved Montgomery Lemma on the sphere:\\ proof of Theorem \ref{thm-1-1}} Let $C_n^\lambda$ denote the Gegenbauer (ultraspherical) polynomials of degree $n$, which are orthogonal on $[-1,1]$ with respect to the weight $w_\lambda(t) = (1-t^2)^{\lambda - 1/2} $ (see \cite{DX} for the background information). Since we are working on $\mathbb S^d$, we set $\lambda=\frac {d-1}2$. Denote also $E_n^\lambda(t)=\frac {n+\lambda}{\lambda} C_n^\lambda (t)$. For ${\delta}>0$, we define the Ces\`aro-type kernel $$ K_L^{\delta} (t):=\sum_{k=0}^L \frac {A_{L-k}^{\delta}}{A_L^{\delta}} E_k^\lambda (t),\ \ \ \text{with}\ \ A_j^{\delta}=\frac {\Gamma(j+{\delta}+1)}{\Gamma(j+1)\Gamma({\delta}+1)}.$$ It is a classical result of Kogbetliantz \cite{kog} (see also \cite{reim}) that $K_L^\delta (t) \ge 0$ on $[-1,1]$, whenever $\delta \ge d$. \begin{lem}\label{lem-1-1} For $\{x_1, \cdots, x_N\}\subset {\mathbb S}^d$ and any ${\delta}>0$, we have $$ \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2 =\sum_{i,j=1}^N E_n^\lambda (x_i\cdot x_j)\ge 0,\ \ \ n=0,1,\cdots, $$ and \begin{equation}\label{1-2} \sum_{n=0}^L \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2 \ge \sum_{i,j=1}^N K_{L}^{\delta} (x_i\cdot x_j). \end{equation} \end{lem} This lemma follows directly from the addition formula for spherical harmonics. We include the proof here for the sake of completeness. \begin{proof}By the addition formula for spherical harmonics, we have \begin{align*} \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2&= \sum_{k=1}^{d_n} \sum_{i=1}^N \sum_{j=1}^N Y_{n,k}(x_i)Y_{n,k}(x_j)=\sum_{i,j=1}^N \sum_{k=1}^{d_n} Y_{n,k}(x_i)Y_{n,k}(x_j)\\ &=\sum_{i,j=1}^N E_n^\lambda (x_i\cdot x_j). 
\end{align*} This also implies that \begin{align*} \sum_{n=0}^L \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2&= \sum_{n=0}^L \sum_{i,j=1}^N E_n^\lambda(x_i\cdot x_j) \ge \sum_{n=0}^L \frac {A_{L-n}^{\delta}}{A_{L}^{\delta}} \sum_{i,j=1}^N E_n^\lambda(x_i\cdot x_j)\\ &= \sum_{i,j=1}^N \sum_{n=0}^L \frac {A_{L-n}^{\delta}}{A_{L}^{\delta}} E_n^\lambda(x_i\cdot x_j) =\sum_{i,j=1}^N K_{L}^{\delta} (x_i\cdot x_j). \end{align*} \end{proof} Numerical experiments suggest that $K_n^d$ is not just non-negative, but is actually strictly positive and should satisfy favorable lower bounds. However, we could not prove it, hence, as in \cite{stein1}, we shall make use of additional rounds of averaging. Define $$ G_n^{d+1}(t)=\frac 1{n+1} \sum_{j=0}^n K_j^d(t) \,\,\, \textup{ and }\,\,\, G_n^{d+2}(t)=\frac 1{n+1} \sum_{j=0}^n G_j^{d+1}(t).$$ \begin{lem}\label{lem-1-2}For $n\in {\mathbb N}$ and ${\theta}\in (0, \pi)$, \begin{equation}\label{1-3-0} G_n^{d+2} (\cos{\theta}) \ge C n^d (1+n{\theta})^{-d-1}\log (2+n{\theta}). \end{equation} \end{lem} {\emph{Remark:}} It seems that \eqref{1-3-0} with $G_n^{d+1}$ in place of $G_n^{d+2}$ remains true, but the proof would be more involved (we prove a slightly weaker bound \eqref{1-5}). \begin{proof} First, we recall that $ K_n^d (\cos{\theta})\ge 0$ for ${\theta}\in [0,\pi]$, and $\|K_n^d\|_\infty =K_n^d (1) \sim (n+1)^d$. It follows that for ${\delta}=d+1$ or $ d+2$, $$\|G_n^{\delta} \|_\infty =G_n^{\delta} (1) \sim (n+1)^d.$$ By Bernstein's inequality for trigonometric polynomials, this also implies that for $F_n(t):=K_n^d(t)$ or $ G_n^{d+1}(t)$ or $ G_n^{d+2}(t)$, we have \begin{equation}\label{1-4} F_n (\cos {\theta}) \ge \frac 12 \|F_n\|_\infty \sim (n+1)^d,\ \ 0\leq {\theta}\leq \frac 1{2n}. \end{equation} Next, we show that \begin{equation}\label{1-5} G_n^{d+1} (\cos{\theta}) \ge c n^d(1+n{\theta})^{-d-1},\ \ n\ge 1,\ \ {\theta}\in [0,\pi]. 
\end{equation} If $0\leq {\theta}\leq \frac 1{2n}$, then \eqref{1-5} follows directly from \eqref{1-4}. For $\frac 1{2n}\leq {\theta}\leq \pi$, we have \begin{align*} G_n^{d+1} (\cos{\theta})& =\frac 1{n+1} \sum_{j=0}^n K_j^d (\cos{\theta}) \ge \frac 1{n+1} \sum_{0\leq j \leq \frac 1{2{\theta}}} K_j^d (\cos{\theta})\\ &\ge c \frac 1{n+1} \sum_{0\leq j \leq \frac 1{2{\theta}}} (j+1)^d\sim n^{-1} {\theta}^{-d-1} \sim n^d (1+n{\theta})^{-d-1}. \end{align*} Finally, we prove estimate \eqref{1-3-0}. Note that \eqref{1-5} with $G_n^{d+2}$ in place of $G_n^{d+1}$ remains true. Thus, without loss of generality, we may assume that $\frac {2}{n}\leq {\theta}\leq \pi$ and $n\ge 10$. We then have \begin{align*} G_n^{d+2} (\cos{\theta})& =\frac 1{n+1} \sum_{j=0}^n G_j^{d+1} (\cos{\theta})\ge c n^{-1}\sum_{j=0}^n j^d(1+j{\theta})^{-d-1}\\ &\ge c n^{-1}\sum_{{\theta}^{-1}\leq j \leq n} j^{-1} {\theta}^{-d-1}\ge c n^{-1}{\theta}^{-d-1} \int_{{\theta}^{-1}+1}^n \frac {dt}t\\ &=c n^{-1}{\theta}^{-d-1} \int_{1+{\theta}}^{n{\theta}} \frac {dt}t\sim n^d (1+n{\theta})^{-d-1} \log (n{\theta}+2). \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm-1-1}] Using Lemma \ref{lem-1-1}, we have \begin{align} \sum_{n=0}^L \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2 & \ge \frac 1L \sum_{m=0}^L \sum_{n=0}^m \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2\notag\\ &\ge \frac 1L \sum_{m=0}^L \sum_{i,j=1}^N K_{m}^d (x_i\cdot x_j)=\sum_{i,j=1}^N G_{L}^{d+1} (x_i\cdot x_j).\label{1-6} \end{align} Using \eqref{1-6} and averaging once again, we have \begin{align*} \sum_{n=0}^L &\sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2 \ge \frac 1L \sum_{m=0}^L \sum_{n=0}^m \sum_{k=1}^{d_n} |\sum_{j=1}^N Y_{n,k}(x_j)|^2\\ &\ge \frac 1L \sum_{m=0}^L \sum_{i,j=1}^N G_{m}^{d+1} (x_i\cdot x_j)=\sum_{i,j=1}^N G_{L}^{d+2} (x_i\cdot x_j), \end{align*} which, using \eqref{1-3-0}, implies the desired estimate \eqref{1-1}. 
\end{proof} \section{Some corollaries for discrepancy and discrete energy of point distributions on the sphere}\label{s.cor} For a finite set of points $Z=\{z_1, \cdots, z_N\}\subset \mathbb{S}^{d}$, its $L^2-$discrepancy with respect to a function $f: [-1,1] \rightarrow \mathbb R$ is defined as \begin{align}\label{e.dL2} D_{L^2, f}(Z) =\bigg( \int\limits_{\mathbb{S}^{d}}\Bigl| \frac 1N \sum_{j=1}^N f(x\cdot z_j) - \int\limits_{\mathbb{S}^{d}} f(x\cdot y)\, d{\sigma}(y) \Bigr|^2\, d{\sigma}(x)\bigg)^{\f12}. \end{align} In particular, when $f(t)= f_\tau (t) = {\bf{1}}_{[\tau,1]} (t)$, one obtains the discrepancy with respect to spherical caps $C(x,\tau) = \{ y \in \mathbb{S}^{d}:\, x\cdot y \ge \tau \}$ of aperture $\arccos \tau$, i.e. \begin{equation} D_{L^2, f_\tau}^2 (Z) = \int\limits_{\mathbb{S}^{d}}\Bigl| \frac 1N \sum_{j=1}^N {\bf{1}}_{C(x,\tau)} ( z_j) - \sigma \big( C(x,\tau) \big) \Bigr|^2\, d{\sigma}(x). \end{equation} Its $L^2-$average over the parameter $\tau$ yields the classical $L^2-${\it{spherical cap discrepancy}} \begin{align}\label{e.disccap} D_{L^2, \textup{cap}}^2 (Z) = \int\limits_{-1}^1 D_{L^2, f_\tau}^2 (Z) \, d\tau , \end{align} which has been extensively studied \cite{beck1,Beck2}. In particular, this quantity satisfies the following identity known as the {\it{Stolarsky principle}} \cite{stol}, which relates it to a certain discrete energy: \begin{equation}\label{e.stol} c_d \, D_{L^2, \textup{cap}}^2 (Z) = \int\limits_{\mathbb S^{d}} \int\limits_{\mathbb S^{d}} \| x- y \| \, d\sigma (x)\, d\sigma (y)\,\, - \,\, \frac{1}{N^2} \sum_{i,j = 1}^N \| z_i - z_j \| , \end{equation} where $c_d$ is a dimensional constant. 
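As a quick sanity check of \eqref{e.stol} (a computation added for convenience, not part of the original argument): for $d=2$ the energy integral is the mean chord length of the unit sphere,

```latex
% With t = x\cdot y uniformly distributed on [-1,1] for d=2 and \|x-y\|=\sqrt{2-2t}:
\int\limits_{\mathbb S^{2}} \int\limits_{\mathbb S^{2}} \| x- y \| \, d\sigma (x)\, d\sigma (y)
  \;=\; \frac12 \int_{-1}^{1} \sqrt{2-2t}\, dt \;=\; \frac{4}{3},
```

so on $\mathbb S^2$ the identity \eqref{e.stol} says that minimizing the $L^2$ spherical cap discrepancy is equivalent to pushing the average pairwise distance $\frac1{N^2}\sum_{i,j}\|z_i-z_j\|$ as close to $\frac43$ as possible.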
It has been established in \cite{bil2,bil3} that the Stolarsky principle can be generalized in the following way: for $f \in L^2 \big([-1,1], w_\lambda \big)$ \begin{align}\label{e.gstol} D^2_{L^2, f}(Z) = \frac{1}{N^2} \sum_{i=1}^N \sum_{j=1}^N F(z_i\cdot z_j) - \int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}} F(x\cdot y)\, d{\sigma}(x) d{\sigma}(y), \end{align} where the function $F: [-1,1]\rightarrow \mathbb R$ is defined through the identity \begin{equation}\label{e.Ff} \widehat{F} (n,\lambda) = \big( \widehat{f} (n,\lambda) \big)^2 . \end{equation} Here and in what follows, $$ \widehat{f}(n,\lambda):=\frac {(n+\lambda)\Gamma(\lambda)}{\sqrt{\pi}\Gamma(\lambda+\f12) } \int_{-1}^1 f(t) C_n^\lambda(t) (1-t^2)^{\lambda-\f12}\, dt.$$ It is now easy to see that the refined spherical Montgomery Lemma, Theorem \ref{thm-1-1}, provides new estimates both for the discrepancy and discrete energies. Setting $$\displaystyle{G(x) = \frac1{N} \sum_{j=1}^N f (x\cdot z_j)},~ \mbox{we see that} \quad D_{L^2, f}(Z) = \| G - \widehat{G} (0,\lambda) \|_{L^2 (\mathbb{S}^{d}, d\sigma)}$$ and, according to the Funk--Hecke formula, for any spherical harmonic $Y_n \in \mathcal H_n$ \begin{equation} \langle G, Y_n \rangle = \frac1{N} \sum_{j=1}^N \int\limits_{\mathbb{S}^{d}} f(x\cdot z_j ) Y_n (x) d\sigma (x) = \frac{1}{N} \widehat{f} (n,\lambda) \sum_{j=1}^N Y_n(z_j). 
\end{equation} Thus we find that \begin{align} D_{L^2, f}^2(Z) & = \| G - \widehat{G} (0,\lambda) \|_2^2 = \sum_{n=1}^\infty \sum_{k=1}^{d_n} |\langle G, Y_{n,k} \rangle|^2 \\ \nonumber & = \frac1{N^2} \sum_{n=1}^\infty \big| \widehat{f} (n,\lambda) \big|^2 \sum_{k=1}^{d_n} \bigg| \sum_{j=1}^N Y_{n,k} (z_j) \bigg|^2 \\ \nonumber & \ge \frac{1}{N^2 } \cdot \min_{1\le n \le L} \big| \widehat{f} (n,\lambda) \big|^2 \cdot \sum_{n=1}^L \sum_{k=1}^{d_n} \bigg| \sum_{j=1}^N Y_{n,k} (z_j) \bigg|^2\\ \nonumber & = \frac{1}{N^2 } \cdot \min_{1\le n \le L} \big| \widehat{f} (n,\lambda) \big|^2 \cdot \left( \sum_{n=0}^L \sum_{k=1}^{d_n} \bigg| \sum_{j=1}^N Y_{n,k} (z_j) \bigg|^2 - N^2 \right), \end{align} where we used the fact that the term, corresponding to $n=0$, is $N^2$. If we set $L= C' N^{\frac1{d}}$ with $C'$ being a large dimensional constant, and leave just the diagonal terms in \eqref{1-1}, we see that $$ \sum_{n=1}^L \sum_{k=1}^{d_n} \bigg| \sum_{j=1}^N Y_{n,k} (z_j) \bigg|^2 \ge c'' N^2.$$ Therefore, again applying \eqref{1-1} of Theorem \ref{thm-1-1}, we arrive at the following corollary: \begin{corollary}\label{c.1} Let $f \in L^2 \big([-1,1], (1-t^2)^{\lambda - \frac12} \big)$. For $Z=\{z_1,\dots, z_N \} \subset \mathbb{S}^{d}$ we have \begin{equation}\label{e.c1} D_{L^2, f}^2 (Z) \gtrsim \frac{1}{N } \cdot \min_{1\le n \le C' N^{\frac1{d}} } \big| \widehat{f} (n,\lambda) \big|^2 \cdot \sum_{i,j=1}^N \frac {\log ( 2 + N^{1/d} \|z_i-z_j\|)}{(1+ N^{1/d}\|z_i-z_j\|)^{d+1}}, \end{equation} where $C'$ is a large constant depending only on the dimension. \end{corollary} Such lower bounds, which show that finite point sets cannot be distributed too uniformly, are a common theme in the subject of {\emph{irregularities of distribution}}. 
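The role of the constant $C'$ in the argument preceding Corollary \ref{c.1} can be unwound explicitly (a verification added for convenience): applying \eqref{1-1} with $L = C' N^{1/d}$ and keeping only the diagonal terms gives

```latex
% Diagonal terms of \eqref{1-1} with L^d = (C')^d N:
\sum_{n=0}^{L} \sum_{k=1}^{d_n} \bigg| \sum_{j=1}^N Y_{n,k} (z_j) \bigg|^2
  \;\ge\; c_d\,(\log 2)\, L^{d}\, N \;=\; c_d\,(\log 2)\,(C')^{d}\, N^{2},
```

so as soon as $(C')^d \ge 2/(c_d \log 2)$, subtracting the $n=0$ term $N^2$ still leaves at least $c'' N^2$ with $c'' = c_d(\log 2)(C')^d - 1 \ge 1$.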
Using the generalized Stolarsky principle \eqref{e.gstol} and relation \eqref{e.Ff} we can also obtain a similar corollary for the discrete energy: \begin{corollary}\label{c.2} Assume that $F\in C[-1,1]$ and $\widehat{F} (n,\lambda) \ge 0$ for all $n\ge 1$ (i.e., up to the constant term, $F$ is a positive definite function on the sphere $\mathbb S^d$). Then for any point distribution $Z=\{z_1,\dots, z_N \} \subset \mathbb{S}^{d}$ \begin{equation}\label{e.c2} \frac{1}{N^2} \sum_{i,j=1}^N F(z_i\cdot z_j) - I_F (\sigma) \gtrsim \frac{1}{N } \cdot \min_{1\le n \le C' N^{1/d} } \widehat{F} (n,\lambda) \cdot \sum_{i,j=1}^N \frac {\log ( 2 + N^{1/d} \|z_i-z_j\|)}{(1+ N^{1/d}\|z_i-z_j\|)^{d+1}}, \end{equation} where $C'$ is a large constant depending only on the dimension, and $ I_F (\sigma) = \int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}} F(x\cdot y)\, d{\sigma}(x) d{\sigma}(y) $ denotes the energy integral with potential given by $F$. \end{corollary} \noindent {\emph{Remark:}} The fact that every continuous positive definite function on the sphere can be represented by \eqref{e.Ff}, i.e. has appropriate decay of $\widehat{F} (n,\lambda)$, has been discussed in \cite[Lemma 2.3]{bil2}.\\ It is known (see e.g. \cite{bil2,bil3}) that for positive definite functions $F$, the uniform surface measure $\sigma$ minimizes the energy with potential $F$ over all Borel probability measures on $\mathbb S^d$. Thus Corollary \ref{c.2} states, in a quantitative way, that the energy of finite atomic measures with equal weights cannot be too close to the minimum. 
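A standard example to keep in mind (recalled here for orientation; it is the kernel behind the Stolarsky principle \eqref{e.stol}): the negative Euclidean distance, written as a function of the inner product,

```latex
F(t) \;=\; -\,\|x-y\| \;=\; -\sqrt{2-2t}\,, \qquad t = x\cdot y \in [-1,1],
```

satisfies $\widehat{F}(n,\lambda) > 0$ for all $n\ge 1$, so (after adjusting by the constant term) Corollary \ref{c.2} applies to it, and via \eqref{e.stol} the resulting estimate is a bound on the $L^2$ spherical cap discrepancy.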
We observe that leaving just the $N$ diagonal terms ($i=j$) in the right-hand sides of \eqref{e.c1} and \eqref{e.c2} we recover the bounds obtained in \cite[Theorem 4.2]{bil2}: \begin{align} D_{L^2, f} (Z) &\gtrsim \min_{1\le n \le C' N^{\frac1{d}} } \big| \widehat{f} (n,\lambda) \big|, \\ \nonumber \frac{1}{N^2} \sum_{i,j=1}^N F(z_i\cdot z_j) - I_F (\sigma) & \gtrsim \min_{1\le n \le C' N^{\frac1{d}}} \widehat{F} (n,\lambda). \end{align} Corollaries \ref{c.1} and \ref{c.2} add more subtle information to these lower bounds. \\ Returning to the classical case of the spherical cap discrepancy \eqref{e.disccap}, recall Beck's famous result \cite{beck}, which states that \begin{equation}\label{e.beck} D_{L^2, \textup{cap}} (Z) \gtrsim N^{-\frac12-\frac{1}{2d}} \end{equation} for any $N$-point set on the sphere $\mathbb{S}^{d}$ (and this is optimal up to a logarithmic factor). Using the fact that (see e.g. \cite{Sz} or \cite{bil2}) \begin{equation} \int_{-1}^1 \big| \widehat{f_\tau} (n,\lambda) \big|^2 \, d\tau \approx n^{-d-1} \end{equation} and repeating the arguments above almost verbatim, but with an additional averaging in $\tau$, one obtains a refinement of Beck's original estimate (this refinement has been stated in \S \ref{s.main} as Theorem \ref{t.beck+}). \begin{corollary}\label{c.3} For any point distribution $Z=\{z_1,\dots, z_N \} \subset \mathbb{S}^{d}$ \begin{equation}\label{e.c3} D_{L^2, \textup{cap}}^2 (Z) \gtrsim_d N^{-2-\frac1{d}} \cdot \sum_{i,j=1}^N \frac {\log \,( 2 + N^{1/d} \|z_i-z_j\|)}{(1+ N^{1/d}\|z_i-z_j\|)^{d+1}}. \end{equation} \end{corollary} As before, by considering only the diagonal terms one recovers Beck's result \eqref{e.beck}, and the bound \eqref{e.c3} provides more information: in particular, if the order of magnitude of the energy on the right-hand side is significantly greater than $N$, then the spherical cap discrepancy of $Z$ must exceed Beck's bound \eqref{e.beck} by a correspondingly large factor. 
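To illustrate the extra strength of \eqref{e.c3} (a toy example, not from the original text): suppose the $N$ points form $N/k$ clusters of $k$ nearly coincident points each. Each of the $\sim Nk$ pairs lying in a common cluster then contributes $\approx \log 2$ to the sum, so

```latex
\sum_{i,j=1}^N \frac {\log ( 2 + N^{1/d} \|z_i-z_j\|)}{(1+ N^{1/d}\|z_i-z_j\|)^{d+1}}
  \;\gtrsim\; N k,
\qquad\text{hence}\qquad
D_{L^2, \textup{cap}} (Z) \;\gtrsim_d\; \sqrt{k}\; N^{-\frac12-\frac{1}{2d}},
```

that is, heavy clustering forces the discrepancy above Beck's bound \eqref{e.beck} by a factor of $\sqrt{k}$.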
The original Stolarsky principle \eqref{e.stol} then leads to the following corollary concerning the sum of Euclidean distances between $N$ points on the sphere: \begin{corollary} For any point distribution $Z=\{z_1,\dots, z_N \} \subset \mathbb{S}^{d}$ \begin{equation}\label{e.c4} \mathcal J_d - \,\, \frac{1}{N^2} \sum_{i,j = 1}^N \| z_i - z_j \| \gtrsim_d N^{-2 - \frac1{d} } \cdot \sum_{i,j=1}^N \frac {\log \, ( 2 + N^{\frac 1d} \|z_i-z_j\|)}{(1+ N^{\frac 1d}\|z_i-z_j\|)^{d+1}}, \end{equation} where $$ \mathcal J_d = \int\limits_{\mathbb S^{d}} \int\limits_{\mathbb S^{d}} \| x- y \| \, d\sigma (x)\, d\sigma (y)\,\, = \frac{ 2^d \big[ \Gamma \big(\frac{d+1}{2}\big) \big]^2 }{\sqrt{\pi} \Gamma \big( d+ \frac12 \big) }.$$ \end{corollary} \textbf{Acknowledgment.} Parts of this work were started at the Workshop ``Discrepancy Theory and Quasi-Monte Carlo methods" held at the Erwin Schr\"odinger Institute, September 25 -- 29, 2017. The authors gratefully acknowledge its hospitality. Bilyk's work is supported by NSF grant DMS 1665007.
\section*{Introduction} Let $k$ be a field and $Q$ a quiver of Dynkin type $\Delta$. Let $D^b(kQ)$ denote the bounded derived category of finite dimensional $kQ$-modules. Let $\tau$ denote the Auslander-Reiten translate of $D^b(kQ)$ and let $S$ denote the shift. For $m\in \mathbb{N}$ the \emph{$m$-cluster category} associated to $kQ$ is the orbit category $$\mathcal{C}^m_{\Delta}:=\frac{D^b(kQ)}{S^m\tau^{-1}}.$$ This category was introduced in~\cite{keller} and has been studied by the authors~\cite{baurmarsh}, Thomas~\cite{thomas}, Wr{\aa}lsen~\cite{wralsen} and Zhu~\cite{zhu}. It is known that $\mathcal{C}^m_{\Delta}$ is triangulated~\cite{keller}, Krull-Schmidt and has almost split triangles~\cite[1.2,1.3]{bmrrt}. The $m$-cluster category is a generalisation of the cluster category. The cluster category was introduced in~\cite{ccs1} (for type $A$) and~\cite{bmrrt} (general hereditary case), and can be regarded as the case $m=1$ of the $m$-cluster category. Keller has shown that the $m$-cluster category is Calabi-Yau of dimension $m+1$~\cite{keller}. We remark that such Calabi-Yau categories have also been studied in~\cite{kellerreiten}. One of the aims of the definition of the cluster category was to model the Fomin-Zelevinsky cluster algebra~\cite{fominzelevinsky} representation-theoretically. We show that $\mathcal{C}^m_{D_n}$ can be realised geometrically in terms of a category of arcs in a punctured polygon with $nm-m+1$ vertices. This generalises a result of Schiffler~\cite{schiffler}, who considered the case $m=1$. We remark that the punctured polygon model for the cluster algebra of type $D_n$ appears in work of Fomin, Shapiro and Thurston~\cite{fst} as part of a more general set-up, building on~\cite{fg1,fg2,gsv1,gsv2} which consider links between cluster algebras and Teichm\"{u}ller theory. Also, such a geometric realisation of a cluster category first appeared (with a construction for type $A_n$ in the case $m=1$) in~\cite{ccs1}. 
Our approach is based on the idea of the \emph{$m$th power} of a translation quiver introduced in~\cite{baurmarsh}. We show that, with a slight modification of the definition for $m=2$, the Auslander-Reiten quiver of $\mathcal{C}^m_{D_n}$ can be realised as a connected component of the $m$th power of the Auslander-Reiten quiver of $\mathcal{C}^1_{D_{nm-m+1}}$. In Section~\ref{se:toralexample} we show that, if this modification is not made, the square of the Auslander-Reiten quiver of $\mathcal{C}^1_{D_4}$ has a connected component whose underlying topological space is a torus. \section{Notation and Definitions} \label{notation} Let $Q$ be a quiver of underlying Dynkin type $D_n$. The vertices of $Q$ are labelled $0,\overline{0},1,\dots,n-2$ and the arrows are $i\to i-1$ ($i=1,\dots,n-2$) together with $1\to \overline{0}$; see Figure~\ref{dnquiver}. \begin{figure}[ht] \begin{center} \includegraphics{D_N.eps} \caption{Quiver of type $D_n$}\label{fig1} \label{dnquiver} \end{center} \end{figure} \vspace{.5cm} We now recall the Auslander-Reiten quiver of the cluster category $\mathcal{C}_{D_n}$ (see~\cite[\S1]{bmrrt},~\cite{happel}). It is a stable translation quiver built from $n$ copies of $Q$. We denote it by $\Gamma(D_n,1)$. The vertices of $\Gamma(D_n,1)$ are $V(D_n,1):=\mathbb{Z}_n\times \{0,\overline{0},1,\dots,n-2\}$. The arrows are \[ \left. \begin{array}{l} (i,j)\to (i,k) \\ (i,k)\to (i+1,j) \end{array} \right\} \quad \text{whenever there is an arrow $j\to k$ in $D_{n}$.} \] Finally, the translation $\tau$ is given by \[ \tau(i,j)=\left\{\begin{array}{ll} (i-1,\overline{j}), & \text{if $i=0,j\in\{0,\overline{0}\}$ and $n$ is odd,}\\ (i-1,j), & \text{otherwise.} \end{array}\right. \] We use the convention that $\overline{\overline{0}}=0$. Note that the switch described here only occurs for odd $n$. As an example, we draw the quivers $\Gamma(D_n,1)$ for $n=3$ and $n=4$; see Figures~\ref{fi:quiverd3} and~\ref{fi:quiverd4}. 
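To illustrate the switch in $\tau$ for odd $n$ (a small worked instance added for convenience): in $\Gamma(D_3,1)$, where the first coordinate lies in $\mathbb{Z}_3$,

```latex
\tau(0,0)=(2,\overline{0}), \qquad \tau(0,\overline{0})=(2,0), \qquad
\tau(i,j)=(i-1,j)\ \ \text{for all other vertices } (i,j),
```

which can be compared with Figure~\ref{fi:quiverd3}.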
The translation $\tau$ is indicated by dotted lines (it is directed to the left). \begin{figure} \begin{center} \includegraphics{D3.eps} \caption{The quiver $\Gamma(D_3,1)$} \label{fi:quiverd3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics{D4.eps} \caption{The quiver $\Gamma(D_4,1)$} \label{fi:quiverd4} \end{center} \end{figure} We recall the notion of the $m$-th power of a translation quiver (cf.~\cite{baurmarsh}). If $\Gamma$ is a translation quiver with translation $\tau$, then the quiver $\Gamma^m$, the {\em $m$-th power of $\Gamma$}, is the quiver whose objects are the same as the objects of $\Gamma$ and whose arrows are the sectional paths (in $\Gamma$) of length $m$. A path $x=x_0\to x_1\to\dots\to x_{m-1}\to x_m=y$ is said to be {\em sectional} if $\tau x_{i+1}\neq x_{i-1}$ for $i=1,\dots,m-1$ (in the cases where $\tau x_{i+1}$ is defined), cf.~\cite{ringel}. One of our goals is to realise the Auslander-Reiten quiver for the $m$-cluster category of type $D_n$ in terms of the $m$th power of the Auslander-Reiten quiver of a cluster category of type $D_{nm-m+1}$. To be able to do this, we introduce a new class of sectional paths. \begin{definition} \rm Let $\Gamma=\Gamma(D_n,1)$ be the translation quiver defined above, with vertices $V(D_n,1)=\mathbb{Z}_n\times\{0,\overline{0},1,\dots,n-2\}$. We say that a sectional path $x=x_0\to x_1\to\dots\to x_{m-1}\to x_m=y$ (where $x_i\in V(D_n,1)$) is {\em restricted} if there is no $i$ such that $x_{i+1}=(r,0)$ for some $r$ while $x_{i-1}=(r-1,\overline{0})$ or such that $x_{i+1}=(r,\overline{0})$ for some $r$ while $x_{i-1}=(r-1,0)$. \end{definition} \begin{remark} \rm \label{re:restricted} Note that unless $m=2$, the restricted sectional paths of length $m$ in $\Gamma$ are exactly the sectional paths of length $m$. We can see this as follows. Firstly, it is clear that any sectional path of length $1$ is necessarily restricted. Suppose that $m>2$. 
Let $x=x_0\to x_1\to\dots\to x_{m-1}\to x_m=y$ be sectional, and suppose that there is an $i\in\{1,\dots,m-1\}$ such that $x_{i+1}=(r,0)$ and $x_{i-1}=(r-1,\overline{0})$. Then $x_i=(r,1)$. In case $i=1$ we have $x_{i+2}=(r+1,1)$, and it follows that the original path is not sectional, a contradiction. Similarly, if $i>1$, we have $x_{i-2}=(r-1,1)$, and again the original path is not sectional. Hence the only sectional paths that are not restricted are the paths of the form $(i,0)\to (i,1) \to (i+1,\overline{0})$ and $(i,\overline{0})\to (i,1)\to (i+1,0)$ ($i\in\mathbb{Z}_n$). \end{remark} With this new notion we are now ready to introduce a restricted version of the translation quiver $((\Gamma(D_n,1))^m,\tau^m)$. We define a translation quiver \linebreak $(\mu_m(\Gamma(D_n,1)),\tau^m)$ as follows. The vertices of $(\mu_m(\Gamma(D_n,1)),\tau^m)$ are the same as the vertices of $(\Gamma(D_n,1),\tau^m)$, i.e. $\mathbb{Z}_n\times\{0,\overline{0},1,\dots,n-2\}$, the arrows are the restricted sectional paths of length $m$ in $(\Gamma(D_n,1),\tau^m)$ and the translation is $\tau^m$. \begin{lemma} For any $m$, the pair $(\mu_m(\Gamma(D_n,1)),\tau^m)$ is a stable translation quiver. \end{lemma} \begin{proof} We firstly note that the unrestricted version, $(\Gamma(D_n,1)^m,\tau^m)$, is a stable translation quiver by~\cite[6.2]{baurmarsh}. By Remark~\ref{re:restricted}, the quiver $(\mu_m(\Gamma(D_n,1)),\tau^m)$ is the same as $(\Gamma(D_n,1)^m,\tau^m)$ if $m\not=2$, so the result follows in this case. Now assume that $m=2$ and fix a vertex $x$ in $\Gamma(D_n,1)$. To show that \linebreak $(\mu_m(\Gamma(D_n,1)),\tau^m)$ is a translation quiver, we need to show that there is a restricted sectional path of length $2$ from $y$ to $x$ if and only if there is a restricted sectional path of length $2$ from $\tau^2(x)$ to $y$. 
Since the restricted sectional paths in $\Gamma(D_n,1)$ of length $2$ starting or ending at $x$ are the same as the sectional paths provided $x$ is not of the form $(i,0)$ or $(i,\overline{0})$, we are reduced to this case. If $x=(i,0)$, the sectional paths of length $2$ ending in $x$ are $(i,2)\rightarrow (i,1)\rightarrow (i,0)$ and $(i-1,\overline{0})\rightarrow (i,1)\rightarrow (i,0)$. The sectional paths of length $2$ starting at $\tau^2(x)=(i-2,0)$ are $(i-2,0)\rightarrow (i-1,1)\rightarrow (i,2)$ and $(i-2,0)\rightarrow (i-1,1)\rightarrow (i-1,\overline{0})$. In each case, only the second path is not restricted, so we see that there is a restricted sectional path of length $2$ from $y$ to $x$ if and only if there is a restricted sectional path of length $2$ from $\tau^2(x)$ to $y$. The argument in case $x=(i,\overline{0})$ is similar. Hence $(\mu_m(\Gamma(D_n,1)),\tau^m)$ is a translation quiver. By construction, no vertex is projective and $\tau^m$ is defined on all vertices (since $\tau$ is). Therefore, $(\mu_m(\Gamma(D_n,1)),\tau^m)$ is stable. \end{proof} \section{The $m$-cluster category of type $D_n$ as a component of a restricted $m$th power} \label{se:mthpower} Let $n,m\in\mathbb{N}$, with $n\geq 3$. We recall that~\cite{happel} the Auslander-Reiten quiver of the derived category of a quiver of Dynkin type $D_n$ has vertices $\mathbb{Z}\times \{0,\overline{0},1,2,\ldots ,n-2\}$ and arrows given by $(i,j)\rightarrow (i,j-1)$ and $(i,j-1)\rightarrow (i+1,j)$ for $1\leq j\leq n-2$, and $(i,1)\rightarrow (i,\overline{0})$ and $(i,\overline{0})\rightarrow (i+1,1)$, where $i\in\mathbb{Z}$ is arbitrary. We also have that $$S^m(i,0)=\left\{ \begin{array}{cc} (i+m(n-1),0), & \text{$nm$ even,} \\ (i+m(n-1),\overline{0}), & \text{$nm$ odd.} \end{array}\right. $$ while $S^m(i,j)=(i+m(n-1),j)$, otherwise. 
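As a consistency check on the orbit construction (added for convenience): on the derived category $\tau^{-1}(i,j)=(i+1,j)$, so the identification functor $S^m\tau^{-1}$ defining $\mathcal{C}^m_{D_n}$ shifts the first coordinate by

```latex
i \;\longmapsto\; i + m(n-1) + 1 \;=\; i + (nm-m+1),
```

which is exactly why the vertex set of $\Gamma(D_n,m)$ defined next has first coordinate in $\mathbb{Z}_{nm-m+1}$.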
Let $\Gamma(D_n,m)$ be the quiver with vertices $$V(D_n,m)=\{(i,j):i\in\mathbb{Z}_{nm-m+1},\ j\in \{0,\overline{0},1,2,\ldots, n-2\}\}.$$ The arrows are given by $(i,j)\rightarrow (i,j-1)$ and $(i,j-1)\rightarrow (i+1,j)$ for $1\leq j\leq n-2$, and $(i,1)\rightarrow (i,\overline{0})$ and $(i,\overline{0})\rightarrow (i+1,1)$, where $i\in\mathbb{Z}_{nm-m+1}$ is arbitrary and the addition is modulo $nm-m+1$. We also define $$\widetilde{\tau}(i,j)= \left\{ \begin{array}{cc} (i-1,\overline{j}) & \mbox{\ if\ } i=0,\ j\in\{0,\overline{0}\}\mbox{\ and\ }nm\mbox{\ is\ odd,} \\ (i-1,j) & \mbox{otherwise.} \end{array}\right.$$ It follows from the construction of $\mathcal{C}^m_{D_n}$ and the above description of the derived category that $(\Gamma(D_n,m),\widetilde{\tau})$ is the Auslander-Reiten quiver of $\mathcal{C}^m_{D_n}$ (and, in particular, is a stable translation quiver). The vertices of the Auslander-Reiten quiver $\Gamma(D_{nm-m+1},1)$ of $\mathcal{C}^1_{D_{nm-m+1}}$ are $$V(D_{nm-m+1},1)= \mathbb{Z}_{nm-m+1}\times \{0,\overline{0},1,2,\ldots ,nm-m-1\}.$$ The arrows are given by $(i,j)\rightarrow (i,j-1)$ and $(i,j-1)\rightarrow (i+1,j)$ for $1\leq j\leq nm-m-1$, and $(i,1)\rightarrow (i,\overline{0})$ and $(i,\overline{0})\rightarrow (i+1,1)$, where $i$ is arbitrary and the addition is modulo $nm-m+1$. We also have $$\tau(i,j)=\left\{ \begin{array}{cc} (i-1,\overline{j}) & \mbox{\ if\ } i=0,\ j\in\{0,\overline{0}\}\mbox{\ and\ }nm-m+1\mbox{\ is\ odd,} \\ (i-1,j) & \mbox{otherwise.} \end{array}\right.$$ \begin{definition} We define a map $\sigma'$ from $V(D_n,m)$ to $V(D_{nm-m+1},1)$ as follows. We set $\sigma'(i,j)=(im,jm)$ whenever $j\not\in\{0,\overline{0}\}$ or $j\in\{0,\overline{0}\}$, $m$ is odd and $n$ is even. 
Otherwise, we have $j=0$ or $\overline{0}$ and we set $$\sigma'(i,j)=\left\{\begin{array}{cc} (im,jm), & \left\lfloor\frac{im}{nm-m+1}\right\rfloor \mbox{\ even,} \\ (im,\overline{jm}), & \left\lfloor\frac{im}{nm-m+1}\right\rfloor \mbox{\ odd.} \end{array}\right.$$ Here we restrict $i$ to lie in the set $\{0,1,2,\ldots, nm-m\}$. \end{definition} We use the usual convention that, for a real number $x$, $\lfloor x\rfloor$ denotes the largest integer $k$ such that $k\leq x$. Let $V:=\{(r,s)\in V(D_{nm-m+1},1)\,:\,m|s\}$. Here we adopt the convention that $m|\overline{0}$. \begin{lemma} With $\sigma'$ defined as above, we have that $\mbox{im}(\sigma')=V$. \end{lemma} \begin{proof} First we note that it is clear from the definition of $\sigma'$ that $\mbox{im}(\sigma')\subseteq V$. Let $(r,s)\in V(D_{nm-m+1},1)$ and suppose that $m|s$. Suppose first that $s\not=0,\overline{0}$. Write $s=km$ for $k\in\mathbb{Z}$. We have $r=r(nm-m+1-(n-1)m)=-r(n-1)m$ in $\mathbb{Z}_{nm-m+1}$, and it follows that $\sigma'(-r(n-1),k)=(-r(n-1)m,km)=(r,s)$ so $(r,s)\in\mbox{im}(\sigma')$. If $s=0$ or $\overline{0}$ then $\{\sigma'(-r(n-1),0),\sigma'(-r(n-1),\overline{0})\}= \{(r,s),(r,\overline{s})\}$ and we are done. \end{proof} Let $\Gamma$ denote the full subquiver of $\mu_m(\Gamma(D_{nm-m+1},1))$ induced by $V$, and let $\sigma$ be the (surjective) map obtained by restricting the codomain of $\sigma'$ to $V$. We will show that $\sigma$ is an isomorphism from $\Gamma(D_n,m)$ to $\Gamma$ and that $\Gamma$ is a connected component of $\mu_m(\Gamma(D_{nm-m+1},1))$. \begin{lemma} \label{closed} Let $x:=(r,s)\in V$ and suppose that $$x=x_0\rightarrow x_1\rightarrow x_2\rightarrow \cdots \rightarrow x_m=y$$ is a restricted sectional path of length $m$ in $\Gamma(D_{nm-m+1},1)$. Then $y\in V$. 
\end{lemma} \begin{proof} If $s\not=0,\overline{0}$ we can argue as in~\cite[7.1]{baurmarsh} to see that $y$ is either $(r,s-m)$, $(r+m,s+m)$ or $(r,\overline{s-m})$, where the last possibility only arises if $s=m$. If $s=0$ or $\overline{0}$ a similar argument shows that $y$ must be $(r+m,m)$. If $m=2$ we have the sectional paths $(r,0)\rightarrow (r+1,1)\rightarrow (r+2,\overline{0})$ and $(r,\overline{0})\rightarrow (r+1,1)\rightarrow (r+2,0)$, but, by definition, these are not restricted sectional paths. \end{proof} \begin{lemma} The map $\sigma:\Gamma(D_n,m)\rightarrow \Gamma$ is an isomorphism of quivers. \end{lemma} \begin{proof} Since $|V|=|V(D_n,m)|$ and $\sigma$ is surjective, it follows that $\sigma$ is bijective. The arrows in $\Gamma(D_n,m)$ are of the form $(i,j)\rightarrow (i,j-1)$, $(i,j)\rightarrow (i+1,j+1)$, $(i,1)\rightarrow (i,\overline{0})$ or $(i,\overline{0})\rightarrow (i+1,1)$. The arrows in $V\subseteq \mu_m(\Gamma(D_{nm-m+1},1))$ are of the form $(r,s)\rightarrow (r,s-m)$, $(r,s)\rightarrow (r+m,s+m)$, $(r,m)\rightarrow (r,\overline{0})$ or $(r,\overline{0})\rightarrow (r+m,m)$ (see the proof of Lemma~\ref{closed}). It follows that $\sigma$ is an isomorphism of quivers. \end{proof} \begin{prop} The map $\sigma:\Gamma(D_n,m)\rightarrow \Gamma$ is an isomorphism of translation quivers. Its image, $\Gamma$, is a connected component of $\mu_m(\Gamma(D_{nm-m+1},1))$. \end{prop} \begin{proof} By Lemma~\ref{closed}, both statements of the theorem will follow if we can show that, for all $(i,j)\in V(D_n,m)$, $\sigma(\widetilde{\tau}(i,j))= \tau^m(\sigma(i,j))$, since this will also imply that the image of $\sigma$ is closed under $\tau^m$. We firstly note that if $j\not=0,\overline{0}$ then $\widetilde{\tau}(i,j)=(i-1,j)$ while $\tau^m(im,jm)=((i-1)m,jm)$. Since $\sigma(i,j)=(im,jm)$ and $\sigma(i-1,j)=((i-1)m,jm)$, the result holds. So we are left with the case where $j=0$ or $\overline{0}$. 
We break this down into cases, considering first the case where $j=0$. \noindent {\bf Case (a)}: $m$ odd and $n$ even. \\ In this case we have that $nm-m+1$ is even, and $nm$ is even, so for any $(i,0)\in V(D_n,m)$, $\tau^m(im,0)=((i-1)m,0)$ while $\widetilde{\tau}(i,0)=(i-1,0)$. Since $\sigma(i-1,0)=((i-1)m,0)$ and $\sigma(i,0)=(im,0)$, we are done. \noindent {\bf Case (b)}: $m$ is even. \\ In this case we have that $nm-m+1$ is odd, so for $l=0$ or $\overline{0}$, we have: $$\tau^m(im,l)=\left\{ \begin{array}{cc} ((i-1)m,\overline{l}), & im\mod nm-m+1 \in\{0,1,\ldots ,m-1\}, \\ ((i-1)m,l), & \mbox{otherwise.} \end{array}\right. $$ Since $m$ is even, $nm$ is even, so $\widetilde{\tau}(i,0)=(i-1,0)$ for all $i$. (i) Suppose first that $im\not\in \{0,1,\ldots ,m-1\}$. Then $\left\lfloor\frac{im}{nm-m+1}\right\rfloor=\left\lfloor\frac{(i-1)m}{nm-m+1}\right\rfloor$. It follows that either $\sigma(i-1,0)=((i-1)m,0)$ and $\sigma(i,0)=(im,0)$ or $\sigma(i-1,0)=((i-1)m,\overline{0})$ and $\sigma(i,0)=(im,\overline{0})$. In either case we see that $\sigma(\widetilde{\tau}(i,0))=\tau^m(\sigma(i,0))$. (ii) Suppose next that $im\in\{1,\ldots ,m-1\}$. Then $\left\lfloor\frac{im}{nm-m+1}\right\rfloor-1=\left\lfloor\frac{(i-1)m}{nm-m+1}\right\rfloor$. It follows that either $\sigma(i-1,0)=((i-1)m,\overline{0})$ and $\sigma(i,0)=(im,0)$ or $\sigma(i-1,0)=((i-1)m,0)$ and $\sigma(i,0)=(im,\overline{0})$. In either case we see that $\sigma(\widetilde{\tau}(i,0))=\tau^m(\sigma(i,0))$. (iii) Finally, suppose that $i=0$. Then $i-1\equiv (n-1)m\mod nm-m+1$, and $\left\lfloor\frac{im}{nm-m+1}\right\rfloor=0$ is even while \begin{eqnarray*} \left\lfloor\frac{(i-1)m}{nm-m+1}\right\rfloor & = & \left\lfloor\frac{(n-1)m^2}{nm-m+1}\right\rfloor \\ & = & \left\lfloor\frac{(nm-m+1)m}{nm-m+1}-\frac{m}{nm-m+1}\right\rfloor \\ & = & \left\lfloor m-\frac{m}{nm-m+1}\right\rfloor=m-1, \end{eqnarray*} is odd (using here the fact that $m<(n-1)m+1=nm-m+1$).
It follows that $\sigma(i-1,0)=((i-1)m,\overline{0})$ and $\sigma(i,0)=(im,0)$ and thus that $\sigma(\widetilde{\tau}(i,0))=\tau^m(\sigma(i,0))$. \noindent {\bf Case (c)}: $n,m$ both odd. \\ In this case we have that $nm-m+1$ is odd, so for $l=0$ or $\overline{0}$, we have: $$\tau^m(im,l)=\left\{ \begin{array}{cc} ((i-1)m,\overline{l}) & im\in\{0,1,\ldots ,m-1\}, \\ ((i-1)m,l) & \mbox{otherwise.} \end{array}\right. $$ Since $n$ and $m$ are both odd, $nm$ is odd, so $$\widetilde{\tau}(i,0)= \left\{ \begin{array}{cc} ((i-1),\overline{0}) & i=0, \\ (i-1,0) & \mbox{otherwise.} \end{array}\right. $$ (i) Suppose first that $im\not\in \{0,1,\ldots ,m-1\}$. Then $\left\lfloor\frac{im}{nm-m+1}\right\rfloor=\left\lfloor\frac{(i-1)m}{nm-m+1}\right\rfloor$. It follows that either $\sigma(i-1,0)=((i-1)m,0)$ and $\sigma(i,0)=(im,0)$ or $\sigma(i-1,0)=((i-1)m,\overline{0})$ and $\sigma(i,0)=(im,\overline{0})$. In either case we see that $\sigma(\widetilde{\tau}(i,0))=\tau^m(\sigma(i,0))$. \noindent (ii) Suppose next that $im\in\{1,\ldots ,m-1\}$. Then $\left\lfloor\frac{im}{nm-m+1}\right\rfloor-1=\left\lfloor\frac{(i-1)m}{nm-m+1}\right\rfloor$. It follows that either $\sigma(i-1,0)=((i-1)m,\overline{0})$ and $\sigma(i,0)=(im,0)$ or $\sigma(i-1,0)=((i-1)m,0)$ and $\sigma(i,0)=(im,\overline{0})$. In either case we see that $\sigma(\widetilde{\tau}(i,0))=\tau^m(\sigma(i,0))$. \noindent (iii) Finally, suppose that $i=0$. Then $i-1\equiv (n-1)m\mod nm-m+1$, and, as in Case (b)(iii), $\left\lfloor\frac{im}{nm-m+1}\right\rfloor=0$ is even and $\left\lfloor\frac{(i-1)m}{nm-m+1}\right\rfloor=m-1$, which means in this case that it is also even. It follows that $\sigma(i-1,\overline{0})=((i-1)m,\overline{0})$ and $\sigma(i,0)=(im,0)$ and thus that $\sigma(\widetilde{\tau}(i,0))=\tau^m(\sigma(i,0))$.
\end{proof} We therefore have: \begin{theorem}\label{thm:D-component} The translation quiver $\Gamma(D_n,m)$ can be realised as a connected component of the restricted $m$th power of the translation quiver $\Gamma(D_{nm-m+1},1)$. \end{theorem} Since $\mathcal{C}^m_{D_n}$ is equivalent to the additive hull of the mesh category of $\Gamma(D_n,m)$, we obtain the following corollary. \begin{cor} The $m$-cluster category of type $D_n$ can be realised as the additive hull of the mesh category of a connected component of the restricted $m$th power of the Auslander-Reiten quiver of the cluster category of type $D_{nm-m+1}$. For $m>2$ it is enough to take the usual $m$th power. \end{cor} \begin{example} \rm We give an example of the theorem in the case where $n=4$ and $m=2$. The theorem tells us that $\Gamma(D_4,2)$ is isomorphic to a connected component of $\mu_2(\Gamma(D_7,1))$. In Figure~\ref{fi:d7usual} we show the translation quiver $\Gamma(D_7,1)$ with the vertices of $V=\mbox{im}(\sigma')$ shown in circles. In Figure~\ref{fi:d4ind7} we isolate the connected component $\Gamma$ of $\mu_2(\Gamma(D_7,1))$ induced by $V$, and in Figure~\ref{fi:d4usual} we indicate the translation quiver $\Gamma(D_4,2)$ with the usual labelling of its vertices. \end{example} \begin{figure}[ht] \begin{center} \includegraphics{D7.eps} \caption{The translation quiver $\Gamma(D_7,1)$} \label{fi:d7usual} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics{gammaD71component.eps} \caption{The connected component $\Gamma$ of the translation quiver $\Gamma(D_7,1)$} \label{fi:d4ind7} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics{gammaD42.eps} \caption{The translation quiver $\Gamma(D_4,2)$} \label{fi:d4usual} \end{center} \end{figure} \section{Geometric realisation} In this section, we give a geometric realisation of the $m$-cluster category of type $D_n$. To do so, we use certain $m$-arcs in a punctured $nm-m+1$-gon. 
Thus we are generalising the notion of tagged edges of Schiffler~\cite{schiffler} for the cluster category of type $D_n$ and the notion of $m$-diagonals of our work on $m$-cluster categories of type $A_n$~\cite{baurmarsh}. Let $P$ be a punctured $N$-gon in the plane (later we shall specialise to the case where $N=nm-m+1$). We label the vertices of $P$ clockwise. For $i\neq j\in\{1,2,\dots,N\}$, we denote by $B_{ij}$ the boundary path $i,i+1,\dots,j-1,j$, going clockwise around the boundary (taking the vertices $\mod N$). If $i=j$, we let $B_{ii}$ be the whole boundary path $i,i+1,\dots,i-1,i$ and $B_{ii}^{\bullet}$ denote the trivial path at $i$ consisting of the vertex $i$. The {\em length} $|B_{ij}|$ of the boundary path $B_{ij}$ is the number of vertices it runs through. Here, we count both the starting and end point unless $B_{ij}=B_{ii}^{\bullet}$. In particular, $|B_{ii}|=N+1$ and $|B_{ii}^{\bullet}|=1$. As an example of a boundary path of length $4$, we have indicated $B_{62}$ inside a punctured $7$-gon in Figure~\ref{fig:7gon}. \begin{figure}[ht] \begin{center} \includegraphics{7gon.eps} \caption{Punctured seven-gon with boundary path $B_{62}$ and arcs $D_{62}$ and $D_{66}^-$}\label{fig:7gon} \end{center} \end{figure} For $i\neq j$, and $j$ not the clockwise neighbour of $i$, an {\em arc} $D_{ij}$ is a line from $i$ to $j$ that is homotopic to the boundary path $B_{ij}$. If $j$ is the clockwise neighbour of $i$, there is no arc clockwise from $i$ to $j$ other than the boundary path $B_{ij}$. If $i=j$, we always tag the arc by $+$ or $-$, as in Schiffler's work~\cite{schiffler}. Such arcs are denoted by $D_{ii}^+$ and $D_{ii}^-$. We will occasionally write $D_{ij}^{\pm}$ to denote an arbitrary arc and call it a {\em tagged arc}. In that case, if $i\neq j$, then $D_{ij}^{\pm}$ will only stand for the arc $D_{ij}$. As an example, the arcs $D_{62}$ and $D_{66}^-$ of a punctured $7$-gon are pictured in Figure~\ref{fig:7gon}. 
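Since the vertices are labelled clockwise modulo $N$, the boundary-path lengths can be computed directly: for $i\neq j$, $|B_{ij}|=((j-i)\bmod N)+1$, so that $|B_{ij}|+|B_{ji}|=N+2$ (the shared endpoints $i$ and $j$ are counted by both paths). The following small Python sketch (an illustrative aside; the function name is ours) encodes this convention and checks it against the example $B_{62}$ in the punctured $7$-gon of Figure~\ref{fig:7gon}:

```python
def boundary_length(i, j, N):
    """|B_ij|: number of vertices on the clockwise boundary path i, i+1, ..., j (i != j)."""
    return ((j - i) % N) + 1

N = 7
assert boundary_length(6, 2, N) == 4      # B_62 runs through 6, 7, 1, 2
assert boundary_length(2, 6, N) == 5      # B_26 runs through 2, 3, 4, 5, 6
# Complementary clockwise paths between distinct vertices count both endpoints twice:
assert all(boundary_length(i, j, N) + boundary_length(j, i, N) == N + 2
           for i in range(1, N + 1) for j in range(1, N + 1) if i != j)
print("boundary-path length checks passed")
```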
Addition of subscripts for the $D_{ij}$ will always be modulo $N$. In what follows, we will use a slightly generalised version of a polygon. We will allow arcs $D_{ij}^{\pm}$ and sides $B_{i,i+1}$ of the polygon $P$ as sides of a polygon. We will say that such a (generalised) polygon is {\em degenerate} if it has more sides than vertices. Note that such polygons may or may not contain the puncture. In the remainder, we will in particular be interested in the following types of generalised polygons and generalised degenerate polygons obtained from the regular $N$-gon $P$. {\bf Type (i):} A combination of an arc $D_{ij}$ with the boundary path $B_{ij}$, or of an arc $D_{ij}$ with the boundary path $B_{ji}$, $i\neq j$, where in the former case, $2<|B_{ij}|\le N$ and in the latter case, $1<|B_{ji}|<N$. Such a polygon has $|B_{ij}|$ vertices (respectively, $|B_{ji}|$ vertices). {\bf Type (ii):} A combination of two arcs $D_{ij}$, $D_{ik}$ with the boundary path $B_{jk}$, or of $D_{ik}$, $D_{jk}$ with $B_{ij}$, where $i$, $j$, $k$ are all distinct and lie in clockwise order on $P$. Furthermore, in the former case, $1<|B_{jk}|<N-1$ and in the latter case, $1<|B_{ij}|<N-1$. Such a polygon has $|B_{jk}|+1$ vertices (respectively, $|B_{ij}|+1$ vertices). {\bf Type (iii):} A combination of an arc $D_{ii}^{\pm}$ with the boundary path $B_{ii}$, or with $B_{ii}^{\bullet}$. In the first case, the polygon has $N+1$ sides and $N$ vertices. In the latter case, the polygon has one side and one vertex. {\bf Type (iv):} A combination of an arc $D_{ii}^{\pm}$ with an arc $D_{ij}$ and the boundary path $B_{ji}$, or a combination of $D_{ii}^{\pm}$ with a boundary path $B_{ij}$ and the arc $D_{ji}$ (where we always have $i\neq j$). In the former case, $1<|B_{ji}|<N$ and in the latter case, $1<|B_{ij}|<N$. Such a polygon has $|B_{ji}|+1$ sides and $|B_{ji}|$ vertices (respectively, $|B_{ij}|+1$ sides and $|B_{ij}|$ vertices).
Note that we can view type (iii) as the limit $j\mapsto i$ of type (i) and type (iv) as the limit $k\mapsto i$ or $j\mapsto k$ of type (ii). We show each of these four types in Figure~\ref{fig:polygons}. \begin{figure}[ht] \begin{center} \includegraphics{polygons.eps} \caption{Generalised polygons (some degenerate)}\label{fig:polygons} \end{center} \end{figure} \begin{definition}\label{def:m-arc} Let $D_{ij}^{\pm}$ be an arc of $P$. If $i\neq j$, we say that $D_{ij}$ is an {\em $m$-arc} if the following hold: (i) $D_{ij}$ and $B_{ij}$ form a $km+2$-gon for some $k$, (ii) $D_{ij}$ and $B_{ji}$ form a $lm+1$-gon for some $l$. If $i=j$, $D_{ii}^{\pm}$ is a {\em tagged $m$-arc} if $D_{ii}^{\pm}$ and $B_{ii}$ form a degenerate $km+2$-gon for some $k$. \end{definition} The parts (i) and (ii) in the definition of an $m$-arc also apply to the case $i=j$ if we use the boundary paths $B_{ii}$ and $B_{ii}^{\bullet}$. Namely, $D_{ii}^{\pm}$ is a tagged $m$-arc if $D_{ii}^{\pm}$ and $B_{ii}$ form a degenerate $km+2$-gon for some $k$ and if $D_{ii}^{\pm}$ and $B_{ii}^{\bullet}$ form a degenerate $1$-gon. \begin{example} Let $P$ be a punctured $7$-gon and $m=2$. The arc $D_{62}$ is a $2$-arc (cf. Figure~\ref{fig:7gon}), since the arc $D_{62}$ together with the boundary path $B_{62}$ forms a $4$-gon (i.e. $k=1$) whereas $D_{62}$ and $B_{26}$ form a $5$-gon (i.e. $l=2$). Each of the arcs $D_{66}^{\pm}$ forms an $8$-gon together with $B_{66}$, and thus also is a $2$-arc. \end{example} We now define $m$-moves generalising the $m$-rotation for type $A_n$ of~\cite{baurmarsh} and the elementary moves for type $D_n$ of~\cite{schiffler}. \begin{definition} Let $P$ be a punctured $N$-gon. An {\em $m$-move} arises when there are two arcs in $P$ with a common end-point such that the two arcs and a part of the boundary bound an unpunctured $m+2$-gon, possibly degenerate. If the angle from the first arc to the second at the common end-point is negative (i.e. 
clockwise), then we say that there is an $m$-move taking the first arc to the second. More precisely, it is a move of one of the following forms: (i) $D_{ij}\to D_{ik}$ if $D_{ij}$, $B_{jk}$ and $D_{ik}$ form an $m+2$-gon, $|B_{jk}|=m+1$. (ii) $D_{ij}\to D_{kj}$ if $D_{ij}$, $B_{ik}$ and $D_{kj}$ form an $m+2$-gon, $|B_{ik}|=m+1$. (iii) $D_{ij}\to D_{ii}^{\pm}$ if $D_{ij}$, $D_{ii}^{\pm}$ and $B_{ji}$ form a degenerate $m+2$-gon, $|B_{ji}|=m+1$. (iv) $D_{ii}^{\pm}\to D_{ji}$ if $D_{ii}^{\pm}$, $D_{ji}$ and $B_{ij}$ form a degenerate $m+2$-gon; $|B_{ij}|=m+1$. \end{definition} In Figure~\ref{fig:m-moves}, we illustrate the four types of $m$-moves inside a heptagon, i.e. $n=4$, $m=2$. \begin{figure}[ht] \begin{center} \includegraphics{m-moves.eps} \caption{$2$-moves inside a heptagon}\label{fig:m-moves} \end{center} \end{figure} Our goal is to model the $m$-cluster category $\mathcal{C}^m(D_n)$ geometrically. To do so, we will from now on assume that $N=nm-m+1$, so the polygon $P$ has $nm-m+1$ vertices. \begin{remark} Let $P$ be a punctured polygon with $nm-m+1$ vertices and let $i\neq j$. Then the two conditions of Definition~\ref{def:m-arc} are equivalent, i.e. $D_{ij}$ and $B_{ij}$ form a $km+2$-gon for some $k$ if and only if $D_{ij}$ and $B_{ji}$ form an $lm+1$-gon for some $l$. \end{remark} We are now ready to define a translation quiver using the punctured polygon, $P$. Let $\Gamma_{\odot}=\Gamma_{\odot}(n,m)$ be the quiver whose vertices are the tagged $m$-arcs of $P$ and whose arrows are given by $m$-moves. Let $\tau_m$ be the map sending an arc $D_{ij}^{\pm}$ to $D_{i-m,j-m}^{\pm}$ if $i\neq j$ or $m$ is even. If $i=j$ and $m$ is odd, we set $\tau_m(D_{ii}^{\pm})=D_{i-m,i-m}^{\mp}$. In other words, if $i\neq j$ or $m$ is even, $\tau_m$ rotates a tagged arc anti-clockwise around the center. In case $i=j$ and $m$ is odd, $\tau_m$ rotates the tagged arc anti-clockwise around the center and changes its tag. 
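With $N=nm-m+1$, the conditions of Definition~\ref{def:m-arc} can be tested purely arithmetically from the boundary-path lengths, since $|B_{ij}|+|B_{ji}|=N+2$. As a sanity check, the short Python sketch below (illustrative only; the helper names are ours) enumerates the tagged $m$-arcs of the punctured $N$-gon and confirms that their number is $n(nm-m+1)$, which is the cardinality of the vertex set $V$ from the previous section:

```python
def is_m_arc(i, j, N, m):
    """D_ij (i != j) is an m-arc: |B_ij| = k*m + 2 and |B_ji| = l*m + 1 with k, l >= 1."""
    b_ij = ((j - i) % N) + 1
    b_ji = ((i - j) % N) + 1
    return b_ij >= m + 2 and (b_ij - 2) % m == 0 and (b_ji - 1) % m == 0

def count_tagged_m_arcs(n, m):
    N = n * m - m + 1
    plain = sum(1 for i in range(1, N + 1) for j in range(1, N + 1)
                if i != j and is_m_arc(i, j, N, m))
    # With N = nm - m + 1, every D_ii^+ and D_ii^- is a tagged m-arc, since the
    # degenerate polygon formed with B_ii has N + 1 = (n-1)m + 2 sides.
    return plain + 2 * N

assert is_m_arc(6, 2, 7, 2)                      # the 2-arc D_62 of the punctured 7-gon
assert count_tagged_m_arcs(4, 2) == 4 * 7        # n(nm - m + 1) for n = 4, m = 2
assert all(count_tagged_m_arcs(n, m) == n * (n * m - m + 1)
           for n in range(3, 8) for m in range(1, 5))
print("tagged m-arc counts match n(nm - m + 1)")
```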
Figure~\ref{fig:geom-4-2} shows the example $\Gamma_{\odot}(4,2)$ (we will see shortly, cf. Theorem~\ref{thm:odot-transl}, that $\Gamma_{\odot}(n,m)$ is a translation quiver). \begin{figure}[ht] \begin{center} \includegraphics{gammadot42.eps} \caption{The translation quiver $\Gamma_{\odot}(4,2)$} \label{fig:geom-4-2} \end{center} \end{figure} \begin{lemma}\label{lm:sect-paths} The sectional paths of length $m$ in $\Gamma(D_{nm-m+1},1)$ are of the form \noindent (i) $(i,j)\to (i,j-1)\to\dots\to (i,j-m)\ $ if $j-m> 0$, \noindent (ii) $(i,j)\to (i,j-1)\to\dots\to (i,0)\ $ and $ (i,j)\to \dots\to (i,\overline{0})\ $ if $j=m$, \noindent (iii) $(i,j)\to\dots\to(i+m,j+m)\ $ if $j>0$ and $(i+m,j+m)$ exists, \noindent (iv) $(i,0)\to\dots\to (i+m,m)$ and $(i,\overline{0})\to\dots\to (i+m,m)\ $ if $(i+m,m)$ exists. \end{lemma} \begin{proof} For (i) and (iii) we can argue as in~\cite[7.1]{baurmarsh}, using the vertices $\mathbb{Z}_{nm-m+1}\times\{0,\overline{0},1,\dots,N-2\}$ of $\Gamma(D_{nm-m+1},1)$ instead. The other cases follow with the same argument, using the assumption that the paths are restricted, i.e. excluding the sectional paths $(i,0)\to (i+1,1)\to (i+1,\overline{0})$ and $(i,\overline{0})\to (i+1,1)\to (i+1,0)$. \end{proof} \begin{theorem} \label{thm:odot-transl} The quiver $\Gamma_{\odot}$ is a translation quiver isomorphic to the Auslander-Reiten quiver of ${\mathcal C}^m_{D_n}$. \end{theorem} \begin{proof} It is enough to show that $\Gamma_{\odot}$ is isomorphic to the image $\Gamma$ of the map $\sigma$ from Section~\ref{se:mthpower} and that, under the isomorphism, the map $\tau_m$ on $\Gamma_{\odot}$ corresponds to $\tau^m$ on $\Gamma$. Recall that the vertices of $\Gamma$ are $V:=\{(r,s)\in V(D_{N}):\ m|s\}$ (using the convention that $m$ divides $\overline{0}$), recalling that $N=nm-m+1$. In other words, $V$ is the subset in $V(D_{N})=\mathbb{Z}_{N} \times\{0,\overline{0},1,\dots,N-2\}$ of the vertices whose second coordinate is divisible by $m$.
We will now define a map $\rho:\ V(\Gamma_{\odot})\to\ V$, where $V(\Gamma_{\odot})$ denotes the set of vertices of $\Gamma_{\odot}$. Note that $m$-arcs in $V(\Gamma_{\odot})$ going through two distinct vertices are always of the form $D_{i,i+1+km}$. On such $m$-arcs, $\rho$ is defined as follows: \[ \rho(D_{i,i+1+km})=(lm,(n-1-k)m)\in\mathbb{Z}_{N} \times\{0,\overline{0},1,\dots,N-2\} \] where $i\equiv lm+1$ modulo $N$ and $k=1,\dots,n-2$. \noindent On arcs $D_{ii}^{\pm}$, $\rho$ is defined as follows. \begin{eqnarray*} \rho(D_{ii}^{+}) & = & \left\{ \begin{array}{cc} (lm,0), & \text{if $i$ is odd}, \\ (lm,\overline{0}), & \text{if $i$ is even}.\end{array}\right. \\ \\ \rho(D_{ii}^{-}) & = & \left\{ \begin{array}{cc} (lm,\overline{0}), & \text{if $i$ is odd}, \\ (lm,0), & \text{if $i$ is even}.\end{array}\right. \\ \end{eqnarray*} (where $i\equiv lm+1$ modulo $N$). To see that $\rho$ is a bijection, we divide $V(\Gamma_{\odot})$ up into $n$ types of arcs. Let $V_1$ be the set of arcs of the form $D_{i,i+m+1}$ ($i=1,\dots,N$), i.e. the arcs homotopic to a boundary path of length $m+2$. Then $\rho$ sends each element of $V_1$ to a vertex of the top row of $V$, $D_{i,i+m+1}\mapsto(lm,(n-2)m)$ (where $lm\equiv i-1\mod N$). It is straightforward to check that $\rho$ induces a bijection from the set $V_1$ to the top row of $V$. More generally, for $k=1,\dots,n-2$, let $V_k$ be the set of arcs of the form $D_{i,i+km+1}$, i.e. the set of arcs homotopic to a boundary path of length $km+2$. Since $\rho(D_{i,i+km+1})=(lm,(n-1-k)m)$, $\rho$ sends the arcs in $V_k$ to the $k$th row (from the top) of $V$ ($k\le n-2$). Clearly, this is also a bijection. Furthermore, the arcs $D_{ii}^{\pm}$ are sent to the two last rows of $V$, also bijectively. Thus, we have that $\rho$ is a bijection from $V(\Gamma_{\odot})$ to $V$.
Next, we observe that the arrows given by the $m$-moves are the same as the arrows in $\Gamma$: for arcs in $V_k$ with $1\leq k<n-2$, an $m$-move sends $D_{i,i+1+km}$ to $D_{i,i+1+(k+1)m}$ or $D_{i,i+1+km}$ to $D_{i+m,i+1+km}$ whereas a restricted sectional path of length $m$ sends $(lm,(n-1-k)m)$ to $(lm,(n-2-k)m)$ (type (i) in Lemma~\ref{lm:sect-paths}) or to $((l+1)m,(n-k)m)$ (type (iii) in Lemma~\ref{lm:sect-paths}). For arcs in $V_{n-2}$, an $m$-move sends $D_{i,i+1+(n-2)m}$ to $D_{ii}^+$, to $D_{ii}^-$ or to $D_{i+m,i+1+(n-2)m}$ whereas a restricted sectional path of length $m$ sends $(lm,m)$ to $(lm,0)$, to $(lm,\overline{0})$ or to $((l+1)m,2m)$ (types (ii) and (iii) in Lemma~\ref{lm:sect-paths}). Finally, arcs $D_{ii}^{\pm}$ are sent to $D_{i+m,i}$ by $m$-moves, and restricted sectional paths of length $m$ send $(lm,0)$ to $((l+1)m,m)$ and $(lm,\overline{0})$ to $((l+1)m,m)$ (type (iv) in Lemma~\ref{lm:sect-paths}). Furthermore, the translation maps correspond: on $V_k$ (with $1\leq k\le n-2$) $\tau_m(D_{i,i+1+km})=D_{i-m,i+1+(k-1)m}$ (subscripts taken $\mod N$) and on the $n-2$ first rows from the top, $\tau^m(lm,(n-1-k)m)=((l-1)m,(n-1-k)m)$ (first entries taken $\mod N$). If $i>1$ then $\tau_1(D_{ii}^{\pm})=D_{i-1,i-1}^{\mp}$ while $\tau(i-1,0)=(i-2,0)$ and $\tau(i-1,\overline{0})=(i-2,\overline{0})$. If $i=1$ then $\tau_1(D_{11}^+)=D_{N,N}^-$ while $$\tau(0,0)=\left\{ \begin{array}{cc} (N-1,\overline{0}), & \text{if $N$ is odd}, \\ (N-1,0), & \text{if $N$ is even.} \\ \end{array}\right.$$ It follows that $\tau(\rho(D_{ii}^+))=\rho(\tau_1(D_{ii}^+))$ for all $i$. A similar argument applies to the tagged arcs $D_{ii}^-$. Since $\tau_m=\tau_1^m$, we see that $\tau^m(\rho(D_{ii}^{\pm}))= \rho(\tau_m(D_{ii}^{\pm}))$ for all $i$. We have seen that $\rho$ induces an isomorphism of quivers and $\tau^m\rho(D_{ij}^{\pm})=\rho(\tau_m(D_{ij}^{\pm}))$ for all arcs $D_{ij}^{\pm}$.
It follows that $\Gamma_{\odot}$ is a translation quiver and that $\rho$ is an isomorphism of translation quivers. \end{proof} \section{A toral translation quiver} \label{se:toralexample} In this section we give an example of a toral translation quiver arising from the cluster category $\mathcal{C}_{D_4}^1$ of type $D_4$. The Auslander-Reiten quiver of $\mathcal{C}_{D_4}^1$, $\Gamma(D_4,1)$, is shown in Figure~\ref{fi:quiverd4}. A connected component of its (unrestricted) square, $\Gamma(D_4,1)^2$ is shown in Figure~\ref{fi:toralquiver}. The underlying topological space $|\Gamma(D_4)^2|$ (in the sense of Gabriel and Riedtmann; see~\cite[p51]{ringel}) is a torus. \begin{figure}[ht] \begin{center} \includegraphics{torus.eps} \caption{A connected component of the translation quiver $\Gamma(D_4,1)^2$} \label{fi:toralquiver} \end{center} \end{figure} \section{The components of $\mu_m(\Gamma(D_n,1))$} We have seen in Theorem~\ref{thm:D-component} that the Auslander-Reiten quiver of the $m$-cluster category of type $D_n$ is a connected component of the restricted $m$-th power \linebreak $\mu_m(\Gamma(D_{nm-m+1},1))$. In this section, we describe the other components arising in the restricted $m$-th power of the translation quiver $(\Gamma(D_{nm-m+1},1),\tau)$. \begin{prop}\label{prop:A-components} The quiver $\mu_m(\Gamma(D_{nm-m+1},1))$ has $m-1$ connected components isomorphic to the Auslander-Reiten quiver of $D^b(A_{n-1})/\tau^{nm-m+1}$. \end{prop} \begin{proof} We consider the following subset of the vertices of the quiver \linebreak $\mu_m(\Gamma(D_{nm-m+1},1))$: \begin{eqnarray*} X_k & := & \{(i,j)\mid i\in\mathbb{Z}_{nm-m+1}, j\equiv k\mod m\}\\ & = & \mathbb{Z}_{nm-m+1}\times\{k,m+k,\dots,(n-2)m+k\}. \end{eqnarray*} Such a set $X_k$ is a union of rows in the quiver $\mu_m(\Gamma(D_{nm-m+1},1))$. We show that for each $1\le k\le m-1$, the translation quiver generated by $X_k$ (i.e. 
the full subquiver induced by $X_k$, together with $\tau^m$) is a connected component of $\mu_m(\Gamma(D_{nm-m+1},1))$. This is done by first showing that $X_k$ is closed under $\tau^m$ and under taking restricted sectional paths of length $m$. This tells us that $X_k$ is a union of connected components of $\mu_m(\Gamma(D_{nm-m+1},1))$. Then we show that $X_k$ is connected, hence is a single component. \vspace{.2cm} 1) The set $X_k$ is closed under the translation $\tau^m$, since, by definition, $X_k$ is the union of all vertices of certain rows and $\tau^m$ shifts vertices along a row. 2) The set $X_k$ is closed under restricted sectional paths of length $m$: we have seen that these paths are of the form $(i,j)\to\dots\to(i,j-m)$ or $(i,j)\to\dots\to (i+m,j+m)$, cf. Lemma~\ref{lm:sect-paths}. In particular, the new second entry is still congruent to $k$ modulo $m$. 3) The subset $X_k$ is connected: note that $m$ is coprime to $nm-m+1$. Hence, the $\tau^m$-orbit of any vertex $(i,j)$ ($i\in\mathbb{Z}_{nm-m+1}, j\equiv k\mod m$) is the same as the $\tau$-orbit of $(i,j)$. In other words, we can use $\tau^m$ to get everywhere in any given row of $X_k$, in particular in the row through $(0,k)$. Using the arrows and starting at $(0,k)$, we can get to any other row of $X_k$. \vspace{.2cm} Now by definition, $X_k$ is the union of $n-1$ rows, namely the rows $(\cdot,k)$, $(\cdot,m+k)$ up to $(\cdot,(n-2)m+k)$. Each row is of length $nm-m+1$. It is clear from the arrows in $X_k$ that $X_k$ is isomorphic to the Auslander-Reiten quiver of $D^b(A_{n-1})/\tau^{nm-m+1}$. \end{proof} Thus we obtain a complete description of the restricted $m$-th power of \linebreak $\Gamma(D_{nm-m+1},1)$.
\begin{theorem} The restricted $m$-th power $\mu_m(\Gamma(D_{nm-m+1},1),\tau^m)$ is the union of the following connected components: \[ \mu_m(\Gamma(D_{nm-m+1},1),\tau^m)=\Gamma_{\odot}(n,m)\cup \bigcup_{k=1}^{m-1} \Gamma(D^b(A_{n-1})/\tau^{nm-m+1}), \] where $\Gamma(D^b(A_{n-1})/\tau^{nm-m+1})$ denotes the Auslander-Reiten quiver of \linebreak $D^b(A_{n-1})/\tau^{nm-m+1}$. \end{theorem} \begin{proof} The statement follows from Theorem~\ref{thm:D-component}, Proposition~\ref{prop:A-components} and the observation that the vertices of $(\mu_m(\Gamma(D_{nm-m+1},1)),\tau^m)$ are exhausted by the subsets $X_k$ ($k=1,\dots,m-1$) together with the vertices of $\Gamma_{\odot}(n,m)$. \end{proof}
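As a consistency check on this decomposition, one can compare vertex counts: $\Gamma(D_{nm-m+1},1)$ has $N^2$ vertices (with $N=nm-m+1$), $\Gamma_{\odot}(n,m)$ accounts for $nN$ of them, and each of the $m-1$ components $X_k$ consists of $n-1$ rows of length $N$. The following Python sketch (an illustrative aside) verifies that these counts exhaust the vertices:

```python
# Vertex count of Gamma(D_N, 1): |Z_N x {0, 0bar, 1, ..., N-2}| = N * N.
# The decomposition should account for all of them: n*N vertices in
# Gamma_dot(n, m), plus (m - 1) components of (n - 1) rows of length N each.
for n in range(3, 10):
    for m in range(1, 8):
        N = n * m - m + 1
        assert n * N + (m - 1) * (n - 1) * N == N * N
print("vertex counts exhaust mu_m(Gamma(D_N, 1))")
```

Indeed, $nN+(m-1)(n-1)N=N(n+(m-1)(n-1))=N(nm-m+1)=N^2$.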
\section{Introduction} \begin{figure}[!ht] \includegraphics[width=3.5in]{fig_1.eps} \caption{\small {\bf {a}} Illustration of the three layer graphene based nanopore as a possible multilayered sequencing device. {\bf {b}} Schematic of transmission currents through two graphene layers where isolated DNA bases pass through the nanopores. The current vs. time spectra are recorded for each layer independently. A cross correlation between the current data from multipores reveals useful information by increasing signal to noise ratio as described in the text; {\bf {c}} Hydrogen capped graphene nanoribbons and the DNA bases inside the pore. Here, only the flat orientation of the DNA bases is shown. The other angular orientations are shown in the supplementary section.} \label{f1} \end{figure} With applications ranging from explosives and drug detection to DNA sequencing and biomolecular identification, the ability to detect specific molecules and/or molecular series presents many challenges for scientists. With a specific need for timely and accurate measurements and evaluation, it is essential that researchers investigate both the manner of detection and new and improved computational methods for analysis to keep up with the growing pace of the individual fields. The field of DNA sequencing is rapidly evolving due to increasing support and technology. As this occurs, sequencing techniques are challenged by the need for a rapid increase in accuracy, speed, and resolution for smaller amounts of material~\cite{xprize}. Nanopore-based sequencing~\cite{Zwolak2008,Branton2008} and serial methods~\cite{towfiq_dna,Kilina2007} provide promising alternatives to the well established Sanger method~\cite{sanger}, particularly for identifying single DNA bases using transverse conductance~\cite{ventra_2005,ventra_2006}.
Such an approach relies on the ability to resolve the electronic fingerprints of DNA one relevant unit at a time (`serial') as DNA translocates through a nanochannel. It has been established that experimental methods are capable of achieving single-base resolution, which has prompted investigations into the local electrical properties of single DNA bases~\cite{tanaka,Yarotski2009}. Concurrently, the theoretical underpinnings of this approach have been continuously developing~\cite{towfiq_dna,Kilina2011,Kilina2007,ventra_2005,ventra_2006}. The single-molecule sensitivity of nanopore sequencing has been recently demonstrated by Kawai {\it {et al.}}~\cite{kawai} and Lindsay {\it {et al.}}~\cite{Chang2010}. The sequencing of DNA/RNA oligomers and microRNA by tunneling has also been demonstrated~\cite{kawai2012}. Despite such high-quality experimental methods, the most pressing challenge in serial sequencing lies in overcoming effects of noise that lead to a small signal to noise (S/N) ratio in the measured current $I$. The signal fluctuations generally originate from thermal agitation and bond formation between base and nanopore/electrode walls or interactions with a substrate. In an effort to avoid these limitations, we propose the sequential measurement of transverse current cross-correlations, as obtained from multiple pairs of electrodes. The experimental set up for such a nanopore arrangement is schematically shown in \ref{f1}. To be specific, we focus on graphene as the porous material, because it is atomically thick and exhibits extraordinary thermal and electronic properties. \begin{figure*}[htpb] \begin{minipage}[!t]{0.70\linewidth} \epsfig{file=fig_2.eps, width=\linewidth} \end{minipage}\hfill \begin{minipage}[!t]{0.28\linewidth} \caption{\small {\bf {a}} Schematic of transmission currents through two graphene layers where isolated DNA bases pass through the nanopores. The current vs. time spectra are recorded for each layer independently.
A cross correlation between the current data from multipores reveals useful information by increasing signal to noise ratio as described in the text; {\bf {b}} Hydrogen capped graphene nanoribbons and the DNA bases inside the pore. Here, only the flat orientation of the DNA bases is shown. The other angular orientations are shown in the supplementary section.} \label{f2} \end{minipage} \end{figure*} Besides these geometric advantages and good conductivity, graphene also possesses high tensile strength and can endure a high transmembrane pressure environment~\cite{graphene_mechanical}. Consequently, graphene has been proposed as an effective substrate and conducting medium for nanopore sequencing by numerous groups~\cite{branton,merchant,schneider,prezhdo,tanaka,scheicher}. We emphasize, however, that the approach described here for nanopore sequencing may be useful in any other method in which serial measurements (e.g., time series) are made to ascertain individual properties (resistivity here) of the bases. Although this challenge is much more severe for protein-based or solid-state nanopores, even the atomically thick wall of a graphene nanopore cannot completely rule out $\pi-\pi$ stacking between carbon and the DNA bases. In addition, vibration and other electronic fluctuations present in the graphene membrane can significantly mask the conductance signals, making it difficult to differentiate the individual DNA bases. Previous theoretical~\cite{Kilina2007,Kilina2011} and experimental~\cite{tanaka} studies of the interactions between DNA bases and graphene derivatives have revealed the local electronic structure of single bases. The experimental realization of a single layer graphene-based nanopore device is made possible by combining several state of the art techniques, e.g., mechanical exfoliation from graphite on a SiO$_2$ substrate.
Transverse tunneling current (conductance) measurements, as the single strand (ss)DNA translocates through a monolayer graphene nanopore, were previously reported by Schneider {\it et al.}~\cite{schneider}. AFM studies~\cite{Yarotski2009} and theoretical simulations of scanning tunneling spectroscopy (STS)~\cite{towfiq_dna} support the identification of electronic features with varying spatial extent and intensity near the HOMO-LUMO band. To make nanopore sequencing and detection a viable method for determining translocating molecules, one must overcome this signal-to-noise problem. Therefore, we propose a multilayered graphene device in which the transverse conductance is measured through each nanopore independently, as a series of DNA bases or other molecules translocates through them (see ~\ref{f1}). As molecules translocate, they create a time dependent sequence of translocation currents through each of the layers. One then monitors the translocation currents at different pores and acquires a record of sequential current of the same base as it arrives and moves through the individual pores (shown in ~\ref{f2}). The time series of the cross correlation currents can then be used to reduce the uncorrelated, independent noise sources, and hence enhance the signal to noise ratio and improve the differentiation between bases. While our device is discussed in the context of DNA sequencing, the general method and device setup can be used for any molecule small enough to fit through a nanopore. Although we focus on DNA sequencing and biomolecules, this cross-correlation method for data analysis of the transverse currents can be utilized for the analysis of any molecular series given a proper understanding of the molecules' electronic properties.
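The noise-suppression step can be illustrated with a minimal numerical sketch. The waveform, delay, and noise level below are hypothetical stand-ins (not our first-principles data): two stacked layers see the same pulse train, shifted by the translocation delay and buried in independent white noise, and the cross-correlation recovers the shared signal and the inter-layer delay.

```python
import numpy as np

# Synthetic illustration: two layers record the same pulse train with an
# inter-layer translocation delay and independent additive white noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
clean = np.zeros(t.size)
for center, amp in [(300, 1.0), (700, 0.4), (1100, 0.8), (1500, 0.6)]:
    clean += amp * np.exp(-0.5 * ((t - center) / 20.0) ** 2)

delay = 150                       # samples between layer L-1 and layer L-2
sigma = 0.3                       # independent white noise added to each layer
I1 = clean + sigma * rng.standard_normal(t.size)
I2 = np.roll(clean, delay) + sigma * rng.standard_normal(t.size)

# The uncorrelated noise terms average out in the lag products, while the
# shared signal adds coherently, so the correlation peaks near the true delay.
xc = np.correlate(I2 - I2.mean(), I1 - I1.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
print("recovered inter-layer delay (samples):", lags[np.argmax(xc)])
```

The same construction extends to three layers by correlating each pair of recorded currents, as in the device proposed above.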
\begin{figure*}[htpb] \begin{minipage}[!t]{0.70\linewidth} \epsfig{file=fig_3.eps, width=\linewidth} \end{minipage}\hfill \begin{minipage}[!t]{0.28\linewidth} \caption{\small {Configuration averaged transmission coefficients (solid blue lines) for {\bf{(a)}} Adenine, {\bf{(b)}} Cytosine, {\bf{(c)}} Guanine, and {\bf{(d)}} Thymine. The solid red line is T(E) for pure graphene with nanopore for comparison. The vertical dashed lines are at -0.35 eV and +0.35 eV, which are the E$_F$ of the left and right electrodes respectively. The top three curves in each panel are the difference-square curves between the average T(E) for each base. The Fermi energy of the central region is at 0 eV, and the difference curves show distinguishing features for each of the DNA bases.} } \label{f3} \end{minipage} \end{figure*} \section{Results and Discussion} We first discuss our first-principles calculations of transmittance for individual DNA bases inside the graphene nanopore, as presented in ~\ref{f3}. Then in ~\ref{f4}, we show the partial signal recovery using our time-simulation model with three graphene nanopore layers and the cross-correlation between the corresponding signals. In our first-principles approach, for each DNA base, we have taken three random angular orientations with respect to the graphene membrane, while calculating the transmittance between the two electrodes with 0.7 V bias voltage. The configuration averaged transmittances for A, C, G, and T are shown as the solid blue curves in ~\ref{f3}(a)-(d). The conductance of a pure graphene nanoribbon with a hydrogenated nanopore is shown as the solid red curve in ~\ref{f3} for comparison. The transmittance curve is analogous to the non-equilibrium density of states in the presence of the bias voltage, where the zero of energy is the Fermi energy of the central graphene region. The vertical dashed lines are at -0.35 eV and +0.35 eV, which are the chemical potentials of the left and right electrodes respectively.
For each base (~\ref{f3}(a)-(d)), the transmittance curve (solid blue line) in between the left and right electrode chemical potentials is significantly enhanced compared to the pure graphene membrane with a nanopore (solid red line). The features in this region are characteristic of the four bases. For example, a comparison of the Guanine transmittance (~\ref{f3}(c)) with that of Thymine (~\ref{f3}(d)) shows the presence of a characteristic broad peak. For a systematic study of the differences in transmittance among the four bases, we also plot the difference curves (the top three) in~\ref{f3}(a)-(d). If the signatures of one or more of the DNA bases are known prior to detection, the difference curve may provide the signature of an unknown base. For example, if one knows the transmittance of Thymine, a comparison of the characteristic features of the difference-squared transmittances $(A-T)^2$, $(G-T)^2$, and $(C-T)^2$ helps identify the unknown base. ~\ref{f3}(a), (c), and (d) show that the difference curves contain several (up to three) dominant peaks in between the vertical dashed lines. In principle, it is possible to calculate a large number of configurations and maintain a complete database of such characteristic difference curves for sequencing purposes. \begin{figure*}[htpb] \begin{minipage}[!t]{0.70\linewidth} \epsfig{file=fig_4.eps, width=\linewidth} \end{minipage}\hfill \begin{minipage}[!t]{0.28\linewidth} \caption{\small{ {\bf {a}} Current vs. time ($\mu$s) plot for a translocating DNA sequence `ACAGTCGT' for three graphene layers labeled L-1, L-2, and L-3. Additive white noise is included in the current spectrum. Due to the high noise-to-signal ratio, some of the spectral features become harder to recognize (indicated by a question mark in the figure).
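The difference-curve identification described above can be sketched numerically. In the snippet below, the Lorentzian peak positions and amplitudes are invented stand-ins for the configuration-averaged $T(E)$; only the procedure, squaring the difference against a known reference base and locating the dominant peak inside the bias window, mirrors the text:

```python
import numpy as np

# Illustrative stand-ins for the configuration-averaged T(E) of each base;
# the peak positions and amplitudes below are made up for demonstration and
# would come from the first-principles calculations in practice.
E = np.linspace(-1.0, 1.0, 401)

def lorentz(E, E0, w, a):
    """Single Lorentzian peak of height a, centre E0, half-width w."""
    return a * w**2 / ((E - E0)**2 + w**2)

T = {
    "A": lorentz(E, -0.10, 0.08, 0.6) + lorentz(E, 0.25, 0.05, 0.4),
    "T": lorentz(E, 0.05, 0.10, 0.1),    # low amplitude, as found for Thymine
    "G": lorentz(E, -0.20, 0.12, 0.8),
    "C": lorentz(E, 0.15, 0.07, 0.5),
}

# Difference-squared curves against a known reference base (here Thymine),
# evaluated between the electrode chemical potentials -0.35 and +0.35 eV
window = (E > -0.35) & (E < 0.35)
peaks = {}
for base in ("A", "G", "C"):
    d2 = (T[base] - T["T"]) ** 2
    peaks[base] = E[window][np.argmax(d2[window])]
    print(f"({base}-T)^2 dominant peak at E = {peaks[base]:+.2f} eV")
```

Each difference-squared curve peaks near the (synthetic) characteristic feature of the unknown base, which is the fingerprint used for identification.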
{\bf {b}} Cross-correlation between current signals {\it {I$_1$(t)}}, {\it {I$_2$(t)}}, and {\it{I$_3$(t)}} as functions of delay time $\Delta t$, where the currents are from graphene layers L-1, L-2, and L-3 respectively. {\bf {c}} Enlarged segment of the cross-correlation function from (b). These correlation-signal peaks correspond to the peaks from the current signal for the DNA sequence ACAGTCGT.}} \label{f4} \end{minipage} \end{figure*} Such methods are challenged by two major limitations. The first one is the need for prior knowledge of the exact signature of one or more kinds of DNA base, either from the transmittance curve or from other techniques. The second one is the presence of significant noise in the data, which makes the detection of any single base difficult. Some bases exhibit characteristic features in the transmittance curve, which make them easily detectable. For example, Thymine (~\ref{f3}(d), solid blue line) has a very low conductance compared to the others, which (in agreement with previous calculations~\cite{ventra_2005,ventra_2006}) is shown by the low peak amplitude near 0 eV. However, even the detection of Thymine can be difficult in the presence of noisy data. To illustrate the specifics of the approach, we present the simulation of a time series for three graphene nanopore layers with the test sequence A$_0$C$_0$A$_2$G$_2$T$_1$C$_2$G$_1$T$_2$ in~\ref{f4}. In nanopore-based DNA sequencing, the current ($I(t)$) is the measured quantity rather than the transmittance ($T(E)$). Thus, we calculate the current from the transmittance. Using the parameters described previously, we simulated time-dependent current spectra $I_{L-1}$, $I_{L-2}$, and $I_{L-3}$ for our test sequence, as shown in~\ref{f4}(a). The low current amplitude for Thymine in the case of T$_1$ and T$_2$ is expected from the transmittance curve in~\ref{f3}(d), but the natural noise present in the data makes it difficult to confirm the presence of T$_1$ at the expected location.
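The conversion from transmittance to current uses the Landauer relation $I = (2e/h)\int T(E)\,[f_L(E)-f_R(E)]\,dE$, which at zero temperature reduces to an integral of $T(E)$ between the two chemical potentials. A minimal sketch with a flat, purely illustrative $T(E)=0.5$:

```python
import numpy as np

e = 1.602176634e-19      # elementary charge (C)
h = 6.62607015e-34       # Planck constant (J s)

# Illustrative flat transmittance on an energy grid (eV); in practice T(E)
# is the configuration-averaged first-principles result for a given base.
E = np.linspace(-0.5, 0.5, 1001)
T = np.full_like(E, 0.5)

# Zero-temperature Landauer current: I = (2e/h) * integral of T(E) between
# the electrode chemical potentials mu_R = -0.35 eV and mu_L = +0.35 eV.
mu_L, mu_R = 0.35, -0.35
window = (E >= mu_R - 1e-12) & (E <= mu_L + 1e-12)
dE = E[1] - E[0]
Tw = T[window]
integral_eV = dE * (Tw.sum() - 0.5 * (Tw[0] + Tw[-1]))   # trapezoidal rule
I = (2.0 * e / h) * integral_eV * e                      # eV -> J conversion
print(f"I = {I * 1e6:.2f} uA")
```

For this flat transmittance the result is simply $0.5\,G_0 V \approx 27\,\mu$A, with $G_0 = 2e^2/h$ and $V = 0.7$ V, a useful sanity check on the integration.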
In~\ref{f4}(b), we present the cross-correlation between the current spectra from different pairs of graphene layers. For each pair, the cross-correlation is plotted as a function of time delay within the -10 $\mu$s to +10 $\mu$s range. The cross-correlation spectrum is approximately symmetric around the midpoint of the total range due to the overlaps between similar pairs of peaks from opposite ends of the original data. Therefore, we focus only on the positive time delay. The correlation spectrum inside the highlighted dashed box in~\ref{f4}(b) is enlarged in~\ref{f4}(c). By comparing peaks between ~\ref{f4}(a) and (c), we confirm the presence of Thymine in the T$_1$ configuration. Although the amplitudes of the current spectrum do not translate directly into the amplitudes of the cross-correlation spectrum, they confirm the existence of T$_1$. Thus, a time-series analysis using current cross-correlations $\langle I_{i}(t) \otimes I_{j}(t) \rangle$ recovers all eight peaks in our test sequence (~\ref{f4}(b)). The suppression of white noise is substantial and the peaks at zero time delay in the correlation function (~\ref{f4}(b)) are enhanced. We can easily extend this approach to three-point or higher $N$-point correlations, as we demonstrate here, to exponentially reduce the noise-to-signal ratio. The two-point cross-correlation is generally expressed with a single parameter as in \begin{equation} R^{(2)}(\tau)=\int_0^T I_1(t) I_2(t-\tau) dt, \end{equation} where the time interval runs from $0$ to $T$. The three-point correlation is a function of two independent variables, \begin{equation} R^{(3)}(\tau, \tau')=\int_0^T I_1(t) I_2(t-\tau) I_3(t-\tau') dt. \end{equation} We can simplify the description of the triple correlation function in the full two-dimensional parametric space by constraining it to the line $\tau' = 2 \tau$ as in ~\ref{f5}(b).
Thus, the constrained triple correlation function becomes \begin{equation} R^{(3)}(\tau)=\int_0^T I_1(t) I_2(t-\tau) I_3(t-2 \tau) dt. \end{equation} Following this procedure, we can measure currents from $N$ independent graphene layers and calculate the constrained $N$-point correlation as \begin{align} R^{(N)}(\tau)=\int_0^T I_1(t)&I_2(t-\tau) I_3(t-2 \tau) \cdots \nonumber \\ &\cdots I_N(t-(N-1)\tau) dt. \end{align} The three panels in ~\ref{f5}(a) show our calculated current signal from a single layer as well as the two- and three-point cross-correlation functions from the corresponding two and three independent graphene nanopores. The test sequence used here is A$_0$C$_0$A$_2$G$_2$T$_1$C$_2$G$_1$T$_2$C$_1$. Using two-, three- and four-point cross-correlation functions, we estimated the ratios between the average signal and average noise in each case, as shown in Table~1 in the supplementary section. We confirm the exponential drop in the noise-to-signal ratio, as shown in ~\ref{f5}(c). The computational details and the table containing the results are also given in the supplementary section. \section{Computational Method} In this work, we ignore the background contribution from the large phosphate backbone typically present in a single-stranded DNA (ssDNA). This simplification is based on the assumption that, by identifying and subtracting the background noise coming from the heavy and rigid backbone structure, one can isolate the relevant signal from the individual bases. More specifically, we have built on earlier work~\cite{towfiq_dna,Kilina2011,ventra_2005,scheicher} to model the conductance of a pore containing a molecule in two steps: 1) First, we carried out {\it {ab initio}} calculations of the transmission ($T(E)$) and current ($I$) as a single DNA base translocates through the nanopore of a graphene monolayer.
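A discretized sketch of the constrained $N$-point correlation is given below for $N=3$; the pulse train, inter-layer delay, and noise level are illustrative. Layer $k$ is delayed by $k\tau$ samples, matching the constraint $\tau' = 2\tau$:

```python
import numpy as np

def constrained_corr(currents, tau_steps):
    """Constrained N-point correlation R^(N)(tau): layer k's trace is
    advanced by k*tau_steps samples (periodic ends), i.e. I_k(t - k*tau)."""
    prod = np.ones_like(currents[0])
    for k, I in enumerate(currents):
        prod = prod * np.roll(I, -k * tau_steps)
    return prod.sum()

rng = np.random.default_rng(1)
t = np.arange(2000)
# Periodic test pulses (Gaussian, width 5 samples, one every 250 samples)
pulse = np.exp(-0.5 * ((t % 250) - 50.0) ** 2 / 25.0)

delay = 40   # true inter-layer delay in samples
layers = [np.roll(pulse, k * delay) + 0.3 * rng.standard_normal(t.size)
          for k in range(3)]

# R^(3)(tau) peaks when tau matches the physical inter-layer delay
R = [constrained_corr(layers, tau) for tau in range(100)]
best = int(np.argmax(R))
print("best tau =", best)
```

The maximum of $R^{(3)}(\tau)$ occurs when the scan delay matches the physical inter-layer delay, which is how the constrained correlation both aligns the layers and suppresses the independent noise.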
2) Then, we simulate the time dependence of the current data by adopting a simple model with multilayered graphene nanopores with added statistical noise and broadening. Calculations of the transmission were performed by taking each DNA base inside the nanopore with three different angular orientations, and using the Landauer-B\"{u}ttiker~\cite{land} formalism implemented in the {\it ab initio} software ATK~\cite{mads_2}. We emphasize that our approach neither relies on nor requires a geometry optimization of the molecules in the pores. The translocation is a dynamical process with significant variations of the configurations found for molecules inside a pore. Thus, the same molecule can arrive in different orientations at each pore, a process which contributes to the configuration noise sources that we address here. Therefore, we do not optimize the configurations and instead use the various configurations as the set from which the random sampling is taken. \begin{figure*}[htpb] \begin{minipage}[!t]{0.70\linewidth} \epsfig{file=fig_5.eps, width=\linewidth} \end{minipage}\hfill \begin{minipage}[!t]{0.28\linewidth} \caption{\small {The three panels in (a) show the improvement in signal-to-noise ratio with higher-order cross-correlation. The time-dependent current spectrum for the sequence A$_0$C$_0$A$_2$G$_2$T$_1$C$_2$G$_1$T$_2$C$_1$ from a single graphene layer is shown in the top panel (black); the double and triple cross-correlated spectra are shown in the middle (red) and bottom (blue) panels. (b) Phase diagram of the triple correlation function on the 2D delay-time parametric space for $\tau$ and $\tau'$. The dashed red line is our constraint for calculating the triple correlation function.
(c) Nearly exponential decay of the noise-to-signal ratio with higher-order correlation.}} \label{f5} \end{minipage} \end{figure*} In these calculations, we have taken a graphene nanoribbon with 208 carbon atoms in the conduction region, where the nanopore is constructed by removing the central carbon atoms and capping the inner wall with hydrogen atoms, since hydrogenated edges were found~\cite{scheicher} to enhance the average experimental conductivity. The bias voltage between the left and right electrodes is fixed by setting their chemical potentials to +0.35 and -0.35 eV. In this work, the nanopore dimension is much smaller than that modeled by other groups~\cite{postma,prezhdo}. The details and various parameters of our first-principles calculations can be found in the supplementary section. To demonstrate the recoverability of the current ($I(t)$) signals from noise, we examine the relation between the noise coming from different layers. For simplicity, we consider the dominant noise to come primarily from two sources. As the bases translocate through the {\it {i}}-th graphene nanopore layer, the vibration of the DNA backbone may cause an individual base plane to land with a random angular orientation with respect to the graphene plane, causing a configuration noise $S_i^C(t)$. The additional noise, such as the thermal vibration of the graphene membrane at the $i$-th nanopore, is defined as $S_i^A(t)$. Thus the total noise of the $i$-th nanopore can be expressed as \begin{equation} S_i(t)=S_i^C(t)+S_i^A(t). \end{equation} The correlation between two layers is therefore given by \begin{align} \langle S_i(t)&\cdot S_j(t')\rangle\,=\,\langle S_i^C(t) \cdot S_j^C(t')\rangle \nonumber \\ &+\langle S_i^C(t) \cdot S_j^A(t')\rangle +\langle S_i^A(t) \cdot S_j^C(t')\rangle \nonumber \\ &+\langle S_i^A(t) \cdot S_j^A(t')\rangle. \end{align} Here $t'=t+\Delta t$. For $i \neq j$, the contributions from the last three terms on the right side of Eq.~2 are negligible because the signals in separate nanopores are weakly correlated or uncorrelated.
Since the DNA bases are strongly attached to the ssDNA backbone, the configuration noise between two membranes mainly contributes to the first term in Eq.~2. Therefore, the noise can be approximated as \begin{equation} \langle S_i \cdot S_j\rangle\,\approx\,\langle S_i^C \cdot S_j^C\rangle , \end{equation} whereas, for $i = j$, all the terms on the right side of Eq.~2 survive and contribute significantly to the total noise. Since the noise between $i$ and $j$ is uncorrelated, a comparison of their signals will enhance the individual base signals by reducing the noise-to-signal ratio. There are two extreme limits in which we can take advantage of the above observation. These limits relate to the rate of base translocation compared to the typical vibrational frequency of the bases facing the electrodes. When the translocation is fast compared to this frequency, the above cross-correlations allow us to reduce the {\it intrinsic} noise due to random orientations. On the other hand, when the translocation rate is slower than the vibrational frequency, the uncorrelated noise is eliminated and the only one that survives is the correlated one. We focus here on the second case, since experimentally the latter situation is more likely~\cite{Zwolak2008,Branton2008}. As an example, we show the low current amplitude for Thymine in ~\ref{f4}(a), and in ~\ref{f4}(c) the enhancement of the signal-to-noise ratio. We have taken a test sequence A$_0$C$_0$A$_2$G$_2$T$_1$C$_2$G$_1$T$_2$, where the subscripts denote different angular orientations of the bases inside the pore. The time dependence of this sequence is modeled by taking the time interval between two consecutive bases as $ \tau = 1.0 \;\mu\text{s}$, including a random Gaussian uncertainty in the interval with $\sigma_{\tau}= \pm 0.2 \;\mu\text{s}$. Each current signal is also broadened using a random Gaussian broadening with $\sigma_{broad}=0.2\,\mu A$. To simulate a realistic experiment with background noise, we have also included additive white Gaussian noise.
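The time-series model just described can be sketched as follows; the per-base peak currents are placeholders (in the actual model they follow from the first-principles transmittance of each configuration), and the pulse width is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder peak currents (uA) for each base/orientation; in the actual
# model these come from the first-principles transmittance of each
# configuration (Thymine deliberately low, as in the text).
amp = {"A0": 3.0, "C0": 2.5, "A2": 3.2, "G2": 4.0,
       "T1": 0.8, "C2": 2.6, "G1": 3.8, "T2": 0.9}
sequence = ["A0", "C0", "A2", "G2", "T1", "C2", "G1", "T2"]

tau, sigma_tau = 1.0, 0.2          # us: mean base spacing and Gaussian jitter
width = 0.05                       # us: illustrative pulse width
t = np.arange(0.0, 10.0, 0.001)    # us

# Arrival times: mean spacing tau with Gaussian uncertainty sigma_tau
arrivals = np.cumsum(tau + sigma_tau * rng.standard_normal(len(sequence)))

I = np.zeros_like(t)
for base, t0 in zip(sequence, arrivals):
    I += amp[base] * np.exp(-0.5 * ((t - t0) / width) ** 2)
I += 0.3 * rng.standard_normal(t.size)   # additive white Gaussian noise (uA)
print(f"{len(arrivals)} pulses, last arrival at {arrivals[-1]:.2f} us")
```

Traces for the other layers are obtained by shifting the arrival times by the inter-layer delay and drawing fresh noise realizations, which is what the cross-correlation analysis then exploits.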
We assume that, with the applied field in the vertical direction, the average elapsed time between two translocating bases is $\tau \approx 1.0 \;\mu\text{s}$. The time distance between two consecutive graphene layers is set to $\Delta t \approx 0.2 \;\mu\text{s}$. \section{Conclusions} We implement first-principles calculations of the transmittance for a systematic study of the identification of single DNA bases or other biomolecules translocating through graphene nanopores. To eliminate the high background noise, we propose a multilayered graphene-based nanopore device combined with a multi-point cross-correlation method to substantially improve the signal-to-noise ratio of the electronic readout of biomolecules. To illustrate this approach, we adopted a statistical method for simulating the time-dependent current spectrum. The enhanced resolution is produced by the multiple translocation readouts of the same bases of the same molecule through the pores. The cross-correlated signals from each pair of electrodes suppress the uncorrelated noise produced by each single translocation event. In this way, Thymine can serve as a ``reference molecule'' for identifying other molecules from the difference transmittance curves. We also demonstrate the recovery of signals associated with different configurations by taking cross-correlations between different pairs of graphene layers. This study provides a promising method for an enhanced signal-to-noise ratio in multipore graphene-based devices (or any other serial sequencing device), and demonstrates their potential applicability as a next-generation biomolecular detection technique. While we focus on the correlations in DNA bases, this cross-correlation method can be used for any molecule or molecular series for detection or identification purposes. \acknowledgement We are grateful to K.T. Wikfeld, K. Zakharchenko and Svetlana Kilina for useful discussions.
This work is supported by the Center for Integrated Nanotechnologies at Los Alamos, a U.S. Department of Energy, Office of Basic Energy Sciences user facility. Los Alamos National Laboratory, an affirmative action equal opportunity employer, is operated by Los Alamos National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under contract DE-AC52-06NA25396. Work at NORDITA was supported by VR 621-2012-2983 and ERC 321031-DM. MD acknowledges partial support from the National Institutes of Health. IKS is supported by AFOSR FA9550-10-1-0409. \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{24} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\EndOfBibitem{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\EndOfBibitem\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\EndOfBibitem{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[FOUNDATION(2013)]{xprize} FOUNDATION,~X.~P. Archon Genomics X PRIZE. 2013; \url{http://www.genomics.xprize.org}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Zwolak and {Di Ventra}(2008)Zwolak, and {Di Ventra}]{Zwolak2008} Zwolak,~M.; {Di Ventra},~M. \emph{Rev. Mod. 
Phys.} \textbf{2008}, \emph{80}, 141--165\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Branton et~al.(2008)Branton, Deamer, Marziali, Bayley, Benner, Butler, {Di Ventra}, Garaj, Hibbs, Huang, Jovanovich, Kristic, Lindsay, Ling, Mastrangelo, Meller, Oliver, Pershin, Ramsey, Riehn, Soni, Tabard-Cossa, Wanunu, Wiggin, and Schloss]{Branton2008} Branton,~D. et~al. \emph{Nat. Biotechnol.} \textbf{2008}, \emph{26}, 1146--1153\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Ahmed et~al.(2012)Ahmed, Kilina, Das, Haraldsen, Rehr, and Balatsky]{towfiq_dna} Ahmed,~T.; Kilina,~S.; Das,~T.; Haraldsen,~J.~T.; Rehr,~J.~J.; Balatsky,~A.~V. \emph{Nano Letters} \textbf{2012}, \emph{12}, 927--931\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Kilina et~al.(2007)Kilina, Tretiak, Yarotski, Zhu, Modine, Taylor, and Balatsky]{Kilina2007} Kilina,~S.; Tretiak,~S.; Yarotski,~D.~A.; Zhu,~J.-X.; Modine,~N.; Taylor,~A.; Balatsky,~A.~V. \emph{The Journal of Physical Chemistry C} \textbf{2007}, \emph{111}, 14541--14551\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Sanger et~al.(1977)Sanger, Nicklen, and Coulson]{sanger} Sanger,~F.; Nicklen,~S.; Coulson,~A.~R. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{1977}, \emph{74}, 5463--5467\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Zwolak and Di~Ventra(2005)Zwolak, and Di~Ventra]{ventra_2005} Zwolak,~M.; Di~Ventra,~M. 
\emph{Nano Letters} \textbf{2005}, \emph{5}, 421--424\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Lagerqvist et~al.(2006)Lagerqvist, Zwolak, and Di~Ventra]{ventra_2006} Lagerqvist,~J.; Zwolak,~M.; Di~Ventra,~M. \emph{Nano Letters} \textbf{2006}, \emph{6}, 779--782\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Tanaka and Kawai(2009)Tanaka, and Kawai]{tanaka} Tanaka,~H.; Kawai,~T. \emph{Nat Nano} \textbf{2009}, \emph{4}, 518--522\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Yarotski et~al.(2009)Yarotski, Kilina, Talin, Tretiak, Prezhdo, Balatsky, and Taylor]{Yarotski2009} Yarotski,~D.~A.; Kilina,~S.~V.; Talin,~A.~A.; Tretiak,~S.; Prezhdo,~O.~V.; Balatsky,~A.~V.; Taylor,~A.~J. \emph{Nano Letters} \textbf{2009}, \emph{9}, 12--17\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Kilina et~al.(2011)Kilina, Yarotski, Talin, Tretiak, Taylor, and Balatsky]{Kilina2011} Kilina,~S.; Yarotski,~D.~A.; Talin,~A.~A.; Tretiak,~S.; Taylor,~A.~J.; Balatsky,~A.~V. \emph{Journal of Drug Delivery} \textbf{2011}, \emph{2011}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Tsutsui et~al.(2010)Tsutsui, Taniguchi, Yokota, and Kawai]{kawai} Tsutsui,~M.; Taniguchi,~M.; Yokota,~K.; Kawai,~T. 
\emph{Nat Nano} \textbf{2010}, \emph{5}, 286--290\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Chang et~al.(2010)Chang, Huang, He, Liang, Zhang, Li, Chen, Sankey, and Lindsay]{Chang2010} Chang,~S.; Huang,~S.; He,~J.; Liang,~F.; Zhang,~P.; Li,~S.; Chen,~X.; Sankey,~O.; Lindsay,~S. \emph{Nano Lett.} \textbf{2010}, \emph{10}, 1070--1075\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Ohshiro et~al.(2012)Ohshiro, Matsubara, Tsutsui, Furuhashi, Taniguchi, and Kawai]{kawai2012} Ohshiro,~T.; Matsubara,~K.; Tsutsui,~M.; Furuhashi,~M.; Taniguchi,~M.; Kawai,~T. \emph{Scientific Reports} \textbf{2012}, \emph{2}, 1070--1075\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Lee et~al.(2008)Lee, Wei, Kysar, and Hone]{graphene_mechanical} Lee,~C.; Wei,~X.; Kysar,~J.~W.; Hone,~J. \emph{Science} \textbf{2008}, \emph{321}, 385--388\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Branton et~al.(2008)Branton, Deamer, Marziali, Bayley, Benner, Butler, Di~Ventra, Garaj, Hibbs, Huang, Jovanovich, Krstic, Lindsay, Ling, Mastrangelo, Meller, Oliver, Pershin, Ramsey, Riehn, Soni, Tabard-Cossa, Wanunu, Wiggin, and Schloss]{branton} Branton,~D. et~al. 
\emph{Nat Biotech} \textbf{2008}, \emph{26}, 1146--1153\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Merchant et~al.(2010)Merchant, Healy, Wanunu, Ray, Peterman, Bartel, Fischbein, Venta, Luo, Johnson, and Drndić]{merchant} Merchant,~C.~A.; Healy,~K.; Wanunu,~M.; Ray,~V.; Peterman,~N.; Bartel,~J.; Fischbein,~M.~D.; Venta,~K.; Luo,~Z.; Johnson,~A. T.~C.; Drndić,~M. \emph{Nano Letters} \textbf{2010}, \emph{10}, 2915--2921\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Schneider et~al.(2010)Schneider, Kowalczyk, Calado, Pandraud, Zandbergen, Vandersypen, and Dekker]{schneider} Schneider,~G.~F.; Kowalczyk,~S.~W.; Calado,~V.~E.; Pandraud,~G.; Zandbergen,~H.~W.; Vandersypen,~L. M.~K.; Dekker,~C. \emph{Nano Letters} \textbf{2010}, \emph{10}, 3163--3167\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Nelson et~al.(2010)Nelson, Zhang, and Prezhdo]{prezhdo} Nelson,~T.; Zhang,~B.; Prezhdo,~O.~V. \emph{Nano Letters} \textbf{2010}, \emph{10}, 3237--3242\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[He et~al.(2011)He, Scheicher, Grigoriev, Ahuja, Long, Huo, and Liu]{scheicher} He,~Y.; Scheicher,~R.~H.; Grigoriev,~A.; Ahuja,~R.; Long,~S.; Huo,~Z.; Liu,~M. \emph{Advanced Functional Materials} \textbf{2011}, \emph{21}, 2602--2602\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Du et~al.(2008)Du, Skachko, Barker, and Andrei]{land} Du,~X.; Skachko,~I.; Barker,~A.; Andrei,~E.~Y. 
\emph{Nat Nano} \textbf{2008}, \emph{3}, 491--495\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Brandbyge et~al.(1997)Brandbyge, S\o{}rensen, and Jacobsen]{mads_2} Brandbyge,~M.; S\o{}rensen,~M.~R.; Jacobsen,~K.~W. \emph{Phys. Rev. B} \textbf{1997}, \emph{56}, 14956--14959\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem[Postma(2010)]{postma} Postma,~H. W.~C. \emph{Nano Letters} \textbf{2010}, \emph{10}, 420--425, PMID: 20044842\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \end{mcitethebibliography} \end{document}
\chapter{Bundles on K\"ahler manifolds} \label{appendixvectorbundles} \label{app:Kbundle} In this appendix, we review some standard mathematics for K\"ahler manifolds and holomorphic vector bundles thereon, which we rely on in the main part of the text. The exposition mainly follows Refs.~\cite{H}, and more details can also be found in Refs.~\cite{Candelas:1987is,GH}. Let $M$ be a K\"ahler manifold of dimension $n$ and $E\rightarrow M$ be a rank $r$ holomorphic vector bundle over $M$ with fibres $E_x$, where $x \in M$. The space of $E$-valued $(p,q)$ forms on $M$ is denoted by ${\cal A}^{p, q} (E)$. The usual operator $\bar{\partial}:{\cal A}^{p, q}\rightarrow {\cal A}^{p, q+1}$ for differential forms can be generalised to $E$-valued forms \begin{equation} {\bar \partial}_E : {\cal A}^{p, q} (E) \to {\cal A}^{p, q+1} (E) \end{equation} mapping bundle-valued $(p,q)$-forms to bundle-valued $(p,q+1)$-forms. Explicitly, this operator is defined as follows. For a local holomorphic trivialisation $s = (s_1, s_2, \dots, s_r)$ of $E$ we can write a vector bundle-valued $(p, q)$-form $\alpha \in {\cal A}^{p, q} (E)$ as $\alpha = \sum_{i=1}^r \alpha ^i \otimes s_i$, where $\alpha ^i \in {\cal A}^{p, q}$ are regular $(p, q)$-forms. Then ${\bar \partial}_E$ acts as \begin{equation} {\bar \partial}_E \alpha = \sum_{i=1}^r {\bar \partial} \alpha ^i \otimes s_i\; . \label{defbp} \end{equation} Since the transition functions are holomorphic, this definition is independent of the chosen trivialisation, as it should be. 
It is straightforward to show from this definition that ${\bar \partial}_E^2 =0$ and that the Leibniz rule \begin{equation} {\bar \partial}_E (f \alpha ) = {\bar \partial} (f) \wedge \alpha + f {\bar \partial}_E (\alpha ) \end{equation} holds (here, $f$ is a differentiable function on $M$).\\[4mm] A Hermitian structure on $E$ (which can also be defined more generally for complex vector bundles) is given by providing a Hermitian scalar product $h_x$ on each fibre $E_x$. Let $\sigma$ and $\rho$ be two sections of $E$ which, for the aforementioned trivialisation of $E$, are expanded as $\sigma= \sum_{i=1}^r \sigma^i s_i$ and $\rho= \sum_{i=1}^r \rho^i s_i$. Then, the Hermitian structure, acting on $\sigma$ and $\rho$, can be written out as \begin{equation} h (\sigma, \rho)= H_{ij} \sigma^i {\bar \rho}^j = \sigma^{{\rm T}} H {\bar \rho}\,, \quad H_{i j}= h (s_i, s_j)\,. \label{C1} \end{equation} In other words, locally, we can think of the Hermitian structure as being described by Hermitian $r \times r$ matrices $H$. For a different local trivialisation $s' = (s'_1, s'_2, \dots, s'_r)$ related to the original one by $s'_i = \phi^j_{\ i} s_j$, it follows that $H$ transforms as \begin{equation} H'= \phi^{{\rm T}} H {\bar \phi}\,. \label{C2} \end{equation} The Hermitian structure $h$ can also be viewed as an isomorphism between the vector bundle $E$ and its dual $E^*$, so $h: E \stackrel{\simeq}{\rightarrow} E^*$. This isomorphism can be written more explicitly by introducing a ``dual" trivialisation $s_*=(s_*^1,\ldots ,s_*^r)$ of the dual bundle $E^*$, defined by the relations $s_*^i(s_j)=\delta^i_j$. If we further denote the inverse map of $h$ by $h^*:E^* \stackrel{\simeq}{\rightarrow} E$ then we have \begin{equation} h(s_i)=H_{ji}s^j_*\,,\qquad h^*(s^i_*)=\bar{H}^{ji}s_j\,,\qquad H^{ij}H_{jk}=\delta^i_k\; .
\end{equation} A Hermitian structure allows one to define a generalisation of the Hodge dual operation ${\bar \star}_E : {\cal A}^{p, q} (E) \to {\cal A}^{n-p, n-q} (E^*)$ to vector bundle-valued forms by setting \begin{equation} {\bar \star}_E (\alpha \otimes s) = \star ({\bar \alpha }) \otimes h(s)\,, \label{C11} \end{equation} where $\star$ is the regular Hodge star operation on forms. It follows that ${\bar \star}_E \circ {\bar \star}_E = (-1)^{p+q}$, in analogy with the corresponding rule for the regular Hodge star. Using this generalised Hodge dual, one can define the scalar product \begin{equation} (\alpha , \beta)= \int_M \alpha \wedge {\bar \star}_E (\beta)\, \label{C12} \end{equation} on ${\cal A}^{p, q} (E)$. The adjoint operator ${\bar \partial}_E^{\dagger}: {\cal A}^{p, q} (E) \to {\cal A}^{p, q-1} (E)$ of $\bar{\partial}_E$ relative to this scalar product satisfies \begin{equation} ({\bar \partial}_E \alpha , \beta)= (\alpha , {\bar \partial}_E^{\dagger} \beta)\,, \label{C13} \end{equation} and takes the form \begin{equation} {\bar \partial}_E^{\dagger}= - {\bar \star}_E \circ {\bar \partial}_{E^*} \circ {\bar \star}_E\,, \label{C14} \end{equation} as can be seen explicitly from Eqs.~\eqref{C11}, \eqref{C12} and \eqref{C13}. Furthermore, one can define the generalised Laplacian \begin{equation} \Delta_E = {\bar \partial}_E^{\dagger} {\bar \partial}_{E} + {\bar \partial}_{E} {\bar \partial}_E^{\dagger} \,, \label{C15} \end{equation} which is self-adjoint under the above scalar product. Bundle-valued forms $\alpha \in {\cal A}^{p, q} (E)$ satisfying $\Delta_E \alpha =0$ are called harmonic with respect to the Hermitian structure $h$. For a compact manifold, the harmonic forms $\alpha $ are precisely the closed and co-closed forms, i.e.\ the forms satisfying \begin{equation} {\bar \partial}_E\alpha =0, \ \quad {\bar \partial}_E^{\dagger} \alpha =0\,.
\label{C16} \end{equation} These forms are in one-to-one correspondence with the cohomology groups $H^{p, q}(M, E)\cong H^q (M, E \otimes \Lambda^p \Omega_M)$. Finally, there is a generalisation of the Hodge decomposition which states that every form $\alpha \in {\cal A}^{p, q} (E)$ can be written as a unique sum $\alpha =\eta + {\bar \partial}_{E} \beta +{\bar \partial}_E^{\dagger}\gamma$, where $\eta$ is harmonic. \vskip 4mm\noindent A connection, $\nabla$, on $E$ is a map $\nabla: {\cal A}^0 (E) \to {\cal A}^1 (E)$ which satisfies the Leibniz rule \begin{equation} \nabla (f\sigma)= d (f) \otimes \sigma + f \nabla (\sigma) \label{C5} \end{equation} for local sections $\sigma$ and local functions $f$. Writing $ \sigma =\sum_{i=1}^r \sigma^i s_i$ in terms of a local trivialisation $s=(s_1,\ldots ,s_r)$ we have \begin{equation} \nabla (\sigma)= (d \sigma^i + A^{i}_{\ j} \sigma^j) \otimes s_i\,, \quad \nabla (s_j)= A^{i}_{\ j} s_i\,, \label{C6} \end{equation} where $A$ is the gauge field. In short, locally, the connection can be written as $\nabla=d+A$, with the gauge field transforming as \begin{equation} A'= \phi^{-1} A \phi + \phi^{-1} d\phi\, \label{C7} \end{equation} under a change of trivialisation, $s'_i =\phi^j_{\ i} s_j$. The curvature $F_{\nabla} \in {\cal A}^2 ({\rm End} (E))$ is defined by $F_{\nabla} =\nabla \circ \nabla$. For a given trivialisation, its local form is \begin{equation} F_{\nabla} = dA + A \wedge A\,. \label{C8} \end{equation} A connection is called compatible with the holomorphic structure if $\nabla^{0,1}={\bar \partial}$ and it is called Hermitian if it satisfies $d (h (\sigma, \rho))= h (\nabla (\sigma), \rho) + h (\sigma, \nabla (\rho))$ for any two sections $\sigma$ and $\rho$. For a holomorphic vector bundle, there exists a unique Hermitian connection compatible with the holomorphic structure which is called the Chern connection. 
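As a simple illustration (a standard special case, anticipating the local formulas given next): for a holomorphic line bundle, $r=1$, with local Hermitian structure $H = e^{-\varphi}$ for a real function $\varphi$, the Chern connection and its curvature reduce to \begin{equation} A = {\bar H}^{-1} \partial {\bar H} = - \partial \varphi\,, \qquad F_{\nabla} = {\bar \partial} A = \partial {\bar \partial} \varphi\,, \end{equation} so that the curvature is manifestly a $(1,1)$-form.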
In a local frame, the gauge field associated to the Chern connection is given by \begin{equation} A ={\bar H}^{-1} \partial {\bar H}\,. \label{C9} \end{equation} For a holomorphic change of the trivialisation, $s'_i =\phi^j_{\ i} s_j$, it is straightforward to verify that Eq.~\eqref{C9} is consistent with the transformation laws~\eqref{C2} and \eqref{C7}. It can be shown, using Eq.~\eqref{C8}, that the curvature of the Chern connection is a $(1, 1)$-form and, locally, is explicitly given by \begin{equation} F_{\nabla} = {\bar \partial} ({\bar H}^{-1} \partial {\bar H})\,. \label{C10} \end{equation} \vskip 4mm\noindent In the main part of the thesis, we are calculating certain bundle-valued harmonic forms and it is, therefore, important to re-write the defining Eqs.~\eqref{C16} for such forms in a simple and explicit way. As before, we introduce local trivialisations $s=(s_1,\ldots ,s_r)$ and $s_*=(s_*^1,\ldots ,s_*^r)$ on $E$ and $E^*$, satisfying $s^i_* (s_j)= \delta^i_{\ j}$. We start with two $(p,q)$-forms $\alpha =\alpha ^i s_i$ and $\beta =\beta_i s^i_*$ taking values in $E$ and $E^*$, respectively. Then from the definition~\eqref{defbp} of ${\bar \partial}_E$, we have \begin{equation} {\bar \partial}_E (\alpha )= ({\bar \partial} \alpha ^i) \otimes s_i\,, \quad {\bar \partial}_{E^*} (\beta)= ({\bar \partial} \beta_i) \otimes s^i_*\,. \label{C17} \end{equation} For the generalised Hodge star operation~\eqref{C11}, we get \begin{equation} {\bar \star}_{E} (\alpha ) = (* {\bar \alpha }^i) \otimes h(s_i)= H_{ji} (* {\bar \alpha }^i) \otimes s_*^j \,, \quad {\bar \star}_{E^*} (\beta) = (* {\bar \beta}_i) \otimes h^*(s^i_*)= \bar{H}^{ji} (* {\bar \beta}_i) \otimes s_j \,. 
\label{C18} \end{equation} Combining these equations, we obtain \begin{equation} {\bar \partial}_E^{\dagger} \alpha = -\star (\delta^k_{\ i} \partial + {\bar H}^{kj} \partial {\bar H}_{ji}) \star \alpha ^i \otimes s_k = -\star (\delta^k_{\ i} \partial + A^k_{\ i}) \star \alpha ^i \otimes s_k\,, \label{C19} \end{equation} where $A$ is the Chern connection~\eqref{C9}. Hence, ${\bar \partial}_E^{\dagger}$ corresponds to the dual of the $\nabla^{1, 0}$ part of the Chern connection. From the above argument, we conclude that a harmonic bundle-valued form $\alpha $, written as ${\boldsymbol \alpha }= (\alpha ^1, \ldots ,\alpha ^r)^{{\rm T}}$ relative to a local frame, is characterised by \begin{equation} {\bar \partial} \boldsymbol { \alpha } =0\,, \quad (\partial + A) \star {\boldsymbol \alpha }=0\,, \label{C20} \end{equation} where $A$ is the gauge field associated to the Chern connection on the bundle. Using the explicit expression~\eqref{C9} for the Chern connection, these equations can be cast into the somewhat more convenient form \begin{equation} {\bar \partial} \boldsymbol { \alpha } =0\,, \quad \partial ({\bar H} \star {\boldsymbol \alpha })=0\, , \label{C21} \end{equation} with the Hermitian structure $H$ on the bundle. \chapter{The coboundary map} \label{coboundarymapappendix} It is well-known that for every short exact sequence of sheaves there is an associated long exact sequence in sheaf cohomology. A crucial ingredient in this correspondence is the co-boundary map whose construction can be found in standard textbooks, see for example \cite{GH}, page 40. Since the co-boundary map plays an important role for our discussion in the main part of the thesis, we now briefly review its construction. 
\noindent We start with the short exact sequence \begin{equation} 0 \longrightarrow A \stackrel{g}{\longrightarrow} B \stackrel{f}{\longrightarrow} C \longrightarrow 0 \label{B1} \end{equation} of sheaves $A$, $B$, $C$ and sheaf morphisms $f$, $g$, satisfying $f \circ g = 0$. The associated long exact sequence in cohomology has the form % \begin{eqnarray} \cdots&\longrightarrow & H^i (A) \stackrel{g}{\longrightarrow} H^i (B) \stackrel{f}{\longrightarrow} H^i (C)\nonumber \\ &\stackrel{\delta}{\longrightarrow} &H^{i+1} (A) \stackrel{g}{\longrightarrow} H^{i+1} (B) \stackrel{f}{\longrightarrow} H^{i+1} (C) \longrightarrow \dots\;, \label{B2} \end{eqnarray} where $f$ and $g$ are the induced maps in cohomology and $\delta$ is the co-boundary map which needs to be constructed. To be in line with the main part of the thesis, we will use the language appropriate for vector bundles, rather than more general sheaves, from now on. To derive $\delta$, we start with a differential $(0,i)$-form $\nu \in H^i (C)$ taking values in $C$. Since the map $f : B \to C$ in~\eqref{B1} is surjective, it follows that $\nu$ can always be written as $\nu = f(\hat{\nu})$ for some form $\hat{\nu} \in \Omega^i(B)$. However, if $H^{i+1} (A) \neq 0$, the induced map $f: H^i (B) \to H^i (C)$ need not be surjective, which implies that the form $\hat{\nu}$ is not necessarily closed. Now we consider ${\bar \partial} \hat{\nu} \in \Omega^{i+1} (B)$. We get \begin{equation} f ({\bar \partial} \hat{\nu}) = {\bar \partial} (f ( \hat{\nu})) = {\bar \partial} \nu =0\,, \label{B3} \end{equation} where we have used the fact that the map $f$ is holomorphic. This implies that ${\bar \partial} \hat{\nu}$ is in the kernel of $f$ and, by the exactness of the sequence~\eqref{B1}, it is in the image of $g$. That is, there exists an element $\hat{\omega}\in \Omega^{i+1} (A)$ such that $g \hat{\omega} = {\bar \partial} \hat{\nu}$.
Moreover, since $g {\bar \partial} \hat{\omega} = {\bar \partial}g \hat{\omega} = {\bar \partial}^2 \hat{\nu} = 0$ and $g$ is injective, we have ${\bar \partial} \hat{\omega} = 0$. Hence, $\hat{\omega}$ represents an element of $H^{i+1}(A)$ and we can define the co-boundary map by \begin{equation} \delta(\nu) = \hat{\omega} \,. \label{B4} \end{equation} \noindent In summary, the main features of the short exact sequence \eqref{B1} and its long exact counterpart \eqref{B2} that we will require are as follows. For a $(0,i)$-form $\nu \in H^i(C)$ and its image $\hat{\omega} = \delta(\nu)\in H^{i+1}(A)$ under the co-boundary map, there exists a $(0,i)$–form $\hat{\nu} \in \Omega^i(B)$ such that \begin{equation} \nu = f (\hat{\nu})\,, \quad {\bar \partial} \hat{\nu} = g \hat{\omega}\,. \label{B5} \end{equation} \chapter{Harmonic line bundle-valued forms on $\mathbb{P}^n$} \label{appendixPn} One of the main ingredients of our calculation of Yukawa couplings is the explicit construction of bundle-valued forms, representing line bundle cohomologies on the ambient space. Since the ambient spaces under consideration are products of projective spaces, it is sufficient to discuss a single projective space $\mathbb{P}^n$. We begin by setting up and reviewing standard facts about projective space including the Fubini-Study metric. One way to obtain a one-to-one correspondence between cohomology and forms is to focus on harmonic forms and we will do this relative to the Fubini-Study metric. Line bundles, their Chern connections and cohomology are the subject of the next two parts of the appendix. Most of this material can be found in standard textbooks, such as Refs.~\cite{H,GH,hartshorne}. Finally, we explain how harmonic line-bundle valued forms are related under multiplication. 
\section{Basics of projective space} As explained in Section \ref{complexmanifoldssection}, the complex projective space $\mathbb{P}^n$ is the set of complex lines through the origin in $\mathbb{C}^{n+1}$. We denote coordinates on $\mathbb{C}^{n+1}$ by $x_{\alpha}$, where $\alpha = 0, 1, ... , n$. The element of $\mathbb{P}^n$ given by the line through the origin and a point $(x_0, x_1, ... , x_n)$ (with at least one $x_{\alpha} \neq 0$) is denoted by $(x_0\!:\!x_1\!:\!...\!:\!x_n) \in \mathbb{P}^n$. The standard open patches on $\mathbb{P}^n$ are $U_{\alpha} = \lbrace(x_0\!:\!x_1\!:\!...\!:\!x_n)\vert x_{\alpha} \neq 0 \rbrace$, where $\alpha = 0, ... , n$, with associated charts $(U_{\alpha}, \phi_{\alpha})$ and maps $\phi_{\alpha} : U_{\alpha} \rightarrow \mathbb{C}^n$ defined by $\phi_{\alpha}(x_0\!:\!x_1\!:\!...\!:\!x_n) = (\xi^{\alpha}_0, \xi^{\alpha}_1, ..., \widehat{\xi^{\alpha}_{\alpha}}, ...,\xi^{\alpha}_n)$. Here, $\xi^{\alpha}_{\mu} = x_{\mu}/x_{\alpha}$ are the coordinates on $\mathbb{C}^n$ and it is understood that $\xi^{\alpha}_{\alpha}=1$ is discarded. For an overlap $U_{\alpha} \cap U_{\beta} \neq \emptyset$, the transition functions $\phi_{\beta \alpha} = \phi_{\beta} \circ \phi^{-1}_{\alpha}:\mathbb{C}^n \rightarrow \mathbb{C}^n$ take the form $\xi^{\alpha}_{\mu} \mapsto \xi^{\beta}_{\mu} = \tfrac{x_{\alpha}}{x_{\beta}}\xi^{\alpha}_{\mu}$. On each patch $U_{\alpha}$, the Fubini-Study K\"ahler potential can be written as \begin{eqnarray} K_{\alpha} = \dfrac{i}{2\pi} \ \textrm{ln}(\kappa_{\alpha})\, , \qquad \kappa_{\alpha} = \sum_{\mu=0}^n \vert\xi^{\alpha}_{\mu}\vert^2 \, . \end{eqnarray} \noindent The associated Fubini-Study K\"ahler form is given by \begin{eqnarray} J = \partial \overline{\partial} K_{\alpha} \, \end{eqnarray} \noindent as usual and it is easy to check that this definition is independent of $\alpha$ on the overlaps and, hence, gives a globally defined form on $\mathbb{P}^n$.
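As a quick numerical sanity check of this definition (a short Python sketch, not part of the derivation): for $n=1$, a direct computation on the patch $U_0$ gives $J = \frac{i}{2\pi}\,\kappa^{-2}\,dz \wedge d\bar{z} = \frac{1}{\pi}(1+r^2)^{-2}\,dx \wedge dy$, where $z = x + iy$ and $r = |z|$, and the integral of this form over $\mathbb{C}$ can be verified to equal $1$, consistent with the normalisation quoted next for general $n$.

```python
# Numerical check (for n = 1) that the Fubini-Study form integrates to 1 over
# the patch U_0 = C.  In polar coordinates,
#   int_C (1/pi) (1 + r^2)^{-2} dx dy = int_0^inf 2 r (1 + r^2)^{-2} dr .
# The substitution r = t/(1-t) maps [0, inf) to [0, 1); we use the midpoint rule.
def fs_volume(n_steps=100000):
    h = 1.0 / n_steps
    total = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * h
        r = t / (1.0 - t)
        dr_dt = 1.0 / (1.0 - t) ** 2      # Jacobian of the substitution
        total += 2.0 * r / (1.0 + r * r) ** 2 * dr_dt * h
    return total

print(fs_volume())   # close to 1.0
```

The exact antiderivative is $-1/(1+r^2)$, so the radial integral is exactly $1$; the quadrature merely confirms this.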
The above K\"ahler form is normalised such that \begin{eqnarray} \int_{\mathbb{P}^n} J^n = 1 \, . \end{eqnarray} \noindent It will frequently be convenient to work on the patch $U_0 = \mathbb{C}^n$ whose coordinates we denote by $z_{\mu} = x_{\mu}/x_0$, where $\mu = 1,...,n$, and we write $\kappa = \kappa_0 = 1 + \sum_{\mu=1}^n \vert z_{\mu} \vert^2$. \section{Line bundles on projective space} \noindent The $k^{\textrm{th}}$ power of the hyperplane bundle on $\mathbb{P}^n$ is denoted by $L = \mathcal{O}_{\mathbb{P}^n} (k)$. For each patch $U_{\alpha}$, a Hermitian bundle metric on $L$ is given by \begin{eqnarray} \label{hermitianbundlemetric} H_{\alpha}=\kappa_{\alpha}^{-k} \, . \end{eqnarray} \noindent On the patch $U_0$, we also write $H = H_0 = \kappa^{-k}$. The associated Chern connection, with $\nabla^{0,1}=\bar{\partial}$ and $\nabla^{1,0}=\partial+A_{\alpha}$, is specified by the gauge field \begin{eqnarray} A_{\alpha}= \partial \textrm{ln} \bar{H}_{\alpha} = - k \partial \textrm{ln} \kappa_{\alpha} = 2 \pi i k \partial K_{\alpha} \, , \end{eqnarray} \noindent whose curvature $F_{\alpha} = d A_{\alpha} = - \partial \overline{\partial} \textrm{ln} \bar{H}_{\alpha}$ is explicitly given by \begin{eqnarray} F_{\alpha} = k \partial \bar{\partial} \textrm{ln} \kappa_{\alpha} = - 2 \pi i k \partial \bar{\partial} K_{\alpha} = - 2 \pi i k J \, . \end{eqnarray} \noindent For the first Chern class of $L=\mathcal{O}_{\mathbb{P}^n}(k)$, this implies \begin{eqnarray} c_1(\mathcal{O}_{\mathbb{P}^n}(k))=\dfrac{i}{2 \pi} F = k J \, , \end{eqnarray} \noindent as expected. \section{Line bundle cohomology} The dimension of line bundle cohomology for a line bundle $\mathcal{K}=\mathcal{O}_{\mathbb{P}^n}(k)$ is described by Bott’s formula \begin{eqnarray} \label{generalizedbott} h^q(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n}(k))=\begin{cases} \dfrac{(n+k)!}{n!k!}\, , & \textrm{ for } q=0, \;\; k \geq 0 \, . \\[2.2mm] \dfrac{(-k-1)!}{n! (-k-n-1)!}\, , & \textrm{ for } q=n, \;\; k\leq -(n+1) \, . \\[3mm] 0\, , & \textrm{ otherwise} \ . \end{cases} \end{eqnarray} \noindent This means that line bundles $\mathcal{O}_{\mathbb{P}^n}(k)$ in the “gap” $-(n+1) < k < 0$ only have trivial cohomologies, while all other line bundles have precisely one non-trivial cohomology. For $k \geq 0$, this non-trivial cohomology is the zeroth cohomology with dimension given in the first row of Eq.~\eqref{generalizedbott}. For $k \leq (-n - 1)$, on the other hand, only the highest, $n^{\textrm{th}}$ cohomology is non-trivial with dimension given in the second row of Eq.~\eqref{generalizedbott}. We would like to represent these cohomologies by line bundle valued $(0, q)$–forms which are harmonic relative to the Fubini-Study metric. Such forms $\nu_{\alpha}$ should, on each patch $U_{\alpha}$, satisfy the equations (see Appendix~\ref{appendixvectorbundles} for details) \begin{eqnarray} \label{2equations} \bar{\partial} \nu_{\alpha} =0 \, , \qquad \partial(\bar{H}_{\alpha} \ast \nu_{\alpha})=0 \, , \end{eqnarray} \noindent where $H_{\alpha}$ is the Hermitian bundle metric \eqref{hermitianbundlemetric}. To solve these equations, we should distinguish the different cases displayed in the Bott formula \eqref{generalizedbott}. \begin{enumerate} \item \underline{$\mathcal{K} =\mathcal{O}_{\mathbb{P}^n}(k)$ with $k \geq 0$:} \noindent In this case, $H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(k))$ is the only non-zero cohomology, so we are looking for sections, that is, harmonic $(0, 0)$–forms. On the patch $U_0$ they are given by \begin{eqnarray} \nu_{(k)} = P_{(k)}(z_1, ..., z_n) \, , \end{eqnarray} \noindent where $P_{(k)}$ are polynomials of degree $k$ in $z_{\mu}$. It is straightforward to check that these have the correct transition functions upon transformation to another patch.
Note that the dimension of the space of degree $k$ polynomials in $n$ variables is indeed given by the first line in the Bott formula \eqref{generalizedbott}, as required. \item \underline{$\mathcal{K} =\mathcal{O}_{\mathbb{P}^n}(k)$ with $-(n + 1) < k < 0$:} \noindent In this case, all cohomologies vanish and there are no harmonic forms to construct. \item \underline{$\mathcal{K} =\mathcal{O}_{\mathbb{P}^n}(k)$ with $k \leq -(n + 1)$:} \noindent In this case, $H^n(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(k))$ is the only non-vanishing cohomology, so we are looking for harmonic $(0, n)$–forms. It is straightforward to verify that, on the patch $U_0$, these can be written as \begin{eqnarray} \nu_{(k)} = \kappa^k P_{(k)}(\bar{z}_1,...,\bar{z}_n) d \bar{z}_1 \wedge ... \wedge d \bar{z}_n \, , \end{eqnarray} \noindent where $P_{(k)}$ are polynomials of degree $-k-n-1$ in the $n$ variables $\overline{z}_{\mu}$. Note that the dimension of this polynomial space equals the value in the second row of the Bott formula \eqref{generalizedbott}, as it should. \end{enumerate} \noindent For uniformity of notation, in the following $P_{(k)}$ for $k \geq 0$ denotes a polynomial of degree $k$ in $z_{\mu}$, while $P_{(k)}$ for $k \leq -n-1$ denotes a polynomial of degree $-k - n - 1$ in $\overline{z}_{\mu}$. \section{Multiplication of harmonic forms} Calculating Yukawa couplings requires performing wedge products of harmonic bundle-valued forms on $\mathbb{P}^n$ (or on products of projective spaces) and we would like to understand in detail how this works. As we have seen, on $\mathbb{P}^n$, we have harmonic bundle-valued $(0, 0)$-forms $\nu_{(k)} = P_{(k)}$, which represent the cohomology $H^{0}(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(k))$ for $k \geq 0$ and harmonic bundle-valued $(0, n)$-forms $\nu_{(k)} = \kappa^k P_{(k)} d \overline{z}_1 \wedge ... \wedge d \overline{z}_n$, which represent the cohomology $H^n(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(k))$ for $k \leq -n-1$.
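The dimension counting behind these two families of representatives can be cross-checked by brute force (a small Python sketch, not part of the text): for $k \geq 0$ the sections correspond to monomials of total degree at most $k$ in the affine coordinates $z_{\mu}$, and for $k \leq -(n+1)$ the harmonic $(0,n)$-forms correspond to monomials of total degree at most $-k-n-1$ in the $\overline{z}_{\mu}$; in both cases the count reproduces the factorials in Eq.~\eqref{generalizedbott}.

```python
from itertools import product
from math import factorial

def monomial_count(n, d):
    """Brute-force count of monomials with exponents summing to at most d in n variables."""
    return sum(1 for exps in product(range(d + 1), repeat=n) if sum(exps) <= d)

def bott(n, k, q):
    """Dimension h^q(P^n, O(k)) as given by the Bott formula quoted above."""
    if q == 0 and k >= 0:
        return factorial(n + k) // (factorial(n) * factorial(k))
    if q == n and k <= -(n + 1):
        return factorial(-k - 1) // (factorial(n) * factorial(-k - n - 1))
    return 0

for n in range(1, 5):
    for k in range(0, 6):                 # sections: polynomials of degree <= k
        assert bott(n, k, 0) == monomial_count(n, k)
    for k in range(-n - 1, -n - 7, -1):   # top forms: polynomials of degree <= -k-n-1
        assert bott(n, k, n) == monomial_count(n, -k - n - 1)
print("Bott formula matches monomial counting")
```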
Performing a wedge product between any two of those forms clearly produces a $\overline{\partial}$–closed form which is a representative of the appropriate cohomology. If this wedge product is between two harmonic $(0, 0)$–forms, the result is clearly again a harmonic $(0, 0)$–form. However, the situation is more complicated for a product of a harmonic $(0, 0)$–form and a harmonic $(0, n)$–form. The result is a $\overline{\partial}$–closed $(0, n)$–form which, however, is generally not harmonic. The problem is then to find the harmonic $(0, n)$–form in the same cohomology class as this product. To discuss this in detail, we start with a harmonic $(0, 0)$–form $p_{(\delta)}$ representing a class in $H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(\delta))$ and a harmonic $(0, n)$–form \begin{eqnarray} \label{nuk-delta} \nu_{(k-\delta)}=\kappa^{k-\delta} P_{(k-\delta)} d \overline{z}_1 \wedge ... \wedge d \overline{z}_n \, \end{eqnarray} \noindent representing a class in $H^n(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(k-\delta))$, where $k \leq -n-1$. The product $p_{(\delta)}\nu_{(k-\delta)}$ is $\overline{\partial}$-closed, but not generally harmonic, and defines a class in $H^n(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(k))$, whose harmonic representative we denote by \begin{eqnarray} \label{nuk} \nu_{(k)}=\kappa^{k} Q_{(k)} d \overline{z}_1 \wedge ... \wedge d \overline{z}_n \, . \end{eqnarray} \noindent This harmonic representative differs from the original product by an exact piece, so we have an equation of the form \begin{eqnarray} \label{mainequation} p_{(\delta)} \nu_{(k-\delta)} + \overline{\partial} s = \nu_{(k)} \, , \end{eqnarray} \noindent where $s$ is an $\mathcal{O}_{\mathbb{P}^n}(k)$-valued $(0, n-1)$-form. It turns out, and will be shown below, that the correct ansatz for $s$ is \begin{equation} s = \kappa^{k-\delta+1}( S^{(1)} d \overline{z}_2 \wedge ... \wedge d \overline{z}_n - S^{(2)} d \overline{z}_1\wedge d\overline{z}_3 \wedge ... \wedge d \overline{z}_n+...+ (-1)^{n-1} S^{(n)} d \overline{z}_1 \wedge ... \wedge d \overline{z}_{n-1} ) , \end{equation} \noindent where the $S^{(i)}$ are multivariate polynomials of degree $\delta-1$ in the $z_{\mu}$ and of degree $-k +\delta -n$ in the $\overline{z}_{\mu}$. Eq.~\eqref{mainequation} can be solved by inserting the various differential forms including the most general polynomials of the appropriate degrees and then matching polynomial coefficients. In this way, given $p_{(\delta)}$ and $\nu_{(k-\delta)}$, both $s$ and $\nu_{(k)}$ can be determined, as we will see below. While this is straightforward in principle, the details are complicated. However, the main result can be stated in a simple way and we would like to do this upfront. It turns out that the polynomial $Q_{(k)}$ which determines $\nu_{(k)}$ is given by \begin{eqnarray} \label{resultappC} \tilde{Q}_{(k)} = c \tilde{p}_{(\delta)} \tilde{P}_{(k-\delta)} \, , \quad \textrm{where} \quad c = \dfrac{(-k-1)!}{(-k+\delta - 1)!} \, . \end{eqnarray} \noindent We recall that the tilde denotes the homogeneous counterparts of the various polynomials, so all polynomials in the above equation depend on the homogeneous coordinates $x_{\mu}$, where $\mu = 0, 1, ... , n$. The polynomial “multiplication” on the RHS of this equation should be carried out by converting the coordinates $x_{\mu}$ in $\tilde{p}_{(\delta)}$ into the partial derivatives $\partial/\partial\overline{x}_{\mu}$ which then, in turn, act on $\tilde{P}_{(k-\delta)}$ which depends on $\overline{x}_{\mu}$. Note that this leads to the correct degree required for the polynomial $\tilde{Q}_{(k)}$. This remarkably simple solution to Eq.~\eqref{mainequation} is the key to converting the calculation of Yukawa couplings into an “algebraic” calculation.
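To illustrate how the prescription of Eq.~\eqref{resultappC} is applied in practice, here is a minimal symbolic sketch (in Python/sympy; the choice $n=1$, $\delta=1$, $k=-3$, $\tilde{p}_{(\delta)} = x_1$, $\tilde{P}_{(k-\delta)} = \overline{x}_1^2$ is purely illustrative): the prefactor is $c = 2!/3! = 1/3$, and converting $x_1$ into $\partial/\partial\overline{x}_1$ gives $\tilde{Q}_{(k)} = \tfrac{2}{3}\overline{x}_1$.

```python
from math import factorial
import sympy as sp

# Illustrative application of the multiplication rule (a sketch; the parameter
# values and polynomials are hypothetical choices).  On P^1 (n = 1), pick
# delta = 1 and k = -3, so that
#   deg ptilde = delta = 1,  deg Ptilde = -k+delta-n-1 = 2,  deg Qtilde = -k-n-1 = 1.
x1b = sp.Symbol('xbar1')

n, delta, k = 1, 1, -3
c = sp.Rational(factorial(-k - 1), factorial(-k + delta - 1))   # c = 2!/3! = 1/3

P_tilde = x1b**2          # representative polynomial in the barred coordinates
# ptilde = x1 is converted into the derivative d/dxbar1 acting on Ptilde:
Q_tilde = c * sp.diff(P_tilde, x1b)

print(Q_tilde)            # prints 2*xbar1/3
```

In the affine conventions of this appendix, this corresponds to the harmonic representative $\nu_{(-3)} = \tfrac{2}{3}\,\kappa^{-3}\,\bar{z}\,d\bar{z}$ on the patch $U_0$.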
From this result, the wedge products of harmonic forms which appear in the Yukawa integral can simply be converted into polynomial multiplication, with the appropriate conversion of coordinates into partial derivatives, as discussed. Although $s$ is determined by Eq.~\eqref{mainequation}, we are unfortunately not aware of a formula for $s$ as simple as Eq.~\eqref{resultappC}. In order to prove Eq.~\eqref{resultappC}, we first note the derivative \begin{align} \overline{\partial} s = & \, \kappa^{k-\delta+1} \left( \partial_{\overline{z}_1} S^{(1)}+\partial_{\overline{z}_2} S^{(2)}+...+ \partial_{\overline{z}_n} S^{(n)}\right) d \overline{z}_1 \wedge ... \wedge d \overline{z}_{n} \ + \notag \\& \, (k-\delta+1)\kappa^{k-\delta}\left(z_1 S^{(1)}+...+z_n S^{(n)}\right)d \overline{z}_1 \wedge ... \wedge d \overline{z}_{n} \, . \end{align} \noindent Inserting this together with Eqs.~\eqref{nuk-delta} and \eqref{nuk} into Eq.~\eqref{mainequation} leads to \begin{eqnarray} \label{maineq} p P+ \kappa \sum^n_{i=1} \partial_{\overline{z}_i} S^{(i)} - (-k+\delta-1) \sum^n_{i=1} z_i S^{(i)} = \kappa^{\delta} Q \, . \end{eqnarray} \noindent Next, we should write out each of the polynomials explicitly. For each $S^{(i)}$ we have \begin{eqnarray} \label{expansion1} S^{(i)}= \sum_{\lbrace 0 \leq i_1+...+i_n\leq \delta -1\rbrace } \sum_{ \lbrace 0 \leq j_1+...+ j_n \leq -k+\delta-n \rbrace } c^{(i)}_{i_1...i_n; j_1 ... j_n}z_1^{i_1} ... z_n^{i_n} \overline{z}{}^{j_1}_1...\overline{z}{}^{j_n}_n \, , \end{eqnarray} \noindent with coefficients $c^{(i)}_{i_1...i_n; j_1 ... j_n}$ such that $(i_1,...,i_n; j_1, ...,j_n)$ represents any index combination satisfying $0 \leq i_1+...+i_n\leq \delta -1$ and $0 \leq j_1+...+ j_n \leq -k+\delta-n$. Similarly, we can expand the other polynomials \begin{align} \label{expansion2} p_{(\delta)} \; & \, = \sum_{0\leq i_1+...+i_n \leq \delta} a_{i_1 ... 
i_n} z^{i_1}_1...z^{i_n}_n \, ,\\ \label{expansion3} P_{(k-\delta)} \; & \, =\sum_{0\leq j_1+...+j_n\leq -k+\delta-n-1} b_{j_1...j_n} \overline{z}{}^{j_1}_1...\overline{z}{}^{j_n}_n \, , \\ \label{expansion4} Q_{(k)} \; &= \, \sum_{0\leq j_1+...+j_n\leq -k-n-1} q_{j_1...j_n} \overline{z}{}^{j_1}_1...\overline{z}{}^{j_n}_n \, . \end{align} \noindent A useful polynomial expansion of $\kappa = 1+z_1 \overline{z}_1+...+z_n \overline{z}_n$ is given by \begin{eqnarray} \label{expansion5} \kappa^{\delta}=\sum_{0\leq i_1+...+i_n\leq \delta } \dfrac{\delta !}{i_1 ! i_2! ... i_n! (\delta-i_1-...-i_n)!} z_1^{i_1}...z_n^{i_n} \overline{z}{}^{i_1}_1...\overline{z}{}^{i_n}_n \, . \end{eqnarray} \noindent Now substituting the polynomials from Eqs.~\eqref{expansion1}-\eqref{expansion5} into Eq.~\eqref{maineq}, one can derive the following identity, by extracting the coefficient of the $z_1^{i_1}...z_n^{i_n} \overline{z}{}^{i_1+j_1}_1...\overline{z}{}^{i_n+j_n}_n$ term \begin{align} \label{bigbadvoodoo} \dfrac{\delta!}{i_1!...i_n!(\delta-i_1-...-i_n)!}q_{j_1...j_n}= & \;\, a_{i_1...i_n}b_{l_1...l_n}+\sum_{s=1}^{n} (l_s+1)c^{(s)}_{i_1...i_n;l_1...l_s+1...l_n}+ \notag \\ & \;\, \sum_{s=1}^n (k-\delta+1+l_s)c^{(s)}_{i_1...i_s-1,...,i_n;l_1...l_n}+ \notag \\ & \;\, \sum_{s=1}^n \sum_{\substack{r=1 \\ r \neq s}}^n (l_s+1) c^{(s)}_{i_1...i_r-1...i_n;l_1...l_r-1...l_s+1...l_n} , \end{align} \noindent where we have denoted $l_s=i_s+j_s$, for all $s=1,...,n$. Note, however, that Eq.~\eqref{bigbadvoodoo} is true only if all $i_s$ are strictly positive and strictly smaller than $\delta-\sum^n_{r \neq s} i_r$. For $i_s = 0$ the $c^{(s)}_{i_1,...,i_s-1,...,i_n;l_1...l_n}$ term is not present, because the polynomial expansion of $S^{(s)}$ contains only non-negative exponents. For $i_s = \delta-\sum^n_{r \neq s} i_r$, the term $c^{(s)}_{i_1...i_n;l_1...,l_s+1,...,l_n}$ is missing, because it does not respect the summation rule of Eq.~\eqref{expansion1}.
However, we can conventionally define all these unwanted $c^{(s)}$ coefficients to be zero, so that Eq.~\eqref{bigbadvoodoo} is valid for any $i_s \geq 0$. In order to solve the above set of equations for $q_{j_1...j_n}$, it is useful to define the quantities \begin{equation} \label{defofbeta} \beta_{i_1 ... i_n} = \dfrac{(-k+\delta-n-1-(i_1+...+i_n)-(j_1+...+j_n))!}{(-k-n-1-(j_1+...+j_n))!}\dfrac{(i_1+j_1)!}{j_1!}...\dfrac{(i_n+j_n)!}{j_n!} \, , \end{equation} \noindent which satisfy the following combinatorial identity \begin{eqnarray} \label{tobeproven} \sum_{0\leq i_1+...+i_n\leq \delta}\beta_{i_1 ... i_n} \dfrac{\delta!}{i_1!...i_n! (\delta-i_1-...-i_n)!}=\dfrac{(-k+\delta -1 )!}{(-k-1)!} \, . \end{eqnarray} \noindent A proof of this identity can be found at the end of this appendix. Next, we multiply both sides of Eq.~\eqref{bigbadvoodoo} by $\beta_{i_1 ... i_n} $ and then sum over all indices $\lbrace i_1, ..., i_n \rbrace$ with $0 \leq i_1+...+i_n \leq \delta$. This trick removes all coefficients $c^{(s)}$ from our equation, as a result of the identity \begin{multline} \label{relief} \sum_{0 \leq i_1+...+i_n \leq \delta} \beta_{i_1 ... i_n} \biggl(\sum_{s=1}^{n} (l_s+1)c^{(s)}_{i_1...i_n;l_1...l_s+1...l_n}+\sum_{s=1}^n (k-\delta+1+l_s)c^{(s)}_{i_1...i_s-1...i_n;l_1...l_n}\\+ \sum_{s=1}^n \sum_{\substack{r=1 \\ r \neq s}}^n (l_s+1) c^{(s)}_{i_1...i_r-1...i_n;l_1...l_r-1...l_s+1...l_n}\biggr)=0 \, . \end{multline} \noindent To see this, consider the weight $w$ of an arbitrary coefficient $c^{(s)}_{i_1...i_n; l_1...l_{s}+1...l_n}$ in the above sum, defined as \begin{eqnarray} w = (k-\delta +2+l_s) \beta_{i_1...i_s+1...i_n} + (l_s+1) \beta_{i_1...i_n} + (l_s+1)\sum_{\substack{r=1 \\ r \neq s}}^{n} \beta_{i_1...i_r+1...i_n} \, . \end{eqnarray} \noindent Starting from the definition of $\beta$ in Eq.~\eqref{defofbeta}, we notice that \begin{eqnarray} (l_s+1) \beta_{i_1...i_r+1...i_n} = (l_r+1)\beta_{i_1...i_s+1...i_n}, \quad \forall r \neq s \, .
\end{eqnarray} \noindent Therefore, the weight of $c^{(s)}_{i_1...i_n; l_1...l_{s}+1...l_n}$ becomes \begin{eqnarray} w=(k-\delta+\sum_{r=1}^n l_r + n+1) \beta_{i_1...i_s+1...i_n} + (l_s+1) \beta_{i_1...i_n} , \end{eqnarray} \noindent which vanishes. Coming back to Eq.~\eqref{bigbadvoodoo}, we multiply by $\beta_{i_1 ... i_n}$ and sum over all $\lbrace i_1,...,i_n\rbrace$ with $0 \leq i_1+...+i_n \leq \delta$. This removes the $c^{(s)}$ and leads to an equation for the coefficients of $Q$, namely \begin{eqnarray} \label{proofderiv} q_{j_1...j_n} = \dfrac{(-k-1)!}{(-k+\delta -1 )!} \sum_{0 \leq i_1+...+i_n\leq \delta} \beta_{i_1 ... i_n} a_{i_1...i_n}b_{l_1,...,l_n} \, . \end{eqnarray} \noindent We should now compare this result for $Q$, obtained by solving Eq.~\eqref{mainequation}, with the proposed solution \eqref{resultappC}. To this end, we convert all relevant polynomials into their homogeneous counterparts and also convert the coordinates in $\tilde{p}_{(\delta)}$ into derivatives. This leads to \begin{align} \tilde{p}_{(\delta)} \; &= \, \sum_{i_0+...+i_n = \delta} a_{i_1...i_n}\left( \dfrac{\partial}{\partial \overline{x}_0}\right)^{i_0}\left( \dfrac{\partial}{\partial \overline{x}_1}\right)^{i_1}... \ \left(\dfrac{\partial}{\partial \overline{x}_n}\right)^{i_n} \, ,\\ \tilde{P}_{(k-\delta)} \;&=\, \sum_{j_0+ ...+j_n = -k+\delta-n-1} b_{j_1...j_n} \overline{x}_0^{j_0} \overline{x}_1^{j_1}... \ \overline{x}_n^{j_n} \, , \\ \tilde{Q}_{(k)} \; &=\, \sum_{ j_0 + ...+j_n= -k-n-1} q_{j_1...j_n} \overline{x}_0^{j_0} \overline{x}_1^{j_1}...\ \overline{x}_n^{j_n} \, . \end{align} \noindent Inserting this into the RHS of Eq.~\eqref{resultappC} gives \begin{equation} \resizebox{1.01\hsize}{!}{$ \tilde{p}_{(\delta)} \tilde{P}_{(k-\delta)} = \!\! \mathlarger{\mathlarger{\sum}}_{ \lbrace i_0+...+i_n = \delta \rbrace } \, \mathlarger{\mathlarger{\sum}}_{\lbrace j_0 + ...+j_n= -k-n-1 \rbrace} \underbrace{\dfrac{(i_0+j_0)!}{j_0!}...\dfrac{(i_n+j_n)!}{j_n!}}_{\beta_{i_1 ... 
i_n}} a_{i_1...i_n} b_{(i_1+j_1)...(i_n+j_n)} \overline{x}_0^{j_0} \overline{x}_1^{j_1}... \overline{x}_n^{j_n} ,$} \end{equation} \noindent and inserting the result~\eqref{proofderiv} for the coefficients of $Q$ proves Eq.~\eqref{resultappC}. \vspace{4mm} \noindent \underline{Proof of Eq.~\eqref{tobeproven}}: We start from the $n = 1$ equation \begin{eqnarray} \label{n1case} \sum_{i=0}^{\delta} \dfrac{(-k+\delta -2-i-j)!(i+j)!}{(-k-j-2)!j!} \dfrac{\delta !}{i! (\delta -i)!} = \dfrac{(-k+\delta -1)!}{(-k-1)!} \, , \end{eqnarray} \noindent which can be proven by explicit calculation. It is then useful to write the sum for $n>1$ as \begin{equation} \!\!\! \sum_{i_1=0}^{\delta} \sum_{i_2=0}^{\delta-i_1}...\!\!\sum_{i_n=0}^{\delta-i_1-...-i_{n-1}} \dfrac{(-k+\delta-n-1-\sum_{s=1}^n l_s)!}{(-k-n-1-\sum_{s=1}^n j_s)!} \dfrac{l_1 !}{j_1 !}... \dfrac{l_n !}{j_n !} \dfrac{\delta!}{i_1!...i_n!(\delta-i_1-...-i_n)!} \, , \end{equation} \noindent and to perform the summation step by step, starting from $i_n$ and ending with $i_1$, while using Eq.~\eqref{n1case} every time. For $i_n$, we use Eq.~\eqref{n1case} with $\delta_n=\delta - \sum_{s=1}^{n-1} i_s$ instead of $\delta$ and $k_n=k+n-1+\sum_{s=1}^{n-1} j_s$ instead of $k$, which leads to \begin{multline} \sum^{\delta-i_1-...-i_{n-1}}_{i_n=0} \dfrac{(-k+\delta-n-1-\sum_{s=1}^n l_s)!}{(-k-n-1-\sum_{s=1}^n j_s)!} \dfrac{l_n!}{j_n!} \dfrac{\delta !}{i_n! (\delta-i_1-...-i_n)!}=\\=\dfrac{(-k+\delta-n-\sum_{s=1}^{n-1} l_s)!}{(-k-n-\sum_{s=1}^{n-1} j_s)!} \dfrac{\delta!}{(\delta-\sum_{s=1}^{n-1}i_s)!}=\dfrac{(-k_{n-1}+\delta_{n-1}-2-l_{n-1})!}{(-k_{n-1}-2-j_{n-1})!}\dfrac{\delta!}{(\delta_{n-1}-i_{n-1})!} \, . \end{multline} \noindent After performing all the sums, we obtain the required result, \eqref{tobeproven}. 
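Both the combinatorial identity~\eqref{tobeproven} and the final result~\eqref{resultappC} can be spot-checked symbolically. The following Python sketch (using sympy; all parameter values and the choice $p_{(\delta)} = z$, $P_{(k-\delta)} = \bar{z}^2$ are purely illustrative) verifies the identity for a few small $(n, \delta, k)$ and then solves Eq.~\eqref{maineq} directly for $n=1$, $\delta=1$, $k=-3$, reproducing $Q_{(k)} = \tfrac{2}{3}\bar{z}$ in agreement with Eq.~\eqref{resultappC}, where $c = 2!/3! = 1/3$.

```python
from fractions import Fraction
from itertools import product
from math import factorial

import sympy as sp

# --- Spot check 1: the combinatorial identity for the beta-coefficients ---------
# For fixed (j_1, ..., j_n), the sum over 0 <= i_1+...+i_n <= delta of
#   beta_{i_1...i_n} * delta!/(i_1!...i_n!(delta - i_1 - ... - i_n)!)
# should equal (-k+delta-1)!/(-k-1)!.
def beta(iv, jv, k, delta, n):
    val = Fraction(factorial(-k + delta - n - 1 - sum(iv) - sum(jv)),
                   factorial(-k - n - 1 - sum(jv)))
    for a, b in zip(iv, jv):
        val *= Fraction(factorial(a + b), factorial(b))
    return val

def identity_holds(n, delta, k, jv):
    lhs = Fraction(0)
    for iv in product(range(delta + 1), repeat=n):
        if sum(iv) > delta:
            continue
        mult = Fraction(factorial(delta), factorial(delta - sum(iv)))
        for a in iv:
            mult /= factorial(a)
        lhs += beta(iv, jv, k, delta, n) * mult
    return lhs == Fraction(factorial(-k + delta - 1), factorial(-k - 1))

assert identity_holds(1, 1, -3, (0,))
assert identity_holds(1, 2, -4, (1,))
assert identity_holds(2, 2, -6, (1, 0))
assert identity_holds(3, 1, -5, (0, 0, 0))

# --- Spot check 2: solve the defining equation directly for n = 1 ---------------
# Illustrative choice delta = 1, k = -3, p = z, P_(k-delta) = zbar^2.  Then S has
# degree -k+delta-n = 3 in zbar, Q has degree -k-n-1 = 1, and the equation reads
#   p*P + kappa*S' - 3*z*S = kappa*Q .
z, zb = sp.symbols('z zbar')
s0, s1, s2, s3, q0, q1 = sp.symbols('s0 s1 s2 s3 q0 q1')

kappa = 1 + z * zb
S = s0 + s1 * zb + s2 * zb**2 + s3 * zb**3
Q = q0 + q1 * zb

eq = sp.expand(z * zb**2 + kappa * sp.diff(S, zb) - 3 * z * S - kappa * Q)
# Each coefficient of z^a zbar^b must vanish separately:
sol = sp.solve(sp.Poly(eq, z, zb).coeffs(), [s0, s1, s2, s3, q0, q1], dict=True)[0]

# Expect Q = (2/3)*zbar, matching c * ptilde(d/dxbar) Ptilde with c = 2!/3! = 1/3.
assert sol[q0] == 0 and sol[q1] == sp.Rational(2, 3)
print("both spot checks passed")
```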
\chapter{The boundary integral} \label{appendixboundary} When deriving the Yukawa coupling in the main text, in particular converting Eq.~\eqref{Yukamb} into Eq.~\eqref{Yukamb1} in Chapter~\ref{tetraquadricchapter} and Eq.~\eqref{3.8} into Eq.~\eqref{3.9} in Chapter~\ref{chaptern>1codimension}, we have neglected the boundary term which arises from the partial integration. In this appendix, we show that this boundary term does indeed vanish for the cases discussed. Before we get to Yukawa couplings, it might be useful to note that this boundary term can indeed be important for certain integrals of interest. Consider the tetra-quadric in the ambient space ${\cal A}=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$, with the four ambient space K\"ahler forms $\hat{J}_i$, where $i=1,2,3,4$, normalised as $\int_{\mathbb{P}^1}\hat{J}_i=1$ and their restrictions $J_i=\hat{J}_i|_X$ to the tetra-quadric. Objects of interest are the triple-intersection numbers of the tetra-quadric, for example \begin{equation} d_{123}=\int_XJ_1\wedge J_2\wedge J_3\; . \label{d123} \end{equation} It is well-known~\cite{Hubsch:1992nu} how to compute these intersection numbers by introducing the two-form $\mu=2\sum_{i=1}^4\hat{J}_i$ and re-writing the above expression as an ambient space integral. This leads to \begin{equation} d_{123}=\int_{{\cal A}}\hat{J}_1\wedge\hat{J}_2\wedge\hat{J}_3\wedge\mu=2\; . \label{d123res} \end{equation} This method is applicable since the ambient space version $\hat{J}_1\wedge\hat{J}_2\wedge\hat{J}_3$ of the integrand is a closed form. However, alternatively, we may proceed to evaluate the integral~\eqref{d123} by inserting a $\delta$-function, as we did for Eq.~\eqref{Yukamb} and, subsequently, using the current identity~\eqref{10.5}.
This leads to \begin{equation} d_{123}=\frac{1}{2\pi i}\int_{\cal A}\hat{J}_1\wedge\hat{J}_2\wedge\hat{J}_3\wedge\bar{\partial}\left(\frac{1}{p}\right)\wedge dp =\frac{1}{2\pi i}\int_{\cal A}\hat{J}_1\wedge\hat{J}_2\wedge\hat{J}_3\wedge\left(\bar{\partial}_{\bar{z}_4}\left(\frac{1}{p}\right)d\bar{z}_4\right)\wedge dp \, . \end{equation} Since the K\"ahler forms $\hat{J}_i$ are $\bar{\partial}$-closed, integration by parts and neglecting the boundary term leads to $d_{123}=0$, in contradiction with~\eqref{d123res}. Hence, in this case, the result comes entirely from the boundary term \begin{equation} d_{123}=\frac{1}{2\pi i}\int_{\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times \gamma_4}\hat{J}_1\wedge\hat{J}_2\wedge\hat{J}_3\wedge \frac{dp}{p}\;, \end{equation} where $\gamma_4$ is a contour with $|z_4|\rightarrow\infty$. In this limit, $p\sim z_4^2$ and $p^{-1}dp\sim 2z_4^{-1}dz_4$, which leads to the correct answer $d_{123}=2$. For Yukawa integrals, the integrand is typically not a closed form, so the $\delta$-function current should be used to re-write them as ambient space integrals. As the above example indicates, we should be careful about the boundary term. \section{The co-dimension one case} We start with the ambient space \begin{equation} {\cal A}= {\mathbb P}^{n_1} \times {\mathbb P}^{n_2} \times \dots \times {\mathbb P}^{n_m}\,, \quad \sum_{i=1}^m n_i = 4 \label{A0} \end{equation} and a Calabi-Yau hypersurface $X \subset \mathcal{A}$ defined as the zero locus of a polynomial $p$ of multi-degree $(n_1 + 1, ... , n_m + 1)$.
The relevant integral for the Yukawa couplings, before the integration by parts, reads\footnote{In this Appendix we ignore various numeric prefactors since they do not matter for our discussion.} \begin{equation} \lambda (\nu_1, \nu_2, \nu_3) = \int_{X} \Omega \wedge \nu_1 \wedge \nu_2 \wedge \nu_3 \sim \int_{{\mathbb C}^4} d^4 z \wedge \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3 \wedge {\bar \partial} \Big(\frac{1}{p}\Big)\, , \label{A1} \end{equation} \noindent where $z_1, ... , z_4$ are affine coordinates on a patch $\mathbb{C}^4$ of $\mathcal{A}$. Let us introduce the $(0, 3)$-form \begin{equation} \hat{\alpha } = \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3 \in \Omega^3 ({\cal A}, {\cal O}_{{\cal A}})\, , \label{A2} \end{equation} \noindent which takes values in the trivial bundle. Further, we define the form $\hat{\beta}$ by \begin{equation} \bar \partial \hat{\alpha } = p\hat{\beta}\,. \label{A3} \end{equation} \noindent Note that $\hat{\beta}\in H^4(\mathcal{A}, \mathcal{O}_{\mathcal{A}}(-n_1 - 1, ... , -n_m - 1)) \cong \mathbb{C}$ and, hence, that $\hat{\beta}$ is uniquely fixed up to an overall constant and an exact form, both of which are irrelevant for the present purposes. A harmonic representative for $\hat{\beta}$ can be written down following the rules in Appendix~\ref{appendixPn} (see also Section~\ref{maps1} for the case $\mathcal{A} = \mathbb{P}^1 \times \mathbb{P}^1\times \mathbb{P}^1 \times \mathbb{P}^1$) and this leads to \begin{equation} \hat{\beta} \sim \frac{d^4 {\bar z}}{ \kappa_1^{n_1+1} \dots \kappa_m^{n_m+1}} \, . \label{A5} \end{equation} In order to understand the boundary integral, we need to study the limit when the modulus of one of the coordinates, say $z_1$, goes to infinity. Let us assume that $z_1$ is an affine coordinate of the first projective factor $\mathbb{P}^{n_1}$. 
Then, for large $\vert z_1 \vert$, we have \begin{equation} \hat{\beta} \sim \frac{d^4 \bar z}{z_1^{n_1+1} {\bar z}_1^{n_1+1}} \,, \quad p\hat{\beta} \sim \frac{d^4 \bar z}{ {\bar z}_1^{n_1+1}}\,. \label{A6} \end{equation} Let us solve Eq.~\eqref{A3} for $\hat{\alpha }$ in this limit. The general solution for $\hat{\alpha }$ is given by $\hat{\alpha }=\hat{\alpha }_0+ \hat{\alpha }_1$, where $\hat{\alpha }_0$ is the general solution to the homogeneous equation $\bar{\partial}\hat{\alpha} = 0$ and $\hat{\alpha }_1$ is a partial solution to the inhomogeneous equation~\eqref{A3}. For a four-dimensional ambient space of the form~\eqref{A0}, we have $H^3 ({\cal A}, {\cal O}_{{\cal A}})=0$, and, hence, $\hat{\alpha}_0$ is exact and, therefore, irrelevant for the integral. From Eq.~\eqref{A6} we conclude that \begin{equation} \hat{\alpha }=\hat{\alpha }_1 = \frac{1}{{\bar z}_1^{n_1}} \hat{\alpha }'\,, \label{A7} \end{equation} \noindent where $\hat{\alpha }'$ is a $(0, 3)$-form independent of $z_1, {\bar z}_1$ and $d {\bar z}_1$. Note that $\hat{\alpha } \to 0$ for large $|z_1|$. From Eq.~\eqref{A1} we find that the boundary term in the limit $\vert z_1\vert \rightarrow \infty$ behaves as \begin{equation} \int_{{\mathbb C}^3 \times \gamma_1} d^4 z \wedge \frac{\hat{\alpha }}{p}\Big|_{|z_1| \to \infty}\,, \label{A8} \end{equation} where $\gamma_1$ is the circle at infinity in the complex plane parameterised by $z_1$. This contour integral is zero since, generically, $p \sim z_1^{n_1+1}$ and $\hat{\alpha } \to 0$ for large $|z_1|$. \section{The co-dimension two case} We will now repeat this discussion for a co-dimension two CICY with ambient space \begin{equation} {\cal A}= {\mathbb P}^{n_1} \times {\mathbb P}^{n_2} \times \dots \times {\mathbb P}^{n_m}\,, \quad \sum_{i=1}^m n_i = 5 \, . 
\label{A9} \end{equation} \noindent The CICY $X \subset \mathcal{A}$ is defined as the common zero locus of a pair of polynomials $p = (p_1, p_2)$ with multidegrees $\mathbf{q}_1 = (q_1^1, ..., q_1^m)$ and $\mathbf{q}_2 = (q_2^1, ..., q_2^m)$, satisfying the Calabi-Yau condition $q_1^i+q_2^i = n_i+1$, for all $i=1,...,m$. Introducing affine coordinates $(z_1, ... , z_5)$ on a patch in $\mathcal{A}$, the formula for the Yukawa coupling can be written as \begin{equation} \lambda \sim \int_{{\mathbb C}^5} d^5 z \wedge \hat{\alpha } \wedge \bar \partial \Big( \frac{1}{p_1}\Big) \wedge \bar \partial \Big( \frac{1}{p_2}\Big)\,, \label{A10} \end{equation} where $\hat{\alpha }$ is given by~\eqref{A2}. Using the results from Section~\ref{derivation}, we obtain \begin{eqnarray} && \bar \partial \hat{\alpha } = p \hat{\beta} = p_1 \hat{\beta}^{1} + p_2 \hat{\beta}^{2}\,, \nonumber \\ && \bar \partial \hat{\beta}^{1}= - p_2 \hat{\eta}\,, \quad \bar \partial \hat{\beta}^{2}= p_1 \hat{\eta}\, , \label{A13} \\ && \bar \partial \hat{\eta} =0\,. \nonumber \end{eqnarray} From Eqs.~\eqref{A2}, \eqref{A13} it follows that \begin{equation} \hat{\beta}^{a} \in \Omega^4 ({\cal A}, {\cal O}_{{\cal A}} (-{\bf q}_{a}))\,, \quad \hat{\eta} \in H^5 ({{\cal A}}, {\cal O}_{{\cal A}} (-{\bf q}_{1}-{\bf q}_{2} )) = H^5 ({{\cal A}}, \Lambda^2 {\cal N}^*)\cong \mathbb{C}\,. \label{A14} \end{equation} \noindent This means that the form $\hat{\eta}$ is unique up to a multiplicative coefficient and an exact form, both irrelevant in the present context. As in the previous subsection, we can use the results from Appendix~\ref{appendixPn} to write down the harmonic representative \begin{equation} \hat{\eta} \sim \frac{d^5 \bar z}{\kappa_1^{n_1+1} \dots \kappa_m^{n_m+1}}\,. \label{A15} \end{equation} \noindent To compute the boundary integrals we need to study the behaviour in the limit when the modulus of one of the affine coordinates, say $z_1$, goes to infinity. 
Let us assume that $z_1$ is an affine coordinate of the first projective factor $\mathbb{P}^{n_1}$. In the large $|z_1|$ limit we obtain \begin{eqnarray} \hat{\eta} \sim \frac{d^5 \bar z}{z_1^{n_1+1} {\bar z}{}^{n_1+1}_1}\,, \qquad p_1 \hat{\eta} \sim \frac{d^5 \bar z}{z^{q_2^{1}}_1 {\bar z}^{n_1+1}_1}\,, \qquad p_2 \hat{\eta} \sim \frac{d^5 \bar z}{z^{q_1^{1}}_1 {\bar z}{}^{n_1+1}_1}\,. \label{A16} \end{eqnarray} Using Eq.~\eqref{A13}, we can now obtain the behaviour of $\hat{\beta}^{a}$ and $\hat{\alpha }$ in the limit of large $|z_1|$. Their general solution is given by \begin{equation} \hat{\beta}^{a}= \hat{\beta}^{a}_0+ \hat{\beta}^{a}_1 \,, \qquad \hat{\alpha }= \hat{\alpha }_0+ \hat{\alpha }_1\,, \label{A17} \end{equation} where $\hat{\beta}^{a}_0$, $\hat{\alpha }_0$ are the general solutions to the corresponding homogeneous equations and $\hat{\beta}^{a}_1$, $\hat{\alpha }_1$ are partial solutions to the inhomogeneous equations. For a 5-dimensional ambient space of the form~\eqref{A9}, we have $H^{3}({\cal A}, {\cal O}_{{\cal A}})=0$ and $H^{4}({\cal A}, {\cal O}_{{\cal A}} (-{\bf q}_{a}))=0$, so that $\hat{\alpha }_0$ and $\hat{\beta}^{a}_0$ are both exact and can be discarded. Solving for $\hat{\beta}^{a}_1$ and $\hat{\alpha }_1$ yields \begin{eqnarray} && \hat{\beta}^{1}= \hat{\beta}^{1}_1 \sim \frac{d {\bar z}_2 \wedge \dots \wedge d {\bar z}_5}{z_1^{q_1^{1}} {\bar z}_1^{n_1}}\,, \quad \hat{\beta}^{2}= \hat{\beta}^{2}_1 \sim \frac{d {\bar z}_2 \wedge \dots \wedge d {\bar z}_5}{z_1^{q_2^{1}} {\bar z}_1^{n_1}}\,, \nonumber \\ && \hat{\alpha }=\hat{\alpha }_1 =\frac{1}{{\bar z}^{n_1}_1} \hat{\alpha }'\,, \label{A19} \end{eqnarray} where $\hat{\alpha }'$ is a $(0, 3)$-form independent of $z_1, {\bar z}_1$ and $d {\bar z}_1$. Now we have all the ingredients to integrate by parts in~\eqref{A10}. 
Doing this once leads to \begin{equation} \lambda \sim \int_{{\mathbb C}^5} d^5 z \wedge \hat{\beta}^{1} \wedge {\bar \partial} \Big(\frac{1}{p_2}\Big) + {\rm boundary} \ {\rm terms}\,. \label{A20} \end{equation} We focus on the boundary terms in this expression for $|z_1|\to \infty$ and first note that \begin{equation} \frac{\partial}{\partial {\bar z}_1} \Big(\frac{1}{p_1}\Big) d {\bar z}_1 \wedge {\bar \partial} \Big(\frac{1}{p_2}\Big) = \frac{\partial}{\partial {\bar z}_1} \Big(\frac{1}{p_1}\Big) d {\bar z}_1 \wedge {\bar \partial}_{\hat{1}} \Big(\frac{1}{p_2}\Big)\,, \label{A21} \end{equation} where ${\bar \partial}_{\hat{1}} $ is the Dolbeault operator with the derivative over ${\bar z}_1$ omitted. Then the boundary term for $|z_1|\to \infty$ turns into \begin{equation} \int_{{\mathbb C}^4 \times \gamma_1} d^5 z \wedge \frac{\hat{\alpha }}{p_1} \wedge {\bar \partial}_{\hat{1}} \Big(\frac{1}{p_2}\Big)\Big|_{|z_1|\to \infty}\,. \label{A22} \end{equation} In the limit of large $|z_1|$, we generically have $p_1 \sim z_1^{q_1^{1}} p_1^{\prime}$, $p_2 \sim z_1^{q_2^{1}} p_2^{\prime}$, where $p_1^{\prime}$, $p_2^{\prime}$ are holomorphic polynomials independent of $z_1$. Inserting this into Eq.~\eqref{A22} gives \begin{equation} \int_{{\mathbb C}^4 \times \gamma_1} d^5 z \wedge \frac{\hat{\alpha }}{z^{n_1+1}_1} \frac{1}{p_1^{\prime}} \wedge {\bar \partial}_{\hat{1}} \Big(\frac{1}{p_2^{\prime}}\Big)\Big|_{|z_1|\to \infty}\,. \label{A23} \end{equation} This integral is indeed zero, because $n_1 >0$ and $\hat{\alpha } \to 0$ at infinity. Finally, we need to perform the second integration by parts in the first term in Eq.~\eqref{A20}. 
As before, we focus on the boundary term for $|z_1|\to \infty$, which is given by \begin{equation} \int_{{\mathbb C}^4 \times \gamma_1} d^5 z \wedge \frac{\hat{\beta}^{1}}{p_2} \Big|_{|z_1|\to \infty} \sim \int_{{\mathbb C}^4 \times \gamma_1} d^5 z \wedge \frac{d {\bar z}_2 \wedge \dots \wedge d {\bar z}_5}{z_1^{n_1+1} {\bar z}_1^{n_1} p_2^{\prime}} \Big|_{|z_1|\to \infty} =0\,. \label{A24} \end{equation} \chapter{Yukawa Couplings in the Heterotic Tetra-quadric Model} \label{tetraquadricchapter} In the last sections of Chapter~\ref{odyssey}, we checked that heterotic Calabi-Yau compactifications can indeed produce terms which are identifiable with the standard $N = 1$ supergravity action in four dimensions \cite{wessandbagger}. This was demonstrated mainly by Eqs.~\eqref{0thorderbosonicaction}, \eqref{reducedyangmillsaction} and \eqref{reducedfermionaction}, with the proviso that moduli fields describing the size and shape of extra dimensions have to be stabilised. The natural step forward would be to search for realistic models with the correct spectrum of particles and physical Yukawa couplings matching experimental observation. Over the past two decades, string model building based on heterotic Calabi-Yau compactifications has seen considerable progress~\cite{Braun:2005ux, Braun:2005bw, Braun:2005nv, Bouchard:2005ag, Blumenhagen:2006ux, Blumenhagen:2006wj, Anderson:2007nc, Anderson:2008uw, Anderson:2009mh, Braun:2009qy, Braun:2011ni} and large classes of models with the MSSM spectrum can now be constructed using algorithmic approaches~\cite{Anderson:2011ns,Anderson:2012yf,Anderson:2013xka}. The other problem, however, involving the calculation of Yukawa couplings for such models, has remained largely unaddressed, both in terms of general techniques and actual specific results.
In this chapter, we will attempt to make some progress in this direction and develop new methods, mainly based on differential geometry, to calculate holomorphic Yukawa couplings for heterotic line bundle models. Calculating the physical Yukawa couplings of a supersymmetric string compactification comes in two parts: the calculation of the holomorphic Yukawa couplings, that is, the couplings in the superpotential, and the calculation of the matter field K\"ahler metric, in order to work out the field normalisation. The holomorphic Yukawa couplings are quasi-topological -- they do not depend on the Calabi-Yau metric or the Hermitian Yang-Mills connection on the bundle -- and they can, therefore, in principle, be calculated with algebraic methods. The situation is very different for the K\"ahler metric which does depend on the metric and the bundle connection. It is unlikely that an algebraic method for its calculation can be found and, hence, methods of differential geometry will be required. At present, a full calculation of the physical (perturbative) Yukawa couplings is only understood for heterotic Calabi-Yau models with standard embedding. In this case, the holomorphic Yukawa couplings for the $(1,1)$ matter fields are given by the Calabi-Yau triple intersection numbers~\cite{Strominger:1985ks}, while the holomorphic $(2,1)$ Yukawa couplings have been worked out in Ref.~\cite{Candelas:1987se}. The matter field K\"ahler metrics are identified with the corresponding moduli space metrics \eqref{modulispacemetrics}, as explained in Ref.~\cite{Candelas:1990pi}. Further, in Ref.~\cite{Candelas:1987se}, the relation between the analytic calculation of $(2,1)$ holomorphic Yukawa couplings and the algebraic approach has been worked out in detail. Much less is known for heterotic Calabi-Yau models with general vector bundles. 
An algebraic approach for the calculation of holomorphic Yukawa couplings for such ``non-standard embedding" models has been outlined and applied to examples in Ref.~\cite{Anderson:2009ge}. However, the matter field K\"ahler metric has not been computed for any non-standard embedding model on a Calabi-Yau manifold and no clear method for its computation has been formulated. The purpose of this chapter is two-fold. First, we would like to develop explicit methods based on differential geometry to compute the holomorphic Yukawa couplings for heterotic models with non-standard embedding. Secondly, we would like to understand how these methods relate to the algebraic ones pioneered in Ref.~\cite{Candelas:1987se} and further developed in Ref.~\cite{Anderson:2009ge}. Apart from occasional remarks, we will not be concerned with the matter field K\"ahler metric until Chapter~\ref{kahlerchapter}. For ease of terminology, the term ``Yukawa couplings" will refer to the holomorphic Yukawa couplings in the remainder of the thesis. The present work will be carried out within the context of heterotic line bundle models~\cite{Anderson:2011ns,Anderson:2012yf,Anderson:2013xka}, perhaps the simplest class of heterotic Calabi-Yau models with non-standard embedding. For those models, the gauge bundle has an Abelian structure group $G= S(U(1)^r)$ and is realised by a sum of line bundles, a feature which makes explicit calculations of bundle properties significantly more accessible. Furthermore, we will work within perhaps the simplest class of Calabi-Yau manifolds, namely complete intersections in products of projective spaces~\cite{Green:1986ck,Candelas:1987kf,Hubsch:1992nu} (CICYs). More specifically, we focus on hyper-surfaces in products of projective spaces and the tetra-quadric in the ambient space ${\cal A}=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ in particular. 
On the one hand, the simplicity of the set-up facilitates developing new and explicit methods to calculate Yukawa couplings. On the other hand, it is known~\cite{Anderson:2011ns,Anderson:2012yf} that this class contains interesting models with a low-energy MSSM spectrum, so that we will be able to apply our methods to quasi-realistic examples. Picking a vector bundle with Abelian structure group means that the low energy theory is governed by the gauge group $H = H_{\textrm{GUT}}\times S(U(1)^r)$, commuting with $G$. One could naively ask whether such a model is anomaly-free, given that the extra $U(1)$ bosons could in principle contribute to anomalous triangle loops. The answer, however, is that such anomalies are cancelled by the 4d version of the Green-Schwarz mechanism, which ensures that the extra $U(1)$ bosons acquire St\"uckelberg masses close in magnitude to the compactification scale. Thus, in the 4d theory, the additional $U(1)$ symmetries are to be interpreted as global and, in fact, their presence may be rather beneficial for phenomenology. In conjunction with topology, these global symmetries could potentially explain the structure of Yukawa matrices and why certain terms, such as \eqref{protondecayterms}, that trigger fast proton decay, are forbidden. The plan of this chapter is as follows. In the next section, we will lay the ground by reviewing some of the basics, including the general structure of heterotic Yukawa couplings, heterotic line bundle models and complete intersection Calabi-Yau manifolds. Since our main focus will be on the tetra-quadric Calabi-Yau manifold, we need to understand in some detail the differential geometry of $\mathbb{P}^1$ and line bundles thereon. This will be developed in Section~\ref{forms}. General results for Yukawa couplings on the tetra-quadric and some toy examples are given in Section~\ref{toyex}.
Section~\ref{realex} presents a complete calculation of the Yukawa couplings for a quasi-realistic model~\cite{Anderson:2011ns,Anderson:2012yf, Buchbinder:2013dna, Buchbinder:2014qda,Buchbinder:2014sya,Buchbinder:2014qca} with MSSM spectrum on the tetra-quadric. We conclude in Section~\ref{c1conclusions}. Some related matters and technical issues have been deferred to the Appendices. \section{Yukawa couplings in line bundle models} \subsection{General properties of Yukawa couplings in heterotic Calabi--Yau compactifications} We start with an overview of holomorphic Yukawa couplings in the context of the $E_8 \times E_8$ heterotic string theory on a Calabi--Yau manifold (see, for example, Ref.~\cite{GSW}). As seen in Section~\ref{dimredchapter}, the matter fields originate from the $E_8\times E_8$ gauge fields $A$ and the associated gauginos. Here we focus on one $E_8$ factor (``the visible sector") and assume that the Calabi-Yau manifold $X$ carries a principal bundle with structure group $G$ embedded into $E_8$. The (visible) low-energy gauge group $H$ is then the commutant of $G$ within $E_8$ and the types of matter multiplets can be read off from the branching \begin{equation} {\bf 248}\rightarrow \left[({\rm Adj}_G,{\bf 1})\oplus ({\bf 1},{\rm Adj}_H)\oplus\bigoplus ({\cal R}_G, {\cal R}_H)\right]_{G\times H} \label{genbranch} \end{equation} of the ${\bf 248}$ adjoint representation of $E_8$ under $G\times H$. Specifically, for the above branching, the low-energy theory can contain matter multiplets transforming as representations ${\cal R}_H$ under $H$. These multiplets descend from harmonic bundle valued (0,1)-forms $\nu\in H^1(X,V)$, where $V$ is a vector bundle associated to the principal bundle via the $G$ representations ${\cal R}_G$. Consider three representations $({\cal R}_G^i,{\cal R}_H^i)$, where $i=1,2,3$, which appear in the decomposition~\eqref{genbranch}, such that ${\cal R}_G^1\otimes {\cal R}_G^2\otimes {\cal R}_G^3$ contains a singlet. 
The three associated vector bundles are denoted as $V_i$ with harmonic bundle-valued (0,1)-forms $\nu_i\in H^1(X,V_i)$. Then, the associated holomorphic Yukawa couplings can be computed from \begin{equation} \lambda(\nu_1,\nu_2,\nu_3)=\int_X\Omega\wedge\nu_1\wedge\nu_2\wedge\nu_3\; , \label{Yukgen} \end{equation} where $\Omega$ is the holomorphic $(3,0)$ form on $X$ and an appropriate contraction over the bundle indices in $\nu_i$ onto the singlet direction is implied. Let us introduce sets of basis forms, $\nu_{i,I}$, where $I=1,\ldots ,h^1(X,V_i)$, for the cohomologies $H^1(X,V_i)$ and define $\lambda_{IJK}=\lambda(\nu_{1,I},\nu_{2,J},\nu_{3,K})$. The four-dimensional $N=1$ chiral superfields associated to $\nu_{i,I}$ are denoted $C_i^I$ and these fields transform as ${\cal R}_H^i$ under the gauge group $H$. The superpotential for these fields can be written as \begin{equation} W=\lambda_{I J K}C_1^I C_2^J C_3^K\; . \end{equation} Here, we are mainly interested in the phenomenologically promising structure groups $G=SU(3)$, $SU(4)$, $SU(5)$ (and their maximal rank sub-groups), which lead to the low-energy gauge groups $H=E_6$, $SO(10)$, $SU(5)$ (times possible $U(1)$ factors), respectively. 
For these three groups, the decomposition~\eqref{genbranch} takes the form \begin{align} &{\bf 248} \,\rightarrow \, \left[({\bf 8},{\bf 1})\oplus ({\bf 1},{\bf 78})\oplus ({\bf 3},{\bf 27})\oplus (\overline{\bf 3},\overline{\bf 27})\right]_{SU(3)\times E_6} \label{E6dec}\\ & {\bf 248}\,\rightarrow \,\left[({\bf 15},{\bf 1})\oplus ({\bf 1},{\bf 45})\oplus ({\bf 4},{\bf 16})\oplus (\overline{\bf 4},\overline{\bf 16})\oplus ({\bf 6},{\bf 10})\right]_{SU(4)\times SO(10)} \label{SO10dec}\\ & \resizebox{0.91\hsize}{!}{$ {\bf 248} \,\rightarrow\, \left[({\bf 24},{\bf 1})\oplus ({\bf 1},{\bf 24})\oplus ({\bf 5},{\bf 10})\oplus (\overline{\bf 5},\overline{\bf 10})\oplus ({\bf 10},\overline{\bf 5})\oplus (\overline{\bf 10},{\bf 5})\right]_{SU(5)\times SU(5)}$ } \label{SU5dec} \end{align} For $G=SU(3)$ we have matter multiplets in representations ${\bf 27}$, $\overline{\bf 27}$ and ${\bf 1}$ of the low-energy gauge group $H=E_6$ and possible Yukawa couplings of type ${\bf 27}^3$, $\overline{\bf 27}^3$, ${\bf 1}\,{\bf 27}^2$ and ${\bf 1}\,\overline{\bf 27}^2$. For $G=SU(4)$, the families come in ${\bf 16}$ representations and the anti-families in $\overline{\bf 16}$ representations of $H=SO(10)$. Higgs multiplets reside in ${\bf 10}$ representations and bundle moduli in singlets, ${\bf 1}$. Possible Yukawa couplings are of type ${\bf 10}\,{\bf 16}^2$, ${\bf 10}\,\overline{\bf 16}^2$, ${\bf 1}\,{\bf 16}\,\overline{\bf 16}$ and ${\bf 1}\,{\bf 10}^2$. Finally, for $G=SU(5)$ and low-energy gauge group $H=SU(5)$ we have families in $\overline{\bf 5}\oplus {\bf 10}$, anti-families in ${\bf 5}\oplus\overline{\bf 10}$ and bundle moduli singlets, ${\bf 1}$. Allowed Yukawa couplings include the up-type Yukawa couplings ${\bf 5}\,{\bf 10}^2$, the down-type Yukawa couplings $\overline{\bf 5}\,\overline{\bf 5}\,{\bf 10}$ as well as the singlet couplings ${\bf 1}\,{\bf 5}\,\overline{\bf 5}$, ${\bf 1}\,{\bf 10}\,\overline{\bf 10}$. 
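To make the implicit singlet contraction in Eq.~\eqref{Yukgen} explicit, consider the ${\bf 27}^3$ coupling for $G=SU(3)$. The unique singlet in ${\bf 3}\otimes{\bf 3}\otimes{\bf 3}$ is extracted by the epsilon tensor, so that, schematically,
\begin{equation}
\lambda(\nu_1,\nu_2,\nu_3)=\int_X\Omega\wedge\epsilon_{abc}\,\nu_1^a\wedge\nu_2^b\wedge\nu_3^c\,,
\end{equation}
where $a,b,c$ are indices in the fundamental representation of $SU(3)$. Analogous invariant tensors implement the contraction for the other couplings listed above.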
While Eq.~\eqref{Yukgen} was initially written down in terms of the harmonic representatives $\nu_i$ of the cohomologies $H^1(X,V_i)$, it is important to note that the expression is, in fact, independent of the choice of representatives. To see this, perform the transformation\footnote{Here and in the following, we will often denote the derivative $\bar{\partial}_E$ on differential forms taking values in the vector bundle $E$ simply by $\bar{\partial}$ to avoid cluttering the notation.} $\nu_i\rightarrow \nu_i+\bar{\partial}\xi_i$ on Eq.~\eqref{Yukgen}, where $\xi_i$ are sections of $V_i$. Then, integrating by parts and using $\bar{\partial}\nu_i=0$, $\bar{\partial}\Omega=0$ and $\bar{\partial}^2=0$, it follows immediately that \begin{equation} \lambda(\nu_1+\bar{\partial}\xi_1,\nu_2+\bar{\partial}\xi_2,\nu_3+\bar{\partial}\xi_3)=\lambda(\nu_1,\nu_2,\nu_3)\; . \end{equation} This quasi-topological property of the holomorphic Yukawa couplings means that they can, in principle, be computed purely algebraically, as has been noted in Refs.~\cite{Candelas:1987se,Anderson:2009ge}. To recall how this works, we focus on the case $G=SU(3)$ and low-energy gauge group $H=E_6$. The families in ${\bf 27}$ descend from bundle-valued (0,1)-forms $\nu,\mu,\rho \in H^1(X,V)$, where $V$ is the associated vector bundle in the fundamental representation, ${\bf 3}$, of $SU(3)$. Since $c_1(V)=0$, it follows that $\wedge^3 V\cong{\cal O}_X$ and we have a map \begin{equation} H^1 (X, V) \times H^1 (X, V) \times H^1 (X, V) \to H^3 (X, \wedge^3 V)\simeq H^3 (X, {\cal O}_X) \simeq {\mathbb C}\,. \label{1.4} \end{equation} More explicitly, this can be expressed by the cup product \begin{equation} \nu\wedge\mu\wedge\rho=\tau(\nu,\mu,\rho)\,\overline{\Omega}\; ,\label{cup} \end{equation} where $\tau(\nu,\mu,\rho)$ is a complex number and $\overline{\Omega}$ is the unique harmonic representative of the cohomology group $H^3 (X, {\cal O}_X)$.
Inserting into Eq.~\eqref{Yukgen}, it follows that the complex number $\tau(\nu,\mu,\rho)$ is proportional to the Yukawa couplings via \begin{equation} \lambda(\nu,\mu,\rho)=\tau(\nu,\mu,\rho)\int_X\Omega\wedge\overline{\Omega}\; . \end{equation} This means that the ${\bf 27}^3$ Yukawa couplings, up to an overall constant, can be computed algebraically, by performing a (cup) product between three cohomology representatives. Similar arguments can be made for the other Yukawa couplings in the $SU(3)$ case and indeed for other bundle structure groups $G$. Such an algebraic calculation has been carried out for certain examples in Refs.~\cite{Candelas:1987se,Anderson:2009ge}. While it is elegant and avoids the evaluation of integrals, it also has a number of drawbacks. As a practical matter, the relevant cohomologies are not always directly known, but are merely represented by certain isomorphic cohomologies. In this case, it is not always obvious how the cup product should be carried out. Perhaps more significantly, computing the physical (rather than just the holomorphic) Yukawa couplings also requires knowledge of the matter field K\"ahler metric \eqref{matterfieldmetric}, which is proportional to the inner product \begin{equation} (\nu,\omega)=\int_X\nu\wedge \bar{\star}_E\,\omega \end{equation} between two harmonic $(0,1)$-forms $\nu$, $\omega$ representing cohomologies in $H^1(X,V)$. Unlike the holomorphic Yukawa couplings, this expression is not independent of the choice of representatives due to the presence of the complex conjugation, as can be seen by performing a transformation $\nu\rightarrow\nu+\bar{\partial}\alpha$, $\omega\rightarrow\omega+\bar{\partial}\beta$. It needs to be computed with the harmonic $(0,1)$-forms and requires knowledge of the Ricci-flat Calabi-Yau metric. Consequently, a full calculation of the physical Yukawa couplings will have to rely on differential geometry.
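To spell out why the matter field K\"ahler metric enters, recall the standard $N=1$ supergravity relation between holomorphic and physical couplings. Schematically, and assuming for simplicity a flavour-diagonal field normalisation, one has
\begin{equation}
\lambda^{\rm phys}_{IJK}\sim e^{K/2}\,\frac{\lambda_{IJK}}{\sqrt{G_{I\bar{I}}\,G_{J\bar{J}}\,G_{K\bar{K}}}}\,,
\end{equation}
where $K$ is the K\"ahler potential and $G_{I\bar{J}}$ is the matter field K\"ahler metric built from the inner products $(\nu,\omega)$ above. The holomorphic couplings $\lambda_{IJK}$ are quasi-topological, but the normalisation factors are not.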
One aim of this thesis is to develop such differential geometry methods, for the immediate purpose of calculating the holomorphic Yukawa couplings, but in view of a full calculation of the physical couplings in the future. \subsection{A review of line bundle models} Perhaps the simplest heterotic compactifications for which to calculate Yukawa couplings, apart from models with standard embedding, are line bundle models. In the remainder of the thesis, we will focus on calculating holomorphic Yukawa couplings for such line bundle models and, in the present sub-section, we begin by reviewing their general structure, following Refs.~\cite{Anderson:2011ns, Anderson:2012yf}. Heterotic line bundle models rely on a gauge bundle with (visible) Abelian structure group $G=S(U(1)^n)$, which can be described by a line bundle sum \begin{equation} V = \bigoplus_{a=1}^n L_a\quad\mbox{with}\quad c_1(V)=0\; , \label{Vdef} \end{equation} where $L_a\rightarrow X$ are line bundles over the Calabi-Yau manifold $X$. Here, the condition $c_1(V)=0$ ensures that the structure group of $V$ is indeed special unitary, rather than merely unitary. Like every heterotic model, line bundle models need to satisfy two basic consistency conditions. Firstly, the bundle $V$ needs to be supersymmetric, which is equivalent to requiring vanishing slopes \begin{equation} \mu(L_a)\equiv\int_X c_1(L_a)\wedge J\wedge J\stackrel{!}{=}0 \end{equation} for all line bundles $L_a$, where $J$ is the K\"ahler form of the Calabi-Yau manifold $X$. The slope-zero conditions are constraints in K\"ahler moduli space which have to be solved simultaneously for all line bundles in order for the bundle $V$ to preserve supersymmetry. Secondly, we need to be able to satisfy the heterotic anomaly condition, which is guaranteed if we require that \begin{equation} c_2(TX)-c_2(V)\geq 0 \; , \end{equation} \noindent or, equivalently, that $c_2(TX)-c_2(V)$ is in the Mori cone of $X$.
In this case, the anomaly condition can always be satisfied by adding five-branes to the model (although other completions involving a non-trivial hidden bundle or a combination of hidden bundle and five-branes are usually possible).\\[2mm] Of particular interest are line bundle sums with rank $n=3,4,5$, for which the associated (visible) low-energy gauge groups are $H=E_6\times S(U(1)^3)$, $H=SO(10)\times S(U(1)^4)$ and $H=SU(5)\times S(U(1)^5)$, respectively. For the non-Abelian part of these gauge groups, the multiplet structure of the low-energy theory can be read off from Eqs.~\eqref{E6dec}--\eqref{SU5dec}. In addition, multiplets carry charges under the Abelian part, $S(U(1)^n)$, of the gauge group. It is convenient to describe these charges by an integer vector ${\bf q} =(q_1, q_2, \dots, q_n)$. Since we would like to label representations of $S (U(1)^n)$, rather than of $U(1)^n$, two such vectors ${\bf q}$ and ${\bf \tilde{q}}$ have to be identified if ${\bf q} - {\bf \tilde{q}} \in {\mathbb Z}\, (1, 1, \dots, 1)$. This charge vector will be attached as a subscript to the representation of the non-Abelian part. The number of each type of multiplet equals the dimension of the cohomology $H^1(X, K)$ for a certain line bundle $K$, which is either one of the line bundles $L_a$ or a tensor product thereof. The precise list of multiplets for the three cases $n=3,4,5$, together with the associated line bundles $K$ is provided in Tables~\ref{tab:e6}, \ref{tab:so10} and \ref{tab:su5}.
\renewcommand{\arraystretch}{1.2} \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|l|}\hline multiplet&indices&line bundle $K$&interpretation\\\hline\hline ${\bf 27}_{{\bf e}_a}$&$a=1,2,3$&$L_a$&families/Higgs\\\hline $\overline{\bf 27}_{-{\bf e}_a}$&$a=1,2,3$&$L_a^*$&mirror-families/Higgs\\\hline ${\bf 1}_{{\bf e}_a-{\bf e}_b}$&$a,b=1,2,3\,,\;a\neq b$&$L_a\otimes L_b^*$&bundle moduli\\\hline \end{tabular} \end{center} \caption{\it Multiplets and associated line bundles for bundle structure group $G=S(U(1)^3)$ and low-energy gauge group $H=E_6\times S(U(1)^3)$.}\label{tab:e6} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|l|}\hline multiplet&indices&line bundle $K$&interpretation\\\hline\hline ${\bf 16}_{{\bf e}_a}$&$a=1,2,3,4$&$L_a$&families\\\hline $\overline{\bf 16}_{-{\bf e}_a}$&$a=1,2,3,4$&$L_a^*$&mirror-families\\\hline ${\bf 10}_{{\bf e}_a+{\bf e}_b}$&$a=1,2,3,4\,,\;a<b$&$L_a\otimes L_b$&Higgs\\\hline ${\bf 1}_{{\bf e}_a-{\bf e}_b}$&$a,b=1,2,3,4\,,\;a\neq b$&$L_a\otimes L_b^*$&bundle moduli\\\hline \end{tabular} \end{center} \caption{\it Multiplets and associated line bundles for bundle structure group $G=S(U(1)^4)$ and low-energy gauge group $H=SO(10)\times S(U(1)^4)$.}\label{tab:so10} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|l|}\hline multiplet&indices&line bundle $K$&interpretation\\\hline\hline ${\bf 10}_{{\bf e}_a}$&$a=1,2,3,4,5$&$L_a$&$(Q,u,e)$ families\\\hline $\overline{\bf 10}_{-{\bf e}_a}$&$a=1,2,3,4,5$&$L_a^*$&$(\tilde{Q},\tilde{u},\tilde{e})$ mirror-families\\\hline $\overline{\bf 5}_{{\bf e}_a+{\bf e}_b}$&$a,b=1,2,3,4,5\,,\;a<b$&$L_a\otimes L_b$&$(L,d)$ families\\\hline ${\bf 5}_{-{\bf e}_a-{\bf e}_b}$&$a,b=1,2,3,4,5\,,\;a<b$&$L_a^*\otimes L_b^*$&$(\tilde{L},\tilde{d})$ mirror-families\\\hline ${\bf 1}_{{\bf e}_a-{\bf e}_b}$&$a,b=1,2,3,4,5\,,\;a\neq b$&$L_a\otimes L_b^*$&bundle moduli\\\hline \end{tabular} \end{center} \caption{\it Multiplets and associated line bundles for bundle structure group
$G=S(U(1)^5)$ and low-energy gauge group $H=SU(5)\times S(U(1)^5)$. }\label{tab:su5} \end{table} As is clear from the tables, all relevant $S(U(1)^n)$ charges can be expressed easily in terms of the $n$-dimensional standard unit vectors ${\bf e}_a$. Frequently, in order to simplify the notation for multiplets, we will replace the subscripts ${\bf e}_a$ simply by $a$. For example, in the $SO(10)\times S(U(1)^4)$ case, the multiplet ${\bf 16}_{{\bf e}_a}$ becomes ${\bf 16}_a$ or the multiplet ${\bf 10}_{{\bf e}_a+{\bf e}_b}$ becomes ${\bf 10}_{a,b}$. For all three cases, the low-energy spectrum contains fields ${\bf 1}_{a,b}$ which are singlets under the non-Abelian part of the gauge group, but are charged under $S(U(1)^n)$. These fields should be interpreted as bundle moduli which parametrise deformations away from a line bundle sum to bundles with non-Abelian structure group. For many models of interest these bundle moduli are present in the low-energy spectrum and, in such cases, the Abelian bundle is embedded in a moduli space of generically non-Abelian bundles. Much can be learned about non-Abelian bundles by such deformations away from the Abelian locus. This is one of the reasons why studying Yukawa couplings for line bundle models can yield insights into the structure of Yukawa couplings for non-Abelian bundles. Another reason is more technical. In practice, non-Abelian bundles are often constructed from line bundles, for example via extension or monad sequences, and, hence, some of the methods developed for line bundles will be useful to tackle the non-Abelian case. \vskip 2mm So far, we have considered the ``upstairs" theory with a GUT-type gauge group. In order to break this theory to the standard-model group we require a freely-acting symmetry $\Gamma$ on the Calabi-Yau manifold $X$. The line bundle sum $V$ should descend to the quotient Calabi-Yau $X/\Gamma$, that is, it should have a $\Gamma$-equivariant structure. 
Downstairs, on the manifold $X/\Gamma$, we should include a Wilson line, defined by a representation $W$ of $\Gamma$ into the (hypercharge direction of the) GUT group. As a result, each downstairs multiplet, $\psi$, acquires an induced $\Gamma$-representation denoted $\chi_\psi$. Luckily, the resulting downstairs spectrum can be computed in a simple group-theoretical fashion from the upstairs spectrum. Consider a certain type of upstairs multiplet with associated line bundle $K$. By virtue of the $\Gamma$-equivariant structure of $V$, the cohomology $H^1(X, K)$, associated to the upstairs multiplet, becomes a $\Gamma$-representation.\footnote{In more complicated cases, line bundles might not be equivariant individually, but several line bundles may form an equivariant block. However, the computation of downstairs cohomology for such cases proceeds in a similar group-theoretical fashion.} To compute the spectrum of a certain type, $\psi$, of downstairs multiplet contained in $H^1(X, K)$, we should determine the $\Gamma$-singlet part of \begin{equation} H^1(X, K)\otimes \chi_\psi\; . \label{equivcoh} \end{equation} Fortunately, the computation of Yukawa couplings relates to this Wilson line breaking mechanism in a straightforward way. We can obtain the downstairs (holomorphic) Yukawa couplings by basically extracting the relevant $\Gamma$-singlet directions of the upstairs Yukawa couplings. In our later examples, we will consider Wilson line breaking for the gauge group $SU(5)$. In this case, the Wilson line can be conveniently described in terms of two one-dimensional $\Gamma$-representations $\chi_2$, $\chi_3$, satisfying $\chi_2^2\otimes\chi_3^3=1$ and with at least one of them non-trivial. Such a Wilson line, embedded into the hypercharge direction, breaks $SU(5)$ to the standard model group. 
The $\Gamma$-representations $\chi_\psi$ of the various standard model multiplets, which enter Eq.~\eqref{equivcoh}, are then explicitly given by \begin{equation} \chi_Q=\chi_2\otimes\chi_3\,,\quad \chi_u= \chi^2_3\,,\quad \chi_e=\chi_2^2\,,\quad \chi_d=\chi_3^*\,,\quad \chi_L=\chi_2^*\,,\quad \chi_H=\chi_2^*\,,\quad \chi_{\bar{H}}=\chi_2\;. \label{WLcharges} \end{equation} \subsection{Holomorphic Yukawa couplings for line bundle models} \label{examples} For heterotic line bundle models, the $(0,1)$-forms $\nu_1$, $\nu_2$ and $\nu_3$, contained in the general expression~\eqref{Yukgen} for the Yukawa couplings, represent the first cohomologies of certain line bundles, denoted by $K_1$, $K_2$ and $K_3$, so that $\nu_i\in H^1(X,K_i)$. The structure of the integral~\eqref{Yukgen} (or, equivalently, four-dimensional gauge symmetry) means that such a line bundle Yukawa coupling can be non-zero only if \begin{equation} K_1 \otimes K_2 \otimes K_3 = {\cal O}_X\; . \end{equation} Provided this is the case, the Yukawa coupling is given by \begin{equation} \lambda(\nu_1,\nu_2,\nu_3)=\int_X\Omega\wedge\nu_1\wedge\nu_2\wedge\nu_3\; , \label{Yuklb} \end{equation} \begin{table}[h] \begin{footnotesize} \begin{center} \begin{tabular}{|l||c|c|c|c|c|}\hline Gauge group&Yukawa coupling&$K_1$&$K_2$&$K_3$&index constraint\\\hline\hline \multirow{3}{*}{$E_6\times S(U(1)^3)$} &${\bf 27}_a\,{\bf 27}_b\,{\bf 27}_c$&$L_a$&$L_b$&$L_c$&$a,b,c$ all different\\\cline{2-6} &$\overline{\bf 27}_a\,\overline{\bf 27}_b\,\overline{\bf 27}_c$&$L_a^*$&$L_b^*$&$L_c^*$&$a,b,c$ all different\\\cline{2-6} &${\bf 1}_{a,b}\,{\bf 27}_b\,\overline{\bf 27}_a$&$L_a\otimes L_b^*$&$L_b$&$L_a^*$&$a\neq b$\\\hline\hline \multirow{3}{*}{$SO(10)\times S(U(1)^4)$} &${\bf 10}_{a,b}\,{\bf 16}_c\,{\bf 16}_d$&$L_a\otimes L_b$&$L_c$&$L_d$&$a,b,c,d$ all different\\\cline{2-6} &${\bf 10}_{a,b}\,\overline{\bf 16}_a\, \overline{\bf 16}_b$&$L_a\otimes L_b$&$L_a^*$&$L_b^*$&$a\neq b$\\\cline{2-6} &${\bf 1}_{a,b}\,{\bf 16}_b\, 
\overline{\bf 16}_a$&$L_a\otimes L_b^*$&$L_b$&$L_a^*$&$a\neq b$\\\hline\hline \multirow{6}{*}{$SU(5)\times S(U(1)^5)$} &$\overline{\bf 5}_{a,b}\,\overline{\bf 5}_{c,d}\,{\bf 10}_e$&$L_a\otimes L_b$&$L_c\otimes L_d$&$L_e$&$a,b,c,d,e$ all different\\\cline{2-6} &${\bf 5}_{a,b}\,{\bf 10}_a\,{\bf 10}_b$&$L_a^*\otimes L_b^*$&$L_a$&$L_b$&$a\neq b$\\\cline{2-6} &${\bf 5}_{a,b}\,{\bf 5}_{c,d}\,\overline{\bf 10}_e$&$L_a^*\otimes L_b^*$&$L_c^*\otimes L_d^*$&$L_e^*$&$a,b,c,d,e$ all different\\\cline{2-6} &$\overline{\bf 5}_{a,b}\,\overline{\bf 10}_a\,\overline{\bf 10}_b$&$L_a\otimes L_b$&$L_a^*$&$L_b^*$&$a\neq b$\\\cline{2-6} &${\bf 1}_{a,b}\,{\bf 5}_{a,c}\,\overline{\bf 5}_{b,c}$&$L_a\otimes L_b^*$&$L_a^*\otimes L_c^*$&$L_b\otimes L_c$&$a\neq b\,,a\neq c\,,b\neq c$\\\cline{2-6} &${\bf 1}_{a,b}\,{\bf 10}_b\,\overline{\bf 10}_a$&$L_a\otimes L_b^*$&$L_b$&$L_a^*$&$a\neq b$\\\hline \end{tabular} \caption{Relation between the line bundles $K_i$ which enter the expression~\eqref{Yuklb} for the Yukawa couplings and the line bundles $L_a$ which define the vector bundle $V$ in Eq.~\eqref{Vdef}. Note that $K_1\otimes K_2\otimes K_3={\cal O}_X$ always follows, in some cases due to $c_1(V)=0$, which imples $L_1\otimes\cdots\otimes L_n={\cal O}_X$.}\label{tab:KLrel} \end{center} \end{footnotesize} \end{table} an expression similar to Eq.~\eqref{Yukgen}, but with the $(0,1)$-forms $\nu_i$ now taking values in the line bundles $K_i$. The precise relation between the line bundles $K_i$ and the line bundles $L_a$ in Eq.~\eqref{Vdef} which define the vector bundle $V$ depends on the low-energy gauge group and the type of Yukawa coupling under consideration. For the three gauge groups of interest and the relevant types of Yukawa couplings these relations are summarised in Table~\ref{tab:KLrel}. From Eq.~\eqref{Yuklb} it is clear that the Yukawa couplings can depend on the complex structure moduli of the Calabi-Yau manifold $X$. 
Later, we will see examples with and without explicit complex structure dependence. Given that individual line bundles have no moduli, line bundle Yukawa couplings do not depend on bundle moduli. However, as discussed earlier, line bundle models often reside in a larger moduli space of non-Abelian bundles and Yukawa couplings on this larger moduli space will, in general, display bundle moduli dependence. In this context, our results for line bundle models can be interpreted as leading-order expressions which are exact at the line bundle locus and provide a good approximation for small deformations away from the line bundle locus. \subsection{Projective ambient spaces} \label{coboundary} So far, our discussion applies to line bundle models on any Calabi-Yau manifold. In this sub-section and from now on we will specialise to what is perhaps the simplest class of Calabi-Yau manifolds, namely Calabi-Yau hyper-surfaces in products of projective spaces. Restricting to this class allows us to take the first steps towards evaluating the Yukawa integral~\eqref{Yuklb} and, later on, to explicitly construct the relevant cohomology representatives and compute the integral. Concretely, we will consider ambient spaces of the form \begin{equation} {\cal A}= {\mathbb P}^{n_1} \times {\mathbb P}^{n_2} \times \dots \times {\mathbb P}^{n_m} \,, \label{Adef} \end{equation} where $n_1+n_2+\ldots+ n_m=4$. The Calabi-Yau hyper-surface $X$ in ${\cal A}$ is defined as the zero-locus of a homogeneous polynomial $p$ with multi-degree $(n_1+1,n_2+1,\ldots ,n_m+1)$, which can be thought of as a section of the normal bundle ${\cal N}={\cal O}_{\cal A}(n_1+1,n_2+1,\ldots ,n_m+1)$. Examples in this class include the quintic in ${\mathbb P}^4$, the bicubic in ${\mathbb P}^2 \times {\mathbb P}^2 $ and the tetra-quadric in ${\mathbb P}^1 \times {\mathbb P}^1 \times {\mathbb P}^1 \times {\mathbb P}^1$.
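As a small illustration, the possible ambient spaces of the form \eqref{Adef} and the corresponding hyper-surface multi-degrees can be enumerated from the partitions of $4$; a sketch (the helper `partitions` is our own):

```python
# Sketch: enumerate the ambient spaces (Adef) with n_1 + ... + n_m = 4,
# together with the multi-degree (n_1+1, ..., n_m+1) of the defining
# polynomial p of the Calabi-Yau hyper-surface in each of them.
def partitions(n, max_part=None):
    """Yield all unordered partitions of n into positive integer parts."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

ambients = {tuple(p): tuple(n + 1 for n in p) for p in partitions(4)}
assert ambients[(4,)] == (5,)                  # the quintic in P^4
assert ambients[(2, 2)] == (3, 3)              # the bicubic in P^2 x P^2
assert ambients[(1, 1, 1, 1)] == (2, 2, 2, 2)  # the tetra-quadric
assert len(ambients) == 5                      # five such ambient spaces
```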
To evaluate the Yukawa couplings for such Calabi-Yau hyper-surfaces, we first assume that the relevant $(0,1)$-forms $\nu_i$ and the $(3,0)$-form $\Omega$ on $X$ can be obtained as restrictions of ambient space counterparts $\hat{\nu}_i$ and $\hat{\Omega}$. Under this assumption and by inserting an appropriate delta-function current~\cite{Candelas:1987se}, we can re-write Eq.~\eqref{Yuklb} as the ambient space integral \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) =- \frac{1}{2 i}\int_{{\cal A}} \hat{\Omega} \wedge \hat{\nu}_{1} \wedge \hat{\nu}_{2} \wedge \hat{\nu}_{3} \wedge \delta^2 (p) dp \wedge d {\bar p}\,. \label{Yukamb} \end{equation} Note that this expression contains the imaginary prefactor $i$, which is completely acceptable for holomorphic Yukawa couplings. One expects the Yukawa couplings to become real (except for the small CP violating phase in the CKM matrix) only after field normalisation is performed. The construction of $\Omega$ and $\hat{\Omega}$ for Calabi--Yau hyper-surfaces in products of projective spaces is well known~\cite{Witten:1985xc, Strominger:1985it, Candelas:1987se, Candelas:1987kf} and we will simply present the result. To this end, we introduce the forms \begin{equation} \mu_i = \frac{1}{n_i!} \epsilon_{\alpha _0 \alpha _1 \dots \alpha _{n_i}} x^{\alpha _0}_i d x^{\alpha _1}_i \wedge \dots \wedge dx^{\alpha _{n_i}}_i\;,\quad \mu =\mu_1 \wedge \mu_2 \wedge \dots \wedge \mu_m\,, \label{10.2} \end{equation} where $x^\alpha _i$ are the homogeneous coordinates on $\mathbb{P}^{n_i}$. With these definitions, the form $\hat{\Omega}$ satisfies \begin{equation} \hat{\Omega} \wedge d p = \mu\,.
\label{10.4} \end{equation} Combining this relation with the current identity \begin{equation} \delta^2 (p) d \bar p = \frac{1}{\pi} {\bar \partial} \Big( \frac{1}{p}\Big) \label{10.5} \end{equation} leads to the following expression for the Yukawa couplings: \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) =- \frac{1}{2 \pi i}\int_{{\cal A}} \frac{\mu}{p} \wedge \Big[ {\bar \partial} \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3- \hat{\nu}_1 \wedge {\bar \partial} \hat{\nu}_2 \wedge \hat{\nu}_3 +\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge {\bar \partial} \hat{\nu}_3 \Big]\,. \label{Yukamb1} \end{equation} In deriving this expression, we have performed an integration by parts and ignored the boundary term. This boundary term will be more closely examined in Appendix~\ref{appendixboundary} and we will show that it vanishes in all cases of interest. To understand the implications of this result, we need to analyse the relation between the ambient space forms $\hat{\nu}_i$ and their restrictions, $\nu_i$, to the Calabi-Yau manifold $X$. Let $K$ be any of the line bundles $K_1$, $K_2$, $K_3$ and ${\cal K}$ its ambient space counterpart, so that $K={\cal K}|_X$. For a given cohomology representative $\nu\in H^1(X,K)$, we would like to construct an ambient space form $\hat{\nu}$ with $\nu=\hat{\nu}|_X$. The line bundles $K$ and ${\cal K}$ are related by the Koszul sequence \begin{equation} 0 \longrightarrow {\cal N}^* \otimes {\cal K} \stackrel{p}{\longrightarrow} {\cal K} \stackrel{r}{\longrightarrow} K \longrightarrow 0\; , \label{10.8} \end{equation} a short exact sequence with $p$, the defining polynomial of the Calabi-Yau manifold, and $r$, the restriction map.
This short exact sequence leads to an associated long exact sequence in cohomology, whose relevant part is given by \begin{eqnarray} \cdots&\longrightarrow & H^1 ({\cal A}, {\cal N}^* \otimes {\cal K}) \stackrel{p}{\longrightarrow} H^1 ({\cal A}, {\cal K}) \stackrel{r}{\longrightarrow} H^1 (X, K)\nonumber \\ &\stackrel{\delta}{\longrightarrow} &H^2 ({\cal A}, {\cal N}^* \otimes {\cal K}) \stackrel{p}{\longrightarrow} H^2 ({\cal A}, {\cal K}) \stackrel{r}{\longrightarrow} H^2 (X, K) \longrightarrow \dots\; , \label{longex} \end{eqnarray} where $\delta$ is the co-boundary map. This sequence allows us to relate the cohomology $H^1(X,K)$ to ambient space cohomologies, namely \begin{eqnarray} H^1 (X, K) & = & r \Big( {\rm Coker} \Big( H^1 ({\cal A}, {\cal N}^* \otimes {\cal K}) \stackrel{p}{\rightarrow} H^1 ({\cal A}, {\cal K})\Big) \Big) \notag \\ & \oplus & \delta^{-1} \Big( {\rm Ker} \Big( H^2 ({\cal A}, {\cal N}^* \otimes {\cal K}) \stackrel{p}{\rightarrow} H^2 ({\cal A}, {\cal K})\Big) \Big) \; . \label{H1eq} \end{eqnarray} Evidently, $H^1(X,K)$ can receive two contributions, one from $H^1({\cal A},{\cal K})$ (modulo identifications) and the other from (the kernel in) $H^2({\cal A},{\cal N}^*\otimes{\cal K})$. Let us discuss these two contributions separately, keeping in mind that the general case is a sum of these.\\[2mm] {\bf Type 1}: If $\nu$ descends from $H^1({\cal A},{\cal K})$, we refer to it as ``type 1". In this case, we have a $(0,1)$-form $\hat{\nu}\in H^1({\cal A},{\cal K})$ which, under the map $r$, restricts to $\nu\in H^1(X,K)$. What is more, since $\hat{\nu}$ represents an ambient space cohomology it is closed, so \begin{equation} \bar{\partial}\hat{\nu}=0\; . \end{equation} {\bf Type 2:} The situation is somewhat more involved if $\nu$ descends from $H^2({\cal A},{\cal N}^*\otimes{\cal K})$, a situation we refer to as ``type 2". 
In this case, we can start with an ambient space $(0,2)$-form $\hat{\omega}=\delta(\nu)\in H^2({\cal A},{\cal N}^*\otimes{\cal K})$, which is the image of $\nu$ under the co-boundary map. The definition of the co-boundary map from Appendix~\ref{coboundarymapappendix} tells us that, in this case, $\nu$ can be obtained as the restriction to $X$ of an ambient space $(0,1)$-form $\hat{\nu}$ which is related to $\hat{\omega}$ by \begin{equation} {\bar \partial} \hat{\nu}=p \hat{\omega} \,. \label{coboundarymap} \end{equation} Unlike in the previous case, the form $\hat{\nu}$ is no longer closed.\\[2mm] The Yukawa coupling~\eqref{Yukamb1} involves three $(0,1)$-forms, $\hat{\nu}_1$, $\hat{\nu}_2$ and $\hat{\nu}_3$, each of which can be either of type 1 or type 2 (or a combination of both types), so that a variety of possibilities ensues. Perhaps the simplest possibility arises when all three forms are of type 1, so that $\bar{\partial}\hat{\nu}_i=0$ for $i=1,2,3$. Then, Eq.~\eqref{Yukamb1} shows that the Yukawa coupling vanishes, \begin{equation} \lambda(\nu_1,\nu_2,\nu_3)=0\;. \end{equation} This vanishing is quasi-topological and related to the cohomology structure for $K_1$, $K_2$ and $K_3$ in the sequence~\eqref{longex} -- there is no expectation that it can be explained in terms of a symmetry in the four-dimensional theory. An explicit example of this case will be presented later. The next simplest possibility is for two of the forms, say $\hat{\nu}_1$ and $\hat{\nu}_2$, to be of type 1, so that $\bar{\partial}\hat{\nu}_1= \bar{\partial}\hat{\nu}_2=0$, while $\hat{\nu}_3$ is of type 2, so that $\bar{\partial}\hat{\nu}_3=p\hat{\omega}_3$ for some $(0,2)$-form $\hat{\omega}_3$. Inserting into Eq.~\eqref{Yukamb1}, the Yukawa coupling now reduces to the simple expression \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = - \frac{1}{2 \pi i}\int_{{\cal A}} \mu \wedge \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\omega}_3\,. 
\label{Yuk112} \end{equation} As we will see, this formula is very useful since it is expressed in terms of ambient space forms, which can often be readily written down. When more than one of the forms is of type 2, the general formula~\eqref{Yukamb1} needs to be used and working out all the required forms becomes more complicated. We will study examples for all these cases later on. \section{Line bundle valued harmonic forms } \label{forms} From here on, we will focus on tetra-quadric Calabi-Yau manifolds in the ambient space ${\cal A}={\mathbb P}^1\times{\mathbb P}^1\times{\mathbb P}^1\times{\mathbb P}^1$. Besides the general usefulness of working with a concrete example, the tetra-quadric offers a number of additional advantages. Firstly, the ambient space consists of ${\mathbb P}^1$ factors only and is, therefore, particularly simple to handle. Moreover, it is known~\cite{Anderson:2011ns,Anderson:2012yf} that quasi-realistic line bundle standard models exist on the tetra-quadric, so we will be able to apply our methods for calculating Yukawa couplings to physically relevant models. However, the methods we develop in the context of the tetra-quadric can be generalised to other Calabi-Yau hyper-surfaces in products of projective spaces and even to higher co-dimension CICYs, as will be seen in Chapter~\ref{chaptern>1codimension}. The main purpose of this section is to set out the relevant differential geometry for $\mathbb{P}^1$, find the harmonic bundle-valued forms for all line bundles on $\mathbb{P}^1$ and apply the results to the full ambient space ${\cal A}$. In particular, we will work out a multiplication rule for bundle-valued harmonic forms which will be crucial in order to establish the relation between the algebraic and analytic methods for calculating holomorphic Yukawa couplings. Since Yukawa couplings depend only on the cohomology classes of the corresponding forms, we are free to use any non-trivial representatives.
For our calculation, we will rely on forms which are harmonic relative to the Fubini-Study metric on ${\cal A}$. As we will see, these can be explicitly constructed. For easier accessibility, this chapter is kept somewhat informal. A review of some relevant mathematical background, mostly following Ref.~\cite{H}, can be found in Appendix~\ref{app:Kbundle}. The proof of the multiplication rule for harmonic forms on projective space is contained in Appendix~\ref{appendixPn}. \subsection{Construction of line bundle valued harmonic forms on ${\mathbb P}^1$} \label{p1} We begin by collecting some well-known properties of ${\mathbb P}^1$. Homogeneous coordinates on ${\mathbb P}^1$ are denoted by $x^\alpha $, where $\alpha =0,1$, and we introduce the standard open patches $U_{(\alpha )}=\{[x^0:x^1]\,|\, x^\alpha \neq 0\}$ with affine coordinates $z=x^1/x^0$ on $U_{(0)}$ and $w=x^0/x^1$ on $U_{(1)}$. The transition function on the overlap is given by $w=1/z$. For convenience, subsequent formulae will usually be written on the patch $U_{(0)}$ and in terms of the coordinate $z$. The K\"ahler potential for the Fubini--Study metric on ${\mathbb P}^1$ reads \begin{equation} \mathfrak{K}= \frac{i}{2 \pi} \log \kappa\,, \qquad \kappa= 1+ |z|^2\,, \label{2.1} \end{equation} with associated K\"ahler form and K\"ahler metric given by \begin{equation} J= \partial {\bar \partial}\mathfrak{K}= \frac{i}{2 \pi \kappa^2} dz \wedge d {\bar z}\,, \qquad g_{z \bar z}= -i J_{z \bar z} =\frac{1}{2 \pi \kappa^2}\,. \label{2.2} \end{equation} Note that the normalisation of $\mathfrak{K}$ has been chosen such that $\int_{{\mathbb P}^1} J=1$. Line bundles on ${\mathbb P}^1$ are classified by an integer $k$ and are denoted by ${\cal O}_{\mathbb{P}^1}(k)$. They can be explicitly constructed by dualising and taking tensor powers of the universal bundle ${\cal O}_{\mathbb{P}^1}(-1)$. 
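As a quick sanity check of the normalisation chosen in Eq.~\eqref{2.1}, note that writing $z=re^{i\theta}$ and using $dz\wedge d\bar{z}=-2i\,dx\wedge dy$ turns the K\"ahler form into $J=r\,dr\,d\theta/\big(\pi(1+r^2)^2\big)$, so $\int_{\mathbb{P}^1}J=1$ can be confirmed symbolically; a sketch using sympy:

```python
# Sanity check of the normalisation in Eqs. (2.1)-(2.2): in polar coordinates
# z = r e^{i theta}, the Kaehler form is J = r dr dtheta / (pi (1+r^2)^2),
# and its integral over P^1 (i.e. over the full z-plane) should equal 1.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
J_polar = r / (sp.pi * (1 + r**2)**2)
vol = sp.integrate(J_polar, (r, 0, sp.oo), (theta, 0, 2*sp.pi))
assert vol == 1
```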
With the above covering of ${\mathbb P}^1$ and the fiber coordinate $v$, the transition function of $ {\cal O}_{{\mathbb P}^1} (k)$ can be written as \begin{equation} \phi_{01} (z, v)= (1/z, z^{k} v)\,. \label{transfct} \end{equation} This means that a section of $ {\cal O}_{{\mathbb P}^1} (k)$ given by $s_{(0)}$ on $U_{(0)}$ and $s_{(1)}$ on $U_{(1)}$ transforms as $s_{(0)}(z)= z^k s_{(1)}(1/z)$. A hermitian structure $H$ on ${\cal L}={\cal O}_{\mathbb{P}^1}(k)$ can be introduced by \begin{equation} H= \kappa^{-k}\; , \label{2.5} \end{equation} and the associated Chern connection, $\nabla^{0,1}= \bar \partial$ and $\nabla^{1,0}= \partial+ A$, with gauge potential $A= {\bar H}^{-1} \partial {\bar H} = \partial \log {\bar H}$ and curvature $F= d A= {\bar \partial} {\partial} \log {\bar H}$ is explicitly specified by \begin{equation} A= -\frac{k \bar z}{\kappa} dz\,, \quad F=- 2 \pi i k J\,. \label{2.6} \end{equation} The last result for the field strength allows the calculation of the first Chern class of ${\cal L}$, which is given by \begin{equation} c_1 ({\cal L})= \frac{i}{2 \pi} F =k J\,, \quad \int_{{\mathbb P}^1} c_1 ({\cal L}) =k\,. \label{2.8} \end{equation} Having introduced a hermitian structure and a connection on the line bundles ${\cal L}$, we can now turn to a discussion of their cohomology and their associated harmonic bundle-valued forms. As explained in Appendix~\ref{app:Kbundle}, an ${\cal L}$-valued harmonic form $\alpha $ is characterised by the equations \begin{equation} {\bar \partial} \alpha =0\,, \quad \partial ({\bar H} \star \alpha )=0\,, \label{harmeqs} \end{equation} where $\star$ is the Hodge star on $\mathbb{P}^1$ with respect to the Fubini-Study metric. The first of these equations simply asserts the $\bar{\partial}$-closure of $\alpha $, which is already sufficient to obtain representatives for cohomology.
However, $\bar{\partial}$-closed forms which differ by a $\bar{\partial}$-exact form describe the same cohomology class and such a redundant description of cohomology is not convenient for our purposes. For this reason, we will solve both equations~\eqref{harmeqs} and work with the resulting harmonic representatives, which are in one-to-one correspondence with the relevant cohomology. The cohomology of ${\cal L}={\cal O}_{\mathbb{P}^1}(k)$ is obtained from the Bott formula and we should distinguish three qualitatively different cases. For $k\geq 0$, only the zeroth cohomology is non-vanishing, while for $k\leq -2$ only the first cohomology is non-vanishing. For $k=-1$, the cohomology is entirely trivial. We will now discuss these three cases in turn and explicitly compute the bundle-valued harmonic forms by solving Eqs.~\eqref{harmeqs}.\\[2mm] {\bf Case 1)} $k\geq 0$: In this case, the Bott formula implies that $h^0(\mathbb{P}^1,{\cal L})=k+1$ and $h^1(\mathbb{P}^1,{\cal L})=0$. Hence, we are looking for sections or bundle-valued $(0,0)$-forms of ${\cal L}$. In this case, the second equation~\eqref{harmeqs} is automatically satisfied, while the first one implies that the section is holomorphic, so $\alpha=\alpha(z)$. For a monomial $\alpha=z^l $, a transformation to the other patch gives $z^l=w^{-l}=z^kw^{k-l}$, with the $z^k$ factor being the desired transition function. This means that the section is holomorphic in both patches only if $l=0,\ldots ,k$. This leads to the well-known result that the sections are given by degree $k$ polynomials, that is, \begin{equation} \alpha=P_{(k)}(z)\; . \end{equation} Note that the space of these polynomials is indeed $(k+1)$-dimensional, as required.\\[2mm] {\bf Case 2)} $k=-1$: In this case, all cohomologies of ${\cal L}$ vanish and there are no forms to be determined.\\[2mm] {\bf Case 3)} $k\leq -2$: Now, $h^1(\mathbb{P}^1,{\cal L})=-k-1$ and $h^0(\mathbb{P}^1,{\cal L})=0$.
Hence, we are looking for harmonic $(0,1)$-forms $\alpha=f(z,\bar{z})d\bar{z}$. Clearly, the first equation~\eqref{harmeqs} is automatically satisfied for such $\alpha$. Using $\star d\bar{z}=-id\bar{z}$ and $\star\alpha =-i\alpha$, the second equation can be written as $\partial(\bar{H}\alpha)=0$, which leads to the general solution $\alpha = \kappa^kg(\bar{z})d\bar{z}$, with a general anti-holomorphic function $g(\bar{z})$. For a monomial $g(\bar{z})=\bar{z}^l$, this transforms to the other patch as \begin{equation} \alpha=(1+|z|^2)^k\bar{z}^ld\bar{z}=-z^k(1+|w|^2)^k\bar{w}^{-k-l-2}d\bar{w}\; . \label{harm01} \end{equation} For holomorphy in both patches, we should therefore have $l=0,\ldots ,-k-2$, so $g(\bar{z})$ is a general polynomial of degree $-k-2$ in $\bar{z}$. It will be convenient to denote such a polynomial of degree $-k-2$ by $P_{(k)}$, with the understanding that the negative degree subscript implies a dependence on $\bar{z}$, rather than $z$. With this notation, the full solution takes the form \begin{equation} \alpha=\kappa^kP_{(k)}(\bar{z})d\bar{z}\; . \label{1forms} \end{equation} Note that the space of degree $-k-2$ polynomials has indeed dimension $-k-1$, as required.\\[2mm] \subsection{Maps between line bundle cohomology on ${\mathbb P}^1$} \label{maps} Calculating Yukawa couplings requires performing a wedge product of bundle-valued forms. It is therefore natural to study how the harmonic forms on $\mathbb{P}^1$ found in the previous sub-section multiply. Recall that we have harmonic $(0,0)$-forms taking values in ${\cal O}_{\mathbb{P}^1}(k)$ for $k\geq 0$ and harmonic $(0,1)$-forms taking values in ${\cal O}_{\mathbb{P}^1}(k)$ for $k\leq -2$. 
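Before multiplying such forms, the patch transformation \eqref{harm01} underlying this counting can itself be confirmed symbolically, treating $w$ and $\bar{w}$ as independent variables; a sketch using sympy:

```python
# Symbolic check of the patch transformation (harm01): treat w and wbar as
# independent symbols, substitute z = 1/w, zbar = 1/wbar and
# dzbar = -dwbar/wbar^2 into kappa^k zbar^l dzbar, and compare with the
# claimed right-hand side for several sample values of k and l.
import sympy as sp

w, wb = sp.symbols('w wbar')
for k in (-2, -3, -4):
    for l in range(0, -k - 1):                        # l = 0, ..., -k-2
        z, zb = 1/w, 1/wb
        lhs = (1 + z*zb)**k * zb**l * (-1/wb**2)      # coefficient of dwbar
        rhs = -z**k * (1 + w*wb)**k * wb**(-k - l - 2)
        assert sp.simplify(lhs - rhs) == 0
```

The check also fails, as it should, outside the range $l=0,\ldots,-k-2$, in line with the dimension count $-k-1$.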
Multiplying two harmonic $(0,0)$-forms, representing classes in $H^0({\mathbb P}^1,{\cal O}_{{\mathbb P}^1}(k))$ and $H^0({\mathbb P}^1,{\cal O}_{{\mathbb P}^1}(l))$ respectively, is straightforward and it leads to another harmonic $(0,0)$-form which represents a class in $H^0({\mathbb P}^1,{\cal O}_{{\mathbb P}^1}(k+l))$. The only other non-trivial case -- the multiplication of a harmonic $(0,0)$-form with a harmonic $(0,1)$-form -- is less straightforward. To be concrete, for $k\leq -2$ and $\delta>0$, we consider a harmonic $(0,1)$-form $\alpha _{(k-\delta)} \in H^1 ({\mathbb P}^1, {\cal O}_{{\mathbb P}^1} (k-\delta))$ and a degree $\delta$ polynomial $p_{(\delta)}$, representing a class in $H^0 ({\mathbb P}^1, {\cal O}_{{\mathbb P}^1} (\delta))$. The product $p_{(\delta)}\alpha _{(k-\delta)}$ is a $(0,1)$-form which represents a class in $H^1 ({\mathbb P}^1, {\cal O}_{{\mathbb P}^1} (k))$, but is not of the form~\eqref{harm01} and, hence, is not harmonic. We would, therefore, like to work out the harmonic representative, denoted $\alpha _{(k)}\in H^1 ({\mathbb P}^1, {\cal O}_{{\mathbb P}^1} (k))$, which is equivalent in cohomology to this product $p_{(\delta)}\alpha _{(k-\delta)}$. This means we should solve the equation \begin{equation} p_{(\delta)} \alpha _{(k-\delta)} + {\bar \partial} s= \alpha _{(k)}\; , \label{prodeqgen} \end{equation} where $s$ is a suitable section of ${\cal O}_{\mathbb{P}^1}(k)$. In general, the section $s$ can be cast into the form \begin{equation} s= \sum_{m \geq -k} S_{(k+m, -m-2)} (z, \bar z) \kappa^{-m}\,, \label{2.14} \end{equation} where $S_{(k+m, -m-2)} (z, \bar z)$ is a polynomial of degree $k+m$ in $z$ and of degree $m$ in $\bar z$. This can be seen by demanding the correct transformation under the transition function~\eqref{transfct}. It turns out that in order to solve Eq.~\eqref{prodeqgen}, we only require the single term with $m=-k+\delta-1$ in this sum for $s$.
Using this observation and the general formula~\eqref{1forms} for harmonic $(0,1)$-forms, we insert the following expressions \begin{eqnarray} \resizebox{0.89\hsize}{!}{$\alpha _{(k-\delta)} =\kappa^{k-\delta} P_{(k- \delta)} (\bar z) d \bar z\;, \quad \alpha _{(k)} =\kappa^{k} Q_{(k)} (\bar z) d \bar z \;,\quad s= \kappa^{k-\delta+1} S_{(\delta-1, k -\delta+1)} (z, \bar z)\,.$} \end{eqnarray} into Eq.~\eqref{prodeqgen} to cast it into the more explicit form \begin{equation} p P +\kappa \partial_{\bar z} S - (-k+\delta-1) z S = \kappa^{\delta} Q\,. \label{prodeq} \end{equation} Here, for simplicity of notation, we have dropped the subscripts indicating degrees. Eq.~\eqref{prodeq} determines the polynomials $Q$ and $S$ for given $p$ and $P$ and can be solved by comparing monomial coefficients. This is relatively easy to do for low degrees and we will discuss a few explicit examples below. For arbitrary degrees, Eq.~\eqref{prodeq} seems surprisingly complicated and it is, therefore, remarkable that a closed solution for $Q$ can be written down. To formulate this solution, we introduce the homogeneous counterparts of the polynomials $p$, $P$, $Q$ and $S$, which we denote as $\tilde{p}, \tilde{P}$, $\tilde{Q}$ and $\tilde{S}$. They depend on the homogeneous coordinates $x^0$, $x^1$ and are obtained from the original polynomials by replacing $z=x^1/x^0$ and multiplying with the appropriate powers of $x^0$ and $\bar{x}^0$. Then, the polynomial $\tilde{Q}$ which solves Eq.~\eqref{prodeq} can be written as \begin{equation} \tilde{Q} ({\bar x}^0, {\bar x}^1) = c_{k-\delta, \delta} \ \tilde{p} (\partial_{ {\bar x}^0}, \partial_{ {\bar x}^1}) \tilde{P} ({\bar x}^0, {\bar x}^1)\,, \quad c_{k-\delta, \delta} =\frac{(-k-1)!}{(\delta-k-1)!}\,. \label{prodsol} \end{equation} Here $\tilde{p} (\partial_{ {\bar x}^0}, \partial_{ {\bar x}^1})$ denotes the polynomial $\tilde{p}$ with the coordinates replaced by the corresponding partial derivatives. 
These derivatives act on the polynomial $\tilde{P}$ in the usual way and thereby lower the degree to the one expected for $\tilde{Q}$. The proof of Eq.~\eqref{prodsol} is given in Appendix~\ref{appendixPn}. Unfortunately, we are not aware at present of a similar closed solution for the polynomial $S$.\\[2mm] While this discussion may have been somewhat technical, the final result is relatively simple and can be summarised as follows. For $k\geq 0$, the harmonic $(0,0)$-forms representing the cohomology $H^0(\mathbb{P}^1,{\cal O}_{\mathbb{P}^1}(k))$ are given by degree $k$ polynomials $P_{(k)}(z)$, which depend on the coordinate $z$. For $k\leq -2$, the harmonic $(0,1)$-forms representing the cohomology $H^1(\mathbb{P}^1,{\cal O}_{\mathbb{P}^1}(k))$ can be identified with degree $-k-2$ polynomials, denoted as $P_{(k)}(\bar{z})$, which depend on $\bar{z}$. The product of two $(0,0)$-forms is simply given by polynomial multiplication, while the product of a $(0,0)$-form and a $(0,1)$-form is performed by using the homogeneous versions of these polynomials and converting the coordinates in the former to partial derivatives which act on the latter. Let us finish this subsection by illustrating the above discussion with two explicit examples.\\[2mm] {\bf Example 1:} Consider the case $k=-3$ and $\delta=1$, so that the relevant forms and associated polynomials are explicitly given by \begin{eqnarray} \alpha_{(-4)}&=&\kappa^{-4}P_{(-4)}(\bar{z})d\bar{z}\;,\quad P_{(-4)}=a_0+a_1\bar{z}+a_2\bar{z}^2 \;, \notag\\ \alpha_{(-3)}&=&\kappa^{-3}Q_{(-3)}d\bar{z}\;,\qquad Q_{(-3)}=b_0+b_1\bar{z} \;, \\ s&=&\kappa^{-3}S_{(0,-5)}\,,\qquad\; S_{(0,-5)}=c_{0,0}+c_{0,1}\bar{z}+c_{0,2}\bar{z}^2+c_{0,3}\bar{z}^3\; , \notag \\ p_{(1)}&=&f_0+f_1 z \;, \notag \end{eqnarray} where $a_i$, $b_i$, $f_i$ and $c_{i,j}$ are constants. 
Inserting these polynomials into Eq.~\eqref{prodeq}, comparing coefficients of the same monomials and solving for the $b_i$ and $c_{i,j}$ in terms of the $a_i$ and $f_i$ results in \begin{eqnarray} Q_{(-3)}&=&\frac{1}{3} \left( 2 a_0 f_0+a_1 f_1+ \left(a_1 f_0+2 a_2 f_1\right)\bar{z}\right) \;,\label{Qres2}\\ S_{(0,-5)}&=&\frac{1}{3}\left( -a_2 f_0 \bar{z}^3+ \left(a_2 f_1-a_1 f_0\right)\bar{z}^2+ \left(a_1 f_1-a_0 f_0\right)\bar{z}+a_0 f_1\right) \;. \end{eqnarray} For the algebraic calculation based on Eq.~\eqref{prodsol}, we start with the homogeneous polynomials \begin{equation} \tilde{p}=f_0 x_0+f_1 x_1\;,\quad \tilde{P}=a_0\bar{x}_0^2+a_1 \bar{x}_0\bar{x}_1+a_2\bar{x}_1^2\;,\quad \tilde{S}=c_{0,0}\bar{x}_0^3+c_{0,1}\bar{x}_0^2\bar{x}_1+c_{0,2}\bar{x}_0\bar{x}_1^2+c_{0,3}\bar{x}_1^3\; . \end{equation} Inserting these into Eq.~\eqref{prodsol} gives \begin{equation} \tilde{Q}=\frac{1}{3} \left( (2 a_0 f_0+a_1 f_1)\bar{x}_0+ \left(a_1 f_0+2 a_2 f_1\right)\bar{x}_1\right)\; , \end{equation} which is indeed the homogeneous version of the polynomial $Q_{(-3)}$ in Eq.~\eqref{Qres2}.\\[2mm] {\bf Example 2:} Let us choose $k=-1$ and $\delta=2$. Since there are no harmonic forms for $k=-1$, we have $Q=0$, while the other forms and polynomials are given by \begin{eqnarray} \alpha _{(-3)} &=& \kappa^{-3} P_{(-3)} (\bar z) d \bar z\,, \quad P_{(-3)}= a_0 + a_1 \bar z \,, \notag \\ s &=&\kappa^{-2}S_{(2,-4)}\,,\qquad\; S_{(2,-4)}=c_{0,0} +c_{0,1} \bar z + c_{0,2} {\bar z}^2 + c_{1,0} z + c_{1,1} |z|^2 + c_{1,2} {\bar z} |z|^2 \,, \notag \\ p_{(2)}&=&p_0 +p_1 z +p_2 z^2 \,. \end{eqnarray} We note that, from~\eqref{prodeqgen}, we now need to solve the equation $p_{(2)} \alpha _{(-3)} =-{\bar \partial} s$, which is similar in structure to Eq.~\eqref{coboundarymap} that determines the co-boundary map. Indeed, we will later find the present example useful to explicitly work out a co-boundary map.
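As an independent cross-check of Example 1, the closed solution \eqref{prodsol} can be implemented directly in sympy, with $k=-3$, $\delta=1$ and hence prefactor $c_{-4,1}=2!/3!=1/3$; a sketch (variable names are ours):

```python
# Cross-check of Example 1: implement the closed solution (prodsol) in sympy,
# with k = -3 and delta = 1, so c_{-4,1} = (-k-1)!/(delta-k-1)! = 2!/3! = 1/3.
import sympy as sp

xb0, xb1 = sp.symbols('xbar0 xbar1')
f0, f1, a0, a1, a2 = sp.symbols('f0 f1 a0 a1 a2')

P = a0*xb0**2 + a1*xb0*xb1 + a2*xb1**2     # homogeneous version of P_(-4)
# ptilde(d/dxbar0, d/dxbar1) applied to P, for ptilde = f0 x^0 + f1 x^1:
Q = sp.Rational(1, 3) * (f0*sp.diff(P, xb0) + f1*sp.diff(P, xb1))
expected = sp.Rational(1, 3) * ((2*a0*f0 + a1*f1)*xb0 + (a1*f0 + 2*a2*f1)*xb1)
assert sp.expand(Q - expected) == 0        # reproduces Q_(-3) of Eq. (Qres2)
```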
Inserting the above polynomials into Eq.~\eqref{prodeq} and comparing coefficients as before leads to \begin{equation} S_{(2,-4)} =\frac{1}{2} (p_1 a_0 + p_2 a_1) - p_0 a_0 \bar z -\frac{1}{2} p_0 a_1 {\bar z}^2 +\frac{1}{2} p_2 a_0 z +p_2 a_1 |z|^2 -\frac{1}{2} (p_0 a_0 + p_1 a_1){ \bar z} |z|^2\,. \label{coboundres} \end{equation} \subsection{Line bundle valued harmonic forms on ${\mathbb P}^1 \times {\mathbb P}^1\times {\mathbb P}^1 \times {\mathbb P}^1 $} \label{maps1} In this sub-section, we generalise the above results for $\mathbb{P}^1$ to the ambient space ${\cal A}= {\mathbb P}^1 \times {\mathbb P}^1\times {\mathbb P}^1 \times {\mathbb P}^1 $. On each ${\mathbb P}^1$, we introduce homogeneous coordinates $(x^0_i, x^1_i)$, where $i=1, \dots, 4$ and cover each ${\mathbb P}^1$ with two standard open sets $U_{(i,\alpha )}=\{[x_i^0:x_i^1]\,|\,x_i^\alpha \neq 0\}$. Further, we introduce affine coordinates $z_i= x^1_i/x^0_i$ on $U_{(i,0)}$ and $w_i= x^0_i/x^1_i$ on $U_{(i,1)}$. On the intersection of $U_{(i,0)}$ and $U_{(i,1)}$ we have $z_i= 1/w_i$. An open cover for the entire space ${\cal A}$ is given by the $16$ sets $U_{(1,\alpha _1)}\times \cdots \times U_{(4,\alpha _4)}$. For practical purposes, we will usually work on the set $U_{(1,0)}\times \cdots \times U_{(4,0)}$ with coordinates $z_1,\ldots,z_4$. For each $\mathbb{P}^1$, we have a Fubini--Study K\"ahler potential and K\"ahler form given by \begin{equation} \mathfrak{K}_i=\frac{i}{2 \pi} \log \kappa_i \,, \quad \kappa_i= 1+ |z_i|^2\,,\quad J_i = \frac{i}{2 \pi \kappa_i^2} dz_i \wedge d {\bar z}_i\ \end{equation} and the K\"ahler cone of ${\cal A}$ is parametrised by $J=\sum_{i=1}^4 t^iJ_i$, with all $t^i>0$. The line bundles on ${\cal A}$ are obtained as the tensor products \begin{equation} {\cal O}_{\cal A}({\bf k})={\cal O}_{\mathbb{P}^1}(k^1)\otimes\cdots\otimes {\cal O}_{\mathbb{P}^1}(k^4) \end{equation} and are, hence, labeled by a four-dimensional integer vector ${\bf k} = (k^1, k^2, k^3, k^4)$. 
Straightforwardly generalising Eq.~\eqref{2.5}, we can introduce a Hermitian structure \begin{equation} H= \prod_{i=1}^4 \kappa_i^{-k^i}\, \label{20.4} \end{equation} on these line bundles. The gauge field and gauge field strength for the associated Chern connection \begin{equation} A= {\bar H}^{-1} \partial {\bar H}=- \sum_{i=1}^4 k^i \partial \log \kappa_i\;,\quad F ={\bar \partial} A =- 2 \pi i \sum_{i=1}^4 k^i J_i\, \label{20.5} \end{equation} lead to the first Chern class \begin{equation} c_1 \left( {\cal O}_{\cal A}({\bf k})\right)= \frac{i}{2 \pi} F = \sum_{i=1}^4 k^i J_i\,. \label{20.5.2} \end{equation} The cohomology for ${\cal K}={\cal O}_{\cal A}({\bf k})$ can be obtained by combining the Bott formula for cohomology on $\mathbb{P}^1$ with the K\"unneth formula \eqref{kunneth}. If any of the integers $k^i$ equals $-1$, all cohomologies of ${\cal K}$ vanish. In all other cases, precisely one cohomology, $H^q({\cal A},{\cal K})$, is non-zero, and $q$ equals the number of negative integers $k^i$. The dimension of this non-vanishing cohomology is given by \begin{equation} h^q({\cal A},{\cal K})=\prod_{i:k^i\geq 0}(k^i+1)\prod_{i:k^i\leq -2}(-k^i-1)\; . \end{equation} Generalising our results for $\mathbb{P}^1$, the harmonic $(0,q)$-forms representing this cohomology can be written as \begin{equation} \alpha _{({\bf k})} = P_{({\bf k})} \prod_{i: k^i \leq -2} \kappa_i^{k^i} d {\bar z}_i\,, \label{akres} \end{equation} where $P_{({\bf k})} $ is a polynomial of degree $k^i$ in $z_i$, provided $k^i \geq 0$, and of degree $-k^i-2$ in ${\bar z}_i$, if $k^i \leq -2$.
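The counting rule just stated is easily automated; a minimal sketch (the function name is ours):

```python
# Minimal sketch of the Bott-plus-Kunneth counting for line bundles on
# A = P^1 x P^1 x P^1 x P^1, following the rules stated above.
def line_bundle_cohomology(k):
    """Return (q, h^q) for O_A(k), or None if all cohomologies vanish."""
    if -1 in k:
        return None                        # a -1 entry kills all cohomology
    q = sum(1 for ki in k if ki < 0)       # number of negative entries
    dim = 1
    for ki in k:
        dim *= (ki + 1) if ki >= 0 else (-ki - 1)
    return q, dim

assert line_bundle_cohomology((1, 2, 0, 3)) == (0, 24)   # all k^i >= 0
assert line_bundle_cohomology((-2, -3, 1, 0)) == (2, 4)  # two negative entries
assert line_bundle_cohomology((0, -1, 2, 2)) is None     # a -1 entry
```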
It is also useful to write down a homogeneous version of these forms, which is given by \begin{equation} \alpha _{({\bf k})} = \tilde{P}_{({\bf k})} \prod_{i: k^i \leq -2} \sigma_i^{k^i} {\bar \mu}_i\,, \label{akreshom} \end{equation} where \begin{equation} \sigma_i=|x_i^0|^2+|x_i^1|^2\,,\qquad \mu_i=\epsilon_{\alpha \beta}\,x_i^\alpha dx_i^\beta\;, \end{equation} and $\tilde{P}_{(\bf k)}$ denotes the homogeneous counterpart of $P_{(\bf k)}$. We would now like to generalise our rule for the multiplication of forms obtained on $\mathbb{P}^1$. In general, we have a map \begin{equation} H^q ({\cal A}, {\cal O}_{{\cal A}} ({\bf k}))\times H^p ({\cal A}, {\cal O}_{{\cal A}} ({\bf l})) \to H^{q+p} ({\cal A}, {\cal O}_{{\cal A}} ({\bf k}+{\bf l})) \label{20.10.1} \end{equation} between cohomologies, induced by the wedge product, and we would like to work out this map for the above harmonic representatives. For a harmonic $(0,q)$-form $\alpha _{({\bf k})}\in H^q({\cal A},{\cal O}_{\cal A}({\bf k}))$ with associated polynomial $P_{({\bf k})}$ and a harmonic $(0,p)$-form $\beta_{({\bf l})}$ with associated polynomial $R_{({\bf l})}$, the wedge product $\alpha _{({\bf k})}\wedge\beta_{({\bf l})}$ is equivalent in cohomology to a harmonic $(0,q+p)$-form, which we denote by $\gamma_{({\bf k}+{\bf l})}\in H^{q+p} ({\cal A}, {\cal O}_{{\cal A}} ({\bf k}+{\bf l}))$ with associated polynomial $Q_{({\bf k}+{\bf l})}$. In general, the relation between those forms can be written as \begin{equation} \alpha _{({\bf k})} \wedge \beta_{({\bf l})} +{\bar \partial} s = \gamma_{({\bf k} +{\bf l})} \, , \label{20.12} \end{equation} for a suitable $(0,p+q-1)$-form $s$ taking values in ${\cal O}_{\cal A}({\bf k}+{\bf l})$.
Our earlier results for $\mathbb{P}^1$ show that the polynomial $Q_{({\bf k}+{\bf l})}$ which determines $\gamma_{({\bf k} +{\bf l})}$ can be directly obtained from $P_{({\bf k})}$ and $R_{({\bf l})}$ by the formula \begin{equation} \tilde{Q}= c_{{\bf k}, {\bf l}} \tilde{ P} \tilde {R}\,, \end{equation} where, as before, $\tilde{ P}, \tilde{R}, \tilde{Q}$ are the homogeneous counterparts of $P, R, Q$ and $c_{{\bf k}, {\bf l}}$ is the appropriate product of numerical factors in Eq.~\eqref{prodsol}. The understanding is that positive degrees in a particular $\mathbb{P}^1$, represented by powers of $x_i^\alpha$, should be converted into derivatives $\partial_{\bar{x}_i^\alpha}$ whenever they act on negative degrees in the same $\mathbb{P}^1$, represented by $\bar{x}_i^\alpha $. When both degrees in $\tilde{P}$ and $\tilde{R}$ are positive for a given $\mathbb{P}^1$, a simple polynomial multiplication should be carried out. Finally, for two negative degrees in the same $\mathbb{P}^1$, the resulting $\tilde{Q}$ vanishes (since there will be a term $d\bar{z}_i \wedge d\bar{z}_i$ in the corresponding wedge product of the forms). \subsection{Line bundles and cohomology on the tetra-quadric} \label{relations} As the final step in our discussion of line bundles and harmonic forms, we need to consider line bundles on the tetra-quadric $X$. Recall that a tetra-quadric resides in the ambient space ${\cal A}=\mathbb{P}^1\times\mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1$ and is defined as the zero locus of a polynomial $p$ of multi-degree $(2,2,2,2)$, which can be seen as a section of the normal bundle \begin{equation} {\cal N}= {\cal O}_{{\cal A}} ({\bf q})\,, \quad {\bf q} = (2, 2, 2, 2)\,. \label{20.14} \end{equation} The tetra-quadric has Hodge numbers $h^{1,1}(X)=4$ and $h^{2,1}(X)=68$.
Later, we will use the freely-acting $\Gamma=\mathbb{Z}_2\times\mathbb{Z}_2$ symmetry of the tetra-quadric, whose generators are given by \begin{equation} g_1=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)\;,\quad g_2=\left(\begin{array}{cc}0&1\\1&0\end{array}\right)\; . \label{g1g2} \end{equation} These matrices act simultaneously on all four pairs of homogeneous coordinates. The quotient $\tilde{X}=X/\Gamma$ is a Calabi-Yau manifold with Hodge numbers $h^{1,1}(\tilde{X})=4$ (since all four K\"ahler forms $J_i$ are $\Gamma$-invariant) and $h^{2,1}(\tilde{X})=20$ (using divisibility of the Euler number). All line bundles on the tetra-quadric can be obtained as restrictions of line bundles on ${\cal A}$, that is \begin{equation} {\cal O}_X({\bf k})={\cal O}_{{\cal A}}({\bf k})|_X\; . \end{equation} As discussed in Section~\ref{coboundary}, the Koszul sequence and its associated long exact sequence provide a close relationship between line bundle cohomology on ${\cal A}$ and $X$, which is summarised by Eq.~\eqref{H1eq}. This equation shows that the cohomology of a line bundle $K={\cal O}_X({\bf k})$ depends on the first and second cohomologies of the ambient space line bundles ${\cal K}={\cal O}_{\cal A}({\bf k})$ and ${\cal N}^*\otimes{\cal K}={\cal O}_{\cal A}({\bf k}-{\bf q})$. As discussed earlier, line bundles on ${\cal A}$ have at most one non-vanishing cohomology and, hence, ${\cal K}$ and ${\cal N}^*\otimes{\cal K}$ have at most one non-zero cohomology each. This leads to the following four cases: \begin{enumerate} \item[1)]\underline{$H^2 ({\cal A}, {\cal N}^* \otimes {\cal K})=0$ and $H^2 ({\cal A}, {\cal K})=0$}\\ In this case, $H^1 (X, K)$ is given by $(0,1)$-forms $\alpha _{({\bf k})}$, as in Eq.~\eqref{akres}, with associated polynomials $P_{({\bf k})}$ and, in the terminology of Section~\ref{coboundary}, the cohomology representatives are of type 1. 
If $H^1({\cal A},{\cal N}^*\otimes {\cal K})$ is non-trivial we have to compute the co-kernel in Eq.~\eqref{H1eq}, which amounts to imposing the identification $\tilde{P}_{({\bf k})} \sim \tilde{P}_{({\bf k})} +\tilde{p} \tilde{Q}_{({\bf k}- {\bf q})}$ for arbitrary polynomials $\tilde{Q}_{({\bf k}- {\bf q})}$ of multi-degree ${\bf k}-{\bf q}$. Recall that the tilde denotes the homogeneous version of the polynomials and that coordinates appearing with positive degree have to be converted into derivatives whenever they act on negative degree coordinates, as discussed at the end of the last sub-section. Since the coefficients of $p$ depend on the complex structure, this identification leads to complex structure dependence of the representatives. \item[2)] \underline{$H^1 ({\cal A}, {\cal N}^* \otimes {\cal K})=0$ and $H^1 ({\cal A}, {\cal K})=0$}\\ In this case, $H^1 (X, K)$ is represented by $(0,2)$-forms $\alpha _{({\bf k}- {\bf q})}$, with associated polynomials $P_{({\bf k}-{\bf q})}$, satisfying $p \alpha _{({\bf k}- {\bf q})}= {\bar \partial} \beta_{({\bf k})}$ for a suitable $(0,1)$-form $\beta_{({\bf k})}$. Using the terminology of Section~\ref{coboundary}, this corresponds to type 2 representatives. If $H^2 ({\cal A}, {\cal K})\neq 0$, we have to work out the kernel in Eq.~\eqref{H1eq}, which amounts to imposing the condition $\tilde{p}\tilde{P}_{({\bf k}-{\bf q})}=0$. This leads to explicit complex structure dependence of the representatives. \item[3)] \underline{$H^1 ({\cal A}, {\cal N}^* \otimes {\cal K})=0$ and $H^2 ({\cal A}, {\cal K})=0$}\\ This is a combination of the previous two cases, where $H^1 (X, K)$ is a direct sum of type 1 and type 2 contributions. \item[4)] \underline{$H^2 ({\cal A}, {\cal N}^* \otimes {\cal K})=0$ and $H^1 ({\cal A}, {\cal K})=0$}\\ In this case, $H^1(X, K)=0$. 
\end{enumerate} \section{Yukawa couplings on the tetra-quadric and some toy examples}\label{toyex} We have now collected all relevant technical details on line bundles and harmonic bundle-valued forms on the tetra-quadric and are ready to apply these to concrete calculations of Yukawa couplings. To begin, we collect some general statements on Yukawa couplings on the tetra-quadric -- including the precise relation between an explicit analytic calculation of the integral and a corresponding algebraic calculation -- and then move on to work out Yukawa couplings for a number of toy examples. In the next section, we compute the Yukawa couplings for a quasi-realistic standard model on the tetra-quadric. \subsection{General properties of Yukawa couplings} \label{comments} As we have discussed earlier, we can distinguish two types of harmonic bundle-valued $(0,1)$-forms on the tetra-quadric: forms of type 1, which descend from harmonic $(0,1)$-forms on the ambient space, and forms of type 2, which descend from harmonic $(0,2)$-forms on the ambient space. The Yukawa couplings involve three harmonic $(0,1)$-forms and, as shown in Section~\ref{coboundary}, their structure depends on the types of these $(0,1)$-forms. Let us consider a line bundle model on the tetra-quadric, specified by line bundles $L_a$, where $a=1,\ldots ,n$, and a Yukawa coupling with three associated line bundles $K_1={\cal O}_X({\bf k}_1)$, $K_2={\cal O}_X({\bf k}_2)$ and $K_3={\cal O}_X({\bf k}_3)$, which are related to $L_a$ as in Table~\ref{tab:KLrel}. Consider three harmonic $(0,1)$-forms $\nu_i\in H^1(X,K_i)$. We have seen that the Yukawa coupling vanishes if these three forms are of type 1. 
The next simplest case, when two of the forms, say $\nu_1$ and $\nu_2$, are of type 1 and descend from ambient space harmonic $(0,1)$-forms $\hat{\nu}_1\in H^1({\cal A},{\cal O}_{\cal A}({\bf k}_1))$ and $\hat{\nu}_2\in H^1({\cal A},{\cal O}_{\cal A}({\bf k}_2))$, while $\nu_3$ is of type 2 and descends from a harmonic ambient space $(0,2)$-form $\hat{\omega}_3\in H^2 ({\cal A}, {\cal O}_{\cal A} ({\bf k}_3 -{\bf q}))$, leads to the particularly simple formula \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = - \frac{1}{2 \pi i}\int_{{\mathbb C}^4} d^4 z \wedge \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\omega}_3\,, \label{Yuk112copy} \end{equation} for the Yukawa coupling. This follows from Eq.~\eqref{Yuk112}, together with Eqs.~\eqref{10.2}, which show that the form $\mu$ is given by \begin{equation} \mu = d z_1 \wedge dz_2 \wedge dz_3 \wedge dz_4 = d^4 z\,. \label{muform} \end{equation} The integral over ${\cal A}$ can then be thought of as the integral over ${\mathbb C}^4$, provided the forms $ \hat{\nu}_1, \hat{\nu}_2, \hat{\omega}_3$ transform to the other patches as sections of the appropriate line bundles. Since $\hat{\nu}_1$ and $\hat{\nu}_2$ are $(0,1)$-forms, the vectors ${\bf k}_1$ and ${\bf k}_2$ should contain precisely one entry $\leq -2$ each, while the vector ${\bf k}_3$ contains precisely two entries $\leq 0$, in line with $\hat{\omega}_3$ being a $(0,2)$-form. Further, recall from Table~\ref{tab:KLrel} that ${\cal K}_1\otimes{\cal K}_2\otimes{\cal K}_3={\cal O}_{\cal A}$ and, hence, ${\bf k}_1+{\bf k}_2+{\bf k}_3=0$. This means that the four non-positive entries in these vectors must all arise in different $\mathbb{P}^1$ directions. Hence, we can assume, possibly after re-ordering, that $k_1^1 \leq -2$, $k_2^2 \leq -2$ and $k_3^3, k_3^4 \leq 0$, while all other entries are positive. 
With these conventions, we can apply Eq.~\eqref{akres} to write down the relevant forms as \begin{equation} \hat{\nu}_1 = \kappa_1^{k_1^1} P_{({\bf k}_1)} d {\bar z}_1\,, \qquad \hat{\nu}_2 = \kappa_2^{k_2^2} R_{({\bf k}_2)} d {\bar z}_2\,, \qquad \hat{\omega}_3 = \kappa_3^{k_3^3 -2 } \kappa_4^{k_3^4 -2 } T_{({\bf k}_3 -{\bf q} )} d {\bar z}_3 \wedge d {\bar z}_4\,. \end{equation} Inserting these forms into Eq.~\eqref{Yuk112copy} leads to the integral \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = - \frac{1}{2 \pi i}\int_{{\mathbb C}^4} d^4 z \ d^4 {\bar z} \ \kappa_1^{k_1^1} \kappa_2^{k_2^2}\kappa_3^{k_3^3 -2 } \kappa_4^{k_3^4 -2 } P_{({\bf k}_1)} R_{({\bf k}_2)} T_{({\bf k}_3 -{\bf q} )} \,. \label{Yuk112spec} \end{equation} There are two ways of evaluating this integral. Firstly, we can explicitly insert the factors $\kappa_i=1+|z_i|^2$ and the polynomials and simply integrate, using polar coordinates in each $\mathbb{C}$ plane. All terms with non-matching powers of $z_i$ and $\bar{z}_i$ vanish due to the angular integration. The remaining terms all reduce to the standard integrals \begin{eqnarray} \int_{\mathbb{C}}\frac{|z|^{2q}}{\kappa^p}dz\,d\bar{z}=2\pi i I_{p,q}\;, \,\,\,\, I_{p,q}=2\int_0^\infty dr\frac{ r^{2q+1}}{(1+r^2)^p}=\frac{q!}{(p-1)\cdots (p-q-1)} \;,\label{stdint} \end{eqnarray} where the condition $p \geq q+2$ is satisfied in all cases of interest. Alternatively, we can work out the integral~\eqref{Yuk112spec} ``algebraically''. To do this, we first note that the integrand $\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\omega}_3$ represents an element of the one-dimensional cohomology $H^4({\cal A},{\cal N}^*)$. It can, therefore, be written as $\mu (P, R, T)\kappa_1^{-2} \kappa_2^{-2} \kappa_3^{-2} \kappa_4^{-2} d^4 {\bar z}$, where \begin{equation} \mu (P, R, T)= \tilde{P} \tilde{R} \tilde{T} \label{Yukalg} \end{equation} is the product of the three associated polynomials (carried out as discussed in Section~\ref{maps1}) and simply a complex number.
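The closed-form expression for $I_{p,q}$ in Eq.~\eqref{stdint} is straightforward to confirm with a computer algebra system. The following sympy sketch is our own cross-check, not part of the derivation; it compares the radial integral with the quoted closed form over a small range of $(p,q)$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def I_direct(p, q):
    # the radial integral defining I_{p,q}
    return 2 * sp.integrate(r**(2*q + 1) / (1 + r**2)**p, (r, 0, sp.oo))

def I_closed(p, q):
    # q! / ((p-1)(p-2)...(p-q-1)), the closed form quoted in the text
    return sp.factorial(q) / sp.prod([p - j for j in range(1, q + 2)])

# check the formula in the convergent range p >= q + 2
for p in range(2, 6):
    for q in range(p - 1):
        assert sp.simplify(I_direct(p, q) - I_closed(p, q)) == 0

print(I_closed(2, 0), I_closed(3, 0), I_closed(3, 1))  # 1 1/2 1/2
```

The three printed values are precisely the ones used in the $E_6$ example below.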
Inserting this into Eq.~\eqref{Yuk112copy} shows that \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = 8 i \pi^3 c \mu (P, R, T)\; , \label{lmurel} \end{equation} where the numerical factor $c$ follows from Eq.~\eqref{prodsol} and is explicitly given by \begin{equation} c=c_{k_1^1, -k_1^1-2} \ c_{k_2^2, -k_2^2-2}\ c_{k_3^3-2, -k_3^3} \ c_{k_3^4-2, -k_3^4}\, . \label{cgen} \end{equation} In conclusion, up to an overall numerical (and explicitly computed) factor, the Yukawa couplings are simply given by Eq.~\eqref{Yukalg} and can, hence, be obtained by a multiplication of the associated polynomials. In the general case, the Yukawa couplings are given by the integral~\eqref{Yukamb1}, which can be written as \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) =- \frac{1}{2 \pi i}\int_{{\mathbb C}^4} d^4 z \wedge [\hat{\omega}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3- \hat{\nu}_1 \wedge \hat{\omega}_2 \wedge \hat{\nu}_3 +\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\omega}_3 ]\, , \label{Yukgen4} \end{equation} with the $(0,1)$-forms $\hat{\nu}_i$ and the $(0,2)$-forms $\hat{\omega}_i$ in this expression related by \begin{equation} {\bar \partial} \hat{\nu}_i=p \hat{\omega}_i \,. \end{equation} If the Yukawa coupling depends on more than one form of type 2, we have to solve this last equation for some of the $\hat{\nu}_i$ in terms of $\hat{\omega}_i$. This can be done explicitly for specific examples, as we will demonstrate later, but as discussed in Section~\ref{maps}, we are currently not aware of a general solution. \subsection{An example with vanishing Yukawa couplings} \label{vanishing} We would like to consider a rank four line bundle sum on the tetra-quadric specified by the line bundles \begin{equation} \begin{array}{ll} L_1 = {\cal O}_X (-1, 0, 0, 1)\,, \quad & L_2= {\cal O}_X (0, -2, 1, 3)\,, \\ L_3 = {\cal O}_X (0, 0, 1, -3)\,, \quad & L_4= {\cal O}_X (1, 2, -2, -1)\,.
\end{array} \end{equation} This bundle leads to a four-dimensional theory with gauge group $SO(10)\times S(U(1)^4)$. Table~\ref{tab:so10} contains the basic information required to determine the multiplet content of such a theory; together with the cohomology results \begin{equation} \begin{array}{llllll} h^{^{\!\bullet}}(X,L_2)&=&(0,8,0,0)\, ,&h^{^{\!\bullet}}(X,L_3)&=&(0,4,0,0) \, ,\\[3pt] h^{^{\!\bullet}}(X,L_1\otimes L_4)&=&(0,3,3,0)\, ,&h^{^{\!\bullet}}(X,L_2\otimes L_3)&=&(0,3,3,0) \, ,\\[3pt] h^{^{\!\bullet}}(X,L_1\otimes L_2^*)&=&(0,0,12,0) \, ,&h^{^{\!\bullet}}(X,L_1\otimes L_3^*)&=&(0,0,12,0) \, ,\\[3pt] h^{^{\!\bullet}}(X,L_2\otimes L_3^*)&=&(0,7,15,0)\, ,&h^{^{\!\bullet}}(X,L_2\otimes L_4^*)&=&(0,60,0,0)\, , \\[3pt] h^{^{\!\bullet}}(X,L_3\otimes L_4^*)&=&(0,0,36,0)\, , & \end{array} \end{equation} we find the upstairs spectrum \begin{equation} \resizebox{1\hsize}{!}{$ 8 \ {\bf 16}_2\,, \ 4 \ {\bf 16}_3\,, \ 3 \ {\bf 10}_{1,4}\,, \ 3 \ {\bf 10}_{2,3}\,, \ 12 \ {\bf 1}_{2,-1}\,, \ 12 \ {\bf 1}_{3,-1}\,, \ 7 \ {\bf 1}_{2,-3}\,, \ 15 \ {\bf 1}_{3,-2}\,, \ 60 \ {\bf 1}_{2,-4}\,, \ 36 \ {\bf 1}_{4,-3}\,.$} \end{equation} This spectrum is designed to produce a standard model with three families upon dividing by a freely-acting symmetry of order four. Such symmetries are indeed available for the tetra-quadric; however, unfortunately, for group-theoretical reasons these symmetries cannot break the $SO(10)$ gauge group to the standard model group. For this reason, the above model should be considered a toy example. Nevertheless, it is useful to calculate the Yukawa couplings for this model, in order to gain some experience with our formalism. Specifically, we are interested in couplings of the type \begin{equation} \lambda_{IJK} {\bf 10}^{(I)}_{1, 4} {\bf 16}^{(J)}_{2} {\bf 16}^{(K)}_{3} \,, \end{equation} which are allowed by the $SO(10)\times S(U(1)^4)$ gauge symmetry.
Following Table~\ref{tab:KLrel}, the required harmonic forms are contained in the first cohomologies of the line bundles \begin{equation} \resizebox{1.01\hsize}{!}{$ K_1= L_1 \otimes L_4 = {\cal O}_X (0, 2, -2, 0)\,,\quad K_2= L_2 ={\cal O}_X (0, -2, 1, 3)\,, \quad K_3= L_3 ={\cal O}_X (0, 0, 1, -3)\,. $} \end{equation} These line bundles satisfy $H^1 (X, K_i)\cong H^1 ({\cal A}, {\cal K}_i)$ and $H^2 ({\cal A}, {\cal N}^* \otimes {\cal K}_i)=0$, where ${\cal K}_i$ are the corresponding ambient space line bundles with $K_i = {\cal K}_i|_X$. This shows (see Section~\ref{coboundary}) that all three harmonic forms which enter the Yukawa integral are of type 1. From our general arguments, this means that the Yukawa couplings vanish, so \begin{equation} \lambda_{IJK} =0\; . \end{equation} Note that this vanishing is, apparently, not caused by a symmetry in the low-energy theory, but happens due to quasi-topological reasons related to the cohomology of the line bundles involved. (However, we do not rule out that a symmetry which explains this vanishing result may be found.) \subsection{An $E_6$ example} \label{E6example} For a simple example with gauge group $E_6\times S(U(1)^3)$, consider the following choice of line bundles \begin{equation} \resizebox{1.01\hsize}{!}{$ L_1=K_1={\cal O}_X(-2,0,1,0)\,,\quad L_2=K_2={\cal O}_X(0,-2,0,1)\,,\quad L_3=K_3={\cal O}_X(2,2,-1,-1)\; .$} \end{equation} The above line bundles $K_i$ may also arise as appropriate tensor products for other gauge groups, see Table~\ref{tab:KLrel}, and the subsequent calculation also applies to these cases. However, for definiteness, we will focus on $E_6\times S(U(1)^3)$ and the corresponding multiplets, as summarised in Table~\ref{tab:e6}. 
The cohomology results \begin{equation} h^\bullet(K_1)=(0,2,0,0)\,,\quad h^\bullet(K_2)= (0,2,0,0)\,,\quad h^\bullet(K_3)=(0,4,0,0) \end{equation} show that we have a spectrum \begin{equation} 2\; {\bf 27}_1\,,\;2\;{\bf 27}_2\,,\;4\;{\bf 27}_3 \end{equation} plus $E_6$ singlets which are irrelevant to the present discussion. We are interested in the Yukawa couplings \begin{equation} \lambda_{IJK}{\bf 27}_1^{(I)}\,{\bf 27}_2^{(J)}\,{\bf 27}_3^{(K)}\; . \end{equation} Clearly, the first two line bundles are of type 1 with the corresponding harmonic $(0,1)$-forms contained in $H^1({\cal A},{\cal K}_1)$ and $H^1({\cal A},{\cal K}_2)$. However, $K_3$ is of type 2 and the associated harmonic $(0,2)$-forms represent the cohomology $H^2({\cal A},{\cal N}^*\otimes{\cal K}_3)$. Altogether, using Eq.~\eqref{akres}, this means the relevant harmonic forms and polynomials are \begin{equation} \begin{array}{lll} \hat{\nu}_1=\kappa_1^{-2}P_{(-2,0,1,0)}d\bar{z}_1 \, ,&\quad&P_{(-2,0,1,0)}=p_0+p_1z_3 \, , \\ \hat{\nu}_2=\kappa_2^{-2}Q_{(0,-2,0,1)}d\bar{z}_2 \, ,&\quad&Q_{(0,-2,0,1)}=q_0+q_1z_4 \, ,\\ \hat{\omega}_3=\kappa_3^{-3}\kappa_4^{-3}R_{(0,0,-3,-3)}d\bar{z}_3\wedge d\bar{z}_4 \, , &\quad& R_{(0,0,-3,-3)}=r_0+r_1\bar{z}_3+r_2\bar{z}_4+r_3\bar{z}_3\bar{z}_4 \, , \end{array} \end{equation} where $p_I$, $q_I$ and $r_I$ are complex coefficients parametrising the various ${\bf 27}$ multiplets. Multiplying the three polynomials and discarding terms with different powers of $z_i$ and $\bar{z}_i$ gives \begin{equation} PQR=p_0q_0r_0+p_0q_1r_2|z_4|^2+p_1q_0r_1|z_3|^2+p_1q_1r_3|z_3|^2|z_4|^2+\mbox{ non-matching terms} \, . \end{equation} This can be directly inserted into the integral~\eqref{Yuk112spec} and, together with the standard integrals~\eqref{stdint} (specifically, $I_{2,0}=1$, $I_{3,0}=1/2$, $I_{3,1}=1/2$), we find \begin{equation} \lambda(P,Q,R)=2 i \pi^3\left(p_0q_0r_0+p_0q_1r_2+p_1q_0r_1+p_1q_1r_3\right)\; .
\label{exresint1} \end{equation} Alternatively, we can use the algebraic calculation method based on Eq.~\eqref{Yukalg}. For simplicity of notation, we denote the four sets of homogeneous ambient space coordinates by $(x_i^\alpha )=((x_0,x_1),(y_0,y_1),(u_0,u_1),(v_0,v_1))$ from here on. Then, the homogeneous versions of the three polynomials read explicitly \begin{equation} \tilde{P}=p_0u_{0}+p_1u_{1}\;,\quad \tilde{Q}=q_0v_{0}+q_1v_{1}\;,\quad \tilde{R}=r_0\bar{u}_{0}\bar{v}_{0}+r_1\bar{v}_{0}\bar{u}_{1}+r_2\bar{u}_{0}\bar{v}_{1}+r_3\bar{u}_{1}\bar{v}_{1}\; . \end{equation} Their product is given by \begin{eqnarray} \mu(P,Q,R)&=&\left(p_0{\partial}_{\bar{u}_0}+p_1{\partial}_{\bar{u}_1}\right)\left(q_0{\partial}_{\bar{v}_0}+q_1{\partial}_{\bar{v}_1}\right) \left(r_0\bar{u}_{0}\bar{v}_{0}+r_1\bar{v}_{0}\bar{u}_{1}+r_2\bar{u}_{0}\bar{v}_{1}+r_3\bar{u}_{1}\bar{v}_{1}\right) \notag \\ &=&p_0q_0r_0+p_0q_1r_2+p_1q_0r_1+p_1q_1r_3\; , \end{eqnarray} where we have converted the coordinates in $\tilde{P}$ and $\tilde{Q}$ into derivatives, as required by our general rules. Inserting the correct numerical coefficient from Eqs.~\eqref{lmurel} and \eqref{cgen}, this indeed coincides with the result~\eqref{exresint1} from direct evaluation of the integral. If we choose a standard basis where each of the coefficients $p_I$, $q_I$ and $r_I$ equals one while all others vanish, we can write down the explicit Yukawa matrices \begin{equation} ( \lambda_{1JK})=2 i \pi^3 \left(\begin{array}{llll}1&0&0&0\\0&0&1&0\end{array}\right)\,,\quad ( \lambda_{2JK})=2 i \pi^3 \left(\begin{array}{llll}0&1&0&0\\0&0&0&1\end{array}\right)\; . \end{equation} Both matrices have maximal rank and are independent of complex structure.
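The derivative rule used in this computation is mechanical and easily delegated to a computer algebra system. The sympy sketch below is our own cross-check (the symbols `ub0`, `vb0`, etc. stand for $\bar u_0$, $\bar v_0$, \dots); it reproduces the polynomial product $\mu(P,Q,R)$:

```python
import sympy as sp

p0, p1, q0, q1 = sp.symbols('p0 p1 q0 q1')
r0, r1, r2, r3 = sp.symbols('r0:4')
ub0, ub1, vb0, vb1 = sp.symbols('ub0 ub1 vb0 vb1')  # stand-ins for ubar, vbar

# harmonic representative R-tilde of the type 2 form
R = r0*ub0*vb0 + r1*vb0*ub1 + r2*ub0*vb1 + r3*ub1*vb1

# P = p0*u0 + p1*u1 and Q = q0*v0 + q1*v1 act as derivative operators on R
mu = p0*sp.diff(R, ub0) + p1*sp.diff(R, ub1)
mu = q0*sp.diff(mu, vb0) + q1*sp.diff(mu, vb1)

# agrees with the second line of the displayed equation
assert sp.expand(mu - (p0*q0*r0 + p0*q1*r2 + p1*q0*r1 + p1*q1*r3)) == 0
```

The same few lines, with the coefficients specialised to a standard basis, reproduce the entries of the Yukawa matrices $(\lambda_{1JK})$ and $(\lambda_{2JK})$ up to the overall factor $2i\pi^3$.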
\subsection{An example with complex structure dependence}\label{csexample} We would like to discuss the Yukawa couplings related to the three line bundles \begin{equation} K_1={\cal O}_X(0,-2,1,1)\,,\quad K_2={\cal O}_X(-4,0,1,1)\,,\quad K_3={\cal O}_X(4,2,-2,-2)\; , \end{equation} with cohomologies \begin{equation} h^\bullet(K_1)=(0,4,0,0)\,,\quad h^\bullet(K_2)=(0,12,0,0)\,,\quad h^\bullet(K_3)=(0,12,0,0)\; . \end{equation} It will be convenient to think about this situation as arising from an $SU(5)\times S(U(1)^5)$ model, defined by five line bundles $L_a$, with $K_1=L_1\otimes L_2$ and $K_2=L_3\otimes L_4$ and $K_3=L_5$. Then, using the correspondence from Table~\ref{tab:KLrel}, the $SU(5)\times S(U(1)^5)$ spectrum related to $K_1$, $K_2$ and $K_3$ is \begin{equation} 4\;\overline{\bf 5}_{1,2}\;,\quad12\;\overline{\bf 5}_{3,4}\;,\quad 12\;{\bf 10}_5\; . \end{equation} \vspace{-2mm} \noindent We will later introduce a $\mathbb{Z}_2\times\mathbb{Z}_2$ Wilson line to break to the standard model group, in which case, as we will see, the above spectrum reduces to \begin{equation} H_{1,2}\;,\quad 3\;d_{3,4}\;,\quad 3\;Q_5\; . \end{equation} \vspace{-2mm} \noindent We are interested in computing the d-quark Yukawa couplings \begin{equation} \lambda^{(d)}_{JK}H_{1,2}d_{3,4}^JQ_5^K\; . \end{equation} \vspace{-2mm} \noindent However, for now, we construct the relevant bundle-valued forms in the upstairs theory, and restrict to the $\mathbb{Z}_2\times\mathbb{Z}_2$-quotient later. The line bundles $K_1$ and $K_2$ are both of type 1, with $H^1(X,K_1)\cong H^1(X,{\cal K}_1)$ and $H^1(X,K_2)\cong H^1(X,{\cal K}_2)$, while $K_3$ is of type 2 and \begin{equation} H^1(X,K_3)\cong {\rm Ker}(H^2({\cal A},{\cal N}^*\otimes{\cal K}_3)\stackrel{p}{\rightarrow }H^2({\cal A},{\cal K}_3) )\; . \label{ex3ker} \end{equation} Hence, following Eq.~\eqref{akreshom}, the relevant ambient space forms and polynomials can be written in terms of homogeneous coordinates as \begin{equation} \!\! 
\resizebox{1.05\hsize}{!}{$ \begin{array}{llll} 4 \ \overline{\mathbf{5}}_{1,2} &\!\!\rightarrow & \!\!\hat{\nu}_1=\sigma_2^{-2}\tilde{Q}_{(0,-2,1,1)}\bar{\mu}_2 \, , & \tilde{Q}\in{\rm Span}(u_0v_0,u_0v_1,u_1v_0,u_1v_1) \, ,\\[1mm] 12 \ \overline{\mathbf{5}}_{3,4} &\!\!\rightarrow &\!\! \hat{\nu}_2=\sigma_1^{-4}\tilde{R}_{(-4,0,1,1)}\bar{\mu}_1 \, , & \tilde{R}\in{\rm Span}(\bar{x}_0^2,\bar{x}_0\bar{x}_1,\bar{x}_1^2)\,{\rm Span}(u_0,u_1)\,{\rm Span}(v_0,v_1) \, , \\[1mm] 12 \ \mathbf{10}_5 &\!\!\rightarrow& \!\!\hat{\omega}_3=\sigma_3^{-2}\sigma_4^{-2}\tilde{S}_{(2,0,-4,-4)}\bar{\mu}_3\wedge \bar{\mu}_4 \, , &\tilde{S}\in{\rm Span}(x_0^2,x_0x_1,x_1^2)\,{\rm Span}(\bar{u}_0^2,\bar{u}_0\bar{u}_1,\bar{u}_1^2)\\[1mm] &&& \;\;\;\;\;\;\;{\rm Span}(\bar{v}_0^2,\bar{v}_0\bar{v}_1,\bar{v}_1^2)\; , \end{array}$} \end{equation} where ${\rm Span}(\cdot)$ denotes the complex linear span of the listed monomials, and the juxtaposition of two spans stands for the span of all pairwise products. The polynomial $\tilde{S}$ lies in a $27$-dimensional space which, in line with Eq.~\eqref{ex3ker}, is mapped into the $15$-dimensional space \begin{equation} {\rm Span}(x_0^4,x_0^3x_1,x_0^2x_1^2,x_0x_1^3,x_1^4)\,{\rm Span}(y_0^2,y_0y_1,y_1^2)\; . \end{equation} We have to ensure that $\tilde{S}$ resides in the kernel of this map, which amounts to imposing the condition \begin{equation} \tilde{p}\tilde{S}=0\; . \label{pS0} \end{equation} This leads to a $12$-dimensional space, as expected. These results are quite complicated due to the large number of multiplets. To simplify matters, it is useful to quotient by the freely-acting $\Gamma=\mathbb{Z}_2\times\mathbb{Z}_2$ symmetry with generators~\eqref{g1g2}. Representations of this symmetry are denoted by a pair of charges, $(q_1,q_2)$, where $q_i\in\{0,1\}$.
We choose a trivial equivariant structure for all line bundles and, following the discussion around Eq.~\eqref{WLcharges}, a Wilson line specified by $\chi_2=(1,1)$, $\chi_3=(0,0)$ with associated multiplet charges \begin{equation} \chi_H=\chi_2=(1,1)\;,\quad \chi_d=\chi_3^*=(0,0)\;,\quad \chi_Q=\chi_2\otimes \chi_3=(1,1)\; . \end{equation} Taking into account that the differentials $\mu_i$ carry charge $(1,1)$ under the $\mathbb{Z}_2\times\mathbb{Z}_2$ symmetry, this choice means we should project onto the $(0,0)$ states for $\tilde{Q}$, and the $(1,1)$ states for $\tilde{R}$ and $\tilde{S}$. This leads to the explicit $\mathbb{Z}_2\times\mathbb{Z}_2$-equivariant polynomials \begin{equation} \resizebox{1.02\hsize}{!}{$ \begin{array}{lll} \tilde{Q}&=&u_0v_0+u_1v_1 \, ,\\[1.5mm] \tilde{R}&=&a_3 \left(u_0 v_0 \bar{x}_0 \bar{x}_1-u_1 v_1 \bar{x}_0 \bar{x}_1\right)+a_1 \left(u_0 v_1 \bar{x}_0^2-u_1 v_0 \bar{x}_1^2\right)+a_2 \left(u_1 v_0 \bar{x}_0^2-u_0 v_1 \bar{x}_1^2\right) \, , \\[1.5mm] \tilde{S}&=&b_4 \left(x_0^2 \bar{u}_1^2 \bar{v}_0 \bar{v}_1-x_1^2 \bar{u}_0^2 \bar{v}_0 \bar{v}_1\right)+b_1 \left(x_0^2 \bar{u}_0^2 \bar{v}_0 \bar{v}_1-x_1^2 \bar{u}_1^2 \bar{v}_0 \bar{v}_1\right)+b_6 \left(x_0 x_1 \bar{u}_0^2 \bar{v}_1^2-x_0 x_1 \bar{u}_1^2 \bar{v}_0^2\right)+ \\[1.5mm] && b_3 \left(x_0^2 \bar{u}_0 \bar{u}_1 \bar{v}_1^2-x_1^2 \bar{u}_0 \bar{u}_1 \bar{v}_0^2\right)+b_2 \left(x_0^2 \bar{u}_0 \bar{u}_1 \bar{v}_0^2-x_1^2 \bar{u}_0 \bar{u}_1 \bar{v}_1^2\right)+b_5 \left(x_0 x_1 \bar{u}_0^2 \bar{v}_0^2-x_0 x_1 \bar{u}_1^2 \bar{v}_1^2\right)\, . \end{array}$} \end{equation} Hence, we are left with a single Higgs multiplet, $H_{1,2}$, three d-quarks, $d_{3,4}^I$, with parameters ${\bf a}=(a_I)$, and six left-handed quarks $Q_5^J$ with parameters ${\bf b}=(b_J)$. In terms of these parameters, the Yukawa couplings are given by \begin{equation} \mu(Q,R,S)=\tilde{Q}\tilde{R}\tilde{S}=8 \left(a_1 \left(b_1+b_3\right)+a_2 \left(b_2+b_4\right)+a_3 b_5\right)\; .
\label{yukex3} \end{equation} However, for the ``physical'' result we still have to find the kernel~\eqref{ex3ker}, that is, compute the vectors ${\bf b}$ which satisfy Eq.~\eqref{pS0}. To this end, we write down the most general tetra-quadric polynomial consistent with the $\Gamma=\mathbb{Z}_2\times \mathbb{Z}_2$ symmetry: \begin{eqnarray} \tilde{p}&=&C_1 u_0 u_1 v_0 v_1 x_0 x_1 y_0 y_1+C_2 (u_1^2 x_0 x_1 y_0 y_1 v_0^2+u_0^2 v_1^2 x_0x_1 y_0 y_1)+\nonumber\\ &&C_3 (u_0^2 x_0 x_1 y_0 y_1 v_0^2+u_1^2 v_1^2 x_0 x_1 y_0 y_1)+C_{14}(u_0 u_1 v_1^2 y_0 y_1 x_0^2+u_0 u_1 v_0^2 x_1^2 y_0 y_1)+\nonumber\\ &&C_{13} (u_1^2 v_0 v_1 y_0 y_1 x_0^2+u_0^2 v_0 v_1 x_1^2 y_0y_1)+C_{16} (u_0^2 v_0 v_1 y_0 y_1 x_0^2+u_1^2 v_0 v_1 x_1^2 y_0 y_1)+\nonumber\\ &&C_{15} (u_0 u_1 v_0^2 y_0 y_1 x_0^2+u_0 u_1 v_1^2 x_1^2 y_0 y_1)+C_{12} (u_1^2 v_1^2 x_1^2 y_0^2+u_0^2 v_0^2 x_0^2 y_1^2)+\nonumber\\ &&C_9(u_0^2 v_1^2 x_1^2 y_0^2+u_1^2 v_0^2 x_0^2 y_1^2)+C_{10} (u_0 u_1 v_0 v_1 x_1^2 y_0^2+u_0 u_1 v_0 v_1 x_0^2 y_1^2)+\nonumber\\ &&C_{11}(u_1^2 v_0^2 x_1^2 y_0^2+u_0^2 v_1^2 x_0^2 y_1^2)+ C_8 (u_0^2 v_0^2 x_1^2 y_0^2+u_1^2 v_1^2 x_0^2 y_1^2)+\nonumber\\ &&C_5(u_0 u_1 v_1^2 x_0 x_1 y_0^2+u_0 u_1 v_0^2 x_0 x_1 y_1^2)+C_4 (u_1^2 v_0 v_1 x_0 x_1 y_0^2+u_0^2 v_0 v_1 x_0 x_1y_1^2)+\nonumber\\ &&C_7(u_0^2 v_0 v_1 x_0 x_1 y_0^2+u_1^2 v_0 v_1 x_0 x_1 y_1^2)+C_6 (u_0 u_1 v_0^2 x_0 x_1 y_0^2+u_0 u_1 v_1^2 x_0 x_1y_1^2)+\nonumber\\ &&C_{17} (u_1^2 v_1^2 x_0^2 y_0^2+u_0^2 v_0^2 x_1^2 y_1^2)+C_{20} (u_0^2 v_1^2 x_0^2 y_0^2+u_1^2 v_0^2 x_1^2y_1^2)+\nonumber\\ &&C_{19}(u_0 u_1 v_0 v_1 x_0^2 y_0^2+u_0 u_1 v_0 v_1 x_1^2 y_1^2)+C_{18} (u_1^2 v_0^2 x_0^2 y_0^2+u_0^2 v_1^2 x_1^2 y_1^2)+\nonumber\\ &&C_{21} (u_0^2 v_0^2 x_0^2 y_0^2+u_1^2 v_1^2 x_1^2 y_1^2)\, .\label{pgen} \end{eqnarray} The dimension of the complex structure moduli space for $\tilde{X}=X/(\mathbb{Z}_2\times \mathbb{Z}_2)$ is given by $h^{2,1}(\tilde{X})=20$.
The $21$ coefficients $C_i$ in the above polynomial provide projective (local) coordinates on this moduli space. Using this polynomial, Eq.~\eqref{pS0} is solved by vectors ${\bf b}$ satisfying \begin{equation} M{\bf b}=0\,,\qquad M=\left( \begin{array}{cccccc} \frac{C_{16}}{2} & \frac{C_{15}}{2} & \frac{C_{14}}{2} & \frac{C_{13}}{2} & 0 & 0 \\ \frac{C_7}{2} & \frac{C_6}{2} & \frac{C_5}{2} & \frac{C_4}{2} & C_{21}-C_{17} & C_{20}-C_{18} \\ \frac{C_4}{2} & \frac{C_5}{2} & \frac{C_6}{2} & \frac{C_7}{2} & C_{12}-C_8 & C_{11}-C_9 \\ \end{array} \right)\; . \end{equation} The matrix $M$ indeed has a (generically) three-dimensional kernel, but its basis vectors ${\bf v}_I$, where $I=1,2,3$, are very complicated functions of the complex structure moduli. In principle, this basis can be computed, and ${\bf b}$ can then be written as \begin{equation} {\bf b}=\sum_{I}\beta_I{\bf v}_I \, , \label{bspec} \end{equation} where the three $\beta_I$ now parametrise the three left-handed quark families. Inserting this result into Eq.~\eqref{yukex3} gives the desired result for the Yukawa couplings, and it can be shown that the rank of the Yukawa matrix $\lambda_{IJ}^{(d)}$ is three at generic loci in the complex structure moduli space.
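The algebraic product in Eq.~\eqref{yukex3} can itself be cross-checked symbolically. The sketch below is our own verification (with `xb0` etc. standing for the conjugate coordinates): it expands $\tilde Q\tilde R\tilde S$ and contracts matching powers of barred and unbarred coordinates with the factorial weights implied by the derivative rule of Section~\ref{maps1}:

```python
import sympy as sp

# affine stand-ins for the homogeneous coordinates and their conjugates
x0, x1, u0, u1, v0, v1 = sp.symbols('x0 x1 u0 u1 v0 v1')
xb0, xb1, ub0, ub1, vb0, vb1 = sp.symbols('xb0 xb1 ub0 ub1 vb0 vb1')
a1, a2, a3 = sp.symbols('a1 a2 a3')
b1, b2, b3, b4, b5, b6 = sp.symbols('b1:7')

Q = u0*v0 + u1*v1
R = (a3*(u0*v0 - u1*v1)*xb0*xb1
     + a1*(u0*v1*xb0**2 - u1*v0*xb1**2)
     + a2*(u1*v0*xb0**2 - u0*v1*xb1**2))
S = (b1*(x0**2*ub0**2 - x1**2*ub1**2)*vb0*vb1
     + b2*(x0**2*vb0**2 - x1**2*vb1**2)*ub0*ub1
     + b3*(x0**2*vb1**2 - x1**2*vb0**2)*ub0*ub1
     + b4*(x0**2*ub1**2 - x1**2*ub0**2)*vb0*vb1
     + b5*x0*x1*(ub0**2*vb0**2 - ub1**2*vb1**2)
     + b6*x0*x1*(ub0**2*vb1**2 - ub1**2*vb0**2))

pairs = [(x0, xb0), (x1, xb1), (u0, ub0), (u1, ub1), (v0, vb0), (v1, vb1)]

def contract(expr):
    """Pair z^a against zbar^a with a factor a! per coordinate, as implied
    by the rule that positive powers act as derivatives; mismatched
    powers contribute zero."""
    total = sp.Integer(0)
    for term in sp.expand(expr).as_ordered_terms():
        rest, factor = term, sp.Integer(1)
        for z, zb in pairs:
            dz, dzb = sp.degree(term, z), sp.degree(term, zb)
            if dz != dzb:
                factor = sp.Integer(0)
                break
            factor *= sp.factorial(dz)
            rest = sp.cancel(rest / (z**dz * zb**dz))
        total += factor * rest
    return sp.expand(total)

mu = contract(Q*R*S)
# reproduces mu(Q,R,S) = 8*(a1*(b1+b3) + a2*(b2+b4) + a3*b5)
assert sp.expand(mu - 8*(a1*(b1 + b3) + a2*(b2 + b4) + a3*b5)) == 0
```

This confirms the coefficient pattern quoted in Eq.~\eqref{yukex3} before any kernel condition is imposed.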
In order to obtain a more explicit result, we restrict to a five-dimensional sub-locus of our $20$-dimensional complex structure moduli space, described by polynomials of the form \begin{eqnarray} \tilde{p}_s&=&c_1 u_0 u_1 v_0 v_1 x_0 x_1 y_0 y_1+c_2 (u_0^2 v_0 v_1 x_0^2 y_0 y_1+u_1^2 v_0 v_1x_0^2 y_0 y_1+u_0 u_1 v_0^2 x_1 x_0 y_0^2+\nonumber\\ &&u_0 u_1 v_1^2 x_1 x_0 y_0^2+u_0 u_1 v_0^2 x_1 x_0 y_1^2+u_0 u_1 v_1^2 x_1 x_0 y_1^2+u_0^2 v_0 v_1 x_1^2 y_0 y_1 + \nonumber\\ && u_1^2 v_0 v_1 x_1^2 y_0 y_1)+c_5 (u_0^2 v_1^2 x_0^2 y_0^2+u_0^2 v_0^2 x_1^2 y_0^2+u_1^2 v_1^2 x_0^2 y_1^2+u_1^2 v_0^2 x_1^2 y_1^2)+\nonumber\\ && c_4 (u_0^2 v_0^2 x_0 x_1 y_0 y_1 - u_1^2 v_0^2 x_0 x_1 y_0 y_1+ u_0 u_1 v_1 v_0 x_0^2 y_0^2-u_0 u_1 v_1 v_0 x_1^2y_0^2- \nonumber\\ && u_0 u_1 v_1 v_0 x_0^2 y_1^2+u_0 u_1 v_1 v_0 x_1^2 y_1^2-u_0^2 v_1^2 x_0 x_1 y_0 y_1+u_1^2 v_1^2 x_0 x_1 y_0 y_1)+\nonumber\\ &&c_3(u_1^2 v_0^2 x_0^2 y_0^2+u_1^2 v_1^2 x_1^2 y_0^2+u_0^2 v_0^2 x_0^2 y_1^2+u_0^2 v_1^2 x_1^2 y_1^2)+c_6 (u_0^2 v_0^2 x_0^2 y_0^2+\nonumber\\ &&u_1^2 v_1^2 x_0^2 y_0^2+u_1^2 v_0^2 x_1^2 y_0^2+u_0^2 v_1^2 x_1^2 y_0^2+u_1^2 v_0^2 x_0^2 y_1^2+u_0^2 v_1^2 x_0^2 y_1^2+\nonumber\\ && u_0^2 v_0^2 x_1^2 y_1^2+u_1^2v_1^2 x_1^2 y_1^2)\label{pspec}\; . \end{eqnarray} In fact, this polynomial is the most general consistent with the freely-acting $\mathbb{Z}_4\times\mathbb{Z}_4$ symmetry of the tetra-quadric, which contains the $\mathbb{Z}_2\times \mathbb{Z}_2$ symmetry used previously as a sub-group. The equation $\tilde{p}_s\tilde{S}=0$ for the kernel now reads \begin{equation}\label{Mex3} M{\bf b}=0\,,\qquad M=\left( \begin{array}{cccccc} c_2 & 0 & 0 & c_2 & 0 & 0 \\ 0 & c_2 & c_2 & 0 & 0 & 2 c_5-2 c_3 \\ 0 & c_2 & c_2 & 0 & 2 c_3-2 c_5 & 0 \\ \end{array} \right)\; . 
\end{equation} Generically, the dimension of this kernel is three and a basis can be readily found as \begin{equation} \begin{array}{c} {\bf v}_1=\frac{1}{8}\left(0,2 \left(c_3-c_5\right),0,0,-c_2,c_2\right)^T, \quad {\bf v}_2=\frac{1}{8}\left(-c_2,0,0,c_2,0,0\right)^T, \\[1.5mm] {\bf v}_3=\frac{1}{8}\left(0,-c_2,c_2,0,0,0\right)^T. \end{array} \end{equation} Inserting these vectors into Eq.~\eqref{bspec} and \eqref{yukex3} and choosing a standard basis for the coefficients ${\bf a}$ and ${\boldsymbol\beta}$ then gives the Yukawa couplings \begin{equation} \lambda^{(d)}=i \pi^3c \left( \begin{array}{ccc} 0 & -c_2 & c_2 \\ 2 c_3-2 c_5 & c_2 & -c_2 \\ -c_2 & 0 & 0 \\ \end{array} \right)\, ,\label{yukex4} \end{equation} where $c$ is the numerical factor from Eq.~\eqref{lmurel}. Evidently, the generic rank of this matrix is two. This shows that the rank of the Yukawa matrix can vary in complex structure moduli space and can reduce at specific loci. In the present case, it is generically of rank three in the $20$-dimensional complex structure moduli space described by the polynomials~\eqref{pgen}. On the five-dimensional sub-locus, described by the polynomials~\eqref{pspec}, the rank reduces to two. If we specialise further to the four-dimensional locus where $c_2=0$, the rank of \eqref{yukex4} reduces to one. It turns out that the tetra-quadric~\eqref{pspec} remains generically smooth on this sub-locus. However, we have to be careful since the rank of the matrix $M$ in Eq.~\eqref{Mex3} also depends on the complex structure. In fact, for $c_2=0$ the rank of $M$ reduces to two so that the dimension of the kernel increases from three to four. Hence, on this sub-locus the spectrum in the low-energy theory enhances from three left-handed quark multiplets to four (plus one mirror left-handed quark multiplet, since the index remains unchanged).
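Both the kernel basis quoted above and the rank jump at $c_2=0$ can be verified directly; a short sympy check (illustrative only):

```python
import sympy as sp

c2, c3, c5 = sp.symbols('c2 c3 c5')

# The matrix M of the Z4 x Z4 symmetric case
M = sp.Matrix([
    [c2, 0,  0,  c2, 0,           0],
    [0,  c2, c2, 0,  0,           2*c5 - 2*c3],
    [0,  c2, c2, 0,  2*c3 - 2*c5, 0],
])

# Kernel basis vectors v_1, v_2, v_3 quoted in the text (overall factor 1/8)
v1 = sp.Matrix([0, 2*(c3 - c5), 0, 0, -c2, c2]) / 8
v2 = sp.Matrix([-c2, 0, 0, c2, 0, 0]) / 8
v3 = sp.Matrix([0, -c2, c2, 0, 0, 0]) / 8
for v in (v1, v2, v3):
    assert (M * v).expand() == sp.zeros(3, 1)

# Generic rank is 3 (three quark families); on the sub-locus c2 = 0 it
# drops to 2, so the kernel dimension jumps from three to four.
assert M.subs({c2: 1, c3: 2, c5: 5}).rank() == 3   # arbitrary generic values
assert M.subs({c2: 0, c3: 2, c5: 5}).rank() == 2
print("kernel and rank checks passed")
```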
A basis of the kernel is then given by ${\bf v}_I={\bf e}_I/8$, where $I=1,\ldots ,4$, and ${\bf e}_I$ are the six-dimensional standard unit vectors. From Eq.~\eqref{bspec} and \eqref{yukex3}, this leads to the Yukawa couplings \begin{equation} \lambda^{(d)}=i \pi^3 c \left( \begin{array}{cccc} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{array} \right)\, . \end{equation} Hence, after properly including the additional multiplet, the rank of the Yukawa matrix remains two. \section{Yukawa couplings in a quasi-realistic model on the tetra-quadric}\label{realex} In the previous section, we have applied our methods to a number of toy examples and we have seen cases with vanishing and non-vanishing Yukawa couplings, both with and without complex-structure dependence. We would now like to calculate Yukawa couplings in a quasi-realistic model on the tetra-quadric, that is, a model with gauge group $SU(3)\times SU(2)\times U(1)$ (plus additional $U(1)$ symmetries which are Green-Schwarz anomalous or can be spontaneously broken) and the exact MSSM spectrum (plus moduli fields uncharged under the standard model group, including bundle moduli singlets). This model appears in the standard model data base~\cite{Anderson:2011ns,Anderson:2012yf} and has been further analysed in Refs.~\cite{Buchbinder:2013dna, Buchbinder:2014qda,Buchbinder:2014sya,Buchbinder:2014qca}. We begin by reviewing the basic structure of this model and then calculate the two types of non-vanishing Yukawa couplings which arise, that is, the standard up-quark Yukawa couplings and the singlet Yukawa couplings of the form $SL\overline{H}$, with bundle moduli singlets $S$. 
\subsection{The model} The upstairs model is based on a rank five line bundle sum, $V=\bigoplus_{a=1}^5L_a$, on the tetra-quadric, with the five line bundles explicitly given by \begin{equation}\label{lbs5} \begin{array}{lll} L_1={\cal O}_X(-1,0,0,1) \, ,& L_2= {\cal O}_X(-1,-3,2,2) \, , & L_3 ={\cal O}_X(0,1,-1,0) \, ,\\ L_4 ={\cal O}_X(1,1,-1,-1) \, ,& L_5={\cal O}_X(1,1,0,-2)\; . & \end{array} \end{equation} Hence, the low-energy GUT group is $SU(5)\times S(U(1)^5)$. The non-zero cohomologies of line bundles appearing in $V$, $\wedge^2V$ and $V\otimes V^*$ are \begin{equation} \begin{array}{llllll} h^{^{\!\bullet}}(X,L_2)&=&(0,8,0,0)\, ,&h^{^{\!\bullet}}(X,L_5)&=&(0,4,0,0) \, ,\\[3pt] h^{^{\!\bullet}}(X,L_2\otimes L_4)&=&(0,4,0,0)\, ,&h^{^{\!\bullet}}(X,L_2\otimes L_5)&=&(0,3,3,0) \, ,\\[3pt] h^{^{\!\bullet}}(X,L_4\otimes L_5)&=&(0,8,0,0) \, ,&h^{^{\!\bullet}}(X,L_1\otimes L_2^*)&=&(0,0,12,0) \, ,\\[3pt] h^{^{\!\bullet}}(X,L_1\otimes L_5^*)&=&(0,0,12,0)\, ,&h^{^{\!\bullet}}(X,L_2\otimes L_3^*)&=&(0,20,0,0)\, , \\[3pt] h^{^{\!\bullet}}(X,L_2\otimes L_4^*)&=&(0,12,0,0)\, , &h^{^{\!\bullet}}(X,L_3\otimes L_5^*)&=&(0,0,4,0)\; . 
\end{array} \end{equation} Following Table~\ref{tab:su5}, these cohomologies give rise to the GUT spectrum \begin{equation} 8\, {\bf 10}_2\,,\; 4\,{\bf 10}_5\,,\;4\,\overline{\bf 5}_{2,4}\,,\;3\,\overline{\bf 5}_{2,5}^H\,,\;8\,\overline{\bf 5}_{4,5}\,,\; 3{\bf 5}_{2,5}^{\overline{H}}\,,\; 12\,{\bf 1}_{2,1}\,,\;12\,{\bf 1}_{5,1}\,,\;20\,{\bf 1}_{2,3}\,,\;12\,{\bf 1}_{2,4}\,,\;4\,{\bf 1}_{5,3}\; .\label{gutspec} \end{equation} At the GUT level, the only superpotential terms allowed by the gauge symmetry are \begin{equation} W=\lambda_{IJK} {\bf 5}_{2,5}^{(I)}{\bf 10}_2^{(J)}{\bf 10}_5^{(K)}+\rho_{IJK} {\bf 1}_{2,4}^{(I)}\overline{\bf 5}_{4,5}^{(J)}{\bf 5}_{2,5}^{(K)}\; , \label{Wgut} \end{equation} where the indices $I,J,K\ldots $ run over various ranges, as indicated by the multiplicities in the spectrum~\eqref{gutspec}, and $\lambda_{IJK}$ and $\rho_{IJK}$ are the couplings we would like to calculate. Evidently, the above GUT model has $12$ families of quarks and leptons, three vector-like ${\bf 5}^{\overline{H}}$--$\overline{\bf 5}^H$ pairs, which can account for the Higgs multiplets, and a spectrum of bundle moduli singlets. This is a promising upstairs spectrum which may lead to a downstairs standard model upon dividing by a freely-acting symmetry of order four. Indeed, this can be accomplished using the $\mathbb{Z}_2\times\mathbb{Z}_2$ symmetry with generators~\eqref{g1g2}, a choice of Wilson line specified by $\chi_2=(0,1)$ and $\chi_3=(0,0)$ and a trivial equivariant structure for all line bundles. The relevant GUT multiplets branch as ${\bf 10}\rightarrow (Q,u,e)$, $\overline{\bf 5}\rightarrow (d,L)$, $\overline{\bf 5}^H\rightarrow (T,H)$ and ${\bf 5}^{\overline{H}}\rightarrow (\bar{T},\bar{H})$ (where $T$ and $\bar{T}$ are the Higgs triplets, to be projected out). 
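Before descending to the quotient, one quick consistency check on the line bundle sum~\eqref{lbs5}: for the structure group to reduce to $S(U(1)^5)$, the first Chern class of $V$ must vanish, that is, the five degree vectors must sum to zero in each of the four $\mathbb{P}^1$ factors. A few lines of Python confirm this:

```python
# Degrees of the five line bundles L_1,...,L_5 of the model
degrees = [
    (-1,  0,  0,  1),
    (-1, -3,  2,  2),
    ( 0,  1, -1,  0),
    ( 1,  1, -1, -1),
    ( 1,  1,  0, -2),
]

# c_1(V) vanishes iff the degrees sum to zero factor by factor
c1 = tuple(sum(col) for col in zip(*degrees))
assert c1 == (0, 0, 0, 0)
print(c1)  # (0, 0, 0, 0)
```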
From Eq.~\eqref{WLcharges}, these standard model multiplets carry the Wilson line charges \begin{equation} \begin{array}{lllll} \chi_Q=\chi_2\otimes\chi_3=(0,1) \, , &\quad&\chi_u=\chi_3^2=(0,0) \, , &\quad&\chi_e=\chi_2^2=(0,0) \, ,\\ \chi_d=\chi_3^*=(0,0) \, , &\quad&\chi_L=\chi_2^*=(0,1) \, ,&\quad&\chi_H=\chi_2^*=(0,1)\, ,\\ \chi_{\overline{H}}=\chi_2=(0,1) \, ,&\quad&\chi_T=\chi_3^*=(0,0)\, ,&\quad&\chi_{\overline{T}}=\chi_3=(0,0)\; . \end{array} \end{equation} Applying the rule~\eqref{equivcoh} for this choice of charges then leads to the downstairs spectrum \begin{equation} \resizebox{1\hsize}{!}{$ 2\, (Q,u,e)_2,\; (Q,u,e)_5,\;(d,L)_{2,4},\;2\,(d,L)_{4,5},\; H_{2,5},\;\overline{H}_{2,5},\; 3\,{\bf 1}_{2,1},\;3\,{\bf 1}_{5,1},\;5\,{\bf 1}_{2,3},\;3\,{\bf 1}_{2,4},\;{\bf 1}_{5,3},$} \label{smspec} \end{equation} a perfect MSSM spectrum plus additional bundle moduli singlets. Ordering the quarks as $(Q^{(I)})=(Q_2^1,Q_2^2,Q_5)$ and $(u^{(I)})=(u_2^1,u_2^2,u_5)$, the downstairs analogue of the superpotential~\eqref{Wgut} can be written as \begin{equation} W=\lambda_{IJ}^{(u)}\overline{H}_{2,5}u^{(I)}Q^{(J)}+\rho_{IJ} {\bf 1}_{2,4}^{(I)}L_{4,5}^{(J)}\overline{H}_{2,5}\; . \label{Wsm} \end{equation} The up-Yukawa matrix $\lambda^{(u)}$ is further constrained by the $S(U(1)^5)$ symmetry and must be of the form \begin{equation} \lambda^{(u)}=\left(\begin{array}{lll}0&0&a\\0&0&b\\ a'&b'&0\end{array}\right)\; . \label{uppattern} \end{equation} However, it is not yet clear that the entries $a$, $b$, $a'$, $b'$ of this matrix are non-zero and that the rank of the up-Yukawa matrix is indeed two, as the pattern in \eqref{uppattern} suggests. This is the question we will answer in the next sub-section. The $3\times 2$ singlet coupling matrix $\rho$ is unconstrained by gauge symmetry and evidently plays an important role for the existence of a massless Higgs doublet pair, away from the line bundle locus.
More precisely, if \begin{equation} \langle \rho_{IJ}{\bf 1}_{2,4}^{(I)}\rangle \end{equation} is non-zero, then the Higgs pair (where a combination of the lepton multiplets plays the role of the down Higgs) receives a large mass and disappears from the spectrum. At the line bundle locus, we have $\langle {\bf 1}_{2,4}^{(I)}\rangle=0$, and the Higgs pair is massless, consistent with the result of our cohomology calculation. However, once we move away from the line bundle locus such that $\langle {\bf 1}_{2,4}^{(I)}\rangle\neq 0$,\footnote{Note that we can turn on all the available singlets except ${\bf 1}_{2, 4}^{(I)}$ and keep the Higgs pair massless. As was shown in~Ref.~\cite{Buchbinder:2014qda}, this deformation leads to a standard model with global $B-L$ symmetry.} the Higgs pair may become massive, depending on the structure of the couplings $\rho_{IJ}$. In fact, in Ref.~\cite{Buchbinder:2014qda} we have verified -- by performing a cohomology calculation for the associated non-Abelian bundles -- that the Higgs pair does indeed become massive for generic complex structure, once $\langle {\bf 1}_{2,4}^{(I)}\rangle\neq 0$. This suggests that at least some of the singlet couplings $\rho_{IJ}$ are non-zero, generically. Below, we will confirm this expectation by explicitly calculating the couplings $\rho_{IJ}$. \subsection{Up Yukawa coupling} To calculate the up Yukawa couplings, we begin with the upstairs GUT model and focus on the first term in the superpotential~\eqref{Wgut}. 
The line bundles and ambient space harmonic forms (see Eq.~\eqref{akreshom}) for these multiplets are \begin{equation} \begin{array}{lllll} 3\;{\bf 5}^H_{2,5}&\longrightarrow&K_1=L_2^*\otimes L_5^* \, ,&\quad&\hat{\nu}_1=\sigma_3^{-2}\tilde{Q}_{(0,2,-2,0)}\bar{\mu}_3 \, ,\\ 4\;{\bf 10}_2&\longrightarrow&K_2=L_5 \, , &\quad&\hat{\nu}_2=\sigma_4^{-2}\tilde{R}_{(1,1,0,-2)}\bar{\mu}_4 \, , \\ 8\;{\bf 10}_5&\longrightarrow&K_3=L_2 \, , &\quad&\hat{\omega}=\sigma_1^{-3}\sigma_2^{-5}\tilde{S}_{(-3,-5,0,0)}\bar{\mu}_1\wedge \bar{\mu}_2\; , \end{array} \end{equation} with associated polynomials \begin{eqnarray} \tilde{Q}&=&q_0y_0^2+q_1y_0y_1+q_2y_1^2 \, , \label{Qt}\\ \tilde{R}&=&r_0x_0y_0+r_1x_1y_0+r_2x_0y_1+r_3x_1y_1 \, ,\\ \tilde{S}&=&s_0\bar{x}_0\bar{y}_0^3+s_1\bar{x}_0\bar{y}_0^2\bar{y}_1+s_2\bar{x}_0\bar{y}_0\bar{y}_1^2+s_3\bar{x}_0\bar{y}_1^3+s_4\bar{x}_1\bar{y}_0^3+ s_5\bar{x}_1\bar{y}_0^2\bar{y}_1+ \notag \\ && s_6\bar{x}_1\bar{y}_0\bar{y}_1^2+s_7\bar{x}_1\bar{y}_1^3\; , \end{eqnarray} and coefficients $q_I$, $r_I$ and $s_I$ parametrising the multiplets. Evidently, $K_1$ and $K_2$ are of type 1, while $K_3$ is of type 2, so we can proceed with the algebraic calculation explained in Section~\ref{comments}. Converting everything to holomorphic coordinates for simplicity of notation, we have \begin{eqnarray} \mu(Q,R,S)&=&\left(q_0\partial_{y_0}^2+q_1\partial_{y_0}\partial_{y_1}+q_2\partial_{y_1}^2\right) \left(r_0\partial_{x_0}\partial_{y_0}+r_1\partial_{x_1}\partial_{y_0}+r_2\partial_{x_0}\partial_{y_1}+r_3\partial_{x_1}\partial_{y_1}\right)\nonumber\\ && \big(s_0x_0y_0^3+s_1x_0y_0^2y_1+s_2x_0y_0y_1^2+s_3x_0y_1^3+s_4x_1y_0^3+ s_5x_1y_0^2y_1+ \nonumber\\ && \,\,\, s_6x_1y_0y_1^2+s_7x_1y_1^3\big) \nonumber\\ &=&2\left[3q_0r_0s_0+3q_0r_1s_4+q_0r_2s_1+q_0r_3s_5+q_1r_0s_1+q_1r_1s_5+\right.\nonumber\\ &&\left.\quad q_1r_2s_2+q_1r_3s_6+q_2r_0s_2+q_2r_1s_6+3q_2r_2s_3+3q_2r_3s_7\right]\; . 
\end{eqnarray} Inserting standard choices for the coefficients then leads to the couplings $\lambda_{IJK}$ in the superpotential~\eqref{Wgut}. In particular, we see that these couplings are just numbers, that is, they are independent of complex structure.\\[2mm] For a simpler and physically more meaningful result, we should consider the downstairs theory. This means we have to extract, from the above polynomials $\tilde{Q}$, $\tilde{R}$ and $\tilde{S}$, the $\mathbb{Z}_2\times\mathbb{Z}_2$ equivariant parts. Remembering that the differentials $\mu_i$ carry charge $(1,1)$ under $\mathbb{Z}_2\times\mathbb{Z}_2$, while the $\sigma_i$ are invariant, this leads to \begin{eqnarray} \bar{H}&:&\tilde{Q}_{\bar{H}}=y_0y_1 \, ,\label{b1}\\ Q_5&:&\tilde{R}_{Q_5}=y_0x_1+y_1x_0 \, ,\\ u_5&:&\tilde{R}_{u_5}=y_0x_1-y_1x_0 \, ,\\ Q_2^\alpha&:&\tilde{S}_{Q_2}=-x_0y_0^3+x_1y_1^3\,,\;-x_0y_0y_1^2+x_1y_1y_0^2 \, ,\\ u_2^\alpha&:&\tilde{S}_{u_2}=x_0y_0^3+x_1y_1^3\,,\;x_0y_0y_1^2+x_1y_1y_0^2 \, .\label{b5} \end{eqnarray} To carry out the algebraic calculation, we first note that \begin{equation} \lambda(Q,R,S)=\frac{i \pi^3}{24}\mu(Q,R,S)\; , \end{equation} where the additional factor of $1/4$ relative to Eq.~\eqref{lmurel} accounts for the fact that we are integrating over the upstairs manifold $X$, while the actual calculation should be carried out on the quotient $X/\Gamma$. We find \begin{equation} \mu(\bar{H},u_5,Q_2^\alpha)=\left(\partial_{y_0}\partial_{y_1}\right)\left(\partial_{y_0}\partial_{x_1}-\partial_{y_1}\partial_{x_0}\right) \left( \begin{array}{l}-x_0y_0^3+x_1y_1^3\\-x_0y_0y_1^2+x_1y_1y_0^2\end{array}\right)=\left(\begin{array}{l}0\\4\end{array}\right) \end{equation} and \begin{equation} \mu(\bar{H},u_2^\alpha,Q_5)=\left(\partial_{y_0}\partial_{y_1}\right)\left(\partial_{y_0}\partial_{x_1}+\partial_{y_1}\partial_{x_0}\right) \left( \begin{array}{l}x_0y_0^3+x_1y_1^3\\ x_0y_0y_1^2+x_1y_1y_0^2\end{array}\right)=\left(\begin{array}{l}0\\4\end{array}\right) \, .
\end{equation} Combining these results leads to the up Yukawa matrix \begin{equation} \label{upyukawamatrix} \lambda^{(u)}=\frac{i \pi^3}{6}\left(\begin{array}{lll}0&0&0\\0&0&1\\0&1&0\end{array}\right)\; . \end{equation} We have, therefore, shown that the up Yukawa matrix has indeed rank 2, as suggested by the general structure~\eqref{uppattern}. In addition, we see that these Yukawa couplings are independent of complex structure. This happens because the cohomologies of the above line bundles $K_i$ have a simple representation in terms of ambient space cohomologies, without any kernel or co-kernel operations required. It is interesting to point out that the Yukawa matrix obtained in Eq.~\eqref{upyukawamatrix} does not match its counterpart from particle physics very accurately (for example, in Eq.~\eqref{upyukawamatrix}, the charm- and top-quarks seem to have the same mass). This is not necessarily an indication that the model has failed since, first of all, the fields are not normalised and, in addition, non-perturbative effects have not yet been taken into account. Such effects are generated by instantonic strings wrapped around curves and they are proportional to $\textrm{exp}(-n_i T^i)$, where $T^i$ are K\"ahler moduli and $n_i$ are positive integers parametrising the curve. The very small but non-zero mass of the up-quark could be obtained from such non-perturbative effects. Similarly, these effects could explain the great disparity between the masses of the charm-quark and top-quark. \subsection{Singlet-Higgs-lepton coupling} To calculate the singlet Yukawa coupling, we start with the upstairs theory as before and focus on the second term in the superpotential~\eqref{Wgut}.
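Before proceeding, note that the up-sector evaluations of the previous subsection are purely algebraic: each polynomial on the operator side is converted into a constant-coefficient differential operator and applied to the remaining polynomial. They can therefore be checked independently with a few lines of sympy (the helper `mu` is our own illustrative implementation, not notation from the text):

```python
import sympy as sp

x0, x1, y0, y1 = sp.symbols('x0 x1 y0 y1')

def mu(op_polys, target):
    """Turn each polynomial in op_polys into a differential operator
    (coordinate -> partial derivative) and apply it to `target`."""
    expr = target
    for p in op_polys:
        new = 0
        for monom, coeff in sp.Poly(p, x0, x1, y0, y1).terms():
            term = expr
            for var, power in zip((x0, x1, y0, y1), monom):
                term = sp.diff(term, var, power)
            new += coeff * term
        expr = new
    return sp.expand(expr)

Q_H  = y0*y1                       # equivariant polynomial for H-bar
R_u5 = y0*x1 - y1*x0               # ... for u_5
R_Q5 = y0*x1 + y1*x0               # ... for Q_5
S_Q2 = [-x0*y0**3 + x1*y1**3, -x0*y0*y1**2 + x1*y1*y0**2]
S_u2 = [ x0*y0**3 + x1*y1**3,  x0*y0*y1**2 + x1*y1*y0**2]

print([mu((Q_H, R_u5), s) for s in S_Q2])  # [0, 4]
print([mu((Q_H, R_Q5), s) for s in S_u2])  # [0, 4]
```

This reproduces the two column vectors $(0,4)^T$ which feed into Eq.~\eqref{upyukawamatrix}.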
The relevant line bundles and forms are \begin{equation}\label{ex4forms} \begin{array}{lllll} 12\;{\bf 1}_{2,4}&\longrightarrow&K_1=L_2\otimes L_4^* \, , &\quad& \hat{\omega}_1=\kappa_1^{-4}\kappa_2^{-6}Q_{(-4,-6,1,1)}d\bar{z}_1\wedge d\bar{z}_2 \, , \\[1mm] 8\;\overline{\bf 5}_{4,5}&\longrightarrow&K_2=L_4\otimes L_5 \, , &\quad&\hat{\omega}_2=\kappa_3^{-3}\kappa_4^{-5}R_{(0,0,-3,-5)}d\bar{z}_3\wedge d\bar{z}_4 \, ,\\[1mm] 4\;{\bf 5}_{2,5}^{\overline{H}}&\longrightarrow&K_3=L_2^*\otimes L_5^* \, , &\quad& \hat{\nu}_3=\kappa_3^{-2}S_{(0,2,-2,0)}d\bar{z}_3\; . \end{array} \end{equation} There are two additional complications, compared to the previous calculation, evident from this list of forms. First of all, the singlet space is defined as the kernel \begin{equation} {\rm Ker}\left(H^2({\cal A},{\cal N}^*\otimes {\cal K}_1)\stackrel{p}{\rightarrow} H^2({\cal A},{\cal K}_1)\right) \end{equation} of a map between a $60$- and a $48$-dimensional space. These dimensions are quite large, but we will improve on this shortly by taking the $\mathbb{Z}_2\times\mathbb{Z}_2$ quotient. At any rate, we should impose the constraint $\tilde{p}\tilde{Q}=0$ on the polynomials $Q$ in order to work out this kernel, and this will lead to complex structure dependence. Secondly, two line bundles, $K_1$ and $K_2$, are of type 2, which means that we will have to work with the more general Eq.~\eqref{Yukgen4} for the Yukawa couplings. Given the differentials $d\bar{z}_i$ which appear in \eqref{ex4forms}, only the term proportional to $\hat{\omega}_1\wedge\hat{\nu}_2\wedge\hat{\nu}_3$ can contribute to the integral~\eqref{Yukgen4}. This means we need to determine the $(0,1)$-forms $\hat{\nu}_2$ satisfying \begin{equation} \bar{\partial}\hat{\nu}_2=p\hat{\omega}_2\; .
\end{equation} To do this, we write down the two relevant polynomials \begin{equation} R_{(0,0,-3,-5)}=r_0+r_1\bar{z}_3\;,\quad p=p_0+p_1z_3+p_2z_3^2 \; , \label{Rsplit} \end{equation} with the $z_3$-dependence made explicit and apply the result~\eqref{coboundres}, which reads \begin{equation} {\cal R}=-\frac{1}{2}(p_1r_0+p_2r_1)+p_0r_0\bar{z}_3+\frac{1}{2}p_0r_1\bar{z}_3^2-\frac{1}{2}p_2r_0z_3-p_2r_1|z_3|^2+\frac{1}{2}(p_0r_0+p_1r_1)\bar{z}_3|z_3|^2 \; .\label{Rdef} \end{equation} Then, the desired $(0,1)$-form $\hat{\nu}_2$ can be written as \begin{equation} \hat{\nu}_2=\kappa_3^{-2}\kappa_4^{-5}{\cal R}d\bar{z}_4\; . \end{equation} Using these results for the forms in the basic formula~\eqref{Yukgen4} for the Yukawa couplings, we find \begin{equation} \lambda(\nu_1,\nu_2,\nu_3)=-\frac{1}{2 \pi i}\int_{\mathbb{C}^4}\frac{Q{\cal R}S}{\kappa_1^4\kappa_2^6\kappa_3^4\kappa_4^5}d^4z\,d^4\bar{z}\; . \label{yukexample3} \end{equation} To simplify the calculation, we descend to the downstairs theory and divide by the $\mathbb{Z}_2\times\mathbb{Z}_2$ with generators~\eqref{g1g2}. 
The polynomials $Q$, $R$ and $S$ then simplify to \begin{eqnarray} Q&=&a_{14} \left(z_3 \bar{z}_1 \bar{z}_2^2+z_4 \bar{z}_1 \bar{z}_2^2\right)+a_5 \left(\bar{z}_1^2 \bar{z}_2^2+z_3 z_4 \bar{z}_2^2\right)+a_4 \left(z_3 z_4 \bar{z}_1^2 \bar{z}_2^2+\bar{z}_2^2\right)+ \nonumber \\ && a_7 \left(z_3 \bar{z}_2^3+z_4 \bar{z}_1^2 \bar{z}_2\right)+ a_6 \left(z_4 \bar{z}_2^3+z_3 \bar{z}_1^2 \bar{z}_2\right)+a_{13} \left(\bar{z}_1 \bar{z}_2^3+z_3 z_4 \bar{z}_1 \bar{z}_2\right)+ \nonumber\\ && a_{12} \left(z_3 z_4 \bar{z}_1 \bar{z}_2^3+\bar{z}_1 \bar{z}_2\right)+a_2 \left(z_3 \bar{z}_1^2 \bar{z}_2^3+z_4 \bar{z}_2\right)+ a_3 \left(z_4 \bar{z}_1^2 \bar{z}_2^3+z_3 \bar{z}_2\right)+ \nonumber\\ && a_8 \left(\bar{z}_2^4+z_3 z_4 \bar{z}_1^2\right)+a_9 \left(z_3 z_4 \bar{z}_2^4+\bar{z}_1^2\right)+a_{10} \left(z_3 \bar{z}_1 \bar{z}_2^4+z_4 \bar{z}_1\right)+\nonumber\\ &&a_{11} \left(z_4 \bar{z}_1 \bar{z}_2^4+z_3 \bar{z}_1\right)+a_1\left(\bar{z}_1^2 \bar{z}_2^4+z_3 z_4\right)+a_0 \left(z_3 z_4 \bar{z}_1^2\bar{z}_2^4+1\right) \, ,\\ R&=&b_1 \left(\bar{z}_4^2-\bar{z}_3 \bar{z}_4\right)+b_0 \left(1-\bar{z}_3 \bar{z}_4^3\right) \, ,\\ S&=&z_2\; . \end{eqnarray} We still have to impose the condition $\tilde{p}\tilde{Q}=0$, which reduces the $15$ parameters ${\bf a}=(a_I)$ down to a generic number of three, corresponding to the three singlets ${\bf 1}_{2,4}$. The two coefficients ${\bf b}=(b_0,b_1)$ parametrise the leptons $L_{4,5}$, while $S=z_2$ represents the Higgs $\overline{H}_{2,5}$. In order to make the calculation manageable, we restrict to the five-parameter $\mathbb{Z}_4\times \mathbb{Z}_4$-invariant family of tetra-quadrics~\eqref{pspec}; the polynomial ${\cal R}$ can then be worked out explicitly from Eq.~\eqref{Rdef}.
Then, inserting into Eq.~\eqref{yukexample3}, gives \begin{eqnarray} \label{Yukawa_5.3} \lambda({\bf a},{\bf b})&=&\frac{i \pi^3}{6480}\big(2 a_{14} b_1 c_1+9 a_{12} b_0 c_2+9 a_{13} b_0 c_2-8 a_4 b_1 c_2-8 a_5 b_1 c_2+3 a_{12} b_1 c_2+ \nonumber \\ && \qquad \,\,\,\, 3 a_{13} b_1 c_2-36 a_7 b_0 c_3- 12 a_2 b_1 c_3-12 a_{14} b_0 c_4+6 a_2 b_1c_4+6 a_3 b_1 c_4- \nonumber \\ && \qquad \,\,\,\, 6 a_6 b_1 c_4-6 a_7 b_1 c_4+4 a_{14} b_1 c_4-36 a_6 b_0 c_5- 12 a_3 b_1 c_5-36 a_2 b_0 c_6- \nonumber \\ && \qquad \,\,\,\, 36 a_3 b_0 c_6-12 a_6 b_1 c_6-12 a_7 b_1 c_6\big) \, . \label{yukex3gen} \end{eqnarray} We still have to impose the kernel condition on the vector ${\bf a}$, and as before, we use the five-parameter family of tetra-quadrics~\eqref{pspec}. This condition can then be written as $M{\bf a}=0$, where \begin{align} &M= \\ &\!\!{\scriptsize \left(\arraycolsep=1.4pt\def1.5{1.5} \begin{array}{ccccccccccccccc} 24 c_6 & 0 & 0 & 0 & 4 c_3 & 4 c_6 & 0 & 0 & 0 & 24 c_5 & 0 & 0 & 3 c_4 & 0 & 0 \\ 24 c_5 & 0 & 6 c_2 & 0 & 4 c_6 & 4 c_3 & 0 & 6 c_2 & 0 & 24 c_6 & 0 & 0 & -3 c_4 & 0 & 0 \\ 24 c_4 & 24 c_6 & 0 & 6 c_2 & 4 c_6 \!\!- \!\!4 c_4 & 4 c_3\!\!+\!\!4 c_4 & 6 c_2 & 0 & 24 c_5 & -24 c_4 & 12 c_2 & 0 & 3 c_1 & 3 c_4 & 2 c_2 \\ 0 & 24 c_5 & 0 & 0 & 4 c_3 & 4 c_6 & 0 & 0 & 24 c_6 & 0 & 12 c_2 & 0 & 0 & -3 c_4 & 2 c_2 \\ 24 c_3 & 0 & 0 & 0 & 4 c_6 & 4 c_5 & 0 & 0 & 0 & 24 c_6 & 0 & 12 c_2 & -3 c_4 & 0 & 2 c_2 \\ 24 c_6 & 24 c_4 & 6 c_2 & 0 & 4 c_4\!\!+\!\!4 c_5 & 4 c_6\!\!-\!\!4 c_4 & 0 & 6 c_2 & -24 c_4 & 24 c_3 & 0 & 12 c_2 & 3 c_4 & 3 c_1 & 2 c_2 \\ 0 & 24 c_3 & 0 & 6 c_2 & 4 c_5 & 4 c_6 & 6 c_2 & 0 & 24 c_6 & 0 & 0 & 0 & 0 & -3 c_4 & 0 \\ 0 & 24 c_6 & 0 & 0 & 4 c_6 & 4 c_5 & 0 & 0 & 24 c_3 & 0 & 0 & 0 & 0 & 3 c_4 & 0 \\ 0 & 0 & 12 c_6 & 12 c_6 & 8 c_2 & 8 c_2 & 12 c_3 & 12 c_5 & 0 & 0 & 0 & 0 & 0 & 0 & 4 c_4 \\ 0 & 0 & 12 c_5 & 12 c_3 & 0 & 0 & 12 c_6 & 12 c_6 & 0 & 0 & 0 & 0 & 0 & 0 & -4 c_4 \\ 0 & 0 & 12 c_6 & 12 c_6 & 0 & 0 & 12 c_5 & 12 c_3 & 0 & 0 & 0 & 0 & 6 c_2 
& 6 c_2 & 4 c_4 \\ 0 & 0 & 12 c_3\!\!+\!\!12 c_4 & 12 c_4\!\!+\!\!12 c_5 & 8 c_2 & 8 c_2 & 12 c_6\!\!-\!\!12 c_4 & 12 c_6\!\!-\!\!12 c_4 & 0 & 0 & 0 & 0 & 6 c_2 & 6 c_2 & 4 c_1\!\!-\!\!4 c_4 \\ \end{array} \right)}\nonumber \end{align} This matrix has generic rank $12$ and, hence, a three-dimensional kernel, spanned by vectors ${\bf v}_I$. We can write \begin{equation} {\bf a}=\sum_{I}\alpha_I{\bf v}_I\; , \label{aexp} \end{equation} with the three coefficients $\alpha_I$ describing the three singlets $S^I$. Unfortunately, even for our five-parameter family~\eqref{pspec} of tetra-quadrics, the ${\bf v}_I$ contain very complicated functions of the complex structure moduli, which make an analytic calculation impractical. Instead, we choose random numerical values for the complex structure moduli $c_1,\ldots ,c_6$, calculate a basis of ${\rm Ker}(M)$ for this choice and then work out the Yukawa matrix by inserting into Eqs.~\eqref{aexp} and \eqref{yukex3gen}. In this way, we obtain an explicit numerical $3\times 2$ Yukawa matrix $\rho$, valid at this specific point in complex structure moduli space. This calculation leads to a Yukawa matrix $\rho$ with rank two, and this should be considered the generic result in complex structure moduli space. An analytic calculation can be carried out by restricting to the four-parameter sub-family with $c_2=0$. In this case, the kernel basis vectors are \begin{equation} \resizebox{1.03\hsize}{!}{$ \begin{array}{ll} {\bf v}_1=&\left(0,0,-c_1 \left(c_3^2+c_5 c_3-2 c_6^2\right)-c_4 \left(-c_3^2+\left(3 c_4-2 c_6\right) c_3+c_5 \left(c_5+2 c_6\right)+c_4 \left(c_5+4 c_6\right)\right),\right.\\[1mm] &c_4c_3^2+\left(c_4^2+2 c_6 c_4+c_1 c_5\right) c_3-c_4 c_5 \left(c_5+2 c_6\right)+c_4^2 \left(3 c_5+4 c_6\right)+c_1 \left(c_5^2-2 c_6^2\right),0,0, \\[1mm] & -\left(c_3-c_5\right) \left(c_4^2+c_3 c_4+\left(c_5+2 c_6\right) c_4-c_1 c_6\right),-\left(c_3-c_5\right) \left(c_4^2+c_3 c_4+\left(c_5+2 c_6\right) c_4-c_1 c_6\right),\\[1mm] &\left.
0,0,0,0,0,0,3\left(c_3-c_5\right) \left(c_3+c_4+c_5-2 c_6\right) \left(c_3+c_5+2 c_6\right)\right)^T \, ,\\[1mm] {\bf v}_2=&\left(0,0,0,0,0,0,0,0,0,0,0,3 \left(c_3-c_5\right) \left(c_3+c_4+c_5-2 c_6\right) \left(c_3+c_5+2 c_6\right),0,0,0\right)^T \, , \\[1mm] {\bf v}_3=&\left(0,0,0,0,0,0,0,0,0,0,3 \left(c_3-c_5\right) \left(c_3+c_4+c_5-2 c_6\right) \left(c_3+c_5+2 c_6\right),0,0,0,0\right)^T\, . \end{array}$} \end{equation} Inserting these vectors into Eq.~\eqref{aexp} and then into the general form \eqref{yukex3gen} of the Yukawa couplings leads to \begin{equation} \lambda({\boldsymbol\alpha},{\bf b})=\frac{i \pi^3}{360} \alpha _1 b_1 \left(c_3-c_5\right) \left(4 c_4^2+c_1 \left(c_3+c_5-2 c_6\right)\right) \left(c_3+c_5+2 c_6\right)\; . \end{equation} For the Yukawa matrix $\rho$ in the superpotential~\eqref{Wsm}, this means \begin{equation} \rho=\frac{i \pi^3}{360}\left( \begin{array}{cc}0&\left(c_3-c_5\right) \left(4 c_4^2+c_1 \left(c_3+c_5-2 c_6\right)\right) \left(c_3+c_5+2 c_6\right)\\0&0\\0&0\end{array}\right) \, . \label{Yuksing} \end{equation} The matrix has rank one, which is reduced from the generic value two that we have found for the five-dimensional family~\eqref{pspec}. Hence, we have found another example of a Yukawa coupling with rank varying as a function of complex structure. In addition, our results show that, for generic complex structure, the Higgs pair receives a mass whenever $\langle {\bf 1}_{2,4}\rangle\neq 0$, in agreement with the results in Ref.~\cite{Buchbinder:2014qda}. For special sub-loci of our four-parameter family of tetra-quadrics, characterised by the vanishing of one of the factors in Eq.~\eqref{Yuksing}, the Yukawa matrix vanishes entirely. However, as before, we have to be careful, since the kernel of the matrix $M$ might also change in these cases. Let us begin by imposing $c_3=c_5$, in addition to $c_2=0$, on the family of polynomials~\eqref{pspec}.
In this case, the dimension of ${\rm Ker}(M)$ turns out to be six and a basis is given by \begin{equation} \resizebox{1.01\hsize}{!}{$ \begin{array}{ll} \mathbf{v}_1=(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0)^T, &\mathbf{v}_2=(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0)^T, \\[1mm] \mathbf{v}_3=(0, 1, 0, 0, -6, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0)^T, &\mathbf{v}_4= (1, 0, 0, 0, 0, -6, 0, 0, 1, 0, 0, 0, 0, 0, 0)^T, \\[1mm] \mathbf{v}_5=(0, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0)^T, &\mathbf{v}_6= (0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)^T. \end{array}$} \end{equation} Using these six vectors in Eqs.~\eqref{aexp} and \eqref{Yukawa_5.3} leads to a $6\times 2$ Yukawa matrix which vanishes entirely. Similar results are obtained for other sub-loci of interest. If $4 c_4^2+c_1 (c_3+c_5-2c_6)=0$, in addition to $c_2=0$, the dimension of the kernel becomes four and the $4\times 2$ Yukawa matrix vanishes entirely. The same statements hold for $c_3+c_5+2c_6=0$. This shows that there are specific loci in complex structure moduli space where the Higgs pair remains massless, even in the presence of generic bundle moduli VEVs. \section{Final remarks} \label{c1conclusions} In this chapter, we have developed methods to calculate holomorphic Yukawa couplings for heterotic line bundle models, focusing on Calabi-Yau manifolds defined as hypersurfaces in products of projective spaces and the tetra-quadric in $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ in particular. While our approach is based on differential geometry, we also make contact with the algebraic methods in Refs.~\cite{Candelas:1987se,Anderson:2009ge}. We provide explicit rules for writing down the relevant bundle-valued harmonic forms which enter the Yukawa couplings. These forms can be identified with polynomials of certain multi-degrees which are the key players in the algebraic calculation. 
It turns out that these forms can be of different topological types, which we have referred to as type 1 and type 2 (as well as mixed type). If all three forms involved in a Yukawa coupling are of type 1, the coupling vanishes. This vanishing is topological in nature and is not, apparently, due to a symmetry in the low-energy theory. Our most explicit results, see for example Eq.~\eqref{Yuk112copy}, are for Yukawa couplings which involve two forms of type 1 and one form of type 2. We also show how to compute Yukawa couplings which involve more than one form of type 2, by explicitly working out co-boundary maps. The various cases are illustrated with explicit toy examples on the tetra-quadric. In Section~\ref{vanishing}, we have provided an example, based on the gauge group $SO(10)$, of a ${\bf 10}\,{\bf 16}\,{\bf 16}$ Yukawa coupling with topological vanishing, due to all three relevant forms being of type 1. An example of a complex structure independent ${\bf 27}^3$ Yukawa coupling for gauge group $E_6$ and a standard pattern of two forms of type 1 and one form of type 2 has been provided in Section~\ref{E6example}. Finally, Section~\ref{csexample} contains an example with gauge group $SU(5)$ which leads to a complex structure dependent d-quark Yukawa coupling. In Section~\ref{realex} we have computed all Yukawa couplings allowed by the gauge symmetry for a line bundle standard model on the tetra-quadric. The up-quark Yukawa matrix turns out to be complex structure independent and of rank two, while the singlet coupling to $L\overline{H}$ is complex structure dependent. The latter involves two forms of type 2 and requires an explicit calculation of a co-boundary map as well as a kernel of a map in cohomology. For two of our examples, we have explicitly calculated the complex structure dependence of the Yukawa matrix, albeit only on a sub-locus of the complex structure moduli space.
To our knowledge, this is the first such calculation for heterotic Calabi-Yau models with non-standard embedding. The detailed complex structure dependence of these Yukawa matrices is not necessarily physical since the matter field K\"ahler metric will also typically depend on complex structure. However, the rank of the Yukawa matrices is not affected by the field normalisation and has to be considered a physical quantity. We have shown that this rank can vary in complex structure moduli space. We have applied our calculations to only one quasi-realistic tetra-quadric model in the database \cite{Anderson:2011ns}. An interesting direction for future research would be to calculate the holomorphic Yukawa couplings for all the other MSSM models, in order to see how the results vary from model to model. Extending our methods to all known models on the tetra-quadric would reveal how the Yukawa couplings are distributed among them. The results of this chapter are limited to a relatively narrow class of Calabi-Yau manifolds and bundles with Abelian structure group. However, the methods we have developed point to and facilitate a number of generalisations. We expect that suitable generalisations of our approach can be used to calculate Yukawa couplings for more general classes of Calabi-Yau manifolds, notably higher co-dimension CICYs (as will be seen in Chapter~\ref{chaptern>1codimension}) and hypersurfaces in toric varieties. Non-Abelian bundles are frequently constructed from line bundles, for example via monad or extension sequences. The results for line bundles obtained in this chapter could then be used to calculate Yukawa couplings for such non-Abelian bundles. The most pressing problem remains the calculation of the matter field K\"ahler metric, which is essential in order to determine the physical Yukawa couplings. While we have not addressed this problem yet, it is clear that it requires an approach based on differential geometry.
Our hope is that the methods developed in this chapter will eventually lead to a framework for such a calculation. \chapter{Holomorphic Yukawa Couplings for Co-dimension $k \geq 2$ Complete Intersection Calabi-Yau Manifolds} \label{chaptern>1codimension} In Chapter~\ref{tetraquadricchapter}, we have presented a new approach to calculating the holomorphic Yukawa couplings, based entirely on methods of differential geometry. This approach was developed in the context of the simplest class of Calabi-Yau manifolds -- hypersurfaces in products of projective spaces and the tetra-quadric manifold in a product of four $\mathbb{P}^1$'s in particular -- and for bundles with Abelian structure groups. In its original form, as presented in Chapter~\ref{tetraquadricchapter}, this method is only applicable to a handful of Calabi-Yau manifolds. The purpose of this chapter is to present a significant generalisation to all complete intersection Calabi-Yau manifolds. Hence, we will show that our approach is not restricted to specific manifolds but can in fact be applied to large classes, in this case to the almost 8000 CICY manifolds classified in Refs.~\cite{Candelas:1987kf, candelascicy2}, as well as to their quotients~\cite{braunquotients}. We would also like to relate our method to the earlier algebraic ones \cite{Candelas:1987se, Anderson:2009ge} and demonstrate that the two approaches are equivalent. Although, in the present chapter, we will only discuss the holomorphic Yukawa couplings, we hope that the insight gained in this context will ultimately also be of use for the calculation of the matter field K\"ahler potential and the physical Yukawa couplings. As in the co-dimension one case, we start from the general expression for the holomorphic Yukawa couplings of a line bundle model on a Calabi-Yau manifold $X$ \begin{eqnarray} \label{Yukgen2} \lambda(\nu_1, \nu_2, \nu_3) = \int_X \Omega \wedge \nu_1 \wedge \nu_2 \wedge \nu_3 \, .
\end{eqnarray} Here, $\Omega$ is the holomorphic $(3, 0)$-form on $X$ and $\nu_i \in H^1(X, K_i)$ are closed $(0, 1)$-forms, taking values in certain line bundles $K_i$ on $X$, which represent the three types of matter multiplets involved in the corresponding superpotential term. Consistency of Eq.~\eqref{Yukgen2} requires that $K_1 \otimes K_2 \otimes K_3 = \mathcal{O}_X$, where $\mathcal{O}_X$ is the trivial bundle on $X$. The ambient space on which the CICY is defined is generally expressed as $\mathcal{A}=\mathbb{P}^{n_1} \times ... \times \mathbb{P}^{n_m}$, as explained in Section~\ref{cicysection}. Provided that the line bundles $K_i$ are obtained as restrictions of ambient space line bundles $\mathcal{K}_i \rightarrow \mathcal{A}$ to $X$, we will show that the $(0, 1)$-forms $\nu_i$ can be obtained from certain forms on the ambient space $\mathcal{A}$ and that the integral \eqref{Yukgen2} can be evaluated explicitly by converting it to an integral over the ambient space. More precisely, we find that a closed $(0, 1)$-form $\nu_i$ is, in general, related to an entire chain of ambient space $(0, a)$-forms, $\hat{\nu}_{i,a}$, where $a = 1, ..., k + 1$ and $k$ is the co-dimension of $X$ in $\mathcal{A}$. The integral \eqref{Yukgen2} can then be re-written as an integral over $\mathcal{A}$ which, in general, involves all forms $\hat{\nu}_{i,a}$. For a given $\nu_i$, the associated chain may terminate early, in the sense that, for a certain $\tau_i$, we have $\hat{\nu}_{i,\tau_i} \neq 0$ and $\hat{\nu}_{i,a} = 0$ for all $a > \tau_i$. In this case we say that $\nu_i$ is of type $\tau_i$. One of our most important results is the vanishing theorem \begin{eqnarray} \tau_1+\tau_2+\tau_3 < \textrm{dim}(\mathcal{A}) \quad\Rightarrow \quad \lambda (\nu_1,\nu_2,\nu_3) = 0 \, . 
\end{eqnarray} \noindent Particularly for high co-dimension and correspondingly large ambient space dimension $\textrm{dim}(\mathcal{A})$, this statement implies the vanishing of many Yukawa couplings, since cases with large types $\tau_i$ are relatively rare. The vanishing due to this theorem cannot be explained by any obvious symmetry of the effective four-dimensional theory and is topological in nature. The outline of the chapter is as follows. In Section~\ref{codim2}, we generalise to co-dimension two CICYs and in Section~\ref{higher}, we deal with the general case of arbitrary co-dimension. In Section~\ref{chapter3examples}, our method is illustrated with several explicit examples and we conclude in Section~\ref{chapter3conc}. A number of technical issues have been moved to the appendices. Of particular importance is Appendix~\ref{appendixPn}, which explains the multiplication of harmonic forms on $\mathbb{P}^n$, the key ingredient required to relate our approach to the earlier algebraic methods \cite{Candelas:1987se, Anderson:2009ge} for calculating holomorphic Yukawa couplings. \section{Yukawa couplings for co-dimension two CICYs} \label{codim2} We start by discussing the case of co-dimension two manifolds. This proceeds in two main steps: first, showing how $(0,1)$-forms on the CICY can be obtained from ambient space forms and, second, expressing the Yukawa couplings as integrals over the ambient space. \subsection{Lifting forms to the ambient space} As before, the ambient space ${\cal A}$ is given by a product of projective spaces \begin{equation} {\cal A}= {\mathbb P}^{n_1} \times {\mathbb P}^{n_2} \times ... \times {\mathbb P}^{n_m}\,, \label{3.1} \end{equation} but now we require that $n_1+...+n_m = 5$. The Calabi-Yau manifold $X$ is given by the common zero locus of two polynomials $p=(p_1,p_2)$ with multi-degrees $\mathbf{q}_1 = (q_1^1, ..., q_1^m)$ and $\mathbf{q}_2 = (q_2^1, ..., q_2^m)$, respectively. 
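For configurations of this kind, the Calabi-Yau condition amounts to the requirement that the polynomial multi-degrees sum to $n_r+1$ in every projective factor, as recalled below. This check is easily mechanised; the following is a minimal sketch, where the configuration used is a hypothetical illustration rather than one of the models discussed in the text:

```python
# Check the Calabi-Yau condition q_1^r + q_2^r = n_r + 1 for a
# co-dimension two configuration in A = P^{n_1} x ... x P^{n_m}.
# The configuration below (two polynomials in P^2 x P^3) is a
# hypothetical illustration, not a model from the text.

def is_calabi_yau(n, degrees):
    """n: list of projective factor dimensions; degrees: list of
    multi-degree tuples, one per defining polynomial."""
    m = len(n)
    return all(sum(q[r] for q in degrees) == n[r] + 1 for r in range(m))

# Ambient space P^2 x P^3 has dimension 2 + 3 = 5, as required for
# a co-dimension two Calabi-Yau threefold.
n = [2, 3]
degrees = [(1, 2), (2, 2)]          # q_1 = (1,2), q_2 = (2,2)
print(is_calabi_yau(n, degrees))    # sums: 1+2 = 2+1 and 2+2 = 3+1
```

The same function applies verbatim to the arbitrary co-dimension case discussed later, where the condition becomes $\sum_a q_a^r = n_r + 1$.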
The Calabi-Yau condition, $c_1(TX) = 0$, translates into \begin{eqnarray} q_1^r + q_2^r = n_r +1, \end{eqnarray} \noindent for all $r=1,...,m$. We can also view $p$ as a global, holomorphic section of the normal bundle of $X$, \begin{equation} {\cal N}= {\cal O}_{{\cal A}} ({\bf q}_{1}) \oplus {\cal O}_{{\cal A}} ({\bf q}_{2})\,, \label{3.3} \end{equation} \noindent which is now a rank-$2$ vector bundle on $\mathcal{A}$. Its top exterior power $\Lambda^2 \mathcal{N} = {\cal O}_{{\cal A}} ({\bf q}_{1} + {\bf q}_{2})$ is then a line bundle. As in the previous chapter, we would like to understand the relation between closed line-bundle valued $(0, 1)$-forms on $X$ and certain forms on the ambient space $\mathcal{A}$. We start with a line bundle $K \rightarrow X$, its ambient space counterpart $\mathcal{K} \rightarrow \mathcal{A}$ such that $K = \mathcal{K}\vert_X$ and a closed $K$-valued $(0, 1)$-form $\nu \in H^1 (X, K)$, which represents any of the three forms $\nu_i$ entering the integral \eqref{Yukgen2} for the holomorphic Yukawa couplings. The relation between $K$ and $\mathcal{K}$ is still described by the Koszul sequence which, due to $X$ being defined at co-dimension two, is no longer short-exact but given by the four-term sequence \begin{equation} 0 \longrightarrow \Lambda^2 {\cal N}^* \otimes {\cal K} \stackrel{q}{\longrightarrow} {\cal N}^* \otimes {\cal K} \stackrel{p}{\longrightarrow} {\cal K} \stackrel{r}{\longrightarrow} K \longrightarrow 0\;. \label{3.10.1} \end{equation} As before, the map $p=(p_1,p_2)$ acts by multiplication and $r$ is the restriction map. The map $q$ is fixed by exactness of the sequence, that is $p \circ q =0$, and by matching polynomial degrees. As a result, it is given, up to an overall, irrelevant constant, by \begin{equation} q = \left( \begin{array}{c} -p_2 \\ p_1 \end{array} \right)\,. 
\label{3.12} \end{equation} \noindent In practice, the four-term sequence~\eqref{3.10.1} is best dealt with by splitting it up into the two short exact sequences \begin{equation} \begin{array}{l} 0 \longrightarrow \Lambda^2 {\cal N}^* \otimes {\cal K} \stackrel{q}{\longrightarrow} {\cal N}^* \otimes {\cal K} \stackrel{g_1}{\longrightarrow} {\cal C}\longrightarrow 0\,, \\[1mm] 0 \longrightarrow {\cal C} \stackrel{g_2}{\longrightarrow} {\cal K} \stackrel{r}{\longrightarrow} K \longrightarrow 0 \end{array} \label{3.13} \end{equation} \noindent where $\mathcal{C}$ is a suitable co-kernel and $g_1$, $g_2$ are maps satisfying $g_2 \circ g_1 = p$. These quantities are determined by exactness of the above two sequences but will, fortunately, not be required explicitly. The relevant parts of the two long exact sequences associated to the short exact sequences~\eqref{3.13} read \begin{eqnarray} \cdots&\longrightarrow & H^1 ({\cal A}, {\cal C}) \stackrel{g_2}{\longrightarrow} H^1 ({\cal A}, {\cal K}) \stackrel{r}{\longrightarrow} H^1 (X, K)\nonumber \\ &\stackrel{\delta_1}{\longrightarrow} &H^2 ({\cal A}, {\cal C}) \stackrel{g_2}{\longrightarrow} H^2 ({\cal A}, {\cal K}) \longrightarrow \dots\; , \label{3.15} \end{eqnarray} and \begin{eqnarray} \cdots&\longrightarrow & H^1 ({\cal A}, \Lambda^2{\cal N}^* \otimes {\cal K}) \stackrel{q}{\longrightarrow} H^1 ({\cal A}, {\cal N}^* \otimes{\cal K}) \stackrel{g_1}{\longrightarrow} H^1 ({\cal A}, {\cal C})\nonumber \\ &\stackrel{\delta_2}{\longrightarrow} &H^2 ({\cal A}, \Lambda^2{\cal N}^* \otimes {\cal K}) \stackrel{q}{\longrightarrow} H^2 ({\cal A}, {\cal N}^* \otimes {\cal K}) \stackrel{g_1}{\longrightarrow} H^2 ({\cal A}, {\cal C}) \nonumber \\ &\stackrel{\delta_3}{\longrightarrow} &H^3 ({\cal A}, \Lambda^2{\cal N}^* \otimes {\cal K}) \stackrel{q}{\longrightarrow} H^3 ({\cal A}, {\cal N}^* \otimes {\cal K}) \longrightarrow \dots\;. 
\label{3.16} \end{eqnarray} \noindent Our goal is to obtain an expression for $H^1(X, K)$ in terms of ambient space cohomologies and from~\eqref{3.15} we find that \begin{eqnarray} H^1 (X, K) &=& r \Big( {\rm Coker} \Big( H^1 ({\cal A}, {\cal C}) \stackrel{g_2}{\rightarrow} H^1 ({\cal A}, {\cal K})\Big) \Big) \notag \\ &\oplus& \delta_1^{-1} \Big( {\rm Ker} \Big( H^2 ({\cal A}, {\cal C}) \stackrel{g_2}{\rightarrow} H^2 ({\cal A}, {\cal K})\Big) \Big) \; . \label{3.17} \end{eqnarray} This expression is analogous to Eq.~\eqref{H1eq} obtained in the co-dimension one case, but here we still have to work out $H^1 ({\cal A}, {\cal C})$ and $H^2 ({\cal A}, {\cal C})$, which are obtained from the sequence~\eqref{3.16} \begin{eqnarray} H^1 ({\cal A}, {\cal C})& = & g_1 \Big( {\rm Coker} \Big( H^1 ({\cal A}, \Lambda^2{\cal N}^* \otimes {\cal K}) \stackrel{q}{\rightarrow} H^1 ({\cal A}, {\cal N}^* \otimes {\cal K})\Big) \Big) \nonumber \\ & \oplus & \delta_2^{-1} \Big( {\rm Ker} \Big( H^2 ({\cal A}, \Lambda^2 {\cal N}^* \otimes {\cal K}) \stackrel{q}{\rightarrow} H^2 ({\cal A}, {\cal N}^* \otimes {\cal K})\Big) \Big) \; , \label{3.18.1} \end{eqnarray} \begin{eqnarray} H^2 ({\cal A}, {\cal C})& = & g_1 \Big( {\rm Coker} \Big( H^2 ({\cal A}, \Lambda^2{\cal N}^* \otimes {\cal K}) \stackrel{q}{\rightarrow} H^2 ({\cal A}, {\cal N}^* \otimes {\cal K})\Big) \Big) \nonumber \\ & \oplus & \delta_3^{-1} \Big( {\rm Ker} \Big( H^3 ({\cal A}, \Lambda^2 {\cal N}^* \otimes {\cal K}) \stackrel{q}{\rightarrow} H^3 ({\cal A}, {\cal N}^* \otimes {\cal K})\Big) \Big) \;. \label{3.18.2} \end{eqnarray} Substituting Eqs.~\eqref{3.18.1} and \eqref{3.18.2} into Eq.~\eqref{3.17} gives the desired formula for $H^1 (X, K)$ in terms of ambient space cohomology. Despite its apparent complexity, we will see that it is possible to arrive at a simple generalisation of the structure derived in the co-dimension one case. 
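As a basic consistency check of the maps entering the Koszul sequence, the exactness property $p \circ q = 0$ that fixes $q$ in Eq.~\eqref{3.12} can be verified symbolically. In the following sketch, the polynomials are generic placeholders, not the defining polynomials of any particular model:

```python
# Symbolic check that the Koszul map q of Eq. (3.12) satisfies
# p o q = 0. Here sympy stands in for the polynomial algebra; the
# polynomials p1, p2 are hypothetical placeholders.
import sympy as sp

x0, x1, y0, y1 = sp.symbols('x0 x1 y0 y1')

# generic (hypothetical) defining polynomials
p1 = x0**2 * y0 + x1**2 * y1
p2 = x0 * x1 * (y0 - y1)

# p acts as the row vector (p1, p2), q as the column vector (-p2, p1)^T
p = sp.Matrix([[p1, p2]])
q = sp.Matrix([[-p2], [p1]])

print(sp.expand((p * q)[0, 0]))   # -> 0
```

The cancellation $p_1(-p_2) + p_2 p_1 = 0$ holds for any choice of polynomials, which is exactly why $q$ is fixed only up to an overall constant.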
We begin by observing that $H^1(X, K)$ receives contributions from three ambient space cohomologies, namely from $H^1 ({\cal A}, {\cal K})$, $H^2 ({\cal A}, {\cal N}^* \otimes {\cal K})$ and $H^3 ({\cal A}, \Lambda^2{\cal N}^* \otimes {\cal K})$ (or, more accurately, from kernels or quotients within these cohomologies). This means that a given closed $(0,1)$-form $\nu \in H^1(X,K)$ descends, in general, from three ambient space forms, a $(0,1)$-form $\hat{\nu}$, a $(0,2)$-form $\hat{\omega}$ and a $(0,3)$-form $\hat{\rho}$. However, a specific $\nu \in H^1(X,K)$ might not receive all three contributions. We call a $\nu \in H^1(X,K)$ “type 1” if the associated $\hat{\omega}$ and $\hat{\rho}$ vanish and, hence, if it is determined by the $(0,1)$-form $\hat{\nu}$ only. Likewise, $\nu \in H^1(X,K)$ is called “type 2” if the associated $\hat{\rho}$ vanishes and it is determined by the $(0,2)$-form $\hat{\omega}$. If $\nu \in H^1(X,K)$ is determined by $\hat{\rho}$, it is called “type 3”. In general, a $\nu \in H^1(X,K)$ is a linear combination of these three types, but the discussion is much simplified if we focus on each type separately. In fact, it is always possible to choose a basis of $ H^1(X,K)$, such that every basis element has a definite type. Let us now be more precise and discuss each of these three types in turn. \vspace{2mm} \noindent {\bf Type 1}: We will refer to $\nu \in H^1(X,K)$ as “type 1” if it descends from $H^1 (\mathcal{A}, \mathcal{K})$, that is, if there is a $(0, 1)$-form $\hat{\nu} \in H^1(\mathcal{A},\mathcal{K})$ on the ambient space with \begin{equation} \begin{array}{ll} \nu =\hat{\nu}|_X\, , \qquad \qquad \qquad & \nu \in H^1(X,K) \, , \\[1mm] \bar \partial \hat{\nu} =0\, , \qquad \qquad \qquad & \hat{\nu} \in H^1(\mathcal{A},\mathcal{K}) . 
\end{array} \end{equation} \vspace{2mm} \noindent {\bf Type 2}: We will refer to $\nu \in H^1 (X, K)$ as “type 2” if it descends from a closed $(0, 2)$-form $\hat{\omega} \in H^2 (\mathcal{A}, \mathcal{N} ^* \otimes \mathcal{K})$. To understand the relation between $\nu$ and $\hat{\omega}$, we need to chase through Eqs.~\eqref{3.17} and \eqref{3.18.2}. Starting with Eq.~\eqref{3.17} and setting $\hat{\gamma} = \delta_1(\nu) \in H^2 (\mathcal{A}, \mathcal{C})$, we know from the definition of the co-boundary map $\delta_1$ (see Appendix~\ref{coboundarymapappendix} for a review) that there is a $(0, 1)$-form $\hat{\nu} \in \Omega^1(\mathcal{A},\mathcal{K})$, such that \begin{equation} \overline{\partial}\hat{\nu} = g_2 \hat{\gamma}\, , \qquad \qquad \qquad \nu = \hat{\nu}\vert_X \, . \end{equation} \noindent Further, from Eq.~\eqref{3.18.2}, there is a $\hat{\omega} \in H^2 (\mathcal{A},\mathcal{N}^* \otimes \mathcal{K})$ with \begin{equation} \hat{\gamma} = g_1 \hat{\omega} \, . \end{equation} \noindent Combining these last two equations, together with $g_2 \circ g_1 = p$ then leads to \begin{eqnarray} \overline{\partial}\hat{\nu} = (g_2 \circ g_1) \hat{\omega} = p \hat{\omega} \, . \end{eqnarray} \noindent To summarise this discussion, we can write down the following chain of equations \begin{equation} \begin{array}{ll} \nu =\hat{\nu}|_X\, , \qquad \qquad \qquad & \nu \in H^1(X,K) \, , \\[1mm] \bar \partial \hat{\nu} = p \hat{\omega}\, , \qquad \qquad \qquad & \hat{\nu} \in \Omega^1(\mathcal{A},\mathcal{K}) , \\[1mm] \bar \partial \hat{\omega} = 0 \, , \qquad \qquad \qquad & \hat{\omega} \in H^2 (\mathcal{A},\mathcal{N}^* \otimes \mathcal{K}) \, , \end{array} \end{equation} \noindent which describes the relation between $\nu$ and the $(0,2)$-form $\hat{\omega}$ from which it descends. 
\vspace{2mm} \noindent {\bf Type 3}: We will refer to $\nu$ as “type 3” if it descends from a closed $(0, 3)$-form $\hat{\rho} \in H^3 (\mathcal{A}, \Lambda^2 \mathcal{N}^* \otimes \mathcal{K})$ and we need to understand the relation between $\nu$ and $\hat{\rho}$. As in the case of type 2, we start with Eq.~\eqref{3.17} and define $\hat{\gamma} = \delta_1(\nu)\in H^2(\mathcal{A},\mathcal{C})$ and a $(0,1)$-form $\hat{\nu}\in \Omega^1(\mathcal{A},\mathcal{K})$, such that \begin{equation} \overline{\partial}\hat{\nu} = g_2 \hat{\gamma}\, , \qquad \qquad \qquad \nu = \hat{\nu}\vert_X \, . \end{equation} \noindent From surjectivity of $g_1$ in the first sequence \eqref{3.13}, we can write $\hat{\gamma} = g_1 \hat{\omega}$ for an $\hat{\omega} \in \Omega^2(\mathcal{A}, \mathcal{N}^* \otimes \mathcal{K})$ and combining this with the previous equation leads to \begin{eqnarray} \overline{\partial} \hat{\nu} = p \hat{\omega} \, , \end{eqnarray} \noindent as in the type 2 case. However, unlike for the type 2 case, $\hat{\omega}$ is no longer closed and we need to carry out one more step. To this end, we consider Eq.~\eqref{3.18.2} and define the closed $(0, 3)$-form $\hat{\rho} = \delta_3(\hat{\gamma}) \in H^3 (\mathcal{A}, \Lambda^2 \mathcal{N}^* \otimes \mathcal{K})$. Writing out the co-boundary map $\delta_3$ (see Appendix \ref{coboundarymapappendix}) now leads to \begin{equation} \bar \partial \hat{\omega} = q \hat{\rho}\,, \qquad \qquad \qquad \bar \partial \hat{\rho} =0\, . 
\label{3.37} \end{equation} \noindent Altogether, this gives the following chain of equations \begin{equation} \begin{array}{ll} \nu =\hat{\nu}|_X \,, \qquad \qquad \qquad & \nu \in H^1(X, K)\,, \\[1mm] {\bar \partial} \hat{\nu} = p \hat{\omega} \,, \qquad \qquad \qquad & \hat{\nu} \in \Omega^1 ({\cal A}, {\cal K})\,, \\[1mm] \bar \partial \hat{\omega} = q \hat{\rho}\,, \qquad \qquad \qquad & \hat{\omega} \in \Omega^2 ({\cal A}, {\cal N}^* \otimes {\cal K})\,, \\[1mm] \bar \partial \hat{\rho} =0 \,, \qquad \qquad \qquad & \hat{\rho} \in H^3 ({\cal A}, \Lambda^{2} {\cal N}^* \otimes {\cal K})\, , \end{array} \label{3.37.2} \end{equation} \noindent which describes the relation between $\nu$ and the $(0, 3)$-form $\hat{\rho}$ from which it descends. In fact, the system of equations \eqref{3.37.2} describes the general relationship between $\nu$ and the three ambient space forms $\hat{\nu}$, $\hat{\omega}$ and $\hat{\rho}$. For a given $\nu$, solving the equations \eqref{3.37.2} gives the associated ambient space forms which, in general, are all non-zero. The three types discussed above arise from Eq.~\eqref{3.37.2} as special cases. If $\hat{\omega} = \hat{\rho} = 0$ for a given $\nu$, then $\hat{\nu}$ is closed and $\nu$ is of type 1. If $\hat{\omega} \neq 0$ but $\hat{\rho} = 0$ (and $\hat{\nu}$ does not have a closed part which would correspond to a type 1 component) then $\hat{\omega}$ is closed and $\nu$ is of type 2. Finally, if $\hat{\rho} \neq 0$ (and $\hat{\nu}$, $\hat{\omega}$ do not have closed parts which would correspond to type 1 and type 2 components, respectively) then $\nu$ is of type 3. Let us point out that, in general, the set of all forms $\hat{\nu}$, $\hat{\omega}$, $\hat{\rho}$ is not always identified with the entire spaces in the second column of \eqref{3.37.2} but, rather, with kernels and co-kernels of the maps $p$ and $q$ within those spaces. 
In each particular case, these kernels and co-kernels can be found from Eqs.~\eqref{3.17}, \eqref{3.18.1} and \eqref{3.18.2}. Our goal now is to express the Yukawa couplings \eqref{Yukgen2} in terms of the ambient space forms $\hat{\nu}$, $\hat{\omega}$, $\hat{\rho}$. If $\nu$ is of a specific type, the highest non-vanishing form which appears in the Eqs.~\eqref{3.37.2} represents an ambient space cohomology and can be written down explicitly, following the rules explained in Appendix~\ref{appendixPn}. The lower-degree forms then have to be obtained by solving the Eqs.~\eqref{3.37.2}. In this way, all relevant ambient space forms can be calculated explicitly. \subsection{A derivation of Yukawa couplings} \label{derivation} We will now derive the formula for the Yukawa couplings \eqref{Yukgen2} in terms of ambient space forms. For each $(0, 1)$-form $\nu_i \in H^1 (X, K_i)$ involved, we have an associated chain of ambient space forms $\hat{\nu}_i$, $\hat{\omega}_i$ and $\hat{\rho}_i$, in line with the Eqs.~\eqref{3.37.2}. The forms $\hat{\omega}_i$ take values in the rank-2 line bundle sum $\mathcal{N}^* \otimes \mathcal{K}_i = \mathcal{O}_{\mathcal{A}}(-\mathbf{q}_1) \otimes \mathcal{K}_i \oplus \mathcal{O}_{\mathcal{A}}(-\mathbf{q}_2) \otimes \mathcal{K}_i$ and we denote the two corresponding components by $\hat{\omega}_i^a$, where $a=1,2$. Starting with Eq. \eqref{Yukgen2}, we insert two delta-function currents \begin{equation} \lambda (\nu_1, \nu_2, \nu_3)= \frac{1}{(2 \pi i)^2} \int_{{\cal A}} \hat{\Omega} \wedge \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3 \wedge dp_1 \wedge {\bar \partial} \Big( \frac{1}{p_1}\Big) \wedge dp_2 \wedge {\bar \partial} \Big( \frac{1}{p_2}\Big) \, , \label{3.6} \end{equation} which converts the integral to one over the ambient space. 
Using the standard formula (see~\cite{Candelas:1987se, Strominger:1985it, Witten:1985xc, Candelas:1987kf}) \begin{equation} \hat{\Omega} \wedge dp_1 \wedge dp_2 =\mu \, , \label{3.7} \end{equation} \noindent where $\mu$ has been defined in Eq.~\eqref{10.2}, we obtain \begin{equation} \lambda (\nu_1, \nu_2, \nu_3)= \frac{1}{(2 \pi )^2} \int_{{\cal A}} \mu \wedge \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3 \wedge {\bar \partial} \Big( \frac{1}{p_1}\Big) \wedge {\bar \partial} \Big( \frac{1}{p_2}\Big) \,. \label{3.8} \end{equation} Now we have to integrate by parts twice, ignoring the boundary integrals, which do not contribute (see Appendix~\ref{appendixboundary}). After the first integration, we obtain \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = \frac{1}{(2 \pi )^2}\int_{{\cal A}} \frac{\mu}{p_1} \wedge \Big[\bar \partial \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3- \hat{\nu}_1 \wedge \bar \partial \hat{\nu}_2 \wedge \hat{\nu}_3 +\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \bar \partial \hat{\nu}_3 \Big] \wedge {\bar \partial} \Big( \frac{1}{p_2}\Big) \,. \label{3.9} \end{equation} The derivatives of $\hat{\nu}_i$ can be evaluated using~\eqref{3.37.2}. This leads to \begin{equation} \bar \partial \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3- \hat{\nu}_1 \wedge \bar \partial \hat{\nu}_2 \wedge \hat{\nu}_3 +\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \bar \partial \hat{\nu}_3 := p\hat{\beta} = p_1 \hat{\beta}^{1} +p_2 \hat{\beta}^{2} \,, \label{3.100.1} \end{equation} % where $\hat{\beta}$ is a vector with components given by \begin{equation} \begin{array}{l} \hat{\beta}^{1} =\hat{\omega}_1^{1} \wedge \hat{\nu}_2 \wedge \hat{\nu}_3- \hat{\nu}_1 \wedge \hat{\omega}_2^{1} \wedge \hat{\nu}_3 +\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\omega}_3^{1}\,, \\[1mm] \hat{\beta}^{2} =\hat{\omega}_1^{2} \wedge \hat{\nu}_2 \wedge \hat{\nu}_3- \hat{\nu}_1 \wedge \hat{\omega}_2^{2} \wedge \hat{\nu}_3 +\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\omega}_3^{2}\,. 
\end{array} \label{3.10.2} \end{equation} Substituting these expressions back into the integral \eqref{3.9}, we note that the term $ p_2 \hat{\beta}^{2}$ does not contribute, since $p_2 \bar \partial \Big( \frac{1}{p_2}\Big) \sim p_2 \delta^2 (p_2) d {\bar p}_2=0$ and that we are, hence, left with \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = \frac{1}{(2 \pi )^2}\int_{{\cal A}} \mu \wedge \hat{\beta}^{1} \wedge {\bar \partial} \Big( \frac{1}{p_2}\Big) =- \frac{1}{(2 \pi )^2}\int_{{\cal A}} \frac{\mu }{p_2} \wedge \bar \partial \hat{\beta}^{1}\,. \label{3.40} \end{equation} Using Eqs.~\eqref{3.37.2}, we obtain that $\bar \partial \hat{\beta}^{1} =- p_2 \hat{\eta}$, where $\hat{\eta}$ is given by \begin{equation} \begin{array}{lll} \hat{\eta} &=& \hat{\rho}_1 \wedge \hat{\nu}_2 \wedge \hat{\nu}_3 + \hat{\nu}_1 \wedge \hat{\rho}_2 \wedge \hat{\nu}_3+ \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\rho}_3 \\[1mm] &+& \hat{\nu}_1 \wedge \hat{\omega}_2^{2} \wedge \hat{\omega}_3^{1}- \hat{\nu}_1 \wedge \hat{\omega}_2^{1} \wedge \hat{\omega}_3^{2} \\[1mm] &+& \hat{\omega}_1^{1} \wedge \hat{\nu}_2 \wedge \hat{\omega}_3^{2} - \hat{\omega}_1^{2} \wedge \hat{\nu}_2 \wedge \hat{\omega}_3^{1} \\[1mm] &+& \hat{\omega}_1^{2} \wedge \hat{\omega}_2^{1} \wedge \hat{\nu}_3- \hat{\omega}_1^{1} \wedge \hat{\omega}_2^{2} \wedge \hat{\nu}_3\,. \end{array} \label{3.42} \end{equation} Hence, the final expression for the Yukawa coupling is \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = \frac{1}{(2 \pi )^2}\int_{{\cal A}} \mu \wedge \hat{\eta} \, , \label{3.43} \end{equation} with $\hat{\eta}$ given in~\eqref{3.42}. Eq.~\eqref{3.43} together with Eq.~\eqref{3.42} is our main general result for the co-dimension two case. As we will see in Section~\ref{chapter3examples}, this result, together with the expressions for ambient space harmonic forms in Appendix~\ref{appendixPn} and Eq.~\eqref{3.37.2}, allows for an explicit calculation of the holomorphic Yukawa couplings. 
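The delta-function currents used in this derivation rest on the distributional identity $\bar{\partial}(1/p) \sim \delta^2(p)\, d\bar{p}$, whose one-variable analogue is equivalent to Cauchy's integral formula. A minimal numerical sketch of that one-variable statement, using a hypothetical test function unrelated to any specific model:

```python
# One-variable analogue of the delta-function current:
# (1/2*pi*i) * (contour integral of f(z)/z over |z| = r) equals f(0)
# for holomorphic f. We check this numerically for the hypothetical
# test function f(z) = exp(z), discretising the contour z = r*exp(i*t).
import cmath

def cauchy_at_origin(f, r=1.0, n=2000):
    total = 0.0 + 0.0j
    for j in range(n):
        t = 2 * cmath.pi * j / n
        z = r * cmath.exp(1j * t)
        dz = 1j * z * (2 * cmath.pi / n)   # dz = i z dt
        total += f(z) / z * dz
    return total / (2j * cmath.pi)

print(abs(cauchy_at_origin(cmath.exp) - 1.0) < 1e-9)   # True: f(0) = 1
```

In the Yukawa coupling integral, the same mechanism localises the ambient space integral onto the zero locus of each defining polynomial $p_a$.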
It is worth discussing a number of special cases. If all three forms $\nu_i$ are of type 1, then $\hat{\omega}_i = \hat{\rho}_i = 0$, for $i=1,2,3$ and as a result $\hat{\eta}$ in Eq.~\eqref{3.42} is zero and, hence, the Yukawa coupling vanishes. Now suppose two of the forms $\nu_i$, say $\nu_1$ and $\nu_2$, are of type 1, while $\nu_3$ is of type 2. In this case we have $\hat{\omega}_i = \hat{\rho}_i =0$ for $i=1,2$ and $\hat{\rho}_3=0$, so that $\hat{\eta}=0$ in Eq.~\eqref{3.42} and the Yukawa coupling still vanishes. These observations can be summarised by the following \vspace{2mm} \noindent {\bf Theorem}: Assume that the forms $\nu_i$ which enter the integral \eqref{Yukgen2} for the Yukawa couplings are of type $\tau_i$, where $i=1,2,3$. Then \begin{equation} \label{theoremdim5} \tau_1+\tau_2+\tau_3<{\rm dim} ({\cal A}) =5 \qquad \Longrightarrow \qquad \lambda(\nu_1,\nu_2,\nu_3) = 0 \, . \end{equation} \noindent For co-dimension one we have observed that the Yukawa coupling vanishes if all three forms $\nu_i$ are of type 1. The above vanishing theorem generalises this statement to the case of co-dimension two. There are two special cases for which the expression \eqref{3.43} simplifies considerably. Firstly, assume that the types of the $(0, 1)$-forms $\nu_i$ are given by $(\tau_1,\tau_2,\tau_3) = (1, 1, 3)$. Then we have from Eqs.~\eqref{3.43} and \eqref{3.42} \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = \frac{1}{(2 \pi )^2}\int_{{\cal A}} \mu \wedge \hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\rho}_3 \, , \label{yuk113} \end{equation} \noindent and all three bundle-valued forms in the integrand represent ambient space cohomologies. 
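The combinatorics behind this theorem can be checked directly for co-dimension two. The following short sketch, purely illustrative, enumerates the type triples forced to vanish by the theorem and those that saturate the bound $\tau_1+\tau_2+\tau_3 = 5$:

```python
# Enumerate type triples (tau_1, tau_2, tau_3) for a co-dimension two
# CICY, where each tau_i lies in {1, 2, 3} and dim(A) = 5. The
# vanishing theorem forces lambda = 0 whenever the types sum to less
# than 5; the triples that exactly saturate the bound are the simple
# cases in which a single term survives in the coupling.
from itertools import product

dim_A = 5
triples = list(product([1, 2, 3], repeat=3))

vanishing = [t for t in triples if sum(t) < dim_A]
saturating = sorted({tuple(sorted(t)) for t in triples if sum(t) == dim_A})

print(len(vanishing))   # 4: (1,1,1) and the permutations of (1,1,2)
print(saturating)       # [(1, 1, 3), (1, 2, 2)]
```

Up to permutation, $(1,1,3)$ and $(1,2,2)$ are thus the only type assignments for which the coupling saturates the bound and reduces to a single ambient space integral.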
The other simple case arises for types $(\tau_1,\tau_2,\tau_3) = (1, 2, 2)$, where Eq.~\eqref{3.43} becomes \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = \frac{1}{(2 \pi )^2}\int_{{\cal A}} \mu \wedge \hat{\nu}_1 \wedge \hat{\omega}_2 \wedge \hat{\omega}_3 \, , \label{yuk122} \end{equation} \noindent with an anti-symmetric contraction of the bundle indices for $ \hat{\omega}_i$ understood. Again, all three forms in the integrand represent ambient space cohomologies. We will now proceed to arbitrary co-dimension and show that analogous statements can be obtained in the general case. \section{Generalisations to higher co-dimensions} \label{higher} \subsection{Lifting forms to the ambient space} We will now tackle the case of arbitrary co-dimension starting, as before, with the problem of writing closed line bundle-valued $(0, 1)$-forms on the Calabi-Yau manifold in terms of ambient space forms. Our ambient space remains the product of projective spaces \begin{equation} {\cal A}= {\mathbb P}^{n_1} \times {\mathbb P}^{n_2} \times ... \times {\mathbb P}^{n_m}\, , \label{4.1} \end{equation} where now $n_1+...+n_m = 3+k$, and $k$ is the co-dimension. The CICY manifold $X \subset \mathcal{A}$ is defined as the common zero locus of $k$ homogeneous polynomials $p_a$ with multi-degrees $\mathbf{q}_a = (q_a^1, ..., q_a^m)$, where $a=1,...,k$. The Calabi-Yau condition, $c_1(TX)=0$ now reads \begin{eqnarray} \sum_{a=1}^k q_a^r = n_r+1 \, , \end{eqnarray} \noindent for all $r=1,...,m$. As before, we combine these polynomials into the row vector $p=(p_1,...,p_k)$, which can be viewed as a section of the line bundle sum \begin{equation} {\cal N} = {\cal O}_{{\cal A}} ({\bf q}_{1}) \oplus ... \oplus {\cal O}_{{\cal A}} ({\bf q}_{k}) \, . 
\end{equation} \noindent The relation between a line bundle $K \rightarrow X$ and its ambient space counterpart ${\cal K} \rightarrow {\cal A}$ (such that $K = {\cal K}\vert_X$ ) is again governed by the Koszul sequence \begin{equation} 0 \longrightarrow \Lambda^k {\cal N}^* \otimes {\cal K} \stackrel{q_k}{\longrightarrow} \Lambda^{k-1} {\cal N}^* \otimes {\cal K} \stackrel{q_{k-1}}{\longrightarrow} \dots \stackrel{q_{2}}{\longrightarrow} {\cal N}^* \otimes {\cal K} \stackrel{q_1=p}{\longrightarrow} {\cal K} \stackrel{q_0 = r}{\longrightarrow} K \longrightarrow 0\;, \label{4.7} \end{equation} \noindent which now consists of $k + 2$ terms and contains maps $q_a$ satisfying $q_a \circ q_{a+1} = 0$ for all $a = 0, ... , k-1$. As previously, $q_0 = r$ is the restriction map, $q_1 = p$ is the map acting by multiplication with the polynomial vector $p$ and the higher maps $q_a$ for $a > 1$ are the obvious tensor maps induced by $p$. An $(a + 1)$–form $\hat{\nu}$ taking values in $\Lambda^{a} {\cal N}^* \otimes {\cal K}$ has components $\hat{\nu}^{b_1,...,b_a}$ with completely anti-symmetrised upper indices, and the action of $q_a$ on this form can be explicitly written as \begin{eqnarray} (q_a \hat{\nu})^{b_1,...,b_{a-1}} = p_b \hat{\nu}^{b_1...b_{a-1}b} \, . \label{eq4.5} \end{eqnarray} \noindent Splitting \eqref{4.7} up into $k$ short exact sequences and chasing through the associated long exact sequences shows that $H^1(X, K)$ can now receive contributions from the $k+1$ ambient space cohomologies $H^1(\mathcal{A}, {\cal K})$, $H^2(\mathcal{A}, {\cal N}^* \otimes {\cal K} )$, ..., $H^{k}(\mathcal{A}, \Lambda^{k-1} {\cal N}^* \otimes {\cal K})$, $H^{k+1}(\mathcal{A}, \Lambda^{k} {\cal N}^* \otimes {\cal K})$. A closed $K$-valued $(0, 1)$-form $\nu \in H^1(X,K)$ is, therefore, related to a chain of $k+1$ ambient space $(0,a)$-forms $\hat{\nu}_a$, where $a=1,...,k+1$. 
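The statement $q_a \circ q_{a+1} = 0$ underlying the sequence \eqref{4.7} follows from Eq.~\eqref{eq4.5} together with the complete antisymmetry of the upper indices. A small numerical sketch of this mechanism, with random placeholder data standing in for the polynomials and form components:

```python
# Numerical illustration of q_a o q_{a+1} = 0 for the index action of
# the Koszul maps: contracting a completely antisymmetric component
# array twice with the same polynomial vector p gives zero. Random
# numbers are hypothetical stand-ins for p_b and the form components;
# here k = 3.
import numpy as np

rng = np.random.default_rng(0)
k = 3

raw = rng.standard_normal((k, k))
nu = raw - raw.T                      # antisymmetric: nu[b,c] = -nu[c,b]
p = rng.standard_normal(k)            # stands in for (p_1, ..., p_k)

step1 = np.einsum('bc,c->b', nu, p)   # (q_2 nu)^b   = p_c nu^{bc}
step2 = np.einsum('b,b->', step1, p)  # q_1 (q_2 nu) = p_b p_c nu^{bc}

print(abs(step2) < 1e-12)             # True: killed by antisymmetry
```

The double contraction $p_b p_c \hat{\nu}^{\dots bc}$ vanishes identically because a symmetric pair of indices is contracted against an antisymmetric one, which is the index-level content of $q_a \circ q_{a+1} = 0$.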
The precise relationship between $\nu$ and $\hat{\nu}_a$ can be derived by a straightforward generalisation of the co-dimension two case discussed in the previous section. The result is \begin{equation} \begin{array}{llllll} \nu & \!\!\! = & \! \hat{\nu}_1|_X \,, \qquad \qquad \ \ & \nu & \!\!\! \in & \!\! H^1(X, K)\,, \\[1mm] {\bar \partial} \hat{\nu}_1 &\!\!\! = & \! q_1 \hat{\nu}_2 \,, \qquad \qquad \ \ \ & \hat{\nu}_1 & \!\!\! \in & \! \Omega^1 ({\cal A}, {\cal K})\,, \\[1mm] \bar \partial \hat{\nu}_2 & \!\!\! = & \! q_2 \hat{\nu}_3\,, \qquad \qquad \ \ \ &\hat{\nu}_2 & \!\!\! \in & \! \Omega^2 ({\cal A}, {\cal N}^* \otimes {\cal K})\,, \\[1mm] & \!\!\! \, \vdots & \! \qquad \qquad & & \!\!\! \, \vdots & \! \\[1mm] \bar \partial \hat{\nu}_{k} & \!\!\! = & \! q_{k} \hat{\nu}_{k+1}\,, \qquad \qquad & \hat{\nu}_{k} & \!\!\! \in & \! \Omega^k ({\cal A}, \Lambda^{k-1}{\cal N}^* \otimes {\cal K})\,, \\[1mm] \bar \partial \hat{\nu}_{k+1} & \!\!\! = & \! 0 \,, \qquad \qquad \ \ \ & \hat{\nu}_{k+1} &\!\!\! \in & \!\! H^{k+1} ({\cal A}, \Lambda^k{\cal N}^* \otimes {\cal K})\,. \end{array} \label{4.10} \end{equation} \noindent Note that, just like in the co-dimension two case, the forms $\hat{\nu}_a$ should be thought of as elements of certain kernels and co-kernels of the maps $q_a$ within the spaces on the right-hand side of Eq.~\eqref{4.10}. For a given $\nu \in H^1(X,K)$, the associated chain of ambient space forms is obtained by solving the above equations and, in general, this leads to $k + 1$ non-trivial forms $\hat{\nu}_a$. However, as before, it is useful to introduce the type $\tau$ of $\nu$, which can now take the values $\tau \in \lbrace 1, ... , k+1\rbrace$. We say that $\nu$ is of type $\tau$ if $\hat{\nu}_{\tau} \neq 0$, $\hat{\nu}_a = 0$ for all $a > \tau$ and all $\hat{\nu}_a$ for $a < \tau$ do not contain any $\overline{\partial}$-closed parts. 
In this case, $\nu$ descends, via the Eqs.~\eqref{4.10}, from the $\overline{\partial}$-closed $(0, \tau)$-form $\hat{\nu}_{\tau}$, which defines an element of $H^{\tau}(\mathcal{A}, \Lambda^{\tau-1} \mathcal{N}^* \otimes \mathcal{K})$. \subsection{The structure of Yukawa couplings and a vanishing theorem} Each of the three forms $\nu_i \in H^1(X,K_i)$ involved in the Yukawa coupling has, from Eq.~\eqref{4.10}, an associated chain of ambient space forms which we denote by $\hat{\nu}_{i,a}$, where $a=1,...,k+1$. To derive the general expression for the Yukawa couplings we start with \eqref{Yukgen2}, insert $k$ delta-function currents and use the standard formula (see~\cite{Candelas:1987se, Strominger:1985it, Witten:1985xc, Candelas:1987kf}) \begin{eqnarray} \hat{\Omega} \wedge d p_1 \wedge ... \wedge d p_k = \mu \, , \end{eqnarray} \noindent where $\mu$ has been defined in Eq.~\eqref{10.2}. This leads to \begin{align} \lambda (\nu_1, \nu_2, \nu_3) &= \Big(- \frac{1}{2 \pi i}\Big)^k \int_{{\cal A}} \hat{\Omega} \wedge \hat{\nu}_{1,1} \wedge \hat{\nu}_{2,1} \wedge \hat{\nu}_{3,1} \wedge dp_1 \wedge {\bar \partial} \Big( \frac{1}{p_1}\Big) \wedge ... \wedge d p_k \wedge {\bar \partial} \Big( \frac{1}{p_k} \Big) \nonumber \\ & = \frac{\tilde{C}_k}{(2 \pi)^k k!} \epsilon_{b_1 ... b_k} \int_{{\cal A}} \mu \wedge \hat{\nu}_{1,1} \wedge \hat{\nu}_{2,1} \wedge \hat{\nu}_{3,1} \wedge {\bar \partial} \Big( \frac{1}{p_{b_1}}\Big) \wedge ... \wedge {\bar \partial} \Big( \frac{1}{p_{b_k}} \Big) \, , \label{exhibit1} \end{align} \noindent where $\tilde{C}_k=(-1)^{k(k+1)/2} \, i^k$ is a phase factor. Integrating the first $\overline{\partial}$ operator by parts (ignoring the boundary terms, whose vanishing can be shown in the same way as in Appendix~\ref{appendixboundary}) and using Eqs.~\eqref{eq4.5}, \eqref{4.10}, this turns into \begin{align} \lambda (\nu_1, \nu_2, \nu_3) = \frac{\tilde{C}_k}{(2 \pi)^k k!} \epsilon_{b_1 ... 
b_k} & \int_{{\cal A}} \mu \wedge \big(\hat{\nu}^{b_1}_{1,2} \wedge \hat{\nu}_{2,1} \wedge \hat{\nu}_{3,1} - \hat{\nu}_{1,1} \wedge \hat{\nu}^{b_1}_{2,2} \wedge \hat{\nu}_{3,1} + \notag \\ & \;\, + \hat{\nu}_{1,1} \wedge \hat{\nu}_{2,1} \wedge \hat{\nu}^{b_1}_{3,2} \big) \wedge {\bar \partial} \Big( \frac{1}{p_{b_2}}\Big) \wedge ... \wedge {\bar \partial} \Big( \frac{1}{p_{b_k}} \Big) \, . \label{exhibit2} \end{align} Here, the relation \begin{eqnarray} p_b \overline{\partial}\Big( \frac{1}{p_{b}} \Big) = 0 \end{eqnarray} \noindent has led to the insertion of $\delta_b^{b_1}$ from Eq.~\eqref{eq4.5}, so that we remain with a sum over $b_1$, as indicated above (while the resulting factor $p_{b_1}$ from Eq.~\eqref{eq4.5} cancels against $1/p_{b_1}$). We can now continue integrating by parts until all factors of the form $\overline{\partial}(1/p_{b})$ are used up. Each of these factors leads to a partial differentiation of all forms $\hat{\nu}_{i,a}^{b_1 ... b_{a-1}}$ which appear in the integral, effectively replacing them by the forms $\hat{\nu}_{i,a+1}^{b_1...b_{a-1}b}$, which are found one step lower down in the chain \eqref{4.10}. Since there are $k$ such partial integrations to be performed, starting with three $(0, 1)$-forms, the end result is a sum which contains all products of three forms whose degree sums up to $\textrm{dim}(\mathcal{A}) = 3 + k$. This leads to \begin{eqnarray} \lambda (\nu_1, \nu_2, \nu_3) = \dfrac{C_k}{(2\pi)^k} \sum^{k+1}_{\substack{a_1,a_2,a_3=1 \\ a_1+a_2+a_3 = \textrm{dim}(\mathcal{A})}} (-1)^{s(a_1,a_2,a_3)} \int_{\mathcal{A}} \mu \wedge \hat{\nu}_{1,a_1} \wedge \hat{\nu}_{2,a_2} \wedge \hat{\nu}_{3,a_3} \, , \label{eq4.11} \end{eqnarray} \noindent where $s\,(a_1, a_2, a_3) = (a_1 + 1)a_2 + a_1 a_3 + a_2 a_3$ determines the relative signs of the terms and $C_k =(-1)^{k(k+1)/2}(-1)^{[(k+1)/2]} i^k$ is another phase. 
In this formula, the bundle indices have been suppressed, so the wedge product should be understood as including an appropriate tensoring of the bundle directions to form a singlet, via anti-symmetrisation by $\epsilon_{b_1 ... b_k}$. The anti-symmetrisation is achieved by summing in every case as many terms with permuted indices as required for complete anti-symmetry, each with a factor $1$ or $-1$ and no additional overall normalisation. This means that, for example, $\hat{\nu}_{1,2} \wedge \hat{\nu}_{2,2} \wedge \hat{\nu}_{3,1}= \epsilon_{b_1 b_2} \hat{\nu}^{b_1}_{1,2} \wedge \hat{\nu}^{b_2}_{2,2} \wedge \hat{\nu}_{3,1}$, while $\hat{\nu}_{1,3} \wedge \hat{\nu}_{2,1} \wedge \hat{\nu}_{3,1} = \tfrac{1}{2} \epsilon_{b_1 b_2} \hat{\nu}^{b_1,b_2}_{1,3} \wedge \hat{\nu}_{2,1} \wedge \hat{\nu}_{3,1}$. Eq.~\eqref{eq4.11} is our main general result for the holomorphic Yukawa couplings. All the ambient space forms $\hat{\nu}_{i,a}$ can be constructed explicitly, starting from Appendix \ref{appendixPn}, which provides (harmonic) representatives of ambient space cohomology for the highest-degree non-trivial forms in the chain~\eqref{4.10}, and then solving these equations to find all associated lower-degree forms. With these forms inserted, the integral \eqref{eq4.11} can be carried out explicitly, as we will demonstrate for the examples in Section \ref{chapter3examples}. As before, it is useful to discuss some special cases. First assume that the $(0, 1)$-forms $\nu_i$ are of type $\tau_i$, so that $\hat{\nu}_{i,a} = 0$ for all $a>\tau_i$. If the $\tau_i$ sum up to less than the ambient space dimension $\textrm{dim}(\mathcal{A})$, then all terms in Eq.~\eqref{eq4.11} vanish due to the summation constraint. As a result, the Yukawa coupling vanishes. Let us formulate this concisely: \vspace{2mm} \noindent {\bf Theorem}: Assume that the forms $\nu_i$ which enter the integral \eqref{Yukgen2} for the Yukawa couplings are of type $\tau_i$, where $i = 1, 2, 3$.
Then \begin{equation} \tau_1+ \tau_2 +\tau_3 < {\rm dim} ({\cal A}) \qquad \Longrightarrow \qquad \lambda (\nu_1,\nu_2,\nu_3) =0 \, . \label{4.12} \end{equation} \vspace{2mm} \noindent This is the general version of the vanishing theorem we have already seen for co-dimensions one and two in previous sections. As we have discussed, the type $\tau$ of a form $\nu \in H^1(X,K)$ is determined by the cohomology $H^{\tau}(\mathcal{A}, \Lambda^{\tau-1} \mathcal{N}^* \otimes \mathcal{K})$, from which it descends via successive co-boundary maps. As a rule of thumb, large $\tau$'s are relatively rare since they require many non-trivial co-boundary maps and cohomologies. Consequently, for a large ambient space dimension $\textrm{dim}(\mathcal{A})$, the condition in \eqref{4.12} is frequently satisfied and many Yukawa couplings vanish. We stress again that vanishing due to \eqref{4.12} appears to be topological in nature, that is, these couplings vanish despite being allowed by the obvious symmetries of the four-dimensional effective theory. Another special case of interest is for types $\tau_i$ satisfying $\tau_1 + \tau_2 + \tau_3 = \textrm{dim}(\mathcal{A})$. In this case, only one term in \eqref{eq4.11} contributes and the integral simplifies to \begin{equation} \lambda (\nu_1,\nu_2,\nu_3) \sim \dfrac{1}{(2 \pi)^k} \int_{\mathcal{A}} \mu \wedge \hat{\nu}_{1,\tau_1} \wedge \hat{\nu}_{2,\tau_2} \wedge \hat{\nu}_{3,\tau_3} \, , \end{equation} \noindent where we have dropped an overall phase factor. Note that, unlike in the general case \eqref{eq4.11}, all three forms $\hat{\nu}_{i,\tau_i}$ in the integrand are closed and represent ambient space cohomologies in $H^{\tau_i}(\mathcal{A}, \Lambda^{\tau_i-1} \mathcal{N}^* \otimes \mathcal{K})$. They can, therefore, be directly constructed from the rules given in Appendix~\ref{appendixPn}, without any need to solve Eqs.~\eqref{4.10}. 
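As an aside, the combinatorics behind this counting is easily made explicit: for co-dimension $k$, the sum in Eq.~\eqref{eq4.11} only receives contributions from triples $(a_1, a_2, a_3)$ with $1 \leq a_i \leq k+1$ and $a_1+a_2+a_3 = 3+k$. The short Python sketch below (illustrative only; the function name is ours) enumerates these triples and confirms that, for co-dimension two, only permutations of $(1,1,3)$ and $(1,2,2)$ contribute, which are precisely the two cases worked out in the examples below.

```python
from itertools import product

def contributing_types(k):
    """Triples (a1, a2, a3) with 1 <= a_i <= k + 1 satisfying the
    summation constraint a1 + a2 + a3 = dim(A) = 3 + k of Eq. (4.11)."""
    dim_A = 3 + k
    return sorted(t for t in product(range(1, k + 2), repeat=3)
                  if sum(t) == dim_A)

# Co-dimension two (dim(A) = 5): only permutations of (1,1,3) and (1,2,2).
print(contributing_types(2))
```

Triples with $a_1+a_2+a_3 < \textrm{dim}(\mathcal{A})$ never appear in the output, in line with the vanishing condition \eqref{4.12}.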
\section{Examples} \label{chapter3examples} In this section, we will illustrate our general statements for models on a certain co-dimension two CICY and show that the relevant ambient space integrals can, in fact, be carried out explicitly. We begin by introducing the specific CICY and its properties, then move on to describing line bundles and line bundle-valued forms before we derive two more specific formulae for the Yukawa couplings for types $(\tau_1, \tau_2, \tau_3) = (1, 1, 3)$ and $(\tau_1, \tau_2, \tau_3) = (1, 2, 2)$, respectively. These results are then applied to three examples, each defined by a certain line bundle sum on the relevant CICY. \subsection{A co-dimension two CICY and its properties} Our chosen CICY is a co-dimension two manifold in the ambient space $\mathcal{A} = \mathbb{P}^{1} \times \mathbb{P}^{1} \times \mathbb{P}^{1} \times \mathbb{P}^{1} \times \mathbb{P}^{1}$, whose homogeneous coordinates we either denote by $\mathbf{x}=(x_i^{\alpha})$, where $i=1,...,5$ and $\alpha=0,1$, or, more explicitly, by $\mathbf{x}=((x_0, x_1),(y_0, y_1),(u_0, u_1),(v_0, v_1),(w_0, w_1))$. We also introduce affine coordinates $z_i = x_i^1/x_i^0$ on the coordinate patch of $\mathcal{A}$ where all $x_i^0 \neq 0$. The CICY is defined as the common zero locus in $\mathcal{A}$ of two homogeneous polynomials $p = (p_1,p_2)$ with multi-degrees $\mathbf{q}_1 = (0,1,1,1,1)$ and $\mathbf{q}_2=(2,1,1,1,1)$, respectively. This information is often summarised by the configuration matrix \begin{eqnarray} X=\begin{pmatrix} \mathbb{P}^{1}& \vline & 0 & 2 \\ \mathbb{P}^{1}&\vline & 1 & 1 \\ \mathbb{P}^{1} &\vline & 1 & 1 \\ \mathbb{P}^{1} &\vline & 1 & 1 \\ \mathbb{P}^{1}& \vline & 1 & 1 \end{pmatrix}_{- 80}^{5, 45} \label{cicy7487} \end{eqnarray} \noindent whose columns are given by $\mathbf{q}_1$ and $\mathbf{q}_2$. Attached as a superscript are the Hodge numbers $h^{1,1}(X)$, $h^{2,1}(X)$ and as a subscript the Euler number, $\eta(X)$. In the standard list of Refs.
\cite{Candelas:1987kf, candelascicy2}, this manifold carries the number 7487.\footnote{The reason why CICY 7487 was chosen is the large number of line bundle GUT models that can be built on this manifold. Twenty-four of those models, which are listed in Ref.~\cite{Anderson:2011ns}, were investigated during the development of this chapter. They all have an $SU(5) \times S(U(1)^5)$ gauge group and produce the Standard Model with three families upon dividing by the freely-acting symmetry $\Gamma$.} The defining polynomials $p = (p_1, p_2)$ can also be viewed as a section of the line bundle sum \begin{eqnarray} \mathcal{N} = \mathcal{O}_{\mathcal{A}}(\mathbf{q}_1) \oplus \mathcal{O}_{\mathcal{A}}(\mathbf{q}_2) \, . \end{eqnarray} \noindent For later reference, we also define $\mathbf{q}=\mathbf{q}_1+\mathbf{q}_2 = (2,2,2,2,2)$ and note that \begin{eqnarray} \Lambda^2 \mathcal{N} = \mathcal{O}_{\mathcal{A}}(\mathbf{q}) \, . \end{eqnarray} \noindent In order to reduce the size of the problem, it will frequently be useful to work on a discrete quotient of the above manifold. In fact, $X$ has a freely-acting symmetry $\Gamma = \mathbb{Z}_2 \times \mathbb{Z}_2$ whose generators act on the homogeneous coordinates as \begin{eqnarray} \label{Gammasym} \gamma(g_1) = \mathbbmss{1}_5 \times \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} \, , \qquad \gamma(g_2) = \mathbbmss{1}_5 \times \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix} \, , \end{eqnarray} \noindent while the action on the defining polynomials is \begin{eqnarray} \label{Gammarho} \rho(g_1) = \textrm{diag}(1,-1) \, , \qquad \rho(g_2) = \textrm{diag}(1,-1) \, . \end{eqnarray} \noindent The quotient $\tilde{X}= X/\Gamma$ is a Calabi-Yau manifold with Euler number $\eta (\tilde{X}) = \eta(X)/\vert\Gamma\vert = -20$ and Hodge numbers $h^{1,1}(\tilde{X})=5$, $h^{2,1}(\tilde{X})=15$.
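As a quick consistency check of the action \eqref{Gammasym}, one can verify the $\mathbb{Z}_2 \times \mathbb{Z}_2$ group relations directly: both generators square to the identity on each $\mathbb{P}^1$ factor, and they commute up to an overall sign, which acts trivially on homogeneous coordinates. The sketch below (illustrative only; it checks the group relations, not the freeness of the action) does this for the $2 \times 2$ blocks.

```python
# Generator blocks of Gamma = Z2 x Z2 acting on the homogeneous
# coordinates (x^0, x^1) of each P^1 factor, cf. Eq. (Gammasym).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g1 = [[1, 0], [0, -1]]    # gamma(g_1) block
g2 = [[0, 1], [1, 0]]     # gamma(g_2) block
identity = [[1, 0], [0, 1]]

assert matmul(g1, g1) == identity           # g_1^2 = 1
assert matmul(g2, g2) == identity           # g_2^2 = 1
# g_1 g_2 = - g_2 g1: the generators commute as maps of P^1, since an
# overall sign rescaling acts trivially on homogeneous coordinates.
neg = [[-x for x in row] for row in matmul(g2, g1)]
assert matmul(g1, g2) == neg
print("Z2 x Z2 relations verified")
```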
\subsection{Line bundles and line bundle-valued harmonic forms} The CICY defined by \eqref{cicy7487} is favourable, by which we mean that the entire second cohomology of $X$ descends from the ambient space. This implies that every line bundle $L \rightarrow X$ can be obtained as a restriction $L = \mathcal{L}\vert_X$ of an ambient space line bundle $\mathcal{L} = \mathcal{O}_{\mathcal{A}}(\mathbf{l})$, where $\mathbf{l} = (l^1,...,l^5)$. In order to compute Yukawa integrals, we need to understand the cohomology of such ambient space line bundles and write down explicit differential forms representing these cohomologies. Since we are dealing with products of $\mathbb{P}^1$ factors, results from Chapter \ref{tetraquadricchapter} can be imported; a generalisation to arbitrary $\mathbb{P}^n$ factors is found in Appendix \ref{appendixPn}. As before, the cohomology dimensions for a line bundle $\mathcal{L} = \mathcal{O}_{\mathcal{A}}(\mathbf{l})$ are obtained by combining Bott's formula for line bundle cohomology on $\mathbb{P}^1$ and the K\"unneth formula. Firstly, all cohomologies of $\mathcal{L}$ vanish if at least one of the integers $l^i$ equals $-1$. If all $l^i \neq -1$, then there is precisely one non-vanishing cohomology $H^q(\mathcal{A},\mathcal{L})$, and $q$ equals the number of integers $l^i$ with $l^i \leq -2$. The dimension of this one non-vanishing cohomology is given by \begin{equation} \label{eq5.6} h^q(\mathcal{A},\mathcal{L}) = \prod_{i:l^i \geq 0} (l^i+1)\prod_{i:l^i \leq -2} (-l^i-1) \, .
\end{equation} \noindent The $\mathcal{L}$-valued $(0,q)$-forms representing $H^q(\mathcal{A},\mathcal{L})$ can be written down as \begin{eqnarray} \label{5.5} \alpha_{(\mathbf{l})}=P_{(\mathbf{l})}\prod_{i: l^{i} \leq -2} \kappa_i^{l^i} d \overline{z}_i \, , \end{eqnarray} \noindent where $\kappa_i = 1+\vert z_i \vert^2$ and $P_{(\mathbf{l})}$ is a polynomial of degree $l^i$ in $z_i$, if $l^i \geq 0$, and of degree $-l^i-2$ in $\overline{z}_i$, if $l^i \leq -2$. In fact, the above forms are harmonic (relative to the Fubini-Study metric) and are, hence, in one-to-one correspondence with the elements of $H^q(\mathcal{A},\mathcal{L})$. In particular, note that the number of arbitrary coefficients in the polynomial $P_{(\mathbf{l})}$ equals the dimension \eqref{eq5.6} of the cohomology group. The above differential forms have been written down in affine coordinates $z_i$. A useful equivalent version in terms of homogeneous coordinates is given by \begin{eqnarray} \alpha_{(\mathbf{l})}=\tilde{P}_{(\mathbf{l})}\prod_{i: l^{i} \leq -2} \sigma_i^{l^i} d \overline{\mu}_i \, , \end{eqnarray} \noindent where $\tilde{P}_{(\mathbf{l})}$ is the homogeneous counterpart of $P_{(\mathbf{l})}$ and \begin{eqnarray} \sigma_i = \vert x_i^0\vert^2 + \vert x_i^1\vert^2 \, , \qquad d \mu_i = \epsilon_{\alpha \beta}\, x_i^{\alpha} d x_i^{\beta} \, . \end{eqnarray} The Yukawa couplings involve wedge products of differential forms and we should, therefore, understand what happens if we form wedge products of the above forms. To be specific, let us consider a form $\alpha_{(\mathbf{l})}$ with associated polynomial $\tilde{P}_{(\mathbf{l})}$, representing the cohomology $H^p(\mathcal{A}, \mathcal{O}_{\mathcal{A}}(\mathbf{l}))$ and a form $\beta_{(\mathbf{m})}$ with associated polynomial $\tilde{Q}_{(\mathbf{m})}$, representing the cohomology $H^q(\mathcal{A}, \mathcal{O}_{\mathcal{A}}(\mathbf{m}))$.
It is clear that $\alpha_{(\mathbf{l})}\wedge \beta_{(\mathbf{m})}$ is $\overline{\partial}$-closed and represents an element of $H^{p+q}(\mathcal{A}, \mathcal{O}_{\mathcal{A}}(\mathbf{l} + \mathbf{m}))$; however, this will, in general, not be the harmonic representative. We can ask how this harmonic representative, which we denote by $\gamma_{(\mathbf{l}+\mathbf{m})}$ with associated polynomial $\tilde{R}_{(\mathbf{l}+\mathbf{m})}$, can be obtained from $\alpha_{(\mathbf{l})}$ and $\beta_{(\mathbf{m})}$. Fortunately, there is a simple answer which can be expressed in terms of the associated polynomials $\tilde{P}_{(\mathbf{l})}$, $\tilde{Q}_{(\mathbf{m})}$ and $\tilde{R}_{(\mathbf{l}+\mathbf{m})}$. For a product of $\mathbb{P}^1$ spaces this has been derived in Chapter~\ref{tetraquadricchapter}. In Appendix~\ref{appendixPn}, we explain how harmonic forms on a single $\mathbb{P}^n$ are multiplied. These results can be easily applied to a product of projective spaces with arbitrary dimensions and lead to \begin{equation} \label{RPQ} \tilde{R}_{(\mathbf{l}+\mathbf{m})} = c_{\mathbf{l},\mathbf{m}} \tilde{P}_{(\mathbf{l})} \tilde{Q}_{(\mathbf{m})} \, , \end{equation} \noindent where $c_{\mathbf{l},\mathbf{m}}$ is a numerical coefficient explicitly given by \begin{equation} \label{coefc} c_{\mathbf{l},\mathbf{m}} = \prod_{i: l^i\leq-2} c_{l^i,m^i}\prod_{j: m^j\leq-2} c_{m^j,l^j} \, , \quad c_{l,m} = \dfrac{(-l-m-1)!}{(-l-1)!} \, . \end{equation} \noindent The polynomial multiplication on the RHS of Eq.~\eqref{RPQ} is understood with a replacement of coordinates by associated partial derivatives whenever positive degrees meet negative degrees. More specifically, whenever coordinates $x_i^{\alpha}$ in $\tilde{P}_{(\mathbf{l})}$ act on coordinates $\overline{x}{}_i^{\alpha}$ in $\tilde{Q}_{(\mathbf{m})}$, the former should be replaced by $\partial/\partial \overline{x}{}_i^{\alpha}$.
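The counting rules above translate directly into a few lines of code. The following Python sketch (illustrative only; the function names are ours) implements the dimension formula \eqref{eq5.6} together with the single-factor coefficient $c_{l,m}$ of Eq.~\eqref{coefc}.

```python
from math import factorial, prod
from fractions import Fraction

def line_bundle_cohomology(l):
    """Cohomology of O_A(l) on a product of P^1 factors, combining Bott's
    formula with Kunneth, cf. Eq. (5.6): returns (q, h^q) for the single
    non-vanishing cohomology, or None if some entry equals -1."""
    if any(li == -1 for li in l):
        return None            # all cohomologies vanish
    q = sum(1 for li in l if li <= -2)
    h = prod(li + 1 for li in l if li >= 0) * \
        prod(-li - 1 for li in l if li <= -2)
    return q, h

def c_pair(l, m):
    """Single-factor coefficient c_{l,m} = (-l-m-1)!/(-l-1)!, cf. Eq. (coefc)."""
    return Fraction(factorial(-l - m - 1), factorial(-l - 1))

# Example: l = (1,-2,0,0,1) has one entry <= -2, so q = 1 and
# h^1 = (1+1)(0+1)(0+1)(1+1) * (2-1) = 4.
print(line_bundle_cohomology((1, -2, 0, 0, 1)))   # -> (1, 4)
print(c_pair(-5, 3))                               # -> 1/24
```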
In the following, we would like to further evaluate the Yukawa couplings for our example manifold and certain specific types. We will work within our familiar setting, that is, we have three line bundles $K_i = \mathcal{O}_X (\mathbf{k}_i)$ on $X$ underlying the expression for the Yukawa couplings. These line bundles descend from their ambient space counterparts $\mathcal{K}_i = \mathcal{O}_{\mathcal{A}}(\mathbf{k}_i)$ and have to satisfy the condition \begin{equation} \label{conditionk1k2k3} K_1 \otimes K_2 \otimes K_3 = \mathcal{O}_X \qquad \Longrightarrow \qquad \mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3 = 0 \, . \end{equation} \noindent We would like to calculate the Yukawa couplings for three $K_i$-valued $(0, 1)$-forms $\nu_i \in H^1(X,K_i)$. From Eqs.~\eqref{3.37.2}, each of these comes with a chain of ambient space forms, namely the $(0,1)$-forms $\hat{\nu}_i$, the $(0, 2)$-forms $\hat{\omega}_i$ and the $(0, 3)$-forms $\hat{\rho}_i$ which enter the general formula \eqref{3.43} for the Yukawa couplings. In the following, we focus on certain cases where the $\nu_i$ have specific types $\tau_i$. \subsection{Yukawa couplings of type $(1, 1, 3)$} We now assume that two of the forms $\nu_i$, say $\nu_1$ and $\nu_2$ for definiteness, are of type 1, while $\nu_3$ is of type 3. Note that this saturates the bound in Eq.~\eqref{theoremdim5} and constitutes one of the two simplest cases for co-dimension two to which the vanishing theorem does not apply (the other one being discussed in the next sub-section). In this case, the Yukawa couplings are given by Eq.~\eqref{yuk113}, which only involves the ambient space forms $\hat{\nu}_1 \in H^1(\mathcal{A},\mathcal{K}_1)$, $\hat{\nu}_2 \in H^1(\mathcal{A},\mathcal{K}_2)$ and $\hat{\rho}_3 \in H^3(\mathcal{A},\Lambda^2 \mathcal{N}^*\otimes\mathcal{K}_3)$.
Following the rules for cohomology explained in the last sub-section, in order for $H^1(\mathcal{A},\mathcal{K}_1)$ and $H^1(\mathcal{A},\mathcal{K}_2)$ to be non-trivial, we require that $\mathbf{k}_1$ and $\mathbf{k}_2$ each have precisely one entry less than or equal to $-2$ and all other entries non-negative. Further, for $H^3(\mathcal{A},\Lambda^2 \mathcal{N}^*\otimes\mathcal{K}_3) = H^3(\mathcal{A}, \mathcal{O}_{\mathcal{A}}(\mathbf{k}_3 - \mathbf{q}))$ to be non-trivial, the vector $\mathbf{k}_3$ is required to have precisely three entries less than or equal to $0$ and the others greater than or equal to $2$. Due to Eq.~\eqref{conditionk1k2k3}, these non-positive entries must arise in different components of the three vectors. Without restricting generality, we can, therefore, assume that $k_1^1 \leq -2$, with all other components of $\mathbf{k}_1$ being greater than or equal to $0$, $k_2^2 \leq -2$ with all other components of $\mathbf{k}_2$ greater than or equal to 0, $k_3^3 \leq 0$, $k_3^4 \leq 0$, $k_3^5 \leq 0$, while $k_3^1 \geq 2$, $k_3^2 \geq 2$. Using these conventions, we can specialise Eq.~\eqref{5.5} to find the following explicit expressions for the relevant ambient space forms \begin{equation} \begin{array}{ll} \hat{\nu}_1 & \!\!\! = \kappa^{k^1_1}_1 P_{(\mathbf{k_1})} d \overline{z}_1 \, , \qquad \quad \,\,\,\, \hat{\nu}_2 = \kappa^{k^2_2}_2 Q_{(\mathbf{k_2})} d \overline{z}_2 \, , \\[1mm] \hat{\rho}_3 & \!\!\! = \kappa^{k^3_3-2}_3 \kappa^{k^4_3-2}_4 \kappa^{k^5_3-2}_5 R_{(\mathbf{k_3}-\mathbf{q})} d \overline{z}_3 \wedge d \overline{z}_4 \wedge d \overline{z}_5 \, . \end{array} \label{5.9} \end{equation} \noindent Inserting the forms into Eq.~\eqref{yuk113} leads to \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = \frac{1}{(2 \pi )^2}\int_{{\mathbb{C}^5}} d^5 z \ d^5\overline{z} \ \kappa^{k^1_1}_1 \kappa^{k^2_2}_2 \kappa^{k^3_3-2}_3 \kappa^{k^4_3-2}_4 \kappa^{k^5_3-2}_5 \ P_{(\mathbf{k_1})} Q_{(\mathbf{k_2})} R_{(\mathbf{k_3}-\mathbf{q})}.
\label{5.10} \end{equation} \noindent By inserting expressions for the polynomials, this integral splits up into products of integrals over $\mathbb{P}^1$ and can be worked out explicitly. Alternatively, we can proceed by noticing that the integrand $\hat{\nu}_1 \wedge \hat{\nu}_2 \wedge \hat{\rho}_3$ represents a cohomology class in $H^5(\mathcal{A},\mathcal{O}_{\mathcal{A}}(-\mathbf{q}))$, which is one-dimensional. Its harmonic representative has the form \begin{equation} c \mu(P,Q,R) \kappa_1^{-2} \kappa_2^{-2} \kappa_3^{-2} \kappa_4^{-2} \kappa_5^{-2} d^5 \overline{z} \, , \end{equation} \noindent where \begin{equation} \mu (P,Q,R)= \tilde{P}\tilde{Q}\tilde{R} \end{equation} \noindent must be a number, since $h^5(\mathcal{A},\mathcal{O}_{\mathcal{A}}(-\mathbf{q}))=1$. This number is obtained from polynomial multiplication as discussed in the previous sub-section and $c$ is a constant obtained from \eqref{coefc}, \begin{equation} \resizebox{0.65\hsize}{!}{$ \begin{array}{lll} c& = &c_{k_1^1,-k_1^1-2} \ c_{k_2^2,-k_2^2-2} \ c_{k_3^3-2,-k_3^3} \ c_{k_3^4-2,-k_3^4} \ c_{k_3^5-2,-k_3^5} \\[1mm] & = &\tfrac{1}{(-k_1^1-1)!} \tfrac{1}{(-k_2^2-1)!} \tfrac{1}{(-k_3^3+1)!} \tfrac{1}{(-k_3^4+1)!} \tfrac{1}{(-k_3^5+1)!} \, . \end{array}$} \label{c113} \end{equation} \noindent Together with the basic identity \begin{eqnarray} \int_{\mathbb{C}} \dfrac{1}{\kappa^2} d z \wedge d \overline{z} = 2 \pi i \, , \end{eqnarray} \noindent this leads to the final expression \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = 8 i \pi^3 c \ \mu(P,Q,R) \, , \qquad \mu(P,Q,R) = \tilde{P}\tilde{Q}\tilde{R} \, . \label{result113} \end{equation} \noindent This equation represents our final result for the Yukawa couplings in this case and it allows for an “algebraic” calculation by multiplying together the polynomials $\tilde{P}$, $\tilde{Q}$ and $\tilde{R}$.
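The "algebraic" multiplication entering $\mu(P,Q,R)$ can be illustrated by a minimal sympy sketch (the helper \texttt{mu} and the toy monomials are ours, not part of the derivation): each holomorphic coordinate appearing in $\tilde{P}\tilde{Q}$ acts as a partial derivative with respect to the corresponding barred coordinate in $\tilde{R}$, so that the triple product collapses to a number.

```python
import sympy as sp

# Holomorphic coordinates and their barred counterparts on two of the
# P^1 factors (variable names are ours, for illustration only).
x0, x1, w0, w1 = sp.symbols('x0 x1 w0 w1')
xb0, xb1, wb0, wb1 = sp.symbols('xb0 xb1 wb0 wb1')

def mu(PQ, R):
    """Contract the polynomial PQ against R: each holomorphic coordinate
    in a monomial of PQ becomes a partial derivative with respect to the
    corresponding barred coordinate in R."""
    result = 0
    for monom, coeff in sp.Poly(PQ, x0, x1, w0, w1).terms():
        term = R
        for var, power in zip((xb0, xb1, wb0, wb1), monom):
            term = sp.diff(term, var, power)
        result += coeff * term
    return sp.expand(result)

# Toy monomials mimicking the structure of the type (1,1,3) example below:
P = w0**2            # one monomial of P-tilde
Q = x0 * w0          # one monomial of Q-tilde
R = xb0 * wb0**3     # the pairing monomial of R-tilde
print(mu(P * Q, R))  # x0 w0^3 -> d/dxb0 d^3/dwb0^3 acting on R: 3! = 6
```

When the degrees of $\tilde{P}\tilde{Q}$ exactly match the conjugate degrees of $\tilde{R}$, as required for an element of the one-dimensional top cohomology, all coordinates are removed by the derivatives and a pure number remains.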
Note that, given the rules for converting coordinates into partial derivatives in these polynomials, as discussed in the last sub-section, this must always result in a number, that is, the partial derivatives remove all remaining coordinates. \subsection{Yukawa couplings of type $(1, 2, 2)$} The other simple case which avoids the vanishing theorem \eqref{theoremdim5} arises if one of the forms, say $\nu_1$, is of type 1, while $\nu_2$ and $\nu_3$ are of type 2. This case can be dealt with in complete analogy with the $(1, 1, 3)$ case in the previous sub-section. The relevant formula for the Yukawa couplings in this case is Eq.~\eqref{yuk122}, which only involves the ambient space forms $\hat{\nu}_1 \in H^1(\mathcal{A},\mathcal{K}_1)$, $\hat{\omega}_2 \in H^2(\mathcal{A}, \mathcal{N}^* \otimes\mathcal{K}_2)$ and $\hat{\omega}_{3} \in H^2(\mathcal{A}, \mathcal{N}^* \otimes \mathcal{K}_{3} )$. In order to construct these forms, it is again useful to fix our conventions. Since we require that $H^1(\mathcal{A}, \mathcal{K}_1)$ be non-trivial, we need precisely one component in $\mathbf{k}_1$ less than or equal to $-2$ (and all others non-negative) and we choose $k^1_1 \leq -2$. The two $(0, 2)$-forms $\hat{\omega}_2$, $\hat{\omega}_3$ need to originate from different line bundles in the rank two bundle $\mathcal{N}^*$ tensored with $\mathcal{K}_2$ and $\mathcal{K}_3$, or else the Yukawa coupling would vanish, so we assume that $\hat{\omega}_2 \in H^2(\mathcal{A},\mathcal{O}_{\mathcal{A}}(-\mathbf{q}_1)\otimes \mathcal{K}_2)$ and $\hat{\omega}_3 \in H^2(\mathcal{A},\mathcal{O}_{\mathcal{A}}(-\mathbf{q}_2)\otimes \mathcal{K}_3)$. Hence we need precisely two entries in $\mathbf{k}_2 - \mathbf{q}_1$ and in $\mathbf{k}_3 - \mathbf{q}_2$ to be less than or equal to $-2$ (with all other entries non-negative). Due to Eq.~\eqref{conditionk1k2k3}, all negative entries have to arise in different components. 
Hence, we can choose $k_2^2 - q_1^2 \leq -2$, $k_2^3-q_1^3 \leq -2$, $k_3^4 - q_2^4 \leq -2$ and $k_3^5 - q_2^5 \leq -2$, with all the other entries non-negative. Applying these conventions to Eq.~\eqref{5.5} results in \begin{equation} \begin{array}{lll} \hat{\nu}_1 &=& \kappa^{k^1_1}_1 P_{(\mathbf{k_1})} d \overline{z}_1 \, , \\[1mm] \hat{\omega}_2 &=& \kappa^{k^2_2 - q^2_1}_2 \kappa^{k^3_2 - q^3_1}_3 Q_{(\mathbf{k}_2-\mathbf{q}_1)} d \overline{z}_2 \wedge d \overline{z}_3 \, , \\ [1mm] \hat{\omega}_3 &=& \kappa^{k^4_3-q^4_2}_4 \kappa^{k^5_3-q^5_2}_5 R_{(\mathbf{k_3}-\mathbf{q}_2)} d \overline{z}_4 \wedge d \overline{z}_5 \, . \end{array} \label{5.13} \end{equation} \noindent Inserting these forms into Eq.~\eqref{yuk122}, the integral can be carried out as in the previous subsection and results in the same formula \begin{equation} \lambda(\nu_1,\nu_2,\nu_3) = 8 i \pi^3 c \ \mu(P,Q,R) \, , \qquad \mu(P,Q,R) = \tilde{P}\tilde{Q}\tilde{R} \, , \label{result122} \end{equation} \noindent but with the constant $c$ now given by \begin{equation} \resizebox{0.94\hsize}{!}{$ \begin{array}{lll} c& \!\!\! = &\!\!\! c_{k_1^1,-k_1^1-2} \ c_{k_2^2 - q^2_1 ,-k_2^2+q^2_1-2} \ c_{k_2^3 - q^3_1 ,-k_2^3+q^3_1-2} \ c_{k_3^4 - q^4_2 ,-k_3^4+q^4_2-2} \ c_{k_3^5 - q^5_2 ,-k_3^5+q^5_2-2} \\[1mm] &\!\!\! = & \!\!\! \tfrac{1}{(-k_1^1-1)!} \tfrac{1}{(-k_2^2+q_1^2-1)!} \tfrac{1}{(-k_2^3+q_1^3-1)!} \tfrac{1}{(-k_3^4+q_2^4-1)!} \tfrac{1}{(-k_3^5+q_2^5-1)!} \, . \end{array}$} \label{c122} \end{equation} \subsection{An example with vanishing Yukawa couplings} \noindent We consider a rank five line bundle sum on the CICY \eqref{cicy7487} specified by the following line bundles: \begin{equation} \!\!\!\! \begin{array}{l} L_1=\mathcal{O}_X(1,0,-2,0,1) \, , \quad L_2=\mathcal{O}_X(1,-2,0,1,0) \, , \quad L_3=\mathcal{O}_X(0,1,0,0,-1) \, , \\[1mm] L_4=\mathcal{O}_X(0,0,1,-1,0) \, , \quad L_5=\mathcal{O}_X(-2,1,1,0,0) \, . 
\end{array} \end{equation} \noindent This model leads to a four-dimensional theory with gauge group $SU(5) \times S(U(1)^5)$. The non-vanishing cohomologies of these line bundles and their tensor products are \begin{equation} \begin{array}{llllllll} h^\bullet(X,L_1)&\! =&\!(0, 4, 0, 0) \, , & \quad& h^\bullet(X,L_2)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_5)&\! =&\!(0, 4, 0, 0) \, , &\quad & h^\bullet(X,L_1 \otimes L_2)&\! =&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_1 \otimes L_3)&\!=&\!(0,4,0,0) \, , & \quad& h^\bullet(X,L_2 \otimes L_4)&\!=&\!(0,4,0,0) \, , \\[1mm] h^\bullet(X,L_3 \otimes L_4)&\!=&\!(0,1,1,0) \, , & \quad & h^\bullet(X,L_1 \otimes L_2^*)&\!=&\!(0,4,4,0) \, , \\[1mm] h^\bullet(X,L_1 \otimes L_4^*)&\!=&\!(0, 16, 0, 0) \, , & \quad & h^\bullet(X,L_2 \otimes L_3^*)&\!=&\!(0, 16, 0, 0) \, , \\[1mm] h^\bullet(X,L_3 \otimes L_4^*)&\!=&\!(0, 1, 1, 0) \, , & \quad & h^\bullet(X,L_5 \otimes L_3^*)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_5 \otimes L_4^*)&\!=&\!(0, 4, 0, 0) \, . &\ \quad &&& \end{array} \end{equation} \noindent These results imply the following upstairs spectrum \begin{equation} \begin{array}{l} 4 \ {\bf 10}_1,\,\, 4 \ {\bf 10}_2,\,\, 4 \ {\bf 10}_5, \\[1mm] 4 \ \overline{\bf 5}_{1,2}, \,\, 4 \ \overline{\bf 5}_{1,3}, \,\, 4 \ \overline{\bf 5}_{2,4}, \,\, \overline{\bf 5}{}^H_{3,4}, \,\, {\bf 5}^{\overline{H}}_{3,4}, \\[1mm] 4 \ {\bf 1}_{1,2}, \,\, 4 \ {\bf 1}_{2,1}, \,\, 4 \ {\bf 1}_{1,3}, \,\, 12 \ {\bf 1}_{1,4}, \,\, 12 \ {\bf 1}_{2,3}, \,\, 4 \ {\bf 1}_{2,4}, \,\, {\bf 1}_{3,4}, \,\, {\bf 1}_{4,3}, \,\, 4 \ {\bf 1}_{5,3}, \,\, 4 \ {\bf 1}_{5,4} \, . \end{array} \end{equation} \noindent Here, the bold-face numbers denote $SU(5)$ representations and the subscripts indicate under which of the five $U(1)$ symmetries a multiplet is charged. This spectrum consists of 12 families in $\overline{\mathbf{5}}\oplus \mathbf{10}$, one $\mathbf{5}$--$\overline{\mathbf{5}}$ pair of Higgs multiplets and a number of $SU(5)$ singlets. 
Upon dividing by the freely-acting symmetry $\Gamma = \mathbb{Z}_2 \times \mathbb{Z}_2$ in Eq.~\eqref{Gammasym}, one obtains the standard model spectrum with three families. It is important to remember, however, that only couplings which respect the $S(U(1)^5)$ symmetry are allowed in the four-dimensional theory. One such allowed coupling is described by the following superpotential term \begin{align} \label{Yuk111} W & = \lambda_{I J K} \overline{\mathbf{5}}{}^{(I)}_{1,3} \overline{\mathbf{5}}{}^{(J)}_{2,4} \mathbf{10}^{(K)}_5 \, . \end{align} \noindent In order to compute this coupling, we write down the relevant line bundles and bundle-valued forms which are given by \begin{equation} \label{type111} \!\!\!\!\!\!\! \resizebox{0.935\hsize}{!}{$ \begin{array}{ll} \,\,\ 4 \ \overline{\mathbf{5}}_{1,3} \rightarrow K_1=L_1 \otimes L_3 = \mathcal{O}_X(1, 1, -2, 0, 0), & \hat{\nu}_1= \sigma_3^{-2} \tilde{P}_{(1, 1, -2, 0, 0)} d\bar{\mu}_3 \in H^1(\mathcal{A},\mathcal{K}_1) \, , \\[1mm] \,\,\ 4 \ \overline{\mathbf{5}}_{2,4} \rightarrow K_2 = L_2 \otimes L_4 = \mathcal{O}_X(1, -2, 1, 0, 0), & \hat{\nu}_2= \sigma_2^{-2} \tilde{Q}_{(1, -2, 1, 0, 0)} d\bar{\mu}_2 \in H^1(\mathcal{A},\mathcal{K}_2) \, , \\[1mm] \,\,\ 4 \ \mathbf{10}_5 \rightarrow K_3 = L_5 = \mathcal{O}_X(-2,1,1,0,0) , & \hat{\nu}_3=\sigma_1^{-2} \tilde{R}_{(-2,1,1,0,0)} d\bar{\mu}_1 \in H^1(\mathcal{A},\mathcal{K}_3) \, , \end{array}$} \end{equation} \noindent with explicit polynomials \begin{equation} \begin{array}{lll} \tilde{P} & = & p_0 x_0 y_0+p_1 x_0 y_1 + p_2 x_1 y_0 + p_3 x_1 y_1 \, , \\[1mm] \tilde{Q} & = & q_0 x_0 u_0+q_1 x_0 u_1 + q_2 x_1 u_0 + q_3 x_1 u_1 \, , \\[1mm] \tilde{R} & = &r_0 y_0 u_0+r_1 y_0 u_1 + r_2 y_1 u_0 + r_3 y_1 u_1 \, . 
\end{array} \end{equation} \noindent Evidently, from Eq.~\eqref{type111}, all three forms $\nu_i$ are of type $\tau_i = 1$ and, hence, the Yukawa couplings $\lambda_{IJK}$ in Eq.~\eqref{Yuk111} are all zero as a consequence of the vanishing theorem~\eqref{theoremdim5}. \subsection{An example with Yukawa couplings of type $(1, 1, 3)$} \label{sectionexample113} \noindent A line bundle model on the CICY \eqref{cicy7487} which realises Yukawa couplings of type $(\tau_1, \tau_2, \tau_3) = (1, 1, 3)$ is defined by the five line bundles \begin{equation} \!\!\!\! \begin{array}{l} L_1=\mathcal{O}_X(1,-2,0,0,1) \, , \quad L_2=\mathcal{O}_X(0,1,0,1,-2) \, , \quad L_3=\mathcal{O}_X(0,0,1,-2,1) \, , \\[1mm] L_4=\mathcal{O}_X(0,0,-1,0,1) \, , \quad L_5=\mathcal{O}_X(-1,1,0,1,-1) \, . \end{array} \end{equation} As before, the four-dimensional gauge group is $SU(5) \times S(U(1)^5)$ and the non-trivial cohomologies of the above line bundles and their tensor products \begin{equation} \begin{array}{llllllll} h^\bullet(X,L_1)&\!=&\!(0, 4, 0, 0) \, , &\quad & h^\bullet(X,L_2)&\!=&\!
(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_3)&\!=&\!(0, 4, 0, 0) \, , & \quad & h^\bullet(X,L_1 \otimes L_3)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_2 \otimes L_3)&\!=&\!(0,1,1,0) \, , & \quad & h^\bullet(X,L_2 \otimes L_4)&\!=&\!(0,1,1,0) \, , \\[1mm] h^\bullet(X,L_2 \otimes L_5)&\!=&\!(0,8,0,0) \, , & \quad & h^\bullet(X,L_3 \otimes L_4)&\!=&\!(0,3,3,0) \, , \\[1mm] h^\bullet(X,L_1 \otimes L_4^*)&\!=&\!(0, 4, 0, 0) \, , & \quad & h^\bullet(X,L_1 \otimes L_5^*)&\!=&\!(0, 8, 0, 0) \, , \\[1mm] h^\bullet(X,L_2 \otimes L_3^*)&\!=&\!(0, 9, 9, 0) \, , &\quad & h^\bullet(X,L_2 \otimes L_4^*)&\!=&\!(0, 16, 0, 0) \, , \\[1mm] h^\bullet(X,L_3 \otimes L_4^*)&\!=&\!(0, 3, 3, 0) \, , & \quad & h^\bullet(X,L_3 \otimes L_5^*)&\!=&\!(0, 12, 0, 0) \, , \\[1mm] h^\bullet(X,L_5 \otimes L_4^*)&\!=&\!(0, 4, 0, 0) \, \end{array} \end{equation} \noindent lead to the following spectrum: \begin{equation} \begin{array}{l} 4 \ {\bf 10}_1,\,\, 4 \ {\bf 10}_2,\,\, 4 \ {\bf 10}_3, \\[1mm] 4 \ \overline{\bf 5}_{1,3}, \,\, \ \overline{\bf 5}{}^H_{2,3}, \,\, \ {\bf 5}^{\overline{H}}_{2,3}, \,\, \ \overline{\bf 5}{}^H_{2,4}, \,\, \ {\bf 5}^{\overline{H}}_{2,4}, \,\, 8 \ \overline{\bf 5}_{2,5}, \,\, 3 \ \overline{\bf 5}{}^H_{3,4}, \,\, 3 \ {\bf 5}^{\overline{H}}_{3,4}, \\[1mm] 4 \ {\bf 1}_{1,4}, \,\, 8 \ {\bf 1}_{1,5}, \,\, 9 \ {\bf 1}_{2,3}, \,\, 9 \ {\bf 1}_{3,2}, \,\, 16 \ {\bf 1}_{2,4}, \,\, 3 \ {\bf 1}_{3,4}, \,\, 3 \ {\bf 1}_{4,3}, \,\, 12 \ {\bf 1}_{3,5}, \,\, 4 \ {\bf 1}_{5,4} \, . \end{array} \end{equation} \noindent This spectrum contains 12 families $\overline{\mathbf{5}}\oplus \mathbf{10}$, five $\mathbf{5}$--$\overline{\mathbf{5}}$ Higgs pairs and $SU(5)$-singlet multiplets and gives rise to a three-family standard model after a suitable quotient with the symmetry \eqref{Gammasym}. 
We are interested in the superpotential terms \begin{align} \label{coupling5.34} W & = \lambda_{I J K} \overline{\mathbf{5}}{}^{H, (I)}_{3,4} \mathbf{10}^{(J)}_1 \overline{\mathbf{5}}{}^{(K)}_{2,5} \, , \end{align} \noindent which are allowed by all gauge symmetries of the model. The relevant harmonic forms are given by \begin{equation} \!\!\! \resizebox{1.04\hsize}{!}{$ \begin{array}{ll} 3 \ \overline{\mathbf{5}}^H_{3,4} \rightarrow K_1 = L_3 \otimes L_4 = \mathcal{O}_X(0, 0, 0, -2, 2), & \hat{\nu}_1= \sigma_4^{-2} \tilde{P}_{(0, 0, 0, -2, 2)} d\bar{\mu}_4 \, , \\[1mm] 4 \ \mathbf{10}_1 \rightarrow K_2 = L_1 = \mathcal{O}_X(1,-2,0,0,1), & \hat{\nu}_2=\sigma_2^{-2} \tilde{Q}_{(1,-2,0,0,1)} d\bar{\mu}_2 \, , \\[1mm] 8 \ \overline{\mathbf{5}}_{2,5} \rightarrow K_3=L_2 \otimes L_5 = \mathcal{O}_X(-1, 2, 0, 2, -3), & \hat{\rho}_3= \sigma_1^{-3} \sigma_3^{-2} \sigma_5^{-5} \tilde{R}_{(-3,0,-2,0,-5)} d\bar{\mu}_1 \wedge d\bar{\mu}_3 \wedge d\bar{\mu}_5 , \\[1mm] \end{array}$} \end{equation} \noindent where $\hat{\nu}_1 \in H^1(\mathcal{A},\mathcal{K}_1)$, $\hat{\nu}_2 \in H^1(\mathcal{A},\mathcal{K}_2)$ and $\hat{\rho}_3 \in H^3(\mathcal{A},\Lambda^2 \mathcal{N}^* \otimes \mathcal{K}_3)$. The polynomials $\tilde{P}$, $\tilde{Q}$, $\tilde{R}$ can be explicitly written as \begin{equation} \begin{array}{lll} \tilde{P} & = &p_0 w_0^2 + p_1 w_0 w_1 + p_2 w_1^2 \, , \\[1mm] \tilde{Q} & = & q_0 x_0 w_0 + q_1 x_1 w_0 + q_2 x_0 w_1 + q_3 x_1 w_1 \, , \\[1mm] \tilde{R} & = & r_0 \overline{x}_0 \overline{w}^3_0 + r_1 \overline{x}_0 \overline{w}^2_0 \overline{w}_1 + r_2 \overline{x}_0 \overline{w}_0 \overline{w}^2_1 + r_3 \overline{x}_0 \overline{w}^3_1 \, + \\[1mm] & & r_4 \overline{x}_1 \overline{w}^3_0 + r_5 \overline{x}_1 \overline{w}^2_0 \overline{w}_1 + r_6 \overline{x}_1 \overline{w}_0 \overline{w}^2_1 + r_7 \overline{x}_1 \overline{w}^3_1 \, . 
\end{array} \end{equation} \noindent Note that the coefficients $p_I$, $q_J$ and $r_K$ in these polynomials parametrise the various families. Using these polynomials, we can compute the upstairs Yukawa couplings from Eq.~\eqref{result113}, which leads to \begin{align} \mu(\tilde{P},\tilde{Q},\tilde{R}) \, = & \; \ 6 p_0 q_0 r_0 + 6 p_0 q_1 r_4 + 2 p_0 q_2 r_1 + 2 p_0 q_3 r_5 + 2 p_1 q_0 r_1 + 2 p_1 q_1 r_5 \, + \notag \\ & \; \ 2 p_1 q_2 r_2 + 2 p_1 q_3 r_6 + 2 p_2 q_0 r_2 + 2 p_2 q_1 r_6 + 6 p_2 q_2 r_3 + 6 p_2 q_3 r_7 \, . \end{align} \noindent The Yukawa couplings $\lambda_{IJK}$ in Eq.~\eqref{coupling5.34} can be easily obtained from this expression by choosing a basis for the coefficients, for example by setting each of the coefficients $p_I$, $q_J$, $r_K$ to one and the others to zero. It is, however, more interesting to see what happens in the downstairs theory, obtained from the present $SU(5)$ GUT theory by a quotient with the $\Gamma = \mathbb{Z}_2 \times \mathbb{Z}_2$ symmetry \eqref{Gammasym}. The GUT multiplets branch as $\mathbf{10} \rightarrow (Q,u,e)$, $\overline{\mathbf{5}} \rightarrow (d,L)$, $\overline{\mathbf{5}}{}^H \rightarrow (T,H)$ into standard model multiplets, where $T$ is the Higgs triplet which has to be projected out. On the quotient manifold $\tilde{X}$, we introduce a Wilson line in the standard hypercharge direction in order to break $SU(5)$ to the standard model group. Such a Wilson line can be described by two $\Gamma$-representations $\chi_2$, $\chi_3$ which we choose as $\chi_2 = (1, 1)$ and $\chi_3 = (0, 0)$. This induces the multiplet charges \begin{eqnarray} \label{charges} \chi_d = \chi_3^* = (0,0)\, , \qquad \chi_H = \chi_2^* = (1,1)\, , \qquad \chi_Q = \chi_2 \otimes \chi_3 = (1,1) \, . \end{eqnarray} \noindent In order to determine the polynomials corresponding to the downstairs spectrum, one has to keep in mind that every differential $d \overline{\mu}_i$ has charge $(1,1)$ under $\Gamma$.
Moreover, for the $(0,3)$-form $\hat{\rho}_3$, there is an additional $(1,1)$ charge flip due to the fact that the bundle $ \Lambda^2 \mathcal{N}^* \otimes \mathcal{K}_3$ transforms non-trivially under $\Gamma$ from Eq.\eqref{Gammarho}. Matching these charges up with the Wilson line charges \eqref{charges}, the representatives of the downstairs spectrum become \begin{equation} \begin{array}{ll} H_{3,4} &\! : \,\, \tilde{P} \in \textrm{Span} (w_0^2 + w_1^2 ) \, ,\\[1mm] Q_1 &\! : \,\, \tilde{Q} \in \textrm{Span} (x_0 w_0 + x_1 w_1) \, , \\[1mm] d_{2,5} &\! : \,\, \tilde{R} \in \textrm{Span} (\overline{x}_0 \overline{w}_0 \overline{w}_1^2 + \overline{x}_1 \overline{w}_1 \overline{w}_0^2 \, , \, \overline{x}_0 \overline{w}_0^3 + \overline{x}_1 \overline{w}_1^3 ) \, . \end{array} \end{equation} \noindent Using Eq.~\eqref{result113} the Yukawa couplings in $\lambda_K H_{3,4} Q_1 d^{(K)}_{2,5} $ become proportional to \begin{eqnarray} \!\!\!\! \mu(H_{3,4}, Q_1, d_{2,5})\! = \!\dfrac{1}{4} \! \big( \partial^2_{\overline{w}_0}+ \partial^2_{\overline{w}_1}\big) \! \big( \partial_{\overline{x}_0}\partial_{\overline{w}_0}+ \partial_{\overline{x}_1}\partial_{\overline{w}_1}\big) \!\! \begin{pmatrix} \overline{x}_0 \overline{w}_0 \overline{w}_1^2 + \overline{x}_1 \overline{w}_1 \overline{w}_0^2 \\ \overline{x}_0 \overline{w}_0^3 + \overline{x}_1 \overline{w}_1^3 \end{pmatrix} \!\! = \!\! \begin{pmatrix} 1 \\ 3 \end{pmatrix}\!, \end{eqnarray} \noindent where we have converted the homogeneous coordinates into derivatives and introduced an additional factor $1/4$, to account for the fact that the integral is carried out on the quotient manifold. The numerical coefficient $c$ in Eq.~\eqref{c113} is given by \begin{eqnarray} c = c_{(-2,0)} c_{(-2,0)} c_{(-5,3)} c_{(-4,2)} c_{(-7,5)} = 1\cdot 1\cdot\dfrac{1}{4!}\cdot\dfrac{1}{3!}\cdot\dfrac{1}{6!} \, . 
\end{eqnarray} \subsection{An example with Yukawa couplings of type $(1, 2, 2)$} \noindent This example on the CICY \eqref{cicy7487} is defined by the five line bundles \begin{equation} \!\!\!\! \begin{array}{l} L_1=\mathcal{O}_X(1,-2,0,0,1)\, , \quad L_2=\mathcal{O}_X(0,1,-2,0,1)\, , \quad L_3=\mathcal{O}_X(0,0,1,1,-2) \, , \\[1mm] L_4=\mathcal{O}_X(0,0,1,-1,0)\, , \quad L_5=\mathcal{O}_X(-1,1,0,0,0) \, , \end{array} \end{equation} \noindent The non-vanishing cohomologies of these line bundles and their tensor products \begin{equation} \begin{array}{llllllll} h^\bullet(X,L_1)&\!=&\!(0, 4, 0, 0) \, , &\quad & h^\bullet(X,L_2)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_3)&\!=&\!(0, 4, 0, 0) \, , &\quad & h^\bullet(X,L_1 \otimes L_3)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_1 \otimes L_4)&\!=&\!(0,4,0,0) \, , &\quad & h^\bullet(X,L_2 \otimes L_3)&\!=&\!(0,1,1,0) \, , \\[1mm] h^\bullet(X,L_2 \otimes L_4)&\!=&\!(0,1,1,0) \, , &\quad & h^\bullet(X,L_3 \otimes L_4)&\!=&\!(0,3,3,0) \, , \\[1mm] h^\bullet(X,L_3 \otimes L_5)&\!=&\!(0, 4, 0, 0) \, , &\quad & h^\bullet(X,L_1 \otimes L_2^*)&\!=&\!(0, 12, 0, 0) \, , \\[1mm] h^\bullet(X,L_3 \otimes L_1^*)&\!=&\!(0, 12, 0, 0) \, , &\quad & h^\bullet(X,L_1 \otimes L_4^*)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_1 \otimes L_5^*)&\!=&\!(0, 12, 0, 0) \, , &\quad & h^\bullet(X,L_2 \otimes L_3^*)&\!=&\!(0, 9, 9, 0) \, , \\[1mm] h^\bullet(X,L_2 \otimes L_4^*)&\!=&\!(0, 16, 0, 0) \, , &\quad & h^\bullet(X,L_2 \otimes L_5^*)&\!=&\!(0, 4, 0, 0) \, , \\[1mm] h^\bullet(X,L_3 \otimes L_4^*)&\!=&\!(0, 3, 3, 0) \, , &\quad & h^\bullet(X,L_3 \otimes L_5^*)&\!=&\!(0, 4, 0, 0) \, \end{array} \end{equation} \noindent lead to the upstairs spectrum \begin{equation} \!\! 
\begin{array}{l} 4 \ {\bf 10}_1,\,\, 4 \ {\bf 10}_2,\,\, 4 \ {\bf 10}_3, \\[1mm] 4 \ \overline{\bf 5}_{1,3}, \,\, 4 \ \overline{\bf 5}_{1,4}, \,\, \ \overline{\bf 5}{}^H_{2,3}, \,\, \ {\bf 5}^{\overline{H}}_{2,3}, \,\, \ \overline{\bf 5}{}^H_{2,4}, \,\, \ {\bf 5}^{\overline{H}}_{2,4}, \,\, 3 \ \overline{\bf 5}{}^H_{3,4}, \,\, 3 \ {\bf 5}^{\overline{H}}_{3,4}, \,\, 4 \ \overline{\bf 5}_{3,5}, \\[1mm] 12 \ {\bf 1}_{1,2}, \,\, 12 \ {\bf 1}_{3,1}, \,\, 4 \ {\bf 1}_{1,4}, \,\, 12 \ {\bf 1}_{1,5}, \,\, 9 \ {\bf 1}_{2,3}, \,\, 9 \ {\bf 1}_{3,2}, \,\, 16 \ {\bf 1}_{2,4}, \,\, 4 \ {\bf 1}_{2,5}, \,\, 3 \ {\bf 1}_{3,4}, \,\, 3 \ {\bf 1}_{4,3}, \,\, 4 \ {\bf 1}_{3,5} \, . \\[1mm] \end{array} \end{equation} \noindent As before, this spectrum with 12 families in $\overline{\mathbf{5}} \oplus \mathbf{10}$, five $\mathbf{5}$--$\overline{\mathbf{5}}$ Higgs pairs and $SU(5)$-singlets leads to a three-family standard model after the quotient by $\Gamma = \mathbb{Z}_2 \times \mathbb{Z}_2$. We are interested in the following superpotential term: \begin{align} W & = \lambda_{I J K} \mathbf{10}^{(I)}_2 \overline{\mathbf{5}}{}^{(J)}_{1,4} \overline{\mathbf{5}}{}^{(K)}_{3,5} \, . \end{align} \noindent The associated harmonic forms \begin{equation} \!\!\!\!\!\!\! 
\resizebox{0.935\hsize}{!}{$ \begin{array}{ll} \,\,\, 4 \ \mathbf{10}_2 \rightarrow K_1 = L_2 = \mathcal{O}_X(0,1,-2,0,1)\ , & \hat{\nu}_1=\sigma_3^{-2} \tilde{P}_{(0,1,-2,0,1)} d\bar{\mu}_3 \ , \\[1mm] \,\,\, 4 \ \overline{\mathbf{5}}_{1,4} \rightarrow K_2=L_1 \otimes L_4 = \mathcal{O}_X(1,-2,1,-1,1)\ , & \hat{\omega}_2= \sigma_2^{-3} \sigma_4^{-2} \tilde{Q}_{(1, -3, 0, -2, 0)} d\bar{\mu}_2 \wedge d\bar{\mu}_4 \ , \\[1mm] \,\,\, 4 \ \overline{\mathbf{5}}_{3,5} \rightarrow K_3 = L_3 \otimes L_5 = \mathcal{O}_X(-1,1,1,1,-2)\ , & \hat{\omega}_3= \sigma_1^{-3} \sigma_5^{-3} \tilde{R}_{(-3, 0, 0, 0, -3)} d\bar{\mu}_1 \wedge d\bar{\mu}_5 \ , \end{array}$} \end{equation} \noindent where $\hat{\nu}_1 \in H^1(\mathcal{A},\mathcal{K}_1)$, $\hat{\omega}_2 \in H^2(\mathcal{A},\mathcal{N}^* \otimes \mathcal{K}_2)$ and $\hat{\omega}_3 \in H^2(\mathcal{A},\mathcal{N}^* \otimes \mathcal{K}_3)$ contain the polynomials \begin{align} \tilde{P} & = p_0 y_0 w_0 + p_1 y_1 w_0 + p_2 y_0 w_1 + p_3 y_1 w_1 \, , \notag \\ \tilde{Q} & = q_0 x_0 \overline{y}_0 + q_1 x_1 \overline{y}_0 + q_2 x_0 \overline{y}_1 + q_3 x_1 \overline{y}_1 \, , \\ \tilde{R} & = r_0 \overline{x}_0 \overline{w}_0 + r_1 \overline{x}_1 \overline{w}_0 + r_2 \overline{x}_0 \overline{w}_1 + r_3 \overline{x}_1 \overline{w}_1 \, . \notag \end{align} \noindent From Eq.~\eqref{result122}, this leads to the upstairs Yukawa couplings \begin{align} \mu(\tilde{P},\tilde{Q},\tilde{R}) = & \; p_0 q_0 r_0 + p_0 q_1 r_1 + p_1 q_2 r_0 + p_1 q_3 r_1 \, + \notag \\ & \; p_2 q_0 r_2 + p_2 q_1 r_3 + p_3 q_2 r_2 + p_3 q_3 r_3 \, . \end{align} \noindent Under the standard model group, the relevant part of the upstairs spectrum branches as $\mathbf{10}_2 \rightarrow (Q,u,e)_2$, $\overline{\mathbf{5}}_{1,4} \rightarrow (d,L)_{1,4}$, $\overline{\mathbf{5}}_{3,5} \rightarrow (T,H)_{3,5}$. We choose the same Wilson line, $\chi_2 = (1, 1)$ and $\chi_3 = (0, 0)$, as in Section~\ref{sectionexample113}, which then leads to the same multiplet charges as in Eq.~\eqref{charges}.
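As in the previous example, the upstairs expression for $\mu(\tilde{P},\tilde{Q},\tilde{R})$ can be cross-checked symbolically, by converting each unbarred coordinate in $\tilde{P}$ and $\tilde{Q}$ into a derivative with respect to its barred partner (barred coordinates, such as the $\overline{y}_i$ in $\tilde{Q}$, remain multiplicative) and acting on $\tilde{R}$. A short sympy sketch of this check (variable names are ours):

```python
import sympy as sp

p = sp.symbols('p0:4')
q = sp.symbols('q0:4')
r = sp.symbols('r0:4')
x0, x1, y0, y1, w0, w1 = sp.symbols('x0 x1 y0 y1 w0 w1')
xb0, xb1, yb0, yb1, wb0, wb1 = sp.symbols('xb0 xb1 yb0 yb1 wb0 wb1')

P = p[0]*y0*w0 + p[1]*y1*w0 + p[2]*y0*w1 + p[3]*y1*w1
Q = q[0]*x0*yb0 + q[1]*x1*yb0 + q[2]*x0*yb1 + q[3]*x1*yb1
R = r[0]*xb0*wb0 + r[1]*xb1*wb0 + r[2]*xb0*wb1 + r[3]*xb1*wb1

# Only unbarred coordinates are converted into derivatives; barred
# coordinates that appear explicitly (e.g. yb in Q) stay multiplicative.
pairs = [(x0, xb0), (x1, xb1), (y0, yb0), (y1, yb1), (w0, wb0), (w1, wb1)]

def act(poly, target):
    out = 0
    for monom in sp.expand(poly).as_ordered_terms():
        deriv = target
        for unb, barred in pairs:
            deriv = sp.diff(deriv, barred, sp.degree(monom, gen=unb))
        out += monom.subs({u: 1 for u, _ in pairs}) * deriv
    return out

mu = sp.expand(act(P, act(Q, R)))
expected = (p[0]*q[0]*r[0] + p[0]*q[1]*r[1] + p[1]*q[2]*r[0] + p[1]*q[3]*r[1]
            + p[2]*q[0]*r[2] + p[2]*q[1]*r[3] + p[3]*q[2]*r[2] + p[3]*q[3]*r[3])
assert sp.expand(mu - expected) == 0
```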
Once again, we have to keep in mind that the differentials $d\overline{\mu}_i$ carry charge $(1, 1)$ under $\Gamma$. Moreover, we have to remember from Eq.~\eqref{Gammarho} that forms which arise from $\mathcal{O}_{\mathcal{A}}(- \mathbf{q}_2) \otimes \mathcal{K}_i$ transform with an additional overall $\Gamma$-charge $(1, 1)$, while forms from $\mathcal{O}_{\mathcal{A}}(-\mathbf{q}_1)\otimes \mathcal{K}_i$ do not. With these rules, the polynomials corresponding to the downstairs spectrum turn out to be \begin{align} Q_2& \,\, : \,\, \tilde{P} \, \in \, \textrm{Span} (y_0 w_0 + y_1 w_1)\, , \notag \\ d_{1,4}& \,\, : \,\, \tilde{Q} \, \in \, \textrm{Span} (x_0 \overline{y}_0 + x_1 \overline{y}_1) \, , \\ H_{3,5}& \,\, : \,\, \tilde{R}\, \in \, \textrm{Span} (\overline{x}_0 \overline{w}_0 + \overline{x}_1 \overline{w}_1) \, . \notag \end{align} \noindent Then, from Eq.~\eqref{result122}, the downstairs Yukawa coupling for $Q_2 \ d_{1,4} H_{3,5}$ is proportional to \begin{eqnarray} \mu(Q_2, d_{1,4}, H_{3,5}) = \dfrac{1}{4} \left( \partial_{\overline{y}_0}\partial_{\overline{w}_0}+ \partial_{\overline{y}_1} \partial_{\overline{w}_1}\right) \left( \overline{y}_0 \partial_{\overline{x}_0} + \overline{y}_1 \partial_{\overline{x}_1} \right) \left( \overline{x}_0 \overline{w}_0 + \overline{x}_1 \overline{w}_1 \right) = \dfrac{1}{2} \, , \end{eqnarray} \noindent with the constant $c$ given by \begin{eqnarray} c = c_{(-2,0)} c_{(-4,2)} c_{(-3,1)} c_{(-4,2)} c_{(-5,3)} = 1 \cdot \dfrac{1}{3!} \cdot \dfrac{1}{2!} \cdot \dfrac{1}{3!} \cdot \dfrac{1}{4!} \, . \end{eqnarray} \section{Final remarks} \label{chapter3conc} In the previous chapter, methods to calculate the holomorphic Yukawa couplings have been developed for line bundle models on certain special Calabi-Yau manifolds, with a focus on the tetra-quadric Calabi-Yau manifolds defined in the ambient space $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$. 
This chapter generalises these methods to all CICY manifolds and, hence, demonstrates that they are applicable to large classes of Calabi-Yau manifolds. Our methods rely on the presence of an ambient space $\mathcal{A}$ (here a product of projective spaces, although generalisations are likely possible) into which the Calabi-Yau manifold $X$ is embedded at co-dimension $k$. Likewise, the three line bundles $K_i \rightarrow X$ associated to a Yukawa coupling should be restrictions of ambient space line bundles $\mathcal{K}_i \rightarrow \mathcal{A}$. We have shown that, in this situation, the three $K_i$-valued $(0, 1)$-forms $\nu_i \in H^1(X, K_i)$ which enter the expression for the holomorphic Yukawa couplings can each be related to a chain $\hat{\nu}_{i,a}$ of $(0, a)$ ambient space forms, where $a = 1, \ldots, k + 1$. Moreover, from Eq.~\eqref{eq4.11}, the Yukawa couplings can be written in terms of these ambient space forms $\hat{\nu}_{i,a}$. We say that a form $\nu_i$ is of type $\tau_i \in \lbrace 1, \ldots , k + 1\rbrace$ if it originates from the ambient space $(0, \tau_i)$-form $\hat{\nu}_{i,\tau_i} \in H^{\tau_i} (\mathcal{A},\Lambda^{\tau_i-1} \mathcal{N}^* \otimes \mathcal{K}_i)$. This means that the associated chain of ambient space forms breaks down at $a = \tau_i$, that is, $\hat{\nu}_{i,a} = 0$ for $a > \tau_i$. One of our main results is a vanishing theorem which states that the Yukawa coupling between three forms $\nu_i$ vanishes if their associated types satisfy $\tau_1 + \tau_2 + \tau_3 < \textrm{dim}(\mathcal{A})$. Especially for large ambient space dimension $\textrm{dim}(\mathcal{A})$, this implies the vanishing of many Yukawa couplings, since large types $\tau_i$ tend to be rare. This vanishing is not explained by any of the obvious four-dimensional symmetries and, therefore, appears to be topological in nature.
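In practice the vanishing theorem is a one-line check. The sketch below (ours) applies it to the types of examples computed in this chapter, assuming $\textrm{dim}(\mathcal{A}) = 5$ for an ambient space with five projective factors and a co-dimension two embedding:

```python
# Vanishing criterion from the text: the Yukawa coupling of three forms of
# types (tau1, tau2, tau3) vanishes identically if tau1 + tau2 + tau3 < dim(A).
def forced_to_vanish(types, dim_A):
    return sum(types) < dim_A

DIM_A = 5  # assumed ambient dimension for the co-dimension two examples

# The (1,1,3) and (1,2,2) examples saturate the bound, so their couplings
# are allowed to be (and indeed are) non-zero:
assert not forced_to_vanish((1, 1, 3), DIM_A)
assert not forced_to_vanish((1, 2, 2), DIM_A)

# A coupling between three type-1 forms, by contrast, would have to vanish:
assert forced_to_vanish((1, 1, 1), DIM_A)
```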
The nature of this vanishing statement is somewhat puzzling in that it relates a physical property -- the vanishing of Yukawa couplings -- to conditions on unphysical quantities, essentially properties of the ambient space $\mathcal{A}$, which is auxiliary and carries no physical relevance. We do not currently know if the vanishing statement is restricted to Calabi-Yau manifolds which can be embedded into an ambient space in this way or if it extends to all Calabi-Yau manifolds. In the latter case, there should be an “intrinsic” formulation of this statement which only refers to properties of the Calabi-Yau manifold. We have illustrated our methods by computing certain holomorphic Yukawa couplings for three different line bundle standard models on a co-dimension two CICY. The most pressing issue is, of course, the calculation of the matter field K\"ahler potential and, hence, of the physical Yukawa couplings. \chapter{Introduction} The search for a fundamental theory beyond the Standard Model of elementary particles has produced an impressive number of predictions and hypotheses. Perhaps the most important ones are supersymmetry, a symmetry which relates bosons and fermions, and grand unification, which implies the convergence of the three gauge couplings of the Standard Model into a single value at high energies, $M_{\textrm{GUT}}\sim 10^{16} \textrm{ GeV}$. The next step towards creating a Theory of Everything is to incorporate gravity in a quantum framework that is free of ultraviolet divergences and to propose a unified description of the four fundamental interactions. To date, string theory is the most successful attempt towards realising these goals. In superstring theory, point-like particles are superseded by vibrating strings and the space-time is predicted to be ten-dimensional, with the six extra spatial dimensions remaining unobserved, given the scales currently probed. 
Furthermore, the cancellation of ten-dimensional gauge and gravitational anomalies is possible only for two select gauge groups -- $SO(32)$ and $E_8 \times E_8$ -- as was originally demonstrated by Michael Green and John H. Schwarz in 1984~\cite{greenschwarzarticle}. Their discovery led, in the following year, to the development of the heterotic $E_8\times E_8$ string theory by David Gross, Jeffrey Harvey, Emil Martinec and Ryan Rohm~\cite{stringquartet1,stringquartet2,stringquartet0}; it is phenomenologically one of the most promising superstring theories. In the low energy limit, this theory can give rise to $N=1$ supersymmetric models of chiral particles, for which the gauge group is embedded into one $E_8$ factor, while the other $E_8$ is interpreted as the ``hidden sector'', where supersymmetry can be spontaneously broken. The goal of string phenomenology is to investigate under which circumstances the precise configuration of the Standard Model is obtained, so that string theory can be connected to measurable physics and the realm of falsifiable predictions. This could solve many problems that the Standard Model itself seems to present. For example, seemingly arbitrary features such as the number of particle families and the hierarchy of particle masses may arise from the underlying structure of the ten-dimensional theory. In this thesis, we will pursue $E_8 \times E_8$ heterotic string model building by compactifying on Calabi-Yau manifolds~\cite{Candelas:1985en,Strominger:1985it,Witten:1985xc,greene1986}. The motivation for compactifying on such spaces is that we want to preserve $N=1$ supersymmetry at low energies, so that the Higgs mass can be stabilised\footnote{Even though supersymmetry has not been detected at LHC scales, so that the supersymmetric solution to the hierarchy problem is not completely natural anymore, it still helps to bridge much of the hierarchy between the Planck and the electroweak scale.}.
To be precise, we only want $N=1$ and not extended $N \geq 2$ supersymmetry in 4d, because those extended theories cannot produce chiral fermions. In the simplest case where the NS flux is set to zero, the Calabi-Yau manifolds turn out to be the only class of compact manifolds that satisfy the Killing spinor equations in the low energy limit, thus ensuring $N=1$ supersymmetry. Heterotic Calabi-Yau compactification models are well-known in the literature and have been shown to produce the spectrum of the Minimal Supersymmetric Standard Model (MSSM) ~\cite{Braun:2005ux, Braun:2005bw, Braun:2005nv, Bouchard:2005ag, Blumenhagen:2006ux, Blumenhagen:2006wj, Anderson:2007nc, Anderson:2008uw, Anderson:2009mh, Braun:2009qy, Braun:2011ni}. In recent years, a sizeable database of phenomenologically viable models has been produced through an algorithmic scan over large classes of compactifications \cite{Anderson:2011ns,Anderson:2012yf,Anderson:2013xka}. Given that compactifications with the correct spectrum can now be readily engineered, one of the most pressing problems is the calculation of Yukawa couplings for such models. As known from the Standard Model, Yukawa couplings describe interactions between fermions and the Higgs field, thus generating quark and lepton masses when the electroweak symmetry is spontaneously broken. Understanding these couplings as structurally connected to the geometry of the extra dimensions could prove that the masses we encounter in particle physics are not arbitrary, but rather reflect the inner geometry of the Universe. Unfortunately, the calculation of Yukawa couplings for geometric compactifications of the heterotic string is not straightforward, even at the perturbative level, and relatively few techniques and results are known \cite{Strominger:1985ks, Candelas:1987se, greene1, greene2, greene3, braunheovrut, bouchardcvetic, Anderson:2009ge, textures}. 
The task of computing the physical Yukawa couplings for such models can be split up into two steps: the calculation of the holomorphic Yukawa couplings, that is, the couplings in the superpotential, and the calculation of the matter field K\"ahler potential. The former relates to a holomorphic quantity and can, therefore, to some extent be carried out algebraically, as explained in Refs.~\cite{Candelas:1987se, Anderson:2009ge}. However, the matter field K\"ahler potential is non-holomorphic and its algebraic calculation does not seem to be possible -- rather, it is likely that methods of differential geometry have to be used.\footnote{See Refs.~\cite{delaossahardy, candelasmetric, mcoristeffective} for recent progress in this direction.} At present the matter field K\"ahler potential has not been worked out explicitly for any case other than the standard embedding model (where it can be expressed in terms of the K\"ahler and complex structure moduli space metrics). The purpose of this thesis is to expand the knowledge that we have about holomorphic Yukawa couplings in heterotic string theory, by proposing a method to compute these couplings for a specific class of string models, namely for line bundle models on Complete Intersection Calabi-Yau (CICY) manifolds. The method is presented in Chapters~\ref{tetraquadricchapter} and \ref{chaptern>1codimension} in a generalised form and then applied to specific examples. As a secondary objective, we also develop in Chapter~\ref{kahlerchapter} a method for calculating the matter field K\"ahler potential in a very restricted scenario, where sufficiently large flux can lead to localisation of the matter field wave function. It still remains to be seen whether such an approach can meet all the phenomenological requirements. Nevertheless, we hope that our techniques will eventually lead to a framework where physical Yukawa couplings are reliably extracted from various geometrical models. 
The structure of the thesis is as follows. In Chapter~\ref{odyssey}, we lay down the background material, starting with some relevant concepts from the Standard Model and the physics that is expected beyond it (supersymmetry, grand unification). Later on, we present the $E_8 \times E_8$ heterotic string theory and the mathematical apparatus that is used for compactification. The chapter then explains how to recover the four-dimensional spectrum and Lagrangian terms from the ten-dimensional theory, in order to enable a comparison with the observable physics. In Chapter~\ref{tetraquadricchapter}, we develop techniques, based on differential geometry, to compute holomorphic Yukawa couplings for heterotic line bundle models on Calabi-Yau manifolds defined as hypersurfaces in products of projective spaces. Our methods are based on constructing the required bundle-valued forms explicitly and evaluating the relevant integrals over the projective ambient space. It is shown that the rank of the Yukawa matrix can decrease at specific loci in complex structure moduli space. In particular, we compute the up Yukawa coupling and the singlet-Higgs-lepton trilinear coupling in the heterotic standard model described in Ref.~\cite{Buchbinder:2014qda}. In Chapter~\ref{chaptern>1codimension}, we generalise the results of Chapter~\ref{tetraquadricchapter}, by applying similar techniques to higher co-dimension Complete Intersection Calabi-Yau manifolds. A vanishing theorem, which we prove, implies that certain Yukawa couplings allowed by low-energy symmetries are zero due to topological reasons. To illustrate our methods, we calculate Yukawa couplings for $SU(5)$-based standard models on a co-dimension two complete intersection manifold. In Chapter~\ref{kahlerchapter}, we propose an analytic method to calculate the matter field K\"ahler metric for line bundle models with large internal gauge flux.
In this case, the integrals involved in the calculation localise around certain points on the compactification manifold and, hence, can be evaluated approximately without precise knowledge of the Ricci-flat Calabi-Yau metric. In a final step, we show how this local result can be expressed in terms of the global moduli of the Calabi-Yau manifold. The method is illustrated for the family of Calabi-Yau hypersurfaces embedded in $\mathbb{P}^1 \times \mathbb{P}^3$ and we obtain an explicit result for the matter field K\"ahler metric in this case. The work presented in this thesis is drawn from three research papers. More precisely, Chapter~\ref{tetraquadricchapter} is based on \begin{itemize} \item S.~Blesneag, E.~I.~Buchbinder, P.~Candelas and A.~Lukas, \textit{Holomorphic Yukawa Couplings in Heterotic String Theory}, JHEP {\bf 1601} (2016) 152.~\cite{paper1} \end{itemize} Chapter~\ref{chaptern>1codimension} is based on \begin{itemize} \item S. Blesneag, E. I. Buchbinder and A. Lukas, \textit{Holomorphic Yukawa Couplings for Complete Intersection Calabi-Yau Manifolds}, JHEP {\bf 1701} (2017) 119.~\cite{paper2} \end{itemize} Chapter~\ref{kahlerchapter} is based on \begin{itemize} \item S. Blesneag, E. I. Buchbinder, A. Constantin, A. Lukas, E. Palti, \textit{Matter Field K\"ahler Metric in Heterotic String Theory from Localisation}, JHEP {\bf 1804} (2018) 139.~\cite{paper3} \end{itemize} The Appendices~\ref{app:Kbundle},~\ref{coboundarymapappendix},~\ref{appendixPn},~\ref{appendixboundary} are also based on materials from the above-mentioned research papers and contain more technical background to support the calculations. \chapter{A String Theory Odyssey} \label{odyssey} \noindent The purpose of this chapter is to provide an overview of the background that is required for computing Yukawa couplings in heterotic string theory. 
We start with the Standard Model and what motivated physicists to search for a theory beyond it, then we continue with a review of supersymmetry and grand unification, culminating in the $E_8 \times E_8$ heterotic string theory and its ten-dimensional $N=1$ supergravity limit. At that stage, it will become evident that several tools from topology and complex geometry are needed. Therefore, we will provide a summary of the mathematics that is used throughout the thesis. Finally, the last part of the chapter is dedicated to the process of compactification, in an attempt to reconnect the high-energy ten-dimensional theory back to the four-dimensional physics from which we started. However, the search for the correct model is not free of phenomenological problems, the most obvious one being the presence of gauge-neutral moduli scalars in the spectrum of compactification. Such fields cause vacuum degeneracy, so they need to be stabilised (see, for example, Refs.~\cite{aglomoduli, aglomoduli2}). Moreover, correlating physical parameters to quantities from the 10d theory is particularly difficult, given that the latter depend on the unknown geometry of the extra dimensions. As it turns out, information about the spectrum and the \textit{holomorphic} Yukawa couplings can be extracted from less specific, quasi-topological properties of the internal manifold, while still leaving field normalisation unresolved. Proving this will become our main goal by the end of the chapter. \section{Yukawa Couplings in the Standard Model} \label{smsection} \noindent Formulated in the early 1970s as a quantum field theory with symmetry group $G_{\textrm{SM}} = SU(3)\times SU(2)_L \times U(1)_Y$, the Standard Model has proven to be extremely successful for describing particle physics at energies up to the TeV scale. 
Its remarkable performance, demonstrated through decades of experimental testing, lies in its ability to accurately predict the interactions of all known elementary particles under three of the four fundamental forces -- strong, weak and electromagnetic. More precisely, the $SU(3)$ part of the gauge group describes Quantum Chromodynamics, or the theory of the strong interaction, while the $SU(2)_L \times U(1)_Y$ part is responsible for the electroweak theory. In the Standard Model, these interactions are realised through the exchange of $12$ vector bosons ($8$ gluons and $4$ electroweak gauge bosons), which are in the adjoint representation of their corresponding gauge groups \begin{equation} \begin{array}{lllll} \label{smgaugerep} G^{1,...,8}=(\mathbf{8},\mathbf{1})_0, & \quad\quad & W^{1,2,3}=(\mathbf{1},\mathbf{3})_0, & \quad\quad & B=(\mathbf{1},\mathbf{1})_0 , \end{array} \end{equation} \noindent where the notation $(\mathbf{a},\mathbf{b})_c$ corresponds to $(SU(3),SU(2)_L)$ representations with the subscript denoting the $U(1)_Y$ charge. Matter is described by three generations of Weyl fermions (quarks and leptons), each carrying a representation of $G_{\textrm{SM}}$ \begin{align} \label{representations} Q^i &= \begin{pmatrix} u^i_L\\ d^i_L\end{pmatrix} = (\mathbf{3},\mathbf{2})_{1/6}, \quad \,\ \,\, u^i = (u^i_R)^c=(\overline{\mathbf{3}},\mathbf{1})_{-2/3}, \quad \,\, d^i = (d^i_R)^c=(\overline{\mathbf{3}},\mathbf{1})_{1/3}, \notag \\[1.5mm] L^i & = \begin{pmatrix} \nu_L^i \\ e_L^i \end{pmatrix} = (\mathbf{1},\mathbf{2})_{-1/2}, \quad \,\, e^i = (e^i_R)^c =(\mathbf{1},\mathbf{1})_1, \end{align} \noindent where the generation number is labelled by $i = 1,2,3$ and, by convention, charge conjugation is applied to the right-handed components in order to represent everything as left-handed. From these expressions one can see that the left- and right-handed Weyl fermions transform differently under the electroweak gauge group, which means that the Standard Model is a chiral theory.
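The hypercharge assignments in Eq.~\eqref{representations} are tightly constrained: as a quick numerical aside (ours), one can verify with exact fractions that, within each generation, they satisfy the anomaly cancellation conditions discussed at the end of this section:

```python
from fractions import Fraction as F

# One generation of left-handed Weyl fermions: (multiplicity, hypercharge Y),
# with multiplicity = dimension of the (SU(3), SU(2)_L) representation.
generation = [
    (6, F(1, 6)),   # Q   = (3, 2)_{1/6}
    (3, F(-2, 3)),  # u^c = (3bar, 1)_{-2/3}
    (3, F(1, 3)),   # d^c = (3bar, 1)_{1/3}
    (2, F(-1, 2)),  # L   = (1, 2)_{-1/2}
    (1, F(1)),      # e^c = (1, 1)_{1}
]

tr_Y = sum(n * y for n, y in generation)      # mixed gauge-gravity anomaly
tr_Y3 = sum(n * y**3 for n, y in generation)  # [U(1)_Y]^3 anomaly

# U(1)_Y [SU(2)]^2: sum of Y over doublets (three colours of Q, plus L)
su2_sum = 3 * F(1, 6) + F(-1, 2)
# U(1)_Y [SU(3)]^2: sum of Y over triplets (two isospin copies of Q, u^c, d^c)
su3_sum = 2 * F(1, 6) + F(-2, 3) + F(1, 3)

assert tr_Y == tr_Y3 == su2_sum == su3_sum == 0
```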
The Lagrangian used to encode all the information about the dynamics of the particles contains the following kinetic terms \begin{align} \mathcal{L}^{\textrm{SM}}_{\textrm{kin}} = & - \dfrac{1}{2} \textrm{Tr} \left[G_{\mu \nu} G^{\mu \nu}\right] - \dfrac{1}{2} \textrm{Tr} \left[W_{\mu \nu} W^{\mu \nu}\right]- \dfrac{1}{4} F_{\mu \nu} F^{\mu \nu}+ \notag \\ & + i \sum_{i=1}^3 \left[ \overline{Q}{}^i \slashed{D} Q^i + \overline{u}{}^i \slashed{D} u^i + \overline{d}{}^i \slashed{D} d^i+ \overline{L}{}^i \slashed{D} L^i+ \overline{e}{}^i \slashed{D} e^i\right], \end{align} \noindent where $G_{\mu\nu}$, $W_{\mu\nu}$ and $F_{\mu\nu}$ are the field strengths of the three gauge fields and $\slashed{D} = \overline{\sigma}{}^{\mu} D_{\mu}$ is the Dirac operator. As a consequence of gauge invariance, no explicit mass terms such as $m_{\psi} \overline{\psi}_L \psi_R$ or $ m^2_W W^{\mu} W_{\mu} $ are permitted, so the only way fermions and gauge bosons can acquire mass is through the spontaneous breaking of the electroweak symmetry. This is realised through the Higgs mechanism, with the introduction of a scalar Higgs field $H = (H^{0}, H^{-})^T$, which is a $(\mathbf{1},\mathbf{2})_{-1/2}$ representation of $G_{\textrm{SM}}$ and has the potential energy \begin{eqnarray} V(H) = - \mu^2 H^\dagger H + \lambda (H^\dagger H)^2. \end{eqnarray} \noindent Because of its non-vanishing vacuum expectation value $\langle H \rangle = (v/\sqrt{2},0)^T$, the Higgs field breaks $SU(2)_L \times U(1)_Y \rightarrow U(1)_{\textrm{em}}$ at a scale of around $v = \mu/\sqrt{\lambda}= 246 $ GeV, such that the well-known electric charge is given by the combination $Q_{\textrm{em}} = T_3 + Y$, where $T_3 = \pm 1/2$ is the weak isospin of $SU(2)_L$ and $Y$ is the weak hypercharge. The kinetic term of the Higgs, $D^{\mu} H^{\dagger} D_{\mu} H$, is responsible for electroweak boson mass terms in the effective Lagrangian (three massive bosons $W^{\pm}$, $Z$ and one massless photon). 
More explicitly, the three would-be Goldstone bosons of the spontaneously broken $SU(2)_L \times U(1)_Y$ symmetry are ``eaten up'' or absorbed to become the longitudinal components of the three massive gauge bosons. On the other hand, fermions acquire mass from interactions with the Higgs, described by the Yukawa terms \begin{eqnarray} \label{smyukawa} \mathcal{L}^{\textrm{SM}}_{\textrm{Yuk}} = Y_u^{i j} Q^a_{i} u_{j} \bar{H}_a + Y_d^{i j} Q^a_{i} d_{j} \epsilon_{ab} H^b + Y_e^{i j} L^a_{i} e_{j} \epsilon_{ab}H^b + \textrm{h.c.} \end{eqnarray} \noindent where ``h.c.'' stands for ``Hermitian conjugate'', $a,b=1,2$ label the $SU(2)_L$ components and $\epsilon_{a b}$ is the Levi-Civita tensor (see also Ref.~\cite{srednicki}). As one can see from this formula, the Yukawa couplings $Y^{i j}_{u,d,e}$ dictate the structure of the fermionic mass matrices $m^{ij}_f = Y^{ij}_f v / \sqrt{2}$ in the low energy limit. These matrices are expected to be non-diagonal, since weak interaction eigenstates act as a mixture of mass eigenstates in both the quark and lepton sectors. For quarks, the mixing is realised through the unitary CKM (Cabibbo--Kobayashi--Maskawa) matrix, which is parametrised by three angles and one CP-violating phase. In the case of the leptons, a similar PMNS (Pontecorvo--Maki--Nakagawa--Sakata) matrix is introduced; however, it is important to note that in this case the Standard Model has to be extended to include mechanisms that explain neutrino mass, usually by assuming the existence of heavy sterile right-handed neutrinos, $N^i$. In the Minimally Extended Standard Model, for example, the $N^i$ generate effective dimension-five Weinberg operators $y^2 L \bar{H} \bar{H} L /M_N$, due to neutrino Yukawa interactions $y^{ij} L_i N_j \bar{H}$, thus inducing small Majorana neutrino masses $m_{\nu} \sim y^2 v^2 /M_N$, where $M_N \gg v$ is the mass scale of the $N^i$. This is just one of the many possible realisations of the so-called ``seesaw mechanism''.
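To get a feel for the scales involved in the seesaw relation $m_{\nu} \sim y^2 v^2 / M_N$, a back-of-the-envelope evaluation (the values $y \approx 1$ and $M_N \approx 10^{14}$ GeV are our illustrative assumptions, not fixed by the discussion above):

```python
# Seesaw estimate m_nu ~ y^2 v^2 / M_N, working in GeV.
y = 1.0     # neutrino Yukawa coupling of order one (assumption)
v = 246.0   # electroweak vev in GeV
M_N = 1e14  # heavy sterile-neutrino mass scale in GeV (assumption)

m_nu_GeV = y**2 * v**2 / M_N
m_nu_eV = m_nu_GeV * 1e9

print(f"m_nu ~ {m_nu_eV:.2f} eV")
```

With these inputs the estimate lands below an eV, the right ballpark for the observed neutrino mass scale, despite $M_N$ lying far above collider energies.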
Finally, one remarkable feature of the Standard Model is the automatic cancellation of gauge anomalies. These anomalies, which are violations of gauge symmetries under quantum corrections, could in principle arise from triangle loops of chiral fermions contributing to triple gauge-boson vertices. For example, for the $[U(1)_Y]^3$ coupling, the anomaly is proportional to $\textrm{Tr}(Y^3)$, while for the non-abelian $[SU(n)]^{3}$ coupling it is proportional to $\textrm{Tr} (T_a \lbrace T_b, T_c \rbrace)$, where the $T_a$ are the generators of $SU(n)$. Further anomalies arising from $U(1)_Y [SU(n)]^2$ triangles are written as summations $\textrm{Tr}(Y)_L$ over the left-hand fermions ($n=2$) and $\textrm{Tr}(Y)_q$ over the quarks ($n=3$), while mixed anomalies, which involve both gauge and gravitational interactions, have a total factor of $\textrm{Tr}(Y)$. It can be shown that the structure of the Standard Model, as described by \eqref{representations}, ensures that all these anomalies are canceled. \section{Beyond the Standard Model} \noindent Despite its experimental success, the Standard Model cannot be the complete description of our Universe because of various compelling reasons. First of all, it neglects gravity, thus acting as an effective theory at energies much lower than the Planck scale. The established theory of gravity, General Relativity, is in fact incompatible with the quantum framework of the Standard Model and is proven to be non-renormalisable. Moreover, the Standard Model fails to provide an explanation for the existence of dark matter and dark energy, which constitute about 22\% and 74\% of the Universe. It fails to explain why the cosmological constant is so small ($\Lambda \sim 10^{-12} \textrm{eV}^4 $) compared to the quantum field theory estimation which is $120$ orders of magnitude higher. 
Another factor that makes the Standard Model seem incomplete is the fact that its dynamics depends on $19$ free parameters (the three gauge couplings, the nine quark and lepton masses, not including neutrinos, the three CKM mixing angles, the CKM CP-violating phase $\delta$, the CP-violating strong-interaction parameter $\theta_{\textrm{QCD}}$, the Higgs vev $v$ and the Higgs self-coupling $\lambda$), all of which have to be derived from experiment, without any theoretical insight into their origin. If one were to accommodate neutrino oscillations, the resulting model would require nine additional parameters (masses, mixing angles and CP-violating phases), thus complicating the problem even further \cite{johnellis}. Overall, one does not know why the $U(1)_Y$ charges take the values that they do, or why there are precisely three generations of quarks and leptons. Further problems come from what is called ``naturalness'', the idea that physical parameters should naturally be of the same order; otherwise, a reasonable explanation has to be given for their hierarchy. For example, the large discrepancy between the weak force and gravity constitutes the principal hierarchy problem of the Standard Model. It is unknown why the electroweak scale $M_W \sim 10^2 \textrm{ GeV}$ is so many orders of magnitude smaller than the Planck scale $M_{\textrm{P}}\sim 10^{19}\textrm{ GeV}$. Given the fact that the Higgs mass can receive quantum loop corrections which are proportional to the cut-off scale, so that $\Delta m_H^2 = \mathcal{O}(\Lambda_{\textrm{cutoff}}^2)$, one would expect $m_H$ to be much closer to $M_{\textrm{P}}$, unless an underlying mechanism ensures that the sum of all these corrections vanishes. It is also unclear why the strong CP-violating angle $\theta_{\textrm{QCD}}$ is so small compared to the CKM CP-violating phase. Experimental limits on the neutron electric dipole moment set $\vert\theta_{\textrm{QCD}}\vert < 10^{-10}$, which constitutes the strong CP problem.
Last but not least, it is a mystery why the SM particles seem to obey a hierarchical pattern, with masses varying from around $1$ eV for the neutrinos and $0.5$ MeV for the electron to $173$ GeV for the top quark. As a consequence of these unanswered questions, physicists are searching for extensions of the Standard Model at higher, yet unexplored energies. \subsection{Supersymmetry and the MSSM} \label{susymssm} \noindent Supersymmetry (SUSY) is the only possible non-trivial\footnote{By non-trivial we mean an extension that is not simply a direct product between the Poincar\'e group and an internal group.} extension of the Poincar\'e algebra of special relativity, as inferred from the Haag-\L{}opusza\'{n}ski-Sohnius generalisation \cite{haaglopuszanski} of the Coleman-Mandula no-go theorem \cite{colemanmandula}. As such, supersymmetry looks promising as a possible symmetry of nature. Being a graded Lie algebra, it is realised through the introduction of $N$ spinorial generators $Q^A_{\alpha}$, $A=1,...,N$, and their Hermitian conjugates $\overline{Q}{}^A_{\dot{\alpha}}$, which satisfy certain anti-commutation relations. However, in all future discussions, we will only consider the simple case, $N=1$, for which the algebra is given by \begin{eqnarray} \lbrace Q_{\alpha}, \overline{Q}_{\dot{\alpha}} \rbrace = 2 \sigma_{\alpha \dot{\alpha}}^{\mu} P_{\mu}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \lbrace Q_{\alpha}, Q_{\beta} \rbrace = 0 , \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \lbrace \overline{Q}_{\dot{\alpha}}, \overline{Q}_{\dot{\beta}} \rbrace=0, \end{eqnarray} \noindent where $P_{\mu}$ is the generator of spacetime translations and $\alpha$, $\dot{\alpha}=1,2$ are spinor indices. In the framework of supersymmetry, every particle has a superpartner whose spin differs by half an integer, so that a fermion is transformed into a boson and vice versa, through the action of the supersymmetric generator. 
Schematically, \begin{eqnarray} \delta \phi \sim \overline{\epsilon} \psi, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \delta \psi \sim\epsilon \partial \phi , \end{eqnarray} \noindent for a boson field $\phi$ and a fermion field $\psi$, where $\epsilon^{\alpha}$ is an infinitesimal, anticommuting, constant spinor, parametrising the transformation. Particles which are superpartners of each other are grouped together into supermultiplets (irreducible representations of the SUSY algebra) and have the same mass, although after supersymmetry is broken, some of them can become significantly heavier, thus explaining why they are not observed. There are different types of supermultiplets: those with spin $(1/2,0)$ are called chiral multiplets, because they contain chiral fermions and their superpartners, while those with spin $(1,1/2)$ are called vector multiplets, containing vector bosons and their superpartners. Any given supermultiplet can be written as a superfield, i.e. a function of the spacetime coordinates $x^{\mu}$ and some fermionic dimensions $\lbrace \theta^{\alpha}$, $\overline{\theta}{}^{\dot{\alpha}}\rbrace_{\alpha,\dot{\alpha}=1,2}$ called Grassmann numbers. Together, the coordinates $(x^{\mu}, \theta^{\alpha}$, $\overline{\theta}{}^{\dot{\alpha}})$ parametrise the eight-dimensional superspace. 
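These transformations close into the algebra above: acting twice on the scalar and antisymmetrising (a schematic sketch; the precise numerical factors depend on conventions), one finds

```latex
[\delta_{\epsilon_1}, \delta_{\epsilon_2}] \, \phi \, \sim \, \left( \epsilon_1 \sigma^{\mu} \overline{\epsilon}_2 - \epsilon_2 \sigma^{\mu} \overline{\epsilon}_1 \right) \partial_{\mu} \phi ,
```

i.e.\ a spacetime translation with parameter $a^{\mu} = \epsilon_1 \sigma^{\mu} \overline{\epsilon}_2 - \epsilon_2 \sigma^{\mu} \overline{\epsilon}_1$, mirroring the relation $\lbrace Q_{\alpha}, \overline{Q}_{\dot{\alpha}} \rbrace = 2 \sigma^{\mu}_{\alpha \dot{\alpha}} P_{\mu}$ at the level of the fields.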
With these notations, the action governing the dynamics of $n$ chiral superfields $\Phi^I = (\phi^I,\psi^I)$, $I=1,...,n$, in a 4d $N=1$ SUSY theory, is generally expressed as \begin{eqnarray} S = \int d^4 x \int d^2 \theta d^2 \overline{\theta} K(\Phi^I, \Phi^I{}^{\dagger}) + \left( \int d^4 x \int d^2 \theta W(\Phi^I) + \textrm{h.c.} \right), \end{eqnarray} \noindent where $W$ is the superpotential -- a holomorphic function providing Yukawa and mass terms \begin{eqnarray} \label{generalsuperpotential} W = \lambda_{I J K} \Phi^I \Phi^J \Phi^K + M_{I J} \Phi^I \Phi^J, \end{eqnarray} \noindent while $K$ is the K\"ahler potential, a general real function, which gives rise to kinetic terms of the form $G_{IJ} \partial^{\mu} \phi^{I*} \partial_{\mu} \phi^J$ and $i G_{IJ} \overline{\psi}{}^I \slashed{D} \psi^J $, where $G_{IJ} = \tfrac {\partial^2 K} {\partial \Phi^{I \dagger} \partial \Phi^J}$. It is important to note here that the Yukawa couplings $\lambda_{I J K}$ of the superpotential and the physical Yukawa couplings $Y_{IJK}$ of Eq.~\eqref{smyukawa} can be identified only if the kinetic terms are brought to a canonical form, through an appropriate field redefinition, $\Phi^I \rightarrow \tilde{\Phi}^I = U^I{}_J \Phi^J$, where $U^I{}_J$ is a unitary matrix. Otherwise, if for example lack of knowledge about $G_{IJ}$ prevents such a redefinition from being applied, $\lambda_{I J K}$ are to be referred to as \textbf{holomorphic} Yukawa couplings, to distinguish them from $Y_{IJK}$. Returning to the supersymmetric action, the term corresponding to vector supermultiplets reads \begin{eqnarray} \label{gaugesusyaction} S = \int d^4 x \int d^2 \theta f_{ab}(\Phi^I)\mathcal{W}^a \mathcal{W}^b + \textrm{h.c.} \, , \end{eqnarray} \noindent where $\mathcal{W}^a$ is the “field strength” chiral superfield and $f_{ab}$ is the holomorphic gauge kinetic function, with indices $a,b$ running over the adjoint representations of gauge groups in the theory. 
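For orientation, with a canonical K\"ahler potential the superpotential of Eq.~\eqref{generalsuperpotential} produces the component-field interactions (schematically, omitting numerical factors):

```latex
\mathcal{L} \, \supset \, - \dfrac{1}{2} \dfrac{\partial^2 W}{\partial \phi^I \partial \phi^J} \, \psi^I \psi^J + \textrm{h.c.} - \sum_I \left| \dfrac{\partial W}{\partial \phi^I} \right|^2 ,
```

so the cubic term $\lambda_{IJK} \Phi^I \Phi^J \Phi^K$ yields Yukawa interactions of the type $\lambda_{IJK} \psi^I \psi^J \phi^K$ together with quartic scalar couplings, while the quadratic term $M_{IJ} \Phi^I \Phi^J$ yields fermion and scalar mass terms.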
\vspace{3mm} One of the main benefits of supersymmetry is that it solves the fine-tuning problem of the Higgs mass, by canceling UV divergences pair by pair, since bosonic and fermionic superpartners contribute with opposite signs. It is for this reason that the study of supersymmetry is so relevant for the Standard Model. Naturally preserving the electroweak scale at $10^2$ GeV means that SUSY has to be encountered at energies of several TeV \cite{cohen}. The simplest extension, involving the smallest field content, is called the Minimal Supersymmetric Standard Model (MSSM). In this model, every SM particle is interpreted as an element of an $N=1$ supermultiplet along with its yet undiscovered partner, so that $Q^i$, $u^i$, $d^i$ are redefined to represent quark-squark pairs, $L^i$ and $e^i$ denote lepton-slepton pairs, while $G$, $W$ and $B$ are the gauge boson-gaugino pairs. The fermionic partner of the Higgs, the higgsino, would upset the anomaly cancellation conditions $\textrm{Tr}(Y) \textrm{, } \textrm{Tr}(Y^3) \stackrel{!}{=} 0 $ and it is for this reason that supersymmetry has not one, but two Higgs supermultiplets $H_u = (H_u^{+}, H_u^0)$ and $H_d = (H_d^0, H_d^{-})$, with opposite hypercharges that cancel the anomalies. The MSSM superpotential containing the Yukawa couplings and the Higgs mass term reads \begin{eqnarray} \label{mssmyukawa} W = Y_u^{i j} Q_i u_{j} H_u + Y_d^{i j} Q_i d_{j} H_d + Y_e^{i j} L_i e_{j} H_d + \mu H_u H_d . \end{eqnarray} \noindent The holomorphy of the superpotential prevents it from being renormalised at any order in perturbation theory \cite{grisaru1979,seiberg1993}. This is because all perturbative corrections are real (non-holomorphic), therefore they cannot modify a holomorphic quantity such as $W$. 
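The anomaly bookkeeping quoted above can be verified with exact rational arithmetic. The snippet below is an illustrative check (using the standard hypercharge assignments, with all states written as left-handed Weyl fermions): the SM traces vanish per generation, a single higgsino doublet would spoil them, and the pair $(\tilde{H}_u, \tilde{H}_d)$ restores the cancellation.

```python
from fractions import Fraction as F

# (multiplicity, hypercharge Y) for one SM generation, all states
# written as left-handed Weyl fermions:
# Q (3 colours x 2), u^c (3), d^c (3), L (2), e^c (1)
sm_gen = [(6, F(1, 6)), (3, F(-2, 3)), (3, F(1, 3)), (2, F(-1, 2)), (1, F(1))]

def tr_Y(states, power=1):
    """Sum of n * Y^power over the listed Weyl fermions."""
    return sum(n * y**power for n, y in states)

# Per-generation SM traces: both vanish, so all three generations do too.
print(tr_Y(sm_gen), tr_Y(sm_gen, 3))               # 0 0

# A single higgsino doublet (Y = +1/2) spoils the cancellation...
h_u = [(2, F(1, 2))]
print(tr_Y(sm_gen + h_u), tr_Y(sm_gen + h_u, 3))   # 1 1/4

# ...but the second doublet with opposite hypercharge restores it.
h_d = [(2, F(-1, 2))]
mssm = sm_gen + h_u + h_d
print(tr_Y(mssm), tr_Y(mssm, 3))                   # 0 0
```

The same bookkeeping extends directly to the $\textrm{Tr}(Y)_L$ and $\textrm{Tr}(Y)_q$ traces by restricting the list of states.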
Another consequence of the holomorphicity of $W$ is that Yukawa terms involving the conjugate of the Higgs field are no longer permitted, as they were in Eq.~\eqref{smyukawa}, therefore the introduction of two Higgs superfields $H_u$ and $H_d$ is necessary to ensure all matter fields receive a mass \cite{wessandbagger}. As for the mass parameter $\mu$ responsible for electroweak symmetry breaking, it leads to a naturalness problem (the “$\mu$-problem”), when one tries to understand why $\mu \ll M_\textrm{P}$. In principle, the MSSM superpotential could also include other renormalisable terms of the form \begin{eqnarray} \label{protondecayterms} W_{\textrm{RV}} = \beta^{i j k} L_i L_j e_{k} + \beta^{'}{}^{i j k} L_i Q_j d_{k} + \beta^{''}{}^{i j k} u_{i} d_{j} d_{k} + \alpha^i L_i H_u , \end{eqnarray} \noindent where $\alpha$, $\beta$, $\beta'$ and $\beta''$ are the couplings of the interactions; however, these terms would violate lepton and baryon number, thus allowing fast proton decay, so they must be forbidden. One way of achieving this is by imposing a discrete global symmetry $R = (-1)^{3(B-L)+2s}$, which is known as R-parity, where $s$ is the spin of the particle and $B$ and $L$ are baryon and lepton numbers. For SM particles $R=1$, while for their superpartners $R=-1$. If R-parity is conserved, the lightest superpartner (LSP) is a stable particle and is in fact considered a candidate for dark matter. Typically the LSP is deemed to be the neutralino, a linear combination of the neutral electroweak gauginos and the neutral higgsinos. \vspace{3mm} On a final note, if supersymmetry is indeed a symmetry of nature it is clear that it must be broken at a certain energy scale $M_{\textrm{SUSY}}$, given that none of the predicted superpartners have been observed. 
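The selection rule can be tracked term by term. At the superfield level, R-parity is equivalent to matter parity $(-1)^{3(B-L)}$, under which matter supermultiplets carry $-1$ and Higgs supermultiplets $+1$; a superpotential term survives only if its total parity is $+1$:

```latex
\underbrace{Q u H_u, \quad Q d H_d, \quad L e H_d}_{(-1)(-1)(+1) \, = \, +1 \;\Rightarrow\; \textrm{allowed}} \, ; \qquad \underbrace{L L e, \quad L Q d, \quad u d d}_{(-1)^3 \, = \, -1 \;\Rightarrow\; \textrm{forbidden}} \, ; \qquad \underbrace{L H_u}_{(-1)(+1) \, = \, -1 \;\Rightarrow\; \textrm{forbidden}} \, .
```

Every term of Eq.~\eqref{mssmyukawa} is thus kept, while every term of Eq.~\eqref{protondecayterms} is removed.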
In general, SUSY breaking translates into the requirement that certain auxiliary fields $F_i = - \partial W/\partial \phi^i$ and $D^a$, associated to chiral and vector supermultiplets respectively, acquire non-zero vacuum expectation values $\langle F_i\rangle$, $\langle D^a \rangle \neq 0$. This is equivalent to saying that the potential energy \begin{eqnarray} \label{scalarpotential} V = \vert F_i \vert^2 + \dfrac{1}{2} (D^a)^2 \end{eqnarray} \noindent is non-zero. In the MSSM, however, supersymmetry cannot be broken spontaneously by the MSSM fields themselves: the tree-level supertrace relations would then force some superpartners to be lighter than the observed SM fermions. Instead, one breaks supersymmetry “softly”, by introducing explicit SUSY-breaking terms in the Lagrangian which do not spoil the cancellation of quadratic divergences. Such terms, like the scalar mass term $m_{\phi}^2 \phi^* \phi$, the gaugino mass term $M_{\lambda} \lambda \lambda$ and the bilinear and trilinear scalar couplings $b^{i j} \phi_i \phi_j$ and $a^{i j k} \phi_i \phi_j \phi_k$ are believed to be the effective result of an underlying SUSY breaking mechanism that occurs in a hidden sector and is communicated to the MSSM via some messenger fields. One can think of the hidden sector as a collection of singlets under $G_{\textrm{SM}}$ which interact with the SM particles very weakly, for example through gravity. In string theory, a common interpretation is that the hidden and visible sectors are geometrically separated -- they live on different branes separated by extra dimensions and the messenger fields propagate between them, in the bulk \cite{djhchung}. In any case, the great inconvenience of soft SUSY breaking is that it introduces $105$ new free parameters (masses, phases, mixing angles) in addition to those already found in the Standard Model. This degree of arbitrariness in the Lagrangian makes the MSSM seem like an incomplete description of particle physics. 
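As a minimal illustration of F-term breaking (a Polonyi-type toy model with a single gauge singlet $\Phi$, not part of the MSSM itself), take the superpotential $W = f \Phi$ with $f$ a non-zero constant. Then

```latex
F = - \dfrac{\partial W}{\partial \phi} = -f \neq 0 \quad \textrm{for every } \phi \qquad \Longrightarrow \qquad V = \vert F \vert^2 = \vert f \vert^2 > 0 ,
```

so no field configuration has vanishing energy and supersymmetry is broken everywhere in field space.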
\subsection{Grand Unified Theories} \label{gutsubsection} \noindent Another natural way to extend the Standard Model is to presume that the non-semisimple gauge group $G_{\textrm{SM}}$ is embedded into a larger simple group $G_{\textrm{GUT}}$, so that the three gauge couplings $g_1$, $g_2$ and $g_3$, corresponding to $U(1)_Y$, $SU(2)_L$ and $SU(3)$ respectively, unify into a single coupling $g_{\textrm{GUT}}$ of a Grand Unified Theory (GUT). This unification is motivated by the observation that gauge couplings evolve with the energy scale, according to the renormalisation group equations. For example, $g_3$ is shown to decrease as the energy scale increases, while $g_1$ gets significantly larger. At around $10^{15} - 10^{16}\textrm{ GeV}$, the three gauge couplings reach very similar values, although they are not precisely equal. Equality is, however, achieved in the MSSM, which is one of the reasons why we will consider only supersymmetric versions of Grand Unified Theories, despite the fact that originally they were not built with supersymmetry in mind. Another reason why SUSY GUTs are favoured is because they can be embedded in superstring theories, thus paving the way for higher energy exploration. The phenomenological interpretation of a Grand Unified Theory is that $G_{\textrm{GUT}}$ is the underlying symmetry of nature, while the Standard Model is the effective theory resulting after $G_{\textrm{GUT}}$ has been broken at a certain high-energy scale. As such, all GUTs provide an explanation for the values of the $U(1)_Y$ charges, by embedding the hypercharge in a simple group. For example, the well-known condition $Q_{\textrm{proton}}+Q_{\textrm{electron}}=0$ follows from the fact that quarks and leptons are combined in the same GUT multiplets, and the GUT group generators are traceless \cite{ksbabu}. Since the maximum number of commuting generators (i.e. 
the rank) of $G_{\textrm{SM}}$ is 4, it is required that $\textrm{rank}(G_{\textrm{GUT}})\geq 4$. For $SU(n)$, the rank is $n-1$, therefore the smallest simple group that can contain $G_{\textrm{SM}}$ is $SU(5)$. In this GUT model, discovered by Georgi and Glashow \cite{georgiglashow}, each generation of SM particles fits into a $\overline{\mathbf{5}} \oplus \mathbf{10}$ representation of $SU(5)$, in the following way \begin{eqnarray} \overline{\mathbf{5}} \rightarrow (\overline{\mathbf{3}},\mathbf{1})_{1/3} \oplus (\mathbf{1},\mathbf{2})_{-1/2}, \,\,\,\,\,\,\,\,\, \mathbf{10} \rightarrow (\mathbf{3},\mathbf{2})_{1/6} \oplus(\overline{\mathbf{3}},\mathbf{1})_{-2/3} \oplus (\mathbf{1},\mathbf{1})_1, \end{eqnarray} \noindent where $\overline{\mathbf{5}}$ is the anti-fundamental representation of $SU(5)$ and $\mathbf{10}$ is the antisymmetric component of the $\mathbf{5} \otimes \mathbf{5}$ product. Comparing this to Eq.~\eqref{representations} shows that $\overline{\mathbf{5}}^i$ contains $\lbrace d^i, L^i\rbrace$ and $\mathbf{10}^i$ contains $\lbrace Q^i, u^i, e^i \rbrace$, for $i=1,2,3$. The interaction of these matter fields is mediated by $n^2-1 = 24$ gauge fields, transforming in the adjoint representation of $SU(5)$, for which the SM decomposition reads \begin{eqnarray} \mathbf{24} \rightarrow (\mathbf{8},\mathbf{1})_0 \oplus (\mathbf{1},\mathbf{3})_0 \oplus (\mathbf{1},\mathbf{1})_0 \oplus (\mathbf{3},\mathbf{2})_{-5/6} \oplus (\overline{\mathbf{3}},\mathbf{2})_{5/6}, \end{eqnarray} \noindent where the first three terms are recognisable as the $12$ SM gauge fields in Eq.~\eqref{smgaugerep}, and the last two terms are $12$ new bosons denoted by $\lbrace X^{\pm}, Y^{\pm}\rbrace_{1,2,3}$. These new bosons can mediate transitions between quarks and leptons, thus violating lepton and baryon number as well as enabling proton decay modes such as $p\rightarrow \pi^0 e^+$ \cite{jhisano}. 
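The gauge-coupling unification that motivates this construction can be sketched numerically at one loop. The snippet below is illustrative only: the beta coefficients $b = (33/5, 1, -3)$ are the standard one-loop MSSM values in GUT normalisation, the inputs at $M_Z$ are rough, and two-loop and threshold corrections are ignored.

```python
import math

# One-loop MSSM beta coefficients (GUT-normalised U(1)_Y, SU(2)_L, SU(3)_c)
b = (33.0 / 5.0, 1.0, -3.0)
# Approximate inverse couplings 1/alpha_i at the Z mass (M_Z ~ 91.2 GeV)
inv_alpha_mz = (59.0, 29.6, 8.5)
MZ = 91.2

def inv_alpha(i, mu):
    """One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - b_i/(2 pi) ln(mu/MZ)."""
    return inv_alpha_mz[i] - b[i] / (2.0 * math.pi) * math.log(mu / MZ)

# Scale where alpha_1 and alpha_2 meet, from
# 1/alpha_1(MZ) - 1/alpha_2(MZ) = (b_1 - b_2)/(2 pi) * ln(mu/MZ)
log_ratio = 2.0 * math.pi * (inv_alpha_mz[0] - inv_alpha_mz[1]) / (b[0] - b[1])
mu_gut = MZ * math.exp(log_ratio)

print(f"M_GUT ~ {mu_gut:.2e} GeV")
for i in range(3):
    print(f"1/alpha_{i + 1}(M_GUT) = {inv_alpha(i, mu_gut):.1f}")
```

Even at this crude level, the crossing scale lands around $2 \times 10^{16}$ GeV, with $1/\alpha_3$ within about one unit of $1/\alpha_1 = 1/\alpha_2 \approx 24$, illustrating the near-exact unification quoted above for the MSSM.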
Consequently, when $SU(5)$ is broken, the $X$ and $Y$ bosons must acquire a mass of at least $10^{15} \textrm{ GeV}$ according to current experimental limits, a condition which is met by the SUSY version of the $SU(5)$ GUT, but not by the minimal non-supersymmetric model \cite{wdeboer}. The symmetry breaking mechanism is achieved by a GUT-Higgs scalar field $H_{\mathbf{24}}$ in the adjoint representation $\mathbf{24}$, having a non-vanishing vev $\langle H_{\mathbf{24}} \rangle = v_{24} \textrm{ diag} (2,2,2,-3,-3)$, which is proportional to the hypercharge generator and commutes with $G_{\textrm{SM}}$. The masses of the $X$ and $Y$ bosons are then given by $M_{X,Y}^2 \sim g_{\textrm{GUT}}^2 v^2_{24}$. On the other hand, the SM Higgs doublet belongs to a $\overline{\mathbf{5}}_H$ representation of $SU(5)$, along with three other states which form the colour-triplet Higgs $T$. Since the triplet can also mediate proton decay, this time through the mode $p\rightarrow \overline{\nu} K^+$, it must receive a heavy mass of at least $10^{15}$ GeV \cite{dboer}. The manner in which $H$ and $T$ acquire masses so different in orders of magnitude ($m_{H}/m_{T}\sim \mathcal{O}(10^{-13})$) is referred to as the doublet-triplet splitting problem. In the supersymmetric GUT, two Higgs doublets $H_d$ and $H_u$ are needed, which can sit in $\overline{\mathbf{5}}_H$ and $\mathbf{5}_H$ respectively, giving rise to the following types of Yukawa couplings \begin{eqnarray} W^{SU(5)}_{\textrm{Yuk}} = Y^{i,j}_u \Phi^i_{\mathbf{10}} \Phi^j_{\mathbf{10}} H^u_{\mathbf{5}} + Y^{i,j}_{d,l} \Phi^i_{\mathbf{10}} \Phi^j_{\overline{\mathbf{5}}} H^d_{\overline{\mathbf{5}}} . \end{eqnarray} \noindent Here, each term is to be interpreted as a singlet, through an appropriate contraction of $SU(5)$ indices: $\epsilon_{abcde} \Phi^{ab}_{\mathbf{10}} \Phi^{cd}_{\mathbf{10}} H^{e}_{\mathbf{5}}$ and $\Phi^{a b}_{\mathbf{10}} \Phi_{\overline{\mathbf{5}} a} H_{\overline{\mathbf{5}} b}$. 
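Two quick checks on the vev quoted above (using the hypercharge normalisation of the $\overline{\mathbf{5}}$ decomposition given earlier): $\langle H_{\mathbf{24}} \rangle$ is traceless, as any $SU(5)$ generator must be, and it is indeed proportional to the hypercharge generator acting on the fundamental,

```latex
\textrm{Tr}\, \langle H_{\mathbf{24}} \rangle = v_{24} \left( 3 \cdot 2 + 2 \cdot (-3) \right) = 0 , \qquad \langle H_{\mathbf{24}} \rangle = -6\, v_{24}\, \textrm{diag}\left( -\tfrac{1}{3}, -\tfrac{1}{3}, -\tfrac{1}{3}, \tfrac{1}{2}, \tfrac{1}{2} \right) \propto Y .
```

It therefore commutes with the block-diagonal $SU(3) \times SU(2) \times U(1)$ generators, which stay massless, while the off-diagonal $X$ and $Y$ generators do not commute with it and pick up the masses $M_{X,Y}^2 \sim g_{\textrm{GUT}}^2 v_{24}^2$ quoted above.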
Also, note that in the $SU(5)$ theory, $Y_d$ and $Y_l$ are equal at the GUT scale, thus exemplifying Yukawa coupling unification. Looking further, one can seek GUT groups with rank larger than $4$. For $\textrm{rank} = 5$, the only simple groups in which $G_{\textrm{SM}}$ can be embedded are $SU(6)$ and $SO(10)$, while for $\textrm{rank} = 6$, they are $SU(7)$ and $E_6$. We will only discuss the $SO(10)$ and $E_6$ GUT models, since the $SU(6)$ and $SU(7)$ cases are simply extensions of the $SU(5)$ theory and they add no interesting new features. The relevant chain of embeddings reads \begin{eqnarray} \label{gutgroups} G_{\textrm{SM}} \subset SU(5)\subset SO(10) \subset E_6, \end{eqnarray} \noindent but in later chapters we will see that $E_6$ can be embedded further in larger groups such as $E_7$ and $E_8$, the latter being particularly significant because of the $E_8 \times E_8$ heterotic string theory. One must stress however that $E_7$ and $E_8$ do not qualify as 4d GUT groups, because they have no complex representations and therefore the theories would not be chiral. In the $SO(10)$ GUT, one spinor representation $\mathbf{16}$ of $SO(10)$ contains all the fifteen particles of an SM family and an additional unknown particle, interpreted to be the right-handed neutrino. More explicitly, if we decompose $\mathbf{16} = \overline{\mathbf{5}} \oplus \mathbf{10} \oplus \mathbf{1}$ under the $SU(5)$ subgroup, we recognise the previously discussed $\overline{\mathbf{5}} \oplus \mathbf{10}$ representation in $SU(5)$ plus the extra singlet $\mathbf{1}$. The adjoint representation $\mathbf{45}$ contains the 12 SM gauge bosons and 33 extra gauge bosons with weak and colour charges. 
Unlike $SU(5)$, the $SO(10)$ group can be broken down to $G_{\textrm{SM}}$ either directly or through various intermediate steps, for example via the maximal subgroup decomposition: $SO(10) \rightarrow SU(5) \times U(1) \rightarrow G_{\textrm{SM}}$, or via the Pati-Salam model \cite{patisalam}: $SO(10) \rightarrow SU(4) \times SU(2)_L \times SU(2)_R\rightarrow G_{\textrm{SM}}$. Through their non-vanishing vevs, the scalar fields of the Higgs sector are the ones dictating which symmetry-breaking scenario occurs. For a direct decomposition, a $\mathbf{144}_H$ scalar multiplet is needed; for intermediate routes, combinations of scalars such as $\mathbf{45}_H$/$\mathbf{54}_H$/$\mathbf{210}_H$ (rank-preserving) and $\mathbf{16}_H$/$\mathbf{126}_H$ (rank-reducing) are introduced instead. The sheer complexity of the Higgs sector is often regarded as a problem of GUTs that have large unification groups. As for the MSSM Higgs doublets $H_u$ and $H_d$, they are contained in one fundamental representation $\mathbf{10}_H$ of $SO(10)$, which decomposes as $\mathbf{5}_H\oplus\overline{\mathbf{5}}_H$ under $SU(5)$. The only permitted Yukawa term reads \begin{eqnarray} W^{SO(10)}_{\textrm{Yuk}} = Y^{i,j} \Phi^i_{\mathbf{16}} \Phi^j_{\mathbf{16}} H_{\mathbf{10}}, \end{eqnarray} \noindent such that all Yukawa coupling constants are unified at the $SO(10)$ GUT scale. Finally, in the $E_6$ GUT, the $15+1$ matter fields of one SM generation are fitted inside the fundamental representation $\mathbf{27}$. The remaining $11$ states correspond to unknown heavy particles (non-chiral quarks and leptons), thus making the $E_6$ theory very cumbersome from a phenomenological perspective. Usually, the MSSM Higgs doublets also descend from one family $\mathbf{27}$ multiplet, while the other two $\mathbf{27}$s contain pairs of Higgs which are inert and do not get a vev \cite{pathron}. 
Well-known breaking patterns are $E_6 \rightarrow SO(10) \times U(1)$ with branching $\mathbf{27}\rightarrow\mathbf{16}_1\oplus\mathbf{10}_{-2}\oplus\mathbf{1}_4$ and $E_6 \rightarrow SU(3)_c \times SU(3)_L \times SU(3)_R$ (trinification model) with branching $\mathbf{27}\rightarrow (\mathbf{3},\mathbf{3}, \mathbf{1}) \oplus (\overline{\mathbf{3}}, \mathbf{1}, \overline{\mathbf{3}}) \oplus (\mathbf{1},\overline{\mathbf{3}},\mathbf{3})$. Again, the symmetry breaking is realised by scalar fields which must be added to the theory. The Yukawa coupling is of the form \begin{eqnarray} W^{E_6}_{\textrm{Yuk}} = Y^{i,j} \Phi^i_{\mathbf{27}} \Phi^j_{\mathbf{27}} H_{\mathbf{27}}. \end{eqnarray} Overall, Grand Unified Theories have an equal share of strengths and weaknesses. On the one hand, gauge coupling unification and the explanation for the values of hypercharges are extremely valuable features. On the other hand, no reason is given for the existence of three generations and many new Higgs particles have to be introduced by hand in order to explain symmetry breaking. Moreover, the doublet-triplet splitting problem has to be solved through mechanisms which avoid severe fine-tuning, but add new representations to the model. Examples of such mechanisms are the Dimopoulos–Wilczek (missing vev) mechanism in $SO(10)$ and the missing partner model in $SU(5)$ \cite{mohapatra, raby}. Last but not least, important predictions of GUTs such as proton decay and magnetic monopoles have not yet been confirmed by experiment. \subsection{Supergravity} \noindent Supergravity is an extension of supersymmetry, designed to include the principles of General Relativity. In order to make this possible, supersymmetry has to become local, with a spacetime-dependent spinor $\epsilon(x)$ parametrising the infinitesimal SUSY transformation. 
The key ingredient of supergravity is the graviton $h_{\mu \nu}$, a massless spin-2 elementary particle which couples to the stress-energy tensor, thus mediating gravitational interactions. Its fermionic, spin-$3/2$ partner, the gravitino $\psi^\alpha_\mu$, equipped both with a spinor index $\alpha$ and a spacetime index $\mu$, is the gauge field of local supersymmetry and becomes massive when SUSY is broken, by absorbing the emerging goldstino in the so-called super-Higgs mechanism. There are two ways in which the graviton can be related to the metric $g_{\mu \nu}$, either through an infinitesimal expansion $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$ around the flat metric $\eta_{\mu \nu}$, or through the vielbein formalism. As is well-known from General Relativity, the metric (and implicitly the graviton) has to satisfy Einstein's field equations, which, in a vacuum, correspond to minimising the Einstein-Hilbert action \begin{eqnarray} S_{\textrm{EH}}= \dfrac{1}{16\pi G_N} \int d^4 x \sqrt{-g} R \, , \end{eqnarray} \noindent where $G_N$ is Newton’s constant. As for the chiral and vector multiplets of the theory, they are taken into account when the superpotential $W$, the K\"ahler potential $K$ and the field strength superfield $\mathcal{W}$ are included in the total action, so that, for example, Eq.~\eqref{mssmyukawa} is reinterpreted in the context of supergravity. Of particular interest is the scalar potential \begin{eqnarray} \label{sugrapotential} V = e^K \left(K^{I \overline{J}} D_I W D_{\overline{J}} \overline{W} - 3 \vert W \vert^2\right) + \dfrac{1}{2} \mathcal{D}^2, \\[1mm] \quad \quad \quad \quad \textrm{where} \,\, D_I W = \dfrac{\partial W}{\partial \Phi^I} + \dfrac{\partial K}{\partial \Phi^I} W , \notag \end{eqnarray} \noindent which is expressed in terms of the auxiliary fields $F_I = e^{K/2} D_I W$ and $\mathcal{D}^a$. 
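It is instructive to restore the explicit factors of $M_{\textrm{P}}$ hidden in Eq.~\eqref{sugrapotential} (a schematic sketch; conventions differ between references):

```latex
V = e^{K/M_{\textrm{P}}^2} \left( K^{I \overline{J}} D_I W \, D_{\overline{J}} \overline{W} - \dfrac{3}{M_{\textrm{P}}^2} \vert W \vert^2 \right) , \qquad D_I W = \dfrac{\partial W}{\partial \Phi^I} + \dfrac{1}{M_{\textrm{P}}^2} \dfrac{\partial K}{\partial \Phi^I} W ,
```

so that in the decoupling limit $M_{\textrm{P}} \rightarrow \infty$, with canonically normalised fields, $V \rightarrow \vert \partial W / \partial \Phi^I \vert^2$ and the global F-term potential of Eq.~\eqref{scalarpotential} is recovered. The negative $-3 \vert W \vert^2$ term is what allows vacua with $V = 0$ (or even $V < 0$) despite broken supersymmetry.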
As in the global SUSY case, supersymmetry is spontaneously broken when at least one auxiliary field has a non-zero vev, however this time $V$ is no longer positive semidefinite, therefore solutions with $V=0$ that approximate our Universe can in principle be consistent with broken supersymmetry. Another feature of supergravity is that it can be formulated in more than four dimensions, in a way that mimics the Kaluza-Klein theory, the primary motivation being the unification of gravity with the other three forces of nature. Being non-renormalisable however, supergravity has to be interpreted as the low-energy limit of a higher structure. In particular, superstring theories lead to effective supergravity theories in 10d, as we will see in later chapters. For this reason, it is useful to establish a specific string model before returning to supergravity and its implications for the phenomenology of particle physics. \section{String Theory} \noindent String Theory is the leading candidate for a quantum theory that unifies all interactions of nature, including gravity. In the framework of string theory, elementary particles are interpreted to be one-dimensional objects called strings, which appear to be point-like only at energies much lower than the string scale $M_s$. These objects can be either open or closed and sweep a two-dimensional surface, called worldsheet, that is parametrised by a space coordinate $\sigma$ and a time coordinate $\tau$. As strings vibrate, each vibrational mode is identified with a fundamental particle, whose mass and quantum numbers are determined by the equations of the string. Overall, the infinite number of oscillation modes gives rise to an infinite tower of particles, but only the zero modes, i.e.\ the massless particles, are observable at energies way below $M_s$. In particular, the zero-mode spectrum of a closed string always includes the graviton, which means that gravity is automatically incorporated. 
Unlike supergravity however, string theory is UV-complete, since there is a UV/IR correspondence between strings at high and low energies, thus allowing all UV divergences to be reinterpreted as IR divergences. Moreover, the Lagrangian of string theory only contains one free parameter $\alpha'$ (historically called the Regge slope), which defines the string length $l_s = \sqrt{\alpha '}$ and the string scale $M_s=1/\sqrt{\alpha '}$. The first attempt to build a string model was the bosonic string theory, only consistent in $D=26$ dimensions, the reason being conformal anomaly cancellation. Because its spectrum only contains bosons and because of the presence of tachyons (particles with negative mass squared), it was clear from the start that this theory could not represent a description of our Universe. Further development led to the appearance of superstring theory, where both bosons and fermions are present and where the dimension of the spacetime is restricted to $D=10$. No tachyons are present in superstring theory and the spectrum of excitations is governed by supersymmetry. Phenomenologically, this makes superstring theory a possible extension of physics at higher energies. After the first string revolution in 1984, five different superstring theories were developed, namely Type I, Type IIA, Type IIB, heterotic $SO(32)$ and heterotic $E_8 \times E_8$. Type I is based on both open and closed strings, while the other four only contain closed strings. During the second string revolution (1995), it was argued that these five theories all arise as different limits of an underlying 11-dimensional theory, called M-theory (see Figure~\ref{figsuperstrings}). Moreover, the five superstring theories are connected by dualities: for example, Type IIA and Type IIB are T-dual, and so are heterotic $SO(32)$ and heterotic $E_8 \times E_8$. There is also an S-duality between heterotic $SO(32)$ and Type I. 
We will discuss heterotic string theories in more detail, since they are the class of string models on which this thesis is based. They are particularly important in this context because they connect naturally with particle physics: one $E_8$ factor of the gauge group $E_8 \times E_8$ contains all the GUT groups in \eqref{gutgroups}. By comparison, IIB has no gauge symmetry and IIA has only $U(1)$ gauge symmetry, thus making them more problematic from a phenomenological perspective, unless D-branes are introduced. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{superstrings2} \caption{The various types of superstring theories arising as limits of M-theory. \cite{kiritsislecture}} \label{figsuperstrings} \end{figure} \subsection{Heterotic string theory} \noindent The heterotic string theory is one of the most promising candidates for a unified theory of physics. It was first introduced in 1985 as a theory of closed strings with decoupled left- and right-moving modes, where the left sector is defined in 26 dimensions as a bosonic string theory with spacetime coordinates $\lbrace X_L^i(\sigma + \tau)\rbrace_{i=0,...,25} $, while the right sector is defined in 10 dimensions as a superstring theory, with bosonic and fermionic coordinates denoted as $\lbrace X_R^i (\sigma-\tau)\rbrace_{i=0,...,9}$ and $\lbrace\psi^i_R(\sigma-\tau)\rbrace_{i=0,...,9}$ respectively, where $\tau$ and $\sigma$ are worldsheet coordinates of the string. The extra $16$ degrees of freedom in the left sector are regarded as dimensions of an internal compact space, namely a maximal $16\textrm{d}$ torus $T^{16}$ with critical radius $R = \sqrt{\alpha'}$. 
It is useful to re-label these parameters as $\lbrace X^I(\sigma + \tau)\rbrace_{I=10,...,25}$, and separate them from the rest of the bosonic left-movers $\lbrace X_L^i(\sigma + \tau) \rbrace_{i=0,...,9}$, which are to be combined with their right-moving counterparts in order to form the physical spacetime coordinates of the string in 10 dimensions \begin{eqnarray} X^i(\sigma, \tau) = X_L^i(\sigma + \tau) + X_R^i(\sigma - \tau). \end{eqnarray} \noindent Following Ref.~\cite{stringquartet1}, the worldsheet action which characterises the dynamics of the heterotic string can be written as \begin{eqnarray} S = - \int d \tau \int_0^\pi d \sigma \dfrac{1}{4 \pi \alpha '} \left[ \partial_{\alpha} X^i \partial^{\alpha} X^i + \partial_{\alpha} X^I \partial^{\alpha} X^I + \overline{\psi}{}^i_R \Gamma^{\alpha} \partial_{\alpha} \psi_R^i \right], \end{eqnarray} \noindent where $\Gamma^\alpha$ ($\alpha=0,1$) are the two-dimensional Dirac matrices satisfying $\lbrace \Gamma^\alpha,\Gamma^\beta\rbrace = 2 \eta^{\alpha \beta}$. Under the light-cone quantisation, which is introduced to remove all negative-norm states, the coordinates $X^{\pm} = (X^0 \pm X^{9})/\sqrt{2}$ and $\psi^{\pm}_R=(\psi_R^0 \pm \psi_R^{9})/\sqrt{2}$ are gauge-fixed. This breaks manifest $SO(1,9)$ Lorentz symmetry down to the rotational subgroup $SO(8)$ of the transverse coordinates $\lbrace X^i \rbrace_{i=1,...,8}$ and $\lbrace \psi_R^i \rbrace_{i=1,...,8}$. Furthermore, under an alternative formulation of heterotic string theory, known as the fermionic construction, the extra 16 degrees of freedom $\lbrace X^I\rbrace_{I=10,...,25}$ are redefined as 32 left-moving spin-$1/2$ Majorana fermions $\lbrace \lambda^A\rbrace_{A=1,...,32}$, whose action according to Ref.~\cite{bbschwarz} is of the form \begin{eqnarray} S \sim \int d^2 \sigma \overline{\lambda}{}^A \Gamma^{\alpha} \partial_{\alpha} \lambda^A. 
\end{eqnarray} \noindent Through this formulation, the entire action of the string acquires a global $SO(32)$ symmetry: fermions $\lambda^A$ transform in the fundamental representation, while coordinates $X^i$, $\psi_R^i$ are singlets. For this reason, left-movers can be specified as $SO(8) \times SO(32)$ multiplets, while right movers, which are only affected by the $SO(8)$ rotation, are labeled by their $SO(8)$ quantum numbers. This global $SO(32)$ worldsheet symmetry corresponds to a gauge symmetry in spacetime. In fact, it can be shown that heterotic string theories possess either gauge group $SO(32)$ or $E_8 \times E_8$, depending on the choice of GSO projection. The GSO projection is essential for realising space-time supersymmetry, because it truncates the spectrum at every mass level, so that the bosonic and fermionic degrees of freedom become equal. In particular, the tachyonic ground state of the bosonic sector is completely eliminated. Since we will solely focus on the $E_8\times E_8$ theory, the GSO projection involves splitting the fermions $\lambda^A$ into two groups and imposing a separate projection condition on each set, so that bosonic states $\lambda^A \lambda^B$ form the adjoint representation $(\mathbf{248}, \mathbf{1}) \oplus (\mathbf{1}, \mathbf{248})$ of $E_8 \times E_8$. As for the other heterotic string theory, it is often disregarded in phenomenological studies, because the adjoint of $SO(32)$ does not contain the spinor representation of $SO(10)$, therefore the connection to SUSY GUTs is harder to realise. Now, in order to build the massless spectrum of the $E_8\times E_8$ theory, the left-moving and right-moving modes of zero mass have to be paired, so that their tensor product is interpreted as a physical state. 
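The trade of $16$ chiral bosons for $32$ Majorana-Weyl fermions is consistent at the level of the worldsheet conformal anomaly, since a free boson carries central charge $c=1$ and a free Majorana fermion $c=\tfrac{1}{2}$:

```latex
c\left( \lbrace X^I \rbrace_{I=10,...,25} \right) = 16 \times 1 = 32 \times \tfrac{1}{2} = c\left( \lbrace \lambda^A \rbrace_{A=1,...,32} \right) ,
```

so both formulations saturate the same left-moving anomaly budget.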
The massless states of the right-moving sector are the vector $\mathbf{8}_V$ and the spinor $\mathbf{8}$ representations of $SO(8)$, while in the left-moving sector, we encounter a vector $(\mathbf{8}_V, \mathbf{1})$ and a tensor $(\mathbf{1},\textbf{Adj}_{E_8 \times E_8})$ representation of $SO(8) \times (E_8 \times E_8)$. The combination of the two sectors gives the following spectrum of particles \begin{align} (\mathbf{8}_V,\mathbf{1}) \times (\mathbf{8}_V + \mathbf{8}) & = (\mathbf{1},\mathbf{1}) + (\mathbf{28},\mathbf{1}) + (\mathbf{35},\mathbf{1}) + (\mathbf{56},\mathbf{1}) + (\mathbf{8}',\mathbf{1}), \\ (\mathbf{1},\textbf{Adj}_{E_8 \times E_8})\times (\mathbf{8}_V + \mathbf{8}) & = (\mathbf{8}_V ,\textbf{Adj}_{E_8 \times E_8})+(\mathbf{8} ,\textbf{Adj}_{E_8 \times E_8}), \end{align} \noindent which are to be interpreted in the next section as the gravity multiplet $\lbrace \phi, B, g, \psi, \lambda \rbrace$ and the $E_8 \times E_8$ gauge multiplet $\lbrace A^a, \chi^a \rbrace$ of an effective $N=1$ supergravity theory. \section{The 10d Heterotic $N = 1$ Supergravity} \label{n=1susy} \noindent In the low-energy limit, at energies far below the string scale $M_s$, the heterotic string theory is described by a 10-dimensional $N = 1$ supergravity, which is coupled to a 10-dimensional $E_8 \times E_8$ super-Yang-Mills theory. The field content of this theory is given by a gravity multiplet, which contains the metric $g_{M N}$, the NS two-form $B_{M N}$, and the dilaton $\phi$, as well as their fermionic superpartners, the gravitino $\psi_M$ and the dilatino $\lambda$, and an $E_8 \times E_8$ gauge multiplet, consisting of the vector potential $A_M^a$ and its superpartner, the gaugino $\chi^a$. The field strength associated to the gauge field is defined as $F = dA + A \wedge A$, while the spin connection $\omega$ gives rise to the curvature tensor $R= d\omega + \omega \wedge \omega$.
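For concreteness, these form definitions can be unpacked in components, a standard expansion included here for reference ($f^{abc}$ denote the gauge structure constants and $A, B, C$ are frame indices; conventions for overall factors vary between references):
\begin{align}
F^a_{MN} &= \partial_M A^a_N - \partial_N A^a_M + f^{abc} A^b_M A^c_N , \notag \\
R^{A}{}_{B\,MN} &= \partial_M \omega_N{}^{A}{}_{B} - \partial_N \omega_M{}^{A}{}_{B} + \omega_M{}^{A}{}_{C}\, \omega_N{}^{C}{}_{B} - \omega_N{}^{A}{}_{C}\, \omega_M{}^{C}{}_{B} . \notag
\end{align}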
With these notations, following Refs.~\cite{GSW, bbschwarz, Polchinski}, the bosonic part of the 10d supergravity action is written up to first order in $\alpha'$ as \begin{gather} \label{10daction} S = \dfrac{1}{2 \kappa^2} \int d^{10} x \sqrt{-g} e^{- 2 \phi} \left( R + 4 (\partial \phi)^2 - \dfrac{1}{2} H^2 - \dfrac{\alpha'}{4} \textrm{Tr} F^2 \right) , \\ \textrm{with } H = d B - \dfrac{\alpha'}{4} (\omega_{\textrm{YM}} - \omega_{\textrm{L}})\, , \notag \end{gather} \noindent where $\kappa$ is the ten-dimensional gravitational coupling constant, while $\omega_{\textrm{YM}}$ and $\omega_{\textrm{L}}$ are the gauge and gravitational Chern-Simons forms, respectively. The fermionic terms of the action are given in Eq.~\eqref{fermionicaction}; since supersymmetry determines them completely from the bosonic part, they can be omitted here. Before moving on with our discussion, a close examination of anomalies is in order. Since heterotic string theories are chiral, gauge and gravitational anomalies are expected to arise from hexagon loops of chiral fermions. These anomalies are analogous to the triangle diagrams in the Standard Model and their external fields are combinations of gauge bosons and gravitons. Mathematically, the 10d anomaly is represented by a 12-form polynomial, which factorises into a 4-form and an 8-form term for specific gauge groups such as $E_8\times E_8$ and $SO(32)$. In fact, those two groups are the only viable gauge groups for a consistent 10d super-Yang-Mills and supergravity theory primarily because of this factorisation, since anomalies of this form are reducible and can be canceled out completely \cite{fabianruehle}. This is realised through the famous Green-Schwarz mechanism, by introducing counter-terms corresponding to tree-level exchanges of a two-form $B$ between external fields (see Fig.~\ref{figgreenschwarz}). Here, $B$ is known to be the anti-symmetric component of the gravity multiplet, and $H$ is its field strength.
It is important to note that the Chern-Simons forms $\omega_{\textrm{YM}}$ and $\omega_{\textrm{L}}$ are added to the expression of $H$ in order for the Green-Schwarz mechanism to take place. By definition, the Chern-Simons forms are expressed as \begin{eqnarray} \omega_{\textrm{YM}} = \textrm{Tr} \left(A \wedge dA + \dfrac{2}{3} A \wedge A \wedge A\right), \,\,\,\,\,\,\,\,\,\,\,\, \omega_{\textrm{L}} = \textrm{Tr} \left(\omega \wedge d \omega + \dfrac{2}{3} \omega \wedge \omega \wedge \omega\right) \end{eqnarray} \noindent and satisfy $d\omega_{\textrm{YM}} = \textrm{Tr} (F\wedge F)$ and $d \omega_{\textrm{L}} = \textrm{Tr}(R\wedge R)$. Because of this, the field strength $H$ must obey the modified Bianchi identity \begin{eqnarray} \label{modifiedbianchi} d H = \dfrac{\alpha'}{4}\left(\textrm{Tr} (R \wedge R) - \textrm{Tr} (F \wedge F)\right) . \end{eqnarray} \noindent With this setup, one can finally build an anomaly-free particle model. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{green-schwarz-mechanism12} \caption{Diagrammatic depiction of the Green-Schwarz mechanism. Anomalies arising from chiral fermion loops are canceled by the tree-level exchange of a 2-form $B$.} \label{figgreenschwarz} \end{figure} In order for supersymmetry to remain unbroken, the vacuum expectation value of the SUSY variation of every field must vanish. More exactly, suppose $\epsilon(x^M)$, a spinor of $SO(1,9)$, is the parameter of local $N=1$ supersymmetry with corresponding supercharge $Q$. Then the operator $Q$ must annihilate the vacuum state $\vert 0 \rangle$, so that $\langle \delta_{\epsilon} \Phi \rangle \equiv \langle 0\vert \left[ \overline{\epsilon} Q, \Phi \right] \vert 0 \rangle \stackrel{!}{=} 0$ for any field $\Phi $. If $\Phi$ is bosonic, the equation is trivially satisfied, because the supersymmetric variation of a bosonic field is equal to a sum of fermionic fields, whose vevs vanish, since they would otherwise violate Lorentz invariance.
Therefore, one only needs to ensure that the variation $\langle \delta_{\epsilon} \Phi \rangle$ vanishes in the case where $\Phi$ is fermionic. Following \cite{GSW}, the supersymmetry variations of the fermionic fields (the gravitino, the dilatino and the gaugino) are given by \begin{align} \label{killingspinoreq} \delta \psi_M & = \dfrac{1}{\kappa} D_M \epsilon + \dfrac{1}{8 \sqrt{3} \kappa} e^{-\phi} \left( \Gamma_{M}{}^{NPQ} - 9 \delta_M^N \Gamma^{PQ}\right) \epsilon H_{NPQ} + (\textrm{Fermi})^2 ,\notag \\ \delta \chi^a & = - \dfrac{\sqrt{\alpha'}}{4 \sqrt{2} \kappa } e^{-\phi/2} \Gamma^{M N} F_{MN}^a \epsilon + (\textrm{Fermi})^2 , \\ \delta \lambda & = - \dfrac{1}{\sqrt{2}} \left(\Gamma \cdot \partial \phi \right) \epsilon + \dfrac{1}{4 \sqrt{6} \kappa}e^{-\phi} \Gamma^{MNP} \epsilon H_{MNP} + (\textrm{Fermi})^2. \notag \end{align} \noindent Therefore, the Killing spinor equations are obtained by imposing $\langle\delta \psi_M\rangle, \langle\delta \chi^a\rangle, \langle\delta \lambda\rangle \stackrel{!}{=} 0 $. \subsection{The compactification ansatz} \label{compactificationansatz} \noindent In the remainder of this section, we discuss, in a somewhat informal manner, the preliminary mathematical conditions for a realistic heterotic string model. More technical details regarding the mathematics used here are provided in Section~\ref{mathschapter}. The goal of constructing 10d heterotic string theories is to connect them to observable 4d physics and, in particular, to $N=1$ SUSY extensions of the Standard Model. This is done by compactifying the six extra dimensions at a compactification scale $M_c=1/l_c$, large enough to escape detection (with $l_c$ being the typical length of the curled up dimensions). 
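To get a feel for the scales involved, one can convert a compactification scale into a length via $l_c = \hbar c / M_c$. The GUT-scale value used below is purely illustrative and is not a number quoted in the text:

```python
# Illustrative conversion l_c = hbar*c / M_c, with an assumed (hypothetical)
# compactification scale near the GUT scale.
hbar_c_GeV_m = 1.9733e-16   # hbar*c ~ 197.33 MeV*fm, expressed in GeV*m
M_c_GeV = 1.0e16            # assumed compactification scale in GeV
l_c_m = hbar_c_GeV_m / M_c_GeV
print(f"l_c ~ {l_c_m:.1e} m")  # ~ 2e-32 m, far below any collider resolution
```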
The simplest ansatz is to assume that the 10d background is a direct product $M_4 \times X$, where $M_4$ is a 4d maximally symmetric space (Minkowski, de Sitter or anti-de Sitter), as suggested by current cosmological models\footnote{Experimental evidence indicates that the Universe is de Sitter; however, the cosmological constant is so small, compared to the string scale $M_s$, that string compactifications with Minkowski space are a good approximation. In addition, string models with AdS vacuum are also compatible with observation, provided that the vacuum is uplifted via the KKLT mechanism.}, and $X$ is a 6d compact manifold with tangent bundle $TX$. To avoid confusion, the 10d coordinates will be labeled as $\lbrace x^{M}\rbrace_{M=0,...,9}$, the 4d external coordinates as $\lbrace x^{\mu}\rbrace_{\mu=0,...,3}$ and the 6d internal coordinates as $\lbrace y^{m}\rbrace_{m=4,...,9}$. With these notations, every field can be decomposed explicitly into external and internal components. For example, the background metric (i.e. the vev of $g_{MN}$) becomes block diagonal \begin{eqnarray} d s^2 = g_{\mu \nu} dx^{\mu} dx^{\nu} + g^{(6)}_{m n} dy^m dy^n, \end{eqnarray} \noindent as the Lorentz group $SO(1,9)$ breaks down to $SO(1,3)\times SO(6)$. Any non-diagonal perturbation $\delta g_{\mu n}$ is forbidden to acquire a vev, since it transforms as a vector under $SO(1,3)$ and would therefore violate Lorentz invariance. In more general cases, warp factors $A(y^m)$ are introduced by fluxes to modify the 4d metric to $e^{2 A(y)}g_{\mu \nu}$. Although useful in the context of moduli stabilisation, such factors will not be considered here. In addition to specifying a spacetime ansatz, one needs to break the gauge group $E_8 \times E_8$ down to the gauge group $H_{4d}$ of a 4d theory.
This is achieved by turning on background values for the internal components of gauge fields $A^a_m$, which are interpreted as connections of a vector bundle $V \rightarrow X$, with structure group $G \subset E_8$. The effect is that one $E_8$ factor splits as $G \times H_{4d}$, provided that $H_{4d}$ is the commutant of $G$ in $E_8$. The other $E_8$ is considered to be in the ``hidden sector'', which couples only gravitationally to the physical theory and therefore its effects are negligible. If $\tilde{V}\rightarrow X$ is the hidden sector vector bundle, with structure group in $E_8$, then the complete $E_8 \times E_8$ bundle over $X$ is given by $V \oplus \tilde{V}$. In most calculations, however, we will assume $\tilde{V}$ to be trivial. The choice of internal manifold $X$ and vector bundle $V$ is not arbitrary. As argued in Section~\ref{susymssm}, preserving $N=1$ SUSY in 4d is important for phenomenology and therefore certain restrictions have to be imposed. Supersymmetry is needed at low energies in order to stabilise the Higgs mass and we only want $N=1$, rather than $N \geq 2$ supersymmetry, because the theory has to contain chiral fermions. \subsection{Conditions for 4d $N = 1$ supersymmetry} \label{conditionsforn=1susy} \noindent Finding the conditions for 4d $N = 1$ SUSY amounts to applying the compactification ansatz $M_4 \times X$ to the Killing spinor equations in \eqref{killingspinoreq}. One looks for a solution which preserves precisely $1/4$ of the original $16$ supercharges. In order to simplify the discussion, we will assume that the vev of the dilaton $\phi$ is a constant and the vev of the field strength $H$ vanishes. Under these assumptions, the equation for the dilatino is automatically satisfied.
However, the equations corresponding to the other two fermionic fields, the gravitino and the gaugino, are non-trivial and can be recast in the following form \begin{align} \label{covconst} \langle \delta_{\epsilon} \psi_M \rangle & = \nabla_M \epsilon \stackrel{!}{=} 0 \\ \label{gaugino} \langle \delta_{\epsilon} \chi^a\rangle & = \Gamma^{m n} F^a_{m n} \epsilon \stackrel{!}{=} 0 . \end{align} \noindent Here, the SUSY parameter $\epsilon$ is a $\mathbf{16}$ Majorana-Weyl spinor of $SO(1,9)$, which breaks into $(\mathbf{2}, \mathbf{4}) \oplus (\mathbf{2}', \overline{\mathbf{4}})$ under $SO(1,3)\times SO(6)$. It is convenient to express $\epsilon$ as $\eta \otimes \xi$, where $\eta$ is the external spinor $\mathbf{2}$ and $\xi$ is the internal spinor $\mathbf{4}$. The requirement of \eqref{covconst} that $\epsilon$ is covariantly constant means that both $\nabla_{\mu} \eta$ and $\nabla_m \xi$ must vanish. Using the relation \begin{eqnarray} \left[ \nabla_{\mu}, \nabla_{\nu}\right]\eta = \dfrac{1}{4} R_{\mu\nu\rho\sigma}\gamma^{\rho\sigma} \eta, \end{eqnarray} \noindent and the fact that, for a maximally symmetric space, the Riemann curvature tensor is \begin{eqnarray} R_{\mu\nu\rho\sigma} = \dfrac{R}{12} \left( g_{\mu\rho} g_{\nu\sigma} - g_{\mu\sigma} g_{\nu\rho} \right), \end{eqnarray} \noindent one can see that the only way for $\nabla_{\mu} \eta$ to be zero is when the 4d manifold $M_4$ is flat (Minkowski)\footnote{To our knowledge, no dS vacuum has ever been achieved in heterotic models. On the other hand, up-lifting to dS is claimed to be achieved in KKLT type IIB models using anti-D3-branes \cite{kkltref}.}. In a similar way, the condition $\nabla_m \xi \stackrel{!}{=}0$ implies that $R_{m n p q} \Gamma^{p q} \xi$ must vanish, but without a maximally symmetric restriction, the internal manifold $X$ is not required to be flat, only Ricci-flat, i.e. $R_{m n} = 0$.
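For completeness, the external step can be spelled out. Inserting the maximally symmetric curvature into the integrability condition and using $g_{\mu\rho} g_{\nu\sigma}\gamma^{\rho\sigma} = \gamma_{\mu\nu} = - g_{\mu\sigma} g_{\nu\rho}\gamma^{\rho\sigma}$ gives
\begin{eqnarray}
0 \stackrel{!}{=} \left[ \nabla_{\mu}, \nabla_{\nu}\right]\eta = \dfrac{1}{4}\cdot\dfrac{R}{12} \left( g_{\mu\rho} g_{\nu\sigma} - g_{\mu\sigma} g_{\nu\rho} \right) \gamma^{\rho\sigma}\eta = \dfrac{R}{24}\, \gamma_{\mu\nu}\, \eta , \nonumber
\end{eqnarray}
\noindent and since $\gamma_{\mu\nu}\eta \neq 0$ for a non-trivial spinor $\eta$, the scalar curvature $R$ must vanish, singling out Minkowski space among the maximally symmetric options.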
In addition to that, the holonomy group of $X$ has to ensure that the spinor $\xi$ stays invariant under parallel transport. The most general holonomy of a 6d manifold is $SO(6) \simeq SU(4)$; however, in our case it has to be a subgroup $\textrm{Hol}(X) \subset SU(4)$ such that every element $U \in \textrm{Hol}(X)$ satisfies $U \xi = \xi$. One can easily see that the largest subgroup with this property is $SU(3)$, as $\xi$ can be rotated to take the form $(0,0,0,\xi_0)^T$, so that $SU(3)$ transformations acting on the first three components leave $\xi$ as a singlet. In fact, $SU(3)$ and discrete subgroups of $SU(3)$, which will not be considered here\footnote{For simplicity of the model, we assume the holonomy group of $X$ is $SU(3)$; however, in orbifold constructions, a discrete subgroup of $SU(3)$ can be considered.}, are the only viable choices, because smaller subgroups of $SU(4)$ would allow too many supercharges in the 4d theory. Therefore, in addition to being Ricci-flat, $X$ must have precisely $SU(3)$ holonomy. Finding manifolds with such properties is in principle not an easy task. Luckily, a conjecture by Eugenio Calabi \cite{calabiconj1, calabiconj2}, followed by a proof by Shing-Tung Yau \cite{yautheorem}, led to the following theorem \begin{theorem} [Yau's theorem] \label{yautheorem} A compact, 2n-dimensional K\"ahler manifold with vanishing first Chern class admits a unique Ricci-flat K\"ahler metric, for each given K\"ahler class. \end{theorem} \noindent Here, K\"ahler means complex manifold with holonomy $U(n)$ and the first Chern class $c_1(TX)$ is a topological invariant given by the cohomology class $[R]$ of the Ricci form \begin{eqnarray} \label{firstchernclass} c_1 (TX) \equiv \dfrac{1}{2 \pi} [R] \stackrel{!}{=} 0. \end{eqnarray} \noindent Such a manifold is called Calabi-Yau (CY) and its holonomy group is proven to be contained in $SU(n)$. This is because K\"ahler manifolds already have reduced holonomy $U(n) \simeq SU(n) \times U(1)$.
The $U(1)$ factor, generated by the Ricci tensor, then vanishes if the Ricci-flat condition is imposed. In Section~\ref{cysection}, we will study the properties of Calabi-Yau manifolds in more detail, in particular the Calabi-Yau threefolds, which have $SU(3)$ holonomy and are therefore suitable for compactification. The other condition for unbroken supersymmetry, \eqref{gaugino}, can be recast in complex coordinates in the form of the Hermitian Yang-Mills equations \begin{eqnarray} \label{hermitianyangmills} g^{a \overline{b}} F_{a \overline{b}} = 0, \,\,\,\,\,\,\,\,\,\,\,\,\, F_{a b} = F_{\overline{a} \overline{b}} = 0 . \end{eqnarray} \noindent The second equation, $ F_{a b} = F_{\overline{a} \overline{b}} \stackrel{!}{=} 0$, can be satisfied for a suitable connection $A$, if the vector bundle $V$ is holomorphic (i.e. the transition functions are holomorphic maps). According to the Donaldson-Uhlenbeck-Yau (DUY) theorem \cite{duycitation1, duycitation2}, there exists a connection for which $g^{a \overline{b}} F_{a \overline{b}} \stackrel{!}{=} 0$ also holds true, if $V$ is poly-stable and has vanishing slope. Mathematically, the notion of stability is introduced by defining the slope of a bundle $V$ as \begin{eqnarray} \label{slopedefinition} \mu(V) \equiv \dfrac{1}{\textrm{rk}(V)} \int_X c_1(V) \wedge J \wedge J, \end{eqnarray} \noindent where $J$ is the K\"ahler form on $X$ and $c_1(V)$ and $\textrm{rk}(V)$ are the first Chern class and the rank of the bundle. Then $V$ is called stable if the slope satisfies $\mu(F) < \mu(V)$, for all subsheaves $F \subset V$ with rank $0<\textrm{rk}(F)<\textrm{rk}(V)$, and poly-stable if it is decomposable as a direct sum of stable bundles $V= \bigoplus_{i=1}^n U_i$, with slopes $\mu(U_i) = \mu(V)$.
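These definitions can be illustrated with a toy bookkeeping exercise. Assume a hypothetical one-modulus setting in which line bundles $\mathcal{O}(k)$ have $c_1 = k$, so that slopes reduce to $\mu(\mathcal{O}(k)) = k\,\kappa$ for a positive constant $\kappa$ absorbing the intersection number and K\"ahler parameters; this is a sketch of the definitions, not a general stability algorithm:

```python
def slope(degrees, kappa=1.0):
    # mu(V) = (1/rk V) * int_X c1(V) ^ J ^ J; for a sum of line bundles
    # O(k_i) in this toy setting, this reduces to (sum k_i / rank) * kappa.
    return sum(degrees) / len(degrees) * kappa

def is_polystable_sum(degrees, kappa=1.0):
    # A direct sum of line bundles (each automatically stable, having no
    # subsheaves of intermediate rank) is poly-stable iff every summand
    # shares the slope of the total bundle.
    mu_total = slope(degrees, kappa)
    return all(slope([k], kappa) == mu_total for k in degrees)

V = [1, -1]                       # V = O(1) + O(-1)
print(slope(V))                   # 0.0: zero slope...
print(is_polystable_sum(V))       # False: O(1) destabilises, mu = 1 > 0
print(is_polystable_sum([0, 0]))  # True: the trivial sum O + O
```

The example shows that vanishing total slope alone is not enough: $\mathcal{O}(1)\oplus\mathcal{O}(-1)$ has $\mu = 0$ but is destabilised by its positive-slope summand.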
As for the zero-slope condition, it is automatically satisfied if we assume that \begin{eqnarray} \label{c1Viszero} c_1 (V) \equiv \dfrac{i}{2 \pi} [\textrm{Tr} F] = 0, \end{eqnarray} \noindent which is equivalent to saying that the structure group of $V$ is special unitary. This is needed to ensure that the structure group of $V$ embeds into $E_8$, and in addition to that, special unitary groups such as $SU(5)$, $SU(4)$ and $SU(3)$ give rise to the GUT groups $SU(5)$, $SO(10)$ and $E_6$ respectively, in the 4d theory. In general however, $U(n)$ vector bundles with $c_1(V) \neq 0$ can also lead to phenomenologically interesting compactifications \cite{timoweigandunitary}, although they will not be the subject of this thesis. In conclusion, in order for $N=1$ supersymmetry to be preserved at lower energies, the 4d space must be flat, the internal manifold must be Calabi-Yau and the vector bundle must be holomorphic and poly-stable. \subsection{Conditions for anomaly cancellation} \noindent The background geometry is further constrained by the anomaly cancellation condition. Since the left-hand side of \eqref{modifiedbianchi} is exact, $\textrm{Tr} (R \wedge R)$ and $\textrm{Tr} (F \wedge F)$ must be in the same cohomology class, thus leading to a topological identity \begin{eqnarray} \label{anomaly2ndchern} \textrm{ch}_2(TX) = \textrm{ch}_2 (V)+\textrm{ch}_2 (\tilde{V}) \end{eqnarray} \noindent between the tangent bundle of $X$ and the $E_8 \times E_8$ gauge bundle (here $\textrm{ch}_2$ denotes the second Chern character). In certain theories, the vacuum is altered by the presence of 5-branes, so the anomaly condition becomes \begin{eqnarray} \label{ch2ch2ch2w} \textrm{ch}_2(TX) - \textrm{ch}_2 (V) - \textrm{ch}_2 (\tilde{V}) = W, \end{eqnarray} \noindent with $W$ being the homology class of the two-cycles (curves) in $X$, around which the 5-branes wrap. Preservation of supersymmetry requires these cycles to be holomorphic, and consequently $W$ to be effective, i.e. 
an element of the Mori cone $\lbrace \sum_i a_i [C_i] \,\vert\, C_i \subset X \textrm{ holomorphic curves}, \, a_i \in \mathbb{R}^+ \rbrace$. Provided that the left-hand side of \eqref{ch2ch2ch2w} is effective, one can construct anomaly-free models by wrapping 5-branes on the relevant cycles. If we assume $\tilde{V}$ is trivial and use $c_1(TX) = c_1(V) = 0$, we obtain \begin{eqnarray} \label{c2c2W} c_2(TX) - c_2(V) = W, \end{eqnarray} \noindent for $W\in H_2(X,\mathbb{Z})$, an effective class. \section{Mathematical Ingredients for Compactification} \label{mathschapter} \noindent In order to have a proper understanding of Calabi-Yau manifolds and holomorphic vector bundles (the main ingredients of compactification), one needs to be familiar with concepts from complex geometry and Hodge theory. In this section we will briefly discuss these topics and then proceed to describe a particular class of CY manifolds that is used in this thesis, namely the Complete Intersection Calabi-Yau manifolds (CICYs). \subsection{Complex manifolds} \label{complexmanifoldssection} \noindent \begin{definition} An $n$-dimensional complex manifold $M$ is a $2 n$-dimensional real manifold that locally resembles the complex flat space $\mathbb{C}^n$. \end{definition} \noindent In order to satisfy this, $M$ must be equipped with an atlas of charts $( U_{\alpha}, \phi_{\alpha})$, where $\lbrace U_{\alpha} \rbrace$ are open subsets that cover $M$, and every map $\phi_{\alpha}: U_{\alpha} \rightarrow \mathbb{C}^n$ must be a homeomorphism onto an open subset of $\mathbb{C}^n$. Moreover, for any two given subsets $U_{\alpha}$ and $U_{\beta}$ that satisfy $U_{\alpha}\cap U_{\beta} \neq \emptyset$, the associated transition map $\psi_{\beta \alpha } \equiv \phi_{\beta} \circ \phi^{-1}_{\alpha}$, \ $ \psi_{\beta \alpha }: \phi_{\alpha}(U_{\alpha}\cap U_{\beta})\rightarrow \phi_{\beta}(U_{\alpha}\cap U_{\beta})$ must be holomorphic.
This is to ensure that the tangent space can be complexified with respect to some projection operators, such that $T_p M = T_p M^{+} \oplus T_p M^{-} $ and every vector field is decomposable into a holomorphic and an anti-holomorphic component. It has been shown that \begin{theorem} An even-dimensional real manifold is complex if and only if it is endowed with a globally defined almost complex structure $I_{a}{}^{b}$, satisfying $I_{a}{}^{b} I_{b}{}^{c} = - \delta_{a}^{c}$, and the Nijenhuis tensor $N_{a b}{}^c \equiv I_{[a}{}^c{}_{;b]} - I_{[a}{}^d I_{b]}{}^e I^c{}_{d;e}$ vanishes. \end{theorem} \noindent This makes it possible for local complex coordinates $z^a$ and $\overline{z}^{\overline{a}}$ to be introduced, so that on every patch, one has $I_{a}{}^{b} = i \delta_{a}{}^{b}$, $I_{\overline{a}}{}^{\overline{b}} = - i \delta_{\overline{a}}{}^{\overline{b}}$ and $I_{a}{}^{\overline{b}} = I_{\overline{a}}{}^{b}=0$. Any complex manifold admits a metric of the form $d s^2 = g_{a \overline{b}} d z^{a} d \overline{z}^{\overline{b}}$, which is called hermitian. It can be used to define the fundamental 2-form \begin{eqnarray} \label{kahlerform} J = i g_{a \overline{b}} d z^{a} \wedge d \overline{z}^{\overline{b}}, \end{eqnarray} \noindent by lowering one index of the complex structure, i.e. $J_{a \overline{b}} = I_a{}^c g_{c \overline{b}}$. \begin{definition} A K\"ahler manifold is a complex manifold with hermitian metric, for which the form $J$ is closed. \end{definition} \noindent In this case, $J$ is called a K\"ahler form. The restriction $d J = 0$ ensures that the only non-zero coefficients of the Levi-Civita connection are $\Gamma_{a b}^c$ and $\Gamma_{\overline{a} \overline{b}}^{\overline{c}}$, thus preserving holomorphicity under parallel transport, so the holonomy group is $U(n)$.
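A minimal example is flat $\mathbb{C}^n$ with the standard hermitian metric,
\begin{eqnarray}
g_{a \overline{b}} = \delta_{a \overline{b}}, \qquad J = i\, \delta_{a \overline{b}}\, dz^a \wedge d\overline{z}^{\overline{b}}, \qquad dJ = 0, \nonumber
\end{eqnarray}
\noindent which is trivially K\"ahler: the constant metric makes all connection coefficients vanish, so parallel transport acts trivially and the holonomy group is the identity, in particular contained in $U(n)$.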
Another important feature of K\"ahler geometry is that on every patch $U_{\alpha}$, we can define a real-valued function $K_{\alpha}$, known as the K\"ahler potential, which specifies the metric \begin{eqnarray} \label{kahlermetricfrompotential} g_{a \overline{b}} = \partial_a \partial_{\overline{b}} K_{\alpha}. \end{eqnarray} \noindent The K\"ahler potential is not unique and, on the intersection of patches, two K\"ahler potentials are related by $K_{\alpha} (z, \overline{z}) = K_{\beta}(z, \overline{z}) + f_{\alpha \beta} (z) + \overline{f}_{\alpha \beta} (\overline{z})$, where $f$ and $\overline{f}$ are a holomorphic and an anti-holomorphic function. Further properties stem from the computation of curvature tensors and, in particular, the Ricci form $R \equiv i R_{a \overline{b}} d z^a \wedge d \overline{z}^b = i \partial \overline{\partial} \textrm{log} ({g}^{1/2}) $, which is closed, globally defined and determines the first Chern class. We conclude our discussion of K\"ahler manifolds with a short look on complex projective spaces. \begin{example} As a subclass of compact K\"ahler manifolds, complex projective spaces $\mathbb{CP}^n$ or, shortly, $\mathbb{P}^n$ will be the building blocks for our Calabi-Yau manifolds. These spaces are obtained by identifying points in $\mathbb{C}^{n+1}\setminus \lbrace 0 \rbrace$ according to the equivalence relation $(x_0,...,x_{n}) \sim \lambda (x_0,...,x_{n})$, for $\lambda \in \mathbb{C}^*$, so that every element $(x_0: ...: x_n) \in \mathbb{P}^n$ corresponds to a line through the origin. The numbers $x_\alpha$ are called homogeneous coordinates on $\mathbb{P}^n$. On each open patch $U_{\alpha} = \lbrace (x_0:...:x_{n}) \vert x_{\alpha} \neq 0 \rbrace$, we can define new parameters $\xi^{\alpha}_{\mu} \equiv x_{\mu}/x_{\alpha} $ ($\mu \neq \alpha$), which map $U_{\alpha}$ to $\mathbb{C}^n$ and are called inhomogeneous coordinates. 
The transition functions on $U_{\alpha}\cap U_{\beta}$ overlaps are simply multiplications by $x_{\alpha}/x_{\beta}$, so they are holomorphic. In line with \eqref{kahlermetricfrompotential}, the Fubini-Study K\"ahler potential is introduced on every patch $U_{\alpha}$ \begin{eqnarray} K_{\alpha} = \dfrac{1}{2 \pi} \ln \kappa_{\alpha} \, , \,\,\,\,\,\,\,\,\,\,\, \textrm{where } \kappa_{\alpha} \equiv \sum_{\substack{\mu = 0}}^{n} \vert \xi_{\mu}^{\alpha}\vert^2 , \end{eqnarray} \noindent which allows one to calculate the metric and the K\"ahler form $J= i \partial \overline{\partial} K_{\alpha}$. In particular, the Ricci form is a non-vanishing multiple of the K\"ahler form, $R \propto -(n+1) J$, which shows that projective spaces are not Calabi-Yau, even though many well-known Calabi-Yau manifolds are submanifolds of projective spaces. \end{example} \subsection{Hodge theory} \label{hodgetheorychapter} \noindent Hodge theory is an area of algebraic geometry that studies cohomology groups. The reason why it is important to compactification is that there is a correlation between the topology of the internal manifold and the low-energy spectrum of particles. In this chapter we will analyse the most basic types of cohomologies: de Rham and Dolbeault. Section~\ref{vectorbundleschapter} will introduce vector bundle cohomologies, which are more elaborate. We start by defining $\Omega^p(M)$, as the set of all $p$-forms that live on a Riemannian $n$-dimensional manifold $M$, and we assume that $M$ is compact and without a boundary. The exterior derivative is an operator $d:\Omega^p(M)\rightarrow \Omega^{p+1}(M)$, so that \begin{eqnarray} d \omega_p = \dfrac{1}{p!} \partial_{i_0} \omega_{i_1 ... i_p} dx^{i_0} \wedge dx^{i_1} \wedge ... \wedge dx^{i_p}, \,\,\,\,\,\,\,\,\,\, \omega_p \in \Omega^{p}(M). \end{eqnarray} \noindent A $p$-form $\omega_p$ is called closed if $d \omega_p = 0$ and exact if $\omega_{p} = d \nu_{p-1}$, for some $\nu_{p-1} \in \Omega^{p-1}(M)$.
Since $d^2 = 0$, all exact forms are also closed, while closed forms are not necessarily exact (although on local patches, they can be expressed as $\omega_{p} = d \nu_{p-1}$). \begin{definition} Let $Z^{p}$ be the set of closed $p$-forms on $M$, and $B^{p}$ the set of exact $p$-forms on $M$. Then the quotient $H^p(M)=Z^p/B^p$ is called the $p$th de Rham cohomology group. \end{definition} \noindent The elements of $H^p(M)$ are cohomology classes $[\omega_p]$, obtained through setting the equivalence relation $\omega_p \sim \omega_p + d \nu_{p-1}$, and $\omega_p$ is called the representative of the class. The dimension of $H^p(M)$ is a topological invariant, referred to as the Betti number $b_p$. It is useful to define the Hodge star operator $\ast: \Omega^p(M) \rightarrow \Omega^{n-p}(M)$ \begin{eqnarray} \ast (d x^{i_1} \wedge ... \wedge d x^{i_p}) = \dfrac{\sqrt{\vert g \vert}}{(n-p)!}\epsilon^{i_1...i_p}{}_{i_{p+1} ... i_{n}} dx^{i_{p+1}} \wedge ... \wedge dx^{i_n}, \end{eqnarray} \noindent in order to introduce the inner product on $p$-forms \begin{eqnarray} (\alpha, \beta) = \int_M \alpha \wedge \ast \beta, \,\,\,\,\,\,\,\,\,\,\, \textrm{where} \,\,\,\, \alpha, \beta \in \Omega^p(M), \end{eqnarray} \noindent as well as the adjoint exterior derivative $d^{\dagger}:\Omega^p(M)\rightarrow \Omega^{p-1}(M)$, $d^{\dagger}= (-1)^{n p + n+1} \ast d \ast $, for which $(\alpha_p,d \beta_{p-1}) = (d^\dagger \alpha_p, \beta_{p-1})$, and the Laplace operator $\Delta = d d^{\dagger} + d^{\dagger} d$. \begin{definition} A form $\gamma_p$ is said to be harmonic if $\Delta \gamma_p = 0$. \end{definition} \noindent Using the inner product, one can prove that on a compact manifold without a boundary, $\gamma_p$ is harmonic if and only if $d \gamma_p = d^{\dagger} \gamma_p = 0$. 
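A standard one-dimensional illustration: on the unit circle $S^1$ with angular coordinate $\theta$ and metric $d\theta^2$ (so $n=1$, $p=1$ and $d^{\dagger} = -\ast d \ast$), the one-form $\omega = d\theta$ satisfies
\begin{eqnarray}
d \omega = 0, \qquad d^{\dagger} \omega = - \ast d \ast d\theta = - \ast d(1) = 0, \nonumber
\end{eqnarray}
\noindent so it is harmonic; yet it is not exact, since $\oint_{S^1} d\theta = 2\pi \neq 0$ ($\theta$ is not a globally defined function). Hence $d\theta$ generates $H^1(S^1)$ and $b_1(S^1) = 1$.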
Moreover, the Hodge decomposition theorem states that for any $p$-form $\omega_p$, there is a unique decomposition $\omega_p = \gamma_p + d \alpha_{p-1} + d^{\dagger} \beta_{p+1}$, where $\gamma_p$ is the corresponding harmonic $p$-form. The consequence of this is that every cohomology class contains precisely one harmonic representative. The Betti number $b_p$ is therefore identical with the number of linearly independent harmonic $p$-forms on $M$. In particular, if $\gamma_p$ is harmonic, then $\ast \gamma_p$ is also harmonic, which means that $H^p(M)$ and $H^{n-p}(M)$ are isomorphic and $b_p=b_{n-p}$ (the Poincar\'e duality). \vspace{10mm} Moving on to complex manifolds, we define $M$ as an $n$-dimensional compact complex manifold and $\Omega^{p,q}(M)$ as the space of $(p,q)$-forms on $M$. The exterior derivative can be split into $d = \partial + \bar{\partial}$, where $\partial$ and $\bar{\partial}$ are called Dolbeault operators, acting separately as $\partial: \Omega^{p,q}(M)\rightarrow \Omega^{p+1,q}(M)$ and $\bar{\partial}: \Omega^{p,q}(M)\rightarrow \Omega^{p,q+1}(M)$. They satisfy $\partial^2 = \bar{\partial}^2 =0$. As before, a $(p,q)$-form $\omega_{p,q}$ is $\bar{\partial}$-closed if $\bar{\partial} \omega_{p,q} = 0$ and $\bar{\partial}$-exact if $ \omega_{p,q} = \bar{\partial} \nu_{p,q-1}$ for some $\nu_{p,q-1} \in \Omega^{p,q-1}(M)$. \begin{definition}The $(p,q)$th Dolbeault cohomology is defined as the quotient $H_{\bar{\partial}}^{p,q}(M) = Z^{p,q}_{\bar{\partial}}(M)/B^{p,q}_{\bar{\partial}}(M)$, where $Z^{p,q}_{\bar{\partial}}(M)$ and $B^{p,q}_{\bar{\partial}}(M)$ are the sets of $\bar{\partial}$-closed and $\bar{\partial}$-exact $(p,q)$-forms, respectively. \end{definition} \noindent The dimension of $H_{\bar{\partial}}^{p,q}(M)$ is called the Hodge number $h^{p,q}$. 
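As a small numerical illustration of these notions, one can use the standard fact (quoted here, not derived) that the harmonic $p$-forms on the flat torus $T^n$ are the constant-coefficient forms $dx^{i_1}\wedge\dots\wedge dx^{i_p}$, so $b_p = \binom{n}{p}$:

```python
from math import comb

def betti_torus(n):
    # On the flat n-torus, harmonic p-forms are spanned by the
    # constant-coefficient forms dx^{i_1} ^ ... ^ dx^{i_p}, so b_p = C(n, p).
    return [comb(n, p) for p in range(n + 1)]

b = betti_torus(6)
print(b)           # [1, 6, 15, 20, 15, 6, 1]
# Poincare duality b_p = b_{n-p}: the list is palindromic.
print(b == b[::-1])  # True
# The alternating sum (Euler characteristic) vanishes for any torus.
print(sum((-1) ** p * bp for p, bp in enumerate(b)))  # 0
```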
On complex manifolds with hermitian metric, an inner product between $(p,q)$-forms is introduced \begin{eqnarray} (\alpha, \beta) = \int_M \alpha \wedge \overline{\ast} \beta, \,\,\,\,\,\,\,\,\,\,\, \textrm{where} \,\,\,\, \alpha, \beta \in \Omega^{p,q}(M) \,\,\,\,\,\textrm{and} \,\,\,\,\, \overline{\ast} \beta \equiv \ast \bar{\beta}, \end{eqnarray} \noindent which allows one to define adjoint operators $\partial^{\dagger}$, $\bar{\partial}^{\dagger}$ and Laplace operators \begin{eqnarray} \Delta_{\partial} = \partial\partial^{\dagger} + \partial^{\dagger} \partial, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\Delta_{\bar{\partial}} = \bar{\partial} \bar{\partial}^{\dagger} + \bar{\partial}^{\dagger} \bar{\partial} . \end{eqnarray} \noindent A $(p,q)$-form $\gamma_{p,q}$ is said to be $\bar{\partial}$-harmonic, if it is annihilated by the Laplacian $\Delta_{\bar{\partial}}$. One can prove that this is equivalent to $\bar{\partial} \gamma_{p,q} = 0$ and $\bar{\partial}^{\dagger} \gamma_{p,q} = 0$. Moreover, $\bar{\partial}$-harmonic forms are in one-to-one correspondence with the cohomology classes of $H^{p,q}_{\overline{\partial}}(M)$, due to the unique Dolbeault decomposition $\omega_{p,q} = \gamma_{p,q} + \bar{\partial} \alpha_{p, q-1} + \bar{\partial}^{\dagger}\beta_{p, q+1}$ for any form $\omega_{p,q}$, where $\gamma_{p,q}$ is $\bar{\partial}$-harmonic. This is in analogy with the de Rham case, although in general the de Rham and Dolbeault cohomologies are not related: de Rham is purely topological, while Dolbeault depends on the specific choice of complex structure. Only on compact K\"ahler manifolds is there an explicit relation, namely through the identity $\Delta = 2\Delta_{\bar{\partial}}$, which ensures that \begin{eqnarray} H^r(M) = \bigoplus_{p+q=r} H^{p,q}_{\bar{\partial}}(M), \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, b_{r} = \sum_{p+q=r} h^{p,q}. 
\end{eqnarray} \noindent In this case, the Dolbeault cohomology is considered quasi-topological, because it only depends on complex structure and not on the choice of K\"ahler metric \cite{GSW} \cite{dominicjoyce}. The corresponding Hodge numbers are constrained to satisfy \begin{eqnarray} \label{hodgerule1} h^{p,q} = h^{q,p} \textrm{ (Hodge symmetry)}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, h^{p,q} = h^{n-p,n-q} \textrm{ (Serre duality)}. \end{eqnarray} \noindent Now, on a compact K\"ahler manifold, the K\"ahler form \eqref{kahlerform} is not just closed, but can be chosen to be co-closed (and therefore harmonic), so it can be expanded in a basis of harmonic $(1,1)$-forms $\lbrace \omega_i \rbrace$ as \begin{eqnarray} \label{kahlercone} J = \sum^{h^{1,1}}_{i=1} t^i \omega_i, \end{eqnarray} \noindent where the parameters $t^i$ give the K\"ahler cone, i.e. the set of possible K\"ahler forms on $M$. Such parameters are constrained by the requirement that $\int_M J^n > 0$, since $J^n$ is proportional to the volume element, and also by $\int_{C_1} J > 0$, $\dots$, $\int_{C_{n-1}} J^{n-1}>0$, where $C_i$ is an $i$-cycle. It is often possible and convenient to choose a basis $\lbrace \omega_i \rbrace$, such that the K\"ahler cone is $t^i > 0$. On a final note, the Ricci form is also closed, but not necessarily harmonic, and it defines the cohomology class $c_1(TM) \in H_{\bar{\partial}}^{1,1}(M)$. If $c_1(TM)$ is trivial (i.e. $R$ is exact), the compact K\"ahler manifold is called Calabi-Yau. \subsection{Calabi-Yau manifolds} \label{cysection} As mentioned before, a Calabi-Yau $n$-fold is an $n$-dimensional compact K\"ahler manifold with vanishing first Chern class \eqref{firstchernclass}. On such a manifold, Theorem~\ref{yautheorem} (Yau's Theorem) guarantees the existence of a unique Ricci-flat metric. Moreover, the metric has special holonomy group $SU(n)$, which means that the manifold admits covariantly constant spinors $\xi$, $\overline{\xi}$ of opposite chirality. 
Simple examples of Calabi-Yau $n$-folds include the complex elliptic curve, i.e. the two-torus $T^2$ ($n=1$) and the $K3$ surfaces ($n=2$). However, in conformity with our compactification ansatz in Section~\ref{conditionsforn=1susy}, we require that $X$ is a Calabi-Yau threefold. Moreover, $X$ must have precisely $SU(3)$ holonomy and not a subgroup thereof, so trivial cases such as $T^6$ or $T^2 \times K3$ are excluded. \begin{theorem} A compact K\"ahler threefold is Calabi-Yau if and only if it admits a nowhere vanishing holomorphic $(3,0)$-form $\Omega$. \end{theorem} \noindent More explicitly, a holomorphic $(3,0)$-form can be written as $\Omega_{m n r} = f(z) \epsilon_{m n r}$ on every patch, where $\epsilon_{m n r}$ is the Levi-Civita symbol and $f(z)$ is a nowhere vanishing holomorphic function. This ensures that the Ricci form is exact, so the first Chern class vanishes. Conversely, for any Calabi-Yau manifold, one can construct a nowhere vanishing $(3,0)$-form $\Omega$, using the covariantly constant spinor $\xi$ and the $\gamma$-matrices \begin{align} \label{holomorphic30} \Omega_{m n r} = \xi^{T}\gamma_{m n r} \xi. \end{align} \noindent It can be proven that $\Omega$ is harmonic and (up to a constant) unique. The corresponding cohomology class $[\Omega]$ is therefore the only element in $H^{3,0}(X)$, so $h^{3,0} = 1$. In addition to that, the uniqueness of $\Omega$ sets a duality \begin{eqnarray} \label{hodgerule2} h^{0,q} = h^{0,3-q}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, h^{p,0} = h^{3-p,0}, \end{eqnarray} because for any given class $[\alpha] \in H^{0,q} (X)$, there exists a unique class $[\beta] \in H^{0,3-q}(X)$, such that $\int_X \Omega \wedge \alpha \wedge \beta = 1$. 
The rules listed in \eqref{hodgerule1} and \eqref{hodgerule2}, together with the fact that $h^{0,0} = 1$ for a connected manifold, and $h^{0,1} = 0$ for strictly $SU(3)$ holonomy,\footnote{The reason behind this is that spinors on a CY manifold correspond to $(0,k)$-forms, for $k=0,...,3$, due to Dirac matrices acting like creation and annihilation operators: $\lbrace \gamma^m, \gamma^n\rbrace = \lbrace \gamma^{\overline{m}}, \gamma^{\overline{n}}\rbrace=0$ and $\lbrace \gamma^{m}, \gamma^{\overline{n}}\rbrace = 2 g^{m\overline{n}}$. In particular, the states $\gamma^{\overline{m}}\xi$ and $\gamma^{\overline{m}}\gamma^{\overline{n}}\xi$ get multiplied to $(0,1)$ and $(0,2)$-forms respectively, and as they are not allowed to transform trivially under $SU(3)$, the restriction $h^{0,1}=h^{0,2}=0$ has to be imposed.} show that the only Hodge numbers left unconstrained are $h^{1,1}$ and $h^{2,1}$. This information is best encoded by the Hodge diamond diagram \begin{eqnarray} \label{hodgediamond} \begin{tabular}{p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}} & & & $h^{0,0}$ & & & \\ & &$h^{1,0}$ & & $h^{0,1}$ & & \\ &$h^{2,0}$ & & $h^{1,1}$ & &$h^{0,2}$ & \\ $h^{3,0}$ & &$h^{2,1}$ & & $h^{1,2}$ & & $h^{0,3}$ \\ &$h^{3,1}$ & & $h^{2,2}$ & &$h^{1,3}$ & \\ & &$h^{3,2}$ & & $h^{2,3}$ & & \\ & & & $h^{3,3}$ & & & \\ \end{tabular} \,\,\,\,\,\,\,\, = \begin{tabular}{p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}p{1mm}} & & & $\,1$ & & & \\ & &$ \,0$ & & $\,0$ & & \\ &$\, 0$ & & $h^{1,1}$ & &$\,0$ & \\ $\, 1$ & &$h^{2,1}$ & & $h^{2,1}$ & & $\,1$ \\ &$\,0$ & & $h^{1,1}$ & &$\,0$ & \\ & &$\,0$ & & $\,0$ & & \\ & & & $\,1$ & & & \\ \end{tabular} \,\,. \end{eqnarray} \noindent Additionally, one can introduce the Euler number, which simplifies to \begin{eqnarray} \chi \equiv \sum_{p=0}^6 (-1)^p b_p = 2 \left(h^{1,1}-h^{2,1}\right). 
\end{eqnarray} \noindent Being K\"ahler, Calabi-Yau manifolds admit a metric $d s^2 = g_{m \overline{n}} d z^{m} d \overline{z}^{\overline{n}}$ with a corresponding K\"ahler form $J = i g_{m \overline{n}} d z^{m} \wedge d \overline{z}^{\overline{n}}$ that is closed and expanded in a basis of harmonic $(1,1)$-forms $\lbrace \omega_i \rbrace$ as in \eqref{kahlercone}. Integrating $J \wedge J \wedge J$ over $X$ gives the volume $\mathcal{V}$ of the manifold \begin{eqnarray} \label{cyvolume} \int_X J \wedge J \wedge J = d_{ijk} t^i t^j t^k = 6 \mathcal{V} , \end{eqnarray} \noindent where $d_{i j k}$ are the intersection numbers \begin{eqnarray} \label{intersectionnumbers} d_{ijk} = \int_X \omega_i \wedge \omega_j \wedge \omega_k. \end{eqnarray} \noindent It is obvious from \eqref{kahlercone} that the choice of K\"ahler form is determined by $h^{1,1}$ real parameters $t^i$. In a similar way, the complex structure is specified by $h^{2,1}$ complex parameters $z^a$, as seen by expanding the $(2,1)$-form $I_{m n\overline{r}} = \Omega_{mns} I^s{}_{\overline{r}}$ in a basis of harmonic forms. Consequently, the Hodge numbers $h^{1,1}$ and $h^{2,1}$ describe the topology of the manifold in two separate ways: $h^{1,1}$ counts the independent size deformations that keep the complex structure invariant, while $h^{2,1}$ counts deformations of shape. An important observation, which led to the concept of mirror symmetry, was that Calabi-Yau manifolds come in pairs $(X,\tilde{X})$, satisfying $H^{1,1}(X) \simeq H^{2,1}(\tilde{X})$ and $H^{2,1}(X) \simeq H^{1,1}(\tilde{X})$. The type IIA string theory compactified on $X$ was proven to be dual to the type IIB string theory compactified on $\tilde{X}$. In the case of heterotic string theory however, mirror symmetry is not so well understood. 
\subsection{Complete Intersection Calabi-Yau manifolds} \label{cicysection} \noindent One reliable method of constructing Calabi-Yau manifolds is to embed them in a product of projective spaces $\mathcal{A}= \mathbb{P}^{n_1} \times ... \times \mathbb{P}^{n_m}$, referred to as the ambient space. Complex projective spaces are used instead of $\mathbb{C}^n$ because no K\"ahler submanifold of $\mathbb{C}^n$ is compact, while all analytic submanifolds of $\mathbb{P}^n$ are guaranteed to be K\"ahler and compact \cite{Candelas:1987kf}. A Complete Intersection Calabi-Yau Manifold (CICY) $X$ is defined as the intersection of $K$ hypersurfaces $\lbrace{M_j}\rbrace_{j=1,...,K}$, \begin{eqnarray} X = M_1 \cap ... \cap M_K, \end{eqnarray} \noindent where each $M_j$ is the zero locus of a polynomial $p_j$, with variables in the ambient space $\mathcal{A}$. In order for the intersection to be called complete, it is necessary that the $K$-form \begin{eqnarray} \Psi = d p_1 \wedge ... \wedge d p_K, \end{eqnarray} \noindent be nowhere vanishing on $X$; this ensures that $X$ is free of singularities. It also fixes the dimension of the CICY to be the dimension of the ambient space minus the number of polynomials. In the case where $X$ is a threefold, this means \begin{eqnarray} \sum^m_{i=1} n_i - K = 3 . \end{eqnarray} \noindent Now, suppose the homogeneous coordinates of each projective space $\mathbb{P}^{n_i}$ are written as $\mathbf{x}^{(i)} = (x_0^{(i)}:x_1^{(i)}:...:x_{n_i}^{(i)} )$, so that $\mathcal{A}$ has projective coordinates $(\mathbf{x}^{(1)}, ..., \mathbf{x}^{(m)})$. Every polynomial $p_j$ defined on $\mathcal{A}$ is characterised by a multi-degree vector $\mathbf{q}_j = (q^1_j,...,q^m_j)$, where $q^i_j$ specifies the degree in the $\mathbf{x}^{(i)}$ coordinates. 
It is useful to represent the corresponding CICY through the following configuration matrix \begin{eqnarray} \begin{bmatrix} \mathbb{P}^{n_1}& \vline & q^1_1 & q^1_2 & \dots & q^1_K \\ \mathbb{P}^{n_2}&\vline &q^2_1 & q^2_2 & \dots & q^2_K \\ \vdots &\vline &\vdots & \vdots & \ddots & \vdots \\ \mathbb{P}^{n_m}& \vline &q^m_1 & q^m_2 & \dots & q^m_K \end{bmatrix}^{h^{1,1},h^{2,1}}_{\chi} , \end{eqnarray} \noindent where the condition $\sum_{j=1}^K q^i_j = n_i + 1$ needs to be imposed for every $i=1,...,m$, in order for $c_1(TX)$ to vanish. There is a finite number ($7890$, to be precise) of possible CICY configurations, as originally established in Refs. \cite{Candelas:1987kf} and \cite{7890}. Out of those, we are only interested in the favourable configurations, for which the K\"ahler form $J$ descends directly from the K\"ahler forms $J_i$ of the projective spaces $\mathbb{P}^{n_i}$, for $i=1,...,m$. These CICYs have $h^{1,1} = m$ and their K\"ahler cone \eqref{kahlercone} and intersection numbers \eqref{intersectionnumbers} are simply obtained by setting the basis of $(1,1)$-forms $\lbrace\omega_i\rbrace$ to be $\omega_i = J_i\vert_X$. It is for this reason that favourable CICYs are preferred. Well-known examples include \begin{eqnarray} \label{examplesofcicys} \begin{matrix} \textrm{the quintic} &&& \textrm{the bicubic} &&& \textrm{the tetraquadric} \\[0.15cm] \,\,\,\ \left[ \mathbb{P}^{4} \, \vline \ 5 \right]^{1,101}_{-200}, &&& \,\,\,\,\,\,\,\, \setlength\arraycolsep{2.5pt} \begin{bmatrix} \mathbb{P}^{2}& \vline & 3 \\ \mathbb{P}^{2}&\vline & 3 \end{bmatrix}^{2,83}_{-162} , &&& \,\,\,\,\,\,\,\, \setlength\arraycolsep{2.5pt} \begin{bmatrix} \mathbb{P}^{1}& \vline & 2 \\ \mathbb{P}^{1}&\vline & 2 \\ \mathbb{P}^{1} &\vline & 2\\ \mathbb{P}^{1}& \vline & 2 \end{bmatrix}^{4,68}_{-128} . \end{matrix} \end{eqnarray} \noindent In particular, the tetraquadric manifold will constitute the focus of Chapter~\ref{tetraquadricchapter}. 
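The defining conditions on a configuration matrix can be bundled into a small consistency check. The sketch below encodes each example in \eqref{examplesofcicys} as a list of projective factors and a list of multi-degree rows; the Hodge data are the superscripts and subscripts quoted above, and nothing here goes beyond the text:

```python
# Sketch: consistency checks for the three example configuration
# matrices. Rows give the projective factors P^{n_i}; columns the
# multi-degrees q^i_j of the defining polynomials. Hodge data are the
# superscripts/subscripts quoted in the text.

def check_cicy(ns, degrees, h11, h21, chi):
    K = len(degrees[0])                  # number of defining polynomials
    assert sum(ns) - K == 3              # complete intersection threefold
    for n_i, row in zip(ns, degrees):    # vanishing first Chern class,
        assert sum(row) == n_i + 1       # row by row
    assert chi == 2 * (h11 - h21)        # Euler number identity

check_cicy([4], [[5]], 1, 101, -200)                         # quintic
check_cicy([2, 2], [[3], [3]], 2, 83, -162)                  # bicubic
check_cicy([1, 1, 1, 1], [[2], [2], [2], [2]], 4, 68, -128)  # tetraquadric
```

All three favourable examples pass the threefold condition, the row-by-row Calabi-Yau condition, and the Euler number identity.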
\subsection{Holomorphic vector bundles and their cohomologies} \label{vectorbundleschapter} We conclude our review of mathematical concepts with a short discussion of holomorphic vector bundles. More useful information can be found in Appendix~\ref{appendixvectorbundles}. \begin{definition} A vector bundle $E$ over an $n$-dimensional complex manifold $M$ is called holomorphic if it is endowed with a holomorphic projection $\pi: E \rightarrow M$ and the local trivialisation maps $\phi_\alpha: \pi^{-1}(U_\alpha) \rightarrow U_\alpha \times \mathbb{C}^r$ are biholomorphic. \end{definition} \noindent An equivalent statement is that on every overlap $U_{\alpha} \cap U_{\beta} \neq \emptyset$, the transition function $t_{\alpha \beta} \equiv \phi_{\alpha} \circ \phi^{-1}_{\beta}$, \ $ t_{\alpha \beta}: U_{\alpha} \cap U_{\beta} \rightarrow GL(r,\mathbb{C})$ is holomorphic. At every point $p \in M$, the fiber $E_p \equiv \pi^{-1}(p)$ is an $r$-dimensional complex vector space, thus giving the rank $r$ of the vector bundle. In particular, a vector bundle of rank $1$ is called a line bundle. Moreover, for each vector bundle $E$, one can define the dual bundle $E^*$ over $M$, whose fiber $E^*_p$ is the set of linear maps $f: E_p \rightarrow \mathbb{C}$. A local section is a map $\sigma: U_{\alpha} \rightarrow E$ that satisfies $\pi \circ \sigma = \textrm{id}_M$. It can be expanded as $\sigma = \sum_{i=1}^r \sigma^i s_i$ with respect to a local frame of $r$ linearly independent sections $( s_1,...,s_r )$ that span the fiber $E_p$ at every point $p \in U_{\alpha}$. In a similar way, bundle-valued $(p,q)$-forms are written as $\alpha = \sum_{i=1}^r \alpha^i \otimes s_i$, where $\alpha^i \in \Omega^{p,q}(M)$. The space of these $(p,q)$-forms is denoted $\mathcal{A}^{p,q} (E)$. 
\begin{example} Obvious examples of holomorphic vector bundles are the holomorphic tangent bundle $T^{1,0}M$ and its dual, the holomorphic cotangent bundle $T_{1,0}^*M$, whose sections are expanded in the local frames $\lbrace\partial_i \rbrace$ and $\lbrace d z_i \rbrace$ respectively, for $i=1,...,n$. Since they are isomorphic to the anti-holomorphic bundles $T^{0,1}M$ and $T^*_{0,1}M$ via complex conjugation, it is customary to work only with $T^{1,0}M$ and $T_{1,0}^*M$ and simply refer to them as $TM$ and $T^*M$. In general, the set of $(p,q)$-forms $\Omega^{p,q}(M)$ represents the space of sections of the bundle $\wedge^p T_{1,0}^*M \otimes \wedge^q T_{0,1}^*M $. \end{example} In analogy to Section~\ref{hodgetheorychapter}, the operator $\overline{\partial}: \Omega^{p,q}(M) \rightarrow \Omega^{p,q+1}(M)$ can be generalised to act on $E$-valued forms as $\overline{\partial}_E: \mathcal{A}^{p,q}(E)\rightarrow\mathcal{A}^{p,q+1}(E)$, such that locally $\overline{\partial}_E \alpha = \sum_{i=1}^r \overline{\partial} \alpha^i \otimes s_i$. Essentially, the procedures of Hodge theory are repeated to reveal that $\overline{\partial}_E$-harmonic forms are in one-to-one correspondence with cohomology classes of \begin{eqnarray} H^{p,q}(M,E)= \dfrac{\textrm{Ker}\left(\overline{\partial}_E: \mathcal{A}^{p,q}(E)\rightarrow\mathcal{A}^{p,q+1}(E) \right)}{\textrm{Im}\left(\overline{\partial}_E: \mathcal{A}^{p,q-1}(E)\rightarrow\mathcal{A}^{p,q}(E) \right)}, \end{eqnarray} \noindent where $H^{p,q}(M,E) \simeq H^q(M,E \otimes \wedge^p T^*_{1,0}M)$. On a Calabi-Yau threefold $X$ with poly-stable vector bundle $V$, several simplifications occur. Firstly, the fact that $TX \simeq \wedge^2 T^*X$ (due to the uniqueness of $\Omega$) implies that $H^1(X,TX) \simeq H^{2,1}(X)$ and $H^1(X,T^*X) \simeq H^{1,1}(X)$. Secondly, because the canonical bundle $K_X = \wedge^3 T^*_{1,0}X$ is trivial, the following version of Serre duality holds \begin{eqnarray} H^q(X,V)\simeq H^{3-q}(X, V^*). 
\end{eqnarray} \noindent Finally, the index of $V$ is given by the Hirzebruch–Riemann–Roch theorem \begin{eqnarray} \label{atiyahsingerindex} \textrm{ind}(V) \equiv \sum_{q=0}^{3} (-1)^q h^q(X,V) = \int_X \textrm{Td}(X) \wedge \textrm{ch} (V) = \dfrac{1}{2} \int_X c_3 (V), \end{eqnarray} \noindent where $\textrm{Td}(X)$ and $\textrm{ch}(V)$ are topological invariants of $V$ and $X$, known as the Todd class and the Chern character respectively, while $c_3 (V)$ is the third Chern class of $V$. In particular, for a stable $SU(r)$ bundle, $h^0(X,V)=h^3(X,V)=0$, so $\textrm{ind}(V) = - h^1(X,V) + h^1(X,V^*)$. \begin{definition} Let us assume $X$ is a Complete Intersection Calabi-Yau manifold embedded in an ambient space $\mathcal{A}$. Then the normal bundle $\mathcal{N}$ on $\mathcal{A}$ is the quotient \begin{eqnarray} \mathcal{N}_{\mathcal{A}\vert X} ={T \mathcal{A}\vert_X}\big/{TX}, \end{eqnarray} \noindent where $T \mathcal{A}\vert_X$ is the restriction of the tangent bundle $T \mathcal{A}$ on $X$. \end{definition} \noindent The rank of $\mathcal{N}$ is precisely $K$, the co-dimension of $X$. If we denote by $i$ the natural injection of $TX$ into $T \mathcal{A}\vert_X$, one can construct a short exact sequence \begin{eqnarray} 0 \longrightarrow TX \stackrel{i}{\longrightarrow} T \mathcal{A} \vert_X \stackrel{\nabla \mathbf{p}}{\longrightarrow} \mathcal{N}_{\mathcal{A}\vert X} \longrightarrow 0, \end{eqnarray} \noindent where $\mathbf{p} = (p_1,...,p_K)$ is the $K$-tuple of defining polynomials. In general, it is convenient to build short exact sequences of vector bundles, because they induce long exact sequences in cohomology, from which the cohomology groups can be calculated. 
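As a quick illustration of this mechanism (a sketch not taken from the text; the Euler sequence on $\mathbb{P}^n$ and the vanishing of the higher cohomology of $\mathcal{O}$ and $\mathcal{O}(1)$ are assumed as standard facts), exactness forces the alternating sum of dimensions to vanish, so a single unknown dimension can be solved for:

```python
# Sketch: solving for a cohomology dimension using exactness.
# Assumed example (standard facts, not derived in the text): the Euler
# sequence on P^n,  0 -> O -> O(1)^(n+1) -> TP^n -> 0.  Since all higher
# cohomology of O and O(1) vanishes on P^n, the induced long exact
# sequence truncates to
#   0 -> H^0(O) -> H^0(O(1))^(n+1) -> H^0(TP^n) -> 0,
# and the alternating sum of dimensions yields h^0(TP^n).

def h0_tangent_Pn(n):
    h0_O = 1             # h^0(P^n, O): the constants
    h0_O1 = n + 1        # h^0(P^n, O(1)): linear forms x_0, ..., x_n
    return (n + 1) * h0_O1 - h0_O

# h^0(TP^n) = (n+1)^2 - 1, the dimension of the automorphism algebra
assert h0_tangent_Pn(1) == 3
assert h0_tangent_Pn(4) == 24
```

The general statement of the induced long exact sequence is given next.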
More precisely, if $A$, $B$, $C$ are three vector bundles on an $n$-dimensional base space $M$, satisfying \begin{eqnarray} \label{veryshortformulacohomologiesveryshort} 0 \longrightarrow A \stackrel{f}{\longrightarrow} B \stackrel{g}{\longrightarrow} C \longrightarrow 0, \end{eqnarray} \noindent then the corresponding relation between cohomology groups is \begin{align} \label{verylongformulaoncohomologiesverylong} 0 & \longrightarrow H^0(M,A)\stackrel{f}{\longrightarrow} H^0(M,B)\stackrel{g}{\longrightarrow} H^0(M,C) \stackrel{\delta_0}{\longrightarrow} \notag \\ & \longrightarrow H^1(M,A) \stackrel{f}{\longrightarrow}H^1(M,B)\stackrel{g}{\longrightarrow} H^1(M,C) \stackrel{\delta_1}{\longrightarrow } \notag \\ & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \vdots \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\, \vdots \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \vdots \notag \\ & \longrightarrow H^ n (M,A) \stackrel{f}{\longrightarrow} H^n(M,B)\stackrel{g}{\longrightarrow} H^n(M,C) \longrightarrow 0 , \end{align} \noindent where $\delta_i$ are the coboundary maps.\footnote{More information about the coboundary map can be found in Appendix~\ref{coboundarymapappendix}.} Of particular importance is the Koszul sequence, which relates a vector bundle $\mathcal{V}$ on the ambient space $\mathcal{A}$ to its restriction $V=\mathcal{V}\vert_X$ on the Calabi-Yau manifold, using the dual to the normal bundle $\mathcal{N}^*$, \begin{eqnarray} \label{Koszulseq1sttime} 0 \longrightarrow \wedge^K \mathcal{N}^* \otimes \mathcal{V} \longrightarrow ... \longrightarrow \wedge^2 \mathcal{N}^* \otimes \mathcal{V} \longrightarrow \mathcal{N}^* \otimes \mathcal{V} \longrightarrow \mathcal{V} \longrightarrow V \longrightarrow 0. 
\end{eqnarray} \noindent This sequence is short exact only if $K=1$, but even for higher co-dimensions we can split \eqref{Koszulseq1sttime} into short exact pieces in order to express cohomology groups $H^q(X,V)$ in terms of ambient space cohomologies. In this thesis we will make a major simplification by assuming models in which the vector bundle on $X$ is a Whitney sum of line bundles, i.e. $V=\bigoplus_{i=1}^r L_i$. Such vector bundles have structure group $S(U(1)^r)$, rather than $SU(r)$, and are motivated by the fact that cohomologies of line bundles are much easier to calculate. Moreover, line bundles are automatically stable,\footnote{By the definition of stability in Section~\ref{conditionsforn=1susy}, all line bundles are trivially stable because they have no proper subsheaf.} so $V$ is poly-stable, provided that the slope of each line bundle is $\mu(L_i) = \mu(V) \stackrel{!}{=} 0$. For these reasons, it is appropriate to discuss here the line bundles that will serve as building blocks for our model. \begin{example} The tautological (or universal) line bundle on $\mathbb{P}^n$ is defined as a sub-bundle of $\mathbb{P}^n \times \mathbb{C}^{n+1}$ for which the fiber at every point $(x_0:...:x_n)\in \mathbb{P}^n$ is the line through the origin $\lbrace (\lambda x_0, ..., \lambda x_n), \, \lambda\in\mathbb{C}\rbrace$ or, written more formally, \begin{eqnarray} \mathcal{O}_{\mathbb{P}^n}(-1)=\lbrace (l,v)\in \mathbb{P}^n \times \mathbb{C}^{n+1} \vert v \in l\rbrace . \end{eqnarray} \noindent Its dual, the hyperplane line bundle $\mathcal{O}_{\mathbb{P}^n}(1)$, is a sub-bundle of $\mathbb{P}^n \times (\mathbb{C}^{n+1})^*$, whose fiber at every point is the space of linear functionals $\sum_{i=0}^n \lambda_{i} x^{i} \in \mathbb{C}$ (hence it is a bundle of hyperplanes). 
More line bundles can be defined on $\mathbb{P}^n$ as tensor products $\mathcal{O}_{\mathbb{P}^n}(k) = \mathcal{O}_{\mathbb{P}^n}(1)^{\otimes k}$ and $\mathcal{O}_{\mathbb{P}^n}(-k) = \mathcal{O}_{\mathbb{P}^n}(-1)^{\otimes k}$, for $k\in \mathbb{Z}_{>0}$. They are used to build line bundles on the ambient space $\mathcal{O}_{\mathcal{A}}(\mathbf{k}) = \mathcal{O}_{\mathbb{P}^{n_1}}(k_1) \otimes ... \otimes \mathcal{O}_{\mathbb{P}^{n_m}}(k_m)$, where $\mathbf{k} = (k_1,...,k_m)$, and through restriction, line bundles on the Calabi-Yau manifold $\mathcal{O}_X(\mathbf{k}) = \mathcal{O}_{\mathcal{A}}(\mathbf{k})\vert_X$. \end{example} \noindent In particular, the cohomology groups of line bundles on $\mathcal{A}$ are related via the K\"unneth formula to cohomology groups of line bundles on individual projective spaces \begin{eqnarray} \label{kunneth} H^{q}(\mathcal{A}, \mathcal{O}_{\mathcal{A}}(\mathbf{k})) = \bigoplus_{q_1+...+q_m=q} H^{q_1}(\mathbb{P}^{n_1},\mathcal{O}(k_1)) \otimes ... \otimes H^{q_m}(\mathbb{P}^{n_m},\mathcal{O}(k_m)). \end{eqnarray} \noindent In future chapters we will see exactly how to calculate these cohomologies and how to use the Koszul sequence \eqref{Koszulseq1sttime} in order to determine $H^q(X,\mathcal{O}_X(\mathbf{k}))$. \section{Dimensional Reduction of the 10d Theory} \label{dimredchapter} \noindent Now that we have the mathematical tools to proceed with the compactification, our goal is to dimensionally reduce the heterotic 10d theory down to 4d. There are several steps that need to be taken. The first is the dimensional reduction of the gravitational sector, namely the dilaton, $B$-field and Einstein-Hilbert terms of the bosonic action \eqref{10daction}. Next in line is the dimensional reduction of the matter sector, or the $\alpha'$-dependent part, with the emergence of 4d matter multiplets in the resulting GUT group representations. 
In passing, we will also discuss what happens to the fermionic action and how to obtain holomorphic Yukawa couplings from the 10d theory. In the last part of this section, we will specify how to further break the GUT group down to the SM group via Wilson lines, as this is the final stage through which the heterotic string theory is connected to particle physics. Most of the results in this chapter are well known in the literature \cite{GSW,Candelas:1990pi,bbschwarz,ibanezuranga}; however, some of them, such as Eqs.~\eqref{crosstermaction}--\eqref{originalresults}, are based on original work. A more detailed description of these original formulae will be provided later, in Section~\ref{kahlersec2}. \subsection{Dimensional reduction of the bosonic gravity sector} \label{gravitysectorsection} \noindent At the first stage of compactification, we can neglect all $\mathcal{O}(\alpha')$ contributions to the 10d bosonic action, in order to focus on the gravity sector. We start by expanding the bosonic fields of the gravity multiplet according to the compactification ansatz. \paragraph{Expanding the dilaton} \noindent Under the assumption that 10d fields are perturbations around the $M_4 \times X$ vacuum, any scalar field $\phi(x^M)$ can be expanded in terms of the external and the internal coordinates as $\phi(x,y) \simeq \phi(x) \phi(y)$, where $x$ and $y$ are short-hand notations for $x^{\mu}$ and $y^m$. The solution to the equation of motion $\Delta \phi = 0$ implies that $\phi(y)$ is a constant, which can be taken to be $1$; therefore, the 10d dilaton is trivially expanded as $\phi(x,y)=\phi(x)$. Of particular significance is the background value of the dilaton, which determines the string coupling constant via $g_s = e^{\langle \phi \rangle}$. \paragraph{Expanding the $B$ field} For the case of the $B$ field, it is again necessary to analyse the equations of motion. 
In order for $B$ to be physical, it must be invariant under a gauge transformation $\delta B = d \Lambda$, which decouples the time-like modes of $B$ (these modes are responsible for negative norm states). The usual gauge choice, $d^\dagger B = 0$, together with the equation $d^\dagger d B = 0$, derived from the minimisation of the action,\footnote{Writing the action of $B$ as an inner product $( H, H ) = \int H \wedge \ast H$, where $H=dB + \mathcal{O}(\alpha')$ is the field strength, and then imposing $\delta (H,H) \stackrel{!}{=} 0$ for small variations of $B$ leads to the result $d^\dagger d B = 0$.} show that $\Delta B= 0$, i.e. $B$ is harmonic. Now, on a vacuum space $M_4 \times X$, the Laplacian splits into $\Delta = \Delta_4 + \Delta_X$, which means that massless fields\footnote{In the equation $\Delta_4 B + \Delta_X B = 0$, $\Delta_X B$ has the role of a mass-squared term for the effective 4d field; however, non-zero eigenvalues of $\Delta_X$ are too large to be measured, as they are proportional to $1/l_c^2 \gg 1$, where $l_c$ is the typical length scale of the internal manifold. It is for this reason that only massless fields are relevant at low energy.} of the 4d theory correspond to zero modes of the Laplace operator $\Delta_X$, and therefore to harmonic forms representing classes in $H^{p,q}(X)$. Applying what we learned about cohomology groups of Calabi-Yau manifolds, the expansion of $B$ reads \begin{eqnarray} \label{decompbfield} B(x,y) = B(x) + \sum_{i=1}^{h^{1,1}} \tau^i (x) \omega_i(y) , \end{eqnarray} \noindent where $B(x)$ is the rank-2 tensor field in the 4d theory, $\lbrace \tau^i\rbrace$ is a set of $h^{1,1}$ scalar fields, known as moduli, and $\lbrace \omega_i \rbrace$ is a basis of harmonic $(1,1)$-forms. In particular, $B(x)$ has a single degree of freedom and is a pseudoscalar, so it can be dualised to a 4d real scalar field $\gamma (x)$ \begin{eqnarray} d B = \ast_4 d \gamma . 
\end{eqnarray} \paragraph{Expanding the metric} Following \cite{Candelas:1990pi}, the internal manifold remains Ricci-flat upon $\delta g$ deformations of the metric, provided that the Lichnerowicz equation is satisfied \begin{equation} \nabla^F \nabla_F \delta g_{A B} + 2 R_A{}^C{}_B{}^D \delta g_{CD}=0. \end{equation} \noindent Hence the solution is of the form $\delta g = \delta g_{\overline{m} \overline{n}} dy^{\overline{m}} dy^{\overline{n}} + \delta g_{m \overline{n}} dy^{m} dy^{\overline{n}} + \textrm{c.c.}$, where \begin{align} \label{ansatzg1} \delta g_{m \overline{n}} & = - i \sum_{i=1}^{h^{1,1}} t^{i}(x) \omega_{i \, m \overline{n}}(y), \\ \label{ansatzg2} \delta g_{\overline{m} \overline{n}} & = - \dfrac{1}{\Vert\Omega\Vert^2} \overline{\Omega}_{\overline{m}}{}^{p q} \sum_{a=1}^{h^{2,1}} z^{a}(x) \rho_{a \, p q \overline{n}}(y) \end{align} \noindent are expansions in bases of harmonic $(1,1)$-forms $\lbrace \omega_i \rbrace$ and harmonic $(2,1)$-forms $\lbrace \rho_a \rbrace$, respectively. The 4d real scalar fields $t^i$ are called K\"ahler moduli and parametrise size deformations, while the 4d complex scalar fields $z^a$ are the complex structure moduli parametrising deformations of shape. It is now possible to apply the field expansions \eqref{decompbfield},~\eqref{ansatzg1} and \eqref{ansatzg2} to the 10d action \eqref{10daction}, but while doing so, we want to remove the exponential prefactor $e^{- 2 \phi}$ from the Ricci scalar term. This involves moving from the string frame of tree-level string interactions to the Einstein frame through a Weyl rescaling of the external metric \begin{eqnarray} g_{\mu\nu} \rightarrow e^{2 \phi} g_{\mu\nu} \, . \end{eqnarray} \noindent Under this transformation, the 4d bosonic action at zeroth order ($\alpha' \approx 0$) reads \begin{eqnarray} \label{0thorderbosonicaction} \!\!\!\! S_0 = \dfrac{1}{2\kappa_4^2} \int \! d^4 x \sqrt{- g} \left( R - \! \dfrac{\!\!2}{(S\! + \! 
\overline{S})^2}\partial^{\mu} S \partial_{\mu} \overline{S} - 2 G_{i j} \partial^{\mu} T^i \partial_{\mu} \overline{T}^j \! - \! 2 G_{a \overline{b}} \partial^\mu z^a \partial_{\mu} \overline{z}^{\overline{b}}\right)\!, \end{eqnarray} \noindent where \begin{eqnarray} \label{modulispacemetrics} G_{ij} = \dfrac{1}{4 \mathcal{V}}\int_X \omega_i \wedge \ast_X \omega_j \, , \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, G_{a\overline{b}} = \dfrac{\int_X \rho^a \wedge \overline{\rho}^{\overline{b}}}{\int_X \Omega \wedge \overline{\Omega} } \end{eqnarray} \noindent are the K\"ahler and complex structure moduli space metrics respectively, and \begin{equation} \label{modulifields} S= e^{-2 \phi} + i \gamma , \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, T^i = t^i + i \tau^i \end{equation} \noindent are defined as 4d complex moduli fields, together with $z^a$. In total, the gravity sector contains $h^{1,1}+h^{2,1}+1$ such fields, sitting in $N = 1$ chiral supermultiplets. It is also useful to introduce K\"ahler potentials \begin{eqnarray} K^{(J)} = - \textrm{ln}\left(\dfrac{4}{3}\int_X J \wedge J \wedge J\right),\,\,\,\,\,\,\,\,\,\,\,\,\,\, K^{(\textrm{CS})}= - \textrm{ln}\left( i \int_X \Omega \wedge \overline{\Omega} \right), \end{eqnarray} \noindent from which the moduli space metrics can be derived, using $G_{i j} = \partial_{i}\partial_{j} K^{(J)}$ and $G_{\overline{a} b} = \partial_{\overline{a}} \partial_b K^{(\textrm{CS})}$. Finally, note that in the expression \eqref{0thorderbosonicaction}, $\kappa_4$ is assumed to be the 4d gravitational coupling, so that $\kappa^2_4=8 \pi G_N$, where $G_N$ is Newton's constant. The relation between $\kappa_4^2$ and its 10d counterpart $\kappa^2 \sim g^2_s l_s^8$ is given by $\kappa^2 = \kappa_4^2 \mathcal{V}$, where $\mathcal{V}$ is the volume of the compact manifold. 
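Returning to the K\"ahler potential $K^{(J)}$, the relation $G_{ij} = \partial_{i}\partial_{j} K^{(J)}$ can be checked numerically in the single-modulus case (a sketch with an assumed illustrative value $d_{111}=5$, which in fact drops out). Writing $t = \mathrm{Re}\,T$, the mixed holomorphic derivative reduces to $G = \tfrac{1}{4}\partial_t^2 K^{(J)}$, and $K^{(J)} = -\ln\big(\tfrac{4}{3} d\, t^3\big)$ gives $G = 3/(T+\overline{T})^2$:

```python
# Sketch: single Kahler modulus, K(t) = -ln((4/3) d t^3) with t = Re T.
# Then G = (1/4) K''(t) should equal 3/(2t)^2 = 3/(T + Tbar)^2,
# independently of the (assumed) intersection number d.

import math

def K(t, d=5.0):
    return -math.log(4.0 / 3.0 * d * t**3)

def G(t, h=1e-3):
    # G = (1/4) d^2K/dt^2, via a central finite difference
    return 0.25 * (K(t + h) - 2.0 * K(t) + K(t - h)) / h**2

t0 = 1.7
assert abs(G(t0) - 3.0 / (2.0 * t0)**2) < 1e-5
```

We now return to the relation $\kappa^2 = \kappa_4^2 \mathcal{V}$.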
Interestingly, the relation $\kappa^2 = \kappa_4^2 \mathcal{V}$ also connects the string length and the Planck length via $l_{\textrm{P}} \sim g_s l_s^4/\sqrt{\mathcal{V}}$, showing that $l_{\textrm{P}} < l_s$ for suitable values of $g_s$ and $\mathcal{V}$. With these observations, we conclude our discussion of the gravity sector. \subsection{Dimensional reduction of the bosonic matter sector} \label{mattersectorsection} \noindent Compactifying the $\alpha'$-dependent part of the bosonic action~\eqref{10daction} involves expanding the gauge field as $A = A^{(0)} + A^{(1)}$, where $A^{(0)}$ is the non-zero vacuum expectation value on $X$ and $A^{(1)}$ is an infinitesimal fluctuation. Being in the adjoint representation $\mathbf{248}$ of $E_8$, $A^{(1)}$ must split as \begin{equation} \label{248repdecomposition} {\bf 248}\rightarrow \left[({\rm Adj}_G,{\bf 1})\oplus ({\bf 1},{\rm Adj}_H)\oplus\bigoplus ({\cal R}_G, {\cal R}_H)\right]_{G\times H} \end{equation} \noindent under the symmetry breaking $E_8 \rightarrow G\times H$, where $G$ is the group structure of the holomorphic vector bundle $V$ and $H$ is the effective 4d GUT group. In particular, the piece transforming in the adjoint of $H$ is interpreted to be the 4d gauge boson $A(x)=A_{\mu}(x) dx^{\mu}$, while for the other components the ansatz is \begin{eqnarray} \label{expansiona} A^{(1)}_{(\mathcal{R}_G,\mathcal{R}_H)} = C^I(x) \nu_{I\overline{m}}(y) d y^{\overline{m}} + \overline{D}{}^{P}(x) \overline{\sigma}_{P m}(y) dy^m, \end{eqnarray} \noindent where $C^I$ and $D^P$ are matter fields in the representations $\mathcal{R}_H$ and $\overline{\mathcal{R}}_H$ respectively, while $\nu_{I} \in H^1(X, V_{\mathcal{R}_G})$ and $\sigma_P \in H^1(X, V^*_{\mathcal{R}_G})$ are harmonic $(0,1)$-forms on the bundles $V_{\mathcal{R}_G}$ and $V^*_{\mathcal{R}_G}$, associated to the representation $\mathcal{R}_G$ of $G$. 
This is in line with the requirement that $A^{(1)}$ is harmonic, which comes from the Yang-Mills condition $d_A \ast F = 0$ and the gauge choice $d_A \ast A^{(1)} = 0$, with $d_A$ being the covariant derivative on the bundle. With the ansatz \eqref{expansiona} and redefining $F$ as the 4d field strength of $A(x)$, the Yang-Mills action becomes\footnote{One notices that representations $(\mathcal{R}_G,\mathcal{R}_H)$ and $(\overline{\mathcal{R}}_G,\overline{\mathcal{R}}_H)$ are both present in the decomposition of $\mathbf{248}$ and the relation $A^{(1)}_{(\overline{\mathcal{R}}_G,\overline{\mathcal{R}}_H)} = A^{(1)*}_{(\mathcal{R}_G,\mathcal{R}_H)}$ is implied by complex conjugation.} \begin{eqnarray} \label{reducedyangmillsaction} S_{\textrm{YM}} = - \dfrac{\alpha'}{2\kappa_4^2} \int d^4 x \sqrt{-g}\left(\dfrac{1}{4} \textrm{Re} (f) \, \textrm{Tr}F^2 + 2 G_{I J} D_{\mu} \overline{C}{}^I D^{\mu} C^J +...\right), \end{eqnarray} \noindent where $f=S$ is the gauge kinetic function, $G_{IJ}$ is the matter field K\"ahler metric \begin{eqnarray} \label{matterfieldmetric} G_{I J} = \dfrac{1}{2 \mathcal{V}} \int_X d^6 y \sqrt{g^{(6)}} g^{ (6)\overline{m} n} \nu_{I \overline{m}} \overline{\nu}_{J n} = \dfrac{1}{2 \mathcal{V}} \int_X \nu_I \wedge \overline{\ast}_V \nu_J \, , \end{eqnarray} \noindent and by the ellipsis we indicate that further kinetic terms must be added, one for each type of $\mathcal{R}_H$-multiplets in the decomposition \eqref{248repdecomposition}. The number of families in a representation $\mathcal{R}_H$ is given by $n_{\mathcal{R}_H} = h^1(X, V_{\mathcal{R}_G})$, while the number of anti-families is $n_{\overline{\mathcal{R}}_H} = h^1(X, V^*_{\mathcal{R}_G})$. This means that the net number $n_{\mathcal{R}_H} - n_{\overline{\mathcal{R}}_H}$ of $\mathcal{R}_H$-multiplets is a topological invariant of the bundle, namely the index $\textrm{ind}(V_{\mathcal{R}_G})$, as defined in \eqref{atiyahsingerindex}. 
In the particular case of the representation $({\rm Adj}_G, \mathbf{1})_{G\times H}$, the resulting 4d scalars are uncharged under the GUT group $H$, so they are called vector bundle moduli. As the corresponding bundle for ${\rm Adj}_G$ is $V \otimes V^*$, the number of bundle moduli is $n_{\mathbf{1}}=h^1(X,V \otimes V^*)$. For the other representations of $G$, $V_{\mathcal{R}_G}$ descends from the bundle $V$, either as $V$ itself, if $\mathcal{R}_G$ is the fundamental representation, or as the dual $V^*$ or the wedge products $\wedge^2 V$, $\wedge^2 V^*$. Following \cite{Anderson:2009ge}, the main results for vector bundles with structure groups $G = SU(3)$, $SU(4)$ and $SU(5)$ are presented in Table~\ref{mytablenow}. All three cases lead to a corresponding GUT that was discussed in Section~\ref{gutsubsection}. Finally, the coefficient in front of the Yang-Mills action \eqref{reducedyangmillsaction} can be identified with the GUT coupling $1/g_{\textrm{GUT}}^2$, thus giving a proportionality of the form $g_{\textrm{GUT}} \sim g_s l_s^3 /\sqrt{\mathcal{V}}$ between $g_{\textrm{GUT}}$ and the parameters of the 10d theory. 
\begin{table}[h] \begin{center} \begin{footnotesize} \begin{tabular}{|l|l|l|}\hline $E_8 \rightarrow G \times H$ & Decomposition of $\mathbf{248}$& Spectrum \\\hline\hline & & $n_{\textbf{27}} = h^1(V)$ \\ $SU(3)\times E_6$& $({\bf 8},{\bf 1})\oplus ({\bf 1},{\bf 78})\oplus ({\bf 3},{\bf 27})\oplus (\overline{\bf 3},\overline{\bf 27})$ & $n_{\overline{\textbf{27}}} = h^1(V^*)$ \\ & & $n_{\textbf{1}} = h^1(V\otimes V^*)$\\\hline & & $n_{\textbf{16}} = h^1(V)$ \\ & & $n_{\overline{\textbf{16}}} = h^1(V^*)$ \\ $SU(4) \times SO(10)$& $({\bf 15},{\bf 1})\oplus ({\bf 1},{\bf 45})\oplus ({\bf 4},{\bf 16})\oplus (\overline{\bf 4},\overline{\bf 16})\oplus ({\bf 6},{\bf 10})$ & $n_{\textbf{10}} = h^1(\wedge^2 V)$ \\ & & $n_{\textbf{1}} = h^1(V\otimes V^*)$\\\hline & & $n_{\textbf{10}} = h^1(V^*)$ \\& & $n_{\overline{\textbf{10}}} = h^1(V)$ \\$SU(5)\times SU(5)$& $({\bf 24},{\bf 1})\oplus ({\bf 1},{\bf 24})\oplus ({\bf 5},\overline{\bf 10})\oplus (\overline{\bf 5},{\bf 10})\oplus ({\bf 10},{\bf 5})\oplus (\overline{\bf 10},\overline{\bf 5})$ & $n_{\overline{\textbf{5}}}= h^1(\wedge^2 V^*)$ \\& & $n_{\textbf{5}} = h^1(\wedge^2 V)$ \\& & $n_{\textbf{1}} = h^1(V\otimes V^*)$\\\hline \end{tabular} \end{footnotesize} \end{center} \caption{\it Different symmetry breaking scenarios for the $E_8$ gauge group, as dictated by the choice of a vector bundle $V$ with structure group $G = SU(3)$, $SU(4)$ or $SU(5)$. The resulting spectrum of particles (matter fields and bundle moduli) is determined by the cohomology groups of the bundle.}\label{mytablenow} \end{table} As a side note, one of the original methods of heterotic compactification was the standard embedding, through which the vector bundle $V$ is chosen to be identical with the holomorphic tangent bundle $TX$, so that the anomaly cancellation condition \eqref{c2c2W} is automatically satisfied. 
Since the holonomy of $X$ is $SU(3)$, the spin connection is embedded in the gauge group $E_8$ as the gauge connection of an $SU(3)$ subgroup. Thus, the standard embedding is a case of $SU(3)\times E_6$ compactification and has a net number of families $\vert n_{\textbf{27}}-n_{\overline{\textbf{27}}}\vert = \vert h^{2,1}(X) - h^{1,1}(X) \vert = \vert \chi \vert/2$. Despite its computational triumphs (4d physical Yukawa couplings, in particular), the model is too restrictive; we therefore look for models with a more general embedding. To conclude this section, we note that the compactification of the $\alpha'$-dependent part of the bosonic action gives rise not only to matter field kinetic terms, but also to several cross-terms arising from the $\vert H \vert^2$ action. This is seen by expanding $B$ in terms of the moduli $\tau^i$ as in \eqref{decompbfield} and the Chern-Simons form $\omega_{\textrm{YM}}$ in terms of the matter fields $C^I$ to obtain \begin{eqnarray} \label{crosstermaction} S_{\textrm{cross terms}} = \dfrac{\alpha'}{2 \kappa_4^2} \int d^4 x \sqrt{-g} \left(\dfrac{1}{2} \Lambda_{i I J} \partial^{\mu} \tau^i \overline{C}{}^I D_{\mu} C^J + \textrm{c.c.}\right), \end{eqnarray} \noindent where $\Lambda_{i I J}$ is a coupling \begin{eqnarray} \Lambda_{i I J} = \dfrac{1}{2\mathcal{V}} \int_X (\ast_X \omega_i) \wedge \nu_I \wedge (\mathcal{H} \overline{\nu}_J), \end{eqnarray} \noindent and $\mathcal{H}$ is the Hermitian structure on the bundle. It is important to note that in the presence of matter fields, the moduli fields in Eq.~\eqref{modulifields} get modified to an expression of the form \begin{eqnarray} T^i = t^i + i \tau^i + \alpha' \, \Gamma^{i}_{I J} C^I \overline{C}{}^J, \end{eqnarray} \noindent for certain coefficients $\Gamma^i_{I J}$. 
Using this and the K\"ahler potential to compute the terms $K_{T^i \overline{C}{}^J} \partial^{\mu} T^i D_{\mu} \overline{C}{}^J$, one identifies \begin{eqnarray} \label{originalresults} \Lambda_{i I J} = - i \dfrac{\partial G_{IJ}}{\partial t^i}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \Gamma^{i}_{I J} = - \dfrac{1}{4} G^{i j} \dfrac{\partial G_{I J}}{\partial t^j}. \end{eqnarray} \noindent where $G^{i j}$ is the inverse K\"ahler moduli metric and $G_{I J}$ is the matter field metric.\footnote{It is easy to see that $G_{IJ}$ is indeed $t$-moduli dependent, by expressing it as \begin{eqnarray} G_{IJ} = - \dfrac{i}{4 \mathcal{V}}\int_X J \wedge J \wedge \nu_I \wedge (\mathcal{H} \overline{\nu}_J) \notag, \,\,\,\,\,\, \textrm{where} \,\,\,\,\,\, J=t^i \omega_i , \,\,\,\,\,\, \mathcal{V} = \dfrac{1}{6} d_{i j k} t^i t^j t^k . \end{eqnarray}} \subsection{Dimensional reduction of the fermionic sector} \label{fermionicsection} \noindent Due to supersymmetry, all the fermionic fields in the 4d theory are expected to be the superpartners of the bosonic fields derived in Sections~\ref{gravitysectorsection} and~\ref{mattersectorsection}. For this reason, compactifying the fermionic sector may seem redundant and is mainly useful as a consistency check. As a reminder, the 10d fermions are the spin-$3/2$ gravitino $\psi_M$, the spin-$1/2$ dilatino $\lambda$ and the spin-$1/2$ gaugino $\chi$, which transforms in the adjoint representation of $E_8$. They have the following kinetic terms \begin{eqnarray} \label{fermionicaction} \resizebox{0.89\hsize}{!}{$S_{\textrm{f}} = - \dfrac{1}{2 \kappa^2} \mathop{\mathlarger{\int}} d^{10}x \sqrt{-g} e^{-2 \phi}\bigg [\overline{\psi}_M\Gamma^{MNP}D_N \psi_P + \dfrac{1}{2} \overline{\lambda} \Gamma^M D_M \lambda + \dfrac{\alpha'}{2} \textrm{Tr}(\overline{\chi}\Gamma^M D_M\chi) \bigg ],$} \end{eqnarray} \noindent where all $\Gamma^N$'s are 10d $\Gamma$-matrices and their antisymmetrised product is given by \begin{eqnarray} \Gamma^{N_1 N_2 ... 
N_n} = \dfrac{1}{n!} \Gamma^{[N_1}\Gamma^{N_2}...\Gamma^{N_n]} \, . \end{eqnarray} \noindent Constructing the Clifford algebra in various dimensions is thoroughly discussed in sources like Ref.~\cite{Polchinski} (Appendix B). For our particular case, \begin{eqnarray} \Gamma^{\mu} = \gamma^{\mu} \otimes \mathbbmss{1}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \Gamma^{m} = \gamma^5 \otimes \gamma^{m}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \Gamma^{\overline{m}} = \gamma^5 \otimes \gamma^{\overline{m}}, \end{eqnarray} \noindent where $\gamma^{\mu}$ are the standard Dirac matrices in 4d, $\gamma^5$ is their corresponding chirality operator, and $\gamma^{m}$, $\gamma^{\overline{m}}$ are internal manifold gamma-matrices satisfying \begin{eqnarray} \lbrace \gamma^{m},\gamma^{\overline{n}}\rbrace = 2 g^{m\overline{n}}, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \lbrace \gamma^{m},\gamma^{n}\rbrace = \lbrace\gamma^{\overline{m}}, \gamma^{\overline{n}} \rbrace=0. \end{eqnarray} \noindent Spinors are defined in $D$ dimensions as $2^{[D/2]}$-dimensional representations of the Lorentz group $SO(1,D-1)$, by taking generators $\sigma^{MN} = \frac{1}{4}[\Gamma^M, \Gamma^N]$. If $D$ is even, a Dirac spinor can be split into two irreducible Weyl representations and if a constraint $\Psi^\dagger = \Psi^T C$ can be applied, where $C$ is the charge conjugation matrix, then the spinor $\Psi$ is said to be Majorana. In 10d, the Majorana-Weyl spinor $\mathbf{16}$ of $SO(1,9)$ decomposes under $SO(1,9) \rightarrow SO(1,3) \times SO(6)$ as \begin{eqnarray} \label{decomp2424} \mathbf{16} \rightarrow (\mathbf{2},\mathbf{4}) \oplus (\mathbf{2}',\overline{\mathbf{4}}), \end{eqnarray} \noindent from which one can see that the compactification ansatz $M_4 \times X$ renders $\Psi(x,y) = \Psi^{4d}(x) \otimes \Psi^X(y) + \overline{\Psi}{}^{4d}(x) \otimes \overline{\Psi}{}^X(y)$ for any spinor field $\Psi$. 
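The split of the 10d Clifford algebra described above can be verified by direct matrix computation. The sketch below assumes the Dirac basis for the 4d $\gamma^{\mu}$ with metric $\eta = \textrm{diag}(1,-1,-1,-1)$, a flat internal metric $g^{m\overline{n}} = \delta^{m\overline{n}}$, and a Jordan-Wigner (fermionic oscillator) construction of the internal gamma-matrices; these are standard illustrative choices, not necessarily the conventions used elsewhere in this thesis. It checks the anticommutators of $\Gamma^{\mu} = \gamma^{\mu}\otimes\mathbbmss{1}$, $\Gamma^{m} = \gamma^5\otimes\gamma^m$ and the annihilation property $\gamma^m \xi = 0$ of the Fock vacuum:

```python
import numpy as np

# Pauli matrices and identities
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, I8, I32 = np.eye(2), np.eye(8), np.eye(32)

def kron(*ms):
    out = np.eye(1, dtype=complex)
    for m in ms:
        out = np.kron(out, m)
    return out

def ac(A, B):                       # anticommutator {A, B}
    return A @ B + B @ A

# 4d Dirac matrices in the Dirac basis, metric eta = diag(1,-1,-1,-1)
g4 = [kron(s3, I2)] + [kron(1j * s2, s) for s in (s1, s2, s3)]
g5 = kron(s1, I2)                   # equals i*g0*g1*g2*g3 in this basis
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Internal gammas from three fermionic oscillators (Jordan-Wigner construction)
am = np.array([[0, 1], [0, 0]], dtype=complex)   # single-mode annihilation operator
a = [kron(am, I2, I2), kron(s3, am, I2), kron(s3, s3, am)]
gm = [np.sqrt(2) * x for x in a]                 # gamma^m act as lowering operators
gmbar = [x.conj().T for x in gm]                 # gamma^mbar act as raising operators

# 10d matrices: Gamma^mu = gamma^mu x 1, Gamma^m = gamma^5 x gamma^m
Gmu = [kron(g, I8) for g in g4]
Gm = [kron(g5, g) for g in gm]
Gmbar = [kron(g5, g) for g in gmbar]

# {Gamma^mu, Gamma^nu} = 2 eta^{mu nu}
for i in range(4):
    for j in range(4):
        assert np.allclose(ac(Gmu[i], Gmu[j]), 2 * eta[i, j] * I32)

# {Gamma^m, Gamma^nbar} = 2 delta^{m nbar}, {Gamma^m, Gamma^n} = 0, {Gamma^mu, Gamma^m} = 0
for m in range(3):
    for n in range(3):
        assert np.allclose(ac(Gm[m], Gmbar[n]), 2 * (m == n) * I32)
        assert np.allclose(ac(Gm[m], Gm[n]), 0)
    for i in range(4):
        assert np.allclose(ac(Gmu[i], Gm[m]), 0)

# The analogue of the covariantly constant spinor: the Fock vacuum xi obeys gamma^m xi = 0
xi = np.zeros(8, dtype=complex); xi[0] = 1.0
assert all(np.allclose(g @ xi, 0) for g in gm)
```

In this toy realisation the states obtained by acting with $\gamma^{\overline{m}}$ on $\xi$ reproduce the $\mathbf{1}\oplus\mathbf{3}$ (and conjugate) decomposition of the internal spinor used in the expansion below.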
The representation $\mathbf{4}$ of $SO(6)$ is further split into $\mathbf{1} \oplus \mathbf{3}$ under the reduced $SU(3)$ holonomy, so in general $\Psi^X$ can be expanded as \begin{eqnarray} \Psi^X(y) = a(y) \overline{\xi} + b_{\overline{m}}(y) \gamma^{\overline{m}} \xi, \end{eqnarray} \noindent where $a$ is a smooth function and $b$ is a $(0,1)$-form, both being defined on $X$ or a bundle thereof, depending on the gauge representation of $\Psi^X$, while $\gamma^{\overline{m}}$ and $\gamma^{m}$ act as raising and lowering operators, and $\xi$, $\overline{\xi}$ are the covariantly constant spinors of opposite chirality, satisfying $\gamma^m \xi =0$ and $\gamma^{\overline{m}} \overline{\xi} =0$ (see Ref. \cite{Candelas:1987is}). In terms of the equations of motion, the gravitino, being a vector-spinor field,\footnote{Note that unlike spinors, vector-spinor fields have $ 2^{[D/2]-1}(D-3)$ components in $D$ dimensions, if they are massless.} respects the Rarita-Schwinger equation $\Gamma^{MNP} D_N \psi_P =0$, while the dilatino and the gaugino satisfy Dirac equations $\slashed{D} \lambda =0$ and $\slashed{D} \chi=0$ respectively, where the 10d Dirac operator $\slashed{D}$ is $\Gamma^M D_M$ and the covariant derivative $D_M$ is defined with respect to the spin connection and (in the case of the gaugino) the background gauge fields. Now, the Dirac operator splits as $\slashed{D} = \slashed{D}_4 + \slashed{D}_X$, when acting on the $(\mathbf{2},\mathbf{4})$ component of $\mathbf{16}$, and as $\slashed{D} = \slashed{D}^\dagger_4 + \slashed{D}^\dagger_X$, when acting on $(\overline{\mathbf{2}},\overline{\mathbf{4}})$. Therefore, 4d massless left-handed fermions correspond to zero modes of $\slashed{D}_X$, while 4d massless right-handed fermions correspond to zero modes of $\slashed{D}_X^{\dagger}$. 
The net chirality is given by the index $\textrm{ind}(\slashed{D}_X) = \textrm{dim} \, \textrm{ker} \slashed{D}_X - \textrm{dim} \, \textrm{ker} \slashed{D}^\dagger_X$, which was shown to be a topological invariant of the manifold via the Atiyah-Singer index theorem, \begin{eqnarray} \textrm{ind}(\slashed{D}_X, V)= \int_X \hat{A} (X) \wedge \textrm{ch}(V), \end{eqnarray} \noindent where $\hat{A} (X)$ is the $A$-roof genus, a topological quantity of $X$ isomorphically related to the Todd class $\textrm{Td}(X)$, so that on an almost complex manifold $\textrm{Td}(X) = e^{c_1(X)/2} \hat{A} (X)$ \cite{dsfreed}. This identifies $\textrm{ind}(\slashed{D}_X, V)$ with the index of the Hirzebruch–Riemann–Roch theorem \eqref{atiyahsingerindex} that was used to describe the bosonic spectrum, thus equating the number of bosonic and fermionic degrees of freedom. For example, 4d matter fermions $\tilde{C}^I$ (the superpartners of bosonic matter fields $C^I$) arise from the dimensional reduction of 10d gauginos, having a number of $\mathcal{R}_H$-multiplets that is equal to $\textrm{ind}(\slashed{D}_X, V_{\mathcal{R}_G})$, where representations $\mathcal{R}_G$ and $\mathcal{R}_H$ are defined as in \eqref{248repdecomposition}. The $\text{Adj}_H$ component of $\chi(x)$ corresponds to the 4d gaugino and together with the gauge field $A_{\mu}(x)$, it forms the vector supermultiplet. On the other hand, the 10d dilatino only gives rise to a 4d Weyl spinor (the 4d dilatino), which is the superpartner of the modulus field $S$. 
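The index-theorem logic above can be illustrated in the simplest toy setting, a line bundle $\mathcal{O}(k)$ on $\mathbb{P}^1$, where $\textrm{ind} = h^0 - h^1 = k+1$ follows from $\int_{\mathbb{P}^1} \textrm{ch}(\mathcal{O}(k))\,\textrm{Td}(\mathbb{P}^1)$ with $\textrm{Td}(\mathbb{P}^1) = 1 + H$ and can be checked against the standard cohomology dimensions of $\mathcal{O}(k)$ (a sketch, not part of the compactification itself):

```python
import sympy as sp

H = sp.symbols('H')

def index_P1(k):
    # ind(O(k)) = integral over P1 of ch(O(k)) Td(P1); with ch = 1 + k H and
    # Td = 1 + H, only the coefficient of H survives, since int_{P1} H = 1
    integrand = sp.expand((1 + k * H) * (1 + H))
    return integrand.coeff(H, 1)

def h0(k):  # dim H^0(P1, O(k)): k+1 global sections for k >= 0, none otherwise
    return max(k + 1, 0)

def h1(k):  # dim H^1(P1, O(k)): -k-1 for k <= -2 (Serre duality), zero otherwise
    return max(-k - 1, 0)

for k in range(-5, 6):
    assert index_P1(k) == h0(k) - h1(k) == k + 1
```

Just as in the Calabi-Yau case, $h^0$ and $h^1$ jump individually as $k$ varies, while their difference is fixed by topology alone.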
Altogether, the explicit decompositions for the gaugino and the dilatino read\footnote{Note that the Majorana condition ${\chi}^{\dagger}_{(\mathcal{R}_G, \mathcal{R}_H)} = \chi^T_{(\overline{\mathcal{R}}_G, \overline{\mathcal{R}}_H)} C$ is satisfied if $\chi_{(\overline{\mathcal{R}}_G,\overline{\mathcal{R}}_H)}$ is similarly expanded in terms of $\tilde{D}^P$ and $\overline{\tilde{C}}{}^I$.} \begin{align} \label{decompgaugino} \chi_{(\mathcal{R}_G,\mathcal{R}_H)} & = \tilde{C}^I(x) \otimes \nu_{I\overline{m}}(y) \gamma^{\overline{m}} \xi + \overline{\tilde{D}}{}^P(x) \otimes \overline{\sigma}_{P m}(y) \gamma^m \overline{\xi}, \\ \label{decompdilatino} \lambda (x,y) & = \lambda(x) \otimes \overline{\xi} + \overline{\lambda}(x) \otimes \xi, \end{align} \noindent where we used the same notations as in \eqref{expansiona}. The compactification of the gravitino is roughly similar, although more elaborate. The components $\psi_{\mu}$, with an external index, transform as a vector-spinor of $SO(1,3)$ and a spinor of $SO(6)$, but only $\psi^{4d}_{\mu} \otimes \overline{\xi}$ is massless, where $\psi^{4d}_{\mu}$ is interpreted as the massless 4d gravitino. The internal component $\psi_m = \psi^{4d}_m \otimes \psi^{X}_m$ however transforms as a spinor of $SO(1,3)$ and a vector-spinor of $SO(6)$, and its 4d massless fields correspond to solutions of the Dirac equation $\slashed{D}_X \psi^{X}_m = 0$, thus giving (see for example Ref. \cite{benmachiche}) \begin{eqnarray} \label{decompgravitino} \psi_m = \tilde{T}^i \otimes (\omega_i){}_{m\overline{n}}\gamma^{\overline{n}}\xi+\dfrac{1}{\Vert\Omega\Vert^2}\overline{\tilde{z}}{}^a\otimes(\overline{\rho}_{a}){}_{m \overline{n}\overline{p}}\Omega_q{}^{\overline{n}\overline{p}}\gamma^q \overline{\xi}, \end{eqnarray} \noindent where $\tilde{T}^i(x)$ and $\tilde{z}^a(x)$ are Weyl fermions that carry no gauge charges, while $\omega_i \in H^{1,1}(X)$ and $\rho_a \in H^{2,1}(X)$ are the same bases of harmonic forms that we used in Eqs. 
\eqref{ansatzg1}--\eqref{ansatzg2}. Thus, $\tilde{T}^i$ and $\tilde{z}^a$ are indeed the superpartners of moduli fields $T^i$ and $z^a$, with which they form chiral multiplets. All in all, the $N=1$ spectrum in the low-energy theory is summarised in Table~\ref{tablesummary} \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|l|}\hline Multiplet & $H_{\textrm{GUT}}$ rep.& Field content & Number \\\hline\hline gravitational & $\mathbf{1}$ & $(g_{\mu \nu}, \psi_{\mu})$ & $1$ \\ \hline vector & $\textrm{Adj}_H$ & $(A_{\mu}, \chi)$ & $1$\\ \hline linear & $\mathbf{1}$ & $(S, \lambda)$ & $1$ \\ \hline K\"ahler moduli& $\mathbf{1}$ & $(T^i,\tilde{T}^i)$ & $h^{1,1}(X)$\\ \hline complex structure moduli & $\mathbf{1}$ & $(z^a, \tilde{z}^a)$ & $h^{2,1}(X)$\\ \hline matter chiral & $\mathcal{R}_H$ & $(C^I,\tilde{C}^I)$ & $ \textrm{ind} (V_{\mathcal{R}_G})$ \\ \hline \end{tabular} \end{center} \caption{\it The 4d $N=1$ supermultiplets obtained in heterotic compactification.}\label{tablesummary} \end{table} Using the ans\"atze \eqref{decompgaugino}--\eqref{decompgravitino} and after performing a Weyl rescaling of the 4d metric, as well as other rescalings such as $\lambda \rightarrow e^{2 \phi}\lambda$ and $\chi \rightarrow e^{- \phi} \chi$, the 4d fermionic action is brought to the form \begin{multline} \label{reducedfermionaction} S_{\textrm{f}} = - \dfrac{i}{\kappa_4^2}\int d^4 x \sqrt{-g} \bigg [ i\epsilon^{\mu\rho\nu\lambda} \overline{\psi}_{\mu} \overline{\sigma}_{\lambda}D_{\rho} \psi_{\nu} + \dfrac{\alpha'}{2} \textrm{Re}(f) \textrm{Tr}(\overline{\chi} \overline{\sigma}^{\mu} D_{\mu} \chi) + \dfrac{\!\!1}{(S\!+\!\overline{S})^2}\overline{\lambda} \overline{\sigma}^{\mu} D_{\mu} \lambda + \\ + G_{ij} \overline{\tilde{T}}{}^i \overline{\sigma}^{\mu}D_{\mu}\tilde{T}^j+ G_{\overline{a} b} \overline{\tilde{z}}{}^a \overline{\sigma}^{\mu} D_{\mu} \tilde{z}^b + \alpha' G_{IJ} \overline{\tilde{C}}{}^I \overline{\sigma}^{\mu}D_{\mu}\tilde{C}^J \bigg ], \end{multline} 
\noindent with the metrics $G_{ij}$, $G_{a \overline{b}}$ and $G_{I J}$ defined as in \eqref{modulispacemetrics} and \eqref{matterfieldmetric}.\footnote{In the last stage of compactification, we assumed $\xi$ is normalised and we used the identities $\xi^{\dagger} \gamma^m \gamma^{\overline{n}} \xi = 2 g^{m \overline{n}}$ and $\xi^{\dagger}\gamma^m \gamma^{\overline{n}} \gamma^p \gamma^{\overline{q}}\xi = 4 g^{m \overline{n}} g^{p \overline{q}}$.} \subsection{Holomorphic Yukawa couplings and other interaction terms} \noindent As we saw in Section~\ref{smsection}, Yukawa interactions occur when two fermions are coupled to a scalar field, or in our notation, when $\tilde{C}{}^I \tilde{C}^J C^K$ terms are present in the 4d action. Such terms are obtained from the 10d gaugino kinetic term $\textrm{Tr}(\overline{\chi}\Gamma^M D_M\chi)$, namely from its $\overline{\chi} \gamma^m A_m \chi$ part, by expressing the covariant derivative as $D_M \chi^a = \partial_{M} \chi^a + f^a{}_{b c}A^b_{M}\chi^c$ (where $f^a{}_{b c}$ are the structure constants of $E_8$), and extracting the fields $\tilde{C}{}^I$ and $C^I$ as low-energy massless modes of $\chi$ and $A_m$, respectively. Now, the manner in which $\chi$ and $A_m$ are decomposed is described by representations $(\mathcal{R}^i_{G}, \mathcal{R}^i_{H})$ of $G\times H$, where the index $i=1,2,3$ refers to each of the three fields involved in the coupling and $\mathcal{R}^1_G \otimes \mathcal{R}^2_G \otimes \mathcal{R}^3_G$ forms a singlet under $G$. 
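The role of the structure constants in projecting onto a $G$-singlet, as required for the Yukawa couplings discussed below, can be illustrated in a minimal analogue: for $G = SU(3)$, the invariant coupling $\mathbf{3}\otimes\mathbf{3}\otimes\mathbf{3} \rightarrow \mathbf{1}$ is the $\epsilon$-tensor, and its first-order variation under an $\mathfrak{su}(3)$ transformation vanishes precisely because the generator is traceless. This numerical sketch is only an analogue, not tied to the $E_8$ structure constants of the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Levi-Civita tensor: the invariant coupling 3 x 3 x 3 -> 1 of SU(3)
eps = np.zeros((3, 3, 3))
for (i, j, k), sign in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                        ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = sign

def random_su3():
    # random anti-Hermitian traceless matrix, i.e. an su(3) Lie algebra element
    M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    X = (M - M.conj().T) / 2
    return X - np.trace(X) / 3 * np.eye(3)

def coupling(u, v, w):
    return np.einsum('abc,a,b,c->', eps, u, v, w)

u, v, w = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3)]
X = random_su3()

# First-order variation under u -> u + X u (and similarly for v, w):
# the terms sum to tr(X) * coupling(u, v, w), which vanishes since tr(X) = 0
delta = coupling(X @ u, v, w) + coupling(u, X @ v, w) + coupling(u, v, X @ w)
assert abs(delta) < 1e-10
```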
With the expansion of $\chi$ and $A_{\overline{m}}$ into bases of harmonic $(0,1)$-forms $\nu_{i,I} \in H^1(X, V_{\mathcal{R}^i_G})$, as in \eqref{expansiona} and \eqref{decompgaugino}, and using the definition \eqref{holomorphic30} of $\Omega$, one obtains the compactified formula \begin{eqnarray} \label{yukcouplingdimred} S_{\textrm{Yuk}} \sim \int_{M_4} \lambda_{I J K}\tilde{C}{}_1^I \tilde{C}_2^J C_3^K, \,\ \textrm{where} \,\ \lambda_{I J K} =\int_X \Omega \wedge \nu^a_{1,I} \wedge \nu^b_{2,J} \wedge \nu^c_{3,K} f_{a b c}, \end{eqnarray} \noindent with bundle indices $a,b,c$ running over the dimension of each representation $\mathcal{R}_G^i$. Overall, the structure constants $f_{abc}$ ensure that $\lambda_{IJK}$ is invariant under $G$. The implications of Eq.~\eqref{yukcouplingdimred} are profound. Since $\lambda_{I J K}$ is a quasi-topological quantity, it can be evaluated without knowledge of the internal metric $g_{m \overline{n}}$ or the connections on the bundle. However, one has to know the specific values of harmonic forms $\nu_{i,I}$, which is in general not easily achievable. By contrast, the matter field K\"ahler metric $G_{IJ}$ depends on $g_{m \overline{n}}$, as seen from Eq.~\eqref{matterfieldmetric}, so without a precise geometrical description of $X$, the fields cannot be canonically normalised. In the next chapter, we will continue the discussion on holomorphic Yukawa couplings in more depth, as they are the main focus of this thesis. Having established the holomorphic Yukawa couplings, the superpotential and the matter field K\"ahler potential are given by $W = \lambda^{(ijk)}_{IJK} C^I_i C^J_j C^K_k$ and $K^{(m)} = G^{(i)}_{IJ} C_i^I \overline{C}{}^J_i $, respectively (where by $C^I_i$ we now refer to chiral superfields), and one can see that $W$ is in agreement with the Gukov-Vafa-Witten expression \begin{eqnarray} W = \int_X \Omega \wedge H \, . 
\end{eqnarray} \noindent Finally, the heterotic compactification is not complete without dimensional reduction of the 10d interaction terms \begin{multline} S_{\textrm{int}} = - \dfrac{1}{2 \kappa^2}\int d^{10}x \sqrt{-g} e^{-2\phi} \bigg[\dfrac{1}{12}\big(\overline{\psi}_M\Gamma^{MNPQR}\psi_R+6 \overline{\psi}{}^N\Gamma^P \psi^Q - \\ - \sqrt{2} \overline{\psi}_M \Gamma^{NPQ}\Gamma^M \lambda\big) H_{NPQ} + \dfrac{1}{\sqrt{2}} \overline{\psi}_M \Gamma^N \Gamma^M \lambda \partial_N \phi - \\ - \dfrac{\alpha'}{8} \textrm{Tr}\big(\overline{\chi} \Gamma^{MNP} \chi\big) H_{MNP} - \dfrac{\alpha'}{2} \textrm{Tr} \big(\overline{\chi}\Gamma^M \Gamma^{NP} \big(\psi_M+\dfrac{\sqrt{2}}{12}\Gamma_M \lambda\big) F_{NP}\big)\bigg], \end{multline} \noindent wherein the first component is responsible for the 4d gravitino mass term \begin{eqnarray} S \sim \int_{M_4} \overline{\psi}_{\mu} \overline{\sigma}^{\mu \nu} \psi_{\nu} e^{K/2} W \, , \end{eqnarray} \noindent while other components of the action give gravitino-fermion interactions and D-terms. \subsection{Moduli stabilisation} As stated at the beginning of this chapter, the presence of moduli fields in the low-energy spectrum is one of the most pressing problems of string compactification. From a phenomenological standpoint, the moduli affect the predictivity of the theory, because they have no potential, so their vevs can vary continuously and arbitrarily, as time-dependent parameters. Moreover, moduli fields can mediate certain long-range forces, for which there is no experimental evidence. For these reasons, moduli must be lifted from the low-energy spectrum. In the context of heterotic compactification, solutions to the moduli problem have been given in Refs.~\cite{aglomoduli, aglomoduli2}, where all geometrical moduli are stabilised, with the exception of one linear combination. 
The guiding idea is that the Hermitian Yang-Mills equations \eqref{hermitianyangmills} constrain the moduli space by requiring certain F- and D-terms in the effective 4d potential to vanish. Such terms explicitly descend from the 10d action, namely from a component of it that reads\footnote{Here, $S_{\textrm{part.}}$ can be obtained by using the integrability condition on the modified Bianchi identity~\eqref{modifiedbianchi} and then substituting the result into the $\alpha'$-dependent part of the action.} \begin{eqnarray} S_{\textrm{part.}} = - \dfrac{1}{2 \kappa^2}\dfrac{\alpha'}{4} \int d^{10} x \sqrt{-g} \left( -\dfrac{1}{2}\textrm{Tr}(g^{m \overline{n}}F_{m \overline{n}})^2 + \textrm{Tr}(g^{m \overline{m}}g^{n \overline{n}}F_{m n}F_{\overline{m}\overline{n}})\right). \end{eqnarray} \noindent Whenever a deformation of complex structure fails to preserve $F_{mn} = F_{\overline{m}\overline{n}}=0$, at least one F-term becomes non-zero, thus signalling that the modulus of the deformation should not belong to the low-energy theory. Similarly, the failure of metric deformations to satisfy $g^{m \overline{n}} F_{m\overline{n}}=0$ is correlated with non-vanishing D-terms, which stabilise K\"ahler moduli. In general, supersymmetric and non-supersymmetric regions of the K\"ahler cone are separated by ``walls of stability'', on which the bundle $V$ must split into direct sums of smaller components, in order to preserve supersymmetry. Such regions, in particular, provide important applications for model building and moduli stabilisation. Other stabilisation techniques involve non-perturbative effects such as gaugino condensation and membrane instantons. In the end, a viable theory would have to ensure stabilisation of all moduli, including the remaining linear combination and the $h^1(V \otimes V^*)$ bundle moduli. 
As a work on string phenomenology, this thesis is concerned to some extent with the problem of moduli stabilisation; however, instead of addressing the problem explicitly, we focus on the dependence of the Yukawa couplings on the moduli. In certain cases, the Yukawa couplings vanish, and this behaviour is independent of the moduli. When they do not vanish, the Yukawa couplings are expressed as functions of the moduli, which can be combined with results from moduli stabilisation. It is also possible to reverse the logic: one can determine for which moduli values reasonable Yukawa couplings are obtained, and these are the values at which the moduli need to be stabilised. For example, a light Higgs usually only occurs for special choices of complex structure, which can be inferred from our results. \subsection{Wilson lines} \noindent The problem of descending from a GUT group $H_{\textrm{GUT}}$ down to the Standard Model is resolved in heterotic string theory by turning on Wilson lines. By definition, Wilson lines are configurations of internal gauge fields with vanishing field strength, but non-vanishing parallel transport around non-contractible loops. For example, if $\gamma$ represents a non-trivial element of the fundamental group $\pi_1(X)$, then the Wilson line induces a homomorphism $\varphi: \pi_1(X) \rightarrow H_{\textrm{GUT}}$ through the path-ordered exponential \begin{eqnarray} U_{\gamma} = P \,\textrm{exp}\left( \oint_{\gamma} A_m d y^m \right), \end{eqnarray} \noindent thus embedding gauge-invariant observables into the GUT group. At low energies, $H_{\textrm{GUT}}$ is broken by the vevs $\langle A_m \rangle$ down to the subgroup commuting with the image of $\varphi$. The advantage of using Wilson lines compared to conventional symmetry breaking is that no additional Higgs bosons are introduced; instead, the necessary ingredients are already present in the topology of the internal manifold. 
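The group-theoretic effect of such a symmetry breaking can be made concrete with a hypothetical example: a discrete Wilson line in $SU(5)$ with eigenvalue pattern $(a,a,a,b,b)$, $a \neq b$, commutes exactly with the Standard-Model-like subgroup $S(U(3)\times U(2))$, of dimension $8+3+1 = 12$. The specific values $a = e^{2\pi i/3}$, $b = -1$ below (generating a $\mathbb{Z}_6$) are an illustrative choice, not taken from a particular model:

```python
import numpy as np

# Hypothetical Wilson line in SU(5) with eigenvalue pattern (a, a, a, b, b);
# a^3 b^2 = 1 guarantees det W = 1, and a != b is needed for actual breaking
a = np.exp(2j * np.pi / 3)
b = -1.0
W = np.diag([a, a, a, b, b])
assert np.isclose(np.linalg.det(W), 1)

# A matrix unit E_ij conjugates to (w_i / w_j) E_ij under W, so it commutes
# with W iff w_i = w_j; count the surviving matrix units
w = np.diag(W)
surviving = sum(1 for i in range(5) for j in range(5) if np.isclose(w[i], w[j]))

# 13 matrix units survive (a 3x3 block and a 2x2 block); tracelessness of
# su(5) removes one combination, leaving dim SU(3) + dim SU(2) + dim U(1)
dim_commutant = surviving - 1
assert dim_commutant == 12  # = 8 + 3 + 1
```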
The issue however is that the CICYs we defined so far in Section~\ref{cicysection} are simply connected, i.e. they have a vanishing fundamental group. Nevertheless, given a simply-connected manifold $X$, it is possible to construct a non-simply connected space by dividing $X$ by a freely acting\footnote{The action of $\Gamma$ is called free if it has no fixed points in $X$.} discrete symmetry $\Gamma$. In this case, the fundamental group of $X/\Gamma$ is $\Gamma$ and the application of the Lefschetz fixed point theorem ensures that $X/\Gamma$ is indeed a Calabi-Yau manifold.\footnote{More precisely, for any element $g \in \Gamma$, the induced map $g^*$ on cohomology preserves the holomorphic $(3,0)$-form $\Omega$, due to the Lefschetz fixed point theorem $\sum_{q=0}^3 (-1)^q \textrm{Tr} \left(g^* \vert_{H^{q,0}(X)}\right) = 0$.} If $\vert \Gamma \vert$ is the order of the discrete group, then the Euler number of $X/\Gamma$ is $\chi(X)/\vert \Gamma \vert$, and similarly, the index of the bundle $\tilde{V} \rightarrow X/\Gamma$ descending from $V$ is given by $\textrm{ind}(V)/\vert \Gamma \vert$. This means that for a realistic GUT model with three generations of particles, the requirement is that \begin{eqnarray} \vert \textrm{ind}(V) \vert = 3 \vert \Gamma \vert, \end{eqnarray} \noindent where we employ here the data from Table~\ref{mytablenow} and use the fact that for an $SU(5)$ bundle, $\textrm{ind}(\wedge^2 V)=\textrm{ind}(V)$. Altogether, the topological constraints for the vector bundle $V$ are summarised in Table~\ref{tablerestrictions}. 
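As a worked example of the quotient arithmetic, consider the quintic hypersurface in $\mathbb{P}^4$, whose total Chern class follows from adjunction, $c(TX) = (1+H)^5/(1+5H)$, giving $\chi = -200$. Special quintics admit a freely acting $\mathbb{Z}_5 \times \mathbb{Z}_5$ symmetry, so for the standard embedding the family count $\vert\chi\vert/2$ drops from $100$ on $X$ to $4$ on the quotient (a sketch of the arithmetic only):

```python
import sympy as sp

H = sp.symbols('H')

# Total Chern class of the quintic X in P^4 from the adjunction sequence,
# truncated at H^3 since dim_C X = 3: c(TX) = (1+H)^5 / (1+5H)
c = sp.series((1 + H)**5 / (1 + 5 * H), H, 0, 4).removeO()
c3 = c.coeff(H, 3)            # coefficient of H^3 in c3(TX)
chi = c3 * 5                  # chi = int_X c3(TX), with int_X H^3 = 5
assert chi == -200

# Standard embedding: |chi|/2 families on X; on a free quotient X/Gamma,
# the Euler number becomes chi / |Gamma|
order = 25                    # |Gamma| for a freely acting Z5 x Z5
families_on_quotient = abs(chi // order) // 2
assert families_on_quotient == 4
```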
\begin{table}[h] \begin{center} \begin{tabular}{|l|l|}\hline Physical requirement & Topological constraint \\\hline\hline GUT group & $c_1(V)=0$ \\ \hline Anomaly cancellation & $c_2(TX) - c_2(V) \geq 0 $ \\ \hline Supersymmetry & $V$ is poly-stable \\ \hline Three generations & $\vert \textrm{ind}(V) \vert = 3 \vert \Gamma \vert$ \\ \hline \end{tabular} \end{center} \caption{\it Phenomenological constraints for a holomorphic vector bundle $V$ on a Calabi-Yau threefold $X$, in the context of heterotic compactification.}\label{tablerestrictions} \end{table} In the next chapter we will show how these conditions are implemented if $V$ is a direct sum of line bundles. This concludes our review, and we can now proceed to the main topic of the thesis. \include{chapter2} \include{chapter3} \chapter{Matter Field K\"ahler Metric from Localisation} \label{kahlerchapter} The computation of physical Yukawa couplings from string theory is notoriously difficult, mainly because methods to compute the matter field K\"ahler metric are lacking. In this chapter we report some progress in this direction. We outline a method to calculate the matter field K\"ahler metric in the context of Calabi-Yau compactifications of the heterotic string with Abelian internal gauge fluxes. So far, the only class of heterotic Calabi-Yau models for which an analytic expression for the matter field K\"ahler metric is known consists of models with the standard embedding of the spin connection into the gauge connection. In this case, the matter field K\"ahler metrics for the $(1, 1)$ and $(2, 1)$ matter fields are essentially given by the metrics on the corresponding moduli spaces \cite{Candelas:1987se, Candelas:1990pi}. Recently, Candelas, de la Ossa and McOrist \cite{candelasmetric} (see also Ref. \cite{mcoristeffective}) have proposed an $\alpha'$-correction of the heterotic moduli space metric, which includes bundle moduli. 
This information may be used to infer the K\"ahler metric of matter fields that arise from bundle moduli. However, we will not pursue this method here, since our main interest is not in bundle moduli but in the gauge matter fields which can account for the physical particles. There are two other avenues for calculating the matter field K\"ahler metric, suggested by results in the literature. The first one relies on Donaldson’s numerical algorithm to determine the Ricci-flat Calabi-Yau metric \cite{donaldson1, donaldson2, donaldson3} and subsequent work applying this algorithm to various explicit examples and to the numerical calculation of the Hermitian Yang-Mills connection on vector bundles \cite{wang, headrick1, douglas1, headrick2, headrick3, douglas2, numericallara1, numericallara2}. At present, this approach has not been pushed as far as numerically calculating physical Yukawa couplings. However, it appears that this is possible in principle and, while constituting a very significant computational challenge, would be very worthwhile carrying out. A disadvantage of this method is that it will only provide the Yukawa couplings at specific points in moduli space and that extracting information about their moduli dependence will be quite difficult. In this chapter, we will focus on a different approach, based on localisation due to flux, which can lead to analytic results for the matter field K\"ahler metric. This method is motivated by work in F-theory \cite{heckmanvafa, fontibanez, conlonpalti, aparicio}, where the localisation of matter fields on the intersection curves of D7-branes and Yukawa couplings on intersections of such curves facilitates local computations of the Yukawa couplings which do not require knowledge of the Ricci-flat Calabi-Yau metric. 
It is not immediately obvious whether and how this approach might transfer to the heterotic case, since heterotic compactifications lack the intuitive local picture, related to intersecting D-brane models, which is available in F-theory. In this chapter, we will show, using methods from differential geometry developed in previous chapters (see also \cite{yukunification}), that localisation of wave functions can nevertheless arise in heterotic models. The underlying mechanism is, in fact, similar to the one employed in F-theory. Sufficiently large flux -- in the heterotic case, $E_8 \times E_8$ gauge flux -- leads to a localisation of wave functions which allows calculating their normalisation locally, without recourse to the Ricci-flat Calabi-Yau metric. To carry this out explicitly, we will proceed in three steps. First, we derive the general formula for the matter field K\"ahler metric for heterotic Calabi-Yau compactifications, as hinted in Section~\ref{mattersectorsection}. This formula, which provides the matter field K\"ahler metric in terms of an integral over harmonic bundle-valued forms, is not, in itself, new (see, for example, Ref. \cite{boundaryinflation}). Our rederivation serves two purposes. Firstly, we would like to fix conventions and factors, as this will be required for an accurate calculation of the physical Yukawa couplings and, secondly, we will show explicitly how this formula for the matter field K\"ahler metric is consistent with four-dimensional $N = 1$ supergravity. We observe that this consistency already determines the dependence of the matter field K\"ahler metric on the T-moduli, a result which, to our knowledge, has not been pointed out in the literature so far. The second step is to show how (Abelian) $E_8 \times E_8$ gauge flux can lead to a localisation of the matter field wave functions around certain points of the Calabi-Yau manifold. 
We will first demonstrate this for toy examples based on line bundles on $\mathbb{P}^1$, as well as on products of projective spaces and then show that the effect generalises to Calabi-Yau manifolds. As a result, we obtain local matter field wave functions on Calabi-Yau manifolds and explicit results for their normalisation integrals. The final step is to express these results in terms of the global moduli of the Calabi-Yau manifold. We show that this can indeed be accomplished by relating global to local quantities on the Calabi-Yau manifold and by using information from four-dimensional $N = 1$ supersymmetry. In this way, we can obtain explicit results for the matter field K\"ahler metric as a function of the Calabi-Yau moduli and this is carried out for the Calabi-Yau hyper-surface in $\mathbb{P}^1 \times \mathbb{P}^3$. We believe this is the first time such a result for the matter field K\"ahler metric as a function of the properly defined moduli has been obtained in any geometrical string compactification, including F-theory. The plan of the chapter is as follows. In the next section, we sketch the supergravity calculation which leads to the general formula for the matter field K\"ahler metric and we discuss the implications from four-dimensional $N = 1$ supersymmetry. In Section~\ref{kahlersec3}, we show how gauge flux leads to the localisation of matter field wave functions, starting with toy examples on $\mathbb{P}^1$ and then generalising to products of projective spaces. Section \ref{kahlersec4} contains the local calculation of the wave function normalisation on a patch of the Calabi-Yau manifold. In Section \ref{kahlersec5}, we express this result in terms of the properly defined moduli by relating global and local quantities and we obtain an explicit result for the matter field K\"ahler metric on Calabi-Yau hyper-surfaces in $\mathbb{P}^1 \times \mathbb{P}^3$. We conclude in Section \ref{kahlersec6}. 
\section{The matter field K\"ahler metric in heterotic compactifications} \label{kahlersec2} Our first step is to derive a general formula for the matter field K\"ahler metric, in terms of the underlying geometrical data of the Calabi-Yau manifold and the gauge bundle. The basic structure of this formula has been known for some time (see, for example, Ref. \cite{boundaryinflation}) and our re-derivation here serves two purposes. Firstly, we would like to fix notations and conventions so that our result is accurate, as is required for a detailed calculation of Yukawa couplings. Secondly, we would like to explore the constraints on the matter field K\"ahler metric which arise from four-dimensional $N = 1$ supergravity. The starting point is ten-dimensional $N = 1$ supergravity coupled to a ten-dimensional $E_8 \times E_8$ super Yang-Mills theory. To first order in $\alpha'$ and at the two-derivative level, the bosonic part of the action is given by Eq.~\eqref{10daction}. As in Section~\ref{n=1susy}, we consider the reduction of this action on a Calabi-Yau three-fold $X$, with Ricci-flat metric $g^{(6)}$ and a holomorphic bundle $V \rightarrow X$ with a connection $A^{(6)}$ that satisfies the Hermitian Yang-Mills equations \eqref{hermitianyangmills}. Let us introduce the K\"ahler form $J$ on $X$, related to the Ricci-flat metric $g^{(6)}$ on $X$ by $g^{(6)}_{m \overline{n}} = - i J_{m \overline{n}}$, and a basis $\lbrace J_i\rbrace$ of harmonic $(1,1)$-forms, where $i = 1,... , h^{1,1}(X)$. The reader is reminded that $J$ and the NS two-form $B$ can be expanded as \begin{eqnarray} J = t^i J_i \, , \qquad B = B^{(4)} + \tau^i J_i \, , \end{eqnarray} \noindent where $t^i$ are the K\"ahler moduli, $\tau^i$ are their axionic partners and $B^{(4)}$ is the four-dimensional two-form, dual to a scalar $\sigma$. In addition, we have the zero mode $\phi^{(4)}$ of the ten-dimensional dilaton $\phi$, as well as the complex structure moduli $z^a$, where $a = 1, ... 
, h^{2,1}(X)$. In the absence of matter fields, these bosonic fields fit into four-dimensional $N = 1$ chiral multiplets as \begin{eqnarray} \label{modulichiralmultiplets} S = e^{- 2 \phi^{(4)}} + i \sigma \, , \qquad T^i = t^i + i \tau^i \, , \qquad Z^a = z^a \, . \end{eqnarray} \noindent Also, it is important to notice that the Calabi-Yau volume is given by \begin{eqnarray} \label{cyvolume2} \mathcal{V} = \int_X d^6 x \sqrt{ g^{(6)}} = \dfrac{1}{6} \mathcal{K}\, , \quad \mathcal{K} = d_{ijk} t^i t^j t^k \, , \quad d_{i j k} = \int_X J_i \wedge J_j \wedge J_k \, , \end{eqnarray} \noindent where $d_{ijk}$ are the triple intersection numbers of $X$, thus giving the following expression for the K\"ahler moduli space metric \begin{eqnarray} \mathcal{G}_{ij} = - \dfrac{1}{4} \dfrac{\partial^2}{\partial t^i \partial t^j} \, \textrm{ln} \, \mathcal{K} = - \dfrac{3}{2} \left( \dfrac{\mathcal{K}_{ij}}{\mathcal{K}} - \dfrac{3}{2}\dfrac{\mathcal{K}_{i}\mathcal{K}_{j}}{\mathcal{K}^2}\right) \, , \end{eqnarray} \noindent where $\mathcal{K}_i = d_{i j k} t^j t^k $ and $\mathcal{K}_{i j} = d_{i j k} t^k $. Next, we obtain matter fields $C^I$ by expanding the gauge field around the vacuum. The result is imported from Eq.~\eqref{expansiona} in a simplified form \begin{eqnarray} \label{Aexpansion} A = A^{(6)}+C^I \nu_I \, , \end{eqnarray} \noindent where $\nu_I$ are harmonic one-forms which take values in the bundle $V$. It is important to emphasise that the correct matter field metric has to be computed relative to \textit{harmonic} forms $\nu_I$ and this is, in fact, how the dependence on the Ricci-flat metric and the Hermitian Yang-Mills connection comes about. The fields $C^I$ each form the bosonic part of an $N = 1$ chiral supermultiplet. It is known that the definition of the $T^i$ superfields in Eq.~\eqref{modulichiralmultiplets} has to be adjusted in the presence of these matter fields. 
In the universal case with only one T-modulus and one matter field $C$, the required correction to Eq.~\eqref{modulichiralmultiplets} has been found to be proportional to $\vert C \vert^2$ (see, for example, Ref. \cite{lukas1997}). For our general case, we therefore start by modifying the definition of the T-moduli in Eq.~\eqref{modulichiralmultiplets} by writing \begin{eqnarray} \label{Tagain} T^i = t^i + i \tau^i + \alpha' \; \Gamma^i_{I J} C^I \overline{C}^J \, , \end{eqnarray} \noindent where $\Gamma^i_{IJ}$ is a set of (potentially moduli-dependent) coefficients to be determined.\footnote{The dilaton superfield $S$ receives a similar correction in the presence of matter fields \cite{lukas1997}, but this arises at one-loop level and will not be of relevance here.} To our knowledge, no general expression for $\Gamma^i_{I J}$ has been obtained in the literature so far. The kinetic terms of the above superfields derive from a K\"ahler potential of the general form \begin{eqnarray} \label{Kaehlercomplete} K = - \textrm{ln} \; (S+\overline{S}) + K_{\textrm{cs}} - \textrm{ln} \; (d_{ijk}(T^i+\overline{T}{}^i)(T^j+\overline{T}{}^j)(T^k+\overline{T}{}^k)) + \alpha' G_{I J} C^I \overline{C}{}^J , \end{eqnarray} \noindent where $K_{\textrm{cs}}$ is the K\"ahler potential for the complex structure moduli $Z^a$, whose explicit form is well-known but is not relevant to our present discussion, and $G_{IJ}$ is the (moduli-dependent) matter field K\"ahler metric we would like to determine. The general task is now to compute the kinetic terms which result from this K\"ahler potential, insert the definitions of $S$ in Eq.~\eqref{modulichiralmultiplets} and of $T^i$ in Eq. \eqref{Tagain} and compare the result with what has been obtained from the reduction of the ten-dimensional action \eqref{10daction}. This comparison should lead to explicit expressions for $G_{IJ}$ and $\Gamma^i_{IJ}$. 
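Before carrying out this comparison, it is useful to note that the K\"ahler moduli space metric $\mathcal{G}_{ij}$ obeys the contraction identity $\mathcal{G}_{ij} t^j = \tfrac{3 \mathcal{K}_i}{4 \mathcal{K}}$, which will enter the matching below. Both the closed-form expression for $\mathcal{G}_{ij}$ and this identity can be verified symbolically; the following \texttt{sympy} sketch (an illustration only, not part of the derivation) does so for $h^{1,1}(X) = 2$ with generic intersection numbers:

```python
# Symbolic check of the Kaehler moduli space metric formula
#   G_ij = -1/4 d^2 ln(K)/dt^i dt^j = -3/2 (K_ij/K - 3/2 K_i K_j / K^2)
# and of the contraction G_ij t^j = 3 K_i / (4 K), for h^{1,1}(X) = 2.
import sympy as sp

n = 2
t = list(sp.symbols('t1 t2', positive=True))

# generic symmetric intersection numbers d_{ijk}
d = {}
for i in range(n):
    for j in range(n):
        for k in range(n):
            d[(i, j, k)] = sp.Symbol('d_%d%d%d' % tuple(sorted((i, j, k))))

K = sum(d[(i, j, k)]*t[i]*t[j]*t[k]
        for i in range(n) for j in range(n) for k in range(n))
K_i = [sum(d[(i, j, k)]*t[j]*t[k] for j in range(n) for k in range(n))
       for i in range(n)]
K_ij = [[sum(d[(i, j, k)]*t[k] for k in range(n)) for j in range(n)]
        for i in range(n)]

# metric from its definition as -1/4 of the Hessian of ln(K) ...
G_def = [[-sp.Rational(1, 4)*sp.diff(sp.log(K), t[i], t[j])
          for j in range(n)] for i in range(n)]
# ... and from the closed-form expression quoted in the text
G_closed = [[-sp.Rational(3, 2)*(K_ij[i][j]/K
             - sp.Rational(3, 2)*K_i[i]*K_i[j]/K**2)
             for j in range(n)] for i in range(n)]
```

Both expressions agree identically, and contracting either with $t^j$ reproduces $3\mathcal{K}_i/(4\mathcal{K})$, the relation equivalent to $\mathcal{G}^{ij}\,\tfrac{3\mathcal{K}_j}{4\mathcal{K}} = t^i$ used later in this section.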
A quick look at the K\"ahler potential \eqref{Kaehlercomplete} shows that achieving this match is by no means a trivial matter. The matter field K\"ahler metric $G_{IJ}$ depends on the $T$-moduli and, hence, the kinetic terms from \eqref{Kaehlercomplete} can be expected to include cross-terms of the form $\partial^{\mu}t^i \partial_{\mu} C^I$. However, such cross-terms can clearly not arise from the dimensional reduction of the 10-dimensional action \eqref{10daction} and, hence, there must be non-trivial cancellations which involve the derivatives of $G_{IJ}$ and $\Gamma^i_{IJ}$. We find that this issue can be resolved and indeed a complete match between the reduced ten-dimensional action \eqref{10daction} and the four-dimensional K\"ahler potential \eqref{Kaehlercomplete} can be achieved provided the following three requirements are satisfied. \begin{itemize} \item The coefficients $\Gamma^i_{IJ}$ which appear in the definition \eqref{Tagain} of the $T^i$ superfields are given by \begin{eqnarray} \label{Gammarestated} \Gamma^i_{IJ} = - \dfrac{1}{2} \mathcal{G}^{i j} \dfrac{\partial G_{I J}}{\partial \overline{T}{}^j} \, , \end{eqnarray} \noindent where $\mathcal{G}^{ij}$ is the inverse of the K\"ahler moduli space metric $\mathcal{G}_{ij}$. \item The matter field K\"ahler metric is given by \begin{eqnarray} \label{GIJrestated} G_{I J} = \dfrac{1}{2\mathcal{V}} \int_X \nu_I \wedge \bar{\star}_V (\nu_J) \, , \end{eqnarray} \noindent where $\bar{\star}_V$ refers to a Hodge dual combined with a complex conjugation and an action of the hermitian bundle metric on $V$. 
\item Since the Hodge dual of a $(1, 0)$-form $\rho$ on a Calabi-Yau manifold can be written as $\star \rho = - \tfrac{i}{2} J \wedge J \wedge \rho$, the result \eqref{GIJrestated} for the matter field K\"ahler metric can be re-written as \begin{eqnarray} \label{tdependence} G_{IJ} = - \dfrac{3 i t^i t^j}{2 \mathcal{K}} \Lambda_{ijIJ} \, , \qquad \Lambda_{ijIJ} = \int_X J_i \wedge J_j \wedge \nu_I \wedge (H \overline{\nu}_J) \, , \end{eqnarray} \noindent where $H$ is the hermitian bundle metric on $V$. The final requirement for a match between the dimensionally reduced ten-dimensional theory and the four-dimensional theory \eqref{Kaehlercomplete} can then be stated by saying that the above integrals $\Lambda_{ijIJ}$ do not explicitly depend on the K\"ahler moduli $t^i$. \end{itemize} \noindent The above result means that the K\"ahler moduli dependence of the matter field metric is completely determined as indicated in the first equation \eqref{tdependence}, while the remaining integrals $\Lambda_{ijIJ}$ are $t^i$-independent but can still be functions of the complex structure moduli. To our knowledge, this is a new result which is of considerable relevance for the structure of the matter field K\"ahler metric and the physical Yukawa couplings. Note that the $t^i$ dependence of $G_{IJ}$ in Eq. \eqref{tdependence} is homogeneous of degree $-1$, as expected on general grounds. It is worth noting that the K\"ahler potential \eqref{Kaehlercomplete} with the matter field K\"ahler metric as given in Eq. 
\eqref{tdependence} can, alternatively, also be written in the form \begin{equation} \label{altK} \begin{array}{l} K = - \textrm{ln} \; (S+\overline{S}) + K_{\textrm{cs}} - \textrm{ln} \; (d_{ijk}(T^i+\overline{T}{}^i - \gamma^i)(T^j+\overline{T}{}^j- \gamma^j)(T^k+\overline{T}{}^k- \gamma^k)) \, \\[1mm] \gamma^i = 2 \alpha' \, \Gamma^{i}_{I J} C^I \overline{C}{}^J \, , \end{array} \end{equation} \noindent provided that terms of higher than quadratic order in the matter field $C^I$ are neglected. This can be seen by expanding the logarithm in Eq.~\eqref{altK} to leading order in $\gamma^i$ and by using $\tfrac{3 \mathcal{K}_i}{\mathcal{K}} \Gamma^i_{I J} = G_{I J}$. (The latter identity follows from $\mathcal{G}^{ij} \tfrac{3 \mathcal{K}_j}{4 \mathcal{K}} = t^i$, the fact that $G_{IJ}$ is homogeneous of degree $-1$ in $t^i$ and the result \eqref{Gammarestated} for $\Gamma^i_{IJ}$). This form of the K\"ahler potential, together with the definition \eqref{Tagain} of the fields $T^i$, means that, in terms of the underlying geometrical K\"ahler moduli $t^i$, the dependence on the matter fields $C^I$ cancels. Indeed, inserting the definition \eqref{Tagain} of the $T^i$ moduli into Eq. \eqref{altK} turns the last logarithm into $- \textrm{ln} \,(8\mathcal{K})$. That this part of the K\"ahler potential can be written as the negative logarithm of the Calabi-Yau volume is in fact expected and provides a check of our calculation. \section{Localisation of matter field wave functions on projective spaces} \label{kahlersec3} As a warm-up, we first discuss wave function normalisation on $\mathbb{P}^n$ and products of projective spaces, beginning with the simplest case of $\mathbb{P}^1$. (For a related discussion, in the context of F-theory, see Ref. \cite{paltiwavefunctions}.) In doing so we have two basic motivations in mind. 
First, considering projective space and $\mathbb{P}^1$ in particular provides us with a toy model for the actual Calabi-Yau case which we will tackle later. From this point of view, the following discussion will provide some intuition as to when wave function localisation occurs and when it leads to a good approximation for the normalisation integrals. On the other hand, projective spaces and their products provide the ambient spaces for the Calabi-Yau manifolds of interest and, hence, this section will also set up some of the requisite notation and results we will be using later. \subsection{Wave functions on $\mathbb{P}^1$} Homogeneous coordinates on $\mathbb{P}^1$ are denoted by $x_0$, $x_1$, the affine coordinates on the patch $\lbrace x_0 \neq 0\rbrace$ by $z = x_1/x_0$ and we also define $\kappa = 1 + \vert z\vert^2$. For simplicity, we will write all quantities in terms of the affine coordinate $z$ and we will ensure they are globally well-defined by demanding the correct transformation property under the transition $z \rightarrow 1/z$. In terms of $z$, the standard Fubini-Study K\"ahler potential and K\"ahler form can be written as \begin{eqnarray} \label{F-SKpotential} K = \dfrac{i}{2 \pi}\textrm{ln} \, \kappa \, , \qquad J = \partial \overline{\partial} K = \dfrac{i}{2 \pi \kappa^2} d z \wedge d \overline{z} \, . \end{eqnarray} \noindent Here, $J$ has the standard normalisation, that is, $\int_{\mathbb{P}^1} J =1$. The associated Fubini-Study metric is K\"ahler-Einstein and, hence, the closest analogue of a Ricci-flat Calabi-Yau metric we can hope for on $\mathbb{P}^1$. 
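As a quick symbolic cross-check of these conventions (an illustration only, not part of the original discussion), one can verify with \texttt{sympy} that $\partial \overline{\partial}$ of the Fubini-Study K\"ahler potential reproduces the coefficient $1/\kappa^2$ in Eq.~\eqref{F-SKpotential}, and that $J$ indeed integrates to one:

```python
# Check: d/dz d/dzbar of ln(kappa) gives 1/kappa^2, and int_{P^1} J = 1.
# Here z and zbar are treated as independent holomorphic variables.
import sympy as sp

z, zb = sp.symbols('z zbar')
kappa = 1 + z*zb

# coefficient of dz ^ dzbar in \partial\bar\partial ln(kappa)
coeff = sp.simplify(sp.diff(sp.log(kappa), z, zb))

# In real coordinates J = dx dy / (pi kappa^2); the radial integral over C
r = sp.symbols('r', positive=True)
volume = sp.integrate(2*r/(1 + r**2)**2, (r, 0, sp.oo))  # equals int_{P^1} J
```

The first quantity equals $1/\kappa^2$ identically and the second evaluates to $1$, matching the stated normalisation.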
We are interested in line bundles $L = \mathcal{O}_{\mathbb{P}^1}(k)$ on $\mathbb{P}^1$ with first Chern class $c_1(L) = k J$ on which we introduce a hermitian structure with the bundle metric and the associated (Chern) connection and field strength given by \begin{eqnarray} H=\kappa^{-k} \, , \qquad A = \partial \, \textrm{ln} \bar{H} = - \dfrac{k \overline{z}}{\kappa} dz \, , \qquad F= d A = \overline{\partial} \partial \, \textrm{ln} \bar{H} = - 2 \pi i k J \, . \end{eqnarray} \noindent The analogues of the harmonic forms $\nu_I$ in Eq.~\eqref{Aexpansion} associated to matter fields are harmonic $L$-valued forms $\alpha$, that is, forms satisfying the equations \begin{eqnarray} \label{harmoniceqs} \overline{\partial} \alpha = 0 \, , \qquad \partial(\bar{H}\star \alpha) = 0 \, , \end{eqnarray} \noindent where the Hodge star is taken with respect to the Fubini-Study metric. We would like to compute their normalisation integrals \begin{eqnarray} \label{normalisationintegral} \langle \alpha, \beta \rangle = \int_{\mathbb{P}^1} \alpha \wedge \star (H \overline{\beta}) \, , \end{eqnarray} \noindent the analogue of the matter field K\"ahler metric \eqref{GIJrestated}. These harmonic forms are in one-to-one correspondence with the bundle cohomologies $H^p(\mathbb{P}^1, L)$ and, depending on the value of $k$, we should distinguish three cases. \begin{itemize} \item $k \geq 0$: In this case, the only non-vanishing cohomology of $L$ is $h^0(\mathbb{P}^1, L) = k + 1$, so that the relevant harmonic forms $\alpha$ are $L$-valued zero forms. The relevant solutions to Eqs.~\eqref{harmoniceqs} are explicitly given by the degree $k$ polynomials in $z$. \item $k = -1$: In this case, all cohomologies of $L$ vanish, so there are no harmonic forms. 
\item $k \leq -2$: In this case, the only non-vanishing cohomology of $L$ is $h^1(\mathbb{P}^1,L) = -k - 1$ and the corresponding $L$-valued $(0, 1)$-forms which solve Eqs.~\eqref{harmoniceqs} can be written as $\alpha = \kappa^k h(\overline{z})d \overline{z}$, where $h$ is a polynomial of degree $-k -2$ in $\overline{z}$. In the following, it is useful to work with the monomial basis \begin{eqnarray} \label{monomialbasis} \alpha_q = \kappa^k \overline{z}{}^q d \overline{z} \, , \qquad q = 0, ... , -k - 2 \end{eqnarray} \noindent for these forms. \end{itemize} \noindent Given that the forms $\nu_I$ which appear in the actual reduction \eqref{Aexpansion} are $(0, 1)$-forms, the most relevant case is the last one for $k \leq -2$. In this case, inserting the forms \eqref{monomialbasis} into the normalisation integral \eqref{normalisationintegral} leads to \begin{eqnarray} \langle \alpha_q, \alpha_p \rangle = - i \int_{\mathbb{C}} z^q \overline{z}{}^p \kappa^k d z \wedge d \overline{z} = \dfrac{2 \pi q!}{(-k-1)...(-k-1-q)} \delta_{q p} \, . \end{eqnarray} \noindent In physical terminology, the integer $k$ quantifies the flux and the integer $q$ labels the families of matter fields. It is clear that the above integrals receive their main contribution from a patch near the affine origin $z \simeq 0$, provided that the flux $\vert k \vert$ is sufficiently large and the family number $q$ is sufficiently small. In this case, it seems that the above integrals can be approximately evaluated locally near $z \simeq 0$, by using the flat metric instead of the Fubini-Study metric as well as the corresponding flat counterparts of the bundle metric and the harmonic forms. Formally, these flat space quantities can be obtained from the exact ones by setting $\kappa$ to one in the expression \eqref{F-SKpotential} for the K\"ahler form and by the replacement $\kappa^k \rightarrow e^{k \vert z \vert^2}$ in the other quantities. 
That is, we use the replacements \begin{equation} \begin{array}{l} J = \dfrac{i}{2 \pi \kappa^2} d z \wedge d \overline{z} \rightarrow \dfrac{i}{2 \pi} d z \wedge d \overline{z} \, , \qquad H = \kappa^{-k} \rightarrow e^{-k\vert z\vert^2} \, , \\[3mm] \alpha_q = \kappa^k \overline{z}{}^q d \overline{z} \rightarrow e^{k \vert z\vert^2} \overline{z}{}^q d \overline{z} \, , \end{array} \end{equation} \noindent to work out the local version of the normalisation integrals, which leads to \begin{eqnarray} \langle \alpha_q, \alpha_p \rangle_{\textrm{loc}} = -i \int_{\mathbb{C}} z^q \overline{z}{}^p e^{k \vert z \vert^2} d z \wedge d \overline{z} = \dfrac{2 \pi q!}{(-k-1)^{q+1}} \delta_{q p} \, . \end{eqnarray} \noindent For the ratio of local to exact normalisation this implies \begin{eqnarray} \dfrac{\langle \alpha_q, \alpha_q \rangle_{\textrm{loc}}}{\langle \alpha_q, \alpha_q \rangle} = \dfrac{(-k-1)...(-k-1-q)}{(-k-1)^{q+1}} = 1- \mathcal{O} \left( \dfrac{q^2}{-k-1}\right) \, . \end{eqnarray} \noindent Hence, as long as the flux $\vert k \vert$ is sufficiently large and the family number satisfies $q^2 \ll \vert k \vert$, the local versions of these integrals do indeed provide a good approximation. The above ratio expressing the accuracy of the local calculation can be roughly evaluated by computing the coefficient of the $\mathcal{O} \big( \tfrac{q^2}{-k-1}\big)$ term. For example, if $\vert k \vert =11$ and $q=1$, then the ratio is $0.9$ and the error is $10 \%$. If $\vert k \vert=101$ and $q=2$, then the ratio is close to $0.97$ and the error is approximately $3 \%$. It is worth noting that a transformation $z \rightarrow 1/z$ to the other standard coordinate patch of $\mathbb{P}^1$ transforms the monomial basis forms $\alpha_q$ into forms of the same type but with the family number changing as $q \rightarrow (-k - 1) - q$. 
This means that families with a large family number $q$ close to $-k - 1$ in the patch $\lbrace x_0 \neq 0\rbrace$ acquire a small family number when transformed to the patch $\lbrace x_1 \neq 0\rbrace$ and, hence, localise at the affine origin of this patch, that is, near $z = \infty$. From this point of view, it is not surprising that families with large $q$ in the patch $\lbrace x_0 \neq 0\rbrace$ cannot be dealt with by a local calculation near $z \simeq 0$. Instead, for such modes, we can carry out a local calculation analogous to the above one but near the affine origin of the patch $\lbrace x_1 \neq 0 \rbrace$. \vspace{3mm} In summary, the harmonic bundle-valued $(0, 1)$-forms for $L = \mathcal{O}_{\mathbb{P}^1}(k)$, where $k \leq -2$, are given by $\alpha_q$ as in Eq.~\eqref{monomialbasis}. For sufficiently large flux $\vert k \vert$, the modes with small family number $q$ localise near the affine origin of the patch $\lbrace x_0 \neq 0 \rbrace$, that is, at $z \simeq 0$, and their normalisation can be obtained from a local calculation near this point. The modes with large family number $q$ localise near the affine origin of the other patch $\lbrace x_1 \neq 0\rbrace$, that is, near $z = \infty$, and their normalisation can be obtained by a similar local calculation around this point. \subsection{Wave functions on products of projective spaces} \label{introducingP1P3} The previous discussion for line bundles on $\mathbb{P}^1$ can be straightforwardly generalised to line bundles on arbitrary products of projective spaces. For the sake of keeping notation simple, we will now illustrate this for the case of $\mathcal{A} = \mathbb{P}^1 \times \mathbb{P}^3$, which is, in fact, the ambient space of the Calabi-Yau manifold on which we focus later. The situation for general products of projective spaces is easily inferred from this discussion. 
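Before setting up the product case, we note that the $\mathbb{P}^1$ normalisation formulae above can be cross-checked symbolically. The following \texttt{sympy} sketch (an illustration only, not part of the derivation) evaluates the exact integral radially, using $u = \vert z \vert^2$, compares it to the closed-form result, and computes the local-to-exact ratio quoted in the error estimates:

```python
# Cross-check of <alpha_q, alpha_q> = 2*pi*q! / [(-k-1)(-k-2)...(-k-1-q)]
# and of the local-to-exact ratio (-k-1)...(-k-1-q) / (-k-1)^(q+1).
import sympy as sp

u = sp.symbols('u', positive=True)

def exact_norm(k, q):
    # radial form of the exact normalisation integral, with u = |z|^2
    return 2*sp.pi*sp.integrate(u**q*(1 + u)**k, (u, 0, sp.oo))

def closed_form(k, q):
    # 2*pi*q! / [(-k-1)(-k-2)...(-k-1-q)]
    denom = sp.Integer(1)
    for j in range(q + 1):
        denom *= (-k - 1 - j)
    return 2*sp.pi*sp.factorial(q)/denom

def ratio_loc(k, q):
    # ratio of local to exact normalisation quoted in the text
    num = sp.Integer(1)
    for j in range(q + 1):
        num *= (-k - 1 - j)
    return num/sp.Integer(-k - 1)**(q + 1)
```

For instance, \texttt{ratio\_loc(-11, 1)} gives $9/10$ and \texttt{ratio\_loc(-101, 2)} gives $0.9702$, reproducing the error estimates quoted above.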
Homogeneous coordinates on $\mathcal{A} = \mathbb{P}^1 \times \mathbb{P}^3$ are denoted by $x_0$, $x_1$ for the $\mathbb{P}^1$ factor and by $y_0$, $y_1$, $y_2$, $y_3$ for $\mathbb{P}^3$. The associated affine coordinates on the patch $\lbrace x_0 \neq 0, y_0 \neq 0\rbrace$ are $z_1 = x_1/x_0$ and $z_{\alpha+1} = y_{\alpha}/y_0$ for $\alpha = 1, 2, 3$, and we define $\kappa_1 = 1 + \vert z_1\vert^2$ and $\kappa_2 = 1 + \sum_{\alpha=2}^4 \vert z_{\alpha} \vert^2$. The Fubini-Study K\"ahler forms for the two projective factors are\footnote{As in Chapters~\ref{tetraquadricchapter} and \ref{chaptern>1codimension}, we will denote quantities defined on the ``ambient space'' $\mathcal{A}$ by a hat, in order to distinguish them from their Calabi-Yau counterparts to be introduced later.} \begin{equation} \begin{array}{l} \hat{J}_1 = \dfrac{i}{2 \pi} \partial \overline{\partial} \, \textrm{ln} \kappa_1 = \dfrac{i}{2 \pi \kappa_1^2} d z_1 \wedge d \overline{z}_1 \, , \\[2mm] \hat{J}_2 = \dfrac{i}{2 \pi} \partial \overline{\partial} \, \textrm{ln} \kappa_2 = \dfrac{i}{2 \pi \kappa_2^2} \sum_{\alpha, \beta = 2}^4 (\kappa_2 \delta_{\alpha \beta} - \overline{z}_{\alpha} z_{\beta}) d z_{\alpha} \wedge d \overline{z}_{\beta} \, , \end{array} \end{equation} \noindent and, more generally, we can introduce the K\"ahler forms \begin{equation} \hat{J} = t_1\hat{J}_1 + t_2\hat{J}_2 \, , \end{equation} \noindent with K\"ahler parameters $t_1 > 0$, $t_2 > 0$ on $\mathcal{A}$. Line bundles $\hat{L} = \mathcal{O}_{\mathcal{A}}(k_1, k_2)$ with first Chern class $c_1(\hat{L}) = k_1\hat{J}_1+k_2\hat{J}_2$ can be equipped with the hermitian bundle metric \begin{eqnarray} \label{hermitianmetriceq3.12} \hat{H}= \kappa_1^{-k_1} \kappa_2^{-k_2} \quad \Rightarrow \quad \hat{F} = \overline{\partial} \partial \, \textrm{ln} \bar{\hat{H}} = - 2 \pi i (k_1\hat{J}_1+k_2\hat{J}_2) \, . 
\end{eqnarray} \noindent Specifically, we are interested in those line bundles $\hat{L}$ with a non-vanishing first cohomology which are precisely those with $k_1 \leq -2$ and $k_2 \geq 0$. In these cases (see also Eq.~\eqref{generalizedbott}), \begin{eqnarray} \label{dimeq3.13} h^1(\mathcal{A},\mathcal{O}_{\mathcal{A}}(k_1,k_2)) = (- k_1 -1) \dfrac{(k_2+3)(k_2+2)(k_2+1)}{6} \, , \end{eqnarray} \noindent and a basis for the associated harmonic $\hat{L}$-valued $(0, 1)$-forms is provided by \begin{eqnarray} \label{basiseq3.14} \hat{\nu}_{\hat{\mathbf{q}}} = \kappa_1^{k_1} \overline{z}{}^{\hat{q}_1}_1 z^{\hat{q}_2}_2 z^{\hat{q}_3}_3 z^{\hat{q}_4}_4 d \overline{z}_1 \, , \end{eqnarray} \noindent where $\hat{\mathbf{q}} = (\hat{q}_1, \hat{q}_2, \hat{q}_3, \hat{q}_4)$ is a vector of non-negative integers which labels the families and whose entries are constrained by $\hat{q}_1 = 0, ..., -k_1-2$ and $\hat{q}_2+\hat{q}_3+\hat{q}_4 \leq k_2$. Given these quantities, the integrand of the normalisation integral is proportional to \begin{eqnarray} \hat{\nu}_{\hat{\mathbf{q}}} \wedge \star (\hat{H} \bar{\hat{\nu}}_{\hat{\mathbf{q}}} ) \sim \kappa_1^{k_1} \kappa_2^{-k_2} \prod_{\alpha=1}^4 \vert z_{\alpha}\vert^{2 \hat{q}_{\alpha}} \, . \end{eqnarray} \noindent Hence, provided the fluxes $\vert k_1 \vert$ and $k_2$ are sufficiently large and the family numbers $\hat{q}_{\alpha}$ sufficiently small, we expect localisation on a patch $\hat{U}$ around the affine origin $z_{\alpha} \simeq 0$. 
In this case, we can again work with the flat limit where the above quantities turn into \begin{equation} \label{localapprox} \begin{array}{lll} \hat{J}_1 \rightarrow \dfrac{i}{2\pi} dz_1 \wedge d \overline{z}_1 \, , & \qquad & \hat{J}_2 \rightarrow \dfrac{i}{2\pi} \sum_{\alpha=2}^4 dz_{\alpha} \wedge d \overline{z}_{\alpha} \, ,\\[3mm] \hat{H} \rightarrow e^{-k_1 \vert z_1 \vert^2 -k_2 \sum_{\alpha=2}^4 \vert z_{\alpha} \vert^2} \, , & \qquad & \hat{\nu}_{\hat{\mathbf{q}}} \rightarrow e^{k_1 \vert z_1 \vert^2} \overline{z}{}^{\hat{q}_1}_1 z^{\hat{q}_2}_2 z^{\hat{q}_3}_3 z^{\hat{q}_4}_4 d \overline{z}_1 \, . \end{array} \end{equation} \noindent A few general conclusions can be drawn from this. First, localisation near a point in $\mathcal{A}$ does require all fluxes $\vert k_i\vert$ to be large. If one of the fluxes is not large, then localisation will happen near a higher-dimensional variety in $\mathcal{A}$. For example, if $\vert k_1\vert$ is not large, then the wave function will localise near $\mathbb{P}^1$ times a point in $\mathbb{P}^3$. We note that such a partial localisation may actually be sufficient when we come to discuss Calabi-Yau manifolds embedded in $\mathcal{A}$. For example, localisation near a curve in $\mathcal{A}$ will typically lead to localisation near a point on a Calabi-Yau hyper-surface embedded in $\mathcal{A}$. Secondly, provided all $\vert k_i\vert$ are indeed large, localisation on $\hat{U}$ near the affine origin $z_{\alpha} \simeq 0$, for $\alpha = 1, 2, 3, 4$, requires all $\hat{q}_{\alpha}$ to be sufficiently small. If a certain $\hat{q}_{\alpha}$ is large, localisation may still arise near another point in $\mathcal{A}$. For example, if $\hat{q}_{1}$ is large while the other $\hat{q}_{\alpha}$ are small, then localisation occurs near $z_1 = \infty$, $z_2 = z_3 = z_4 = 0$. 
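As a consistency check on the counting, the dimension formula \eqref{dimeq3.13} can be compared with a direct enumeration of the allowed family vectors in the basis \eqref{basiseq3.14}. A small sketch (illustrative only):

```python
# Compare h^1(A, O_A(k1,k2)) = (-k1-1)(k2+3)(k2+2)(k2+1)/6 with a direct
# count of monomial basis forms with q1 = 0..(-k1-2) and q2+q3+q4 <= k2.

def h1_formula(k1, k2):
    return (-k1 - 1)*(k2 + 3)*(k2 + 2)*(k2 + 1)//6

def count_basis(k1, k2):
    count = 0
    for q1 in range(-k1 - 1):                       # q1 = 0, ..., -k1-2
        for q2 in range(k2 + 1):
            for q3 in range(k2 + 1 - q2):
                for q4 in range(k2 + 1 - q2 - q3):  # q2+q3+q4 <= k2
                    count += 1
    return count
```

The enumeration reproduces the formula, since the number of non-negative triples with $\hat{q}_2 + \hat{q}_3 + \hat{q}_4 \leq k_2$ is the binomial coefficient $\binom{k_2+3}{3}$.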
\section{A local Calabi-Yau calculation} \label{kahlersec4} So far, we have approached the problem of computing wave function normalisations on Calabi-Yau manifolds from the viewpoint of the prospective ambient embedding spaces. In this section, we will take the complementary point of view and carry out a local calculation on a Calabi-Yau manifold. In the next section, we will show how to connect this local Calabi-Yau calculation with the ambient space point of view in order to obtain results as functions of globally defined moduli. We start with a Calabi-Yau three-fold $X$ and a line bundle $L \rightarrow X$ with a non-vanishing first cohomology and associated $L$-valued harmonic $(0, 1)$-forms. Our goal is to determine the normalisation of these harmonic forms by a local calculation, assuming, at this stage, that localisation indeed occurs. To do this, we focus on a patch $U \subset X$ with local complex coordinates $Z_a$, where $a = 1, 2, 3$, chosen such that the K\"ahler form $J$, associated to the Ricci-flat Calabi-Yau metric, is locally on $U$ well approximated by\footnote{We will denote local quantities, defined on the patch $U$, by script symbols.} \begin{eqnarray} \label{eqlocalJ} \mathcal{J} = \dfrac{i}{2\pi} \sum_{a=1}^3 \beta_a d Z_a \wedge d \bar{Z}_a \, , \end{eqnarray} \noindent where the $\beta_a$ are positive constants. (It is, of course, possible to set $\beta_a$ equal to one by further coordinate re-definitions but, for later purposes, we find it useful to keep these explicitly.) On $U$, we can approximate the hermitian bundle metric $H$ and the associated field strength $F$ of $L$ by \begin{eqnarray} \label{eqlocalH} \mathcal{H} = e^{-\sum_{a=1}^3 K_a \vert Z_a\vert^2} \quad \Rightarrow \quad \mathcal{F} = \overline{\partial} \partial \, \textrm{ln} \, \bar{\mathcal{H}} = \sum_{a=1}^3 K_a d Z_a \wedge d \bar{Z}_a \, , \end{eqnarray} \noindent where $K_a$ are constants which will ultimately become functions of the Calabi-Yau moduli. 
The Hermitian Yang-Mills equation, $J \wedge J \wedge F = 0$, should be satisfied locally, which leads to \begin{eqnarray} \mathcal{J} \wedge \mathcal{J} \wedge \mathcal{F} = 0 \quad \Leftrightarrow \quad \beta_1 \beta_2 K_3 + \beta_1 \beta_3 K_2 +\beta_2 \beta_3 K_1 = 0 \, . \end{eqnarray} \noindent The resulting equation for the $K_a$ will translate into a constraint on the Calabi-Yau moduli in a way that will become more explicit later. For now we should note that it implies not all $K_a$ can have the same sign (given that the $\beta_a$ need to be positive). Consider harmonic $(0, 1)$-forms $v \in H^1(X,L)$. On $U$, they are approximated by $(0, 1)$-forms $\nu$, which must satisfy the local version of the harmonic equations \begin{eqnarray} \overline{\partial} \nu = 0 \, , \qquad \mathcal{J} \wedge \mathcal{J} \wedge \partial (\mathcal{H} \nu) = 0 \, . \end{eqnarray} \noindent In analogy with the projective case, specifically Eq.~\eqref{localapprox}, we assume that $K_1 < 0$ and $K_2, K_3 > 0$. Whether these sign choices are actually realised cannot be checked locally but requires making contact with the global picture -- we will come back to this later. If they are, potentially localising solutions to these equations are of the form $\nu = e^{K_1\vert Z_1 \vert^2} P(\bar{Z}_1, Z_2, Z_3)d \bar{Z}_1$, where $P$ is an arbitrary function of the variables indicated. Localisation of these solutions still depends on the precise form of the function $P$, which cannot be determined from a local calculation. We will return to this issue in the next section when we discuss the relation to the global picture. For now, we take a practical approach and work with a monomial basis of solutions given by \begin{eqnarray} \label{monomialbasis2} \nu_{\mathbf{q}} = e^{K_1 \vert Z_1 \vert^2} \bar{Z}{}^{q_1}_1 Z_2^{q_2} Z_3^{q_3} d \bar{Z}_1 \, , \end{eqnarray} \noindent where $\mathbf{q} = (q_1, q_2, q_3)$ is a vector with non-negative integers. 
The normalisation of these monomial solutions can be explicitly computed and is given by \begin{align} M_{\mathbf{q},\mathbf{p}}& \; := \; \langle \nu_{\mathbf{q}}, \nu_{\mathbf{p}}\rangle_{\textrm{loc}} = \int_U \nu_{\mathbf{q}} \wedge \star (\mathcal{H} \overline{\nu}_{\mathbf{p}}) = \dfrac{i}{2} \delta_{\mathbf{q},\mathbf{p}} \int_U \mathcal{J} \wedge \mathcal{J} \wedge \nu_{\mathbf{q}} \wedge (\mathcal{H} \overline{\nu}_{\mathbf{q}}) \notag \\ &\; \simeq \; \dfrac{i}{4 \pi^2} \beta_2 \beta_3 \delta_{\mathbf{q},\mathbf{p}} \prod_{a=1}^3 \int_{\mathbb{C}} d Z_a \wedge d \bar{Z}_a \vert Z_a \vert^{2 q_a} e^{-\vert K_a\vert \vert Z_a\vert^2} \, . \end{align} \noindent After performing the integration, we find for the locally-computed normalisation \begin{eqnarray} \label{Mlocalresult} M_{\mathbf{q},\mathbf{p}} = \langle \nu_{\mathbf{q}}, \nu_{\mathbf{p}}\rangle_{\textrm{loc}} = 2 \pi \beta_2 \beta_3 \delta_{\mathbf{q},\mathbf{p}} \prod_{a=1}^3 q_a! \, \vert K_a \vert^{ - q_a - 1} \, . \end{eqnarray} \noindent The appearance of the exponential in each of the integrals in the second line indicates that there is indeed a chance for localisation to occur. However, the validity and practical usefulness of this result depends on a number of factors which are impossible to determine in the local picture. First of all, we should indeed have $K_1 < 0$ and $K_2, K_3 > 0$ for localisation to happen, but these conditions can only be verified by relating to the global picture. Secondly, families are defined as cohomology classes in $H^1(X, L)$ and at this stage it is not clear precisely how these relate to the monomial basis forms \eqref{monomialbasis2}. The above calculation shows that the smaller the integers in $\mathbf{q} = (q_1, q_2, q_3)$ the better the localisation and this ties in with the result on projective spaces in the previous section. 
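Each factor in the last expression is a one-dimensional Gamma-function integral, and the step to Eq.~\eqref{Mlocalresult} can be checked symbolically. A minimal \texttt{sympy} sketch (illustrative only), using the radial variable $u = \vert Z_a\vert^2$:

```python
# Each factor of the local normalisation: with u = |Z_a|^2,
#   2*pi * int_0^oo u^q exp(-|K_a| u) du = 2*pi * q! / |K_a|^(q+1),
# which reproduces the per-factor contribution q_a! |K_a|^(-q_a-1)
# in the quoted result (up to the overall beta_2 beta_3 prefactor).
import sympy as sp

u, Kabs = sp.symbols('u K_a', positive=True)  # Kabs stands for |K_a|

def local_factor(q):
    return 2*sp.pi*sp.integrate(u**q*sp.exp(-Kabs*u), (u, 0, sp.oo))
```

Each factor evaluates to $2\pi\, q!\, \vert K_a\vert^{-q-1}$, confirming the Gaussian integration leading to Eq.~\eqref{Mlocalresult}.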
Finding the relation between the elements of $H^1(X, L)$ and the local basis forms $\nu_{\mathbf{q}}$ is, therefore, crucial in deciding the validity and accuracy of the approximation for the physical families. Finally, we would like to express the local result \eqref{Mlocalresult} in terms of the properly defined global Calabi-Yau moduli. We will now address these issues by relating the above local calculation to the full Calabi-Yau manifold. \section{Relating local and global quantities} \label{kahlersec5} We will start by relating the local quantities which have entered the previous calculation to global quantities on the Calabi-Yau manifold, beginning with the K\"ahler form and the connection on the bundle and then proceeding to bundle-valued forms. This will allow us to express the result \eqref{Mlocalresult} for the wave function normalisation in terms of properly defined moduli. \subsection{K\"ahler form and connection} We begin, somewhat generally, with a Calabi-Yau three-fold $X$, a basis $\lbrace J_i \rbrace$ of its second cohomology, where $i = 1,... , h^{1,1}(X)$, and K\"ahler forms \begin{eqnarray} \label{kaehlerconeagain} J=\sum_i t^i J_i \, , \end{eqnarray} \noindent with the K\"ahler moduli $\mathbf{t} = (t^i)$ restricted to the K\"ahler cone. Further, we assume that all the forms $J_i$, and, hence, $J$ are chosen to be harmonic relative to the Ricci-flat metric on $X$ specified by the K\"ahler class $[J]$. Note that, despite what Eq. \eqref{kaehlerconeagain} might seem to suggest, the harmonic forms $J_i$ are typically $t^i$-dependent -- all we know is that their cohomology classes $[J_i]$ do not change with the K\"ahler class, so they are allowed to vary by exact forms. On a small patch $U \subset X$, we would like to introduce the forms $\mathcal{J}_i$, where $i = 1, ... 
, h^{1,1}(X)$, and \begin{eqnarray} \label{kaehlerconeagain2} \mathcal{J}=\sum_i t^i \mathcal{J}_i \, , \end{eqnarray} \noindent which are local $(1, 1)$-forms with constant coefficients that approximate their global counterparts $J_i$ and $J$ on $U$. How are these global and local forms related? We first note that the top forms $J \wedge J \wedge J$ and $J_i \wedge J \wedge J$ are harmonic and must therefore be proportional \begin{eqnarray} \label{relation5.3} J_i \wedge J \wedge J = c_i (\mathbf{t}) J \wedge J \wedge J \, , \end{eqnarray} \noindent where $c_i (\mathbf{t})$ are functions of the K\"ahler moduli but independent of the coordinates of $X$. By inserting Eq.~\eqref{kaehlerconeagain} and integrating over $X$ we can easily compute these coefficients as \begin{eqnarray} \label{Ki/Kratio} c_i (\mathbf{t}) = \dfrac{\mathcal{K}_i}{\mathcal{K}} \, , \end{eqnarray} \noindent where the quantities $\mathcal{K}$ and $\mathcal{K}_i$ were defined in and around Eq. \eqref{cyvolume2}. On the other hand, the relation \eqref{relation5.3} holds point-wise and, hence, has a local counterpart \begin{eqnarray} \label{relation5.5} \mathcal{J}_i \wedge \mathcal{J} \wedge \mathcal{J} = c_i (\mathbf{t}) \mathcal{J} \wedge \mathcal{J} \wedge \mathcal{J} \, , \end{eqnarray} \noindent which must involve the same constants $c_i (\mathbf{t})$. Inserting flat forms into Eq.~\eqref{relation5.5} then allows us to determine the $c_i (\mathbf{t})$ in terms of the parameters in these forms and equating these expressions to the global result \eqref{Ki/Kratio} leads to constraints on the local forms $\mathcal{J}_i$. This global-local correspondence has an immediate implication for bundles on $X$ and their local counterparts on $U$. Consider a line bundle $L \rightarrow X$ with first Chern class $c_1(L) = \sum_i k^iJ_i$ and field strength $F = -2\pi i \sum_i k^i J_i$. Then, for the local version $\mathcal{F} = -2\pi i \sum_i k^i \mathcal{J}_i$ of the field strength we find, using Eqs.
\eqref{Ki/Kratio} and \eqref{relation5.5}, that \begin{eqnarray} \label{relation5.6} \mathcal{F} \wedge \mathcal{J} \wedge \mathcal{J} = - 2 \pi i\, \dfrac{k^i \mathcal{K}_i}{\mathcal{K}} \mathcal{J} \wedge \mathcal{J} \wedge \mathcal{J} \end{eqnarray} \noindent and, hence, that the local version of the Hermitian Yang-Mills equation is satisfied as long as the slope $\mu(L)= k^i \mathcal{K}_i$ of $L$ vanishes. To work out the above global-local correspondence more explicitly, we consider a case with two K\"ahler moduli, so $h^{1,1}(X) = 2$. In this case, we can choose complex coordinates $z_a$, where $a = 1, 2, 3$, on the patch $U \subset X$ such that \begin{equation} \label{eq5.7J} \begin{array}{l} \mathcal{J}_1 = \dfrac{i}{2\pi} \sum_{a=1}^3 \lambda_a d z_a \wedge d \overline{z}_a \, , \qquad \mathcal{J}_2 = \dfrac{i}{2\pi} \sum_{a=1}^3 d z_a \wedge d \overline{z}_a \, , \\[4mm] \mathcal{J} = \dfrac{i}{2\pi} \sum_{a=1}^3 (\lambda_a t_1 + t_2) d z_a \wedge d \overline{z}_a \, , \end{array} \end{equation} \noindent where the $\lambda_a$ are constants. (More specifically, starting with two arbitrary $(1, 1)$-forms $\mathcal{J}_1$ and $\mathcal{J}_2$ with constant coefficients, by standard linear algebra, we can always diagonalise $\mathcal{J}_2$ into ``unit matrix form'' and then further diagonalise $\mathcal{J}_1$ without affecting $\mathcal{J}_2$.) Inserting the above forms into Eq. \eqref{relation5.5} gives \begin{eqnarray} \label{coef5.8} c_1(\mathbf{t}) = \dfrac{\sum_a \lambda_a \prod_{b \neq a} (\lambda_b t_1+t_2)}{3 \prod_c (\lambda_c t_1 + t_2)} \, , \qquad c_2(\mathbf{t}) =\dfrac{\sum_a \prod_{b \neq a} (\lambda_b t_1+t_2)}{3 \prod_c (\lambda_c t_1 + t_2)} \, , \end{eqnarray} \noindent and equating these results to the global ones in Eq.~\eqref{Ki/Kratio} imposes constraints on the unknown local coefficients $\lambda_a$.
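Since all forms in Eq.~\eqref{eq5.7J} are diagonal in the coordinates $z_a$, the top-form ratios defining $c_1(\mathbf{t})$ and $c_2(\mathbf{t})$ can be evaluated combinatorially: the coefficient of a triple wedge product of diagonal $(1,1)$-forms is the permanent of the $3\times 3$ matrix of their diagonal coefficient vectors. The following sketch (helper names are ours; exact rational arithmetic at an arbitrary parameter point) checks this against the closed formulas of Eq.~\eqref{coef5.8}:

```python
# Hedged consistency check: for (1,1)-forms that are diagonal in the
# coordinates z_a, the coefficient of a triple wedge product is a 3x3
# matrix permanent of the diagonal coefficient vectors.  We check that
# the permanent ratio (J_i ^ J ^ J)/(J ^ J ^ J) agrees with the closed
# formulas for c_1(t), c_2(t), for arbitrary rational inputs.
from fractions import Fraction as F
from itertools import permutations

def perm3(rows):
    """Permanent of a 3x3 matrix, given as three length-3 rows."""
    return sum(rows[0][s[0]] * rows[1][s[1]] * rows[2][s[2]]
               for s in permutations(range(3)))

lam = [F(7, 3), F(-1, 2), F(5)]           # arbitrary rational lambda_a
t1, t2 = F(2, 5), F(3)                    # arbitrary Kaehler parameters
u = {1: lam, 2: [F(1)] * 3}               # diagonal coefficients of J_1, J_2
w = [lam[a] * t1 + t2 for a in range(3)]  # diagonal coefficients of J

for i in (1, 2):
    c_wedge = perm3([u[i], w, w]) / perm3([w, w, w])
    num = sum(u[i][a] * w[(a + 1) % 3] * w[(a + 2) % 3] for a in range(3))
    c_formula = num / (3 * w[0] * w[1] * w[2])
    assert c_wedge == c_formula
```

Since both sides are rational functions of $\lambda_a$, $t_1$, $t_2$, agreement at a generic rational point is strong evidence for the identity.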
However, it is not obvious that the $\lambda_a$ are K\"ahler moduli independent, particularly since the forms $J_i$ do, in general, depend on K\"ahler moduli. In the following, we will assume that this is indeed the case, although we do not, at present, have a clear-cut proof. There are two pieces of evidence which support this assumption. First, it is not obvious that equating \eqref{coef5.8} with \eqref{Ki/Kratio} allows for a solution with constant $\lambda_a$ (valid for all $\mathbf{t}$) but we find, in all cases we have checked, that it does. Secondly, it is hard to see how a local calculation of the integrals in Eq. \eqref{tdependence} can lead to K\"ahler moduli independent results for $\Lambda_{ijIJ}$, as four-dimensional supersymmetry demands, if the $\lambda_a$ are $t^i$-dependent. In the following, we will proceed on the assumption that the $\lambda_a$ are indeed $t^i$-independent. \subsection{An example} To complete the above calculation we should consider a specific Calabi-Yau manifold. As before, we focus on the ambient space $\mathcal{A} = \mathbb{P}^1 \times \mathbb{P}^3$, discussed in Section \ref{introducingP1P3}, and use the same notation for coordinates, K\"ahler forms and K\"ahler potentials as introduced there. The Calabi-Yau hyper-surfaces $X \subset \mathcal{A}$ we would like to consider are then defined as the zero loci of bi-degree $(2, 4)$ polynomials $p$, that is, sections of the bundle $\hat{N} = \mathcal{O}_{\mathcal{A}}(2, 4)$. This manifold has Hodge numbers $h^{1,1}(X) = 2$, $h^{2,1}(X) = 86$ and Euler number $\eta(X) = -168$. Its second cohomology is spanned by the restrictions $\hat{J}_i\vert_X$, where $i = 1, 2$, of the two ambient space K\"ahler forms and, relative to this basis, the second Chern class of the tangent bundle is $c_2(T X) = (24, 44)$.
The K\"ahler class on X can be parametrised by the restricted ambient space K\"ahler forms \begin{eqnarray} \hat{J}\vert_X = t_1 \hat{J}_1\vert_X + t_2 \hat{J}_2\vert_X \, , \end{eqnarray} \noindent where $t_1, t_2 > 0$ are the two K\"ahler parameters. Of course neither of these forms is harmonic relative to the Ricci-flat metric on $X$ associated to the class $[\hat{J}\vert_X]$ (as they are obtained by restricting the ambient space Fubini-Study K\"ahler forms) but there exist forms $J_i$ and $J$ in the same cohomology classes which are. In other words, $J$ and $J_i$ are the harmonic forms introduced in Eq.~\eqref{kaehlerconeagain} and we demand that their cohomology classes satisfy $[J]=[\hat{J}\vert_X]$, $[J_i]=[\hat{J}_i\vert_X]$. The non-vanishing triple intersection numbers of this manifold are given by \begin{eqnarray} \label{exampleintnumbers} d_{122}=4\, , \quad d_{222} = 2 \quad \Rightarrow \quad \mathcal{K} = d_{ijk} t^i t^j t^k = 2 t_2^2 (6 t_1 +t_2) \, . \end{eqnarray} \noindent Inserting these results into Eq.~\eqref{Ki/Kratio} we find \begin{eqnarray} c_1 (\mathbf{t}) = \dfrac{2}{6 t_1 + t_2} \, , \qquad c_2 (\mathbf{t}) = \dfrac{4 t_1+t_2}{t_2 (6 t_1+t_2)} \, , \end{eqnarray} \noindent and equating these expressions to the local results \eqref{coef5.8} leads to the solution \begin{eqnarray} \lambda_1 = 6\, , \qquad \lambda_2=\lambda_3 =0 \, , \end{eqnarray} \noindent which is unique, up to permutations of the coordinates $z_a$. 
This means that, from Eq.~\eqref{eq5.7J}, the local forms $\mathcal{J}_i$ and $\mathcal{J}$ can (after another coordinate re-scaling $z_1 \rightarrow z_1/\sqrt{6}$) be written as \begin{align} \mathcal{J}_1 & = \dfrac{i}{2\pi} d z_1 \wedge d \overline{z}_1 \, , \\ \mathcal{J}_2 & = \dfrac{i}{2\pi} \left( \dfrac{1}{6} d z_1 \wedge d \overline{z}_1 + d z_2 \wedge d \overline{z}_2 + d z_3 \wedge d \overline{z}_3 \right) \, , \\ \label{Jeq5.15} \mathcal{J} & = \dfrac{i}{2\pi} \left(t_1 d z_1 \wedge d \overline{z}_1 + t_2 \left( \dfrac{1}{6} d z_1 \wedge d \overline{z}_1 + d z_2 \wedge d \overline{z}_2 + d z_3 \wedge d \overline{z}_3 \right)\right) \, . \end{align} \noindent We note that $\mathcal{J}$ is of the form \eqref{eqlocalJ} used in our local calculation and we can match expressions by setting $z_a = Z_a$ and \begin{eqnarray} \label{beta123} \beta_1 = t_1 + \dfrac{1}{6} t_2\, , \qquad \beta_2 = \beta_3 = t_2 \, . \end{eqnarray} \noindent Another interesting observation is that these forms satisfy \begin{eqnarray} \mathcal{J}_i \wedge \mathcal{J}_j \wedge \mathcal{J}_k = - \dfrac{i}{16 \pi^3} d_{ijk} \bigwedge_{a=1}^3 d z_a \wedge d \overline{z}_a \, , \end{eqnarray} \noindent where $d_{ijk}$ are the intersection numbers \eqref{exampleintnumbers} of the manifold in question, that is, our local forms ``intersect'' on the global intersection numbers. They also relate in an interesting way to the ambient space K\"ahler forms $\hat{J}_i$. So far, we have considered an arbitrary patch $U$ on $X$, but from now on let us focus on a specific choice, starting with the ambient space patch $\hat{U} \subset \mathcal{A}$ near the affine origin $z_{\alpha} \simeq 0$. This patch is of obvious interest since we know from the ambient space discussion in Section \ref{introducingP1P3} that some wave functions localise on it.
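The statement that the local forms intersect on the global intersection numbers can likewise be checked combinatorially: stripping off the common $(i/2\pi)^3$ prefactor, the coefficient of $\bigwedge_{a=1}^3 d z_a \wedge d \overline{z}_a$ in $\mathcal{J}_i \wedge \mathcal{J}_j \wedge \mathcal{J}_k$ is the permanent of the matrix of diagonal coefficients, so the relation amounts to $d_{ijk} = 2\operatorname{perm}(u_i, u_j, u_k)$. A sketch (helper names ours, conventions assumed from the text):

```python
# Hedged combinatorial check: with J_i = (i/2pi) sum_a u_i[a] dz_a dzbar_a,
# the triple wedge J_i ^ J_j ^ J_k is proportional to the permanent of the
# matrix with rows u_i, u_j, u_k.  Matching coefficients in the relation
# quoted above amounts to the identity d_ijk = 2 * perm(u_i, u_j, u_k).
from fractions import Fraction as F
from itertools import permutations

def perm3(rows):
    """Permanent of a 3x3 matrix, given as three length-3 rows."""
    return sum(rows[0][s[0]] * rows[1][s[1]] * rows[2][s[2]]
               for s in permutations(range(3)))

u = {1: [F(1), F(0), F(0)],               # J_1, after rescaling z1 -> z1/sqrt(6)
     2: [F(1, 6), F(1), F(1)]}            # J_2
d = {(1, 1, 1): 0, (1, 1, 2): 0, (1, 2, 2): 4, (2, 2, 2): 2}

for (i, j, k), dijk in d.items():
    assert 2 * perm3([u[i], u[j], u[k]]) == dijk
```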
If it is sufficiently small, the defining equation of the Calabi-Yau manifold on $\hat{U}$ can be approximated by \begin{eqnarray} \label{polyapprox} p = p_0 + \sum_{\alpha=1}^4 p_{\alpha} z_{\alpha} + \mathcal{O}(z^2) \, , \end{eqnarray} \noindent where $p_0$ and $p_{\alpha}$ are some of the parameters in $p$. It is possible, by linear transformations of the homogeneous coordinates on $\mathbb{P}^1$ and $\mathbb{P}^3$, to eliminate the $p_0$ term and, in the following, we assume that this has been done. Then, the Calabi-Yau manifold $X = \lbrace p = 0\rbrace$ intersects the patch $\hat{U}$ at the affine origin and near it $X$ is approximately given by the hyper-plane equation $\sum_{\alpha=1}^4 p_{\alpha}z_{\alpha} = 0$. By a further linear re-definition of coordinates on the $\mathbb{P}^3$ factor of the ambient space, this equation can be brought into the simpler form \begin{eqnarray} \label{simpleeq5.19} z_4 = a z_1 \, , \end{eqnarray} \noindent where $a$ is a constant. If we restrict the flat versions of the ambient space K\"ahler forms, as given in Eq.~\eqref{localapprox}, to $U$ using Eq.~\eqref{simpleeq5.19}, we find that \begin{eqnarray} \hat{J}_i\vert_U = \mathcal{J}_i \, , \end{eqnarray} \noindent provided we set $a=1/\sqrt{6}$. This means on the patch $U$ we understand the relation between ambient space K\"ahler forms $\hat{J}_i$, local K\"ahler forms $\mathcal{J}_i$ and their global counterparts $J_i$ on $X$. We can now extend this correspondence to (line) bundles and their connections. As in Section~\ref{introducingP1P3}, we consider line bundles $\hat{L} = \mathcal{O}_{\mathcal{A}}(k_1, k_2)$ and we restrict these to line bundles $L = \mathcal{O}_X (k_1, k_2) := \hat{L}\vert_X$ on the Calabi-Yau manifold $X$. (Of course, the line bundle $L$ should be thought of as merely part of the full vector bundle of the compactification in question.) 
The hermitian bundle metric $\hat{H}$ for $\hat{L}$ was given in Eq.~\eqref{hermitianmetriceq3.12} and its local approximation on $\hat{U}$ in Eq.~\eqref{localapprox}. If we restrict this local bundle metric on $\hat{U}$ to $U$, using the defining equation \eqref{simpleeq5.19} with $a = 1/\sqrt{6}$, we find \begin{equation} \begin{array}{c} \mathcal{H} = \hat{H}\vert_U = \textrm{exp} \, (-(k_1 + k_2/6)\vert z_1\vert^2 - k_2\vert z_2\vert^2 - k_2 \vert z_3 \vert^2) \\[1mm] \Downarrow \\[1mm] \mathcal{F} = \overline{\partial} \partial \, \textrm{ln} \,\bar{\mathcal{H}} = - 2 \pi i (k_1 \mathcal{J}_1 + k_2 \mathcal{J}_2) \, . \end{array} \end{equation} \noindent We note that this expression of $\mathcal{H}$ is of the general form \eqref{eqlocalH} used in the local calculation, provided we set $z_a = Z_a$ and identify \begin{eqnarray} \label{identificationeq5.22} K_1 = k_1 + \dfrac{1}{6} k_2 \, , \qquad K_2 = K_3 = k_2 \, . \end{eqnarray} \noindent From the discussion around Eq.~\eqref{relation5.6} we also conclude that the Hermitian Yang-Mills equation is locally satisfied for $\mathcal{F}$, provided that the slope $\mu(L) = d_{ijk} k^i t^j t^k = 2t_2(2k_1 t_2 + k_2(4t_1 + t_2))$ vanishes. As usual, this is the case on a certain sub-locus of K\"ahler moduli space, provided that $k_1$ and $k_2$ have opposite signs. \subsection{Wave functions and the matter field K\"ahler metric} As the last step, we should work out the global-local correspondence for wave functions. As in Section \ref{introducingP1P3}, we consider line bundles $\hat{L} = \mathcal{O}_{\mathcal{A}}(k_1, k_2)$ with $k_1 \leq -2$ and $k_2 > 0$ with a non-zero first cohomology $H^1(\mathcal{A},\hat{L})$, whose dimension is given in Eq.~\eqref{dimeq3.13} and with harmonic basis forms $\hat{\nu}_{\hat{\mathbf{q}}}$ introduced in Eq.~\eqref{basiseq3.14}. 
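The vanishing-slope locus quoted above can be cross-checked directly against the intersection numbers \eqref{exampleintnumbers}. A small sketch in exact rational arithmetic (the specific values of $k_i$ and $t_i$ are arbitrary):

```python
# Quick cross-check (conventions assumed from the text): evaluating
# mu(L) = d_ijk k^i t^j t^k with d_122 = 4 (plus index permutations) and
# d_222 = 2 should reproduce mu(L) = 2 t2 (2 k1 t2 + k2 (4 t1 + t2)).
from fractions import Fraction as F
from itertools import product

d = {p: 4 for p in ((1, 2, 2), (2, 1, 2), (2, 2, 1))}
d[(2, 2, 2)] = 2

k = {1: F(-3), 2: F(2)}                   # line bundle integers (k1, k2)
t = {1: F(5, 2), 2: F(7, 3)}              # arbitrary Kaehler moduli

mu = sum(d.get((i, j, l), 0) * k[i] * t[j] * t[l]
         for i, j, l in product((1, 2), repeat=3))
assert mu == 2 * t[2] * (2 * k[1] * t[2] + k[2] * (4 * t[1] + t[2]))
```

As stated in the text, $\mu(L)=0$ requires $k_1$ and $k_2$ of opposite signs, and picks out a sub-locus of the K\"ahler cone.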
These line bundles restrict to line bundles $L = \mathcal{O}_X (k_1, k_2) := \hat{L}\vert_X$ on the Calabi-Yau manifold $X$ with a non-vanishing first cohomology (see, for example, Ref.~\cite{yukunification}) \begin{eqnarray} \label{quotienteq5.23} H^1(X,L)\cong \dfrac{H^1(\mathcal{A}, \hat{L})}{p(H^1(\mathcal{A},\hat{N}^* \otimes \hat{L}))} \; . \end{eqnarray} \noindent Explicit representatives for this cohomology can be obtained by restrictions $\hat{\nu}_{\hat{\mathbf{q}}}\vert_X$, although these forms are not necessarily harmonic with respect to any particular metric. (Also, they have to be suitably identified due to the quotient in Eq.~\eqref{quotienteq5.23}. As long as $k_2 < 4$, the cohomology in the denominator of Eq.~\eqref{quotienteq5.23} vanishes, so that the quotient is trivial and the restrictions $\hat{\nu}_{\hat{\mathbf{q}}}\vert_X$ form a basis of $H^1(X, L)$ as they stand.) Finally, we have the monomial basis $\nu_{\mathbf{q}}$ of locally harmonic forms defined in Eq.~\eqref{monomialbasis2}. In summary, we are dealing with three sets of basis forms and their linear combinations, namely \begin{equation} \begin{array}{ll} \hat{\nu}_{\hat{\mathbf{q}}} = e^{k_1 \vert z_1 \vert^2} \overline{z}{}^{\hat{q}_1}_1 z^{\hat{q}_2}_2 z^{\hat{q}_3}_3 z^{\hat{q}_4}_4 d \overline{z}_1 \, , & \qquad \hat{\nu}(\hat{\mathbf{a}})= \sum_{\hat{\mathbf{q}}} \hat{a}_{\hat{\mathbf{q}}} \hat{\nu}_{\hat{\mathbf{q}}} \, , \\[1mm] \tilde{\nu}_{\tilde{\mathbf{q}}} = e^{k_1 \vert z_1 \vert^2} \overline{z}{}^{\tilde{q}_1}_1 z^{\tilde{q}_2}_2 z^{\tilde{q}_3}_3 z^{\tilde{q}_4}_1 d \overline{z}_1 \, , & \qquad \tilde{\nu}(\tilde{\mathbf{a}})= \sum_{\tilde{\mathbf{q}}} \tilde{a}_{\tilde{\mathbf{q}}} \tilde{\nu}_{\tilde{\mathbf{q}}} \, , \\[1mm] \nu_{\mathbf{q}} = e^{K_1 \vert z_1 \vert^2} \overline{z}{}^{q_1}_1 z_2^{q_2} z_3^{q_3} d \overline{z}_1 \, , & \qquad \nu(\mathbf{a})= \sum_{\mathbf{q}} a_{\mathbf{q}} \nu_{\mathbf{q}} \, .
\end{array} \end{equation} \noindent To be clear, hatted wave functions $\hat{\nu}_{\hat{\mathbf{q}}}$ are defined on the ambient space $\mathcal{A}$, wave functions $\tilde{\nu}_{\tilde{\mathbf{q}}}$ refer to their restrictions to the Calabi-Yau patch $U$ and the $\nu_{\mathbf{q}}$ are the harmonic wave functions on the patch $U$. Recall that we need $K_1 < 0$ as a necessary condition for the harmonic solutions $\nu_{\mathbf{q}}$ to have a finite norm and, by virtue of the identification \eqref{identificationeq5.22}, this translates into \begin{eqnarray} \label{conditioneq5.25} K_1 < 0 \quad \Leftrightarrow \quad -k_1>\dfrac{k_2}{6} \, . \end{eqnarray} \noindent Hence, for this particular example, the condition $K_1 < 0$ is not moduli-dependent and can be satisfied by a suitable choice of line bundle. We would like to determine the relation between the above three types of forms, or, equivalently, the relation between the coefficients $\hat{\mathbf{a}}$, $\tilde{\mathbf{a}}$ and $\mathbf{a}$, given that $\tilde{\nu}(\tilde{\mathbf{a}}) = \hat{\nu}(\hat{\mathbf{a}})\vert_U$ are related by restriction and that $\tilde{\nu}(\tilde{\mathbf{a}})$ and $\nu(\mathbf{a})$ are in the same cohomology class, so must differ by a $\overline{\partial}$-exact $L$-valued $(0, 1)$-form. The first of these correspondences, between $\hat{\mathbf{a}}$ and $\tilde{\mathbf{a}}$, is easy to establish. Given the relation is by restriction, there is a matrix $\mathcal{S}$ such that $\tilde{\mathbf{a}}= \mathcal{S} \hat{\mathbf{a}}$, and using the approximate defining equation \eqref{simpleeq5.19}, we find that \begin{eqnarray} \mathcal{S}_{\tilde{\mathbf{q}},\hat{\mathbf{p}}} = \delta_{\tilde{\mathbf{q}},\hat{\mathbf{p}}} 6^{\hat{q}_4/2} \, . 
\end{eqnarray} \noindent To establish the correspondence between $\mathbf{a}$ and $\tilde{\mathbf{a}}$, we first define the matrix $\mathcal{T}$ by \begin{eqnarray} \label{Tdef5.27} \langle \nu_{\mathbf{q}} , \tilde{\nu}_{\tilde{\mathbf{p}}}\rangle = (M \mathcal{T})_{\mathbf{q},\tilde{\mathbf{p}}} \, , \end{eqnarray} \noindent where $M$ is the local normalisation matrix computed in Eq.~\eqref{Mlocalresult}. Since $\nu(\mathbf{a})$ and $\tilde{\nu}(\tilde{\mathbf{a}})$ differ by an exact form, we know that $\langle \nu(\mathbf{a}), \nu(\mathbf{b}) \rangle = \mathbf{a}^{\dagger} M \mathbf{b}$ and $ \langle \nu(\mathbf{a}), \tilde{\nu}(\tilde{\mathbf{b}}) \rangle = \mathbf{a}^{\dagger} M \mathcal{T} \tilde{\mathbf{b}}$ must be equal to each other and, since this holds for all $\mathbf{a}$, it follows that \begin{eqnarray} \mathbf{b} = \mathcal{T} \tilde{\mathbf{b}} \, . \end{eqnarray} \noindent The explicit form of the matrix $\mathcal{T}$, from its definition \eqref{Tdef5.27}, is \begin{eqnarray} \mathcal{T}_{\mathbf{q},\tilde{\mathbf{p}}} = \delta_{q_1 , \tilde{p}_1 - \tilde{p}_4} \delta_{q_2 , \tilde{p}_2} \delta_{q_3 , \tilde{p}_3} \dfrac{\tilde{p}_1!\vert k_1\vert^{-\tilde{p}_1 -1}}{q_1 ! \vert K_1\vert^{-q_1 -1}} \; . \end{eqnarray} \noindent As discussed earlier, the families correspond to cohomology classes in $H^1(X, L)$ and, in view of Eq.~\eqref{quotienteq5.23} and subject to possible identifications, it makes sense to label families by the hatted basis $\hat{\nu}_{\hat{\mathbf{q}}}$ on the ambient space. For simplicity of notation, we write the hatted indices as $\mathbf{I} = \hat{\mathbf{q}}$ from now on. We also recall from Section~\ref{introducingP1P3} that these indices are non-negative and further constrained by $I_1 = 0,..., - k_1 - 2$ and $I_2 +I_3 +I_4 \leq k_2$. 
With this notation, the matter field K\"ahler metric is given by the general expression \begin{eqnarray} \label{generalexpressionkahler} G_{\mathbf{I},\mathbf{J}} := \dfrac{1}{2 \mathcal{V}} (\mathcal{S}^{\dagger}\mathcal{T}^{\dagger} M \mathcal{T} \mathcal{S})_{\mathbf{I},\mathbf{J}} \; . \end{eqnarray} \noindent Inserting the above results for $\mathcal{S}$ and $\mathcal{T}$ as well as the local normalisation matrix \eqref{Mlocalresult}, we find explicitly \begin{eqnarray} \label{resultdependence} G_{\mathbf{I},\mathbf{J}} = \dfrac{\mathcal{N}_{\mathbf{I},\mathbf{J}} }{6t_1+t_2} \, , \end{eqnarray} \noindent where the constants $\mathcal{N}_{\mathbf{I},\mathbf{J}}$ are given by \begin{eqnarray} \label{longeq5.32} \!\!\!\!\! \mathcal{N}_{\mathbf{I},\mathbf{J}} = \dfrac{\pi J_1! I_1! I_2! I_3!\vert k_1 +k_2/6\vert^{I_1-I_4+1}6^{I_4/2+J_4/2+1}}{2(I_1-I_4)!\vert k_1\vert^{I_1+J_1+2}k_2^{I_2+I_3+2}} \theta (I_1-I_4)\delta_{I_1-I_4, J_1-J_4} \delta_{I_2,J_2} \delta_{I_3,J_3} . \end{eqnarray} \noindent For the lowest mode, $\mathbf{I} = \mathbf{0}$, this number specialises to \begin{eqnarray} \mathcal{N}_{\mathbf{0},\mathbf{0}} = 3 \pi \dfrac{\vert k_1+k_2/6\vert}{\vert k_1\vert^2 k_2^2} \, . \end{eqnarray} \noindent A few remarks about this result are in order. First, we note that the K\"ahler moduli dependence in Eq.~\eqref{resultdependence} is in line with the result \eqref{tdependence} from dimensional reduction, as homogeneity of degree $-1$ is expected. What is surprising however is that Eq.~\eqref{generalexpressionkahler}, involving $\mathcal{V}$, a cubic function of K\"ahler moduli, reduces to Eq.~\eqref{resultdependence}, an inverse linear function of K\"ahler moduli. This cancellation may just be a property of our particular example, stemming from the fact that the parameters $\beta_i$ in Eq.~\eqref{beta123} are proportional to the factors in $\mathcal{K}$ in Eq.~\eqref{exampleintnumbers}, which comes out of the global-local matching. 
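Before moving on, the explicit result can be cross-checked arithmetically by composing the diagonal matrices $\mathcal{S}$, $\mathcal{T}$ and $M$ for specific modes and comparing with $\mathcal{N}_{\mathbf{I},\mathbf{J}}/(6t_1+t_2)$. The sketch below assumes the conventions of the text ($\mathcal{V} = \mathcal{K}/6$, a common factor of $\pi$ stripped throughout) and picks modes with $I_4$, $J_4$ even, so that all quantities stay rational, and satisfying the selection rules encoded in the Kronecker deltas:

```python
# Hedged cross-check of G_IJ = N_IJ / (6 t1 + t2): compose the diagonal
# matrices S, T, M explicitly and compare with the closed formula for N.
# Conventions assumed from the text: V = K/6, a common factor of pi is
# stripped, and I4, J4 are kept even so that 6**(I4/2) stays rational.
from fractions import Fraction as F
from math import factorial

k1, k2 = F(-6), F(6)                      # line bundle O_X(k1, k2), -k1 > k2/6
t1, t2 = F(4, 3), F(9, 7)                 # arbitrary Kaehler moduli
K1 = abs(k1 + k2 / 6)                     # |K_1|, with K_1 = k1 + k2/6 < 0
V = 2 * t2**2 * (6 * t1 + t2) / 6         # V = K/6

def TS(I):
    """Diagonal entry of T*S mapping ambient label I to local (q1,q2,q3)."""
    I1, I2, I3, I4 = I
    q1 = I1 - I4                          # theta(I1 - I4): assume I1 >= I4
    return (F(6)**(I4 // 2) * factorial(I1) * abs(k1)**(-I1 - 1)
            / (factorial(q1) * K1**(-q1 - 1)))

def M_over_pi(q1, q2, q3):
    """Local normalisation M_qq / pi, with beta2 = beta3 = t2."""
    return (2 * t2**2 * factorial(q1) * factorial(q2) * factorial(q3)
            * K1**(-q1 - 1) * k2**(-q2 - 1) * k2**(-q3 - 1))

def N_over_pi(I, J):
    """Closed formula for N_IJ / pi quoted in the text."""
    I1, I2, I3, I4 = I
    J1, J2, J3, J4 = J
    return (factorial(J1) * factorial(I1) * factorial(I2) * factorial(I3)
            * K1**(I1 - I4 + 1) * F(6)**((I4 + J4) // 2 + 1)
            / (2 * factorial(I1 - I4) * abs(k1)**(I1 + J1 + 2)
               * k2**(I2 + I3 + 2)))

# modes chosen to satisfy I1 - I4 = J1 - J4, I2 = J2, I3 = J3
for I, J in (((0, 0, 0, 0), (0, 0, 0, 0)), ((3, 1, 2, 2), (1, 1, 2, 0))):
    G_over_pi = TS(I) * TS(J) * M_over_pi(I[0] - I[3], I[1], I[2]) / (2 * V)
    assert G_over_pi == N_over_pi(I, J) / (6 * t1 + t2)
```

For the lowest mode this reduces, as in the text, to $\mathcal{N}_{\mathbf{0},\mathbf{0}} = 3\pi \vert k_1 + k_2/6 \vert / (\vert k_1 \vert^2 k_2^2)$.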
Typically, one would expect quadratic over cubic functions of the K\"ahler moduli. In general, the matter field K\"ahler metric is also a function of complex structure moduli. For our example, this dependence has dropped out completely, that is, the quantities $\mathcal{N}_{\mathbf{I},\mathbf{J}}$ are constants. This feature results from our linearised local approximation \eqref{simpleeq5.19} of the Calabi-Yau manifold, where all remaining complex structure parameters can be absorbed into coordinate re-definitions. We do expect complex structure dependence to appear at the next order, that is, if we approximate the defining equation locally by a quadric in affine coordinates. Also, our result \eqref{resultdependence} has an implicit complex structure dependence, in that its validity depends on the choice of complex structure. Whether neglecting the quadratic and higher terms in $z$ in Eq.~\eqref{polyapprox} does indeed provide a good approximation depends, among other things, on the choice of coefficients in the defining equation $p$, that is, on the choice of complex structure. Another feature of our result \eqref{resultdependence} is that it is diagonal in family space and, formally, this happens because the matrices $M$, $\mathcal{S}$ and $\mathcal{T}$ are all diagonal. We have seen in Section~\ref{kahlersec4} that this is a general feature of the matrix $M$. However, $\mathcal{S}$ and $\mathcal{T}$ do not need to be diagonal in general. In our example, this happens due to the simple form \eqref{simpleeq5.19} of the local Calabi-Yau defining equation and the resulting diagonal form of the local K\"ahler form $\mathcal{J}$ in Eq.~\eqref{Jeq5.15}.
Finally, we remind the reader that the result \eqref{resultdependence} can only be trusted if the line bundle $L = \mathcal{O}_X(k_1, k_2)$ satisfies the condition \eqref{conditioneq5.25}, if the flux parameters $\vert k_i\vert$ are sufficiently large and if the family numbers $\mathbf{I}$ are sufficiently small, in line with our discussion in Section~\ref{kahlersec3}. \section{Final remarks} \label{kahlersec6} In this chapter, we have reported progress on computing the matter field K\"ahler metric in heterotic Calabi-Yau compactifications. Three main results have been obtained. First, by dimensional reduction we have derived a general formula \eqref{tdependence} for the matter field K\"ahler metric and we have argued that constraints from four-dimensional supersymmetry already fully determine the K\"ahler moduli dependence of this metric. Secondly, provided large flux leads to localisation of the matter field wave function, we have shown how the matter field K\"ahler metric can be obtained from a local computation on the Calabi-Yau manifold, leading to the general result \eqref{Mlocalresult}. This result, while quite general, is unfortunately of limited use, mainly since it is not expressed in terms of the global moduli of the Calabi-Yau manifold. This makes it difficult to identify the conditions for its validity and it falls short of the ultimate goal of obtaining the matter field K\"ahler metric as a function of the properly defined moduli superfields. We have attempted to address these problems by working out a global-local relationship and by expressing the local result in terms of global quantities. This has been explicitly carried out for the example of Calabi-Yau hyper-surfaces $X$ in the ambient space $\mathbb{P}^1 \times \mathbb{P}^3$, but the method can be applied to other Calabi-Yau hyper-surfaces (and, possibly complete intersections) as well. 
Our main result in this context is the K\"ahler metric for matter fields from line bundles $L = \mathcal{O}_X (k_1, k_2)$ on $X$, given in Eqs.~\eqref{resultdependence} and \eqref{longeq5.32} and expressed as a function of the proper four-dimensional moduli fields. We have also stated the conditions for this result to be trustworthy, namely the constraint \eqref{conditioneq5.25} on the line bundle $L$ as well as large fluxes $\vert k_i \vert$ and small family numbers. The global-local relationship established in this way points to two problems of localised calculations, both of which are intuitively plausible. First, the large flux values demanded by localisation typically also lead to large numbers of families. For this reason, there is a tension between localisation and the phenomenological requirement of three families. Secondly, large flux typically leads to a ``large'' second Chern class $c_2(V)$ of the vector bundle, which might violate the anomaly constraint $c_2(V) \leq c_2(TX)$. Hence, there is also a tension between localisation and consistency of the models. It remains to be seen and is a matter of ongoing research whether consistent three-family models with localisation of all relevant matter fields can be constructed. It is likely that some of our methods can be applied to F-theory and be used to express local F-theory results in terms of global moduli of the underlying four-fold. It would be interesting to carry this out explicitly and check if the tension between localisation on the one hand and the phenomenological requirement of three families and cancellation of anomalies on the other hand persists in the F-theory context. \chapter{Conclusion} \noindent The purpose of this thesis was to expand the area of string phenomenology by proposing methods to calculate holomorphic Yukawa couplings for a specific class of $E_8 \times E_8$ heterotic models, namely for line bundle models on Complete Intersection Calabi-Yau manifolds.
In addition, we identified a method to evaluate the matter field K\"ahler metric for models where sufficiently large gauge fluxes permit the localisation of matter fields around certain points. Both the holomorphic Yukawa couplings and the matter field K\"ahler metric are required to compute the physical Yukawa couplings of a given heterotic model, so that it can eventually be compared to measurable physics. The line bundle models that we considered give rise to the correct MSSM spectrum, with some additional gauge-neutral bundle moduli. They were borrowed from a rich database of quasi-realistic models, which was generated several years before this thesis through an automated scan \cite{Anderson:2011ns,Anderson:2012yf,Anderson:2013xka}. At various energy levels, we wanted the Standard Model, the MSSM and a SUSY GUT to be naturally embedded in the string model, and similarly, General Relativity and $N=1$ supergravity to be low-energy limits of the gravity sector. For this reason, we compactified the $E_8 \times E_8$ string theory over a Calabi-Yau manifold $X$ with a holomorphic, poly-stable vector bundle $V$, and we ensured that the resulting 4d action matched the standard $N=1$ supergravity action in Ref.~\cite{wessandbagger} (plus some kinetic terms for the moduli). From this point forward, we evaluated the holomorphic Yukawa coupling, using the simplifications that our class of models provided. We started with the well-known integral $ \int_X \Omega \wedge \nu_1 \wedge \nu_2 \wedge \nu_3$ and expressed line bundle-valued $(0,1)$-forms $\nu_i$ in terms of projective ambient space $(0,a)$-forms $\hat{\nu}_{i,a}$, where $a=1,...,k+1$ and $k$ is the co-dimension of $X$.
By defining the \textit{type} of a form $\nu_i$ as the number $\tau_i$ for which $\hat{\nu}_{i,\tau_i} \neq 0$ and $\hat{\nu}_{i,a} = 0$, for all $a > \tau_i$, we were able to formulate a vanishing theorem, Eq.~\eqref{4.12}, according to which the holomorphic Yukawa couplings are zero if $\tau_1 + \tau_2 + \tau_3$ is smaller than the ambient space dimension. In the non-vanishing case, the Yukawa couplings are calculated as ambient space integrals over products of forms $\hat{\nu}_{i,a}$ and we have shown that the result can also be obtained algebraically, in a way that relates our method to Refs.~\cite{Candelas:1987se, Anderson:2009ge}. Explicit results were obtained for line-bundle models on the tetra-quadric (Chapter~\ref{tetraquadricchapter}) and on a co-dimension two CICY (Chapter~\ref{chaptern>1codimension}), although the method is general enough to be applied to any CICY in the database. Altogether, our computational techniques have revealed some interesting phenomenological features. For example, topological constraints such as the vanishing theorem \eqref{4.12} give a condition for the vanishing of Yukawa couplings that is not based on symmetry (only topological reasoning is involved). This has been expected since the early days of heterotic model building (see, for example, Ref.~\cite{GSW}), and can provide an explanation for the relatively light masses of the electron and first-generation quarks. In some examples discussed in Chapter~\ref{tetraquadricchapter}, it was found that the holomorphic Yukawa couplings depend explicitly on the complex structure moduli and their rank is reduced for certain regions of the moduli space. Such a dependence can be used to fine-tune the model according to observation. In addition, global $U(1)$ symmetries arising from the line bundle sum construction can motivate why certain proton decay operators are forbidden in the MSSM or various Grand Unified models.
The same symmetry criteria can also be applied perturbatively to models with non-Abelian bundle structure group that are obtained through smooth deformations from the Abelian locus. In Chapter~\ref{kahlerchapter}, our computation of the matter field K\"ahler metric was supported by the observation that for large gauge flux, the integral $\int_X \nu_I \wedge \bar{\star}_V \nu_J$ is localised on a patch $U$, so that precise knowledge of the Calabi-Yau metric is not needed. The computation was performed locally and then re-expressed in terms of the global moduli of the Calabi-Yau manifold, via a global-local relationship. It has to be noted, however, that the requirement of a large gauge flux may often be in conflict with the anomaly cancellation condition $c_2(V) \leq c_2(TX)$ or the phenomenological requirement for three families. It remains for future research to establish whether a consistent and realistic model can be built using the localisation method. Finally, despite the progress reported in this thesis, many problems in string phenomenology remain unresolved. Ideally, we would have liked to construct a method to evaluate the matter field K\"ahler metric for general line bundle models, and not only for models with large flux. It is uncertain, however, whether this goal is achievable, given that the knowledge of the specific Calabi-Yau metric is lacking. An alternative research avenue would be to compactify on other classes of Calabi-Yau manifolds, such as hypersurfaces in toric varieties, for which similar methods for calculating Yukawa couplings could be constructed. In addition, one could investigate how to apply these methods to vector bundles with non-Abelian structure group, and in particular to monad bundles, which are built from sums of line bundles \cite{Anderson:2008uw, Anderson:2009mh}. In the end, a complete string model also has to contain mechanisms for moduli stabilisation and soft supersymmetry breaking.
It is only when these problems are solved that the model can be proposed as ``realistic''.
\section{Introduction}\label{sec:intro} The nebular emission from primeval galaxies represents one of our best hopes to constrain the physical processes that dominated reionization of our Universe. Beyond the physical conditions of pristine gas, emission lines are sensitive to different components expected to characterise primeval galaxies: hot massive stars, often considered as the main source of ionizing radiation; active galactic nuclei (AGN), arising from gas accretion onto primordial black holes; radiative shocks, induced by large-scale gas flows; and the leakage of Lyman-continuum (LyC) photons through a porous interstellar medium (ISM), contributing to reionization \citep[see, e.g., the review by][]{stark16}. In waiting for advent of the {\it James Webb Space Telescope} (\textit{JWST}), which will enable deep rest-frame ultraviolet and optical emission-line spectroscopy of galaxies into the reionization era at redshifts $z\sim10$--15, more nearby metal-poor galaxies approaching the properties of primeval galaxies offer a useful laboratory in which to test our ability to interpret emission-line spectra. Observationally, a fast-growing number of studies have progressively uncovered the spectroscopic properties of distant, metal-poor star-forming galaxies at redshifts out to $z\gtrsim7$ \citep[e.g.,][]{erb10, stark14, stark15, rigby15, amorin17, laporte17, schmidt17, mainali17, vanzella17, berg18, nakajima18, nanayakkara19, tang19} and in parallel those of nearby analogues of these pristine galaxies \citep[e.g.,][]{berg16, berg19, senchyna17, senchyna19}. Most of these studies focused on identifying promising tracers and diagnostics of the early chemical enrichment and gas conditions in primeval galaxies, such as for example the \hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ lines and \hbox{C/O}\ abundance ratio. 
Other studies were more specifically aimed at probing LyC-photon leakage from young star-forming galaxies, using clues such as a small velocity spread of the \hbox{Ly$\alpha$}\ double-peaked emission or large ratios of high- to low-ionization lines \citep[e.g.,][]{jaskot13, izotov16oct, izotov17oct, izotov18aug, debarros16, vanzella16}. Meanwhile, on the theoretical front, much effort was invested into characterising the ionizing spectra and ultraviolet emission-line signatures of young stellar populations \citep[e.g.,][]{gutkin2016, vidal17, byler18}, along with the dependence of these on stellar rotation and binary interactions \citep[e.g.,][]{levesque13,stanway19}, as well as the signatures of active galactic nuclei \citep[AGN; e.g.,][]{Feltre2016,Hirschmann2017,Hirschmann2019,nakajima18} and shock-ionized gas \citep[e.g.,][]{allen08, izotov12, 3MdBs19}. In practice, the observational studies mentioned above provide a valuable set of, in some respects, independent investigations focusing individually on the analysis of a specific set of emission lines with specific models (for the production of radiation and the photoionization calculations), parametrized in a specific way (e.g., element abundances and depletions; inclusion or not of dust physics in the photoionization calculations), depending on the nature and redshift of the sample and the spectrograph employed. 
These analyses have yielded important lessons, such as the usefulness of the \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ and even \hbox{C\,\textsc{iv}\,$\lambda1549$}\ lines as signposts of galaxies in the reionization era given the expected strong attenuation of \hbox{Ly$\alpha$}\ \citep[e.g.,][]{stark14,senchyna19} and the potential of the \hbox{C\,\textsc{iv}\,$\lambda1549$}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ luminosity ratio for identifying AGN \citep[e.g.,][]{Feltre2016,nakajima18}, the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ ratio for identifying shock-ionized gas \citep{jaskot16} and the \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ ratio for identifying LyC-photon leakage \citep[e.g.,][]{nakajimaetouchi14, izotov16oct}. A current difficulty in reaching a robust picture from this progress on several fronts in parallel is that the conclusions drawn from fitting a restricted set of emission lines using specific models may not be consistent with findings based on other lines and different models. This may be particularly important, for example, in the context of interpreting the exceedingly large strengths of He\,\textsc{ii} recombination lines (requiring photon energies $E_\mathrm{ion}>54.4$\,eV) found in very metal-poor, actively star-forming galaxies, which seem to elude standard model predictions \citep[e.g.,][]{shirazi12,senchyna17,berg18,nanayakkara19,stanway19}. Based on various arguments, contributions from very massive stars \citep[e.g.,][]{Graefener15}, stripped stars produced by close-binary evolution or X-ray binaries \citep[e.g.,][]{SenchynaStark19, schaerer19}, AGN \citep[e.g.,][]{nakajima18} and radiative shocks \citep[e.g.,][]{izotov12} have been proposed to account for the required hard radiation. 
Also, while the \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ diagnostic to characterise galaxies leaking LyC photons may not be as reliable as expected and alternative diagnostics based, e.g., on He\,\textsc{i} lines have been suggested, different types of investigations of the ultraviolet and optical signatures of LyC leakage have so far focused on rather limited sets of emission lines \citep[e.g.,][]{jaskot13, zackrisson13, zackrisson17, stasinska15, jaskot16, izotov17oct}. These examples illustrate the need for a homogeneous investigation of emission-line diagnostics of metal-poor star-forming galaxies with a wide collection of intercomparable models (of the type proposed by \citealt{stasinska15} for a few optical lines). In this paper, we examine a full set of ultraviolet/optical observables of metal-poor star-forming galaxies with a library of nebular-emission models enabling the exploration of a wide range of physical parameters. To conduct this analysis, we build a reference sample of ultraviolet and optical observations of metal-poor star-forming galaxies and confirmed and candidate LyC leakers (and other star-forming galaxies and AGN) in a wide redshift range. This sample allows us to simultaneously explore diagnostic diagrams involving more emission lines than typically available at once for individual subsamples. We use this sample to investigate potentially discriminating signatures of single- and binary-star populations (using the most recent versions of the \citealt{Bruzual2003} and \citealt{BPASSv21} models), narrow-line regions of AGN \citep{Feltre2016} and radiative shocks \citep{3MdBs19} on the emission-line properties of metal-poor star-forming galaxies, adopting throughout the same parametrization of nebular-gas abundances \citep{gutkin2016}. 
Our analysis confirms that current single- and binary-star population synthesis models do not produce hard-enough ionizing radiation to account for the strong \hbox{He\,\textsc{ii}}\ emission seen in some of the most metal-poor galaxies, although with slightly better agreement than concluded recently by \citet{stanway19}. We show that an AGN or radiative-shock component allows models to reproduce observations in nearly all the ultraviolet and optical line-ratio diagrams we investigate. We also consider X-ray binaries as a potential source of ionizing radiation, using the model recently proposed by \citet{schaerer19}. This can reproduce the observed rise in \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio toward low metallicities in star-forming galaxies, but not the high observed \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratios of galaxies with large EW(\hbox{H$\beta$}). A source of harder ionizing radiation appears to be required in these extreme objects. In the end, we find that while none of the ultraviolet and optical emission-line diagrams we consider allows simple by-eye diagnostics of the nature of ionizing sources and the escape of LyC photons in metal-poor star-forming galaxies, differences exist in the spectral signatures of these physical quantities, which should enable more stringent constraints from simultaneous fits of several lines using tools such as \textsc{beagle}\ \citep{Chevallard2016}. We present our models of ionization-bounded and density-bounded galaxies, AGN narrow-line regions and radiative shocks in Section~\ref{sec:models}. In Section~\ref{sec:obs}, we assemble the reference sample of metal-poor star-forming galaxies, LyC leakers and other star-forming galaxies and AGN, which we use in Section~\ref{sec:params} to explore the influence of the different adjustable parameters of the models on emission-line spectra. 
In Section~\ref{sec:constraints}, we investigate potentially discriminating emission-line diagnostics of the production and escape of ionizing radiation in metal-poor star-forming galaxies. Our conclusions are summarized in Section~\ref{sec:conclu}. \section{Modelling approach}\label{sec:models} In this section, we present the set of versatile models that will be used in Section~\ref{sec:params} to explore, in a physically consistent way, the influence of a wide range of parameters on the observed ultraviolet and optical nebular emission from young star-forming galaxies. We start by describing the models we adopt to compute properties of ionization-bounded galaxies. Then, we describe our approach to model density-bounded galaxies. We also appeal to existing prescriptions to include the contributions by AGN and shock-ionized gas to the nebular emission from galaxies. \subsection{Ionisation-bounded models}\label{sec:ionb} We adopt the approach introduced by \citet[][see also \citealt{gutkin2016}]{ChaLon01} to compute the nebular emission from ionization-bounded galaxies. This is based on the combination of a stellar population synthesis model with a photoionization code to compute the luminosity per unit wavelength $\lambda$ emitted at time $t$ by a star-forming galaxy as \begin{equation} L_{\lambda}(t)=\int_0^t \mathrm{d}\hbox{$t^\prime$}\, \psi(t-\hbox{$t^\prime$}) \, S_{\lambda}[\hbox{$t^\prime$},Z(t-\hbox{$t^\prime$})] \, T_{\lambda}(t,\hbox{$t^\prime$})\,, \label{eq:flux_gal} \end{equation} where $\psi(t-\hbox{$t^\prime$})$ is the star formation rate at time $t-\hbox{$t^\prime$}$, $S_\lambda[\hbox{$t^\prime$},Z(t-\hbox{$t^\prime$})]$ the luminosity produced per unit wavelength per unit mass by a single stellar generation of age $\hbox{$t^\prime$}$ and metallicity $Z(t-\hbox{$t^\prime$})$ and $T_\lambda(t,\hbox{$t^\prime$})$ the transmission function of the ISM. 
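Equation~(1) is a simple convolution of the star formation history with the single-generation spectra. As a sanity check of the bookkeeping, the discretized sum can be sketched as follows; the SSP spectra and ISM transmission below are toy placeholders (not the {\small C\&B} models or the \textsc{cloudy} transmission used in the paper), chosen only to make the structure of the integral explicit.

```python
import numpy as np

# Toy discretization of equation (1):
# L_lambda(t) = sum_{t'} psi(t - t') * S_lambda(t') * T_lambda * dt'.
# ssp_spectrum and transmission are placeholder functions, not the
# C&B stellar models or the CLOUDY transmission of the paper.

ages = np.linspace(0.0, 50.0, 501)        # stellar ages t' in Myr
wave = np.linspace(100.0, 10000.0, 200)   # wavelength grid in Angstrom
dt = ages[1] - ages[0]

def ssp_spectrum(age, wave):
    """Toy SSP: luminosity fades and the spectrum reddens with age."""
    return np.exp(-age / 10.0) * (wave / 1500.0) ** (-2.0 + 0.05 * age)

def transmission(wave):
    """Toy ISM transmission: absorb all LyC photons (lambda < 912 A)."""
    return (wave >= 912.0).astype(float)

def galaxy_spectrum(t, psi):
    """Discretized equation (1) for a galaxy of age t (Myr)."""
    L = np.zeros_like(wave)
    for tp in ages[ages <= t]:
        L += psi(t - tp) * ssp_spectrum(tp, wave) * transmission(wave) * dt
    return L

const_sfr = lambda t: 1.0   # constant star formation rate, arbitrary units
L50 = galaxy_spectrum(50.0, const_sfr)
```

With this toy transmission, the emergent spectrum vanishes blueward of the Lyman edge and accumulates contributions from all stellar generations formed over the past 50\,Myr.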
Following \citet[][see also \citealt{vidal17}]{CharlotFall2000}, we write \begin{equation} \label{eq:transmission} T_{\lambda}(t,\hbox{$t^\prime$})= T_{\lambda}^{\rm BC}(\hbox{$t^\prime$}) \, T_{\lambda}^{\rm ICM}(t)\,, \end{equation} where $T_{\lambda}^{\rm BC}(\hbox{$t^\prime$})$ is the transmission function of stellar birth clouds (i.e. giant molecular clouds) and $T_{\lambda}^{\rm ICM}(t)$ that of the intercloud medium (i.e. diffuse ambient ISM). In the present study, we focus on young galaxies with ages close to the typical timescale for the dissipation of giant molecular clouds in star-forming galaxies \citep[$\sim10$\,Myr; e.g.,][]{Murray2010,Murray2011} and do not include any intercloud medium. The birth clouds, assumed all identical, are described as an inner \hbox{H{\sc ii}}\ region ionized by young stars and bounded by an outer \hbox{H{\sc i}}\ region \citep{CharlotFall2000}. We thus write \begin{equation}\label{eq:transBC} T_{\lambda}(t,\hbox{$t^\prime$})= T_{\lambda}^{\rm BC}(\hbox{$t^\prime$})= T_{\lambda}^{\rm HII}(\hbox{$t^\prime$}) \, T_{\lambda}^{\rm HI}(\hbox{$t^\prime$})\,. \end{equation} By analogy with \citet[][see also \citealt{gutkin2016}]{ChaLon01}, we compute the transmission function $T_{\lambda}^{\rm HII}(\hbox{$t^\prime$})$ of the ionized gas [$T_{\lambda}^{+}(\hbox{$t^\prime$})$ in their notation] using the photoionization code \textsc{cloudy}\ (we adopt here version c17.00; \citealt{cloudyc17}). In this approach, the galaxy-wide transfer of stellar radiation through ionized gas is described via a set of `effective' parameters. The main adjustable parameters are (see \citealt{gutkin2016} for details): \begin{enumerate} \item The (hydrogen) gas density, \hbox{$n_{\mathrm{H}}$}. \item The total gas metallicity, assumed to be equal to that of the ionizing stars, $Z$. 
We adopt the chemical-element abundances listed in table~1 of \citet{gutkin2016},\footnote{These are based on the solar chemical abundances compiled by \citet{Bressan2012} from the work of \cite{Grevesse1998}, with updates from \citet[][see table~1 of \citealt{Bressan2012}]{Caffau2011}, and small adjustments of the solar nitrogen ($-0.15$\,dex) and oxygen ($+0.10$\,dex) abundances relative to the mean values quoted in table~5 of \citet[][see \citealt{gutkin2016} for details]{Caffau2011}.} corresponding to a present-day solar (photospheric) metallicity $\hbox{${Z}_{\odot}$}=0.01524$ and a protosolar metallicity (i.e. before the effects of diffusion) $\hbox{${Z}_{\odot}^0$}=0.01774$. Nitrogen and carbon are assumed to both have primary and secondary nucleosynthetic components. The total (primary+secondary) nitrogen abundance is related to that of oxygen via equation~(11) of \citet{gutkin2016}. \item The carbon-to-oxygen abundance ratio, \hbox{C/O}. This adjustable parameter allows secondary C production to be kept flexible [for reference, $\hbox{(C/O)$_\odot$}=0.44$]. \item The dust-to-metal mass ratio, \hbox{$\xi_{\rm{d}}$}, which reflects the depletion of heavy elements on to dust grains ($\hbox{$\xi_{\rm{d\odot}}$}=0.36$; see table~1 of \citealt{gutkin2016}). \item The volume-averaged ionisation parameter at age $\hbox{$t^\prime$}=0$, noted simply $\hbox{$\langle U\rangle$}\equiv\hbox{$\langle U\rangle$}(\hbox{$t^\prime$}=0)$. 
The volume-averaged ionisation parameter of a spherical \hbox{H{\sc ii}}\ region can be expressed as \citep[e.g., equation~3 of][]{Panuzzo03} \begin{equation}\label{eq:Udef} \hbox{$\langle U\rangle$} (\hbox{$t^\prime$})= \frac{3\alpha_B^{2/3}}{4c} \left[ \frac{3Q(\hbox{$t^\prime$})\epsilon^2n_\mathrm{H}}{4 \pi} \right]^{1/3}\,, \end{equation} where $Q(\hbox{$t^\prime$})$ is the time-dependent rate of ionizing photons produced by a single stellar generation of age $\hbox{$t^\prime$}$, $\epsilon$ the volume-filling factor of the gas (i.e., the ratio of the volume-averaged hydrogen density to \hbox{$n_{\mathrm{H}}$}) and $\alpha_{\rm B}$ the case-B hydrogen recombination coefficient. We note that the volume-averaged ionization parameter in expression~\eqref{eq:Udef} is a factor of 9/4 larger than the zero-age ionization parameter at the Str\"omgren radius used by \citet{gutkin2016} and a factor of 3/4 smaller than the quantity defined by equation~(7) of \citet{ChaLon01}. These different model-labelling choices are transparent to the \textsc{cloudy}\ calculations. Also, since \hbox{$\langle U\rangle$}\ is proportional to $[Q(0)\epsilon^2\hbox{$n_{\mathrm{H}}$}]^{1/3}$, at fixed \hbox{$\langle U\rangle$}\ and \hbox{$n_{\mathrm{H}}$}, there is a degeneracy in the calculations between the adopted normalisation of $Q(0)$ (via an effective mass of ionizing star cluster) and $\epsilon$. \end{enumerate} The \textsc{cloudy}\ calculations to compute $T_{\lambda}^{\rm HII}(\hbox{$t^\prime$})$ are performed in closed geometry, adopting a small inner radius of the gaseous nebula, $r_\mathrm{in}=0.01$\,pc, to ensure spherical geometry. The photoionization calculations are stopped at the edge of the \hbox{H{\sc ii}}\ region, when the electron density falls below 1 per cent of \hbox{$n_{\mathrm{H}}$}. As noted by \citet{vidal17}, the above standard \textsc{cloudy}\ calculations do not account for interstellar-{\em line} absorption in the ionized gas. 
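To make the meaning of \hbox{$\langle U\rangle$}\ concrete, equation~(4) can be evaluated directly in cgs units; the values of $Q$, $\epsilon$ and \hbox{$n_{\mathrm{H}}$}\ below are illustrative, not fits to any model in the paper.

```python
import numpy as np

# Direct evaluation of equation (4) in cgs units. Q, eps and n_H are
# illustrative values, not taken from any model in the paper.

C_LIGHT = 2.998e10    # speed of light, cm/s
ALPHA_B = 2.59e-13    # case-B recombination coefficient at ~10^4 K, cm^3/s

def mean_U(Q, eps, n_H):
    """<U> = (3 alpha_B^(2/3) / 4c) [3 Q eps^2 n_H / (4 pi)]^(1/3)."""
    return (3.0 * ALPHA_B ** (2.0 / 3.0) / (4.0 * C_LIGHT)) * \
        (3.0 * Q * eps ** 2 * n_H / (4.0 * np.pi)) ** (1.0 / 3.0)

# A young cluster emitting 10^52 ionizing photons/s in gas with
# n_H = 100 cm^-3 and volume-filling factor 0.1:
U = mean_U(Q=1e52, eps=0.1, n_H=100.0)   # log10 <U> ~ -1.9
```

The $[Q(0)\epsilon^2\hbox{$n_{\mathrm{H}}$}]^{1/3}$ scaling noted above (and the associated degeneracy between $Q(0)$ and $\epsilon$) is immediate from the function: multiplying $Q$ by 8 doubles \hbox{$\langle U\rangle$}.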
In the following, we also wish to investigate the effects on nebular emission of interstellar-line absorption in the \hbox{H{\sc ii}}\ interiors and \hbox{H{\sc i}}\ envelopes of stellar birth clouds. To compute $ T_{\lambda}^{\rm HII}(\hbox{$t^\prime$}) \, T_{\lambda}^{\rm HI}(\hbox{$t^\prime$})$ in equation~\eqref{eq:transBC} in this case, we appeal to the prescription of \citet[][see their section~4]{vidal17}, which extends the computations of \citet{gutkin2016} to account for interstellar-line absorption in stellar birth clouds. This is achieved through the combination of \textsc{cloudy}\ with the general spectrum synthesis program \textsc{synspec}\ \citep[e.g.,][]{Hubeny2011}\footnote{See \url{http://nova.astro.umd.edu/Synspec49/synspec.html}} via an interactive program called \textsc{cloudspec}\ \citep[][see also \citealt{Heap2001}]{Hubeny2000}. For this purpose, the \textsc{cloudy}\ calculations are stopped when the kinetic temperature of the gas falls below 50\,K, assumed to define the \hbox{H{\sc i}}\ envelope of a typical stellar birth cloud (see \citealt{vidal17} for more details). We require a stellar population synthesis model to compute the spectral evolution of a single stellar generation, $S_\lambda[\hbox{$t^\prime$},Z(t-\hbox{$t^\prime$})]$, in equation~\eqref{eq:flux_gal}. In most applications in this paper, we use the latest version of the \citet{Bruzual2003} stellar population synthesis model (Charlot \& Bruzual, in preparation, hereafter {\small C\&B}). This differs from the version used by \citet{gutkin2016} in the inclusion of updated spectra of Wolf-Rayet (hereafter WR) stars from the Potsdam Wolf-Rayet (PoWR) model library (see Appendix~\ref{app:wrmodels}) and of main-sequence massive stars from \citet{Chen2015}. 
When indicated, we also use the Binary Population and Spectral Synthesis (\textsc{bpass}\,v2.2.1) models of \citet{BPASSv22} to explore the effects of binary interactions on the spectral evolution of young stellar populations, in particular the enhancement of extreme ultraviolet radiation by envelope stripping (of primary stars) and chemical homogenisation (of rapidly rotating secondaries). We adopt throughout a \citet{Chabrier2003} initial mass function (IMF), with lower mass cutoff 0.01\,\hbox{M$_{\rm{\odot}}$}\ and upper mass cutoff in the range $100\leq\hbox{$m_{\rm{up}}$}\leq600\,$\hbox{M$_{\rm{\odot}}$}. IMF upper mass cutoffs well in excess of 100\,\hbox{M$_{\rm{\odot}}$}\ have been suggested by models and observations of massive, low-metallicity star clusters \citep[e.g.,][see also \citealt{vink11}]{crowther16, smith16}. For the star formation history, $\psi(t-\hbox{$t^\prime$})$ in equation~\eqref{eq:flux_gal}, we adopt either a delta function (Simple Stellar Population, hereafter SSP) or constant star formation rate. We consider ages of up to 50\,Myr, as, even though 99.9 per cent of H-ionizing photons are produced at ages less than 10\,Myr in single-star models \citep[e.g.,][]{CharlotFall1993,Binette1994}, binary interactions can extend the production over longer timescales \citep[e.g.][]{stanway16}. \subsection{Density-bounded models}\label{sec:fesc} \begin{figure*} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure1_light.png}} \end{center} \caption{Relationship between zero-age optical depth to LyC photons, $\tau_\lambda$, fraction of escaping LyC photons, \hbox{$f_{\rm{esc}}$}, and H-column density, \hbox{$N_{\mathrm{H}}$}, in the density-bounded models of Section~\ref{sec:fesc}. 
(a) $\tau_\lambda$ plotted against $\lambda$ for models with $Z=0.002$, $\log\hbox{$\langle U\rangle$}=-2.0$, $\hbox{$\xi_{\rm{d}}$}=0.3$, $\hbox{$m_{\rm{up}}$}=300\,$\hbox{M$_{\rm{\odot}}$}\ and for 5 choices of \hbox{$\tau_{570}$}\ ($\log\hbox{$\tau_{570}$}$ from $-1.0$ to $+1.0$ in steps of 0.5, as identifiable from the dotted vertical line). Different colours reflect the \hbox{$f_{\rm{esc}}$}\ values of these models (see scale on the right), while the black line corresponds to the ionization-bounded model. (b) Monochromatic photon rate $Q_\lambda$ emerging at age zero from the models of panel (a), plotted against $\lambda$. Triangles indicate the photon-weighted mean wavelength of each spectrum. The grey curve shows the input stellar population spectrum, with photon-weighted mean wavelength marked by the dotted vertical line. (c) \hbox{$f_{\rm{esc}}$}\ plotted against \hbox{$\tau_{570}$}\ for the same models as in (a) (black-contoured circles) and for models with different metallicities (upside-down triangles: $Z=0.0005$; squares: $Z=0.008$) and $\log\hbox{$\langle U\rangle$}=-3.0$, $-2.0$ and $-1.0$ (in order of increasing symbol size). (d) H-column density plotted against \hbox{H{\sc ii}}-region age for the same models as in (a). (e) \hbox{$f_{\rm{esc}}$}\ plotted against \hbox{H{\sc ii}}-region age for the same models as in (a). (f) Same as (e), but for a galaxy with constant star formation rate (i.e., adopting $\psi=\,$cst in equation~\ref{eq:flux_gal}). In panels (d)--(f), different colours reflect the \hbox{$f_{\rm{esc}}$}\ values of the density-bounded models computed at discrete ages (see scale on the right).} \label{fig:fesc} \end{figure*} We are also interested in the influence of LyC-photon leakage on the nebular emission from young star-forming galaxies. 
This leakage is generally thought to occur in two main possible ways: through holes carved into the neutral ISM by extreme galactic outflows (`picket-fenced' model), which can be traced by the presence of residual flux in the cores of saturated interstellar low-ionization absorption lines \citep[e.g., \hbox{C\,\textsc{ii}\,$\lambda\lambda1036,1037$};][]{Heckman11,Alexandroff15} and reduced nebular emission-line equivalent widths \citep{zackrisson13}, but with no effect on line ratios \citep{zackrisson17}; or through density-bounded (i.e. optically thin to LyC photons) \hbox{H{\sc ii}}\ regions, which are expected to lead to weak low-ionization emission lines, a small velocity spread of the Ly$\alpha$ double-peaked emission and large ratios of high- to low-ionization lines \citep{giammanco05, pellegrini12, jaskot13, zackrisson13, nakajimaetouchi14, Nicholls14, stasinska15, jaskot16, Alexandroff15, izotov18aug, dAgostino19b}. We note that, in addition to these two commonly cited scenarios, direct ionizing radiation from runaway massive stars could also contribute significantly to LyC leakage \citep{ConroyKratter12}. We focus here on the modelling of density-bounded \hbox{H{\sc ii}}\ regions, which is the only LyC-leakage scenario affecting ratios of nebular emission lines and also seems to be favoured by current observations (see Section~\ref{obs:leakers} below). 
\subsubsection{Modelling approach} In the framework of photoionization modelling described in Section~\ref{sec:ionb}, we can write the (effective) time-dependent fraction of LyC photons escaping from a density-bounded \hbox{H{\sc ii}}\ region ionized by a single stellar generation as \begin{equation}\label{eq:fesc} \hbox{$f_{\rm{esc}}$}(\hbox{$t^\prime$})=\dfrac{Q^\mathrm{out}(\hbox{$t^\prime$})}{Q(\hbox{$t^\prime$})}, \end{equation} where $Q(\hbox{$t^\prime$})$ is the rate of LyC photons produced by the stellar population at age \hbox{$t^\prime$}, and $Q^\mathrm{out}(\hbox{$t^\prime$})$ the rate emerging from the nebula at that age. The quantity $Q^\mathrm{out}(\hbox{$t^\prime$})$ encompasses both the fraction of LyC photons initially produced by stars that escape from the nebula, and the LyC photons created within the nebula (via free-bound emission) that also escape from it. This latter contribution is negligible, as the ionizing recombination continuum amounts to less than 0.001 per cent of $Q(\hbox{$t^\prime$})$ for an ionization-bounded nebula. It is convenient to parametrize density-bounded models in terms of the zero-age optical depth to LyC photons, rather than the H-column density of the \hbox{H{\sc ii}}\ region. This is because at fixed H-column density, the optical depth, which controls the quantity \hbox{$f_{\rm{esc}}$}\ we are interested in, can vary greatly depending on gas composition and ionization state. While \textsc{cloudy}\ computes the optical depth to LyC photons in a self-consistent way, it is useful, for the purpose of describing the sensitivity of observed line ratios to model parameters (Section~\ref{sec:params}), to express the optical depth at wavelength $\lambda$ and radius $r$ at age $\hbox{$t^\prime$}=0$ as the sum of the optical depths arising from the gas and dust phases, \begin{equation}\label{eq:tautot} \tau_\lambda(r)=\tau_{\lambda,\mathrm{gas}}(r)+\tau_{\lambda,\mathrm{dust}}(r)\,. 
\end{equation} The optical depth from neutral hydrogen and other gaseous species is \citep[e.g.][]{Osterbrock2006} \begin{equation}\label{eq:taugas} \tau_{\lambda,\mathrm{gas}}(r)=\sigma_\lambda(\mathrm{H}^0) N(\mathrm{H}^0,r) + \sum_{\mathrm{X},i} \sigma_\lambda({\mathrm{X}^{+i}}) N(\mathrm{X}^{+i},r)\,, \end{equation} where $\sigma_\lambda(\mathrm{H}^0)$ and $\sigma_\lambda({\mathrm{X}^{+i}})$ are the monochromatic absorption cross-sections of neutral hydrogen and element X (with atomic number $\geq2$) in ionization state $+i$, and $N(\mathrm{H}^0,r)$ and $N(\mathrm{X}^{+i},r)$ the column densities of H$^0$ and X$^{+i}$ out to radius $r$. Both $N(\mathrm{H}^0,r)$ and $N(\mathrm{X}^{+i},r)$ are proportional to the gas filling factor $\epsilon$ (Section~\ref{sec:ionb}). The optical depth arising from dust can be expressed as \begin{equation}\label{eq:taudust} \tau_{\lambda,\mathrm{dust}}(r)= \sigma_{\lambda,\mathrm{dust}}\,\hbox{$\xi_{\rm{d}}$} Z \hbox{$N_{\mathrm{H}}$}(r)\,, \end{equation} where $\sigma_{\lambda,\mathrm{dust}}$ is the dust absorption cross-section at wavelength $\lambda$, and $\hbox{$N_{\mathrm{H}}$}(r)=\epsilon \hbox{$n_{\mathrm{H}}$} r$ the H-column density at radius $r$. In practice, we parametrize density-bounded models in terms of the zero-age optical depth of the \hbox{H{\sc ii}}\ region to LyC photons with wavelength $\lambda=570\,$\AA, noted \hbox{$\tau_{570}$}. This corresponds to the photon-rate-weighted mean wavelength of H-ionizing radiation produced by a zero-age stellar population with metallicity $Z=0.002$ and IMF upper-mass cutoff $\hbox{$m_{\rm{up}}$}=300\,$\hbox{M$_{\rm{\odot}}$}\ in the {\small C\&B}\ models. 
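The decomposition in equations~(6)--(8) can be sketched numerically. The cross-sections below are rough assumed scales (and the metal-ion sum in equation~7 is dropped), so the numbers only illustrate how dust takes over a larger share of a fixed \hbox{$\tau_{570}$}\ as $Z$ increases.

```python
# Sketch of equations (6)-(8): the optical depth at 570 A as the sum
# of a neutral-hydrogen term and the dust term of equation (8). The
# cross-sections are rough assumed scales, and the metal-ion sum of
# equation (7) is omitted for brevity.

SIGMA_H0_570 = 1.6e-18   # cm^2, approximate H0 cross-section at 570 A
SIGMA_DUST = 1.0e-21     # cm^2 per H per unit (xi_d * Z), assumed scale

def tau_570(N_H0, N_H, xi_d, Z):
    """tau_570 = sigma_H0 * N(H0) + sigma_dust * xi_d * Z * N_H."""
    tau_gas = SIGMA_H0_570 * N_H0            # equation (7), H0 term only
    tau_dust = SIGMA_DUST * xi_d * Z * N_H   # equation (8)
    return tau_gas + tau_dust

# At fixed N(H0) and N_H, raising Z raises the dust share of tau_570,
# so reaching the same tau_570 requires a smaller N(H0) at high Z:
tau_lowZ = tau_570(N_H0=5e17, N_H=1e21, xi_d=0.3, Z=0.0005)
tau_highZ = tau_570(N_H0=5e17, N_H=1e21, xi_d=0.3, Z=0.008)
```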
For chosen input parameters, including \hbox{$\tau_{570}$}, we run \textsc{cloudy}\ at age $\hbox{$t^\prime$}=0$ in the same way as described in the previous section for ionization-bounded \hbox{H{\sc ii}}\ regions, but this time stopping the calculation when the optical depth at $\lambda=570\,$\AA\ reaches \hbox{$\tau_{570}$}. At the end of the calculation, we record the H-column density corresponding to this model of density-bounded nebula. Then, for all ages $\hbox{$t^\prime$}>0$, we compute the nebular emission with \textsc{cloudy}, stopping the calculation when the H-column density reaches that determined at $\hbox{$t^\prime$}=0$, or when the electron density falls below 1 per cent of \hbox{$n_{\mathrm{H}}$}. \subsubsection{Properties of density-bounded models} Fig.~\ref{fig:fesc} illustrates the relationship between \hbox{$\tau_{570}$}, \hbox{$f_{\rm{esc}}$}\ and H-column density, \hbox{$N_{\mathrm{H}}$}, in these density-bounded models. Fig.~\ref{fig:fesc}a shows the wavelength dependence of the zero-age optical depth, $\tau_\lambda$ (equation~\ref{eq:tautot}), for models with fixed metallicity $Z=0.002$, ionization parameter $\log\hbox{$\langle U\rangle$}=-2.0$, dust-to-metal mass ratio $\hbox{$\xi_{\rm{d}}$}=0.3$, IMF upper mass cutoff $\hbox{$m_{\rm{up}}$}=300\,$\hbox{M$_{\rm{\odot}}$}, and for 5 choices of \hbox{$\tau_{570}$}, with $\log\hbox{$\tau_{570}$}$ from $-1.0$ to $+1.0$ in steps of 0.5. The curves are colour-coded to reflect the \hbox{$f_{\rm{esc}}$}\ values of these models. Also shown for comparison is the ionization-bounded model with the same parameters (in black). The breaks in the curves correspond to the ionization potentials of helium (at 228 and 504\,\AA) and hydrogen (at 912\,\AA), which give rise to sharp features in the ionizing spectra emerging from these \hbox{H{\sc ii}}\ regions (Fig.~\ref{fig:fesc}b). 
Also, the increase in $\tau_\lambda$ at wavelengths from 228 to 912\,\AA\ implies that the photon-weighted mean wavelength of ionizing photons emerging from the \hbox{H{\sc ii}}\ region increases from high to low \hbox{$\tau_{570}$}\ (as indicated by the triangles at the bottom of Fig.~\ref{fig:fesc}b). This also implies that ionizing photons with wavelengths less than 912\,\AA\ can escape the nebula when the optical depth at the Lyman edge is unity. In Fig.~\ref{fig:fesc}c, we show \hbox{$f_{\rm{esc}}$}\ as a function of \hbox{$\tau_{570}$}\ for models with different metallicities, $Z=0.0005$ (upside-down triangles), 0.002 (circles) and 0.008 (squares), and different ionization parameters, $\log\hbox{$\langle U\rangle$}=-3.0$, $-2.0$ and $-1.0$ (in order of increasing symbol size). At fixed \hbox{$\tau_{570}$}, differences in \hbox{$f_{\rm{esc}}$}\ between these models arise from differences in the wavelength dependence of $\tau_\lambda$. Increasing $Z$ at fixed $\log\hbox{$\langle U\rangle$}$ implies a larger contribution to the optical depth by metals and dust, and hence, at fixed \hbox{$\tau_{570}$}, a smaller one by H$^0$ (equations~\ref{eq:tautot}--\ref{eq:taudust}). This turns out to produce a flatter dependence of $\tau_\lambda$ on wavelength relative to that shown in Fig.~\ref{fig:fesc}a, which makes \hbox{$f_{\rm{esc}}$}\ drop at fixed \hbox{$\tau_{570}$}\ in Fig.~\ref{fig:fesc}c. Also, increasing $\log\hbox{$\langle U\rangle$}$ at fixed $Z$ (which can be achieved by raising $\epsilon$ at fixed $Q$ in equation~\ref{eq:Udef}) makes \hbox{$N_{\mathrm{H}}$}, and hence, the optical depths from metals and dust, increase, implying a smaller H$^0$ optical depth at fixed \hbox{$\tau_{570}$}. This (and the higher ionization state of metals) again contributes to making \hbox{$f_{\rm{esc}}$}\ drop when $\log\hbox{$\langle U\rangle$}$ increases in Fig.~\ref{fig:fesc}c. 
The effect is largest around the critical regime $\hbox{$\tau_{570}$}\sim1$, where \hbox{$f_{\rm{esc}}$}\ can change from 0.35 to 0.55 depending on the adopted metallicity and ionization parameter. In Fig.~\ref{fig:fesc}d, we plot \hbox{$N_{\mathrm{H}}$}\ as a function of \hbox{H{\sc ii}}-region age for the same models with $Z=0.002$ and (zero-age) $\log\hbox{$\langle U\rangle$}=-2.0$ as in Fig.~\ref{fig:fesc}a. For the reference ionization-bounded model (black curve), \hbox{$N_{\mathrm{H}}$}\ rises until ages around 1\,Myr, as massive stars evolve on the main sequence, and then drops and exhibits a secondary peak around 2.5\,Myr, when the hard ionizing radiation from hot WR stars induces a peak in $Q(\hbox{$t^\prime$})$ (and hence $\log\hbox{$\langle U\rangle$}$; equation~\ref{eq:Udef}). Then, at later ages, \hbox{$N_{\mathrm{H}}$}\ drops as the supply of ionizing photons dries up. For density-bounded models, by design, \hbox{$N_{\mathrm{H}}$}\ remains constant at all ages until $Q(\hbox{$t^\prime$})$ drops enough for the region to become ionization-bounded, reducing \hbox{$f_{\rm{esc}}$}\ to zero (Fig.~\ref{fig:fesc}e). In the case of a galaxy with constant star formation rate (i.e., adopting $\psi=\,$cst in equation~\ref{eq:flux_gal}), \hbox{$f_{\rm{esc}}$}\ does not reach zero at ages $\hbox{$t^\prime$}\gtrsim10\,$Myr, as newly formed \hbox{H{\sc ii}}\ regions continue to maintain leakage of LyC photons (Fig.~\ref{fig:fesc}f). \begin{figure*} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure2_light.png}} \end{center} \caption{Relationship between optical depth to LyC photons at $\lambda=570\,$\AA, \hbox{$\tau_{570}$}, neutral-H column density, $N(\mathrm{H}^0)$, and total H-column density, \hbox{$N_{\mathrm{H}}$}, at age $\hbox{$t^\prime$}=0$ in the density-bounded models of Section~\ref{sec:fesc}. (a) $N(\mathrm{H}^0)$ plotted against \hbox{$\tau_{570}$}\ for the same models as in Fig.~\ref{fig:fesc}c. 
(b) $N(\mathrm{H}^0)$ plotted against \hbox{$N_{\mathrm{H}}$}\ for the subset of models in (a) with metallicity $Z=0.0005$. Lines join models of fixed $\log\hbox{$\langle U\rangle$}=-3.0$, $-2.0$ and $-1.0$ (in order of increasing thickness). At the top of each line, a black diamond indicates the location of the ionization-bounded model. (c) Same as (b), but for $Z=0.008$. } \label{fig:NH0} \end{figure*} It is also interesting to examine the dependence of the neutral-H column density, \hbox{$N(\mathrm{H}^0)$}, on \hbox{$\tau_{570}$}\ and \hbox{$N_{\mathrm{H}}$}\ in the density-bounded models of Fig.~\ref{fig:fesc}. Fig.~\ref{fig:NH0}a shows \hbox{$N(\mathrm{H}^0)$}\ against \hbox{$\tau_{570}$}\ for the same zero-age models with various metallicities and ionization parameters as in Fig.~\ref{fig:fesc}c. At fixed \hbox{$\tau_{570}$}, the drop in \hbox{$N(\mathrm{H}^0)$}\ mentioned above to compensate for the enhanced opacity from metals and dust when increasing $Z$ and \hbox{$\langle U\rangle$}\ is clearly apparent in this diagram, especially at low \hbox{$\tau_{570}$}, when the outer H$^0$ layer of the density-bounded \hbox{H{\sc ii}}\ regions is very thin [i.e., $\hbox{$N(\mathrm{H}^0)$}\lesssim1.6\times10^{17}\rm{cm}^{-2}$, the column density required to produce unit optical depth at the Lyman edge]. In Figs~\ref{fig:NH0}b and \ref{fig:NH0}c, we plot \hbox{$N(\mathrm{H}^0)$}\ against total H-column density \hbox{$N_{\mathrm{H}}$}, for $Z=0.0005$ and 0.008, respectively, and in each case for the same models as in Fig.~\ref{fig:NH0}a with $\log\hbox{$\langle U\rangle$}=-3.0$, $-2.0$ and $-1.0$ (the lines join models of fixed \hbox{$\langle U\rangle$}). As noted previously, \hbox{$N_{\mathrm{H}}$}\ increases together with \hbox{$\langle U\rangle$}. 
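The threshold column quoted above follows directly from the hydrogen photoionization cross-section at the Lyman edge, $\sigma_{912}\simeq6.3\times10^{-18}\,\rm{cm}^2$, as a quick check confirms:

```python
# The quoted threshold N(H0) ~ 1.6e17 cm^-2 gives unit optical depth
# at the Lyman edge, using the standard hydrogenic cross-section there.

SIGMA_912 = 6.3e-18   # cm^2, H0 photoionization cross-section at 912 A
N_H0 = 1.6e17         # cm^-2

tau_912 = SIGMA_912 * N_H0   # ~ 1.0
```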
Also, at fixed \hbox{$\langle U\rangle$}\ and \hbox{$\tau_{570}$}, \hbox{$N_{\mathrm{H}}$}\ is smaller for $Z=0.008$ than for $Z=0.0005$, because more ionizing photons are absorbed by metals and dust relative to hydrogen at higher $Z$. At fixed ionization parameter, decreasing \hbox{$\tau_{570}$}\ relative to the ionization-bounded model firstly amounts to making \hbox{$N(\mathrm{H}^0)$}\ decrease at nearly fixed \hbox{$N_{\mathrm{H}}$}, until the outer H$^0$ layer of the \hbox{H{\sc ii}}\ region is nearly peeled off [i.e., around $\hbox{$N(\mathrm{H}^0)$}\sim1.6\times10^{17}\rm{cm}^{-2}$]. Further reducing \hbox{$\tau_{570}$}\ requires a drop in the optical depth to LyC photons arising from metals and dust, and hence smaller \hbox{$N_{\mathrm{H}}$}. The transition between the two regimes occurs at smaller \hbox{$f_{\rm{esc}}$}\ for $Z=0.008$ (Fig.~\ref{fig:NH0}c) than for $Z=0.0005$ (Fig.~\ref{fig:NH0}b), because of the larger metal and dust optical depths at higher $Z$. \begin{figure*} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure3_light.png}} \end{center} \caption{Carbon emission-line properties at age $\hbox{$t^\prime$}=0$ in the density-bounded models of Section~\ref{sec:fesc}. (a) Fractional abundances of C$^{2+}$ (dashed lines) and C$^{3+}$ (solid lines) plotted against radius $r$ (in units of the ionization-bounded \hbox{H{\sc ii}}-region radius, \hbox{$r^{\mathrm{IB}}$}), for reference ionization-bounded models with $Z=0.0005$, $\hbox{$\xi_{\rm{d}}$}=0.3$, $\hbox{$m_{\rm{up}}$}=300\,$\hbox{M$_{\rm{\odot}}$}\ and 3 values of the ionization parameter, $\log\hbox{$\langle U\rangle$}=-3.0$, $-2.0$ and $-1.0$ (in order of increasing line thickness). Triangles at the bottom locate the cutoff radii of the models with different \hbox{$\tau_{570}$}\ and $\log\hbox{$\langle U\rangle$}$ (in order of increasing symbol size) of Fig.~\ref{fig:NH0}b. (b) Same as (a), but for $Z=0.008$. 
(c) Equivalent width of the \hbox{C\,\textsc{iv}\,$\lambda1549$}\ nebular emission line (in units of the equivalent width in the ionization-bounded case) plotted against \hbox{$f_{\rm{esc}}$}, for the same density-bounded models as in Fig.~\ref{fig:fesc}c. (d) Same as (c), but for the equivalent width of \hbox{C\,\textsc{iii}]\,$\lambda1908$}. (e) Same as (c), but for the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission-line luminosity ratio. } \label{fig:struction_carbon} \end{figure*} \subsubsection{Implications for emission-line properties} We now turn to the emission-line properties of these density-bounded models. Figs~\ref{fig:struction_carbon}a and \ref{fig:struction_carbon}b show the fractions of total C abundance in the form of C$^{2+}$ (dashed lines) and C$^{3+}$ (solid lines), as a function of radius, in three reference ionization-bounded models with $\log\hbox{$\langle U\rangle$}=-3.0$, $-2.0$ and $-1.0$ (in order of increasing line thickness), for $Z=0.0005$ and 0.008, respectively, at age $\hbox{$t^\prime$}=0$. The cutoff radii of the density-bounded models with different \hbox{$\tau_{570}$}\ and $\log\hbox{$\langle U\rangle$}$ from Fig.~\ref{fig:NH0}b are indicated by triangles at the bottom of each panel. At fixed \hbox{$\langle U\rangle$}, the fractional abundance of C$^{3+}$ is largest in the inner, highly-ionized parts of the nebula, while C$^{2+}$ dominates in the outer, lower-ionization parts. Increasing \hbox{$\langle U\rangle$}\ (which can be achieved by raising $\epsilon$ at fixed $Q$ in equation~\ref{eq:Udef}) increases the probability for carbon to be multiply ionized in the inner parts of the nebula, causing an inner C$^{4+}$ zone (not shown) to develop, while the C$^{3+}$ zone thickens to the detriment of the C$^{2+}$ zone.
At fixed \hbox{$\tau_{570}$}, the cutoff radii corresponding to density-bounded models with different \hbox{$\langle U\rangle$}\ and $Z$ sample different global abundances of C$^{2+}$ and C$^{3+}$ in Figs~\ref{fig:struction_carbon}a and \ref{fig:struction_carbon}b. The implications for the \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission-line properties of models with different \hbox{$f_{\rm{esc}}$}\ are shown in the bottom panels of Fig.~\ref{fig:struction_carbon}. Figs~\ref{fig:struction_carbon}c and \ref{fig:struction_carbon}d show the equivalent widths EW(\hbox{C\,\textsc{iii}]\,$\lambda1908$}) and EW(\hbox{C\,\textsc{iv}\,$\lambda1549$}) (in units of the equivalent widths in the ionization-bounded case), respectively, as a function of \hbox{$f_{\rm{esc}}$}, for the same zero-age models with different \hbox{$\langle U\rangle$}\ and $Z$ as in Fig.~\ref{fig:fesc}c above. Fig.~\ref{fig:struction_carbon}e shows the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ line-luminosity ratio. As expected from Figs~\ref{fig:struction_carbon}a and \ref{fig:struction_carbon}b, the gradual removal of the outer low-ionization zone when \hbox{$f_{\rm{esc}}$}\ rises makes EW(\hbox{C\,\textsc{iii}]\,$\lambda1908$}) decrease more rapidly than EW(\hbox{C\,\textsc{iv}\,$\lambda1549$}), and the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ ratio drop, the strengths of these effects increasing with both \hbox{$\langle U\rangle$}\ and $Z$. 
We note that, for low \hbox{$\langle U\rangle$}\ and $Z$, the rise in EW(\hbox{C\,\textsc{iv}\,$\lambda1549$}) when \hbox{$f_{\rm{esc}}$}\ increases in Fig.~\ref{fig:struction_carbon}c is caused by the drop in recombination-continuum flux at nearly constant line luminosity, since the \hbox{C\,\textsc{iv}\,$\lambda1549$}\ zone (thin solid lines in Figs~\ref{fig:struction_carbon}a and \ref{fig:struction_carbon}b) is unaffected by the cuts in \hbox{$N_{\mathrm{H}}$}\ \citep[small coloured triangles; see also][]{raiter10,jaskot16}. In Fig.~\ref{fig:struction_oxygen}, we show the analogues of Figs~\ref{fig:struction_carbon}a and \ref{fig:struction_carbon}e for the \hbox{[O\,\textsc{ii}]$\lambda3727$}\ and \hbox{[O\,\textsc{iii}]$\lambda5007$}\ lines. The fractional abundances of O$^{2+}$ and O$^+$ (Fig.~\ref{fig:struction_oxygen}a) exhibit a dependence on radius similar to that of C$^{2+}$ and C$^{3+}$ (Fig.~\ref{fig:struction_carbon}a), except that the outer low-ionization O$^+$ zone is thinner than the outer C$^{2+}$ zone at all ionization parameters. This causes the \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ ratio (Fig.~\ref{fig:struction_oxygen}b) to drop more steeply than the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ ratio (Fig.~\ref{fig:struction_carbon}e) when \hbox{$f_{\rm{esc}}$}\ increases, until the O$^+$ zone disappears. \begin{figure*} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure4_light.png}} \end{center} \caption{Oxygen emission-line properties at age $\hbox{$t^\prime$}=0$ in the density-bounded models of Section~\ref{sec:fesc}. (a) Same as Fig.~\ref{fig:struction_carbon}a, but for the fractions of O$^{+}$ (dashed lines) and O$^{2+}$ (solid lines).
(b) Same as Fig.~\ref{fig:struction_carbon}e, but for the \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ emission-line luminosity ratio.} \label{fig:struction_oxygen} \end{figure*} \begin{figure*} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure5_light.png}} \end{center} \caption{Emission-line properties of the density-bounded models of Section~\ref{sec:fesc}. (a) \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ emission-line luminosity plotted against \hbox{H{\sc ii}}-region age \hbox{$t^\prime$}\ for the same models with $Z=0.002$, $\log\hbox{$\langle U\rangle$}=-2.0$, $\hbox{$\xi_{\rm{d}}$}=0.3$, $\hbox{$m_{\rm{up}}$}=300\,$\hbox{M$_{\rm{\odot}}$}\ and 5 choices of \hbox{$\tau_{570}$}\ as in Fig.~\ref{fig:fesc}d. (b) Same as (a), but for \hbox{C\,\textsc{iv}\,$\lambda1549$}. (c) \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission-line luminosity ratio for the same models with different $Z$ and $\log\hbox{$\langle U\rangle$}$ as in Fig.~\ref{fig:struction_carbon}e, but at age $t=10\,$Myr for a galaxy with constant star formation rate (i.e., adopting $\psi=\,$cst in equation~\ref{eq:flux_gal}). (d) Same as (a), but for \hbox{[O\,\textsc{ii}]$\lambda3727$}. (e) Same as (a), but for \hbox{[O\,\textsc{iii}]$\lambda5007$}. (f) Same as (c), but for the \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ ratio.} \label{fig:convol} \end{figure*} So far, we have described the emission-line properties of density-bounded \hbox{H{\sc ii}}-region models at age $\hbox{$t^\prime$}=0$ only. 
Fig.~\ref{fig:convol} shows the evolution of the \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ (Fig.~\ref{fig:convol}a), \hbox{C\,\textsc{iv}\,$\lambda1549$}\ (Fig.~\ref{fig:convol}b), \hbox{[O\,\textsc{ii}]$\lambda3727$}\ (Fig.~\ref{fig:convol}d) and \hbox{[O\,\textsc{iii}]$\lambda5007$}\ (Fig.~\ref{fig:convol}e) emission-line luminosities as a function of \hbox{$t^\prime$}\ for the same models with $Z=0.002$, $\log\hbox{$\langle U\rangle$}=-2.0$, $\hbox{$\xi_{\rm{d}}$}=0.3$, $\hbox{$m_{\rm{up}}$}=300\,$\hbox{M$_{\rm{\odot}}$}\ and 5 choices of \hbox{$\tau_{570}$}\ as in Fig.~\ref{fig:fesc}d. As in the case of \hbox{$N_{\mathrm{H}}$}\ in Fig.~\ref{fig:fesc}d, the luminosity of emission lines in the reference ionization-bounded model (black curve in Figs~\ref{fig:convol}a--\ref{fig:convol}b and \ref{fig:convol}d--\ref{fig:convol}e) reaches a maximum at ages around 1\,Myr and exhibits a secondary peak when the hard ionizing radiation from hot WR stars kicks in, around 2.5\,Myr. Then, after the most massive stars have died, line emission fades. The secondary peak is most prominent in the evolution of the \hbox{C\,\textsc{iv}\,$\lambda1549$}\ luminosity, since \hbox{C\,\textsc{iv}}\ requires the most energetic photons to be produced ($E_\mathrm{ion}>47.9\,$eV, compared to $35.1\,$eV for O\,\textsc{iii}). In contrast, the \hbox{[O\,\textsc{ii}]$\lambda3727$}\ luminosity, which requires the least energetic photons ($E_\mathrm{ion}>13.6\,$eV to produce O\,\textsc{ii}, compared to $24.4\,$eV for C\,\textsc{iii}) does not drop as sharply as that of the other three lines at ages greater than a few Myr. 
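The ordering of these ion-creation energies, which controls how quickly each line fades once the most massive stars die, can be made explicit in a few lines (a minimal illustration; the energies are the values quoted above):

```python
# Energy (eV) of the photons needed to create each emitting ion,
# as quoted in the text: the harder the threshold, the earlier
# the line fades as the ionizing spectrum softens with age.
E_create = {
    "C IV 1549":   47.9,   # C^2+ -> C^3+
    "O III 5007":  35.1,   # O^+  -> O^2+
    "C III] 1908": 24.4,   # C^+  -> C^2+
    "[O II] 3727": 13.6,   # O^0  -> O^+
}

# Lines ranked from hardest to softest ionization requirement.
hardest_first = sorted(E_create, key=E_create.get, reverse=True)
```

This ranking matches the behaviour described above: \hbox{C\,\textsc{iv}\,$\lambda1549$}\ is the most sensitive to the hard WR radiation, while \hbox{[O\,\textsc{ii}]$\lambda3727$}\ persists longest as the cluster ages.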
For density-bounded models, the gradual removal of the outer low-ionization envelope when \hbox{$\tau_{570}$}\ decreases reduces the \hbox{[O\,\textsc{ii}]$\lambda3727$}\ luminosity more strongly than the \hbox{C\,\textsc{iii}]\,$\lambda1908$}, \hbox{O\,\textsc{iii}]\,$\lambda1664$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ ones at early ages, until the ionizing flux has dropped low enough for the nebula to become ionization-bounded. The low-ionization zone reappears, causing a sharp rise in \hbox{[O\,\textsc{ii}]$\lambda3727$}\ luminosity at ages $\hbox{$t^\prime$}\gtrsim3\,$Myr. In Figs~\ref{fig:convol}c and \ref{fig:convol}f, we show the resulting dependence on \hbox{$f_{\rm{esc}}$}\ of the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ and \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ emission-line luminosity ratios, respectively, at age $t=10\,$Myr for a galaxy with constant star formation rate (i.e., adopting $\psi=\,$cst in equation~\ref{eq:flux_gal}), for the same models with different $Z$ and $\log\hbox{$\langle U\rangle$}$ as in Fig.~\ref{fig:struction_carbon}e. The results for \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ are very similar to those described above for single zero-age \hbox{H{\sc ii}}\ regions (Fig.~\ref{fig:struction_carbon}e), as expected from the similar effect of reducing \hbox{$\tau_{570}$}\ on the evolution of the \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ luminosities (Figs~\ref{fig:convol}a and \ref{fig:convol}b). Instead, the dependence of \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ on \hbox{$f_{\rm{esc}}$}\ in Fig.~\ref{fig:convol}f differs from that found for zero-age \hbox{H{\sc ii}}\ regions in Fig.~\ref{fig:struction_oxygen}b (for $ \log\hbox{$\langle U\rangle$}\geq-2.0$). 
For $\hbox{$\tau_{570}$}=1$, for example, corresponding roughly to $\hbox{$f_{\rm{esc}}$}\approx0.3$ in the different models of Fig.~\ref{fig:convol}f (and $\hbox{$f_{\rm{esc}}$}\approx0.5$ at $\hbox{$t^\prime$}=0$ in the single \hbox{H{\sc ii}}-region model of Fig.~\ref{fig:convol}d), the rise in \hbox{[O\,\textsc{ii}]$\lambda3727$}\ luminosity (Fig.~\ref{fig:convol}d) and corresponding drop in \hbox{[O\,\textsc{iii}]$\lambda5007$}\ luminosity (Fig.~\ref{fig:convol}e) at ages $\hbox{$t^\prime$}\gtrsim3\,$Myr in the evolution of single \hbox{H{\sc ii}}\ regions can cause \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ to exceed the ionization-bounded value at $t=10\,$Myr for a galaxy with constant star formation rate, especially for large \hbox{$\langle U\rangle$}\ (Fig.~\ref{fig:convol}f). Hence, while a small \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ ratio can be a signature of significant LyC leakage in models of individual density-bounded \hbox{H{\sc ii}}\ regions, this is not the case for model galaxies containing several generations of \hbox{H{\sc ii}}\ regions \citep[see also][]{jaskot13}. It is worth noting that the models presented here are highly idealized, and that, in practice, a galaxy will contain different types of \hbox{H{\sc ii}}\ regions with different optical depths to LyC photons, metallicities and ionization parameters. In any case, the complex dependence of the \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{C\,\textsc{iv}\,$\lambda1549$}\ and \hbox{[O\,\textsc{ii}]$\lambda3727$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ ratios on \hbox{$f_{\rm{esc}}$}\ identified in Figs~\ref{fig:struction_carbon}--\ref{fig:convol} above illustrates the difficulty of observationally tracking galaxies which lose significant amounts of LyC photons. We will return to this point in Section~\ref{sec:constraints}.
\subsection{Nebular emission from AGN}\label{sec:agn} To explore the influence of an AGN on the nebular emission from a young star-forming galaxy, we use photoionization calculations of AGN narrow-line regions based on the approach of \citet{Feltre2016}. This relies on a parametrization of \textsc{cloudy}\ similar to that described above in terms of hydrogen density \hbox{$n_{\mathrm{H}}$}, gas metallicity $Z$, C/O ratio, dust-to-metal mass ratio \hbox{$\xi_{\rm{d}}$}\ and volume-averaged ionization parameter \hbox{$\langle U\rangle$}, but using the emission from an accretion disc in place of equation~\eqref{eq:flux_gal} as input radiation. The spectral energy distribution of the accretion disc is parametrized as \begin{equation} S_\nu \propto \left \{ \begin{array}{ll} \nu^{\alpha} & \mbox{at wavelengths} \quad 0.001\leq \lambda/\micron \leq 0.25\,,\\ \nu^{-0.5} & \mbox{at wavelengths} \quad 0.25< \lambda/\micron \leq 10.0\,,\\ \nu^{2} & \mbox{at wavelengths} \quad \lambda/\micron >10.0\,.\\ \end {array} \right. \label{eq:Lagn} \end{equation} We adopt here for simplicity a fixed slope $\alpha=-1.7$ at high energies \citep{Zheng97,Lusso15} and a fixed gas density $\hbox{$n_{\mathrm{H}}$}=10^3\,$cm$^{-3}$ in the narrow-line region \citep[e.g.,][]{Osterbrock2006}. The \textsc{cloudy}\ calculations are performed in `open geometry' \citep[see][for more details]{Feltre2016}. The models adopted here differ from those originally published by \citet{Feltre2016} in that they are computed using version c17.00 of \textsc{cloudy}\ \citep{cloudyc17} and include dissipative microturbulence in the gas clouds in the narrow-line region (with a microturbulence velocity of 100\,km\,s$^{-1}$) and a smaller inner radius of this region (90\,pc instead of 300\,pc, for an AGN luminosity of $10^{45}$\,erg\,s$^{-1}$). 
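As an aside, the piecewise spectral shape of equation~\eqref{eq:Lagn} is straightforward to evaluate numerically. The sketch below (illustrative only) assumes continuity at the two break wavelengths, which the equation itself, written with proportionalities, does not enforce:

```python
import numpy as np

def agn_sed(lam_um, alpha=-1.7):
    """Piecewise power-law accretion-disc SED, S_nu, in arbitrary units.

    Slopes as in the text: nu^alpha for 0.001 <= lam/micron <= 0.25,
    nu^-0.5 for 0.25 < lam/micron <= 10, and nu^2 beyond.  Since
    nu is proportional to 1/lam, nu^a maps on to lam^-a.  Continuity
    at the break wavelengths is an assumption made here for
    illustration; the equation only fixes the slopes.
    """
    lam = np.asarray(lam_um, dtype=float)
    l1, l2 = 0.25, 10.0                  # break wavelengths (micron)
    c2 = l1 ** (-alpha - 0.5)            # matches the pieces at l1
    c3 = c2 * l2 ** 2.5                  # matches the pieces at l2
    return np.where(lam <= l1, lam ** (-alpha),
                    np.where(lam <= l2, c2 * lam ** 0.5,
                             c3 * lam ** (-2.0)))
```

With $\alpha=-1.7$, the ultraviolet piece scales as $\lambda^{1.7}$ and the long-wavelength tail as $\lambda^{-2}$, as expected from the slopes above.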
The adopted microturbulence and inner-radius values were found to better reproduce the observed ultraviolet emission-line spectra (in particular, the \hbox{N\,\textsc{v}\,$\lambda1240$}\ emission) of a sample of 90 type-2 AGN at redshifts $z=1.5$--3.0 \citep{Mignoli19}. \subsection{Nebular emission from radiative shocks}\label{sec:shocks} We are also interested in the effects of a contribution by shock-ionized gas to the nebular emission from actively star-forming galaxies. We appeal to the 3MdBs database\footnote{See \url{http://3mdb.astro.unam.mx:3686}} of fully radiative shock models recently computed by \citet{3MdBs19} using the \textsc{mappings\,v}\ shock and photoionization code \citep{MappingsV2017}. The models (computed in plane-parallel geometry) are available for the same sets of element abundances as adopted in the stellar and AGN photoionization models described in Sections~\ref{sec:ionb}--\ref{sec:agn} above (albeit for only two values of the C/O ratio: 0.11 and 0.44). Metal depletion on to dust grains is not included in this case, as in fast shocks, dust can be efficiently destroyed by grain-grain collisions, through both shattering and spallation, and by thermal sputtering \citep{allen08}. The other main adjustable parameters defining the model grid are the shock velocity (from $10^2$ to $10^3$\,km\,s$^{-1}$), pre-shock density (from 1 to $10^4$\,cm$^{-3}$) and transverse magnetic field (from $10^{-4}$ to 10\,$\mu$G). The pre-shock density (\hbox{$n_{\mathrm{H}}$}) and transverse magnetic field (denoted $B$) have a much weaker influence than shock velocity on most emission lines of interest to us (see Section~\ref{sec:params}). Thus, in the following, to probe global trends in the influence of radiative shocks on the nebular emission from star-forming galaxies, we consider for simplicity models with fixed $\hbox{$n_{\mathrm{H}}$}=10^2\,$cm$^{-3}$ and $B=1\,\mu$G in the full available range of shock velocities.
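This selection step can be pictured as follows (a minimal sketch; the list of dicts is a hypothetical in-memory stand-in for the 3MdBs grid, whose actual parameter values and access interface differ):

```python
# Sketch of the parameter selection: from a grid of precomputed
# radiative-shock models, keep only those at a fixed pre-shock
# density and magnetic field, spanning all shock velocities.
# Hypothetical values, for illustration only.
grid = [
    {"v_shock": v, "n_H": n, "B": b}
    for v in (100.0, 200.0, 500.0, 1000.0)  # shock velocity (km/s)
    for n in (1.0, 100.0, 10000.0)          # pre-shock density (cm^-3)
    for b in (1e-4, 1.0, 10.0)              # transverse B field (uG)
]

def fixed_sequence(models, n_H=100.0, B=1.0):
    """Models at fixed (n_H, B), ordered by increasing shock velocity."""
    sel = [m for m in models if m["n_H"] == n_H and m["B"] == B]
    return sorted(sel, key=lambda m: m["v_shock"])

seq = fixed_sequence(grid)
```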
We focus here on the predictions of models including nebular emission from both shocked and shock-precursor (i.e., pre-shock gas photoionized by the shock) gas (see \citealt{3MdBs19} for more details). \afterpage{% \begin{landscape} \begin{table} \begin{threeparttable} \centering \caption{Published ultraviolet (/optical) spectroscopic analyses of metal-poor star-forming galaxies at redshifts between 0.003 and 7.1.} \begin{tabular}{lclllcl} \toprule Reference & $z$ & Sample & \hbox{12 + log(\OH)$_\mathrm{gas}$}\ & Modelling & Physical parameters\tnote{\it a} & Comment \\ \midrule \citet{senchyna17,senchyna19} & $<0.05$ & 10 extreme SF regions and & 7.5--8.5 & \textsc{beagle}\ & $-1.4<\log\hbox{C/O}<-0.7$\tnote{\it b} & Transition from primarily stellar to purely nebular \hbox{He\,\textsc{ii}\,$\lambda1640$}\ \\ & & 6 extremely metal-poor SF & & \textsc{cloudy}\ 13.03 & $-3.6<\log \hbox{$\langle U\rangle$}<-1.9$\tnote{\it b} & near $\hbox{12 + log(\OH)}\lesssim 8.0$; no evidence for shocks nor XRBs;\tnote{\it b} \\ & & galaxies from SDSS & & (dust physics) & sSFR$\,\sim\,$2--300\,Gyr$^{-1}$ & strong \hbox{C\,\textsc{iv}\,$\lambda1549$}\ traces young ages, extremely low $Z$ and $\alpha$/Fe \\ \rule{-2pt}{3ex} \citet{berg16,berg19} & $<0.14$ & 32 compact SF regions with & 7.4--8.0 & \textsc{bpass}\ v2.14\tnote{\it c} & $-1.0<\log\hbox{C/O}<-0.3$ & No obvious AGN ($\hbox{C\,\textsc{iv}\,$\lambda1549$}/\hbox{C\,\textsc{iii}]\,$\lambda1908$} < 1$) nor shock \\ & & EW$(\hbox{[O\,\textsc{iii}]$\lambda5007$})>50\,$\AA\ & & \textsc{cloudy}\ 17.00 & $-2.8<\log U_0<-1.8$ & ($\hbox{[O\,\textsc{i}]\,$\lambda6300$}/\hbox{[O\,\textsc{iii}]$\lambda5007$} < 0.01$; see Fig.~\ref{fig:OI}) contribution \\ & & & & (no dust) & sSFR$\,\sim\,$1--40\,Gyr$^{-1}$ & \\ \rule{-2pt}{3ex} \citet{berg18} & 1.8 & Lensed, extreme-SF galaxy & $\sim7.5$ & \textsc{bpass}\ v2.14 & $\log\hbox{C/O}\sim-0.8$ & Strong nebular \hbox{He\,\textsc{ii}\,$\lambda1640$}\ not reproducible by models; no \\ & & 
SL2SJ021737-051329 & & \textsc{cloudy}\ 17.00 & $\log U_0\sim-1.5$ & obvious AGN nor shock contribution (\citealt{groves04}-AGN \\ & & & & (no dust) & sSFR$\,\sim\,$100\,Gyr$^{-1}$ & and \citealt{allen08}-shock prescriptions deteriorate fits) \\ \rule{-2pt}{3ex} \citet{stark14} & 1.5--3.0 & 17 lensed, dwarf SF galaxies & 7.3--7.8 & \textsc{beagle}-like & $-1.0<\log\hbox{C/O}<-0.3$ & \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ detected in 16 systems; strongest (EW$\,>\,$10\,\AA) \\ & & & & \textsc{cloudy}\ 13.03 & $-1.8<\log \hbox{$\langle U\rangle$}<-1.5$\tnote{\it d} & emitters show \hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission, while nebular \hbox{He\,\textsc{ii}\,$\lambda1640$}\ \\ & & & & (dust physics) & sSFR$\,\sim\,$2--40\,Gyr$^{-1}$ & is weak or non-detected in many systems \\ \rule{-2pt}{3ex} \citet{erb10} & 2.3 & Low-mass SF galaxy & $\sim7.8$ & \textsc{starburst}\small{99}\tnote{\it e} & $\log\hbox{C/O}\sim-0.6$ & Unlikely AGN contribution ($\hbox{C\,\textsc{iv}\,$\lambda1549$}/\hbox{C\,\textsc{iii}]\,$\lambda1908$}\sim0.3$)\\ & & Q2343-BX418 & & \textsc{cloudy}\ 08.00 & $\log U_0\sim-1.0$ & \\ & & & & (dust physics) & sSFR$\,\sim\,$16\,Gyr$^{-1}$ & \\ \rule{-2pt}{3ex} \citet{amorin17} & 2.4--3.5 & 10 VUDS SF galaxies with & 7.4--7.7 & \textsc{popstar}\tnote{\it f} & $-1.0<\log\hbox{C/O}<-0.4$ & No obvious AGN contributions (from \hbox{C\,\textsc{iv}\,$\lambda1549$}, \hbox{He\,\textsc{ii}\,$\lambda1640$},\\ & & \hbox{Ly$\alpha$}, \hbox{O\,\textsc{iii}]\,$\lambda1664$}\ and & & \textsc{cloudy}\ 13.03 & $-2.3<\log U_0<-1.7$ & \hbox{C\,\textsc{iii}]\,$\lambda1908$}, \hbox{O\,\textsc{iii}]\,$\lambda1664$}; no X-ray emission nor broad lines) \\ & & \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ emission & & (dust physics) & sSFR$\,\sim\,$5--50\,Gyr$^{-1}$ & \\ \rule{-2pt}{3ex} \citet{nakajima18} & 2--4 & 450 VUDS SF galaxies with & $\sim8.3$ (C) & \textsc{popstar}\ & $\log\hbox{C/O}\sim-0.5$ (C), & AGN contribution to ionizing radiation required in strongest \\ & & 
\hbox{C\,\textsc{iii}]\,$\lambda1908$}\ emission (C)\tnote{\it g} & $\sim7.6$ (B) & \textsc{bpass}\ v2.0 &$-0.3$ (B) and $-0.4$ (A) & \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ emitters, irrespective of the inclusion of binary \\ & & 43 with EW$\,=\,$10--20\,\AA\ (B)\tnote{\it g} & $\sim7.8$ (A) & \textsc{cloudy}\ 13.03 & $\log U_0\sim-2.9$ (C), & stars in the stellar population modelling \\ & & 16 with EW$\,>\,$20\,\AA\ (A)\tnote{\it g} & & (dust physics) & $-1.7$ (B) and $-1.6$ (A) & \\ \rule{-2pt}{3ex} \citet{nanayakkara19} & 2--4 & 12 MUSE SF galaxies with & 7.9--8.6 & \textsc{bpass}\ v2.1 & $\log\hbox{C/O}\sim-0.4$ & Rest-frame \hbox{He\,\textsc{ii}\,$\lambda1640$}\ equivalent widths of 0.2--10\,\AA\ not \\ & & \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission & & \textsc{cloudy}\ 13.03 & $-2.5<\log U_0<-1.5$ & reproducible by models with or without binary stars \\ & & & & (no dust) & & \\ \rule{-2pt}{3ex} \citet{vanzella17} & 3.2 & Lensed double-super star & $\sim7.7$ & & sSFR$\,\sim\,$20\,Gyr$^{-1}$ & Emission-line spectrum from \hbox{C\,\textsc{iv}\,$\lambda1549$}\ through \\ & & cluster ID14 & & & & \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ consistent with photoionization by stars \\ \rule{-2pt}{3ex} \citet{fosbury03} & 3.4 & Lensed \hbox{H{\sc ii}}\ galaxy & $\sim7.6$ & Pure blackbody & $\log U_0\sim-1.0$ & Absence of \hbox{N\,\textsc{v}\,$\lambda1240$}\ and weakness of \hbox{N\,\textsc{iii}]\,$\lambda1750$}\ \\ & & RX J0848+4456 & & \textsc{mappings\,v}\ Ic & & taken as evidence against photoionization by an AGN \\ & & & & (no dust) & & \\ \rule{-2pt}{3ex} \citet{schmidt17} & 6.1 & Lensed Ly$\alpha$ galaxy & $<8.3$ & & sSFR$\,\sim\,$40\,Gyr$^{-1}$ & Emission-line spectrum from \hbox{C\,\textsc{iv}\,$\lambda1549$}\ through\\ & & & & & & \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ consistent with photoionization by stars \\ & & & & & & \\ \rule{-2pt}{3ex} \citet{stark15} & 7.0 & Lensed Ly$\alpha$ galaxy & $\sim7.0$ & \textsc{beagle}-like & $\log \hbox{$\langle 
U\rangle$}\sim1.0$ & \hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission and upper limits on \hbox{He\,\textsc{ii}\,$\lambda1640$}\ \\ & & & & \textsc{cloudy}\ 13.03 & & and \hbox{O\,\textsc{iii}]\,$\lambda1664$}\ emission consistent with photoionization \\ & & & & (dust physics) & & by both an AGN and stars \\ \rule{-2pt}{3ex} \citet{laporte17} & 7.1 & \hbox{[O\,\textsc{iii}]$\lambda5007$}-strong SF & & & & Prominent \hbox{N\,\textsc{v}\,$\lambda1240$}\ (and \hbox{He\,\textsc{ii}\,$\lambda1640$}) emission supports \\ & & galaxy COSY & & & & photoionization by an AGN \\ \bottomrule \end{tabular} \label{tab:analogs} \begin{tablenotes} \item [{\it a}] $U_0$ refers to the ionization parameter at the inner edge of the gas cloud, which, in models with spherical geometry, corresponds to the inner radius of the \hbox{H{\sc ii}}\ region and is roughly 3 times larger than the volume-averaged ionization parameter \hbox{$\langle U\rangle$}\ described in Section~\ref{sec:ionb}. $^{\it b}$ Pertains to the 10 extreme SF regions studied by \citet{senchyna17}. $^{\it c}$ \citet{berg16} used \textsc{starburst}\small{99}. $^{\it d}$ Range spanned by the 4 most extreme \hbox{C\,\textsc{iii}]\,$\lambda1908$}-emitting galaxies in the sample. $^{\it e}$ \citep{starburst99,Leitherer14}. $^{\it f}$ \citep{popstar}. $^{\it g}$ Letter referring to the corresponding sample in \citet{nakajima18}. \end{tablenotes} \end{threeparttable} \end{table} \end{landscape} } \section{Observational constraints}\label{sec:obs} In this section, we build a reference sample of the nebular emission from metal-poor star-forming galaxies and LyC leakers at various redshifts, including also other star-forming galaxies and AGN, which we will use in Sections~\ref{sec:params} and \ref{sec:constraints} to explore potentially discriminating signatures of the different adjustable parameters of the versatile models presented in Section~\ref{sec:models}.
To this end, we wish to assemble a large homogeneous sample of observations of metal-poor star-forming galaxies at ultraviolet and optical wavelengths, by gathering from the literature data often analysed in independent ways using different models and assumptions. In the following, we assemble observations of such galaxies (Section~\ref{obs:analogs}), as well as of confirmed and candidate LyC leakers (Section~\ref{obs:leakers}) and other star-forming galaxies and AGN (Section~\ref{obs:normal}) in a wide redshift range. We consider here only observational studies which gathered enough nebular emission-line properties to be plotted in at least one of the diagrams we investigate. Observations involving ultraviolet lines are presented in Fig.~\ref{fig:obs_uv}, and those involving optical lines in Fig.~\ref{fig:obs_opt}. We comment on the general properties of this reference sample in Section~\ref{sec:obsprop}. \subsection{Metal-poor star-forming galaxies}\label{obs:analogs} We list in Table~\ref{tab:analogs} the main characteristics of 13 samples of low-metallicity, actively star-forming galaxies. The samples are arranged in order of increasing redshift. In each case, we indicate the nature of the sample; the published gas-phase oxygen abundances of galaxies; the modelling tools used to interpret the observations (ionizing stellar population spectra and photoionization model); the constraints derived on physical parameters such as \hbox{C/O}\ ratio, ionization parameter and specific star formation rate; and the main conclusions drawn in the original studies. The gas-phase oxygen abundance, \hbox{12 + log(\OH)$_\mathrm{gas}$}, is usually estimated using the direct-$T_\mathrm{e}$ method (see section~5.1 of \citealt{gutkin2016} for potential caveats of this method), and otherwise through photoionization calculations, including or not (in which case the total gas+dust-phase O abundance is not computed) depletion of oxygen onto dust grains.
Also, we note that the differences in model analyses between the different studies in Table~\ref{tab:analogs} go beyond the listed details. For example, the ionizing stellar population spectra can refer to different IMF shapes and upper-mass limits, star formation histories and ages. We do not focus on such differences here, as our main goal is to provide rough estimates of the characteristics of the various samples, which we will compare globally with a homogeneous set of models in Section~\ref{sec:params}. We now briefly describe these samples. In the nearby Universe ($z\lesssim0.1$), \textit{HST}/COS observations have brought valuable insight into the rest-ultraviolet properties of metal-poor star-forming galaxies with hard ionizing spectra. \citet{senchyna17} observed 10 galaxies from the sample of \hbox{He\,\textsc{ii}$\lambda4686$}-emitting, Sloan Digital Sky Survey (SDSS) star-forming galaxies of \citet{shirazi12}, \citet{senchyna19} 6 extremely metal-poor ($Z/\hbox{${Z}_{\odot}$}\lesssim 0.1$) galaxies from the SDSS sample of \citet{Morales11} and \citet{berg16,berg19} 32 compact, ultraviolet-bright, SDSS star-forming galaxies with \hbox{[O\,\textsc{iii}]$\lambda5007$}\ emission equivalent widths larger than 50\,\AA. Objects in these samples show no sign of AGN activity and range from high-ionization \hbox{H{\sc ii}}\ regions embedded in larger galaxies to blue compact dwarf galaxies. They can reach \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ equivalent widths as large as $\sim15\,$\AA, similar to those found in galaxies at redshifts $z>6$ (see below). \citet{senchyna17} identify a marked transition with decreasing metallicity around $\hbox{12 + log(\OH)}\approx8.0$, from stellar-wind dominated to nebular-dominated \hbox{He\,\textsc{ii}\,$\lambda1640$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission.
Analysis with the \textsc{beagle}\ code \citep{Chevallard2016} allows them to reproduce all the stellar (e.g., \hbox{C\,\textsc{iv}\,$\lambda1549$}\ P-Cygni and broad-\hbox{He\,\textsc{ii}\,$\lambda1640$}\ wind features) and nebular ultraviolet/optical emission-line properties of their sample \citep[see also][]{Chevallard18}, except for the strong nebular \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission in the most metal-poor systems, which does not seem to be reproducible by any current stellar population synthesis prescription. Like \citet{berg16,berg19}, they do not find any strong evidence for a contribution by radiative shocks to the nebular emission of galaxies in their sample. At intermediate redshifts ($2\lesssim z\lesssim4$), spectroscopic observations with large optical telescopes have allowed detailed studies of the rest-ultraviolet spectra of unusually bright or lensed, dwarf star-forming galaxies (Table~\ref{tab:analogs}). The galaxies in these samples exhibit spectral characteristics typical of high-ionization, metal-poor star-forming galaxies. Remarkably, they also often show strong nebular \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission, which cannot be reproduced by any current stellar population synthesis models, even when including enhanced production of hard ionizing radiation via binary mass transfer \citep{berg18,nakajima18,nanayakkara19}. Significant contribution from a luminous AGN is disfavoured in most cases, based on the weakness of \hbox{N\,\textsc{v}\,$\lambda1240$}, the low observed \hbox{C\,\textsc{iv}\,$\lambda1549$}/\hbox{C\,\textsc{iii}]\,$\lambda1908$}\ ratio and the lack of X-ray detection and broad emission lines, standard optical \citep[e.g.][hereafter BPT]{BPT} diagnostic diagrams being generally not available in this redshift range \citep{stark14,erb10,amorin17,vanzella17,berg18,nanayakkara19}.
This is not the case for \citet[][we adopt here the 3 composite spectra representative of classes A, B and C from this sample, with additional data from \citealt{lefevre19} for classes A and B; see Table~\ref{tab:analogs}]{nakajima18}, who argue that an AGN contribution is required to account for the \hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ properties of the galaxies with $\mathrm{EW(\hbox{C\,\textsc{iii}]\,$\lambda1908$})}>20\,$\AA\ in their sample. At the highest redshifts ($z>6$), we report in Table~\ref{tab:analogs} the constraints from near-infrared spectroscopy on the rest-ultraviolet emission of three galaxies probing the reionization era. In one of these, the emission-line spectrum from \hbox{C\,\textsc{iv}\,$\lambda1549$}\ through \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ favours photoionization by stars rather than by an AGN \citep{schmidt17}. In another, photoionization could arise from an AGN or hot stars \citep{stark15}. In the third, prominent \hbox{N\,\textsc{v}\,$\lambda1240$}\ emission and the low \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ ratio both support photoionization by an AGN \citep{laporte17}. Remarkably, all three galaxies show strong Ly$\alpha$ emission, suggesting intense radiation fields capable of creating early ionized bubbles in the surrounding hydrogen distribution. Overall, the published spectral analyses of the observations listed in Table~\ref{tab:analogs} consistently point toward galaxies with low metallicities, typically $7.5\lesssim\hbox{12 + log(\OH)}\lesssim8.0$, low C/O ratios, $-1.0\lesssim\log\hbox{C/O}\lesssim-0.3$, high ionization parameters, $-3.0\lesssim\log \hbox{$\langle U\rangle$}\lesssim-1.5$ and large specific star formation rates, from $\sim10$ to a few $\times100$\,Gyr$^{-1}$, across a wide redshift range.
The properties of these galaxies approach those expected for primeval galaxies near the reionization epoch \citep[e.g.,][]{stark16}. \afterpage{% \begin{landscape} \begin{table} \begin{threeparttable} \centering \caption{Published ultraviolet (/optical) spectroscopic analyses of confirmed and candidate LyC leakers at redshifts between 0.02 and 3.2.} \begin{tabular}{lclllcl} \toprule Reference & $z$ & Sample & \hbox{12 + log(\OH)$_\mathrm{gas}$}\ & Modelling & Physical parameters\tnote{\it a} & Comment \\ \midrule \citet{leitet11} & $0.02$ & {\it FUSE} rest-900\,\AA\ detection of & $\sim7.9$ & & $\hbox{$f_{\rm{esc}}$}\sim3\%$ & Consistent with 2-dimensional data, count rates, and limits on \\ & & blue compact galaxy Haro~11 & & & & residual flux in \hbox{C\,\textsc{ii}\,$\lambda\lambda1036,1037$}\ interstellar absorption line \\ \rule{-2pt}{3ex} \citet{izotov17oct} & $<0.14$ & 5 SDSS compact SF galaxies & 7.5--7.8 & \textsc{starburst}\small{99} & sSFR$\,\sim\,$50--400\,Gyr$^{-1}$ & $\hbox{O$_\textrm{32}$}$ alone not good indicator of LyC leakage because depends \\ & & with $\hbox{O$_\textrm{32}$}>20$ and no AGN & & \textsc{cloudy}\ 13.04 & $\hbox{$N_{\mathrm{H}}$}<\hbox{$N_{\mathrm{H}}^{\mathrm{IB}}$}$ in 3 galaxies\tnote{\it b} & on details of ionizing spectrum; \hbox{He\,\textsc{i}\,$\lambda3889$}, \hbox{He\,\textsc{i}\,$\lambda6678$}\ and \\ & & spectral feature\tnote{\it c} & & (dust physics) & (potentially $\hbox{$f_{\rm{esc}}$}>20$\%) & \hbox{He\,\textsc{i}\,$\lambda7065$}\ more promising tracers of density-bounded regions\\ \rule{-2pt}{3ex} \citet{chisholm17} & 0.04--0.2 & {\it HST}/COS rest-900\,\AA\ archival & 8.1--8.7 & & $\hbox{$f_{\rm{esc}}$}\sim0.4$--1.9\% & Weak \hbox{Si\,\textsc{ii}\,$\lambda1260$}\ and \hbox{Si\,\textsc{iii}\,$\lambda1206$}\ absorption consistent with \\ & & data of 3 confirmed leakers & & & & density-bounded regions, although gas covering factor may vary \\ \rule{-2pt}{3ex} \citet{jaskot13} & 0.1--0.2 & 6 SDSS Green-Pea galaxies & 
7.9--8.0 & \textsc{starburst}\small{99} & sSFR$\,\sim\,$70--200\,Gyr$^{-1}$ & Large \hbox{He\,\textsc{ii}\,$\lambda1640$}/\hbox{H$\beta$}\ ratio not reproducible by standard stellar \\ & & with $\hbox{O$_\textrm{32}$}>7$ and no AGN & & \textsc{cloudy}\ 10.00 & burst age$\,\sim\,$3--5\,Myr & population models; if arising from a contribution by shocks, the \\ & & spectral feature\tnote{\it c} & & (dust physics) & & associated large $\hbox{O$_\textrm{32}$}$ may indicate $\hbox{$N_{\mathrm{H}}$}<\hbox{$N_{\mathrm{H}}^{\mathrm{IB}}$}$ \tnote{\it b} \\ \rule{-2pt}{3ex} \citet{izotov16oct,izotov16jan} & 0.3--0.4 & {\it HST}/COS rest-900\,\AA\ detection & 7.8--8.0 & \textsc{starburst}\small{99} & sSFR$\,\sim\,$10--200\,Gyr$^{-1}$ & Compact SF galaxies with large \hbox{O$_\textrm{32}$}\ appear to pick up \\ & & of 5 compact galaxies with & & & burst age$\,<\,$4\,Myr & efficiently LyC leakers \\ & & $\hbox{O$_\textrm{32}$}>5$ and no AGN feature\tnote{\it c} & & & $\hbox{$f_{\rm{esc}}$}\sim6$--13\% & \\ \rule{-2pt}{3ex} \citet{izotov18mar,izotov18aug} & 0.3--0.4 & {\it HST}/COS rest-900\,\AA\ detection & 7.6--8.2 & \textsc{starburst}\small{99} & sSFR$\,\sim\,$3--1,000\,Gyr$^{-1}$ & General increase of \hbox{$f_{\rm{esc}}$}\ with increasing \hbox{O$_\textrm{32}$}\ (and decreasing \\ & & of 6 compact galaxies with & & & burst age$\,\sim\,$2--3\,Myr & velocity spread of Ly$\alpha$ double-peaked emission) \\ & & $\hbox{O$_\textrm{32}$}>8$ and no AGN feature\tnote{\it c} & & & $\hbox{$f_{\rm{esc}}$}\sim2$--70\% & \\ \rule{-2pt}{3ex} \citet{nanayakkara19} & 2.2 & MUSE SF galaxy 1273, also & $\sim8.3$ & \textsc{fast}\tnote{\it d} & sSFR$\,\sim\,$1.5\,Gyr$^{-1}$ & $\mathrm{EW(\hbox{O\,\textsc{iii}\,$\lambda\lambda4959,5007$})\sim1200}$\,\AA \\ & & candidate LyC leaker from & & & (potentially $\hbox{$f_{\rm{esc}}$}\sim60$\%) & \\ & &\citet[][GS 30668]{Naidu17} & & & & \\ \rule{-2pt}{3ex} \citet{nakajima16} & 3.0--3.7 & 13 candidate LyC-leaker Ly$\alpha$ & $\sim8.1$ & 
\textsc{starburst}\small{99} & sSFR$\,\sim\,$1--50\,Gyr$^{-1}$ & Ly$\alpha$ emitters have larger \hbox{O$_\textrm{32}$}\ than Lyman-break galaxies \\ & & emitters (including 1 AGN) & &\textsc{cloudy}\ 13.02 & (Ly$\alpha$ emitters have & at similar metallicity, suggesting that the ionized regions \\ & & and 2 Lyman-break galaxies & & (dust physics) & potentially $\hbox{$f_{\rm{esc}}$}>0$) & are, at least in part, density bounded \\ \rule{-2pt}{3ex} \citet{vanzella16} & 3.1 & Lensed compact Ly$\alpha$ emitter & <7.8 & & $\hbox{$f_{\rm{esc}}$}>0$ expected & $\hbox{O$_\textrm{32}$}>10$, small velocity spread of Ly$\alpha$ double-peaked emission \\ & & ID11 & & & & and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ emission suggest low \hbox{$N_{\mathrm{H}}$}\ and LyC leakage \\ & & & & & & \\ \rule{-2pt}{3ex} \citet{debarros16} & 3.2 & VLT/VIMOS rest-900\,\AA\ & $\sim8.1$ & \textsc{popstar} & sSFR$\,\sim\,$10\,Gyr$^{-1}$ & LyC leakage, Ly$\alpha$ emission, $\hbox{O$_\textrm{32}$}>10$ and weak \hbox{C\,\textsc{ii}\,$\lambda1335$}\ and \\ & & detection of candidate LyC & & \textsc{cloudy}\ 13.03 & $\log\hbox{C/O}\sim-0.8$ & \hbox{Si\,\textsc{ii}\,$\lambda1260$}\ absorption suggest density-bounded \hbox{H{\sc ii}}\ region; \\ & & leaker Ion2 & & (dust physics) & $\hbox{$f_{\rm{esc}}$}\sim64$\% & a faint AGN not excluded \\ \bottomrule \end{tabular} \label{tab:leakers} \begin{tablenotes} \item [{\it a}] sSFR typically quoted under the assumption $\hbox{$f_{\rm{esc}}$}=0$. $^{\it b}$ $\hbox{$N_{\mathrm{H}}^{\mathrm{IB}}$}$ is the H-column density required to produce an ionization-bounded nebula. $^{\it c}$ $\hbox{O$_\textrm{32}$}=\hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}$. $^{\it d}$ \citep{Kriek09}. \end{tablenotes} \end{threeparttable} \end{table} \end{landscape} } \subsection{LyC leakers}\label{obs:leakers} We are also interested in existing ultraviolet and optical observations of confirmed or potential LyC-leaking galaxies at any redshift. 
In Table~\ref{tab:leakers}, we list the main characteristics of 10 such samples, arranged as before in order of increasing redshift. These include 5 samples of confirmed LyC leakers, in which the rest-ultraviolet emission around 880--912\,\AA\ (i.e. just blueward of the Lyman limit) has been directly observed, typically with {\it FUSE} and \textit{HST}\ at low redshift and by means of deep optical spectroscopy at higher redshift \citep{leitet11,debarros16,izotov16oct,izotov16jan,chisholm17,izotov18mar,izotov18aug}. The reported fractions of escaping LyC photons span a wide range, $1\lesssim\hbox{$f_{\rm{esc}}$}\lesssim70$ per cent. As mentioned in Section~\ref{sec:fesc} above, this leakage is generally thought to occur either through holes carved into the neutral ISM by extreme galactic outflows or through density-bounded (i.e. optically thin to LyC photons) \hbox{H{\sc ii}}\ regions, which are expected to lead to weak low-ionization emission lines, a small velocity spread of the Ly$\alpha$ double-peaked emission and a large \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ ratio. The observations reported in Table~\ref{tab:leakers} seem to support the occurrence of enhanced \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ ratios in confirmed LyC leakers (along with, in some cases, weak low-ionization emission lines and a small velocity spread of the \hbox{Ly$\alpha$}\ double-peaked profile), which favours density-bounded \hbox{H{\sc ii}}\ regions as the main leakage mechanism. In Table~\ref{tab:leakers}, we also list 5 samples of candidate LyC leakers, selected on the basis of large observed \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ ratios, the presence of \hbox{Ly$\alpha$}\ emission or deep ultraviolet imaging \citep{jaskot13, nakajima16, vanzella16, izotov17oct,nanayakkara19}. 
Among these studies, \citet{izotov17oct} caution that the \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ ratio alone is not a reliable indicator of LyC leakage, as it depends also on other parameters, such as the ionization parameter, the hardness of ionizing radiation and metallicity. These authors propose an alternative spectral diagnostic of density-bounded \hbox{H{\sc ii}}\ regions, based on the \hbox{He\,\textsc{i}\,$\lambda3889$}, \hbox{He\,\textsc{i}\,$\lambda6678$}\ and \hbox{He\,\textsc{i}\,$\lambda7065$}\ lines. A comparison between Tables~\ref{tab:analogs} and \ref{tab:leakers} reveals that LyC leakers have, typically, gas-phase oxygen abundances similar to those of (presumably ionization-bounded) metal-poor star-forming galaxies, but specific star formation rates several times larger. The implied extreme radiation fields of these intensely star-forming galaxies likely contribute to the escape of ionizing photons, as suggested by the apparent trend of increasing \hbox{$f_{\rm{esc}}$}\ with increasing strength of stellar-wind features in the sample of \citet{izotov18aug}. \subsection{Other star-forming galaxies and AGN}\label{obs:normal} To complement our reference observational sample, we also include constraints on the ultraviolet and optical nebular emission of star-forming galaxies and AGN at various redshifts from more heterogeneous surveys. In the local Universe, we appeal to the samples of 21 low-metallicity starburst galaxies from \citet{giavalisco96}, 20 Wolf-Rayet galaxies from \citet[][see also \citealt{lopezsanchez10}]{lopezsanchez08} and 28 star-forming galaxies from \citet[][including \hbox{H{\sc ii}}, Seyfert-2 and LINER galaxies]{leitherer11}. These samples span wide ranges of metallicities, $7.2\lesssim\hbox{12 + log(\OH)}\lesssim9.2$. 
We also gather different samples of more distant galaxies: 9 lensed star-forming galaxies at redshifts $z=1.0$--3.5 with optical (3 with ultraviolet) emission-line measurements from \citet[][with complementary data from \citealt{patricio16}]{christensen12} and metallicities $7.6\lesssim\hbox{12 + log(\OH)}\lesssim8.9$; a composite spectrum of 30 star-forming galaxies with median redshift $z\approx2.4$ and metallicity $\hbox{12 + log(\OH)}\approx8.14$ from \citet{steidel16}; 20 Lyman-break galaxies at redshifts $z=3.0$--3.8 from \citet[][we retain 15 galaxies with H$\beta$ signal-to-noise ratio greater than 2]{schenker13}; and 24 Lyman-break galaxies at redshifts $z=3.2$--3.7 from \citet[][we retain 19 galaxies with H$\beta$ signal-to-noise ratio greater than 2]{holden16}. We note that the above samples span wide ranges of metallicities, stellar masses and specific star formation rates, which can overlap in part with those of the samples in Table~\ref{tab:analogs}. Finally, we include constraints on the ultraviolet and optical nebular emission of a sample of 12 nearby ($z\lesssim0.04$) Seyfert-2 galaxies, as well as 59 radio galaxies and 10 type-2 (i.e. obscured) quasars at redshifts $1\lesssim z\lesssim4$, from \citet[][with complementary data from \citealt{diaz88} and \citealt{kraemer94}]{dors14}, sampling metallicities in the range from roughly 0.1 to 1.0 times solar. We also report the emission-line properties measured by \citet{Mignoli19} in a composite spectrum of 92 type-2 AGN in massive galaxies at $1.45<z<3.05$ from the zCOSMOS-deep survey \citep{Lilly07}. \begin{figure*} \begin{center} \resizebox{0.95\hsize}{!}{\includegraphics{./figure6_light.png}} \end{center} \caption{Ultraviolet emission-line properties of the reference observational sample of star-forming galaxies and AGN described in Section~\ref{sec:obs}. 
Different symbols refer to different samples, as indicated at the bottom of Fig.~\ref{fig:obs_opt}, with blue-like colours corresponding to metal-poor star-forming galaxies, orange-like colours to LyC leakers, purple-like colours to other star-forming galaxies and grey to AGN. The diagrams show different combinations of equivalent widths and ratios of the \hbox{N\,\textsc{v}\,$\lambda1240$}, \hbox{C\,\textsc{iv}\,$\lambda1549$}, \hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{O\,\textsc{iii}]\,$\lambda1664$}\ and \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ nebular emission lines [and the gas-phase oxygen abundance in (a)]. In (c) and (e), the dashed lines show the criteria proposed by \citet{nakajima18} to separate AGN-dominated from star-forming galaxies. All line fluxes are corrected for attenuation by dust, as prescribed in the original studies. Arrows show 1$\sigma$ upper limits. See description in Section~\ref{sec:obsprop}.} \label{fig:obs_uv} \end{figure*} \begin{figure*} \begin{center} \resizebox{0.95\hsize}{!}{\includegraphics{./figure7_light.png}} \end{center} \caption{Optical emission-line properties of the reference observational sample of star-forming galaxies and AGN described in Section~\ref{sec:obs}. Different symbols refer to different samples, as indicated at the bottom of the figure, with blue-like colours corresponding to metal-poor star-forming galaxies, orange-like colours to LyC leakers, purple-like colours to other star-forming galaxies and grey to AGN. The diagrams show different ratios of the \hbox{[O\,\textsc{ii}]$\lambda3727$}, \hbox{He\,\textsc{i}\,$\lambda3889$}, \hbox{He\,\textsc{ii}$\lambda4686$}, \hbox{H$\beta$}, \hbox{[O\,\textsc{iii}]$\lambda5007$}, \hbox{H$\alpha$}, \hbox{[N\,\textsc{ii}]$\lambda6584$}, \hbox{He\,\textsc{i}\,$\lambda6678$}\ and \hbox{He\,\textsc{i}\,$\lambda7065$}\ nebular emission lines (and the \hbox{H$\beta$}\ equivalent width). 
In (a), the dotted and dashed lines show the criteria of \citet{Kewley01} and \citet{Kauffmann03}, respectively, to separate AGN-dominated from star-forming galaxies. All line fluxes are corrected for attenuation by dust, as prescribed in the original studies. See description in Section~\ref{sec:obsprop}.} \label{fig:obs_opt} \end{figure*} \subsection{Global observational properties of the full sample}\label{sec:obsprop} We plot in Fig.~\ref{fig:obs_uv} different ultraviolet properties of the full reference sample of metal-poor star-forming galaxies (blue-like colours; see coding at the bottom of Fig.~\ref{fig:obs_opt}), LyC leakers (orange-like colours) and other star-forming galaxies (purple-like colours) and AGN (grey). After exclusion of a few galaxies with incomplete data, the final sample includes 68 metal-poor star-forming galaxies, 16 confirmed and 23 candidate LyC leakers, 75 other star-forming galaxies and 73 AGN. The diagrams in Fig.~\ref{fig:obs_uv} include several combinations of equivalent widths and ratios of the \hbox{N\,\textsc{v}\,$\lambda1240$}\ (hereafter simply \hbox{N\,\textsc{v}}), \hbox{C\,\textsc{iv}\,$\lambda\lambda1548,1551$}\ (hereafter simply \hbox{C\,\textsc{iv}}), \hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{O\,\textsc{iii}]$\lambda\lambda1661,1666$}\ (hereafter simply \hbox{O\,\textsc{iii}]}) and \hbox{[C\,\textsc{iii}]$\lambda1907$+C\,\textsc{iii}]$\lambda1909$}\ (hereafter simply \hbox{C\,\textsc{iii}]}) nebular emission lines (and the gas-phase oxygen abundance in Fig.~\ref{fig:obs_uv}a). All but one diagram (Fig.~\ref{fig:obs_uv}g) involve the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ line, as a main goal of our analysis in the next section will be to assess the specific influence of a comprehensive set of model parameters on the predicted intensity of this line relative to others. 
In Fig.~\ref{fig:obs_opt}, we show the corresponding optical properties of this reference sample, through different ratios involving the \hbox{[O\,{\sc ii}]$\lambda\lambda3726,3729$}\ (hereafter simply \hbox{[O\,\textsc{ii}]}), \hbox{He\,\textsc{i}\,$\lambda3889$}, \hbox{He\,\textsc{ii}$\lambda4686$}, \hbox{H$\beta$}, \hbox{[O\,\textsc{iii}]$\lambda5007$}\ (hereafter simply \hbox{[O\,\textsc{iii}]}), \hbox{H$\alpha$}, \hbox{[N\,\textsc{ii}]$\lambda6584$}\ (hereafter simply \hbox{[N\,\textsc{ii}]}), \hbox{He\,\textsc{i}\,$\lambda6678$}\ and \hbox{He\,\textsc{i}\,$\lambda7065$}\ nebular emission lines (and the \hbox{H$\beta$}\ equivalent width). All line fluxes in Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt} are corrected for attenuation by dust, as prescribed in the original studies. We note that the line ratios in these figures are subject to uncertainties linked to the different apertures used to observe different galaxy samples, as high- and low-ionization lines are not necessarily co-spatial \citep[see, e.g.,][]{kehrig18}. The fact that different symbols can populate different diagrams in Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt} illustrates how the many spectral properties we consider are not always available homogeneously for all galaxy samples. This highlights, by itself, the value of the reference sample we have assembled, which allows one to grasp at once the broad ultraviolet and optical properties of the closest known analogues to primeval galaxies, LyC leakers and other star-forming galaxies and AGN. In Fig.~\ref{fig:obs_uv}, over three quarters of all galaxies with ultraviolet data have \hbox{C\,\textsc{iii}]}\ measurements, sometimes by requirement \citep{amorin17,nakajima18}. The line is unavailable for the highest-redshift galaxies, in part because of the limitations affecting ground-based infrared spectroscopy \citep{schmidt17,stark15,laporte17}. Many galaxies with \hbox{C\,\textsc{iii}]}\ measurements also have \hbox{O\,\textsc{iii}]}\ ones. 
Over half of all galaxies have \hbox{He\,\textsc{ii}\,$\lambda1640$}\ and/or \hbox{C\,\textsc{iv}}\ measurements, the availability of one line relative to the other being independent of redshift. Finally, in Fig.~\ref{fig:obs_uv}d, \hbox{N\,\textsc{v}}\ is available for only a few galaxies (a young star-forming galaxy with extended \hbox{Ly$\alpha$}\ halo from the \citealt{christensen12} sample, observed by \citealt{patricio16}, which also appears in Figs~\ref{fig:obs_uv}a,b,d and f; the \citealt{laporte17} galaxy; and samples A and B of \citealt{nakajima18}). The dashed lines in Figs~\ref{fig:obs_uv}c and \ref{fig:obs_uv}e show the criteria proposed by \citet{nakajima18} to separate AGN-dominated from star-forming galaxies, based on the \hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{C\,\textsc{iii}]}\ and \hbox{C\,\textsc{iv}}\ emission lines. Aside from the \citet{dors14} AGN sample, only a few galaxies with strong \hbox{C\,\textsc{iv}}\ emission lie above the \citet{nakajima18} AGN criterion: a few galaxies from the \citet{amorin17} sample (although within 2$\sigma$ of the criterion); the \citet{stark15} lensed \hbox{Ly$\alpha$}\ galaxy, which could be powered by an AGN (Table~\ref{tab:analogs}); the \citet{schmidt17} lensed \hbox{Ly$\alpha$}\ galaxy; in one diagram only (Fig.~\ref{fig:obs_uv}e), the most extreme star-forming galaxy SB~111 in the \citet{senchyna17} sample (with an upper limit on \hbox{C\,\textsc{iii}]}\ emission, accounted for in the horizontal error bar); and, again in one diagram only (Fig.~\ref{fig:obs_uv}c), the \citet{vanzella17} lensed double-super star cluster. We note that \citet{schmidt17}, \citet{senchyna17} and \citet{vanzella17} find these last three galaxies to be more likely powered by star formation than by an AGN, based on the \citet{gutkin2016} and \citet{Feltre2016} photoionization models. 
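As noted above, all line fluxes in Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt} are corrected for attenuation by dust as prescribed in the original studies. Those prescriptions differ from paper to paper; purely as an illustrative sketch (not the prescription of any of the cited works), a common generic approach uses the Balmer decrement with an assumed intrinsic Case-B H$\alpha$/H$\beta$ ratio of 2.86 and Calzetti-type attenuation-curve values $k(\mathrm{H}\alpha)\approx2.53$, $k(\mathrm{H}\beta)\approx3.61$:

```python
import math

def balmer_decrement_correction(f_obs, k_lambda, f_halpha, f_hbeta,
                                intrinsic_ratio=2.86,
                                k_halpha=2.53, k_hbeta=3.61):
    """Correct an observed line flux for dust via the Balmer decrement.

    Assumptions (for illustration only): Case-B intrinsic
    Halpha/Hbeta = 2.86 and Calzetti-style curve values
    k(Halpha) = 2.53, k(Hbeta) = 3.61. The surveys compiled in the
    text each apply their own published prescription.
    """
    ratio_obs = f_halpha / f_hbeta
    # Colour excess from observed vs intrinsic Balmer decrement
    ebv = 2.5 / (k_hbeta - k_halpha) * math.log10(ratio_obs / intrinsic_ratio)
    ebv = max(ebv, 0.0)  # decrement below 2.86 -> no correction applied
    return f_obs * 10.0 ** (0.4 * k_lambda * ebv)
```

For an observed decrement equal to the intrinsic value the correction factor is unity, and it grows with the observed H$\alpha$/H$\beta$ ratio, as expected.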
A particularly notable feature of Fig.~\ref{fig:obs_uv} is the trend of increasing \hbox{He\,\textsc{ii}\,$\lambda1640$}\ equivalent width with decreasing gas-phase oxygen abundance (Fig.~\ref{fig:obs_uv}a). High \hbox{He\,\textsc{ii}\,$\lambda1640$}\ equivalent widths ($\gtrsim1\,$\AA) correspond typically to galaxies with low \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ ratios ($\lesssim10$; Fig.~\ref{fig:obs_uv}b) and potentially \hbox{N\,\textsc{v}}\ emission (according to Fig.~\ref{fig:obs_uv}d). At low EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}), we find more metal-rich galaxies, with generally lower-ionization gas (e.g., larger \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ and lower \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ ratios). This is the case for the few `normal' star-forming galaxies of our sample with enough ultraviolet data to appear in at least some diagrams of Fig.~\ref{fig:obs_uv} (e.g., sample C of \citealt{nakajima18}; one of the \citealt{christensen12} galaxies, appearing only in Fig.~\ref{fig:obs_uv}h). Remarkably, galaxies in wide ranges of redshift populate similar regions of the diagrams in Fig.~\ref{fig:obs_uv}, which confirms that modelling low-redshift, metal-poor star-forming galaxies is a useful step toward understanding the physical properties of reionization-era galaxies. A potential shortcoming of such studies is the occurrence at redshifts $z\gtrsim6$ of \hbox{C\,\textsc{iv}}\ equivalent widths well in excess of those found in the nearby Universe (Fig.~\ref{fig:obs_uv}c), which could be attributable to enhanced $\alpha$/Fe abundance ratios in high-redshift galaxies \citep{senchyna19}. 
As expected from the comparison between Tables~\ref{tab:analogs} and \ref{tab:leakers} (Section~\ref{obs:leakers}), the handful of LyC leakers with available ultraviolet data in our sample tend to overlap with the most extreme star-forming galaxies in Fig.~\ref{fig:obs_uv} (but notice the low \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ ratio of the \citealt{debarros16} galaxy, in which the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission could also be dominated by winds of WR stars; see \citealt{Vanzella19}). This is even more apparent in Fig.~\ref{fig:obs_opt}, as more optical than ultraviolet data are available for LyC leakers and quiescent star-forming galaxies in our sample. In Figs~\ref{fig:obs_opt}a and \ref{fig:obs_opt}b, for example, all but a few LyC leakers are concentrated -- on top of the general galaxy population -- in the high-ionization parts (low \hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$}\ and \hbox{[O\,\textsc{ii}]}/\hbox{[O\,\textsc{iii}]}) of the standard BPT diagrams defined by the \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}, \hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$}\ and $\hbox{[O\,\textsc{ii}]}/\hbox{[O\,\textsc{iii}]}$ ratios. In Figs~\ref{fig:obs_opt}c, \ref{fig:obs_opt}e and \ref{fig:obs_opt}f, they are concentrated in the regions of highest \hbox{H$\beta$}\ equivalent width, highest \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio and lowest gas-phase oxygen abundance. The exceptions are a few weak ($\hbox{$f_{\rm{esc}}$}\lesssim\;$a few per cent) nearby leakers from \citet{leitet11} and \citet[][Fig.~\ref{fig:obs_opt}c]{chisholm17}. The dotted and dashed lines in Fig.~\ref{fig:obs_opt}a show the criteria of \citet{Kewley01} and \citet{Kauffmann03}, respectively, to separate AGN-dominated from star-forming galaxies. Only the \citet{dors14} AGN lie above these lines. 
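The \citet{Kewley01} and \citet{Kauffmann03} demarcations used in Fig.~\ref{fig:obs_opt}a have simple closed forms in the $\log(\hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$})$--$\log(\hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$})$ plane. A sketch of a BPT classifier using the commonly quoted published coefficients (the exact curves should be checked against the original papers before scientific use):

```python
import math

def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a point in the [N II]/Halpha vs [O III]/Hbeta BPT plane.

    Uses the commonly quoted forms of the Kewley et al. (2001)
    maximum-starburst curve and the Kauffmann et al. (2003)
    empirical star-forming boundary.
    """
    def kewley(x):
        # log([O III]/Hbeta) = 0.61/(x - 0.47) + 1.19, valid for x < 0.47;
        # beyond the asymptote every point lies on the AGN side
        return 0.61 / (x - 0.47) + 1.19 if x < 0.47 else -math.inf

    def kauffmann(x):
        # log([O III]/Hbeta) = 0.61/(x - 0.05) + 1.30, valid for x < 0.05
        return 0.61 / (x - 0.05) + 1.30 if x < 0.05 else -math.inf

    if log_oiii_hb > kewley(log_nii_ha):
        return "AGN"
    if log_oiii_hb > kauffmann(log_nii_ha):
        return "composite"
    return "star-forming"
```

Points below the Kauffmann curve are star-forming, points above the Kewley curve are AGN-dominated, and the region between the two curves hosts composite objects.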
Finally, Fig.~\ref{fig:obs_opt}d shows the diagram proposed by \citet{izotov17oct} to diagnose density-bounded \hbox{H{\sc ii}}\ regions, based on the \hbox{He\,\textsc{i}\,$\lambda3889$}, \hbox{He\,\textsc{i}\,$\lambda6678$}\ and \hbox{He\,\textsc{i}\,$\lambda7065$}\ lines (Section~\ref{obs:leakers}). \section{Emission-line signatures of galaxy physical parameters}\label{sec:params} In this section, we use the models introduced in Section~\ref{sec:models} to investigate the ultraviolet and optical emission-line signatures of a wide range of physical parameters of metal-poor star-forming galaxies in the reference observational diagrams assembled in Section~\ref{sec:obs}. We consider physical parameters pertaining to the interstellar gas, stellar populations, LyC-photon leakage, AGN narrow-line regions and radiative shocks. As described in Section~\ref{sec:models}, a main attribute of our approach is the adoption of a common parametrization of nebular-gas abundances in all calculations, allowing direct comparisons between models powered by different sources. To explore the observable signatures of any specific physical parameter in the emission-line diagrams of Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt}, it is convenient to examine the offsets implied by changes in this parameter with respect to a `standard' model with stellar and interstellar properties typical of those expected for young, metal-poor star-forming galaxies. 
Referring to Table~\ref{tab:analogs}, we take this model to correspond to an ionization-bounded galaxy with constant star formation rate [$\psi(t)=\mathrm{constant}$] and the following parameters (see Section~\ref{sec:ionb}): \begin{itemize} \item[] $\hbox{$f_{\rm{esc}}$}=0$\;; \item[] $\hbox{$n_{\mathrm{H}}$}=10^2\,{\rm cm}^{-3}$\;; \item[] $Z=0.002$\;; \item[] $\hbox{C/O}=0.38\hbox{(C/O)$_\odot$}\approx0.17$\;; \item[] $\hbox{$\xi_{\rm{d}}$}=0.3$\;; \item[] $\log\hbox{$\langle U\rangle$}=-2$\;; \item[] $\hbox{$m_{\rm{up}}$}=300\,\hbox{M$_{\rm{\odot}}$}$\;; \item[] $t=3\,$Myr. \end{itemize} The gas-phase oxygen abundance corresponding to these choices of $Z$ and \hbox{$\xi_{\rm{d}}$}\ is $\hbox{12 + log(\OH)$_\mathrm{gas}$}\approx7.83$ \citep[see table~1 of][]{gutkin2016}. In this standard model, we do not include interstellar-line absorption in the \hbox{H{\sc ii}}\ interiors and \hbox{H{\sc i}}\ envelopes of stellar birth clouds, nor any contribution by an AGN or radiative shocks to nebular emission. We compute the emission-line properties of the model using the {\small C\&B}\ stellar population synthesis code. Figs~\ref{fig:baton3z_uv} and \ref{fig:baton3z_opt} show the ultraviolet and optical emission-line properties of the standard model (black circle) in the same diagrams as in Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt}, where all the observations have been greyed for clarity. Also shown are a more metal-poor model with $Z=0.0005$ (black upside-down triangle) and a more metal-rich one with $Z=0.008$ (black square), with all other parameters fixed. Figs~\ref{fig:baton3u_uv} and \ref{fig:baton3u_opt} show the standard model again, along with a lower-ionization model with $\log\hbox{$\langle U\rangle$}=-3$ (small black circle) and a higher-ionization one with $\log\hbox{$\langle U\rangle$}=-1$ (large black circle), at fixed other parameters. 
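The standard model's $\log\hbox{$\langle U\rangle$}=-2$ ties together the ionizing-photon rate $Q$, the gas density \hbox{$n_{\mathrm{H}}$}\ and the gas filling factor $\epsilon$. A sketch of a \citet{gutkin2016}-style volume-averaged ionization parameter for a Str\"omgren sphere is shown below; the prefactors and the adopted Case-B recombination coefficient are assumptions here and should be checked against the paper's definition of \hbox{$\langle U\rangle$}:

```python
import math

ALPHA_B = 2.59e-13  # assumed case-B recombination coefficient at 10^4 K [cm^3 s^-1]
C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def mean_ionization_parameter(q_ion, n_h, eps):
    """Volume-averaged ionization parameter of a filled Stromgren sphere.

    Sketch of a Gutkin et al. (2016)-style definition,
        <U> = (3 alpha_B^(2/3) / 4c) * (3 Q eps^2 n_H / 4 pi)^(1/3),
    for ionizing-photon rate Q [s^-1], H density n_H [cm^-3] and gas
    filling factor eps (dimensionless result).
    """
    prefactor = 3.0 * ALPHA_B ** (2.0 / 3.0) / (4.0 * C_LIGHT)
    return prefactor * (3.0 * q_ion * eps ** 2 * n_h / (4.0 * math.pi)) ** (1.0 / 3.0)
```

With this scaling, raising $\epsilon$ at fixed $Q$ and \hbox{$n_{\mathrm{H}}$}\ raises \hbox{$\langle U\rangle$}, which is how the $\log\hbox{$\langle U\rangle$}=-3$, $-2$, $-1$ sequence of models above can be generated.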
Overall, Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} indicate that these five models sample reasonably well the observed ultraviolet and optical emission-line properties of metal-poor, actively star-forming galaxies [i.e., with $\hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$}\lesssim0.1$ and EW(\hbox{H$\beta$})$\,\gtrsim200\,$\AA\ in Figs~\ref{fig:baton3z_opt} and \ref{fig:baton3u_opt}], except in diagrams involving the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ and \hbox{He\,\textsc{ii}$\lambda4686$}\ lines. In such diagrams, the data for low oxygen abundances [$\hbox{12 + log(\OH)$_\mathrm{gas}$}\lesssim8.0$; see Figs~\ref{fig:baton3z_uv}a, \ref{fig:baton3z_opt}f, \ref{fig:baton3u_uv}a and \ref{fig:baton3u_opt}f] tend to exhibit much stronger \hbox{He\,\textsc{ii}}\ emission than predicted by models with classical stellar populations, as pointed out in several previous studies (Section~\ref{sec:intro}). The five benchmark models also do not quite reach the highest observed equivalent widths of \hbox{C\,\textsc{iv}}\ ($\gtrsim10\,$\AA; Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c) and \hbox{C\,\textsc{iii}]}\ ($\gtrsim20\,$\AA; e.g., Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3u_uv}g), nor the highest \hbox{C\,\textsc{iii}]}/\hbox{O\,\textsc{iii}]}\ ratios ($\gtrsim3$; Figs~\ref{fig:baton3z_uv}f and \ref{fig:baton3u_uv}f). We now examine the observable signatures of each adjustable parameter of the models in the emission-line diagrams of Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}. We describe the effects of altering a single parameter at a time, keeping all other parameters fixed: \begin{figure*} \begin{center} \resizebox{0.95\hsize}{!}{\includegraphics{./figure8_light.png}} \end{center} \caption{Same diagrams as in Fig.~\ref{fig:obs_uv}, but where the observations have been greyed for clarity. 
The black circle corresponds to the `standard' model (with metallicity $Z=0.002$ and $\log\hbox{$\langle U\rangle$}=-2$) described in Section~\ref{sec:params}, while the black upside-down triangle and black square are benchmark models with the same parameters, but with $Z=0.0005$ and 0.008, respectively. Segments of different colours show the effect of altering a single parameter at a time (as summarized at the bottom of Fig.~\ref{fig:baton3z_opt}): rise in \hbox{C/O}\ ratio from 0.17 to $\hbox{(C/O)$_\odot$}=0.44$ (blue); drop in dust-to-metal mass ratio from $\hbox{$\xi_{\rm{d}}$}=0.3$ to 0.1 (dark green); rise in \hbox{$n_{\mathrm{H}}$}\ from $10^2$ to $10^3\,$cm$^{-3}$ (yellow); inclusion of interstellar-line absorption in the \hbox{H{\sc ii}}\ interiors and outer \hbox{H{\sc i}}\ envelopes of stellar birth clouds (light green); increase in stellar population age from 3 to 10\,Myr (brown); rise in \hbox{$m_{\rm{up}}$}\ from 100, to 300, to 600\,\hbox{M$_{\rm{\odot}}$}\ (dark purple); adopting the \textsc{bpass}\ single- (light purple) and binary-star (magenta) models in place of the {\small C\&B}\ model (\textsc{bpass}\ models are not available for $Z=0.0005$); drop in the optical depth \hbox{$\tau_{570}$}\ from $+1.0$ to $-1.0$ (light blue); inclusion of an AGN component contributing from 0 to 99 per cent of the total \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission (orange); and inclusion of a radiative-shock component contributing 90 per cent of the total \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission [red symbols, with shape corresponding to the metallicity of the associated benchmark model, and darkness to the shock velocity, from $10^2\,$km\,s$^{-1}$ (light) to $10^3\,$km\,s$^{-1}$ (dark)].} \label{fig:baton3z_uv} \end{figure*} \begin{figure*} \begin{center} \resizebox{0.95\hsize}{!}{\includegraphics{./figure9_light.png}} \end{center} \caption{Same diagrams as in Fig.~\ref{fig:obs_opt}, but where the observations have been greyed for clarity. 
The models are the same as in Fig.~\ref{fig:baton3z_uv}.} \label{fig:baton3z_opt} \end{figure*} \begin{figure*} \begin{center} \resizebox{0.95\hsize}{!}{\includegraphics{./figure10_light.png}} \end{center} \caption{Same as Fig.~\ref{fig:baton3z_uv}, but for models with the metallicity $Z=0.002$ only, and for three values of the zero-age volume-averaged ionization parameter, $\log\hbox{$\langle U\rangle$}=-3$, $-2$ and $-1$ (in order of increasing symbol size).} \label{fig:baton3u_uv} \end{figure*} \begin{figure*} \begin{center} \resizebox{0.95\hsize}{!}{\includegraphics{./figure11_light.png}} \end{center} \caption{Same as Fig.~\ref{fig:baton3z_opt}, but for the same models as in Fig.~\ref{fig:baton3u_uv}.} \label{fig:baton3u_opt} \end{figure*} \begin{itemize} \item[] \textit{Metallicity, $Z$}. Increasing metallicity from $Z=0.0005$ (black upside-down triangle), to 0.002 (circle), to 0.008 (square) in Figs~\ref{fig:baton3z_uv} and \ref{fig:baton3z_opt} raises cooling through collisionally-excited metal transitions, causing the electron temperature, \hbox{$T_{\rm{e}}$}, to drop. The luminosity ratios of metal-to-H and He lines tend to increase at first, and then drop when \hbox{$T_{\rm{e}}$}\ is low enough for cooling to shift from ultraviolet and optical to infrared transitions \citep[e.g.][]{Spitzer78}. This is particularly visible for \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ [Fig.~\ref{fig:baton3z_uv}b; note also the behaviour of EW(\hbox{C\,\textsc{iii}]}) in Fig.~\ref{fig:baton3z_uv}g], while for \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ (Fig.~\ref{fig:baton3z_opt}a) the maximum is reached around $Z\approx0.006$ \citep[see fig.~2 of][]{gutkin2016}. In contrast, \hbox{N\,\textsc{v}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3z_uv}d) and \hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$}\ (Fig.~\ref{fig:baton3z_opt}a) keep rising as $Z$ increases, because of the inclusion of secondary nitrogen production in our model. 
Meanwhile, \hbox{He\,\textsc{i}\,$\lambda3889$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ and \hbox{He\,\textsc{i}\,$\lambda7065$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ both decrease as $Z$ rises and \hbox{$T_{\rm{e}}$}\ declines \citep{izotov17oct}. At fixed ionization parameter, a rise in $Z$ also implies lower ratios of high- to low-ionization lines, such as smaller \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ [Fig.~\ref{fig:baton3z_uv}c; note also the behaviour of EW(\hbox{C\,\textsc{iv}}) in Fig.~\ref{fig:baton3z_uv}c] and larger \hbox{C\,\textsc{iii}]}/\hbox{O\,\textsc{iii}]}\ (Fig.~\ref{fig:baton3z_uv}f) and \hbox{[O\,\textsc{ii}]}/\hbox{[O\,\textsc{iii}]}\ (Fig.~\ref{fig:baton3z_opt}b), because the inner high-ionization parts of \hbox{H{\sc ii}}\ regions shrink (e.g., Fig.~\ref{fig:struction_carbon} above; see also \citealt{stasinska80}). The behaviour of the recombination-line ratio \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ in Fig.~\ref{fig:baton3z_opt}e is linked to the evolution of very massive stars ($>100\,\hbox{M$_{\rm{\odot}}$}$) in the models. As $Z$ increases, the competing effects of the weakening of ionizing radiation caused by the lower effective temperatures of metal-rich relative to metal-poor stars \citep[e.g., fig.~15 of][]{Bressan2012} and the hardening of ionizing radiation caused by an increase in mass-loss rate \citep[e.g.,][]{Vink2001,Crowther06} conspire in making \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ larger at $Z=0.002$ than at 0.0005 and 0.008 (see also Fig.~\ref{fig:HeII_ssp_age} of Section~\ref{sec:constraints}; we note that this is not the case for $\hbox{$m_{\rm{up}}$}=100\,$\hbox{M$_{\rm{\odot}}$}; see below). The rise in EW(\hbox{H$\beta$}) as $Z$ increases in this diagram results from the slower evolution of metal-rich relative to metal-poor stars, delaying the appearance of evolved stars with strong optical continua. 
\item[] \textit{Zero-age volume-averaged ionization parameter, \hbox{$\langle U\rangle$}}. Increasing $\log\hbox{$\langle U\rangle$}$ from $-3$ (small black circle), to $-2$ (medium-size circle), to $-1$ (large circle) in Figs~\ref{fig:baton3u_uv} and \ref{fig:baton3u_opt}, which can be achieved in our model by raising the gas-filling factor $\epsilon$ at fixed ionizing-photon rate $Q$ and H-density \hbox{$n_{\mathrm{H}}$}\ (equation~\ref{eq:Udef}), increases the probability of multiple ionization. This causes \hbox{N\,\textsc{v}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3u_uv}d), \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ (Fig.~\ref{fig:baton3u_uv}e), \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ (Fig.~\ref{fig:baton3u_opt}a), \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ (inverse abscissa of Fig.~\ref{fig:baton3u_opt}b) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (Fig.~\ref{fig:baton3u_opt}f) to rise. The equivalent widths of \hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3u_uv}a), \hbox{C\,\textsc{iv}}\ (Fig.~\ref{fig:baton3u_uv}c) and \hbox{C\,\textsc{iii}]}\ (Fig.~\ref{fig:baton3u_uv}h) also increase. Instead, \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3u_uv}b), \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3u_uv}c) and \hbox{O\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (inverse abscissa of Fig.~\ref{fig:baton3u_uv}h) first increase, and then decrease when \hbox{$\langle U\rangle$}\ rises. This is because while the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ luminosity continues to rise, the rise of \hbox{C\,\textsc{iii}]}, \hbox{C\,\textsc{iv}}\ and \hbox{O\,\textsc{iii}]}\ is slowed down by the conversion of C$^+$ into C$^{2+}$, C$^{2+}$ into C$^{3+}$ and O$^{2+}$ into O$^{3+}$. Similarly, \hbox{[N\,\textsc{ii}]$\lambda6584$}/\hbox{H$\alpha$}\ drops (Fig.~\ref{fig:baton3u_opt}a) because of the conversion of N$^+$ into N$^{2+}$. 
Increasing \hbox{$\langle U\rangle$}\ at fixed other parameters also makes the H-column density, and hence the dust optical depth, larger (Section~\ref{sec:fesc}). The enhanced absorption of ionizing photons by dust is the reason for the drop in EW(\hbox{H$\beta$}) in Fig.~\ref{fig:baton3u_opt}e. Since the grain opacity peaks near 912\,\AA\ and declines toward shorter wavelengths \citep[e.g.,][]{Bottorff98}, \hbox{He\,\textsc{ii}}-ionizing photons are less absorbed than H-ionizing ones, an effect amplified by the fact that the \hbox{H$\beta$}\ line is produced further out in \hbox{H{\sc ii}}\ regions than the \hbox{He\,\textsc{ii}$\lambda4686$}\ line. This effect contributes to the rise in \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ from $\log\hbox{$\langle U\rangle$}=-3$ to $-1$ in this diagram \citep[see also][]{erb10}. Also, the associated increase in \hbox{He\,\textsc{i}}-column density amplifies the effects of fluorescence, causing \hbox{He\,\textsc{i}\,$\lambda3889$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ to drop and \hbox{He\,\textsc{i}\,$\lambda7065$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ to rise \citep[Fig.~\ref{fig:baton3u_opt}d; see][]{izotov98}. \item[] \textit{Carbon-to-oxygen abundance ratio}. Increasing the \hbox{C/O}\ ratio from 0.17 to $\hbox{(C/O)$_\odot$}=0.44$ (blue segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}) is achieved in our model by raising the carbon abundance and lowering the abundances of all other metallic elements -- including oxygen -- at fixed total metallicity $Z$ (see section~2.3.1 of \citealt{gutkin2016}). 
This makes the \hbox{C\,\textsc{iii}]\,$\lambda1908$}\ and \hbox{C\,\textsc{iv}\,$\lambda1549$}\ lines stronger (Figs~\ref{fig:baton3z_uv} and \ref{fig:baton3u_uv}) and the \hbox{[O\,\textsc{ii}]}, \hbox{[O\,\textsc{iii}]}\ and \hbox{[N\,\textsc{ii}]}\ lines slightly weaker (Figs~\ref{fig:baton3z_opt} and \ref{fig:baton3u_opt}), while the H and He lines are negligibly affected (Figs~\ref{fig:baton3z_opt}d--f and \ref{fig:baton3u_opt}d--f). \item[] \textit{Dust-to-metal mass ratio, \hbox{$\xi_{\rm{d}}$}}. Lowering the dust-to-metal mass ratio from $\hbox{$\xi_{\rm{d}}$}=0.3$ to 0.1 (dark-green segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}) increases the abundance of coolants in the gas phase. This causes line luminosities from the most abundant refractory coolants (such as O and C) to rise, while at the same time, the drop in electron temperature reduces cooling through ultraviolet and optical transitions, an effect which becomes dominant at high metallicity. Hence, EW(\hbox{C\,\textsc{iii}]}) (Fig.~\ref{fig:baton3z_uv}g), EW(\hbox{C\,\textsc{iv}}) (Fig.~\ref{fig:baton3z_uv}c), \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3z_uv}b) and \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ (Fig.~\ref{fig:baton3z_opt}a) rise at low $Z$ but drop at high $Z$ when \hbox{$\xi_{\rm{d}}$}\ declines from 0.3 to 0.1. In Fig.~\ref{fig:baton3z_uv}f, \hbox{C\,\textsc{iii}]}/\hbox{O\,\textsc{iii}]}\ increases because carbon is more depleted than oxygen (see, e.g., table~1 of \citealt{gutkin2016}). Lowering \hbox{$\xi_{\rm{d}}$}\ also makes the dust optical depth smaller, causing \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ to drop and EW(\hbox{H$\beta$}) to rise in Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e (see discussion of \hbox{$\langle U\rangle$}\ above). 
We note that the dust optical depth is not simply proportional to the product $\hbox{$\xi_{\rm{d}}$}{Z}$, as absorption of ionizing photons by dust when $Z$ increases also reduces the \hbox{H{\sc ii}}-region radius, and hence \hbox{$N_{\mathrm{H}}$}\ (equation~\ref{eq:taudust} and Fig.~\ref{fig:NH0}). \item[] \textit{Hydrogen gas density, \hbox{$n_{\mathrm{H}}$}}. Increasing \hbox{$n_{\mathrm{H}}$}\ from $10^2$ to $10^3\,$cm$^{-3}$ (yellow segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}) enhances collisional excitation, but also favours collisional over radiative de-excitation of excited species. The cooling through infrared transitions is reduced and that through ultraviolet and optical transitions enhanced, because the critical density for collisional de-excitation is lower for infrared fine-structure than for ultraviolet and optical transitions. The effect is most visible at high metallicity, where infrared transitions tend to dominate the cooling \citep[e.g.,][]{oey93}. Thus, EW(\hbox{C\,\textsc{iii}]}) (Fig.~\ref{fig:baton3z_uv}g), \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3z_uv}b), EW(\hbox{C\,\textsc{iv}}) and \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Fig.~\ref{fig:baton3z_uv}c), \hbox{O\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (inverse abscissa of Fig.~\ref{fig:baton3z_uv}h) and \hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$}\ and \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ (Fig.~\ref{fig:baton3z_opt}a) rise together with \hbox{$n_{\mathrm{H}}$}. Since increasing \hbox{$n_{\mathrm{H}}$}\ at fixed other parameters in our model implies reducing the gas-filling factor as $\epsilon\propto1/\sqrt{\hbox{$n_{\mathrm{H}}$}}$ (equation~\ref{eq:Udef}), the dust optical depth rises as $\sqrt{\hbox{$n_{\mathrm{H}}$}}$ (equation~\ref{eq:taudust}), although this effect is subtle in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}. 
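The $\epsilon\propto1/\sqrt{\hbox{$n_{\mathrm{H}}$}}$ scaling can be made explicit: assuming the form of ionization parameter adopted by \citet{gutkin2016}, $\hbox{$\langle U\rangle$}\propto(Q\,\epsilon^{2}\,\hbox{$n_{\mathrm{H}}$})^{1/3}$, keeping \hbox{$\langle U\rangle$}\ and $Q$ fixed while raising \hbox{$n_{\mathrm{H}}$}\ requires
\begin{equation*}
\epsilon^{2}\,\hbox{$n_{\mathrm{H}}$} = \mathrm{constant}\,, \qquad \mathrm{i.e.}\qquad \epsilon\propto\hbox{$n_{\mathrm{H}}$}^{-1/2}\,,
\end{equation*}
consistent with the $\sqrt{\hbox{$n_{\mathrm{H}}$}}$ rise of the dust optical depth implied by equation~\ref{eq:taudust}.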
In contrast, the \hbox{He\,\textsc{i}}\ lines (Figs~\ref{fig:baton3z_opt}d and \ref{fig:baton3u_opt}d) are quite sensitive to changes in \hbox{$n_{\mathrm{H}}$}. This is because the $\lambda$7065 transition is much more responsive to collisional enhancement than the $\lambda$6678 transition, itself more so than the $\lambda$3889 transition \citep{izotov17oct}, causing \hbox{He\,\textsc{i}\,$\lambda3889$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ to drop and \hbox{He\,\textsc{i}\,$\lambda7065$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ to rise significantly as \hbox{$n_{\mathrm{H}}$}\ increases. \item[] \textit{Interstellar-line absorption}. The light-green segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} show the effect of accounting for interstellar-line absorption in the \hbox{H{\sc ii}}\ interiors and outer \hbox{H{\sc i}}\ envelopes of stellar birth clouds, following the prescription of \citet[][see Section~\ref{sec:ionb} above]{vidal17}. As expected, the effect is most striking for the \hbox{C\,\textsc{iv}\,$\lambda1549$}\ and \hbox{N\,\textsc{v}\,$\lambda1240$}\ resonance lines, whose net emission can be drastically reduced and even entirely cancelled -- as pictured by light-green aureolas around some benchmark models -- in Figs~\ref{fig:baton3z_uv} and \ref{fig:baton3u_uv}. \item[] \textit{Stellar population age, $t$}. At constant star formation rate, the age of the stellar population sets the age of the oldest \hbox{H{\sc ii}}\ regions contributing to the nebular emission from a galaxy in our model (equation~\ref{eq:flux_gal}). In an individual \hbox{H{\sc ii}}\ region, the rate of ionizing photons, and hence the ionization parameter (equation~\ref{eq:Udef}), drop sharply at ages after about 3\,Myr (e.g., Fig.~\ref{fig:fesc}d). 
For $\psi(t)=\mathrm{constant}$, therefore, the global effective ionization parameter of the population of \hbox{H{\sc ii}}\ regions declines until a stationary population of ionizing stars is reached, which happens around $t=10\,$Myr in the C\&B models (but see also Section~\ref{sec:lycprod} below). Thus, increasing $t$ from 3 to 10\,Myr (brown segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}) tends to have an effect on emission lines similar to that of decreasing \hbox{$\langle U\rangle$}\ (see above), such as making \hbox{[N\,\textsc{ii}]}/\hbox{H$\alpha$}\ (Figs~\ref{fig:baton3z_opt}a and \ref{fig:baton3u_opt}a) and \hbox{[O\,\textsc{ii}]}/\hbox{[O\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_opt}b and \ref{fig:baton3u_opt}b) larger and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e) smaller. The equivalent widths of \hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}b and \ref{fig:baton3u_uv}b), \hbox{C\,\textsc{iv}}\ (Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c) and \hbox{C\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3u_uv}g) drop because of the build-up of continuum flux from older stellar populations. \item[] \textit{Upper mass cut-off of the IMF, \hbox{$m_{\rm{up}}$}}. At fixed \hbox{$\langle U\rangle$}\ and other parameters, increasing \hbox{$m_{\rm{up}}$}\ from 100, to 300, to 600\,\hbox{M$_{\rm{\odot}}$}\ (dark-purple segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}) hardens the ionizing spectrum, because massive stars evolve at higher temperatures than lower-mass stars. The effect is much stronger from 100 to 300\,\hbox{M$_{\rm{\odot}}$}\ than from 300 to 600\,\hbox{M$_{\rm{\odot}}$}, because of the upturn of the upper main sequence in the Hertzsprung-Russell diagram. 
The hardening of the spectrum primarily implies larger ratios of \hbox{He\,\textsc{ii}}-to-other lines, such as \hbox{He\,\textsc{ii}\,$\lambda1640$}/\hbox{O\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}h and \ref{fig:baton3u_uv}h) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e), and in turn, smaller \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}b and \ref{fig:baton3u_uv}b) and \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c). We note that, for $\hbox{$m_{\rm{up}}$}=100\,\hbox{M$_{\rm{\odot}}$}$, \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ increases steadily from $Z=0.008$, to 0.002, to 0.0005 (Fig.~\ref{fig:baton3z_opt}e), because of the higher effective temperatures of metal-poor relative to metal-rich stars (see above). Raising \hbox{$m_{\rm{up}}$}\ also increases the equivalent width of \hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}a and \ref{fig:baton3u_uv}a), and to a lesser extent, those of \hbox{C\,\textsc{iv}}\ (Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c) and \hbox{C\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3u_uv}g). Finally, we note that, for $\hbox{$m_{\rm{up}}$}=600\,\hbox{M$_{\rm{\odot}}$}$, a stellar \hbox{He\,\textsc{ii}\,$\lambda1640$}\ wind feature can arise even at the metallicity $Z=0.0005$ in the models of Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} (with predicted equivalent width $\sim1.7\,$\AA\ and full width at half-maximum $\sim1800\,$km\,s$^{-1}$). \item[] \textit{Stellar population synthesis model}. The light-purple and magenta segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} show the effect of using the \textsc{bpass}\ single- and binary-star models, respectively, in place of the {\small C\&B}\ model, to compute $S_{\lambda}(\hbox{$t^\prime$})$ in equation~\eqref{eq:flux_gal}. 
At the considered age of 3\,Myr, both versions of the \textsc{bpass}\ model tend to produce slightly softer ionizing radiation than the {\small C\&B}\ model, which incorporates recent evolutionary tracks and model atmospheres for massive stars (Section~\ref{sec:ionb}; see also Section~\ref{sec:lycprod} below). As a result, in all panels of Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, changing from the {\small C\&B}\ to \textsc{bpass}\ models has an effect on line ratios and equivalent widths similar to that of lowering \hbox{$m_{\rm{up}}$}\ (see above).\footnote{The photoionization modelling in the comparison of {\small C\&B}\ and \textsc{bpass}\,v2.2.1 models presented in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} is fully self-consistent and includes dust physics. We note that \citet{Xiao18} compare the \citet{gutkin2016} models, which include dust physics, with dust-free photoionization models computed using \textsc{bpass}\,v2.1 (\citealt{Xiao18} also inadvertently plotted \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ in place of \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ from the \citealt{gutkin2016} models in their fig.~B1; E. Stanway, private communication).} As expected, the \textsc{bpass}\ binary-star model produces harder radiation than the \textsc{bpass}\ single-star model \citep[e.g.,][]{stanway19}, the effect increasing toward later ages (not shown). It is also worth noting that, since a majority of massive stars are expected to undergo binary interactions \citep[e.g.,][]{Sana12}, the \hbox{He\,\textsc{ii}}-line strengths predicted by the single-star {\small C\&B}\ models should be considered as lower limits. \item[] \textit{Fraction of escaping LyC photons, \hbox{$f_{\rm{esc}}$}}. 
The light-blue segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} show the effect of decreasing \hbox{$\tau_{570}$}, the zero-age optical depth of \hbox{H{\sc ii}}\ regions to LyC photons with wavelength $\lambda=570\,$\AA\ (Section~\ref{sec:fesc}), from +1.0 to $-1.0$. This is equivalent to increasing \hbox{$f_{\rm{esc}}$}\ from zero to nearly unity (Fig.~\ref{fig:fesc}c). As seen in Section~\ref{sec:fesc} (Figs~\ref{fig:struction_carbon} and \ref{fig:struction_oxygen}), increasing \hbox{$f_{\rm{esc}}$}\ progressively removes the outer low-ionization zones of \hbox{H{\sc ii}}\ regions, causing \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c) and \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ (inverse abscissa of Figs~\ref{fig:baton3z_opt}b and \ref{fig:baton3u_opt}b) to rise, while \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}b and \ref{fig:baton3u_uv}b) and the equivalent widths of \hbox{C\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3u_uv}g) and \hbox{H$\beta$}\ (Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e) drop sharply (at ages $t>3\,$Myr for large \hbox{$f_{\rm{esc}}$}, the effect on \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ would be inverted; see Fig.~\ref{fig:convol}f). Also, as noted by \citet{izotov17oct}, increasing \hbox{$f_{\rm{esc}}$}\ makes \hbox{He\,\textsc{i}\,$\lambda3889$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ larger because of the high sensitivity of the $\lambda$3889 transition to fluorescence (which increases the line luminosity as the optical depth decreases), while the effect on \hbox{He\,\textsc{i}\,$\lambda7065$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ is weaker (Figs~\ref{fig:baton3z_opt}d and \ref{fig:baton3u_opt}d). 
This led \citet{izotov17oct} to argue that \hbox{He\,\textsc{i}}\ lines could be a promising alternative to \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ to constrain \hbox{$f_{\rm{esc}}$}\ in star-forming galaxies. \item[] \textit{AGN component}. The orange segments in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} show the effect of adding an AGN component contributing from 0 to 99 per cent of the total \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission, using the prescription of Section~\ref{sec:agn} (this corresponds roughly to a contribution from 0 to 40--80 per cent of the total \hbox{H$\beta$}\ emission, depending on the model). In these Seyfert 2-galaxy models, the AGN narrow-line region contributes to the nebular (line and recombination-continuum) emission, but not the underlying ultraviolet and optical emission. Thus, the equivalent widths of \hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}a and \ref{fig:baton3u_uv}a), \hbox{C\,\textsc{iv}}\ (Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c), \hbox{C\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}h and \ref{fig:baton3u_uv}h) and \hbox{H$\beta$}\ (Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e) all rise. The much harder spectra of AGN relative to stars at high energies \citep[e.g., fig.~1 of][]{Feltre2016} imply larger ratios of \hbox{He\,\textsc{ii}}-to-other lines and larger \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}e and \ref{fig:baton3u_uv}e) and \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ (inverse abscissa of Figs~\ref{fig:baton3z_opt}b and \ref{fig:baton3u_opt}b), the AGN component accounting for nearly all the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission in the most extreme models. We note that \hbox{N\,\textsc{v}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ drops in Figs~\ref{fig:baton3z_uv}d and \ref{fig:baton3u_uv}d, because of the conversion of N$^{4+}$ into N$^{5+}$. 
In these figures, the \citet{dors14} and \citet{Mignoli19} observations of powerful AGN hosted by massive galaxies can be reproduced by models with metallicity $Z\approx0.008$ and high ionization parameters, $-2\lesssim \log \hbox{$\langle U\rangle$} \lesssim -1$ \citep[see also section~4.2 of][]{Mignoli19}. Finally, the larger \hbox{$n_{\mathrm{H}}$}\ of the AGN models ($10^3\,{\rm cm}^{-3}$) relative to \hbox{H{\sc ii}}-region models ($10^2\,{\rm cm}^{-3}$) makes \hbox{He\,\textsc{i}\,$\lambda7065$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ rise because of collisional enhancement as the AGN contribution increases, while the inclusion of microturbulence in the AGN models (Section~\ref{sec:agn}) reduces the $\lambda$3889-line optical depth and hence the effects of fluorescence \citep{Benjamin02}, causing \hbox{He\,\textsc{i}\,$\lambda3889$}/\hbox{He\,\textsc{i}\,$\lambda6678$}\ to also rise (Figs~\ref{fig:baton3z_opt}d and \ref{fig:baton3u_opt}d). \item[] \textit{Shock component}. The series of red symbols of different darkness in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} show the effect of adding a radiative-shock component contributing 90 per cent of the total \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission, using the prescription of Section~\ref{sec:shocks}. The symbol shape corresponds to the metallicity of the associated benchmark model (upside-down triangle: $Z=0.0005$; circle: $Z=0.002$; square: $Z=0.008$) and the darkness to the shock velocity (from $10^2\,$km\,s$^{-1}$: light; to $10^3\,$km\,s$^{-1}$: dark). The signatures of a shock component are very similar to those identified above for an AGN component, in particular very strong ratios of \hbox{He\,\textsc{ii}}-to-other lines, with shocks typically accounting for less than 15 per cent of, for example, the total \hbox{H$\beta$}\ luminosity in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}. 
This is because collisional ionization in the high-temperature ($\hbox{$T_{\rm{e}}$}>10^6\,$K) radiative zone of a shock produces He$^{2+}$ along with other highly-ionized species (e.g., C$^{4+}$ to C$^{6+}$, N$^{4+}$ to N$^{7+}$, O$^{4+}$ to O$^{8+}$), whose recombination generates strong \hbox{He\,\textsc{ii}}\ emission and extreme ultraviolet and soft X-ray emission capable of producing lower-ionization species upstream and downstream of the shock \citep[see figs.~8--11 of][]{allen08}. Hence, the main effect of adding a contribution by shock-ionized gas is to raise the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ equivalent width (Figs~\ref{fig:baton3z_uv}b and \ref{fig:baton3u_uv}b) and all ratios of \hbox{He\,\textsc{ii}}-to-other lines, such as \hbox{He\,\textsc{ii}\,$\lambda1640$}/\hbox{O\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}h and \ref{fig:baton3u_uv}h) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e), and the inverse of \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}b and \ref{fig:baton3u_uv}b) and \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c). The intensity and hardness of the ionizing radiation increase with shock velocity. We have checked that adopting different pre-shock densities (in the range $1\leq \hbox{$n_{\mathrm{H}}$}\leq 10^4$\,cm$^{-3}$; see Section~\ref{sec:shocks}) and transverse magnetic fields ($10^{-4}\leq B\leq 10\,\mu$G) has a negligible effect on the results of Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, except for the low-ionization \hbox{[N\,\textsc{ii}]}\ and \hbox{[O\,\textsc{ii}]}\ lines (Figs~\ref{fig:baton3z_opt}a,b and \ref{fig:baton3u_opt}a,b), whose fluxes tend to decrease when \hbox{$n_{\mathrm{H}}$}\ rises and $B$ drops. 
\end{itemize} \section{Constraints on the production and escape of ionizing radiation}\label{sec:constraints} In the previous section, we described the ultraviolet and optical emission-line signatures of a wide range of ISM, stellar-population, AGN and radiative-shock parameters in metal-poor star-forming galaxies. We now interpret these results to investigate emission-line diagnostics of the production and escape of ionizing radiation in such galaxies. Specifically, we wish to assess the hints provided by the reference observational sample of Section~\ref{sec:obs} on the sources dominating the production of ionizing photons (Section~\ref{sec:lycprod}) and on LyC-photon leakage (Section~\ref{sec:lycfesc}) in these galaxies. Along the way, we mention how our findings compare with those of previous studies, which often relied on fewer emission lines and different models. \subsection{Diagnostics of ionizing sources}\label{sec:lycprod} \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure12_light.png}} \end{center} \caption{Evolution of the \hbox{He\,\textsc{ii}\,$\lambda1640$}\ equivalent width (left) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio (right) in models of ionization-bounded \hbox{H{\sc ii}}\ regions powered by different types of SSPs. The black curves show the {\small C\&B}-based benchmark models with $\log\hbox{$\langle U\rangle$}=-2$ of Section~\ref{sec:params}, for $Z=0.0005$ (top), 0.002 (middle) and 0.008 (bottom). The light-purple and magenta curves show the corresponding models powered by \textsc{bpass}\ single- and binary-star SSPs, respectively (\textsc{bpass}\ models are not available for $Z=0.0005$). 
The dotted and solid dark-purple lines show SSP models with the same parameters as the black curves, but for IMF upper mass cut-offs $\hbox{$m_{\rm{up}}$}=100$ and 600\,\hbox{M$_{\rm{\odot}}$}, respectively (the area between these two models has been shaded in purple, for clarity). The grey curves show the same models as the black curves, but for $\log\hbox{$\langle U\rangle$}=-1$.} \label{fig:HeII_ssp_age} \end{figure} \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure13_light.png}} \end{center} \caption{EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) plotted against \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ (left; as in Fig.~\ref{fig:obs_uv}b) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ plotted against EW(\hbox{H$\beta$}) (right; as in Fig.~\ref{fig:obs_opt}e). The observations (greyed for clarity) are the same as in Figs~\ref{fig:obs_uv}b and \ref{fig:obs_opt}e, while the models are the same as in Fig.~\ref{fig:HeII_ssp_age} (without the purple shading between models for $\hbox{$m_{\rm{up}}$}=100$ and 600\,\hbox{M$_{\rm{\odot}}$}).} \label{fig:HeII_ssp_obs} \end{figure} \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure14_light.png}} \end{center} \caption{Same as Fig.~\ref{fig:HeII_ssp_age}, but for models with constant star formation rate.} \label{fig:HeII_cst_age} \end{figure} Among the most challenging lines to reproduce in the spectra of metal-poor star-forming galaxies are the \hbox{He\,\textsc{ii}}\ recombination lines, which can be much stronger than predicted by standard models (Section~\ref{sec:intro}). The signatures of the wide collection of models considered in Section~\ref{sec:params} in spectral diagnostic diagrams involving \hbox{He\,\textsc{ii}}\ lines therefore provide potentially useful hints on the sources powering the ionizing radiation in such galaxies. 
For example, we saw that, in ionization-bounded models, the equivalent widths of \hbox{He\,\textsc{ii}\,$\lambda1640$}\ and \hbox{H$\beta$}\ and the \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio depend only moderately on ISM parameters other than \hbox{$\langle U\rangle$}, making these observables selectively sensitive to the source of ionizing radiation. Instead, the ratios of \hbox{He\,\textsc{ii}}-to-metallic lines are also strongly affected by metallicity, the C/O ratio (in the case of \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ and \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}) and interstellar-line absorption (in the case of \hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}). With this in mind, we investigate below the extent to which stellar populations, AGN and radiative shocks can account for the emission-line signatures of metal-poor, star-forming galaxies. We also discuss X-ray binaries as potential sources of ionizing radiation in these galaxies. \subsubsection{Stellar populations}\label{stelpops} If stars are the main source of ionizing radiation in the galaxies of the reference sample of Section~\ref{sec:obs}, the equivalent widths of \hbox{He\,\textsc{ii}\,$\lambda1640$}\ and \hbox{H$\beta$}\ and the \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio will depend sensitively on the upper mass cut-off of the IMF, \hbox{$m_{\rm{up}}$}, the stellar population age, $t$, the stellar population model itself and \hbox{$\langle U\rangle$}\ (Figs~\ref{fig:baton3z_uv}a, \ref{fig:baton3z_opt}e, \ref{fig:baton3u_uv}a and \ref{fig:baton3u_opt}e). 
To further characterise this dependence, we show in Fig.~\ref{fig:HeII_ssp_age} the evolution of EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ for ionization-bounded \hbox{H{\sc ii}}\ regions powered by different types of SSPs, while Fig.~\ref{fig:HeII_ssp_obs} shows these models in the same panels as in Figs~\ref{fig:obs_uv}b and \ref{fig:obs_opt}e defined by EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}), \hbox{C\,\textsc{iii}]\,$\lambda1908$}/\hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ and EW(\hbox{H$\beta$}). The black curves in Figs~\ref{fig:HeII_ssp_age} and \ref{fig:HeII_ssp_obs} show SSP models with the same parameters as the benchmark models with $\log\hbox{$\langle U\rangle$}=-2$ of Section~\ref{sec:params}, for $Z=0.0005$ (top panels), 0.002 (middle panels) and 0.008 (bottom panels). In Fig.~\ref{fig:HeII_ssp_age}, these models show how, as $Z$ increases, the drop in effective temperature of massive stars on the early main sequence (for $\hbox{$t^\prime$}\ll1\,$Myr), the rise in mass-loss rate (a $300\,\hbox{M$_{\rm{\odot}}$}$ star leaving the main sequence, around $\hbox{$t^\prime$}\approx2\,$Myr, has lost 10, 25 and 70 per cent of its mass for $Z=0.0005$, 0.002 and 0.008, respectively) and the development of the WR phase (around $\hbox{$t^\prime$}\approx3\,$Myr) shape the evolution of EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (see also Section~\ref{sec:params}). In Figs~\ref{fig:HeII_ssp_obs}a, \ref{fig:HeII_ssp_obs}c and \ref{fig:HeII_ssp_obs}e, the above models reach a region populated by galaxies with more extreme EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) and \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ than could be attained by models with constant star formation rate in Fig.~\ref{fig:baton3z_uv}b. 
This is because, as Fig.~\ref{fig:HeII_cst_age} shows, continuous star formation smooths out the evolution of the spectral features in Fig.~\ref{fig:HeII_ssp_age}. However, while the WR phase of the model with $Z=0.002$ hardly reaches the high observed $\hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\approx0.01$ around $\mathrm{EW(\hbox{H$\beta$})}\approx200\,$\AA\ (Fig.~\ref{fig:HeII_ssp_obs}d), none of the reference SSP models can account for the extreme $\hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\approx0.02$ of galaxies with $\mathrm{EW(\hbox{H$\beta$})}\approx500\,$\AA\ in the \citet{izotov17oct} sample (Figs~\ref{fig:HeII_ssp_obs}b, \ref{fig:HeII_ssp_obs}d and \ref{fig:HeII_ssp_obs}f). Increasing the ionization parameter from $\log\hbox{$\langle U\rangle$}=-2$ to $-1$ (grey curves) significantly boosts EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ in Fig.~\ref{fig:HeII_ssp_age} (see Section~\ref{sec:params}), but this only moderately reduces the discrepancy between observed and predicted \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ for galaxies with $\mathrm{EW(\hbox{H$\beta$})}\approx500\,$\AA\ in Fig.~\ref{fig:HeII_ssp_obs}. The light-purple and magenta curves in Figs~\ref{fig:HeII_ssp_age}--\ref{fig:HeII_cst_age} show the predictions of the \textsc{bpass}\ single- and binary-star models, respectively, for SSPs with the same parameters as the {\small C\&B}\ reference models, for $Z=0.002$ and 0.008 (\textsc{bpass}\ models are not available for $Z=0.0005$). These models start at $\hbox{$t^\prime$}=1\,$Myr, hence the flat evolution of EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ at younger ages in Fig.~\ref{fig:HeII_ssp_age}. 
From 1 to 3\,Myr, both \textsc{bpass}\ models show qualitatively the same evolution as the {\small C\&B}\ models, albeit with a weaker WR phase implying smaller EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}. Then, as the \hbox{He\,\textsc{ii}}\ emission dies off in the single-star \textsc{bpass}\ and {\small C\&B}\ models, the production of hard ionizing radiation is maintained through binary mass transfer in the binary-star \textsc{bpass}\ model. The effect is particularly striking in the generation of strong \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ at small EW(\hbox{H$\beta$}) in Figs~\ref{fig:HeII_ssp_obs}d and \ref{fig:HeII_ssp_obs}f. However, this has no effect on the discrepancy between models and observations, which appears to be even more severe at larger EW(\hbox{H$\beta$}) for the \textsc{bpass}\ than for the {\small C\&B}\ models. We note in passing that \citet{BPASSv21} used observations of \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ versus EW(\hbox{H$\beta$}) (Fig.~\ref{fig:obs_opt}c above) to highlight the better performance of binary- versus single-stellar population models. As the brown (age) and magenta (\textsc{bpass}) segments in Figs~\ref{fig:baton3z_opt}c and \ref{fig:baton3u_opt}c suggest, star-forming galaxies with low EW(\hbox{H$\beta$}) and high \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ can be reached by both types of models for continuous star formation at ages $t\gg10\,$Myr.\footnote{In the version of Fig.~\ref{fig:obs_opt}c published by \citet[][fig.~38 in their paper]{BPASSv21}, only SSP models were presented, and the \hbox{H$\beta$}\ equivalent widths from \citet{schenker13} were inadvertently corrected twice for redshift (J.~J. 
Eldridge, private communication).} In fact, we have checked, for example, that the model with $\log\hbox{$\langle U\rangle$}=-2$ and $Z=0.002$ in these figures reaches $\mathrm{EW(\hbox{H$\beta$})}\approx50$\,\AA\ (40\,\AA) after 1\,Gyr (2\,Gyr) of constant star formation, at nearly constant \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}. The other parameter strongly affecting the ionizing radiation from young stellar populations is the upper mass cut-off of the IMF. The dotted and solid dark-purple lines in Figs~\ref{fig:HeII_ssp_age}--\ref{fig:HeII_cst_age} show SSP models with the same parameters as the {\small C\&B}\ reference models, but for $\hbox{$m_{\rm{up}}$}=100$ and 600\,\hbox{M$_{\rm{\odot}}$}, respectively. In Figs~\ref{fig:HeII_ssp_age} and \ref{fig:HeII_cst_age}, the area between these two models has been shaded in purple, for clarity. While raising \hbox{$m_{\rm{up}}$}\ hardens the ionizing radiation, the effect is modest from $\hbox{$m_{\rm{up}}$}=300$ to 600\,\hbox{M$_{\rm{\odot}}$}\ (Section~\ref{sec:params}). Fine-tuning the upper IMF therefore does not appear a promising way to significantly improve the agreement between models and observations of EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}), \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ and EW(\hbox{H$\beta$}) in Fig.~\ref{fig:HeII_ssp_obs}. This is consistent with the conclusions reached by \citet{stanway19} based simply on the ratio of \hbox{He\,\textsc{ii}}-to-\hbox{H{\sc i}}\ ionizing photons. 
We note that models with LyC-photon leakage ($\hbox{$f_{\rm{esc}}$}>0$) can reach larger ratios of \hbox{He\,\textsc{ii}}-to-low ionization lines, such as \hbox{He\,\textsc{ii}\,$\lambda1640$}/\hbox{C\,\textsc{iii}]}\ (inverse abscissa of Figs~\ref{fig:baton3z_uv}d and \ref{fig:baton3u_uv}d), \hbox{He\,\textsc{ii}\,$\lambda1640$}/(\hbox{C\,\textsc{iv}}+\hbox{C\,\textsc{iii}]}) (inverse abscissa of Figs~\ref{fig:baton3z_uv}e and \ref{fig:baton3u_uv}e), \hbox{He\,\textsc{ii}\,$\lambda1640$}/\hbox{O\,\textsc{iii}]}\ (Figs~\ref{fig:baton3z_uv}h and \ref{fig:baton3u_uv}h) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e). However, such models fail to account simultaneously for the large equivalent widths of low-ionization lines observed in many galaxies (see, e.g., Figs~\ref{fig:baton3z_uv}h and \ref{fig:baton3u_uv}h for \hbox{C\,\textsc{iii}]}, and Figs~\ref{fig:baton3z_opt}e and \ref{fig:baton3u_opt}e for \hbox{H$\beta$}). \subsubsection{AGN and radiative shocks}\label{agnshocks} Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} show how introducing either an AGN or radiative-shock component allows models to reproduce observations of galaxies with high \hbox{He\,\textsc{ii}}\ emission in nearly all ultraviolet and optical line-ratio diagrams. This is not surprising, given the strong \hbox{He\,\textsc{ii}}\ emission produced by AGN and radiative shocks (Section~\ref{sec:params}), which has long made them good candidate sources of hard ionizing radiation in metal-poor star-forming galaxies (Section~\ref{sec:intro}). The novelty of Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} is to illustrate at once, and with a self-consistent modelling of interstellar abundances, the influence of these components on a wide range of ultraviolet and optical emission lines. 
Also, having assembled a substantial observational reference sample (Section~\ref{sec:obs}) allows us to highlight general trends and derive more robust conclusions than based on individual objects. It is not obvious from Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} which of an AGN or radiative-shock component can best account for the properties of metal-poor star-forming galaxies with strong \hbox{He\,\textsc{ii}}\ emission. \citet{izotov12} find that the production of high-ionization \hbox{[Ne\,\textsc{v}]\,$\lambda3426$}\ emission ($E_\mathrm{ion}>97.2$\,eV) in the spectra of 8 blue compact dwarf galaxies with $\hbox{12 + log(\OH)}=7.3$--7.7 and strong \hbox{He\,\textsc{ii}}\ emission ($\hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\gtrsim0.01$) requires a contribution of about 10 per cent of the total ionizing radiation by AGN or radiative shocks. While these authors favour supernova-driven radiative shocks with velocities around 300--500\,km\,s$^{-1}$ as the source of this emission, they cannot rule out an AGN origin. \citet{stasinska15} also note that shocks can naturally account for the high \hbox{[O\,\textsc{i}]\,$\lambda6300$}/\hbox{[O\,\textsc{iii}]}\ ratios observed in the spectra of blue compact dwarf galaxies with high \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ ratios, as density-bounded models producing high \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ would imply low \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ (see also Section~\ref{sec:fesc}). 
\begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure15_light.png}} \end{center} \caption{\hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ plotted against \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ for: (a) the galaxies in the reference sample of Section~\ref{sec:obs} (available in practice only for the subsample of LyC leakers and two AGN); and (b) the complete set of models from Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, along with observations from panel (a) greyed for clarity.} \label{fig:OI} \end{figure} To further investigate this issue, in Fig.~\ref{fig:OI}, we plot \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ versus \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ for the complete set of models from Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, along with observations from the sample of Section~\ref{sec:obs} -- available in practice only for the subsample of LyC leakers \citep{leitet11, jaskot13, izotov16oct, izotov16jan, izotov17oct, izotov18mar, izotov18aug, chisholm17}, a few Wolf-Rayet galaxies \citep{lopezsanchez08} and three AGN \citep{dors14}. As in other line-ratio diagrams, contributions by AGN and radiative shocks to the ionizing radiation of model galaxies have roughly similar signatures in Fig.~\ref{fig:OI}, increasing \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ typically far more than \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ (except for very low ionization parameter). Hence, these line ratios cannot help discriminate at first glance between AGN and shock ionization in a galaxy either. A more striking feature of Fig.~\ref{fig:OI} is that nearly all observations of (confirmed and candidate) LyC leakers exhibit higher \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ than the benchmark ionization-bounded models in the full explored ranges of $-3\leq\log\hbox{$\langle U\rangle$}\leq-1$ and $0.0005\leq Z\leq0.008$ at fixed \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}.
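As an illustration of how such a diagnostic could be applied in practice, the following Python sketch computes the two line ratios of Fig.~\ref{fig:OI} from dereddened fluxes and flags objects lying above an assumed ionization-bounded locus. The flux values and the analytic form of the benchmark are invented placeholders for demonstration, not a fit to our model grid:

```python
# Illustrative sketch (not from the paper's pipeline): given dereddened line
# fluxes, compute the two diagnostic ratios of Fig. 15 and flag objects whose
# [O I]/[O III] exceeds an (assumed, illustrative) ionization-bounded benchmark
# at their [O III]/[O II]. All numerical values here are invented.

def oxygen_diagnostics(f_oiii_5007, f_oii_3727, f_oi_6300):
    """Return ([O III]/[O II], [O I]/[O III]) from line fluxes."""
    return f_oiii_5007 / f_oii_3727, f_oi_6300 / f_oiii_5007

def above_benchmark(o32, o13, benchmark=lambda o32: 0.02 / o32 ** 0.5):
    """True if [O I]/[O III] lies above an assumed benchmark locus.

    The benchmark form 0.02 * O32**-0.5 is a purely illustrative stand-in
    for an ionization-bounded model grid, not a fit to it.
    """
    return o13 > benchmark(o32)

# Mock fluxes (arbitrary units) for two hypothetical galaxies
o32_a, o13_a = oxygen_diagnostics(5.0, 1.0, 0.08)   # leaker-like
o32_b, o13_b = oxygen_diagnostics(5.0, 1.0, 0.002)  # benchmark-like
print(above_benchmark(o32_a, o13_a), above_benchmark(o32_b, o13_b))
```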
Tuning the stellar population parameters, including the star formation history, can bring the models only slightly closer to the data. In density-bounded models, the ratio of low- to high-ionization lines further drops (Section~\ref{sec:fesc}), worsening the agreement between models and observations (light-blue segments in Fig.~\ref{fig:OI}). The only way to account for the observed properties of LyC leakers in Fig.~\ref{fig:OI} is to invoke a significant contribution by an AGN or radiative shocks (or X-ray binaries, but see Section~\ref{sec:xrb} below) to the ionizing radiation. This is because the hard penetrating X-ray and extreme-ultraviolet radiation from such sources produces higher electronic temperatures than stellar radiation in the outskirts of \hbox{H{\sc ii}}\ regions, thereby enhancing \hbox{O\,\textsc{i}}\ collisional excitation (we note that, in young shocks which have not yet developed a cool tail, \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ can be significantly reduced and \hbox{He\,\textsc{ii}}/\hbox{H$\beta$}\ slightly enhanced relative to the models shown in Fig.~\ref{fig:OI}; see \citealt{3MdBs19}). This can arise in the context of both density-bounded and ionization-bounded (in the picket-fence leakage scenario; see Section~\ref{sec:fesc}) models. This conclusion is consistent with that drawn by \citet{stasinska15} from the analysis of a sample of blue compact dwarf galaxies with very high excitation. \citet{stasinska15} also pointed out the interest of the \hbox{[Ar\,\textsc{iii}]\,$\lambda7135$}\ and \hbox{[Ar\,\textsc{iv}]\,$\lambda4740$}\ (hereafter simply \hbox{[Ar\,\textsc{iii}]}\ and \hbox{[Ar\,\textsc{iv}]}) lines to probe ionizing-photon energies greater than the ionization potential of Ar$^{2+}$ (40.7\,eV), which lies between the ionization potentials of O$^+$ (35.1\,eV) and He$^+$ (54.4\,eV).
In Fig.~\ref{fig:Ar}, we plot \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ against \hbox{[Ar\,\textsc{iv}]}/\hbox{[Ar\,\textsc{iii}]}\ for the same models and observations as in Fig.~\ref{fig:OI}; in practice, data are available only for a few candidate LyC leakers \citep{jaskot13,izotov17oct} and an AGN \citep{dors14}. The benchmark ionization-bounded models appear to overlap with the data in this diagram, as do density-bounded models, possibly combined with an AGN or radiative-shock component. Along with this smaller dispersion of models relative to Fig.~\ref{fig:OI}, it is worth noting that, in Fig.~\ref{fig:Ar}, only models with a radiative-shock component can reach $\hbox{[Ar\,\textsc{iv}]}/\hbox{[Ar\,\textsc{iii}]}\sim1$ around $\hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\sim10$, where some extreme-excitation galaxies can be found in the \citet{stasinska15} sample (see their fig.~6). \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{./figure16_light.png}} \end{center} \caption{Same as Fig.~\ref{fig:OI}b, but for \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ plotted against \hbox{[Ar\,\textsc{iv}]}/\hbox{[Ar\,\textsc{iii}]}.} \label{fig:Ar} \end{figure} Radiative shocks from expanding \hbox{H{\sc ii}}\ regions and supernova blast waves are an appealing natural hypothesis for the origin of hard ionizing radiation in actively star-forming, metal-poor galaxies \citep[e.g.,][]{Thuan05,stasinska15}. Fig.~\ref{fig:OI} supports the idea that shocks may be intimately related to the leakage of ionizing photons from such galaxies.
Interestingly, the presence of shocks will increase primarily the luminosities of \hbox{He\,\textsc{ii}}\ and very-high-ionization lines, such as \hbox{[Ne\,\textsc{v}]\,$\lambda3426$}\ (but not so much \hbox{N\,\textsc{v}\,$\lambda1240$}, as N$^{4+}$ is converted into N$^{5+}$; see Section~\ref{sec:params}), while the luminosities of lower-ionization lines (including \hbox{H$\beta$}) remain largely controlled by stellar radiation. In fact, \citet{izotov12} find no significant correlation between \hbox{[Ne\,\textsc{v}]}\ and \hbox{H$\beta$}\ emission in the 8 galaxies of their sample. In this context, the absence of correlation between \hbox{He\,\textsc{ii}}\ and \hbox{H$\beta$}\ emission in the sample of extremely metal-poor galaxies studied by \citet[][see also \citealt{senchyna19}]{SenchynaStark19} could be consistent with a radiative-shock origin of the \hbox{He\,\textsc{ii}}\ emission. AGN and radiative-shock components are sometimes discarded as sources of hard ionizing radiation on the basis of spectral fits. For example, \citet{berg18} conclude that an AGN or radiative-shock component is unlikely to account for the strong \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission in the extreme star-forming galaxy SL2SJ021737-051329, as this would make \hbox{C\,\textsc{iii}]}/\hbox{O\,\textsc{iii}]}\ too small and \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ too large, based on AGN models by \citet{groves04} and shock models by \citet[][see Table~\ref{tab:analogs}]{allen08}. 
While the \hbox{H{\sc ii}}-region, AGN and shock models used by \citet{berg18} were computed using different ISM prescriptions, in the framework of our models, as can be guessed from Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, a combination of $\log\hbox{$\langle U\rangle$}\lesssim-2$, $\hbox{C/O}\gtrsim0.17$, $\hbox{$\xi_{\rm{d}}$}\approx0.1$ and an AGN (or radiative-shock) contribution of $\sim$8 per cent of the total \hbox{H$\beta$}\ emission turns out to accommodate the observed ultraviolet and optical nebular spectrum of this galaxy (see Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt} to locate the galaxy in all panels, the oxygen abundance corresponding to a metallicity between $Z=0.0005$ and 0.002).\footnote{A pure SSP of age $\hbox{$t^\prime$}\approx2.5\,$Myr with the same \hbox{$\langle U\rangle$}, \hbox{C/O}\ and $\hbox{$\xi_{\rm{d}}$}$ as this composite model can also approximate closely all observations of SL2SJ021737-051329 in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, except for the \hbox{H$\beta$}\ equivalent width [$\mathrm{EW(\hbox{H$\beta$})}\approx200\,$\AA\ instead of the observed 517\,\AA]. We consider this model less likely because of the very specific age required.} Hence, in some cases, the assessment of the potential presence of an AGN or radiative-shock component in a galaxy may depend on the adopted model prescription, highlighting once more the importance of a physically-consistent modelling of nebular emission from different sources (Section~\ref{sec:models}). We also recall that the AGN models presented in this paper were computed for a typical ionizing-spectrum slope $\alpha=-1.7$ (Section~\ref{sec:agn}), and that $\alpha$ variations could imply significant dispersion in predicted ultraviolet and optical line ratios \citep[e.g.,][]{Feltre2016}. 
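The way such a composite model is assembled can be sketched as a linear mixture of components normalized to their \hbox{H$\beta$}\ flux, with the AGN (or shock) fraction defined as its share of the total \hbox{H$\beta$}\ emission, as in the $\sim$8 per cent example above. In the following Python toy example, the per-component line ratios are invented placeholders rather than values from our grids:

```python
# Toy sketch of forming a composite spectrum when an AGN (or shock) component
# contributes a fixed fraction of the total H-beta emission. The per-component
# line fluxes below are invented placeholders, not values from the model grids.

def mix_lines(stellar, agn, f_agn_hbeta):
    """Combine two components whose line fluxes are normalized to H-beta = 1.

    f_agn_hbeta is the AGN fraction of the *total* H-beta flux; the stellar
    component supplies the remaining 1 - f_agn_hbeta.
    """
    return {line: (1.0 - f_agn_hbeta) * stellar[line] + f_agn_hbeta * agn[line]
            for line in stellar}

# Placeholder line fluxes relative to H-beta (illustrative only)
stellar = {"Hbeta": 1.0, "HeII4686": 0.001, "OIII5007": 4.0}
agn     = {"Hbeta": 1.0, "HeII4686": 0.20,  "OIII5007": 10.0}

composite = mix_lines(stellar, agn, f_agn_hbeta=0.08)
print(round(composite["HeII4686"] / composite["Hbeta"], 4))
```

Even a modest \hbox{H$\beta$}\ fraction from the hard component can thus dominate the composite \hbox{He\,\textsc{ii}}\ emission, because the stellar \hbox{He\,\textsc{ii}}/\hbox{H$\beta$}\ ratio is so small.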
It is worth noting that while a radiative-shock or AGN component can readily accommodate the emission-line properties of many observed galaxies with strong \hbox{He\,\textsc{ii}\,$\lambda1640$}\ emission in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, including those with weak \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ (see above; there is also the possibility for \hbox{C\,\textsc{iv}}\ emission to be reduced via interstellar absorption; Section~\ref{sec:params}), some outlier galaxies exhibit properties not sampled by the limited set of models presented here. We have checked that some models can account for the properties of such galaxies. For example, we find that galaxies with $\mathrm{EW(\hbox{He\,\textsc{ii}\,$\lambda1640$})}\gtrsim2\,$\AA\ and $\hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\gtrsim4$ (Figs~\ref{fig:baton3z_uv}b and \ref{fig:baton3u_uv}b) can be reached by models with $\hbox{C/O}\gtrsim0.17$ and a radiative-shock or AGN component. In Figs~\ref{fig:baton3z_uv}c and \ref{fig:baton3u_uv}c, the observed $\mathrm{EW(\hbox{C\,\textsc{iv}})}\gtrsim20\,$\AA\ and $\hbox{C\,\textsc{iv}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\approx10$ of the lensed double-super star cluster ID14 \citep[][whose properties approach those of the \hbox{H{\sc ii}}\ galaxy of \citealt{fosbury03}]{vanzella17} can be accommodated by young ($t\sim1\,$Myr), high-ionization ($\log\hbox{$\langle U\rangle$}\sim-1\,$) models with $\hbox{C/O}\gtrsim0.17$ and $\hbox{$\xi_{\rm{d}}$}\lesssim0.3$, also compatible with the other emission-line properties of this object. 
Very young models with $\hbox{C/O}\gtrsim0.17$ can also reach galaxies with high EW(\hbox{C\,\textsc{iii}]}) at small \hbox{C\,\textsc{iv}}/\hbox{C\,\textsc{iii}]}\ in Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3u_uv}g, while the \citet{laporte17} galaxy, with low \hbox{C\,\textsc{iii}]}/\hbox{He\,\textsc{ii}\,$\lambda1640$}\ and high \hbox{N\,\textsc{v}}/\hbox{He\,\textsc{ii}\,$\lambda1640$}, could well be a LyC-photon leaker powered by radiative shocks or an AGN (Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3u_uv}g). The above rough exploration of the parameter space will need to be refined by more robust spectral fits of each galaxy in the sample, using tools such as \textsc{beagle}\ \citep{Chevallard2016}, extended to incorporate AGN and radiative-shock prescriptions. \subsubsection{X-ray binaries}\label{sec:xrb} X-ray binaries, in which a compact object (neutron star or stellar-mass black hole) accretes material from a massive O/B companion, have been proposed as natural sources of hard ionizing photons in metal-poor star-forming galaxies \citep[e.g.,][]{Garnett91}. An argument supporting this hypothesis is the observed increase in hard X-ray luminosity with decreasing oxygen abundance (at a fixed star formation rate) in nearby metal-poor star-forming galaxies \citep[][and references therein]{Brorby16}, which goes in the same sense as the increase in EW(\hbox{He\,\textsc{ii}\,$\lambda1640$}) (Fig.~\ref{fig:obs_uv}a) and \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ (Fig.~\ref{fig:obs_opt}f). Also, the non-correlation of the equivalent width of \hbox{He\,\textsc{ii}$\lambda4686$}\ with that of \hbox{H$\beta$}\ and other emission lines in the sample of extremely metal-poor galaxies studied by \citet{SenchynaStark19} suggests that He$^+$-ionizing photons are produced by sources active on timescales longer than those of massive O/B stars, such as stripped stars produced by close binary evolution and X-ray binaries.
The accretion physics of X-ray binaries presents similarities to that of AGN \citep[see the review by][]{Gilfanov14}, which are in fact often considered as scaled-up versions of X-ray binaries \citep[e.g.,][]{McHardy06}. Hence, X-ray binaries are expected to produce ionizing spectra similar to those of AGN \citep[see also fig.~C5 of][]{stasinska15}, implying effects on emission-line ratios and equivalent widths similar to those found for an AGN component in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}. Recently, \citet{schaerer19} computed the time evolution of \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ and EW(\hbox{H$\beta$}) for SSPs including X-ray binaries at different metallicities, by combining the \citet[][see also \citealt{Madau17}]{Fragos13} stellar population synthesis models of X-ray binaries with \textsc{bpass}~v2.1, and adopting an approximate conversion between X-ray luminosity and rate of He$^+$-ionizing photons. This model reproduces roughly the trend of increasing \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ with decreasing \hbox{12 + log(\OH)}\ in nearby metal-poor star-forming galaxies \citep[fig.~1 of][]{schaerer19}. However, it fails to account for the high \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratios of galaxies with large EW(\hbox{H$\beta$}), just as the other stellar population synthesis models considered in Fig.~\ref{fig:HeII_ssp_obs} above \citep[see fig.~3 of][]{schaerer19}. This is consistent with the findings that X-ray binaries have spectra too soft to account for the very hard ionizing radiation of some metal-poor star-forming galaxies \citep[e.g.,][]{Thuan05,izotov12,stasinska15} and that they appear on too-long timescales to account for the emission-line properties of Green-Pea galaxies \citep{jaskot13}, as well as with the stringent observational upper limit from {\it Chandra} on the presence of X-ray binaries in the most extreme \hbox{He\,\textsc{ii}}-emitter observed by \citet{senchyna17}.
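The kind of approximate conversion between X-ray luminosity and He$^+$-ionizing photon rate mentioned above can be sketched as $Q \approx L_X/\langle E_{\rm phot}\rangle$. In the Python snippet below, the adopted mean photon energy is an illustrative assumption for an order-of-magnitude estimate, not the calibration used by \citet{schaerer19}:

```python
# Rough order-of-magnitude sketch of converting an X-ray luminosity into a
# rate of He+-ionizing photons, Q = L_X / <E_photon>. The assumed mean photon
# energy (and hence the result) is purely illustrative.

EV_TO_ERG = 1.602e-12  # one electron-volt in erg

def q_heii_from_lx(l_x_erg_s, mean_photon_energy_ev=200.0):
    """Photon rate Q = L_X / <E_photon>, with an assumed mean energy."""
    return l_x_erg_s / (mean_photon_energy_ev * EV_TO_ERG)

q = q_heii_from_lx(1.0e40)  # L_X typical of a luminous X-ray binary population
print(f"{q:.2e}")  # photons per second
```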
We conclude that, while X-ray binaries may provide a natural source of hard ionizing photons in metal-poor star-forming galaxies, they cannot account for the entire emission observed in the most extreme, highest-ionization cases. \subsection{Diagnostics of LyC-photon leakage}\label{sec:lycfesc} We now focus on the models of density-bounded \hbox{H{\sc ii}}\ regions in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}, to assess whether the emission-line properties of metal-poor star-forming galaxies can provide useful constraints on the fraction of escaping LyC photons, \hbox{$f_{\rm{esc}}$}. As seen in Section~\ref{sec:fesc} (and references therein), increasing \hbox{$f_{\rm{esc}}$}\ removes the outer low-ionization zones of \hbox{H{\sc ii}}\ regions, making ratios of high- to low-ionization lines (e.g. \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}) rise and the equivalent widths of lines with low ionization potential (e.g. \hbox{C\,\textsc{iii}]}) drop. The interpretation of these signatures in galaxy spectra is unfortunately complicated by the competing effects of other galaxy physical parameters, in particular the nature of the ionizing source, the ionization parameter, \hbox{$\langle U\rangle$}, metallicity, $Z$, and to a lesser extent the gas density, \hbox{$n_{\mathrm{H}}$}, dust-to-metal mass ratio, \hbox{$\xi_{\rm{d}}$}, and \hbox{C/O}\ ratio \citep[Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}; see also][]{jaskot13, nakajimaetouchi14, stasinska15, jaskot16, izotov17oct}. These degeneracies between the spectral signatures of \hbox{$f_{\rm{esc}}$}\ and other parameters are the reason why LyC leakers appear to overlap with the rest of the population of actively star-forming galaxies in Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt} (see Section~\ref{sec:obsprop}). Several diagnostics must therefore be combined to potentially discriminate the effects of \hbox{$f_{\rm{esc}}$}\ from those of other parameters on emission-line ratios. 
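The need to combine several diagnostics can be illustrated schematically: score candidate models against a set of observed line ratios with a simple $\chi^2$, rather than relying on any single ratio. In the following Python sketch, all numbers (the "observed" values, uncertainties and model grids) are invented placeholders for demonstration only:

```python
# Minimal sketch of combining several emission-line diagnostics: score
# candidate models against a set of observed line ratios with a simple
# chi-square, instead of relying on any single ratio. All numerical values
# below are invented placeholders.

def chi2(observed, model, sigma):
    """Sum of squared, error-normalized residuals over shared diagnostics."""
    return sum(((observed[k] - model[k]) / sigma[k]) ** 2 for k in observed)

observed = {"O32": 8.0, "OI_OIII": 0.015, "HeII_Hb": 0.02}
sigma    = {"O32": 1.0, "OI_OIII": 0.003, "HeII_Hb": 0.005}

models = {
    "ionization-bounded": {"O32": 3.0, "OI_OIII": 0.004, "HeII_Hb": 0.003},
    "density-bounded":    {"O32": 9.0, "OI_OIII": 0.002, "HeII_Hb": 0.003},
    "shock + leakage":    {"O32": 8.5, "OI_OIII": 0.014, "HeII_Hb": 0.018},
}

best = min(models, key=lambda name: chi2(observed, models[name], sigma))
print(best)
```

A full Bayesian treatment (as in \textsc{beagle}) generalizes this by marginalizing over the degenerate physical parameters rather than picking a single best grid point.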
That \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ alone is not a sufficient condition for LyC leakage is also illustrated by the fact that, as seen in Section~\ref{sec:fesc} (Fig.~\ref{fig:convol}f), this ratio for a density-bounded galaxy with constant star formation can actually be smaller than that of an ionization-bounded one for large \hbox{$f_{\rm{esc}}$}\ and \hbox{$\langle U\rangle$}\ (see age effect on model with $\log\hbox{$\langle U\rangle$}=-1$ in Fig.~\ref{fig:baton3u_opt}b). \citet{jaskot16} suggest that, for example, high \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ ($\gtrsim10$) and low EW(\hbox{C\,\textsc{iii}]}) ($\lesssim4\,$\AA) will tend to select density-bounded galaxies, although they do acknowledge that the scaling of \hbox{$f_{\rm{esc}}$}\ with EW(\hbox{C\,\textsc{iii}]}) will depend sensitively on metal abundances and stellar population age, as Figs~\ref{fig:baton3z_uv}g and \ref{fig:baton3z_opt}b show. We note in this context that the \hbox{He\,\textsc{i}}-based \hbox{$f_{\rm{esc}}$}\ diagnostic proposed by \citet[][see Figs~\ref{fig:baton3z_opt}d and \ref{fig:baton3u_opt}d]{izotov17oct} requires independent constraints on \hbox{$n_{\mathrm{H}}$}, \hbox{$\langle U\rangle$}\ and $Z$. In practice, Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} reveal that few observations fall in regions of diagrams populated purely by density-bounded models (in Figs~\ref{fig:HeII_ssp_obs}d and \ref{fig:HeII_ssp_obs}f, galaxies with low \hbox{H$\beta$}\ equivalent width and high \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ can be accounted for by ionization-bounded models with ages greater than 10\,Myr; see Section~\ref{stelpops}). 
It is also interesting to note that, for the low-mass star-forming galaxy BX418 with low $\mathrm{EW(\hbox{H$\beta$})}\approx44\,$\AA\ and high $\hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}>26$ (using the 1$\sigma$ limit on the \hbox{[O\,\textsc{ii}]}\ flux), \citet{erb10} constrain an age less than 100\,Myr from ultraviolet and \hbox{H$\alpha$}\ observations as well as dynamical arguments. This young age, despite the location of BX418 at low EW(\hbox{H$\beta$}) and high \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ in Figs~\ref{fig:baton3z_opt}c and \ref{fig:baton3u_opt}c, suggests that the galaxy might be leaking LyC photons, which would be compatible with the other properties of the galaxy in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt}.\footnote{An ionization-bounded model with $\log\hbox{$\langle U\rangle$}=-1$ can reach $\mathrm{EW(\hbox{H$\beta$})}\approx44\,$\AA\ after about 100\,Myr of constant star formation, although the corresponding \hbox{He\,\textsc{ii}\,$\lambda1640$}/\hbox{O\,\textsc{iii}]}\ is too small relative to the observed one in Fig.~\ref{fig:baton3u_uv}h (which pertains to the 25-per-cent nebular contribution to the total \hbox{He\,\textsc{ii}}\ emission of this object; see \citealt{erb10}).} In comparison, the confirmed, per-cent level LyC leakers Haro~11, Tol-0440-381 and Tol-1247-232 \citep[][see Table~\ref{tab:leakers}]{leitet11,chisholm17} also exhibit somewhat low $\mathrm{EW(\hbox{H$\beta$})}\sim40$--100\,\AA\ and high \hbox{[O\,\textsc{iii}]}/\hbox{H$\beta$}\ in these figures, but with more modest \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ around 1.5--4.0 (Figs~\ref{fig:baton3z_opt}b and \ref{fig:baton3u_opt}b), consistent with a picket-fence leakage scenario \citep[Section~\ref{sec:fesc}; see also][]{leitet11}, in addition to density-bounded \hbox{H{\sc ii}}\ regions.
Hence, assessing whether a galaxy is leaking LyC photons based on the emission-line diagrams in Figs~\ref{fig:baton3z_uv}--\ref{fig:baton3u_opt} is not straightforward at first glance. Several diagnostics must be examined simultaneously to discriminate the signatures of \hbox{$f_{\rm{esc}}$}\ from those of other physical parameters, which can be best achieved with a full spectral analysis tool incorporating density-bounded models. \section{Conclusions}\label{sec:conclu} We have explored the constraints on the production and escape of ionizing photons in young galaxies by investigating the ultraviolet and optical emission-line properties of a broad collection of models relative to the observations of a reference sample of metal-poor star-forming galaxies and LyC leakers at various redshifts. A main feature of our study is the adoption of models of \hbox{H{\sc ii}}\ regions, AGN narrow-line regions and radiative shocks all computed using the same physically-consistent description of element abundances and depletion on to dust grains down to metallicities of a few per cent of solar \citep[from][]{gutkin2016}. We computed ionizing spectra of single- and binary-star populations using the most recent versions of the \citet{Bruzual2003} and \citet{BPASSv21} stellar population synthesis codes and explored models of ionization-bounded as well as density-bounded (i.e., optically thin to LyC photons) \hbox{H{\sc ii}}\ regions. To compute emission-line spectra of AGN narrow-line regions, we appealed to an updated version of the \citet{Feltre2016} models, while for radiative shocks we adopted the recent computations of \citet{3MdBs19}. The observational sample assembled to constrain these models incorporates data from 13 subsamples of metal-poor star-forming galaxies (Table~\ref{tab:analogs}), 9 subsamples of confirmed and candidate LyC leakers (Table~\ref{tab:leakers}), as well as a few more quiescent star-forming galaxies and AGN at redshifts out to $z=7.1$.
The combined sample of closest known analogues to reionization-era galaxies in Tables~\ref{tab:analogs} and \ref{tab:leakers} allows the simultaneous exploration of diagnostic diagrams involving the \hbox{N\,\textsc{v}\,$\lambda1240$}, \hbox{C\,\textsc{iv}\,$\lambda\lambda1548,1551$}, \hbox{He\,\textsc{ii}\,$\lambda1640$}, \hbox{O\,\textsc{iii}]$\lambda\lambda1661,1666$}, \hbox{[C\,\textsc{iii}]$\lambda1907$+C\,\textsc{iii}]$\lambda1909$}, \hbox{[O\,{\sc ii}]$\lambda\lambda3726,3729$}, \hbox{He\,\textsc{i}\,$\lambda3889$}, \hbox{He\,\textsc{ii}$\lambda4686$}, \hbox{H$\beta$}, \hbox{[O\,\textsc{iii}]$\lambda5007$}, \hbox{H$\alpha$}, \hbox{[N\,\textsc{ii}]$\lambda6584$}, \hbox{He\,\textsc{i}\,$\lambda6678$}\ and \hbox{He\,\textsc{i}\,$\lambda7065$}\ emission lines, of which only a few are typically available at once for individual subsamples. This sample shows that, overall, metal-poor star-forming galaxies in wide ranges of redshift populate similar regions of the diagrams \citep[but see][]{senchyna19}, while LyC leakers tend to overlap with the most extreme star-forming galaxies (Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt}). In agreement with many previous studies, we find that current single- and binary-star population synthesis models do not produce hard-enough ionizing radiation to account for the strong \hbox{He\,\textsc{ii}}\ emission observed in the most metal-poor star-forming galaxies, even when tuning the IMF. Interestingly, the updated {\small C\&B}\ version of the \citet{Bruzual2003} single-star models used here, which differs from that described by \citet[][see also \citealt{vidal17}]{gutkin2016} in the inclusion of updated spectra for hot massive stars, produces altogether more \hbox{He\,\textsc{ii}}-ionizing radiation than the binary-star \textsc{bpass}\,v2.2.1 models, providing slightly better agreement with the observations (Section~\ref{stelpops} and Figs~\ref{fig:HeII_ssp_age}--\ref{fig:HeII_cst_age}). 
Since a majority of massive stars are expected to undergo binary interactions \citep[e.g.,][]{Sana12}, we consider the \hbox{He\,\textsc{ii}}\ luminosity predicted by the single-star {\small C\&B}\ models as a lower limit, which binary-star models (currently under development) will likely exceed. Also, for completeness, since the \hbox{[O\,\textsc{iv}]$\,25.9\,\mu$m}\ line is often discussed in the same context as the \hbox{He\,\textsc{ii}}\ line \citep[e.g.][]{SchaerStas99}, we checked that the \hbox{[O\,\textsc{iv}]$\,25.9\,\mu$m}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ ratio in our models behaves similarly to the \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio (this is even more true for the \hbox{[O\,\textsc{iv}]$\,25.9\,\mu$m}/\hbox{[O\,\textsc{iii}]$\,51.8\,\mu$m}\ ratio, which is less sensitive to electronic temperature). Introducing hard ionizing radiation from either an AGN or radiative-shock component allows models to overlap with observations of galaxies with high \hbox{He\,\textsc{ii}}\ emission in nearly all the ultraviolet and optical line-ratio diagrams we investigated. On an object-by-object basis, we find that the conclusions drawn about the potential presence of such ionizing sources using our models can differ from those derived previously using libraries of AGN and radiative-shock models computed with inconsistent descriptions of element abundances. Both AGN and radiative-shock components have very similar signatures in all diagrams, which prevents a simple discrimination between the two at first glance. Similarly, no diagram provides a simple discrimination between LyC-leaking and ionization-bounded galaxies, because of degeneracies in the signatures of \hbox{$f_{\rm{esc}}$}\ and other galaxy physical parameters.
This is the case also in the \hbox{[O\,\textsc{iii}]$\lambda5007$}/\hbox{[O\,\textsc{ii}]$\lambda3727$}\ versus \hbox{[O\,\textsc{i}]\,$\lambda6300$}/\hbox{[O\,\textsc{iii}]$\lambda5007$}\ diagram, in which all observations of (confirmed and candidate) LyC leakers exhibit higher \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ than benchmark ionization-bounded models. This is surprising, because density-bounded models produce lower \hbox{[O\,\textsc{i}]}/\hbox{[O\,\textsc{iii}]}\ than ionization-bounded ones at fixed \hbox{[O\,\textsc{iii}]}/\hbox{[O\,\textsc{ii}]}\ (Section~\ref{sec:fesc} and Fig.~\ref{fig:OI}; see also \citealt{stasinska15}). The only way to account for the observed properties of LyC leakers in this diagram is to invoke a systematic significant contribution by a source of hard ionizing radiation. Another potential source of hard ionizing radiation is X-ray binaries, the predicted growing importance of these systems toward low metallicity being supported by the observed increase in hard X-ray luminosity with decreasing oxygen abundance (at fixed star formation rate) in nearby metal-poor star-forming galaxies \citep{Fragos13,Brorby16}. Adopting an approximate conversion of X-ray luminosity into rate of He$^+$-ionizing photons allows one to reproduce roughly the observed rise in \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratio with decreasing oxygen abundance in such galaxies \citep{schaerer19}. However, like other stellar population synthesis models, this fails to account for the high observed \hbox{He\,\textsc{ii}$\lambda4686$}/\hbox{H$\beta$}\ ratios of galaxies with large EW(\hbox{H$\beta$}). A source of harder ionizing radiation must be invoked in these extreme objects, such as an AGN or radiative-shock component. So far, no predictive model has been proposed to link shocks to other galaxy properties and account for, notably, the increase in \hbox{He\,\textsc{ii}}-emission strength with decreasing metallicity. 
Potential avenues to be explored might be an IMF bias toward massive stars at low metallicities \citep[e.g.,][]{Marks12} or the higher specific star formation rates of metal-poor dwarf galaxies relative to their more metal-rich, massive counterparts \citep[e.g.,][]{Kauffmann06,Yates12}. Both effects would tend to enhance the incidence of radiative shocks from massive stars and supernova blast waves in metal-poor relative to metal-rich galaxies. We also note that gas compression associated with radiative shocks will generate high densities \citep[e.g.,][]{allen08}. In this context, the high gas densities ($\hbox{$n_{\mathrm{H}}$}\gtrsim10^4$\,cm$^{-3}$) measured from the \hbox{[C\,\textsc{iii}]$\lambda1907$+C\,\textsc{iii}]$\lambda1909$}\ doublet in some distant, low-metallicity, actively star-forming galaxies \citep[e.g.,][]{Maseda17,James18} could be suggestive of the presence of radiative shocks. The possibility that fast radiative shocks provide the hard radiation necessary to power strong \hbox{He\,\textsc{ii}}\ emission in metal-poor star-forming galaxies may be tested using high-quality observations of nearby galaxies. In a related paper (Chevallard et al., in preparation), we appeal to spatially-resolved observations of the extremely metal-poor compact dwarf galaxy SBS0335-052E to quantify the relative contributions from supernova-driven radiative shocks and massive stars to the total \hbox{He\,\textsc{ii}}-ionizing emission from this galaxy. While the ultraviolet and optical emission-line diagrams of Figs~\ref{fig:obs_uv} and \ref{fig:obs_opt} do not allow simple by-eye diagnostics of the nature of ionizing sources and the escape of LyC photons in metal-poor star-forming galaxies, differences exist in the spectral signatures of these parameters, which should enable more stringent constraints from simultaneous fits of several lines. 
This can be best achieved in a Bayesian framework using versatile spectral analysis tools incorporating a physically-consistent description of the sources and transfer of radiation in a galaxy, such as the \textsc{beagle}\ tool \citep{Chevallard2016}. Although this tool was already shown to reproduce remarkably well the fluxes of 20 ultraviolet and optical (not including \hbox{He\,\textsc{ii}}) emission lines in 10 extreme nearby star-forming regions \citep{Chevallard18}, the current version of the code does not incorporate models for density-bounded \hbox{H{\sc ii}}-regions, narrow-line regions of AGN and radiative shocks. The implementation of these components, in progress, should enable valuable constraints on the production and escape of ionizing radiation from the emission-line spectra of metal-poor star-forming galaxies, and soon of reionization-era galaxies observed by \textit{JWST}. \section*{Acknowledgements} We are grateful to D.~Erb, M.~Hirschmann, P.~Petitjean, P.~Senchyna, D.~Stark and A.~Wofford for helpful discussions. We also thank M.~Mignoli for providing us with line-flux measurements in the average spectrum of type-2 AGN from \citet{Mignoli19}. AP, SC, GB, AF and AVG acknowledge financial support from the European Research Council (ERC) via an Advanced Grant under grant agreement no. 321323--NEOGAL. AF acknowledges support from the ERC via an Advanced Grant under grant agreement no. 339659-MUSICOS. GB acknowledges financial support from DGAPA-UNAM through PAPIIT project IG100319. CM acknowledges financial support through grant CONACyT-CB2015-254132. \bibliographystyle{mnras}
\section[Introduction]{Introduction} \label{sec:intro} Multiple testing procedures are important tools for identifying statistically significant findings in massive and complex data while controlling a specific error rate. An important focus has been given to methods controlling the false discovery rate (FDR), i.e., the expected proportion of falsely rejected hypotheses among all rejected hypotheses, which has become the standard error rate for high dimensional data analysis. Since the original procedure of \cite{BenjaminiHochberg95}, much effort has been undertaken to design FDR controlling procedures that adapt to various underlying structures of the data, such as the quantity of signal, the signal strength and the dependencies, among others. \clearpage The \proglang{R} package \pkg{DiscreteFDR}, presented in this paper, deals with adaptation to discrete and non-identically distributed test statistics by implementing procedures developed by \cite{DDR2018} (in the sequel abbreviated as \citetalias{DDR2018}). This type of data arises in many relevant applications, in particular when data represent frequencies or counts. Examples can be found in clinical studies (see e.g., \cite{WestWolf1997}), genome-wide association studies (GWAS) (see e.g., \citetalias{Dickhaus2012}) and next generation sequencing data (NGS) (see e.g., \cite{DoergeChen2015}). The primary discrete test we have in mind in this paper is Fisher's exact test, see \cite{LehmannRomano}, but we also sketch an application of \pkg{DiscreteFDR} to multiple Poisson tests in the spirit of \cite{JimenezUnaAlvarez2018}. It is well known (see e.g., \cite{WestWolf1997}) that applying critical values derived for continuous approximations to discrete null distributions can generate a severe power loss, already at the stage of the single tests. A consequence is that 'blindly' using the BH procedure with discrete $p$-values will control the FDR in an overly conservative manner.
Therefore, more powerful procedures that avoid this conservatism are much sought after in applications; see for instance \citetalias{Karp2016}, \citetalias{vandenBroek2015} and \citetalias{Dickhaus2012}. In the literature, constructing multiple testing procedures that take into account the discreteness of the test statistics has a long history; for more details see \citetalias{DDR2018}. The heuristic motivation for the procedures implemented in \pkg{DiscreteFDR} is as follows. Let $p_{(1)} \le \ldots \le p_{(m)} $ denote the ordered $p$-values and $H_{(1)},\ldots,H_{(m)} $ the corresponding null hypotheses. The BH procedure [BH] works by rejecting $H_{(1)},\ldots,H_{(\hat{k})} $, where $\hat{k}$ is the largest integer $k$ such that \begin{align} p_{(k)} & \le \frac{k}{m}\cdot \alpha. \label{eq:BH:procedure} \end{align} Now suppose that the cumulative distribution functions $F_1, \ldots,F_m$ of the $p$-values under the null hypotheses are known and introduce the transformation \begin{align} \xi(t) & = \frac{1}{m} \sum_{i=1}^m F_i(t) , \:\:t\in[0,1]. \label{eq:Heyse0} \end{align} For continuous settings we often have $F_i(t)=t$, which implies $\xi(t)=t$, and so we can rephrase \eqref{eq:BH:procedure} as \begin{align} \xi(p_{(k)}) & \le \frac{k}{m}\cdot \alpha. \label{eq:Heyse} \end{align} \cite{Heyse2011} proposed to use the transformation $\xi$ in \eqref{eq:Heyse0}, where the $F_i$ need no longer be uniform and identical. The benefit of this approach is that - depending on the discreteness and heterogeneity of the involved $p$-value distributions - $\xi(t)$ may be much smaller than $t$. Clearly, the smaller the $\xi$-values, the more hypotheses can be rejected. Figure~\ref{fig:FbarPlotsHellerForPaper} displays such a function where the functions $F_1, \ldots, F_{2446}$ are derived from $m=2446$ independent Fisher's exact test statistics based on the pharmacovigilance data from \citet{Heller2012} (see Section~\ref{sec:Using} for more details).
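To illustrate \eqref{eq:Heyse0} in code: once the supports of the null distributions are available, each $F_i$ can be represented as a step function and $\xi$ evaluated as their average. The following base-\proglang{R} sketch (ours, not part of the package) uses the fact that, for the discrete tests considered here, $F_i(t)=\max\{a\in\mathcal{A}_i : a\le t\}$:
\begin{Schunk}
\begin{Sinput}
R> # pCDFlist: list of sorted support vectors A_1, ..., A_m
R> xi <- function(t, pCDFlist) {
+    # each F_i jumps on its support and satisfies F_i(a) = a there
+    mean(sapply(pCDFlist, function(A) stepfun(A, c(0, A))(t)))
+  }
\end{Sinput}
\end{Schunk}
Evaluating \code{xi} on a grid of $t$-values reproduces curves like the blue line in Figure~\ref{fig:FbarPlotsHellerForPaper}.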
In this example we have $\xi(t) \approx t/3$, thereby yielding a potentially strong rejection enhancement. Unfortunately, the Heyse procedure does not rigorously control the FDR in general; counter-examples are provided in \cite{Heller2012} and \citetalias{DDR2018}. To correct this, \citetalias{DDR2018} introduce new procedures relying on the following modifications of the $\xi$ function (more details are presented in Section \ref{sec:mathematics}): \begin{align*} \xi_{\textnormal{SU}} (t)= \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(\tau_m\right)};\:\:\: \xi_{\textnormal{SD}} (t)=\frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)},\:\: t\in[0,1], \end{align*} where $\tau_m$ is the generalized inverse of $\xi_{\textnormal{SD}}$ taken at point $\alpha$. Figure~\ref{fig:FbarPlotsHellerForPaper} demonstrates that the difference between these modifications and the original $\xi$ can be very small, in particular for small values of $t$. In addition, \citetalias{DDR2018} also introduce more powerful 'adaptive' versions, meaning that the derived critical values are designed in a way that 'implicitly estimates' the overall proportion of true null hypotheses. All these procedures provide rigorous $\textnormal{FDR}$ control under independence of the $p$-values and are implemented in the \proglang{R} package \pkg{DiscreteFDR}. \begin{figure}[htbp] \centering \vspace{-1.5cm} \includegraphics[width=1\textwidth]{FbarPlotsHellerForPaper3.pdf} \vspace{-1.5cm} \caption{Plots of variants of $\xi$ for the pharmacovigilance data. The solid black line corresponds to the uniform case, the discrete variants are represented by blue (for $\xi$), green (for $\xi_{\textnormal{SD}}$) and red (for $\xi_{\textnormal{SU}}$) solid lines.
Additionally, five arbitrarily selected $F_i$'s are displayed by using different line types.} \label{fig:FbarPlotsHellerForPaper} \end{figure} While there exist numerous \proglang{R} functions and packages that implement multiple testing procedures in the continuous setting (see e.g., \cite{RMultcomp2008} and \citetalias{RMutoss2017}), there are only relatively few tools available that deal specifically with discrete test statistics. The package \pkg{MHTdiscrete} (see \cite{MHTdiscrete}) is described by its authors as a 'comprehensive tool for almost all existing multiple testing methods for discrete data'. It implements several FWER and FDR procedures designed for discrete data. While the procedures for FWER control are extensively described in an accompanying preprint (see \cite{ZhuGuo2017}), there seems to be no detailed mathematical description of the implemented FDR procedures. The package \pkg{discreteMTP} (see \cite{discreteMTP}) also implements several methods aiming at $\textnormal{FDR}$ control (including the Heyse procedure) described in more detail in \cite{Heller2012}. The main contribution of the package \pkg{DiscreteFDR} is to provide practitioners with a simple-to-use set of tools (both adaptive and non-adaptive) for analysing discrete data with both proven $\textnormal{FDR}$ control and good performance. In this paper, our primary aim is to introduce \pkg{DiscreteFDR}. As an 'appetizer', we start by illustrating the main ideas through analysis of a toy data set. We hope to convince readers that it is worthwhile to use discrete FDR methods. We then review the mathematical methods and results from \citetalias{DDR2018}, followed by some more technical details of the implementation in Section \ref{sec:implementation}. Section \ref{sec:Using} contains an analysis of some real data and includes an example that illustrates how \pkg{DiscreteFDR} can be used for arbitrary discrete tests. The paper concludes with a summary and discussion.
We realize - and indeed hope - that the audience of this paper may be quite heterogeneous, which is why we would like to suggest some guidance for possible ways of reading it. For subject matter scientists and practitioners who may not be interested in the mathematical or software details, we especially recommend studying Sections \ref{sec:ToyExample} and \ref{sec:Using}. For readers who additionally want to understand more of the mathematical background we recommend Section \ref{sec:mathematics}; for readers interested in the implementation details of the \proglang{R} package we recommend Section \ref{sec:implementation}. \section[Examples]{A toy example} \label{sec:ToyExample} To give a first impression of how \pkg{DiscreteFDR} works, we consider an artificial toy example. A more realistic example involving pharmacovigilance data is given in Section \ref{sec:Using}. Suppose we would like to compare two treatments in nine different populations. For each population we do this by evaluating the responders and non-responders for each treatment. This leads to categorical data which can be represented, for each population $i=1, \ldots,9$, in the following 2 $\times$ 2 table:
\begin{table}[htb]
\begin{center}
\begin{tabular}{lccc}
 & responders & non responders & total \\
treatment $1$ & $x_{1i}$ & $y_{1i}$ & $n_{1i}$ \\
treatment $2$ & $x_{2i}$ & $y_{2i}$ & $n_{2i}$ \\
total & $x_{1i} + x_{2i}$ & $y_{1i} + y_{2i}$ & $n_i = n_{1i} + n_{2i}$
\end{tabular}
\caption{2 $\times$ 2 table for population $i$.}
\end{center}
\end{table}
Denoting the responder probabilities for population $i$ by $\pi_{1i}$ and $\pi_{2i}$, we can test e.g. \begin{align*} H_{0i}: \pi_{1i} = \pi_{2i} & \qquad \text{vs.} \qquad H_{1i}: \pi_{1i} \neq \pi_{2i} \end{align*} by using Fisher's (two-sided) exact test (see \cite{LehmannRomano}), which is implemented in the \proglang{R} function \code{fisher.test}.
Suppose the data in the nine populations are independent and we observe the following data frame \code{df}
\begin{Schunk}
\begin{Soutput}
  X1  Y1 X2  Y2
1  4 144  0 132
2  2 146  0 132
3  2 146  1 131
4 14 134  3 129
5  6 142  2 130
6  9 139  1 131
7  4 144  2 130
8  0 148  2 130
9  1 147  2 130
\end{Soutput}
\end{Schunk}
In this data frame each of the 9 rows represents the data of an observed 2 $\times$ 2 table: e.g., the third row of the data corresponds to $x_{13} = 2, y_{13} = 146, x_{23} = 1, y_{23} = 131$. Even though in this example the total number of tested hypotheses, $m=9$, is very small, for illustrative purposes we deal with the multiplicity problem here by controlling $\textnormal{FDR}$ at level $\alpha = 5\%$. The DBH step-down procedure (to be explained in more detail in Section \ref{sec:mathematics}) can be applied directly to the data frame object \code{df} and yields the following adjusted $p$-values:
\begin{Schunk}
\begin{Sinput}
R> library("DiscreteFDR")
R> DBH.sd.fast <- fast.Discrete(df, alternative = "two.sided",
+                               direction = "sd")
R> DBH.sd.fast$Adjusted
\end{Sinput}
\begin{Soutput}
[1] 0.25630985 1.00000000 1.00000000 0.03819796 0.51482782 0.03819796
[7] 1.00000000 0.47895996 1.00000000
\end{Soutput}
\end{Schunk}
Thus we can reject two hypotheses at $\textnormal{FDR}$-level $\alpha=5\%$. In order to compare this with the usual (continuous) BH procedure we have to determine the raw $p$-values first. This would be possible by applying the \code{fisher.test} function to all nine 2 $\times$ 2 tables.
Alternatively, we may use the more convenient function \code{fisher.pvalues.support} included in our package for accessing the raw $p$-values:
\begin{Schunk}
\begin{Sinput}
R> p <- fisher.pvalues.support(df, alternative = "two.sided")
R> raw.pvalues <- p$raw
R> p.adjust(raw.pvalues, method = "BH")
\end{Sinput}
\begin{Soutput}
[1] 0.37430072 0.74976959 1.00000000 0.09570921 0.51928737 0.09570921
[7] 0.77313633 0.49804147 0.77313633
\end{Soutput}
\end{Schunk}
Applying the continuous BH procedure from the \pkg{stats} package in the last line of code, we find that we cannot reject any hypotheses at $\textnormal{FDR}$-level $\alpha=5\%$. As this example illustrates, the discrete approach can potentially yield a large increase in power. The gain depends on the involved distribution functions and the raw $p$-values. To appreciate where this comes from, it is instructive to consider the distribution functions $F_1, \ldots, F_{9}$ of the $p$-values under the null in more detail. Take for instance the first 2 $\times$ 2 table:
\begin{table}[htb]
\begin{center}
\begin{tabular}{lccc}
 & responders & non responders & total \\
treatment $1$ & $4$ & $144$ & $148$ \\
treatment $2$ & $0$ & $132$ & $132$ \\
total & $4$ & $276$ & $280$
\end{tabular}
\caption{2 $\times$ 2 table for population $1$.}
\end{center}
\end{table}
Fisher's exact test works by determining the probability of observing this (or a more 'extreme') table, given that the margins are fixed. So each $F_i$ is determined by the margins of table $i$. Since $x_{11}+x_{21}=4$, the only potentially observable tables are given by $x_{11}=0, \ldots,4$. For each of these 5 values a $p$-value can be determined using the hypergeometric distribution. Therefore, the $p$-value of any 2 $\times$ 2 table with margins given by the above table can take (at most) 5 distinct values, say $x_1, \ldots, x_5$. Combining these 5 values into a set, we obtain the \emph{support} $\mathcal{A}_1= \{x_1, \ldots, x_5 \}$ of distribution $F_1$.
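This support can also be reconstructed by hand from the hypergeometric distribution. The following base-\proglang{R} sketch is ours (the function name \code{fisher.support} is made up for this illustration); it mimics the two-sided rule of \code{fisher.test}, which sums all point probabilities not exceeding the observed one, up to a small relative tolerance:
\begin{Schunk}
\begin{Sinput}
R> # row totals n1, n2 and first-column total s of a 2 x 2 table
R> fisher.support <- function(n1, n2, s) {
+    x  <- max(0, s - n2):min(s, n1)    # attainable values of x11
+    pr <- dhyper(x, n1, n2, s)         # hypergeometric point probabilities
+    pv <- sapply(pr, function(p) sum(pr[pr <= p * (1 + 1e-07)]))
+    sort(unique(pmin(pv, 1)))
+  }
R> fisher.support(148, 132, 4)  # should recover the 5 values of A_1
\end{Sinput}
\end{Schunk}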
Now we can continue in this vein for the remaining 2 $\times$ 2 tables to obtain the supports $\mathcal{A}_1, \ldots, \mathcal{A}_9$ of the distribution functions $F_1, \ldots, F_{9}$. The supports can be accessed via the \code{\$support} command, e.g.
\begin{Schunk}
\begin{Sinput}
R> p$support[c(1,5)]
\end{Sinput}
\begin{Soutput}
[[1]]
[1] 0.04820493 0.12476691 0.34598645 0.62477763 1.00000000

[[2]]
[1] 0.002173856 0.007733719 0.028324482 0.069964309 0.154043258
[6] 0.288492981 0.481808361 0.726262402 1.000000000
\end{Soutput}
\end{Schunk}
returns $\mathcal{A}_1$ and $\mathcal{A}_5$. Panel (a) in Figure \ref{fig:otto} shows a graph of the distribution functions $F_1, \ldots, F_{9}$. Each $F_i$ is a step function with $F_i(0)=0$, the jumps occurring only on the support $\mathcal{A}_i$ and $F_i(x)=x$ only for $x \in \mathcal{A}_i$. In particular, all distributions are stochastically larger than the uniform distribution (i.e., $F_i(x) \le x$), but in a heterogeneous manner. This heterogeneity can be exploited e.g., by transforming the raw $p$-values from Fisher's exact test using the function $\displaystyle \xi_{\textnormal{SD}} (x) = \frac{1}{9}\sum_{i=1}^9 \frac{F_i(x)}{1-F_i(x)}$ presented in the Introduction. Panel (b) shows that $\xi_{\textnormal{SD}}$ is a step function. Its jumps occur on the joint support $\mathcal{A}= \mathcal{A}_1 \cup \ldots \cup \mathcal{A}_9$. Panel (b) also shows that $\displaystyle \xi_{\textnormal{SD}} (x) \ll x$, at least for small values of $x$. It turns out that the critical values of our new DBH step-down procedure are essentially given by inverting $\xi_{\textnormal{SD}}$ at the critical values of the [BH] procedure $1 \cdot \alpha/9, 2 \cdot \alpha/9, \ldots, \alpha$, so that these values are considerably larger than the [BH] critical values (for more details see Section \ref{sec:mathematics}). This is illustrated in panel (c) along with the ordered $p$-values.
In particular, all asterisks are located above the green [BH] dots, so this procedure cannot reject any hypothesis. In contrast, the two smallest $p$-values are located below the red DBH step-down dots, so this procedure rejects two hypotheses, as we already saw earlier. \begin{figure}[htb] \centering \includegraphics{article-toy5} \caption{\label{fig:otto} Panel (a) depicts the distribution functions $F_1, \ldots, F_9$ in various colours, (b) is a graph of the transformation $\xi_{\textnormal{SD}}$. The uniform distribution function is shown in light grey in (a) and (b). Panel (c) shows the [BH] critical values (green dots), the DBH step-down critical values (red dots) and the sorted raw $p$-values (asterisks).} \end{figure} \section[New procedures]{Implemented FDR-controlling procedures} \label{sec:mathematics} The procedures used in the package are all based upon a comparison between the ordered $p$-values $p_{(k)}$, $1\leq k \leq m$, and a sequence of nondecreasing {\it critical values} $\tau_{k}$, $1\leq k \leq m$. The way these two sequences cross defines a rejection number $\hat k$ and thus a rejection set of null hypotheses $H_{(1)},\dots,H_{(\hat{k})}$. Classically, the {\it step-up procedure} with critical values $\tau_{k}$, $1\leq k \leq m$, corresponds to choosing the last right crossing point $$ \hat{k}_{SU}=\max\{k\::\: p_{(k)}\leq \tau_k \}. $$ Hence, it goes backwards, starting from the largest $p$-value $p_{(m)}$, stopping the first time it finds $k_0$ such that $p_{(k_0)}\leq \tau_{k_0}$ and returning $\hat{k}_{SU}=k_0$. By contrast, the {\it step-down procedure} with critical values $\tau_{k}$, $1\leq k \leq m$, uses the first left crossing point $$ \hat{k}_{SD}=\max\{k\::\: \mbox{ for all } k'\leq k,\:p_{(k')}\leq \tau_{k'} \}. $$ Hence, it goes forward, starting from the smallest $p$-value $p_{(1)}$, stopping the first time it finds $k_0$ such that $p_{(k_0)}> \tau_{k_0}$ and returning $\hat{k}_{SD}=k_0-1$.
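In code, both crossing rules amount to elementwise comparisons of the sorted $p$-values with the critical values (a base-\proglang{R} sketch of ours, not the package implementation):
\begin{Schunk}
\begin{Sinput}
R> # p.sorted: ordered p-values; tau: nondecreasing critical values
R> k.hat <- function(p.sorted, tau, direction = "su") {
+    ok <- p.sorted <= tau
+    if (direction == "su") return(max(which(ok), 0))  # last crossing
+    min(which(c(!ok, TRUE))) - 1                      # first failure
+  }
\end{Sinput}
\end{Schunk}
Here $0$ is returned when no $p$-value lies below its critical value, i.e., no hypothesis is rejected.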
Such multiple testing procedures are thus driven by a sequence of critical values and by a choice between the step-up or step-down version. In our package, the $5$ different possible choices are listed in Table~\ref{tab:ListProceduresTransformations}, with $3$ step-up procedures [DBH-SU], [A-DBH-SU], [DBR-$\lambda$] and $2$ step-down procedures [DBH-SD], [A-DBH-SD]. We easily check that [A-DBH-SU] (resp. [A-DBH-SD]) always rejects at least as many null hypotheses as [DBH-SU] (resp. [DBH-SD]). Note that the names of the procedures are slightly different in the original paper \citetalias{DDR2018}. This is done to emphasize that our package is primarily devoted to the discrete case. \subsection{Critical values} \label{subsec:CritConsts} The specific shape of the critical values comes from the FDR upper bounds derived in \citetalias{DDR2018}, which ensure that these procedures control the FDR at the nominal level $\alpha$ under independence of the $p$-values, see Theorem~1 and Corollary~1 in \citetalias{DDR2018}. In Table~\ref{tab:ListProceduresTransformations}, each $F_i$ is defined as the (least favorable) cumulative distribution function of the $p$-value $p_i$ under the null hypothesis. As in the example from Section \ref{sec:ToyExample}, $\mathcal{A} =\mathcal{A}_1 \cup \ldots \cup \mathcal{A}_m \subset [0,1]$ stands for the union of the supports of the marginal distributions of the $p$-values $p_i$, $1\leq i \leq m$, which can be determined under the full null hypothesis, i.e., when all null hypotheses are assumed to be true. While $\mathcal{A}=[0,1]$ in the case where the $F_i$'s are continuous functions, the primary setting we have in mind is a large number of simultaneous Fisher's exact tests, so that $\mathcal{A} =\mathcal{A}_1 \cup \ldots \cup \mathcal{A}_m$ is finite but very large. See also Section \ref{sec:ToyExample} for some concrete examples.
Let us underline that obtaining such $\tau_k$ numerically might be time-consuming: the overall support $\mathcal{A}$ can be large, and testing whether each $t\in \mathcal{A}$ satisfies the required condition given by the second column in Table \ref{tab:ListProceduresTransformations} involves a complex combination of the $F_i$, $1\leq i \leq m$. In the package, we have implemented a shortcut that reduces the range of $t\in \mathcal{A}$ that has to be explored: it is based on the fact that if $F_i(t)\leq t$ for all $t$ and $i$ (super-uniformity), we have the following lower bounds $\tau^{\tiny \mbox{min}}_k$ on the critical values $\tau_k$, see Lemmas~2, 3 and 4 in \citetalias{DDR2018}:

\begin{tabular}{ll}
\toprule
\relax [DBH-SU] & $\tau^{\tiny \mbox{min}}_k = \max\{t\in \mathcal{A}\::\: t\leq \alpha k/m (1+\alpha)^{-1} \}$ \\
\relax [DBH-SD] & $\tau^{\tiny \mbox{min}}_k = \max\{t\in \mathcal{A}\::\: t\leq \alpha k/m (1+\alpha k/m)^{-1} \}$\\
\relax [A-DBH-SU] & {$\begin{aligned}[t] \tau^{\tiny \mbox{min}}_m &=\max\{t\in \mathcal{A}\::\: t\leq \alpha (1+\alpha)^{-1} \}\\ \tau^{\tiny \mbox{min}}_k &= \max\left\{ t\in \mathcal{A}\::\: t\leq \tau_m \wedge \left((1-\tau_m) \frac{\alpha k}{m-k+1}\right) \right\},\: k<m; \end{aligned}$} \\
\relax [A-DBH-SD]& $\tau^{\tiny \mbox{min}}_k = \max\{t\in \mathcal{A}\::\: t\leq \alpha k/ (m -(1-\alpha)k+1) \}$\\
\relax [DBR-$\lambda$] & $\tau^{\tiny \mbox{min}}_k = \max\left\{t\in \mathcal{A}\::\: t\leq \lambda \wedge \left((1-\lambda) \frac{\alpha k}{m-k+1}\right) \right\}$ \\
\bottomrule \relax \\
\end{tabular}\\
Our current implementations of [DBH-SU] and [A-DBH-SU] first determine $\tau_m$ by searching for it in $\mathcal{A} \cap [\tau^{\tiny \mbox{min}}_m,1]$ and then determine all other $\tau_k$ simultaneously, using $\tau_k\ge \tau^{\tiny \mbox{min}}_1$ instead of $\tau_k\ge \tau^{\tiny \mbox{min}}_k$. We take this approach for simplicity and performance reasons.
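As an illustration, the [DBH-SD] lower bounds above can be read off the sorted overall support directly (a sketch of ours; \code{A} denotes the sorted vector of elements of $\mathcal{A}$, and $0$ is returned when no support point satisfies the bound):
\begin{Schunk}
\begin{Sinput}
R> tau.min.dbh.sd <- function(A, m, alpha) {
+    sapply(seq_len(m), function(k) {
+      b <- (alpha * k / m) / (1 + alpha * k / m)  # bound from the table above
+      max(A[A <= b], 0)
+    })
+  }
\end{Sinput}
\end{Schunk}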
The step-down procedures use only the constraint $\tau_k\ge \tau^{\tiny \mbox{min}}_k$. These lower bounds help to reduce the computational burden considerably. \subsection{Transformed $p$-value} \label{subsec:TransPval} If we are only interested in the set of rejected null hypotheses and not in the critical values, we can significantly speed up the computation by skipping the explicit computation of the critical values and by directly considering the transformed $p$-values: \begin{align}\label{equ:transfpvalues} p'_{k} = \xi_k (p_{(k)}),\:1\leq k \leq m, \end{align} where the functions $\xi_k(\cdot)$, defined in Table~\ref{tab:ListProceduresTransformations}, are such that $\tau_k$ is the inverse of $\xi_k$ at point $\alpha k/m$. Note that while the elements of $\{p_{(k)},1\leq k \leq m\}$ are ordered, this is not necessarily the case for the elements of $\{p'_{k},1\leq k \leq m\}$. The following proposition is obvious. \begin{prop} For each of the critical values listed in Table~\ref{tab:ListProceduresTransformations} we have, for all $1\leq k \leq m$, \begin{align} p_{(k)} \le \tau _k & \Longleftrightarrow p'_{k} \le \alpha k /m, \label{eq:equiv:transformation} \end{align} where the $p'_{k}$ are the transformed $p$-values defined by \eqref{equ:transfpvalues}. \end{prop} A consequence of \eqref{eq:equiv:transformation} is that the step-up and step-down cutoffs can be computed by using only the transformed $p$-values and the BH critical values as follows: \begin{align*} \hat{k}_{SU}&=\max\{k\::\: p'_{k}\leq \alpha k/m \}\\ \hat{k}_{SD}&=\max\{k\::\: \mbox{ for all } k'\leq k,\:p'_{k'}\leq \alpha k'/m \}. \end{align*} Thus, all of the above methods can be interpreted as variants of the classical (SU or SD) BH procedure, in which each $p$-value has been suitably transformed to account for discreteness.
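For [DBH-SD], for instance, the transformed-$p$-value route can be sketched as follows (our illustration, not the package implementation; \code{Fs} is a list of c.d.f.\ functions, and summands with $F_i(t)=1$ become infinite, so such values can never fall below the BH thresholds):
\begin{Schunk}
\begin{Sinput}
R> xi.sd <- function(t, Fs) mean(sapply(Fs, function(F) F(t) / (1 - F(t))))
R> # transformed p-values p'_k = xi.sd(p_(k)); afterwards, an ordinary
R> # step-down pass at the BH levels alpha * k / m decides the rejections
R> p.trans <- function(p.sorted, Fs) sapply(p.sorted, xi.sd, Fs = Fs)
\end{Sinput}
\end{Schunk}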
\begin{sidewaystable}[!p]
\begin{tabular}{lll}
\toprule
Procedure & Critical values & Transformation\\
\hline
\begin{tabular}{l} [DBH-SU]\\ \\ \\ \\ \\$k<m$ \end{tabular} & {$\begin{aligned}[t] \tau_m &=\max\left\{t \in \mathcal{A}\::\: \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)} \leq \alpha \right\}\nonumber\\ \tau_k &= \max\left\{t \in \mathcal{A}\::\: t\leq \tau_m,\:\frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(\tau_m\right)} \leq \alpha k/m \right\} \end{aligned}$} & {$\begin{aligned}[t] \xi_m(t) &= \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)}\\ \xi_k(t) &= \begin{cases} \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(\tau_m\right)} & ,t \le \tau_m\\ 1 & ,\text{else}\\ \end{cases} \\ \end{aligned}$}\\
\midrule
\relax [DBH-SD] & {$\begin{aligned}[t] \tau_k = \max\left\{ t\in \mathcal{A}\::\: \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)} \leq \alpha k/m \right\} \end{aligned}$} & {$\begin{aligned}[t] \xi_k (t) &= \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)}\\ \end{aligned}$}\\
\midrule
\relax \begin{tabular}{l} [A-DBH-SU]\\ \\ \\ \\ \\$k < m$ \end{tabular} & {$\begin{aligned}[t] \tau_m &=\max\left\{ t\in \mathcal{A}\::\: \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)} \leq \alpha \right\}\nonumber\\ \tau_k &= \max\left\{ t\in \mathcal{A}\::\: t \leq \tau_m,\: \sum_{\ell=1}^{m-k+1} \left( \frac{F\left(t\right)}{1-F\left(\tau_m\right)}\right)_{(\ell)} \leq \alpha k\right\} \end{aligned}$} & {$\begin{aligned}[t] \xi_m (t) &= \frac{1}{m}\sum_{i=1}^m \frac{F_i\left(t\right)}{1-F_i\left(t\right)}\\ \xi_k (t) &= \begin{cases} \frac{1}{m} \sum_{\ell=1}^{m-k+1} \left( \frac{F\left(t\right)}{1-F\left(\tau_m\right)}\right)_{(\ell)} & ,t \le \tau_m\\ 1 & ,\text{else}\\ \end{cases} \\ \end{aligned}$}\\
\midrule
\relax [A-DBH-SD] & {$\begin{aligned}[t] \tau_k = \max\left\{ t\in \mathcal{A}\::\: \sum_{\ell=1}^{m-k+1} \left(
\frac{F\left(t\right)}{1-F\left(t\right)}\right)_{(\ell)}\leq \alpha k\right\} \end{aligned}$} & {$\begin{aligned}[t] \xi_k (t) &= \frac{1}{m} \sum_{\ell=1}^{m-k+1} \left( \frac{F\left(t\right)}{1-F\left(t\right)}\right)_{(\ell)}\\ \end{aligned}$}\\
\midrule
\relax \begin{tabular}{l} [DBR-$\lambda$]\\ \\ \\ \\ \\$k<m$ \end{tabular} & {$\begin{aligned}[t] \tau_m &=\max\left\{ t\in \mathcal{A}\::\: \left( F\left(t\right)\right)_{(1)} \leq \left((1-\lambda) m\alpha\right) \wedge \lambda \right\}\: \\ \tau_k &= \max\left\{ t\in \mathcal{A}\::\: \left( F\left(t\right)\right)_{(1)} \leq \lambda,\: \sum_{\ell=1}^{m-k+1} \left( F\left(t\right)\right)_{(\ell)}\leq \alpha k(1-\lambda)\right\} \end{aligned}$} & {$\begin{aligned}[t] \xi_m (t) &= \begin{cases} \frac{\left( F\left(t\right)\right)_{(1)}}{m(1- \lambda)} &\qquad ,\left( F\left(t\right)\right)_{(1)} \le \lambda\\ 1 & \qquad ,\text{else}\\ \end{cases} \\ \xi_k (t) &= \begin{cases} \frac{\sum_{\ell=1}^{m-k+1} \left( F\left(t\right)\right)_{(\ell)}}{m(1- \lambda)} &\qquad ,\left( F\left(t\right)\right)_{(1)} \le \lambda\\ 1 & \qquad ,\text{else}\\ \end{cases} \\ \end{aligned}$}\\
\bottomrule\\
\end{tabular}
\caption{Implemented procedures (left column), critical values (center) and associated transformation functions (right column). The suffix 'SU' stands for step-up, the suffix 'SD' for step-down procedures.} \label{tab:ListProceduresTransformations}
\end{sidewaystable}
\subsection{Adjusted $p$-values} In applications, it is often convenient for the analyst to use \emph{adjusted} $p$-values $\widetilde{p}_k$ instead of the raw $p$-values and to reject those hypotheses for which $\widetilde{p}_k \le \alpha$. The advantage of this approach is that it is more convenient to apply and easier to communicate. Furthermore, it avoids explicit reliance on the often somewhat arbitrary choice of $\alpha$. With the transformations introduced above, it is straightforward to define (variants of) discrete FDR-adjusted $p$-values.
The generic definition given in \cite{DudoitLaan2007} then yields \begin{align} \widetilde{p}_{k} &= \min_{\ell=k, \ldots,m } \left( \frac{m}{\ell} \cdot p'_{\ell} \right) \wedge 1,\:\:\: 1\leq k \leq m\label{adjustedpvaluesSU} \intertext{for step-up procedures and} \widetilde{p}_{k} &= \max_{\ell=1, \ldots,k } \left(\frac{m}{\ell} \cdot p'_{\ell} \right) \wedge 1,\:\:\: 1\leq k \leq m,\label{adjustedpvaluesSD} \end{align} for step-down procedures. For our step-down procedures, the usual result holds true. \begin{prop}\label{prop:adjpvalues} For the step-down procedures [DBH-SD], [A-DBH-SD] and the step-up procedure [DBR-$\lambda$] listed in Table~\ref{tab:ListProceduresTransformations} we have for all $\alpha\in (0,1)$, for all $1\leq k \leq m$, \begin{align*} \text{ $H_{(k)}$ is rejected by the procedure taken at level $\alpha$} \:\:\:\Longleftrightarrow \:\:\: \widetilde{p}_{k} \le \alpha , \end{align*} where $\widetilde{p}_{k}$ are the adjusted $p$-values defined by \eqref{adjustedpvaluesSD}. \end{prop} In the above proposition, note that $H_{(k)}$ is given by the original ordering of the $p$-values $\{p_i,1\leq i \leq m\}$. For the procedures [DBH-SU], [A-DBH-SU], the situation is more complicated since the adjusted $p$-value $\widetilde{p}_{k}$ depends on $\alpha$ (through $\tau_m$). The statement in Proposition~\ref{prop:adjpvalues} actually still holds in that situation, but the usual interpretation that the adjusted $p$-value $\widetilde{p}_{k}$ is the smallest level $\alpha$ at which the procedure rejects $H_{(k)}$ does not. Hence, the analyst would need to exercise care in interpreting them. To avoid any confusion, the package does not report adjusted $p$-values for [DBH-SU] and [A-DBH-SU].
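In code, \eqref{adjustedpvaluesSU} and \eqref{adjustedpvaluesSD} are cumulative extrema of the scaled transformed $p$-values (a base-\proglang{R} sketch of ours; \code{p.trans} denotes the vector $p'_1,\dots,p'_m$):
\begin{Schunk}
\begin{Sinput}
R> adjust.su <- function(p.trans) {  # step-up: running minimum from the top
+    m <- length(p.trans)
+    pmin(rev(cummin(rev(m / seq_len(m) * p.trans))), 1)
+  }
R> adjust.sd <- function(p.trans) {  # step-down: running maximum from below
+    m <- length(p.trans)
+    pmin(cummax(m / seq_len(m) * p.trans), 1)
+  }
\end{Sinput}
\end{Schunk}
The step-down variant mirrors the familiar Holm-type adjustment, while the step-up variant mirrors the BH adjustment used by \code{p.adjust}.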
\section[Implementation]{Implementation in the package \pkg{DiscreteFDR}} \label{sec:implementation} \subsection{General structure} \label{subsec:General} The package consists of four groups of functions:
\begin{longtable}{ll}
Main functions & \code{discrete.BH}\xspace \\
 & \code{DBR}\xspace \\
Kernel functions & \code{kernel.DBH.crit} \\
 & \code{kernel.DBH.fast} \\
 & \code{kernel.ADBH.crit} \\
 & \code{kernel.ADBH.fast} \\
 & \code{kernel.DBR.crit} \\
 & \code{kernel.DBR.fast} \\
Helper functions & \code{match.pvals} \\
 & \code{build.stepfuns} \\
 & \code{short.eff} \\
 & \code{fisher.pvalues.support} \\
Wrapper functions & \code{DBH}\xspace \\
 & \code{ADBH}\xspace \\
 & \code{fast.Discrete} \\
\end{longtable}
The \code{discrete.BH}\xspace function implements [DBH-SU], [DBH-SD], [A-DBH-SU] and [A-DBH-SD]. Similarly, \code{DBR}\xspace implements [DBR-$\lambda$]. They use the first three of the helper functions for common operations (see details in Section \ref{subsec:CompDetails}) and the kernels for the actual computation. [DBH-SU], [DBH-SD], [A-DBH-SU] and [A-DBH-SD] can be accessed directly through the wrapper functions \code{DBH}\xspace and \code{ADBH}\xspace, respectively. The wrapper function \code{fast.Discrete} applies the discrete FDR-controlling procedures, which are implemented in \code{discrete.BH}\xspace, to a set of $2 \times 2$ contingency tables, given by a matrix or data frame. It uses the \code{fisher.pvalues.support} helper function to compute $p$-value c.d.f.s and raw $p$-values from these tables in the framework of Fisher's exact test. We also provide the \code{amnesia} data set, used in our examples in Section \ref{sec:Using} and in our paper \citetalias{DDR2018}. It is basically the amnesia data set of the package \pkg{discreteMTP}, but slightly reformatted (the difference lies in the third column).
The end user should only use the main functions \code{DBR}\xspace and \code{discrete.BH}\xspace, and the wrapper functions \code{fast.Discrete}, \code{DBH}\xspace and \code{ADBH}\xspace. The other functions are internal functions called by the main ones. We intentionally did not hide them, so that interested users are able to understand how the main procedures work. The functions \code{discrete.BH}\xspace, \code{DBH}\xspace, \code{ADBH}\xspace and \code{DBR}\xspace take the following input values:
\begin{longtable}{ll}
\code{raw.pvalues} & The vector (of the same length as \code{pCDFlist}) of raw observed $p$-values\\
 & which is calculated from the data.\\
\code{pCDFlist} & A list of vectors that represent the supports $\mathcal{A}_1, \ldots, \mathcal{A}_m$ of the discrete\\
 & distribution functions $F_1, \ldots, F_m$ under the respective null hypotheses,\\
 & as described in Section \ref{sec:mathematics}.\\
\code{alpha} & The global significance level $\alpha \in (0,1)$ at which the procedure provides\\
 & FDR control; the default is $0.05$.\\
\code{direction} & (\code{DBH}\xspace and \code{ADBH}\xspace only) A string, either \code{"su"} or \code{"sd"}, specifying whether\\
 & the step-up variant (the default, \code{"su"}) or the step-down variant \\
 & (\code{"sd"}) should be used.\\
\code{adaptive} & Specifying whether the adaptive version is to be used (\code{TRUE}) or not\\
 & (\code{FALSE}).\\
\code{lambda} & (\code{DBR}\xspace only) The $\lambda$ parameter of the [DBR-$\lambda$] procedure as in Table \ref{tab:ListProceduresTransformations};\\
 & the default is $0.05$.\\
\code{ret.crit.consts} & Specifying whether the critical values $\tau_k$ are to be computed and\\
 & included in the output list, at the expense of computational speed; \\
 & the default is \code{FALSE}.
\end{longtable} They provide the following outputs: \begin{longtable}{ll} \code{Rejected} & A vector containing the rejected raw $p$-values\\ \code{Indices} & A vector containing the indices of rejected hypotheses\\ \code{k.hat} & Number of rejected hypotheses. This corresponds to $\hat{k}_{SU}$ and $\hat{k}_{SD}$,\\ & as described in Section \ref{subsec:TransPval}.\\ \code{Alpha} & Maximum significance level for the transformed $p$-values for which\\ & a rejection occurred, that is \code{Alpha} $= \alpha \cdot$ \code{k.hat}$/ m$. This corre-\\ & sponds to $\tau_k$ $(k = 1, \ldots, m)$ as in Section \ref{sec:mathematics}.\\ \code{Critical.constants} & A vector containing the critical values (if requested)\\ \code{Adjusted} & A vector containing adjusted $p$-values (if available)\\ \code{Lambda} & (\code{DBR}\xspace only) The parameter \code{lambda} that was used when calling \code{DBR}\xspace \end{longtable} More details as to the implementation are provided in the following part. \subsection{Details for some specific functions} \label{subsec:CompDetails} \subsubsection{Helper functions} The \code{match.pvals} function performs nearest-neighbor matching for all elements of \code{raw.pvalues}, i.e., it checks for each value whether it occurs in the support of its respective $p$-value c.d.f. If this is not the case, it is replaced by the value that is closest to it, that is, its nearest neighbor in its c.d.f. This ensures that all values of \code{raw.pvalues} actually originate from their respective c.d.f.s, e.g., to correct rounding errors. It has been inspired by a help page of the package \pkg{discreteMTP}. \code{build.stepfuns} converts the vectors in \code{pCDFlist} to step function objects. This makes them easier to evaluate in the kernel functions. It is assumed and required that $F_i(t) \leq t$ holds for all c.d.f.s. Compliance with this premise is not checked, so the user is responsible for providing correct vectors. 
If this condition is not met, the results may be incorrect. The \code{short.eff} function extracts all values from a sorted vector that are greater than or equal to the effective critical value associated to a threshold. It simply replaces multiple recurring lines of code with one single function call. \code{fisher.pvalues.support} computes discrete raw $p$-values and their support for the test of no association between two categorical variables in $2 \times 2$ contingency tables using Fisher's exact test. The $p$-values are computed directly by \code{phyper}, instead of \code{fisher.test}, because the latter is much slower. It is used by the \code{fast.discrete} function so that such contingency tables can be passed directly to the \code{discrete.BH}\xspace function. \subsubsection{Main Functions} Basically, both main functions have the same workflow: \begin{enumerate} \item Use \code{match.pvals} for matching of raw $p$-values with the c.d.f.s and sort the results in ascending order. \item Convert the c.d.f. vectors in \code{pCDFlist} to step functions by means of \code{build.stepfuns}. \item Determine the overall support $\mathcal{A} = \bigcup \mathcal{A}_i$ from the individual c.d.f.s, remove duplicate values and sort them in ascending order. \item Use the knowledge of lower bounds, as presented in Section \ref{subsec:CritConsts}, to remove unnecessary elements from the support. \item Compute transformed $p$-values and/or critical values (if requested) with kernel functions (see Section \ref{subsubsec:KernelFunctions}). \item Create the output list with elements as described in Section \ref{subsec:General}. \end{enumerate} \subsubsection{Kernel functions} \label{subsubsec:KernelFunctions} As stated in Section \ref{sec:mathematics}, there are two ways to determine which of the hypotheses corresponding to the elements of a raw $p$-value vector can be rejected. 
\begin{enumerate} \item With critical values (see Section \ref{subsec:CritConsts}): this approach works by first determining the critical values. Especially when the size of the support and the number of hypotheses are large, it is computationally intensive, because all elements of the support have to be evaluated by every single c.d.f. \item With transformed $p$-values (see Section \ref{subsec:TransPval}): here, only the raw $p$-values are evaluated. Thus, it is much more efficient. \end{enumerate} As a result, all three implemented procedures have two kernels, that is, a fast one for simplified computation and a slower implementation that calculates critical values. These values are then used to determine which hypotheses are to be rejected and which are not. The kernel functions need the following parameters: \begin{longtable}{ll} \code{stepf} & A list of step function objects.\\ \code{pv.numer} & A vector of values from the support for the argument of the $F_i$ in the\\ & numerators of the fractions in the formulas presented in Table \ref{tab:ListProceduresTransformations}.\\ \code{pv.denom} & A vector of $p$-values or a single one for the denominators as in Table \ref{tab:ListProceduresTransformations}. \end{longtable} The critical values kernels additionally need: \begin{longtable}{ll} \code{alpha} & A numeric value specifying the global significance level.\\ \code{sorted.pv} & A numeric vector of observed $p$-values in ascending order. \end{longtable} For the [DBH] and [A-DBH] procedures, the direction (\code{"su"} for step-up or \code{"sd"} for step-down) does not need to be explicitly passed to the kernels, because these two cases can be distinguished by \code{pv.numer} (the input values for the numerators of the fractions presented in Table \ref{tab:ListProceduresTransformations}) and \code{pv.denom} (the value(s) for the respective denominators). If all elements of both vectors are identical, we have the step-down case. Otherwise, it is step-up. 
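To make these conventions concrete, the following minimal sketch mimics such a kernel (in Python, for illustration only; the package itself is written in \proglang{R}, and the function names here are hypothetical). Each $F_i$ is represented by its sorted support, under the simplifying assumption that $F_i(s) = s$ on the support, as holds for $p$-value distributions; the step-up/step-down case is inferred from the numerator and denominator inputs as described above.

```python
import bisect

def eval_stepf(support, t):
    """Evaluate F(t) for a p-value c.d.f. represented by its sorted support.
    Assumes F(s) = s for every s in the support (true for p-value c.d.f.s)."""
    i = bisect.bisect_right(support, t)
    return support[i - 1] if i > 0 else 0.0

def kernel_sketch(supports, pv_numer, pv_denom):
    """Illustrative kernel: sums F_i(numer_j) / (1 - F_i(denom_j)) over i.
    As in the package, identical numerator and denominator vectors signal
    the step-down case; otherwise it is step-up."""
    stepdown = pv_numer == pv_denom
    out = [0.0] * len(pv_numer)
    for supp in supports:  # one pass per hypothesis
        for j, (tn, td) in enumerate(zip(pv_numer, pv_denom)):
            out[j] += eval_stepf(supp, tn) / (1.0 - eval_stepf(supp, td))
    return stepdown, out
```

This sketch deliberately ignores the memory and speed optimizations of the actual kernels; it only illustrates the role of \code{stepf}, \code{pv.numer} and \code{pv.denom}.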
Basically, the kernels implement the formulas of Table \ref{tab:ListProceduresTransformations}. Every step function must be evaluated at every element of either the support (for the critical values approach) or only the sorted raw $p$-values (for the transformed $p$-values approach). If this were done by means of \code{sapply} and/or \code{apply}, the results would be stored in a matrix, for which enough memory is reserved automatically. For a large number of hypotheses and even larger support sets, the size of this matrix would be vast and might easily take up several gigabytes of RAM, if not dozens. This may be too much for many workstations. As a solution to this problem, we implemented memory-conserving algorithms. For [DBH-SD], this means that, for each $p$-value c.d.f. $F_i$, we compute the fractions $\frac{F_i(t)}{1 - F_i(t)}$ for all $t \in \{t \in \mathcal{A} : t \geq \tau^{\tiny \mbox{min}}_1\}$ (see \citetalias{DDR2018}, Lemma 3, with $\alpha$ denoting the significance level) inside a \code{for} loop, which adds up the resulting \proglang{R} vectors iteratively. If the critical values are not demanded by the user, we evaluate at the observed $p$-values instead of the support. In both cases, the number of passes of the \code{for} loop is identical to the number of hypotheses. For [DBH-SU], we first compute the (last) critical constant $\tau_m$ as above, but we can restrict the computations to the set $\{t \in \mathcal{A} : t \geq \frac{\alpha}{1 + \alpha}\}$ (see \citetalias{DDR2018}, Lemma 2). After that, we compute the fractions $\frac{F_i(t)}{1 - F_i(\tau_m)}$ as before, but we only have to consider values of the set $\{t \in \mathcal{A} : t \leq \tau_m\}$. For the [A-DBH] procedures, the step functions are evaluated iteratively at smaller chunks of the input vectors \code{pv.numer} and \code{pv.denom}. The results of the fractions are then stored in a matrix. We found a size of 256 MiB to deliver the best performance. 
Depending on the number of hypotheses, $m$, the size and number of the chunks are determined dynamically. All $p$-value transformations and critical constant computations are done for this submatrix. The intermediate results are then stored in vectors. This is repeated for the remaining chunks by using a \code{for} loop. The intermediate results are updated with each pass of the loop until all input values have been processed. The [DBR-$\lambda$] algorithm works almost identically, but no fractions are needed and there is no step-up/step-down direction. \subsection{Run times} To illustrate the run times of \code{DBH}\xspace, \code{ADBH}\xspace and \code{DBR}\xspace, we used the \code{arabidopsisE} data set, which was once included in the \pkg{fdrDiscreteNull} package, but was removed in recent versions. From this data, a total of 17400 hypotheses along with their respective $p$-value distributions and a vector of raw $p$-values were derived (for more details, see \cite{fdrDiscreteNull}). The accumulated size of the support $\mathcal{A}$ is 1,074,398. From this data, we used subsets of the first $m = 250, 500, 1000, 3000, 5000, 7000, 10000, 17400$ hypotheses, each resulting in different support sizes, as shown in the tables in the appendix. For each subset, the median run time of 25 runs was recorded. We chose the median of repeated runs in order to account for possible background load on the workstation and to avoid overly pronounced effects of exceptionally good and, especially, exceptionally bad runs, thus obtaining a robust indication of the required time. 
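The median-of-repeated-runs scheme just described can be sketched generically as follows (illustrative Python, not part of the package; the function name is hypothetical):

```python
import statistics
import time

def median_runtime(func, repeats=25):
    """Record the median wall-clock time of `repeats` runs of `func`.
    The median is robust against background load and outlier runs,
    which is why it is preferred over the mean or a single run."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    return statistics.median(times)
```

In the benchmarks below, \code{repeats} corresponds to the 25 recorded runs per subset.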
All three methods were used with the following settings: \begin{itemize} \item \code{alpha = 0.05} \item \code{direction = "sd"} and \code{direction = "su"} (\code{DBH}\xspace and \code{ADBH}\xspace only) \item \code{lambda = 0.05} (\code{DBR}\xspace only) \item \code{ret.crit.consts = TRUE} and \code{ret.crit.consts = FALSE} \end{itemize} All computations were performed with \proglang{R} version 3.5.1 on the following system: \begin{itemize} \item CPU: AMD Ryzen 7 1800X, 3.60 GHz \item RAM: 32 GiB DDR4, 2400 MHz \item OS: Windows 10 Education v1803 \end{itemize} The complete results tables can be found in the appendix. \subsubsection{Results of critical values approach} The following plots illustrate our findings by depicting the development of the run times as a function of the product of $m$ and the overall support size $|\mathcal{A}|$. In addition to a plot with standard axis scaling, we also provide one with logarithmic axes. \begin{figure}[H] \centering \includegraphics{article-runtimes-crit} \caption{Run time comparison of \pkg{DiscreteFDR} procedures with computation of critical values.} \end{figure} From both plots, we can clearly observe that [DBH-SU] is the fastest algorithm, followed by [DBH-SD], whose computation takes about 1.5 times as long. The calculations for [A-DBH-SU] take about 4 times, and those for [A-DBH-SD] almost 7 times, as long as [DBH-SU]. [DBR-0.05] needs almost exactly the same time as [A-DBH-SU], so that their respective lines in the plots overlap. In addition, the second plot shows that these proportions of run times and, as a result, the order remain stable after $m \cdot |\mathcal{A}| \approx 5,000 \cdot 300,000 = 1,500,000,000$. Furthermore, it is visible that the run times of all the procedures exhibit roughly linear growth. 
\subsubsection{Results of transformed $p$-values approach} \begin{figure}[H] \centering \includegraphics{article-runtimes-fast} \caption{Run time comparison of \pkg{DiscreteFDR} procedures without computation of critical values.} \end{figure} Here, it is immediately apparent that the transformed $p$-values approach is an order of magnitude faster than the critical values approach, but recognizing a ranking is a bit more difficult. However, up to and including $m \cdot |\mathcal{A}| \approx 1,000 \cdot 64,000 = 64,000,000$, all procedures take less than a second to compute their results, which is almost unnoticeable. After that point, [DBH-SD] and [DBR-0.05] are the fastest algorithms, with [DBR-0.05] outperforming every other procedure for very large sizes. They are followed by [A-DBH-SD], [A-DBH-SU] and [DBH-SU]. The two latter ones exhibit mostly identical performance, but the larger $m \cdot |\mathcal{A}|$, the larger the performance advantage of [A-DBH-SU] over [DBH-SU], although they remain the slowest methods. Their considerably higher computation time is explained by the fact that these two procedures have to determine the critical value $\tau_m$, which is responsible for roughly 80\% of the computation time, as an in-depth analysis has shown. The increasing advantage of [A-DBH-SU] over [DBH-SU] is explained by the fact that, as described before, [DBH-SU] simply adds up the fractions of evaluated c.d.f.s with a \code{for} loop, while [A-DBH-SU] uses a chunking approach, which also uses \code{for} loops, but requires much fewer passes than [DBH-SU]. The advantage of this approach is mitigated by the required sorting. But still, as a result, [DBH-SU] becomes less efficient with larger sizes of $m \cdot |\mathcal{A}|$. 
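The chunking pattern referred to in the comparison above can be reduced to the following generic skeleton (illustrative Python with hypothetical names; in the package the chunk size is chosen so that the intermediate submatrix occupies about 256 MiB):

```python
def chunked_apply(values, chunk_len, process_chunk):
    """Sketch of the chunking scheme: the input vector is processed in
    fixed-size chunks so that the intermediate matrix of evaluated
    c.d.f.s never exceeds a fixed memory budget."""
    results = []
    for start in range(0, len(values), chunk_len):
        chunk = values[start:start + chunk_len]  # bounded-size submatrix
        results.extend(process_chunk(chunk))     # accumulate partial results
    return results
```

The trade-off discussed above is visible in this skeleton: fewer, larger chunks mean fewer loop passes, at the cost of a larger intermediate matrix.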
\section[Examples]{Further analyses} \label{sec:Using} \subsection{Analysis of pharmacovigilance data} To illustrate how the procedures in \pkg{DiscreteFDR} can be used for real data, we revisit the analysis of the pharmacovigilance data from \citet{Heller2012} performed in \citetalias{DDR2018}. This data set is obtained from a database for reporting, investigating and monitoring adverse drug reactions, maintained by the Medicines and Healthcare products Regulatory Agency in the United Kingdom. It contains the number of reported cases of amnesia as well as the total number of adverse events reported for each of the $m = 2446$ drugs in the database. For more details we refer to \citet{Heller2012} and to the accompanying \proglang{R} package \pkg{discreteMTP} (\cite{discreteMTP}), which also contains the data. \cite{Heller2012} investigate the association between reports of amnesia and suspected drugs by performing a one-sided Fisher's exact test for each drug, testing association between the drug and amnesia while adjusting for multiplicity by using several (discrete) FDR procedures. In what follows we present code that reproduces parts of Figure 2 and Table 1 in \citetalias{DDR2018}. We proceed as in the example in Section \ref{sec:ToyExample}. Since we need to access the critical values, we first determine the $p$-values and their support for the data set \code{amnesia}, contained for convenience in the package \pkg{DiscreteFDR}. For this, we use the option \code{"HG2011"} in the function \code{fisher.pvalues.support}. \begin{Schunk} \begin{Sinput} R> library("DiscreteFDR") R> data(amnesia) R> amnesia.formatted <- fisher.pvalues.support(amnesia[, 2:3], + input = "HG2011") R> raw.pvalues <- amnesia.formatted$raw R> pCDFlist <- amnesia.formatted$support \end{Sinput} \end{Schunk} Then we perform the FDR analysis with the functions \code{DBH} and \code{ADBH} (SU and SD) and \code{DBR} at level $\alpha=0.05$, including critical values. 
\begin{Schunk} \begin{Sinput} R> DBH.su <- DBH(raw.pvalues, pCDFlist, ret.crit.consts = TRUE) R> DBH.sd <- DBH(raw.pvalues, pCDFlist, direction = "sd", + ret.crit.consts = TRUE) R> ADBH.su <- ADBH(raw.pvalues, pCDFlist, ret.crit.consts = TRUE) R> ADBH.sd <- ADBH(raw.pvalues, pCDFlist, direction = "sd", + ret.crit.consts = TRUE) R> DBR.su <- DBR(raw.pvalues, pCDFlist, ret.crit.consts = TRUE) \end{Sinput} \end{Schunk} By accessing the critical values we can now generate a plot similar to Figure 2 from \citetalias{DDR2018}. Note that both [DBH-SU] and [DBH-SD] are visually indistinguishable from their adaptive counterparts. \begin{figure}[H] \centering \includegraphics{article-PlotErwin} \caption{\label{fig:anna} Critical values for [BH] (green dots), [DBH-SU] (orange dots), [DBH-SD] (red dots), [A-DBH-SU] (blue dots), [A-DBH-SD] (purple dots), [DBR] (yellow dots). The sorted raw $p$-values are represented by asterisks.} \end{figure} The rejected hypotheses can be accessed via the command \code{$Indices}. The following code yields some of the values from Table 1 in \citetalias{DDR2018}: \begin{Schunk} \begin{Sinput} R> rej.BH <- length(which(p.adjust(raw.pvalues, method = "BH") <= 0.05)) R> rej.DBH.su <- length(DBH.su$Indices) R> rej.DBH.sd <- length(DBH.sd$Indices) R> rej.ADBH.su <- length(ADBH.su$Indices) R> rej.ADBH.sd <- length(ADBH.sd$Indices) R> rej.DBR.su <- length(DBR.su$Indices) R> c(rej.BH, rej.DBH.su, rej.DBH.sd, rej.ADBH.su, rej.ADBH.sd, rej.DBR.su) \end{Sinput} \begin{Soutput} [1] 24 27 27 27 27 27 \end{Soutput} \end{Schunk} The (continuous) BH rejects only 24 hypotheses whereas all the discrete procedures implemented in \pkg{DiscreteFDR} are able to identify three additional drug candidates potentially associated with amnesia. \subsection{Other types of discrete tests} \label{ssec:PoissonTests} In this section we sketch how \pkg{DiscreteFDR} can be used to analyse arbitrary multiple discrete tests. 
\cite{JimenezUnaAlvarez2018} used \pkg{DiscreteFDR} to detect disorder in NGS experiments based on one-sample tests of the Poisson mean. Rather than reproducing their analysis in detail, we illustrate the general approach by using a toy example similar to the one in Section \ref{sec:ToyExample} and show how the test of the Poisson mean can be accommodated by \pkg{DiscreteFDR}. To fix ideas, suppose we observe $m=9$ independent Poisson-distributed counts $N_1, \ldots, N_9$ (\cite{JimenezUnaAlvarez2018} used this to model the read counts of different DNA bases). We assume that $N_i \sim \mathrm{Pois}(\lambda_i)$ and the goal is to identify cases where $\lambda_i$ is larger than some pre-specified value $\lambda^0_i$, i.e., we have the (one-sided) multiple testing problem \begin{align*} H_{0i}: \lambda_i = \lambda^0_i & \qquad \text{vs.} \qquad H_{1i}: \lambda_i > \lambda^0_i. \end{align*} As in Section \ref{sec:ToyExample}, the goal is to adjust for multiple testing by using the [DBH-SD] procedure at FDR-level $\alpha=5\%$. In our example the observations $n_1,\ldots, n_9$ and parameters $\lambda^0_1, \ldots, \lambda^0_9$ are given as follows: \begin{Schunk} \begin{Soutput} observations lambda.0 [1,] 3 0.6 [2,] 3 1.2 [3,] 1 0.7 [4,] 2 1.3 [5,] 3 1.0 [6,] 3 0.2 [7,] 1 0.8 [8,] 2 1.3 [9,] 4 0.9 \end{Soutput} \end{Schunk} Denote by $G_i$ the distribution of $N_i$ under $H_{0i}$, i.e., $G_i(x)=P(N_i \le x)$. For observations $n_1,\ldots, n_9$ of $N_1, \ldots,N_9$ the $p$-values for the above one-sided test are given by \begin{align*} p_i &= P(N_i \ge n_i) =P(N_i > n_i-1)= \overline{G_i}(n_i-1), \end{align*} where $\overline{G_i}(x)=P(N_i >x)=1-G_i(x)$ denotes the survival function of the Poisson distribution with parameter $\lambda^0_i$. 
Thus the raw $p$-values are determined by the following \proglang{R} code: \begin{Schunk} \begin{Sinput} R> raw.pvalues <- sapply(1:m,function(i){ppois(observations[i]-1,lambda.vector[i], + lower.tail = FALSE)}) R> raw.pvalues \end{Sinput} \begin{Soutput} [1] 0.023115288 0.120512901 0.503414696 0.373176876 0.080301397 [6] 0.001148481 0.550671036 0.373176876 0.013458721 \end{Soutput} \end{Schunk} Following the definition of the \code{qpois} function in \proglang{R}, we define the inverse function of $\overline{G_i}$ by \begin{align*} \overline{G_i}^{-1}(p) &= \min \{ n \in {\mathbb N} : \overline{G_i}(n) \le p \} \\ \intertext{and obtain for the distribution function of the $i$-th $p$-value under the null} F_i(x) &= \overline{G_i} ( \overline{G_i}^{-1}(x) ). \end{align*} Each function $F_i$ is a step function with $F_i(0)=0$, $F_i(1)=1$ and there exists an infinite sequence of jumps at locations $1=x_1 > x_2 > \ldots > x_n > x_{n+1}> \ldots > 0$ such that $F_i(x_j)=x_j$ for $j \in {\mathbb N}$. Initially it seems that we run into a problem if we want to determine the critical values of [DBH-SD] since the supports of $F_1, \ldots, F_9$ are no longer finite (but still discrete). We can deal with this problem by using the observation from Section \ref{subsec:CritConsts} that it is sufficient to consider new, restricted supports $\mathcal{A}_i \cap [s^{\tiny \mbox{min}},1]$ where the lower threshold satisfies \begin{align} s^{\tiny \mbox{min}} &\le \tau^{\tiny \mbox{min}}_1 =\max \left\{ t\in \mathcal{A}\::\: t\leq y^{\tiny \mbox{min}} \right\} \qquad \text{where} \qquad y^{\tiny \mbox{min}} = \frac{\alpha}{m} \cdot \left( 1+\frac{\alpha}{m} \right)^{-1}. \label{eq:tau.min.Poisson} \end{align} To determine such an $s^{\tiny \mbox{min}}$ we proceed as follows. 
Define $n^{\tiny \mbox{max}}_i = \overline{G_i}^{-1}(y^{\tiny \mbox{min}})+1,$ $ t^{\tiny \mbox{min}}_i = \overline{G_i} (n^{\tiny \mbox{max}}_i-1)$ and set $s^{\tiny \mbox{min}} = \min \left( t^{\tiny \mbox{min}}_1, \ldots, t^{\tiny \mbox{min}}_9 \right)$. It is easily checked that this choice of $s^{\tiny \mbox{min}}$ satisfies \eqref{eq:tau.min.Poisson}. We can determine $s^{\tiny \mbox{min}}$ by the following code \begin{Schunk} \begin{Sinput} R> y.min <- alpha/m*(1+alpha/m)^(-1) R> n.max <- sapply(1:m,function(w){qpois(y.min,lambda.vector[w], + lower.tail = FALSE)})+1 R> t.min <- sapply(1:m,function(w){ppois(n.max[w]-1,lambda.vector[w], + lower.tail = FALSE)}) R> s.min <- min(t.min) R> s.min \end{Sinput} \begin{Soutput} [1] 0.0007855354 \end{Soutput} \end{Schunk} For determining the restricted supports it is actually more convenient to work with $n^{\tiny \mbox{max}}_i$ than $s^{\tiny \mbox{min}}$. We can subsequently use these supports as the \code{pCDFlist} argument in the usual way when calling the \code{DBH} function: \begin{Schunk} \begin{Sinput} R> supports <- lapply(1:m,function(w){sort(ppois(0:n.max[w]-1,lambda.vector[w], + lower.tail = FALSE))}) R> DBH.sd <- DBH(raw.pvalues,supports,direction = "sd", ret.crit.consts = TRUE) \end{Sinput} \end{Schunk} Figure \ref{fig:Poisson} shows a summary similar to Figure \ref{fig:otto}. 
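As a cross-check, the value of $s^{\tiny \mbox{min}}$ can be reproduced independently of \proglang{R}, e.g., in plain Python via the Poisson p.m.f. recurrence (an illustrative sketch; \code{alpha}, $m$ and the $\lambda^0_i$ are the values of the toy example above):

```python
from math import exp

def pois_sf(n, lam):
    """Survival function P(N > n) of a Poisson(lam) variable,
    via the p.m.f. recurrence p_k = p_{k-1} * lam / k."""
    p = cdf = exp(-lam)
    for k in range(1, n + 1):
        p *= lam / k
        cdf += p
    return 1.0 - cdf

alpha, m = 0.05, 9
lambda_0 = [0.6, 1.2, 0.7, 1.3, 1.0, 0.2, 0.8, 1.3, 0.9]
y_min = (alpha / m) / (1 + alpha / m)

t_min = []
for lam in lambda_0:
    n = 0
    while pois_sf(n, lam) > y_min:  # smallest n with P(N > n) <= y_min
        n += 1
    n_max = n + 1
    t_min.append(pois_sf(n_max - 1, lam))
s_min = min(t_min)  # matches the R output 0.0007855354
```

The minimum is attained for $\lambda^0 = 0.7$, in agreement with the \proglang{R} output above.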
Applying the continuous BH procedure \begin{Schunk} \begin{Sinput} R> p.adjust(raw.pvalues, method = "BH") \end{Sinput} \begin{Soutput} [1] 0.06934586 0.21692322 0.55067104 0.47979884 0.18067814 0.01033633 [7] 0.55067104 0.47979884 0.06056424 \end{Soutput} \end{Schunk} results in one rejection at FDR-level $\alpha=5\%$, whereas the DBH step-down procedure can reject three hypotheses: \begin{Schunk} \begin{Sinput} R> DBH.sd$Adjusted \end{Sinput} \begin{Soutput} [1] 0.039602625 0.101622881 0.580898946 0.522450788 0.101509307 [6] 0.001935955 0.626257875 0.522450788 0.033073393 \end{Soutput} \end{Schunk} As in Figure \ref{fig:otto}, Panel (c) presents a graphical comparison between the two procedures applied to the $p$-values. \begin{figure}[htb] \centering \includegraphics{article-PoissonRejections} \caption{\label{fig:Poisson} Panel (a) depicts the distribution functions $F_1, \ldots, F_9$ in various colours, (b) is a graph of the transformation function $\xi_{\textnormal{SD}}$. The uniform distribution function is shown in light grey in (a) and (b). Panel (c) shows the [BH] critical values (green dots), the DBH step-down critical values (red dots) and the sorted raw $p$-values (asterisks).} \end{figure} \section[Summary]{Summary and future work} \label{sec:summary} Controlling the FDR for discrete tests is an important goal in many data analytic settings. In this paper, we introduced the \proglang{R} package \pkg{DiscreteFDR}, implementing procedures from \citetalias{DDR2018}. These procedures come with guaranteed FDR control under independence and deal effectively with the conservativeness encountered in discrete tests. We hope that our software will make discrete methods for FDR control more accessible to a wide audience of practitioners. More specifically, \pkg{DiscreteFDR} can be used both in an `expert' and a `standard' mode. 
For the data analyst, taking discreteness and multiplicity issues into account simultaneously may appear to be rather challenging since information on many distribution functions has to be stored, combined and evaluated. For this reason, we have included the wrapper function \code{fast.discrete}, which applies the discrete procedures to a set of $2 \times 2$ tables, given by a matrix or data frame, where each contingency table is analysed by Fisher's exact test. Thus, this function can be seen as an implementation of a multiple Fisher test that controls FDR. For controlling the more stringent familywise error rate (FWER) for multiple exact Fisher tests, we would like to point out the \proglang{R} package \pkg{multfisher}, which implements the approaches described in \cite{Ristl2018}. For those analysts who are looking for a simple-to-apply, off-the-shelf method, using \code{fast.discrete} will automatically take care of generating the list \code{pCDFlist} of (the supports of) the distribution functions, which may otherwise be tedious work. For more expert users who want to use tests other than Fisher's exact test, the workflow, described in more detail in Section \ref{ssec:PoissonTests}, consists of first generating the \code{pCDFlist} list, and then passing this on to the \code{DBH} or \code{DBR} functions. Interfaces that generate \code{pCDFlist} from a given data set for a given statistical test are very helpful tools. Currently, Fisher's exact test is the only test for which our package supplies such an interface. In the future, we are planning to include helper functions similar to \code{fisher.pvalues.support} for further tests such as the binomial and Poisson tests. The \proglang{R} package \pkg{DiscreteFDR} is available from the Comprehensive \proglang{R} Archive Network (CRAN) at \url{https://cran.r-project.org/web/packages/DiscreteFDR/index.html}. 
\section[Acknowledgements]{Acknowledgements} We thank Antje Jahn for very carefully reading the manuscript and providing numerous suggestions that greatly improved the content and presentation of the paper. This work has been supported by the CNRS (PEPS FaSciDo) and the French grants ANR-16-CE40-0019 (SansSouci project) and ANR-17-CE40-0001 (Basics project).
\section{Introduction} Technology for fabricating one-atom-thick systems has been developing rapidly since the advent of monolayer honeycomb carbon (graphene).\cite{novoselov04} Recently, the silicon analog of graphene (silicene) \cite{GLay,Takamura,Kawai} has been synthesized and has attracted much attention. The low-lying excitations of these monolayer honeycomb systems are Dirac fermions. Due to spin-orbit interaction (SOI), the Dirac fermions become massive, i.e., the energy band has a gap. These massive Dirac fermion systems lead to a quantum spin Hall (QSH) insulator, which was originally proposed for graphene.\cite{kane05a,kane05b} However, the SOI of graphene is so tiny that the QSH effect in graphene has not been experimentally observed. The SOI of silicene is, in contrast, a thousand times larger than that of graphene,\cite{Min, Yao} which makes QSH effects experimentally accessible\cite{liu11prl}. Transport properties of Dirac fermions show various anomalous behaviors. A prominent feature is Klein tunneling\cite{klein29}: graphene heterojunctions exhibit perfect transmission through a barrier at normal incidence, regardless of the barrier characteristics. The origin of Klein tunneling is the absence of backscattering due to pseudospin conservation.\cite{katsnelson06, beenakker08, ando98} The perfect transmission is protected by time-reversal symmetry. Indeed, signatures of Klein tunneling have been observed in graphene.\cite{huard07, shytov08, young09} Systems supporting Dirac fermions can exhibit unique charge and spin transport.\cite{saito07, sonin09, yokoyama09, bercioux10, bai10, ingenhoven10, rataj11, bai11, bai11PhysicaE, niu11, guigou11, liu12, esmaeilzadeh12, tian12,tian12EPJB, rothe12, prada13} They have the potential to provide new electronics and spintronics devices. 
Among such Dirac fermion systems, silicene has another advantage: the band gap is controllable by applying an external electric field\cite{ezawa12njp} owing to the buckling structure.\cite{takeda94} Bilayer graphene also has an electric-field-tunable band gap\cite{ohta06,mccann06,mccann06b,oostinga08,zhang09}. Silicene, unlike bilayer graphene, shows a topological phase transition upon tuning the band gap. If one applies an electric field whose associated energy exceeds the SOI, a topological phase transition occurs from the QSH insulator to a non-topological insulator. It is also intriguing that silicene realizes various topological insulators via exchange interaction,\cite{EzawaQAH} photo-irradiation\cite{EzawaPhoto} and antiferromagnetic order.\cite{EzawaExM} These characteristics could be useful for device applications. The most fundamental electronic device is the field-effect transistor (FET). A FET made of silicene has the advantage of a large band gap due to SOI, in contrast to graphene, which is a zero-gap semiconductor. In addition, the tunable band gap and the topological phase transition of silicene under an external electric field may lead to new features for FETs. Charge transport properties of a silicene nanoribbon have been studied.\cite{ezawa13} Spin transport has also been studied in a bulk silicene junction under a Zeeman field.\cite{tsai13} Here, in contrast, we focus on charge transport in bulk silicene. In this paper, we analyze the transport properties of pn and pnp junctions made of silicene. Controlling the conductance by tuning the gate voltage, we find that i) the gate-voltage dependences of the conductance in the topological and non-topological phases are quite distinct, and ii) the conductance is almost quantized to the values 0, 1 and 2. The former is evidence for the existence of the topological phase. The latter enables us to use silicene as a FET with three almost quantized values of conductance. 
Our results open a new route toward future nanoelectronics. This paper is organized as follows. In Sec. \ref{bulk}, we review the bulk properties of silicene. In Sec. \ref{pnsec}, we investigate the pn junction of silicene. First we calculate the conductance for the normal-incidence case with and without the Rashba interaction. Next we treat the oblique-incidence case; the conductance is almost the same but smeared compared with that at normal incidence. In Sec. \ref{pnpsec}, we investigate a pnp junction of silicene. Section \ref{summary} is devoted to discussions. \section{Bulk properties} \label{bulk} First, we show the bulk properties of silicene. The Hamiltonian of silicene in the vicinity of the K and K$'$ points reads\cite{liu11prl,liu11,ezawa12njp} \begin{align} H(\bm k) &= \hbar v_{\rm F} (k_x \tau_x - k_y \tau_y \eta_z) - \lambda_{\rm SO} \tau_z \sigma_z \eta_z \nonumber \\ & \quad + a \lambda_{\rm R} \tau_z (k_x \sigma_y - k_y \sigma_x) \eta_z + \ell E_z \tau_z, \label{hbulk} \end{align} where $\sigma_i$, $\tau_i$, and $\eta_i$ are the Pauli matrices for the spin ($\uparrow$ and $\downarrow$), sublattice (A and B sites) pseudospin, and valley (K and K$'$ points) spaces, respectively. $a=3.86 \mathrm{\AA}$ and $2\ell = 0.46 \mathrm{\AA}$ denote the lattice constant and the perpendicular distance between the A and B sites, respectively. $\lambda_{\rm SO}$ is the intrinsic SOI coupling constant that triggers the topological phase transition from the non-topological to the topological insulator (see below). The sublattice-dependent Rashba SOI $\lambda_{\rm R}$ also appears due to the buckling structure of silicene. In addition, the mass term $\ell E_z \tau_z$ arises from the buckling structure in the presence of an external electric field $E_z$ along the $z$-axis. Hereafter, we set $\hbar=1$. We start with a review of the topological phase diagram of silicene. 
The Dirac mass $m$ for the K point ($\eta_z=1$) in the bulk silicene is given by \begin{align} m = -\lambda_{\rm SO} \tau_z \sigma_z + \ell E_z \tau_z. \end{align} The system is a topological insulator for $ \lambda_{\rm SO} > \ell |E_z|$, while it is a non-topological insulator for $\lambda_{\rm SO} < \ell | E_z|$. $ \lambda_{\rm SO} = \ell |E_z|$ is the critical point, where the energy gap closes. The sign change of the mass term signals a band inversion between the conduction and valence bands. Correspondingly, the directions of $\bm \tau$ and $\bm \sigma$ of the conduction and valence electrons change. Namely, the sublattice and spin states are given by $\left| + - \right\rangle$, $\left| - - \right\rangle$, $\left| + + \right\rangle$, and $\left| - + \right\rangle$ in descending energy order for the topological phase. On the other hand, the second and third states are interchanged for the non-topological phase, i.e., $\left| + - \right\rangle$, $\left| ++ \right\rangle$, $\left| -- \right\rangle$, and $\left| - + \right\rangle$. Here $\left| \alpha \beta \right \rangle$ denotes the eigenstate with $\tau_z = \alpha \, \mathrm{sgn} (E_z)$ and $\sigma_z = \beta \, \mathrm{sgn} (E_z)$. This energy level scheme is shown in Fig. \ref{energy0}. As one switches off $E_z$, the two states with $\left| + - \right\rangle$ and $\left| - - \right\rangle$ (with $\left| + + \right \rangle$ and $\left| - + \right \rangle$) become degenerate ($\epsilon_1=\epsilon_2$ in Fig. \ref{energy0}). \begin{figure} \centering \includegraphics{energy0_v5.pdf} \caption{Energy level scheme for the topological (TP) ($\ell \left| E_z \right| < \lambda_{\rm SO}$) and non-topological (NTP) ($\ell \left| E_z \right| > \lambda_{\rm SO}$) phases at the K point. $\left| \alpha \beta \right \rangle$ denotes the eigenstate with $\tau_z = \alpha \, \mathrm{sgn} (E_z)$ and $\sigma_z = \beta \, \mathrm{sgn} (E_z)$. 
$\epsilon_1 = \lambda_{\rm SO} + \ell \left| E_z \right|$, $\epsilon_2 = \left|\lambda_{\rm SO} - \ell \left| E_z \right| \right|$. } \label{energy0} \end{figure} \begin{figure} \centering \includegraphics{bulkenergy.eps} \caption{Energy dispersions in bulk silicene for $E_z=0$ (a) and $E_z = 0.5 \lambda_{\rm SO}$ (b). The parameters of the system are taken as follows: $v_{\rm F}/a = 1.4 \rm eV$, $\lambda_{\rm SO} = 3.9 \rm meV$, $\lambda_{\rm R} = 0.7 \rm meV$. } \label{bulkenergy} \end{figure} In addition, we show the energy dispersions for $E_z=0$ and $E_z \ne 0$ in Fig. \ref{bulkenergy}. The energy $E_\pm(\bm k)$ in the bulk is obtained as \begin{align} E_\pm^2(\bm k) = v_{\rm F}^2 k^2 + \left(\pm \sqrt{\lambda_{\rm SO}^2 + a^2 \lambda_{\rm R}^2 k^2} + \ell E_z \right)^2, \end{align} with $k = (k_x^2+k_y^2)^{1/2}$. The corresponding eigenvector $\bm u_{\pm}(\bm k; E_z)$ is also obtained analytically (Appendix \ref{wavefunction}). The energy bands are doubly degenerate for $E_z=0$ due to the inversion [$\tau_z \sigma_z H(\bm k) \sigma_z \tau_z = H(-\bm k)$] and time-reversal [$\tau_y \sigma_x H^*(\bm k) \sigma_x \tau_y = H(-\bm k)$] symmetries defined within each valley. In contrast, there is no spin degeneracy for $E_z \ne 0$, since $E_z$ breaks the inversion symmetry and thereby lifts the degeneracy at each $k$ point. \section{Silicene pn junction} \label{pnsec} \begin{figure} \centering \includegraphics[scale=0.15]{pn_paper.eps} \caption{Schematic of a pn junction.} \label{pn} \end{figure} In this section, we investigate charge transport in a silicene pn junction, which is illustrated in Fig. \ref{pn}. \subsection{Normal incidence} \subsubsection{Formalism of the scattering problem} First, we investigate normal incidence ($k_y=0$) on a silicene pn junction.
The Hamiltonian is given by \begin{align} H(x) &= -i v_{\rm F} \partial_x \tau_x - \lambda_{\rm SO} \tau_z \sigma_z \eta_z + \ell E_z \theta(x) \tau_z \nonumber \\ & \quad + i a \lambda_{\rm R} \partial_x \sigma_y \tau_z \eta_z + V \theta(x). \end{align} Hereafter, we focus only on the K point ($\eta_z=1$). The same analysis is applicable to the K$'$ point. We solve the scattering problem of the pn junction. The calculation is carried out by adapting the scattering theories developed for graphene\cite{katsnelson06,beenakker08,cayssol09} and for the Kane-Mele model.\cite{yamakage09,yamakage11} On the incident side ($x<0$), no external electric field is applied, and hence the energy bands are doubly degenerate. As a result, there are two incident states for a fixed incident energy $E_{\rm F}$. The wave function $\psi_{\pm}(x)$ of the scattering state with incident energy $E_{\rm F}$ takes the form \begin{align} \label{psil} \psi_{\pm}(-0) &= \bm u_{\pm}(k_{\rm I}; 0) + r_{\pm +} \bm u_+(-k_{\rm I}; 0) + r_{\pm -} \bm u_-(-k_{\rm I}; 0), \\ \psi_\pm(+0) &= t_{\pm +} \bm u_+(q_+; E_z) + t_{\pm -} \bm u_-(q_-; E_z), \end{align} where the subscript of $\psi_\pm(x)$ denotes the spin state of the incident state $\bm u_\pm(k_{\rm I}; 0)$. The first term $\bm u_\pm(k_{\rm I}; 0)$ of Eq. (\ref{psil}) denotes the incident state. The other two terms correspond to the reflected states with the same ($\pm$) and opposite (spin-flip, $\mp$) spin states. The momentum $k_{\rm I}$ of the incident electron is given by \begin{align} k_{\rm I} = \mathrm{sgn} ( E_{\rm F} ) \sqrt\frac {E_{\rm F}^2 - \lambda_{\rm SO}^2} {v_{\rm F}^2 + a^2 \lambda_{\rm R}^2}. \end{align} The sign of $k_{\rm I}$ is determined so that the group velocity of the incident state is positive. Note that the incident state must be a propagating mode: $E_{\rm F}^2 > \lambda^2_{\rm SO}$. Otherwise, the corresponding conductance is zero, by definition.
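As a consistency check, the expression for $k_{\rm I}$ follows from the bulk dispersion at $E_z = 0$, where both branches reduce to $E^2 = (v_{\rm F}^2 + a^2\lambda_{\rm R}^2)k^2 + \lambda_{\rm SO}^2$. A short numerical sketch (illustrative units with $\lambda_{\rm SO}=1$):

```python
import numpy as np

vF, a, lso, lR = 1.0, 1.0, 1.0, 0.18
EF = 1.1*lso   # a propagating mode requires EF^2 > lso^2
kI = np.sign(EF)*np.sqrt((EF**2 - lso**2)/(vF**2 + a**2*lR**2))

# plug k_I back into the bulk dispersion at E_z = 0 (both branches coincide)
E = np.sqrt(vF**2*kI**2 + lso**2 + a**2*lR**2*kI**2)
assert np.isclose(E, abs(EF))
```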
On the other hand, the momentum $q_\pm$ of the transmitted electron is obtained by solving the following equation: \begin{align} \left( E_{\rm F} - V \right)^2 = v_{\rm F}^2 q_\pm^2 + \left( \pm \sqrt{\lambda_{\rm SO}^2 + a^2 \lambda_{\rm R}^2 q_\pm^2} + \ell E_z \right)^2. \end{align} Since the group velocities of the transmitted electrons should be positive, the following relation is satisfied for the propagating mode: \begin{align} \mathrm{sgn} \left(q_\pm \right) = \mathrm{sgn} \left(E_{\rm F}-V \right). \end{align} For the evanescent mode, on the other hand, $\mathrm{Im} \, q_\pm > 0$ is satisfied. Note that when $\left| E_{\rm F}-V \right| < \left| \lambda_{\rm SO} - \ell \left| E_z \right|\right|$, the system in $x>0$ becomes insulating, i.e., the resulting conductance vanishes. The reflection and transmission coefficients $r_{\pm \pm}$ and $t_{\pm \pm}$ are obtained by solving the continuity condition at $x=0$. Since the charge current is conserved, the following relation holds: \begin{align} \left. \frac{\partial H}{\partial (-i \partial_x)} \right|_{x<0} \psi_\pm(-0) = \left. \frac{\partial H}{\partial (-i \partial_x)} \right|_{x>0} \psi_\pm(+0), \end{align} where $\partial H/[\partial (-i \partial_x)]$ is the velocity operator. The above relation reduces to \begin{align} \psi_{\pm}(-0) = \psi_{\pm}(+0). \end{align} Solving this, one obtains the reflection and transmission coefficients. In terms of the reflection coefficients $r_{\alpha\beta}$, the transmission probability $T_\pm$ is given by \begin{align} T_\pm = 1 - \sum_{\beta = \pm} \left|r_{\pm \beta} \right|^2. \end{align} The charge conductance $G$ for normal incidence ($k_y=0$) is defined by \begin{align} G = \frac{e^2}{h} (T_+ + T_-).
\end{align} \begin{figure} \centering \includegraphics{rashba_v2.png} \caption{Conductance in units of $e^2/h$ for the normal-incidence case ($k_y=0$) in the presence [(a) and (b)] ($\lambda_{\rm R}/\lambda_{\rm SO} = 0.18$, corresponding to silicene) and absence [(c) and (d)] ($\lambda_{\rm R}=0$) of the Rashba SOI. (a) and (c) [(b) and (d)] correspond to the lightly (heavily) doped case with $E_{\rm F} = 1.1 \lambda_{\rm SO}$ ($E_{\rm F} = 2 \lambda_{\rm SO}$). The solid and dashed lines in (a) are located at $\ell E_z = 1.5 \lambda_{\rm SO}$ and $\ell E_z = 0.5 \lambda_{\rm SO}$, respectively. (i)--(viii) are the representative points for which the conductance is shown in Fig. \ref{t1d}. These are defined as follows. (i) and (v): $V = 4\lambda_{\rm SO}$ (np and the double-channel regime). (ii) and (vi): $V = 2\lambda_{\rm SO}$ (np and the single-channel regime). (iii) and (vii): $V = 0$ (nn and the single-channel regime). (iv) and (viii): $V = -2 \lambda_{\rm SO}$ (nn and the double-channel regime). } \label{g1D} \end{figure} \subsubsection{Charge transport asymmetry in the nn and pn regimes} We show the conductance for the normal-incidence case ($k_y=0$) in Fig. \ref{g1D}. The horizontal axis is $(E_{\rm F}-V)/E_{\rm F}$, where $E_{\rm F}-V$ corresponds to the Fermi energy in $x>0$ measured from the charge neutrality point. The vertical axis is $\ell E_z/\lambda_{\rm SO}$. Note that $E_z$ and $V$ are not actually independent of each other, since both of them are induced by the gate electric field. Therefore, the conductance along a curve in the $(E_{\rm F}-V, E_z)$ plane of Fig. \ref{g1D} is realized in an actual pn junction. The relation between $E_z$ and $V$ depends on the substrate. It is therefore worthwhile to investigate the general conductance formula as a function of $E_z$ and $V$. Only the region of $\ell E_z/\lambda_{\rm SO}>0$ is shown, since the transmission probability is symmetric with respect to $E_z=0$ (see Appendix \ref{symmetry}).
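The $E_z \to -E_z$ symmetry invoked here can already be seen at the level of the bulk spectrum: reversing $E_z$ merely exchanges the two branches $E_\pm$, so the set of energies is unchanged. A minimal check using the analytic dispersion (illustrative parameter values):

```python
import numpy as np

def E2(branch, k, lso, lR, lEz, vF=1.0, a=1.0):
    # squared bulk dispersion E_pm^2(k); branch = +1 or -1
    return vF**2*k**2 + (branch*np.sqrt(lso**2 + a**2*lR**2*k**2) + lEz)**2

k, lso, lR, lEz = 0.3, 1.0, 0.18, 0.7
plus  = sorted(E2(s, k, lso, lR, +lEz) for s in (+1, -1))
minus = sorted(E2(s, k, lso, lR, -lEz) for s in (+1, -1))
assert np.allclose(plus, minus)   # the spectrum is even in E_z
```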
A pn junction with two doped topological insulators (TP/TP) is realized for $\ell |E_z| < \lambda_{\rm SO}$. On the other hand, that with doped topological and non-topological (TP/NTP) insulators is realized for $\ell |E_z| > \lambda_{\rm SO}$. As clearly seen from Fig. \ref{g1D}, there is no qualitative difference between the conductances with [Figs. \ref{g1D}(a) and (b)] and without [Figs. \ref{g1D}(c) and (d)] the sublattice-dependent Rashba SOI $\lambda_{\rm R}$. This is because $\lambda_{\rm R}$ for silicene ($\lambda_{\rm R}/\lambda_{\rm SO}=0.18$) is weak and, furthermore, vanishes at the K and K$'$ points. For $\lambda_{\rm R}=0$, one can obtain a simple formula for the reflection coefficient. The reflection coefficient $r_{\sigma}$, where $\sigma = \pm$ denotes the $z$ component of the incident electron's spin, is given by \begin{align} r_\sigma = \frac{1-X_\sigma}{1+X_\sigma}, \end{align} with \begin{align} X_\sigma = \sqrt{ \frac {E_{\rm F} + \sigma \lambda_{\rm SO}} {E_{\rm F} - \sigma \lambda_{\rm SO}} \frac {E_{\rm F} - V - \sigma \lambda_{\rm SO} + \ell E_z} {E_{\rm F} - V + \sigma \lambda_{\rm SO} - \ell E_z} }. \end{align} The conductance is given by $G = (e^2/h) (2 - \sum_\sigma | r_\sigma |^2)$. If $\lambda_{\rm SO} \ll |E_{\rm F}|$ and $\ell |E_z| \ll |E_{\rm F}-V|$, the corresponding $r_\sigma$ tends to zero, i.e., a perfect transmission occurs, which is known as Klein tunneling in monolayer graphene.\cite{katsnelson06,beenakker08} We now return to Fig. \ref{g1D}. In the inner region of $\left| E_{\rm F}-V \right| < \epsilon_1 \equiv \left| \lambda_{\rm SO} - \ell \left| E_z \right| \right|$, the conductance vanishes since the transmitted side ($x>0$) is insulating. In the central region of $\epsilon_1 < \left| E_{\rm F} - V \right| < \epsilon_2 \equiv \lambda_{\rm SO} + \ell \left| E_z \right|$, there is a single energy band at the Fermi level, and hence the maximum value of the resulting conductance is $e^2/h$.
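These windows can be read off directly from the transmitted-side dispersion: for $\lambda_{\rm R}=0$ the propagating momenta obey $v_{\rm F}^2 q_\pm^2 = (E_{\rm F}-V)^2 - (\pm\lambda_{\rm SO}+\ell E_z)^2$, so the branch $\pm$ carries a channel only when $|E_{\rm F}-V| > |\pm\lambda_{\rm SO}+\ell E_z|$. A minimal channel-counting sketch (assuming $\lambda_{\rm R}=0$, illustrative units):

```python
import numpy as np

def n_channels(EFmV, lso, lEz):
    # lambda_R = 0: branch s propagates iff (EF - V)^2 > (s*lso + l*Ez)^2
    return sum(EFmV**2 > (s*lso + lEz)**2 for s in (+1, -1))

lso, lEz = 1.0, 0.5
eps1, eps2 = abs(lso - lEz), lso + lEz
assert n_channels(0.5*eps1, lso, lEz) == 0           # inner (insulating) region
assert n_channels(0.5*(eps1 + eps2), lso, lEz) == 1  # central (single-channel) region
assert n_channels(2.0*eps2, lso, lEz) == 2           # outer (double-channel) region
```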
On the other hand, in the outer region of $\left| E_{\rm F} - V \right| > \epsilon_2$, two energy bands are located at the Fermi level. Here the conductance becomes larger (almost twice as large) than that in the central region. We refer to these regions as the insulating, single-channel, and double-channel regimes, respectively. This behavior originates from a peculiarity of silicene, i.e., the band gap and spin-split energy bands owing to the SOI and the electric-field effect in the buckling structure. Graphene, in contrast, has neither SOI nor the buckling structure. The resulting conductance is always $2 e^2/h$. For the lightly doped case [$E_{\rm F}=1.1\lambda_{\rm SO}$, Figs. \ref{g1D}(a) and \ref{g1D}(c)], one can see an asymmetry of the conductance with respect to $\ell E_z = \lambda_{\rm SO}$. To be more explicit, we show the conductance as a function of $V$ for $\ell E_z=1.5 \lambda_{\rm SO}$ (TP/NTP junction) and $\ell E_z = 0.5 \lambda_{\rm SO}$ (TP/TP junction) in Fig. \ref{t1d}. \begin{figure} \centering \includegraphics{cnd_v.pdf} \caption{Conductances for the TP/NTP ($\ell E_z = 1.5 \lambda_{\rm SO}$) and TP/TP ($\ell E_z = 0.5 \lambda_{\rm SO}$) junctions in the case of normal incidence. The parameters are the same as in Fig. \ref{bulkenergy}. Cases (i)--(viii) correspond to those in Fig. \ref{g1D}(a). } \label{t1d} \end{figure} For the double-channel regime ($\left| E_{\rm F} - V \right| > \epsilon_2$), the transmission probabilities of the TP/NTP and TP/TP junctions are similar for both the np [(i) and (v)] and nn [(iv) and (viii)] cases. The transmission probability in the nn regime [(iv) and (viii)] is slightly larger than that in the np regime [(i) and (v)]. In contrast, for the single-channel regime ($\epsilon_1 < \left| E_{\rm F} - V \right| < \epsilon_2$) [(ii), (iii), (vi), and (vii)], the conductances of the TP/NTP and TP/TP junctions are qualitatively different.
Namely, the conductance for the TP/NTP (TP/TP) junction in the np (nn) regime (ii) [(vii)] takes a larger value than that in the nn (np) regime (iii) [(vi)]. Note that the gate-voltage dependence of the conductance for the TP/NTP junction is distinct from that for the TP/TP junction. This asymmetric behavior of the conductance stems from the sublattice and spin states of the incident and transmitted electrons. \begin{figure*} \centering \includegraphics{energies_v2.pdf} \caption{Energy spectra for the incident ($x<0$) state with $\ell E_z = 0$ (the left-upper and left-lower panels), and for transmitted ($x>0$) states with $\ell E_z = 1.5 \lambda_{\rm SO}$ [(i)--(iv)] and with $\ell E_z =0.5 \lambda_{\rm SO}$ [(v)--(viii)]. Cases (i)--(viii) correspond to those in Fig. \ref{g1D}. $|\alpha\beta \rangle$ with $\tau_z = \alpha \, \mathrm{sgn} \left( E_z \right)$ and $\sigma_z = \beta \, \mathrm{sgn} \left( E_z \right)$ denotes the sublattice and spin state for $k=0$. } \label{energy} \end{figure*} Figure \ref{energy} shows the energy bands for $x<0$ and $x>0$. The sublattice and spin states $\left | \alpha \beta \right\rangle$ for each energy band at $k=0$ are also indicated in Fig. \ref{energy}. The incident states ($x<0$) with a positive energy are approximately given by $\left| + - \right \rangle$ and $\left| -- \right\rangle$. When the transmitted state is given by $\left| -- \right\rangle$ [(ii) and (vii)] or $\left| +- \right\rangle$ [(iv) and (viii)], the conductance is large due to the matching of the sublattice and spin states. In particular, for $(E_{\rm F}-V)/E_{\rm F} \sim -0.5$ and $(E_{\rm F}-V)/E_{\rm F} \sim 0.5$, the transmission probabilities are unity since the sublattice and spin states of the incident and transmitted electrons coincide with each other. In contrast, when the transmitted state is given by the mismatched state $\left| ++ \right \rangle$ [(iii) and (vi)] or $\left| -+ \right\rangle$ [(i) and (v)], the corresponding conductance is suppressed.
Thus the matching/mismatching of the sublattice and spin states gives a larger/smaller conductance. We emphasize that the transmitted states $\left| ++ \right \rangle$ and $\left| -- \right \rangle$ are controlled by $E_z$. As shown in Fig. \ref{energy0}, the two states $\left| ++ \right \rangle$ and $\left| -- \right \rangle$ are interchanged between the different topological phases, which are determined by $E_z$. In other words, the conductance is well tuned by $E_z$ through a change of the symmetry of the wave function. The obtained results are summarized in Table \ref{tab1}. The same behavior has been observed in Ref. \onlinecite{yokoyama10} on the surface of a topological insulator with ferromagnets. \begin{table} \centering \begin{tabular}{l|cc} \hline \hline & np & nn \\ \hline TP/NTP & large & small \\ TP/TP & small & large \\ \hline\hline \end{tabular} \caption{ Magnitudes of the conductances in the silicene junctions for the single-channel regime ($\epsilon_1 < |E_{\rm F}-V| < \epsilon_2$). Two cases are realized in the junction: the gated region is non-topological (TP/NTP) or topological (TP/TP). } \label{tab1} \end{table} As explained above, the conductance controlled by $E_z$ is determined by the matching of the sublattice and spin states between the two sides of the junction. Therefore, this behavior does not appear in the heavily doped case ($\left|E_{\rm F}\right| \gg \lambda_{\rm SO}, \ell \left| E_z \right|$) [Figs. \ref{g1D}(b) and \ref{g1D}(d)], where the mass gap $\sim \lambda_{\rm SO}$ is negligible compared to $E_{\rm F}$. The conductance asymmetry peculiar to the topological phase can be expected for other topological insulators, provided that the system has a single Fermi surface in the pn junction. \subsubsection{Heavily doped case} From Figs.
\ref{g1D}(b) and (d), the transmission probability is almost quantized to 0 in the insulating regime ($|E_{\rm F}-V| < \epsilon_1$), to 1 in the single-channel regime ($\epsilon_1 < |E_{\rm F}-V| < \epsilon_2$), and to 2 in the double-channel regime ($ | E_{\rm F} -V | > \epsilon_2$). This is a consequence of the Klein tunneling of Dirac fermions: a massless Dirac fermion can tunnel through any barrier, and hence the normal-incidence transmission probability is unity.\cite{katsnelson06, beenakker08} Although silicene has a finite energy gap, a nearly perfect transmission occurs in the heavily doped case, since the energy gap is negligible compared to the incident energy. In contrast, a graphene pn junction always shows perfect transmission, i.e., the conductance is always $2 e^2/h$. Thus graphene cannot be used as an FET. \subsection{Oblique incidence} \label{sec2dpn} Next we turn to the case of finite $k_y$, which corresponds to an actual silicene pn junction. The Hamiltonian of the two-dimensional system in the vicinity of the K point reads \begin{align} H(x) &= -i v_{\rm F} \partial_x \tau_x - v_{\rm F} k_y \tau_y - \lambda_{\rm SO} \tau_z \sigma_z + \ell E_z \theta(x) \tau_z \nonumber \\ & \quad - a \lambda_{\rm R} (-i\partial_x \sigma_y - k_y \sigma_x) \tau_z + V \theta(x). \end{align} Here we assume translational invariance along the $y$-axis, i.e., the $y$ component of the momentum, $k_y$, is regarded as a parameter. We solve the scattering problem in the same way as in the previous section. The normalized conductance $G/G_0$ is given by \begin{align} \frac{G}{G_0} = \int_{-\pi/2}^{\pi/2} \frac{d\theta}{2} \cos \theta \, [T_+(\theta) + T_-(\theta)], \label{gg0} \end{align} where $T_\pm(\theta)$ is the transmission probability for an incident angle $\theta$, defined by $k_y = k_{\rm F} \sin \theta$, and for incident spin $\pm$.
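The near quantization in the heavily doped regime can be checked directly from the closed-form reflection coefficient $r_\sigma$ given earlier for $\lambda_{\rm R}=0$: deep in the doped regime $r_\sigma$ is numerically tiny and $G \to 2e^2/h$. A minimal sketch (illustrative parameter values):

```python
import numpy as np

def r_sigma(EF, V, lso, lEz, sigma):
    # closed-form reflection amplitude (lambda_R = 0, normal incidence)
    X = np.sqrt((EF + sigma*lso)/(EF - sigma*lso)
                * (EF - V - sigma*lso + lEz)/(EF - V + sigma*lso - lEz))
    return (1.0 - X)/(1.0 + X)

lso, lEz = 1.0, 0.5
EF, V = 1.0e3, 2.0e3               # heavily doped np configuration
for s in (+1, -1):
    assert abs(r_sigma(EF, V, lso, lEz, s)) < 1e-2     # Klein tunneling
G = 2 - sum(abs(r_sigma(EF, V, lso, lEz, s))**2 for s in (+1, -1))
assert abs(G - 2) < 1e-4           # conductance close to 2 e^2/h
```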
In the case of perfect transmission ($T_\pm(\theta)=1$), the resulting conductance takes the value $G = 2G_0$, where the factor of 2 reflects the fact that the system has two incident states $\bm u_+$ and $\bm u_-$ with different spin states. Here, $G_0 = (k_{\rm F} W / \pi) e^2/h$, $k_{\rm F} = [(E_{\rm F}^2-\lambda_{\rm SO}^2)/(v_{\rm F}^2 + a^2 \lambda_{\rm R}^2)]^{1/2}$, and $W$ is the width of the system. Also, the Fano factor $F$, which corresponds to the shot-noise-to-signal ratio, is given by \begin{align} F = \frac{G_0}{G} \sum_{\alpha = \pm} \int_{-\pi/2}^{\pi/2} \frac{d\theta}{2} \cos \theta \, T_\alpha(\theta) \left[ 1-T_\alpha(\theta) \right]. \label{deffano} \end{align} Figure \ref{g2d} shows the normalized charge conductance $G/G_0$ as a function of the gate voltage [$(E_{\rm F}-V)/E_{\rm F}$] and the electric field ($\ell E_z/\lambda_{\rm SO}$). \begin{figure} \centering \includegraphics{g2Dv4.png} \caption{Normalized conductance $G/G_0$ averaged over the incident angles for the lightly doped (a) ($E_{\rm F}=1.1 \lambda_{\rm SO}$) and heavily doped (b) ($E_{\rm F}=2\lambda_{\rm SO}$) cases. } \label{g2d} \end{figure} Obviously, the charge conductance (Fig. \ref{g2d}) and the transmission probability of the normal-incidence case [Figs. \ref{g1D}(a) and \ref{g1D}(b)] are almost the same, except for a broadening of the line shapes. This is because transport is essentially determined by normal incidence; the integral over the incident angle $\theta$ merely broadens the charge conductance relative to the normal-incidence transmission $T(\theta=0)$. \begin{figure} \centering \includegraphics{fano_v2.png} \caption{Fano factor for the lightly doped (a) ($E_{\rm F}=1.1 \lambda_{\rm SO}$) and heavily doped (b) ($E_{\rm F}=2\lambda_{\rm SO}$) cases. } \label{fano} \end{figure} The Fano factor of the junction is shown in Fig. \ref{fano}. Overall, the resulting Fano factor for the lightly doped case (a) is smaller than that for the heavily doped case (b).
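Equations (\ref{gg0}) and (\ref{deffano}) are simple angular quadratures. As an illustration (not the actual silicene $T_\pm(\theta)$, which must be computed from the scattering problem), for the toy profile $T_\pm(\theta) = \cos^2\theta$ familiar from sharp graphene np junctions, one finds $G/G_0 = 4/3$ and $F = 1/5$:

```python
import numpy as np

# midpoint rule on theta in (-pi/2, pi/2)
N = 200000
th = (np.arange(N) + 0.5)/N*np.pi - np.pi/2
dth = np.pi/N

def angular_average(f):
    return np.sum(f(th))*dth

T = lambda t: np.cos(t)**2   # toy spin-independent transmission profile

G_ratio = angular_average(lambda t: 0.5*np.cos(t)*(T(t) + T(t)))        # Eq. (gg0)
F = angular_average(lambda t: 0.5*np.cos(t)*2*T(t)*(1 - T(t)))/G_ratio  # Eq. (deffano)

assert abs(G_ratio - 4/3) < 1e-8
assert abs(F - 1/5) < 1e-8
```

Note that the perfect-transmission case $T_\pm = 1$ reproduces $G/G_0 = 2$ and $F = 0$, as stated above.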
Also, the Fano factor is roughly given by the inverse of the conductance, i.e., it takes a small (large) value when the corresponding conductance is large (small). This behavior is realized if the shot-noise power is almost independent of the parameters ($V$ and $E_z$). On the other hand, in the single-channel regime, the Fano factor is strongly suppressed when the sublattice and spin states of the incident and transmitted electrons coincide with each other (denoted by the arrows in Fig. \ref{fano}). Note that the Fano factor in the insulating region ($|E_{\rm F} - V| < \epsilon_1$) is found to be unity, although it is not well defined for metal-insulator junctions because the corresponding conductance vanishes ($F \to 0/0$). We conclude that $F=1$ in the insulating region since $F = 1$ is obtained in the long-junction limit of the npn junction, as discussed in the next section. \section{Silicene npn junction} \label{pnpsec} Next we investigate charge transport in a silicene npn junction, where an electrostatic field is applied in $0<x<L$, as illustrated in Fig. \ref{pnp}. \begin{figure} \centering \includegraphics[scale=0.15]{pnp_paper.pdf} \caption{Silicene npn junction. The central region (of length $L$) is gated. The charge current flows along the $x$-axis.} \label{pnp} \end{figure} The scattering problem of the npn junction is solved in a manner similar to that of the pn junction. The wave function $\psi_\pm(x)$ is given by \begin{align} \psi_\pm(x=-0) &= \bm u_\pm(k_{\rm I}; 0) + r_{\pm +} \bm u_+(-k_{\rm I}; 0) \nonumber\\ & \qquad + r_{\pm -} \bm u_-(-k_{\rm I}; 0), \\ \psi_\pm(0<x<L) &= \sum_{i=1}^4 w_{\pm i} \bm u_{\alpha_i} (q_i; E_z) e^{i q_i x}, \\ \psi_\pm(x=L+0) &= t_{\pm +} \bm u_+(k_{\rm I}; 0) e^{i k_{\rm I} L} \nonumber \\ & \qquad + t_{\pm -} \bm u_-(k_{\rm I}; 0) e^{i k_{\rm I} L}, \end{align} where $q_i$ and $\alpha_i$ are solutions of $E_{\rm F}-V = E_{\alpha_i}(q_i,k_y)$.
The coefficients $r_{\pm \pm}$, $w_{\pm i}$, and $t_{\pm \pm}$ are obtained by solving the following boundary conditions: \begin{align} \psi_\pm(-0) &= \psi_\pm(+0), \\ \psi_\pm(L-0) &= \psi_\pm(L+0). \end{align} The normalized conductance and the Fano factor are obtained from Eqs. (\ref{gg0}) and (\ref{deffano}), respectively. \subsection{Normal incidence} \begin{figure} \centering \includegraphics{pnp1d_v3.png} \caption{Conductance in units of $e^2/h$ for the npn junction in the normal-incidence case ($\theta=0$, i.e., $k_y=0$). $L$ and $a$ denote the length of the gate in the junction and the lattice constant, respectively. The incident energy is taken to be $E_{\rm F}=1.1 \lambda_{\rm SO}$. } \label{pnp1d} \end{figure} First we show the conductance for the normal-incidence case, i.e., $T_+(0) + T_-(0)$, in Fig. \ref{pnp1d}. Resonant tunneling occurs for $q_iL = 2n\pi$, $n \in \mathbb Z$, in an npn junction. In the short-junction limit ($L \to 0$), a perfect transmission always occurs, even when the central region is insulating. In a short but finite-length junction [Fig. \ref{pnp1d}(a)], the number of resonant peaks ($q_i = 2n\pi/L$) is still small (two peaks), while the peak width is broad ($\sim 2 \lambda_{\rm SO}$) because the junction is short and the transmission probability is large. Thus, two broad resonant peaks appear in Fig. \ref{pnp1d}(a). As one increases $L$, the number of resonant peaks increases and the peak width becomes narrower, as shown in Figs. \ref{pnp1d}(b) and \ref{pnp1d}(c). Finally, the transmission probability of the long junction ($L > 10000a$) [Fig. \ref{pnp1d}(d)] asymptotically converges to that of the pn junction [Fig. \ref{g1D}(a)]. \subsection{Oblique incidence} Next we show results for oblique incidence.
\begin{figure} \centering \includegraphics{pnp2d_v4.png} \caption{Normalized conductance $G/G_0$ in the npn junction for $E_{\rm F}=1.1 \lambda_{\rm SO}$.} \label{pnp2d} \end{figure} The charge conductance is shown in Fig. \ref{pnp2d}. As in the case of the pn junction discussed in Sec. \ref{sec2dpn}, the integral over the incident angle merely broadens the detailed structures in the conductance. Namely, the conductance [Figs. \ref{pnp2d}(a)-(d)] is almost the same as that of the normal-incidence case [Figs. \ref{pnp1d}(a)-(d)]. In addition, we show the Fano factor in Fig. \ref{fano2d}. \begin{figure} \centering \includegraphics{fanopnp_v3.png} \caption{Fano factor in the npn junction for $E_{\rm F}=1.1 \lambda_{\rm SO}$.} \label{fano2d} \end{figure} The Fano factor (Fig. \ref{fano2d}) is basically given by the inverse of $G$ (Fig. \ref{pnp2d}): $F$ takes a small value in the case of resonant tunneling. In the long-junction limit [Figs. \ref{fano2d}(c) and \ref{fano2d}(d)], $F$ of the npn junction tends to that of the pn junction [Fig. \ref{fano}(a)]. Moreover, in the insulating regime $(|E_{\rm F}-V|<\epsilon_1)$, $F$ converges to unity. Namely, the Fano factor is interpreted to be unity in the insulating regime of the pn junction. \section{Summary} \label{summary} We have studied charge transport in the pn and npn junctions of silicene. In silicene, the topological phase transition occurs upon applying an electric field, owing to the buckling structure. This transition affects the charge transport in the single-channel regime, i.e., the resulting conductance is suppressed in the np regime for the TP/TP junction, while it is suppressed in the nn regime for the TP/NTP junction. We have shown that this suppression originates from the matching/mismatching of the spin and sublattice states of the incident and transmitted electrons. Furthermore, the silicene pn junction has been shown to act as an FET whose conductance is almost quantized.
This is not the case in the graphene pn junction, which has no band gap. The silicene junctions can thus serve as potential new devices controlled by two types of electric field, $V$ and $E_z$. \begin{acknowledgments} This work is supported by the ``Topological Quantum Phenomena" (No. 22103005) Grant-in-Aid for Scientific Research on Innovative Areas from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. MH is supported by Grant-in-Aid for Scientific Research No. 22740196. \end{acknowledgments} \if0
\section{Introduction} \label{1} Multi-phase flow in superposed fluids and porous media has many applications in science and engineering. A prime example is the mixing of shallow groundwater and surface water in the hyporheic zone--a region of sediment and porous space beneath and alongside a stream bed. The hyporheic zone is a natural habitat for aquatic organisms and plays a major role in maintaining the self-purification function of streams. It is important to understand the hydrodynamic and biogeochemical processes of a multiphase nature in this zone, cf. \cite{MBCardenas_WRR_2015}. Other applications of multi-phase flow in superposed fluids and porous media include contaminant transport in karst aquifers \cite{JMatusick_PZanbergen_GRA_2007}, oil recovery in petroleum engineering \cite{GlSw2004}, water management in PEM fuel cell technology \cite{KTuber_DPocza_CHebling_JPS_2003}, cardiovascular modeling and simulation \cite{LFormaggia_AQuarteroni_AVeneziani_2009}, etc. Therefore, it is of great importance to develop numerical models and efficient algorithms for simulating multi-phase flow in coupled free flow and porous media. As a fundamental building block, the modeling of single-phase flow (such as water) in superposed free flow and porous media is usually based on Stokes-Darcy type systems, see \cite{MdAAlMahbub_XMHe_NJNasu_CQiu_HZheng_1,YCao_MGunzburger_XMHe_XWang_2,MDiscacciati_EMiglio_AQuarteroni_1, JHou_MQiu_XMHe_CGuo_MWei_BBai_1, WJLayton_FSchieweck_IYotov_1, BRiviere_IYotov_1, SKFStoter_PMuller_elat_CMAME_2017,STlupova_RCortez_1} and many others.
Abundant numerical methods have been developed for this type of single-phase model, such as finite element methods \cite{YCao_MGunzburger_XHu_FHua_XWang_WZhao_1,JCamano_GNGatica_ROyarzua_RRuizBaier_PVenegas_1,AMarquez_SMeddahi_FJSayas_2}, discontinuous Galerkin methods \cite{PChidyagwai_BRiviere_1,VGirault_BRiviere_1,KLipnikov_DVassilev_IYotov_1,BRiviere_1}, domain decomposition methods \cite{YBoubendir_STlupova_2,YCao_MGunzburger_XMHe_XWang_1,MDiscacciati_LGerardo-Giorda_1,MDiscacciati_AQuarteroni_AValli_1, MGunzburger_XMHe_BLi_SINN_2018,XMHe_JLi_YLin_JMing_1, CQiu_XMHe_JLi_YLin_1, DVassilev_CWang_IYotov_1}, multigrid methods \cite{TArbogast_MGomez_1,MCai_MMu_JXu_2,MMu_JXu_1}, and so on. The study of multi-phase flow in this context is very challenging, and to our knowledge no sharp interface model is available for two-phase flow in superposed fluids and porous media. In recent years, the diffuse interface approach has become popular in the numerical modeling of multi-phase flow \cite{JLowengrub_LTruskinovsky_1998,AGLamorgese_DMolin_RMauri_MJM_2011}. In this approach the sharp interface of two immiscible fluids is replaced by a diffusive interface of finite thickness where the different fluids mix due to chemical diffusion. The diffuse interface approach can describe topological transitions of interfaces and avoids the cumbersome procedure of interface tracking in numerical simulations, cf. \cite{HGLee_JLowengrub_JGoodman_PFI_2002, HGLee_JLowengrub_JGoodman_PFII_2002}. A hybrid of the sharp interface model in porous media and the diffuse interface model in the free flow is proposed in \cite{JChen_SSun_XWang_1}. In \cite{DHan_DSun_XWang_1} the authors systematically derive a diffuse interface model, the Cahn-Hilliard-Stokes-Darcy model, for two-phase flow of matched/similar densities in the setting of coupled free flow and porous media. Well-posedness and numerical solvers for this model have been studied in \cite{DHan_XWang_HWu_1} and \cite{WChen_DHan_XWang_2017}, respectively.
Generalization of the model to include inertial effects is carried out in \cite{YGao_XHe_LMei_XYang_2018}. A diffuse interface model for two-phase flow of arbitrary densities and viscosities in superposed fluids and porous media remains an open problem. There are two main approaches to developing diffuse interface models for two-phase flow of different densities in a single domain. The first approach defines a mass-averaged velocity that leads to a quasi-incompressible Cahn-Hilliard fluid model \cite{JLowengrub_LTruskinovsky_1998}. The mass-averaged velocity is non-solenoidal inside the diffusive interface, and the resulting model is a high-order, nonlinear, strongly coupled system that is difficult to simulate numerically \cite{GLLW2017}. The second approach adopts a volume-averaged velocity, which is a solenoidal vector field everywhere, cf. \cite{FBoyer_3, HDing_PDMSpelt_CShu_1, HAbels_HGarcke_GGrun_2012}. Due to the divergence-free velocity, efficient legacy numerical solvers for incompressible fluids are applicable. However, the classical continuity equation for the density is no longer valid if the volume-averaged velocity is employed. In this article we take an approximation approach to developing numerical models for superposed two-phase free flow and porous media, in the sense that we utilize the mass-averaged velocity but neglect the compressibility effect of the velocity field inside the thin diffusive interface. Such an approach has appeared in \cite{JShen_XYang_1} for the numerical modeling of two-phase flow of variable densities in a single domain. On the domain interface boundary between the free flow and the porous media, we impose the Beavers-Joseph-Saffman-Jones interface boundary condition \cite{GBeavers_DJoseph_1} and the Lions interface boundary condition, which states that the free-flow stress in the normal direction, including the total pressure (pressure plus dynamic pressure), is balanced by the pressure in the porous media.
Under these conditions, we show that our model, the Cahn-Hilliard-Navier-Stokes-Darcy system, satisfies an energy law. The design of accurate and long-time stable time-stepping methods for the Cahn-Hilliard-Navier-Stokes-Darcy system is very challenging for a number of reasons. The first challenge is the stiffness inherent to diffuse interface models (large transitions over thin layers). There is a large body of literature on developing unconditionally stable time-marching algorithms for diffuse interface models. These methods include the convex-splitting strategy \cite{ElSt1993, JShen_CWang_XWang_SMWise_SINA_2012, BLWW2013, Grun2013, DHan_JSC_2016, RLi_YGao_JChen_LZhang_XMHe_ZChen_1,YLiu_WChen_CWang_SMWise_NM_2017,JYang_SMao_XMHe_XYang_YHe_1}, the stabilization method \cite{ShenYang2010, YYan_WChen_CWang_SMWise_CICP_2018}, the Invariant Energy Quadratization approach \cite{XYang_DHan_JCP_2017, QChen_XYang_JShen_JCP_2017, CXu_CChen_XYang_XMHe_1, XYang_LJu_CMAME_2017, JZhao_XYang_YGong_QWang_CMAME_2017,XYang_JZhao_XMHe_1}, and the Scalar Auxiliary Variable approach \cite{FLin_XMHe_XWen_1,JShen_JXu_JYang_JCP_2018, JShen_JXu_JYang_SIRV_2019}. The second issue is the coupling between the nonlinear Cahn-Hilliard equation and the fluid equations, and the coupling between the fluid velocity and pressure. Operator splitting is typically utilized to decouple the computation, cf. \cite{DKay_RWelford_1, DHan_JSC_2016, JShen_XYang_SINN_2015, WChen_DHan_XWang_2017}. The third challenge is the coupling between the free flow and the porous media via domain interface boundary conditions. Various domain decomposition approaches have been proposed to minimize the computational cost \cite{WJLayton_FSchieweck_IYotov_1, JChen_SSun_XWang_1, WChen_DHan_XWang_2017, YGao_XHe_LMei_XYang_2018}. It is of great importance to develop decoupled numerical algorithms that maintain unconditional stability for solving the Cahn-Hilliard-Navier-Stokes-Darcy system (CHNSD).
A decoupled algorithm is proposed in \cite{WChen_DHan_XWang_2017} for solving the Cahn-Hilliard-Stokes-Darcy model, in which the decoupling between the Cahn-Hilliard equation and the fluid equations hinges upon the presence of a time derivative in the Darcy equations. In our CHNSD model the governing equations for the flow in porous media are the classical Darcy equations without the time derivative term. To accomplish the decoupling between the phase field variable and the Darcy velocity, we resort to the technique of pressure stabilization from \cite{DHan_JSC_2016}, originally designed for solving the Cahn-Hilliard-Darcy equations. Furthermore, it is desirable to separate the computation of velocity and pressure when solving the Navier-Stokes equations. Due to the presence of the nonlinear Lions domain interface boundary condition, we adopt a special method of artificial compressibility \cite{VDeCaria_WLayton_MMcLaughlin_CMAME_2017,XHe_NJiang_CQiu_IJNME_2019} which avoids boundary conditions in the update of the pressure. We rigorously establish the unconditional long-time stability of the proposed algorithm and verify numerically that the fully discrete schemes are convergent and energy-law preserving. Ample numerical experiments are performed to illustrate the distinctive features of two-phase flows in superposed fluids and porous media. The rest of the article is organized as follows. In Section \ref{sec(2)}, we propose the Cahn-Hilliard-Navier-Stokes-Darcy model for two-phase flows of arbitrary densities in superposed fluid and porous media, show that the model satisfies an energy law, and develop an unconditionally stable coupled time-stepping method for solving the model. In Section \ref{sec(4)}, {we provide the fully discrete, decoupled numerical scheme and establish its energy stability.} Numerical results {are} reported in Section \ref{sec(5)}.
\section{The Cahn-Hilliard-Navier-Stokes-Darcy model}\label{sec(2)} In this section, we propose the Cahn-Hilliard-Navier-Stokes-Darcy model (CHNSD) for two-phase flows of {different densities and viscosities} in a fluid layer overlying porous media. We refer to \cite{DHan_DSun_XWang_1,DHan_XWang_HWu_1} for a phase field model for two-phase flows of matched density in the coupled setting where the linear flow regime (Stokes equations) is assumed in the free flow region. We provide the weak formulation of this model and then show that the model obeys a dissipative energy law on the PDE level. We also introduce an unconditionally stable coupled time-stepping method for solving the CHNSD system. \subsection{The model} We consider the coupled CHNSD system on a bounded connected domain $\Omega=\Omega_c \bigcup \Omega_m \subset {\mathbb R}^{\mbox{\textbf{d}}}, \ (\textbf{d} =2, 3)$ consisting of a free-flow region $\Omega_c$ and a porous media region $\Omega_m$. Let $\partial{\Omega}_c$ and $\partial{\Omega}_m$ denote the Lipschitz continuous boundaries of $\Omega_c$ and $\Omega_m$ with the outward unit normal vectors ${\bm{n}}_c$ and ${\bm{n}}_m$ to the fluid and the porous media regions, respectively. The interface between the two parts is denoted by $\Gamma$, i.e.\ $\Gamma:=\partial{\Omega}_m\cap\partial{\Omega}_c$. A typical two-dimensional geometry is illustrated in Figure \ref{Stokes_marcy_domain_illustration}. Let $w_j~(j=c,m)$ denote the chemical potentials and $M_j~(j=c,m)$ denote mobility constants related to the relaxation time scale. {Let $f(\phi)$ be a polynomial of $\phi$ such that $f(\phi)=F'(\phi)$, where $F(\phi)$ represents the Helmholtz free energy and is commonly taken to be a non-convex function of $\phi$ for two immiscible flows.
In this article, we consider the Ginzburg-Landau double-well potential $F(\phi)=\frac{1}{4\epsilon}(\phi^2-1)^2$ with the width of mixing layer $\epsilon$.} $\rho$ and $\nu$ are the density and viscosity of the mixture, defined by \begin{eqnarray}\label{mix_define} \rho=\frac{\rho_1-\rho_2}{2}\phi+\frac{\rho_1+\rho_2}{2},\quad \nu=\frac{\nu_1-\nu_2}{2}\phi+\frac{\nu_1+\nu_2}{2}. \end{eqnarray} The gravity vector is ${\bm{g}}=g{\bm{j}}$ with the gravity constant $g$ and the unit upward vector ${\bm{j}}$. $\rho{\bm{g}}$ denotes the external gravitational force. Furthermore, $\gamma$ and $\epsilon$ denote the elastic relaxation time and the capillary width, respectively, of the thin interfacial region. The order parameters (phase functions) are denoted by $\phi_j~(j=c,m)$ in $\Omega_j~(j=c,m)$; they assume the distinct values $\pm 1$ in the bulk phases away from the diffuse interface and vary smoothly inside it. In the porous media region $\Omega_m$, consider the porous media flow governed by the following Cahn-Hilliard-Darcy (CHD) system: \begin{eqnarray} \mathbb K^{-1}{\bm{u}}_m+\nabla p_m+\phi_m\nabla w_m&=&\rho {\bm{g}},\label{Darcy_law_time_BJ}\\ \nabla \cdot {\bm{u}}_m&=&0,\label{Darcy_divergence_free_time_BJ}\\ \frac{\partial \phi_m}{\partial t}+\nabla\cdot({\bm{u}}_m\phi_m)-\nabla \cdot \left(M_m \nabla w_m\right)&=&0,\label{equation_for_marcy_time_BJ}\\ w_m+\gamma\epsilon \triangle \phi_m-\gamma f(\phi_m)&=& 0,\label{Darcy_chemical_potential_time_BJ} \end{eqnarray} where ${\bm{u}}_m$ is the fluid discharge rate in the porous media, $\mathbb K$ is the hydraulic conductivity tensor, $p_m$ is the hydraulic head, and the term $\phi_m\nabla w_m$ is the extra stress induced by the free energy.
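For concreteness, the Ginzburg-Landau potential above yields the explicit nonlinearity
\begin{eqnarray*}
f(\phi)=F'(\phi)=\frac{1}{\epsilon}\phi(\phi^2-1),
\end{eqnarray*}
so that, for instance, \eqref{Darcy_chemical_potential_time_BJ} reads $w_m=-\gamma\epsilon \triangle \phi_m+\frac{\gamma}{\epsilon}\phi_m(\phi_m^2-1)$. We also note that evaluating \eqref{mix_define} at the bulk values $\phi=\pm1$ recovers the pure-phase parameters, $\rho(1)=\rho_1$ and $\rho(-1)=\rho_2$, and similarly for $\nu$.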
Assuming external forces to be zero and inserting Darcy's law \eqref{Darcy_law_time_BJ} into the mass conservation equation \eqref{Darcy_divergence_free_time_BJ}, we obtain the following second-order formulation: \begin{eqnarray} -\nabla \cdot ({\mathbb K}\nabla p_m+{\mathbb K}\phi_m\nabla w_m)&=&0.\label{second_order_darcy} \end{eqnarray} After solving this equation, one can recover the Darcy velocity via \eqref{Darcy_law_time_BJ}. In the fluid region $\Omega_c$, consider the two-phase fluid flow governed by a coupled Cahn-Hilliard-Navier-Stokes (CHNS) system with different densities and viscosities: \begin{eqnarray} \rho\left(\frac{\partial {\bm{u}}_c}{\partial t}+({\bm{u}}_c \cdot \nabla) {\bm{u}}_c\right)-\nabla\cdot\mathbb{T}({\bm{u}}_c,p_c)+\phi_c\nabla w_c&=&\rho {\bm{g}},\label{time_ctokes_equation_BJ}\\ \nabla \cdot {\bm{u}}_c&=&0,\label{Stokes_divergence_free_time_BJ}\\ \frac{\partial \phi_c}{\partial t}+\nabla\cdot({\bm{u}}_c\phi_c)-\nabla \cdot \left(M_c \nabla w_c\right)&=&0,\label{Stokes_for_marcy_time_BJ}\\ w_c+\gamma\epsilon \triangle \phi_c-\gamma f(\phi_c)&=& 0,\label{Stokes_chemical_potential_time_BJ} \end{eqnarray} where ${\bm{u}}_c$ is the fluid velocity, $p_c$ is the kinematic pressure, $\nu$ is the kinematic viscosity of the fluid, $\mathbb{T}({\bm{u}}_c,p_c)=2\nu \mathbb{D}({\bm{u}}_c)-p_c\mathbb{I}$ is the stress tensor, $\mathbb{D}({\bm{u}}_c)=(\nabla {\bm{u}}_c+\nabla^T{\bm{u}}_c)/2$ is the deformation tensor, and ${\mathbb I}$ is the identity matrix.
\begin{figure}[h] \centering \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{5pt} \includegraphics[height=2.5in]{Stokes_darcy_domain_illustration.eps} \caption{A sketch of the porous media domain $\Omega_m$, the fluid domain $\Omega_c$, and the interface $\Gamma$.} \label{Stokes_marcy_domain_illustration} \end{figure} We now introduce domain interface boundary conditions in order to couple the CHD system \eqref{Darcy_divergence_free_time_BJ}-\eqref{Darcy_chemical_potential_time_BJ} and the CHNS system \eqref{time_ctokes_equation_BJ}-\eqref{Stokes_chemical_potential_time_BJ}. The continuity of the normal component of velocity is assumed across the interface \begin{eqnarray} {\bm{u}}_c\cdot {\bm{n}}_c= -{\bm{u}}_m\cdot {\bm{n}}_m. \label{BJS-1_time} \end{eqnarray} The balance of the normal force over the interface is satisfied by~\cite{DHan_DSun_XWang_1,MCai_MMu_JXu_2,VGirault_BRiviere_1} \begin{eqnarray} -{\bm{n}}_c\cdot (\mathbb{T}({\bm{u}}_c,p_c)\cdot {\bm{n}}_c)+\frac{\rho}{2}|{\bm{u}}_c|^2= p_m. \label{BJS-3_time} \end{eqnarray} The Beavers-Joseph-Saffman-Jones (BJS) interface condition \cite{GBeavers_DJoseph_1} holds as follows \begin{eqnarray} - {\mathbf{{\mbox{\boldmath$\tau$}}}}_j\cdot(\mathbb{T}({\bm{u}}_c,p_c)\cdot {\bm{n}}_c) = \frac{\alpha\nu \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\mathbf{{\mbox{\boldmath$\tau$}}}_j \cdot {\bm{u}}_c, \label{BJS-2_time} \end{eqnarray} where $\mathbf{{\mbox{\boldmath$\tau$}}}_j$ $(j=1,\cdots,d-1)$ are mutually orthogonal unit tangential vectors along the interface $\Gamma$, and $\prod$ is the permeability of the porous media.
Moreover, we assume the continuity conditions for the phase field function, the chemical potential, and their normal derivatives on the interface $\Gamma$ \cite{DHan_DSun_XWang_1,DHan_XWang_HWu_1}, \begin{eqnarray} \phi_c&=&\phi_m,\label{phase_interface_con_1}\\ w_c&=&w_m,\label{phase_interface_con_2}\\ \nabla \phi_c\cdot {\bm{n}}_c &=&-\nabla \phi_m \cdot {\bm{n}}_m,\label{phase_interface_con_3}\\ M_c \nabla w_c \cdot {\bm{n}}_c&=&-M_m \nabla w_m \cdot {\bm{n}}_m .\label{phase_interface_con_4} \end{eqnarray} For the boundary conditions and initial conditions, we consider \begin{eqnarray*} {\bm{u}}_m\cdot{\bm{n}}_m|_{\Gamma_m} = 0,\label{DCH_bound_con_velocity}~~ \nabla \phi_m \cdot {\bm{n}}_m|_{\Gamma_m}= 0,\label{DCH_bound_con_phase}~~ M_m \nabla w_m \cdot {\bm{n}}_m|_{\Gamma_m} = 0, \label{DCH_bound_con_potential} \end{eqnarray*} on $\Gamma_m=\partial\Omega_m\backslash \Gamma$, and \begin{eqnarray*} {\bm{u}}_c|_{\Gamma_c} = 0,\label{NSCH_bound_con_velocity}~~ \nabla \phi_c \cdot {\bm{n}}_c|_{\Gamma_c} = 0,\label{NSCH_bound_con_phase}~~ M_c \nabla w_c \cdot {\bm{n}}_c|_{\Gamma_c} = 0,\label{NSCH_bound_con_potential} \end{eqnarray*} on $\Gamma_c=\partial\Omega_c\backslash \Gamma$. The initial conditions can be simply given as \begin{eqnarray*} \phi_j(0,x,y) = \phi_j^0(x,y),\,\, j=c,m,~~ {\bm{u}}_c(0,x,y) = {\bm{u}}_c^0(x,y).\label{initial condition for u_NSCH} \end{eqnarray*} For the ease of presentation, we assume external forces $\rho{\bm{g}}$ on the right side of equations \eqref{Darcy_law_time_BJ} and \eqref{time_ctokes_equation_BJ} to be zero as these forces are given quantities which enter the system linearly. Hence, they do not have a qualitative effect on estimates or results. Without loss of generality, we also assume that $\mathbb{K}$ is a bounded, symmetric and uniformly positive definite matrix. 
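We remark that, as a direct consequence of the phase equations \eqref{equation_for_marcy_time_BJ} and \eqref{Stokes_for_marcy_time_BJ} together with the interface conditions \eqref{BJS-1_time}, \eqref{phase_interface_con_1}, \eqref{phase_interface_con_4} and the boundary conditions above, the total amount of each phase is conserved:
\begin{eqnarray*}
\frac{d}{dt}\int_{\Omega}\phi\,\mbox{d}{\bm{x}}
=-\int_{\Omega_c}\nabla\cdot\left({\bm{u}}_c\phi_c-M_c\nabla w_c\right)\mbox{d}{\bm{x}}
-\int_{\Omega_m}\nabla\cdot\left({\bm{u}}_m\phi_m-M_m\nabla w_m\right)\mbox{d}{\bm{x}}=0,
\end{eqnarray*}
since the boundary integrals on $\Gamma_c$ and $\Gamma_m$ vanish, while on $\Gamma$ the flux contributions from the two sides cancel.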
\subsection{The weak formulation} In this subsection, we present the weak formulation of the CHNSD model system \eqref{Darcy_law_time_BJ}-\eqref{phase_interface_con_4}. Let $H^{m}\left(\Omega\right)$ be the classical Sobolev space with the norm $\|\cdot\|_m$, where $m$ is a nonnegative integer. The norm $\|\cdot\|_\infty$ denotes the essential supremum. For the sake of simplicity, we denote the $L^2$ norm $\|\cdot\|_0$ by $\|\cdot\|$. Furthermore, we set ${\bm{V}}=[H_0^1(\Omega)]^d=\{{\bm{v}} \in [H^1(\Omega)]^d:{\bm{v}}|_{\partial\Omega}=0\}$. Given $v\in L^1(\Omega_j)~(j=c,m)$, we denote its mean value by ${\hat {v}}=|\Omega_j|^{-1}\int_{\Omega_j}\,v(x)\,\mbox{d}x$. Then we define the space \begin{eqnarray} \dot{L}^2(\Omega_j):=\{v\in L^2(\Omega_j):\int_{\Omega_j} v\,\mbox{d}{\bm{x}}=0\}. \end{eqnarray} Let $\dot{H}^1(\Omega_j)=H^1(\Omega_j)\cap \dot{L}^2(\Omega_j)$ be a Hilbert space with inner product $(u,v)_{H^1}=\int_{\Omega_j}\,\nabla u\cdot \nabla v\,d{\bm{x}}$ due to the classical Poincar\'{e} inequality for functions with zero mean. We denote its dual space by $(\dot{H}^1(\Omega_j))'$. For the coupled CHNSD system, we introduce the following spaces, utilized throughout this paper: \begin{eqnarray} &&{\bm{X}}_c = \{{\bm{v}} \in [H^1(\Omega_c)]^d\,\,\,|\,\,\, {\bm{v}}=0\ \mbox{on $ \Gamma_c$}\},\nonumber\\ &&{\bm{X}}_m = \{{\bm{v}} \in [H^1(\Omega_m)]^d\,\,\,|\,\,\, {\bm{v}}\cdot{\bm{n}}_m=0\ \mbox{on $ \Gamma_m$}\},\nonumber\\ && {\bm{X}}_{j,div}= \{{\bm{v}} \in {\bm{X}}_j\,\,\,|\,\,\, \nabla \cdot {\bm{v}}=0\},\nonumber\\ && Q_c=L^2(\Omega_c),\quad Q_m= \dot{H}^1(\Omega_m),\nonumber\\ && \quad Y_j= H^1(\Omega_j),\quad Y= H^1(\Omega),\quad j=c,m.\nonumber \end{eqnarray} Define $P_\tau$ to be the projection onto the tangent space on $\Gamma$, i.e. $P_\tau{\bm{u}} = \sum_{j=1}^{d-1}({\bm{u}}\cdot{\mbox{\boldmath$\tau$}}_j){\mbox{\boldmath$\tau$}}_j$.
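Since $\{{\bm{n}},{\mbox{\boldmath$\tau$}}_1,\cdots,{\mbox{\boldmath$\tau$}}_{d-1}\}$ forms an orthonormal basis on $\Gamma$, the tangential projection admits the equivalent form
\begin{eqnarray*}
P_\tau{\bm{u}}={\bm{u}}-({\bm{u}}\cdot{\bm{n}}){\bm{n}},
\end{eqnarray*}
which is often the more convenient expression when assembling the interface terms arising from the Beavers-Joseph-Saffman-Jones condition.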
For the domain $\Omega_j~(j=c,m)$, $(\cdot,\cdot)$ denotes the $L^2$ inner product on the domain $\Omega_j$ determined by the subscripts of the integrated functions, and $\langle\cdot,\cdot\rangle$ denotes the $L^2$ inner product on the interface $\Gamma$. Then it is clear that \begin{eqnarray*} &&(u_m,v_m)=\int_{\Omega_m}u_mv_md{\bm{x}},\quad (u_c,v_c)=\int_{\Omega_c}u_cv_cd{\bm{x}},\quad (u,v)=\int_{\Omega_m}u_mv_md{\bm{x}}+\int_{\Omega_c}u_cv_cd{\bm{x}},\\ && \|u_m\|:=\left(\int_{\Omega_m}|u_m|^2d{\bm{x}}\right)^{\frac{1}{2}},\quad \|u_c\|:=\left(\int_{\Omega_c}|u_c|^2d{\bm{x}}\right)^{\frac{1}{2}},\quad \|u\|^2=\int_{\Omega_m}|u_m|^2d{\bm{x}}+\int_{\Omega_c}|u_c|^2d{\bm{x}}, \end{eqnarray*} where $u_m:=u|_{\Omega_m}$ and $u_c:=u|_{\Omega_c}$. We also denote by $H'$ the dual space of $H$, with the duality induced by the $L^2$ inner product. Different from the equal-density case \cite{MGao_XWang_1}, it is difficult to eliminate the nonlinear convective term of the Navier-Stokes equations in the proof of the energy law. Hence a new variable $\sigma=\sqrt{\rho}$ is introduced to replace $\rho$ \cite{JLGuermond_LQuartapelle_2000}. Using the mass conservation \begin{eqnarray}\label{mass_conservation} \frac{\partial \rho}{\partial t}+\nabla\cdot(\rho{\bm{u}}_c)=0, \end{eqnarray} one can derive, noting that $\frac{\partial \rho}{\partial t}=2\sigma\frac{\partial \sigma}{\partial t}$, \begin{eqnarray*} \sigma\frac{\partial (\sigma{\bm{u}}_c)}{\partial t}=\rho\frac{\partial{\bm{u}}_c}{\partial t}+\frac{1}{2}\frac{\partial \rho}{\partial t}{\bm{u}}_c=\rho\frac{\partial{\bm{u}}_c}{\partial t}-\frac{1}{2}\nabla\cdot(\rho{\bm{u}}_c){\bm{u}}_c.
\end{eqnarray*} Therefore, \eqref{time_ctokes_equation_BJ} can be rewritten, by replacing $\rho\frac{\partial{\bm{u}}_c}{\partial t}$ with $\sigma\frac{\partial (\sigma{\bm{u}}_c)}{\partial t}+\frac{1}{2}\nabla\cdot(\rho{\bm{u}}_c){\bm{u}}_c$, as \begin{eqnarray} &&\sigma\frac{\partial (\sigma{\bm{u}}_c)}{\partial t}+\rho\left({\bm{u}}_c \cdot \nabla\right){\bm{u}}_c-\nabla\cdot \mathbb{T}({\bm{u}}_c,p_c)+\phi_c \nabla w_c +\frac{1}{2}\nabla\cdot(\rho{\bm{u}}_c){\bm{u}}_c=0.\quad\label{time_ctokes_equation_BJ_1_an_time} \end{eqnarray} The application of this technique and the resulting \eqref{time_ctokes_equation_BJ_1_an_time} in the context of multiphase flows first appears in \cite{ShenYang2010} by Shen and Yang. It is noted by Lowengrub and Truskinovsky in \cite{JLowengrub_LTruskinovsky_1998} that the mass-averaged velocity which maintains the continuity equation \eqref{mass_conservation} is quasi-incompressible, that is, the mixture of two incompressible fluids is slightly compressible inside the diffusive interface. Hence the divergence-free condition in our model amounts to an approximation of the quasi-incompressibility of the mass-averaged velocity. This adoption is for the convenience of numerical modeling, so that classical numerical methods for incompressible flow, such as pressure correction, can be employed. The approximation can be justified from the point of view of the sharp interface limit, in the sense that our model recovers the sharp interface model as the interfacial width goes to zero in the case of a single domain (the sharp interface model for two-phase flow in the coupled setting remains open). It is also a common practice to adopt the simplification of incompressibility when the Mach number is small. We point out that one could use the divergence-free (solenoidal) volume-averaged velocity as is proposed in \cite{HAbels_HGarcke_GGrun_2012} by Abels et al.
In the formalism of volume-averaged velocity, the continuity equation \eqref{mass_conservation} is no longer valid, and there is an extra advection term from the chemical flux in the momentum equation. The numerical modeling utilizing the volume-averaged velocity is deferred to future work. By applying the interface conditions \eqref{BJS-1_time}-\eqref{phase_interface_con_4}, the weak formulation of the proposed Cahn-Hilliard-Navier-Stokes-Darcy model is given as follows: find \begin{eqnarray*} (p_m,{\bm{u}}_c,p_c,\phi,w)\in (Q_m,{\bm{X}}_c,Q_c,Y,Y) \end{eqnarray*} such that \begin{eqnarray} &&({\mathbb K}\nabla p_m,\nabla q)+({\mathbb K}\phi_m \nabla w_m,\nabla q)-\langle{\bm{u}}_c\cdot{\bm{n}}_c,q\rangle = 0,~\forall~q \in Q_m,\quad\label{DCH_weak_formulation_BJS_1_time}\\ &&(\sigma\frac{\partial (\sigma{\bm{u}}_c)}{\partial t},{\bm{v}})+\left(\rho\left({\bm{u}}_c \cdot \nabla\right){\bm{u}}_c,{\bm{v}}\right)+(2\nu\mathbb{D}({\bm{u}}_c),\mathbb{D}({\bm{v}})) -(p_c,\nabla\cdot{\bm{v}})+(\phi_c \nabla w_c,{\bm{v}})+\frac{1}{2}(\nabla\cdot(\rho{\bm{u}}_c){\bm{u}}_c,{\bm{v}}) \nonumber\\ &&\qquad\qquad\qquad +\langle p_m-\frac{\rho}{2}|{\bm{u}}_c|^2, {\bm{v}}\cdot {\bm{n}}_c \rangle +\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu P_\tau{\bm{u}}_c,P_\tau{\bm{v}}\rangle=0,~\forall~ {\bm{v}}\in {\bm{X}}_c,\quad\label{NSCH_weak_formulation_BJS_1_time}\\ && (\nabla\cdot{\bm{u}}_c,q) =0,~\forall~\, q \in Q_c,\quad\label{NSCH_weak_formulation_BJS_2_time}\\ && (\frac{\partial \phi}{\partial t},\psi)-({\bm{u}}\phi,\nabla\psi)+(M\nabla w,\nabla\psi)=0,~\forall~\psi\in Y,\quad\label{NSCH_weak_formulation_BJ_3_time}\\ && (w,\omega)-\gamma\epsilon(\nabla\phi,\nabla \omega)-\gamma(f(\phi),\omega)=0,~\forall~\omega\in Y, \quad\label{NSCH_weak_formulation_BJ_4_time} \end{eqnarray} where $t\in [0,T]$, $T$ is a finite time, ${\bm{u}}_m \in L^\infty(0,T;[L^2(\Omega_m)]^d) \cap L^2(0,T;{\bm{X}}_{m})$, ${\bm{u}}_c \in L^\infty(0,T;[L^2(\Omega_c)]^d) \cap
L^2(0,T;{\bm{X}}_{c,div})$, $\frac{\partial {\bm{u}}_c}{\partial t}\in L^2(0,T;{\bm{X}}'_{c,div})$, $p_j \in L^2(0,T;Q_j)$, $\phi_j \in L^\infty(0,T;Y_j) \cap L^2(0,T;H^3(\Omega_j))$, $\frac{\partial \phi_j}{\partial t} \in L^2(0,T;Y'_j)$, $w_j \in L^2(0,T;Y_j)$, and $j=\{c,m\}$, ~${\bm{u}}_m$~is defined by \begin{eqnarray}\label{newdefum} {\bm{u}}_m=-{\mathbb K}\nabla p_m-{\mathbb K}\phi_m\nabla w_m \end{eqnarray} based on \eqref{Darcy_law_time_BJ}. We inherit the idea from~\cite{WChen_DHan_XWang_2017}~that we can solve a single Cahn-Hilliard system on the whole domain $\Omega$. This is an alternative to~\cite{YGao_XHe_LMei_XYang_2018}, where two Cahn-Hilliard equations are solved on $\Omega_m$ and $\Omega_c$ separately. The other three interface conditions~\eqref{BJS-1_time}-\eqref{BJS-2_time}~are utilized in the traditional way for the single-phase Navier-Stokes-Darcy model in the literature~\cite{XMHe_JLi_YLin_JMing_1,VGirault_BRiviere_1,MLHadji_AAssala_FZNouri_1}. \subsection{A dissipative energy law} In order to show that the above weak formulation obeys a dissipative energy law, we first note that the total energy of the coupled system is given by \begin{eqnarray} E(t)=\frac{1}{2}\|\sigma{\bm{u}}_c\|^2 +\gamma[\frac{\epsilon}{2}\|\nabla \phi\|^2+(F(\phi),1)].\quad\label{Energy_equation} \end{eqnarray} \begin{theorem}\label{thm_energy} Assume $({\bm{u}}_m,{\bm{u}}_c,\phi)$ is a smooth solution to the initial boundary value problem \eqref{Darcy_law_time_BJ}-\eqref{phase_interface_con_4}.
Then $({\bm{u}}_m,{\bm{u}}_c,\phi)$ satisfies the basic energy law \begin{eqnarray} \frac{d}{dt}E(t)=-\mathcal{D}(t),\quad\label{Energy_equation_time} \end{eqnarray} where the energy dissipation $\mathcal{D}$ is given by \begin{eqnarray}\label{Energy_equation_dissipation} &&\mathcal{D}(t)=\|\sqrt{2\nu}\mathbb{D}({\bm{u}}_c)\|^2+ M\|\nabla w\|^2+\|\sqrt{\mathbb K^{-1}}{\bm{u}}_m\|^2+\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle\nu P_\tau{\bm{u}}_c,P_\tau{\bm{u}}_c\rangle.\qquad \end{eqnarray} \end{theorem} \noindent{\bf Proof.} First, choose the test functions ${\bm{v}}={\bm{u}}_c$ and $q=p_c$ in~\eqref{NSCH_weak_formulation_BJS_1_time}-\eqref{NSCH_weak_formulation_BJS_2_time}. Adding the resultants together, we get \begin{eqnarray}\label{weak_NSCH_Energy_1} &&\frac{1}{2}\frac{d}{dt}\|\sigma{\bm{u}}_c\|^2+(\rho({\bm{u}}_c\cdot\nabla) {\bm{u}}_c ,{\bm{u}}_c)+\frac{1}{2}(\nabla\cdot(\rho{\bm{u}}_c){\bm{u}}_c,{\bm{u}}_c)+(\phi_c \nabla w_c,{\bm{u}}_c) \nonumber\\ &&\hskip 1.9cm+\|\sqrt{2\nu}\mathbb{D}({\bm{u}}_c)\|^2+\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle\nu P_\tau{\bm{u}}_c,P_\tau{\bm{u}}_c\rangle+\langle{\bm{u}}_c\cdot{\bm{n}}_c,p_m-\frac{\rho}{2}|{\bm{u}}_c|^2\rangle=0.\quad\quad \end{eqnarray} Using integration by parts, we can show that \begin{eqnarray}\label{trilinear_property_NS_22} (({\bm{u}}_c\cdot\nabla) {\bm{v}} ,{\bm{v}})+\frac{1}{2}((\nabla\cdot {\bm{u}}_c){\bm{v}},{\bm{v}})=\frac{1}{2}\langle{\bm{u}}_c\cdot{\bm{n}}_c,{\bm{v}}\cdot{\bm{v}}\rangle, ~\forall {\bm{v}}\in V. 
\end{eqnarray} Thanks to \eqref{trilinear_property_NS_22}, we have \begin{eqnarray} &&((\rho{\bm{u}}_c\cdot\nabla) {\bm{u}}_c ,{\bm{u}}_c)+\frac{1}{2}(\nabla\cdot (\rho{\bm{u}}_c){\bm{u}}_c,{\bm{u}}_c)=\frac{1}{2}\langle\rho{\bm{u}}_c\cdot{\bm{n}}_c,{\bm{u}}_c\cdot{\bm{u}}_c\rangle.\label{trilinear_property_NS_2} \end{eqnarray} Thus, applying \eqref{trilinear_property_NS_2} in \eqref{weak_NSCH_Energy_1}, we obtain \begin{eqnarray}\label{weak_NSCH_Energy_2} &&\frac{1}{2}\frac{d}{dt}\|\sigma{\bm{u}}_c\|^2+(\phi_c \nabla w_c,{\bm{u}}_c)+\|\sqrt{2\nu}\mathbb{D}({\bm{u}}_c)\|^2+\langle{\bm{u}}_c\cdot{\bm{n}}_c,p_m\rangle +\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle\nu P_\tau{\bm{u}}_c,P_\tau{\bm{u}}_c\rangle=0.\quad \end{eqnarray} Second, by taking $q=p_m$ in \eqref{DCH_weak_formulation_BJS_1_time}, and applying \eqref{newdefum}, we obtain \begin{eqnarray}\label{weak_DCH_Energy_1} \begin{aligned} -({\bm{u}}_m,\nabla p_m)-\langle{\bm{u}}_c\cdot{\bm{n}}_c,p_m\rangle=0. \end{aligned} \end{eqnarray} Taking the inner product of \eqref{newdefum} with ${\bm{u}}_m$, we have \begin{eqnarray}\label{weak_DCH_Energy_2} \|\sqrt{\mathbb K^{-1}}{\bm{u}}_m\|^2=-(\nabla p_m,{\bm{u}}_m)-(\phi_m \nabla w_m,{\bm{u}}_m). \end{eqnarray} Adding \eqref{weak_DCH_Energy_1} and \eqref{weak_DCH_Energy_2}, we obtain \begin{eqnarray}\label{weak_DCH_Energy_3} \|\sqrt{\mathbb K^{-1}}{\bm{u}}_m\|^2+(\phi_m \nabla w_m,{\bm{u}}_m)-\langle{\bm{u}}_c\cdot{\bm{n}}_c,p_m\rangle=0.\quad \end{eqnarray} By taking $\psi=w$ and $\omega=-\frac{\partial \phi}{\partial t}$ in \eqref{NSCH_weak_formulation_BJ_3_time} and \eqref{NSCH_weak_formulation_BJ_4_time}, respectively, and adding these two equations, we derive \begin{eqnarray}\label{weak_DCH_Energy_4} \begin{aligned} &\gamma[\frac{\epsilon}{2}\frac{d}{dt}\|\nabla \phi\|^2+\frac{d}{dt}(F(\phi),1)]+M\|\nabla w\|^2-({\bm{u}}\phi,\nabla w)=0. 
\end{aligned} \end{eqnarray} Summing the above resultants \eqref{weak_NSCH_Energy_2}, \eqref{weak_DCH_Energy_3} and \eqref{weak_DCH_Energy_4} together, we obtain~\eqref{Energy_equation_time}. This completes the proof of Theorem \ref{thm_energy}. \mbox{}\hfill$\Box$ \subsection{An unconditionally stable coupled time-stepping method}\label{sec(3)} Unconditionally stable but coupled time-stepping methods can be readily constructed for solving the CHNSD system \eqref{DCH_weak_formulation_BJS_1_time}-\eqref{NSCH_weak_formulation_BJ_4_time}. Here we present such an example and discuss its energy stability. We shall follow the stabilization technique~\cite{ShenYang2010,ZhuCST1999,Xu06} to handle the non-convex double-well potential $F(\phi)$. In order to ensure the stability of this approach, we assume that the potential function $F(\phi)$ satisfies the following condition: there exists a constant $L$ such that \begin{eqnarray}\label{max_priciple} \max_{\phi \in \mathbb{R}} | F''(\phi) |\leq L. \end{eqnarray} It is clear that the common Ginzburg-Landau double-well potential $F(\phi)$ does not satisfy~\eqref{max_priciple}. Following \cite{NCondette_CMMelcher_ESuli_2011,ShenYang2010}, one truncates $F(\phi)$, still denoted by $F(\phi)$, such that \eqref{max_priciple} holds with $L=\frac{2}{\epsilon}$. We point out that both the IEQ method and the SAV approach will lead to linear schemes with energy laws reformulated in terms of Lagrange multipliers. Let $t_n$, $n=0,1,\cdots,M$, be a uniform partition of $[0, T]$ with $\Delta t=t_{n+1}-t_n=\frac{T}{M}$ being the time step size.
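For definiteness, we record one admissible truncation satisfying \eqref{max_priciple} with $L=\frac{2}{\epsilon}$, namely the quadratic extension (a variant of the truncations used in \cite{ShenYang2010,NCondette_CMMelcher_ESuli_2011}):
\begin{eqnarray*}
F(\phi)=\left\{
\begin{array}{ll}
\frac{1}{4\epsilon}(\phi^2-1)^2, & |\phi|\leq 1,\\
\frac{1}{\epsilon}(|\phi|-1)^2, & |\phi|> 1.
\end{array}
\right.
\end{eqnarray*}
Indeed, $F''(\phi)=\frac{1}{\epsilon}(3\phi^2-1)\in[-\frac{1}{\epsilon},\frac{2}{\epsilon}]$ for $|\phi|\leq 1$ and $F''(\phi)=\frac{2}{\epsilon}$ for $|\phi|>1$, with $F'$ and $F''$ matching at $\phi=\pm 1$.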
Then, we construct the following discrete time, and continuous space scheme in the weak form \eqref{DCH_weak_formulation_BJS_1_time}-\eqref{NSCH_weak_formulation_BJ_4_time}: Find \begin{eqnarray*} (p_m^{n+1},{\bm{u}}_c^{n+1},p_c^{n+1},\phi^{n+1},w^{n+1})\in (Q_m, {\bm{X}}_c, Q_c, Y, Y) \end{eqnarray*} such that for all $(q,{\bm{v}},q,\psi,\omega)\in (Q_m,{\bm{X}}_c,Q_c,Y,Y)$ \begin{align} &({\mathbb K}\nabla p_m^{n+1},\nabla q)+ ({\mathbb K} \phi_m^n\nabla w_m^{n+1},\nabla q)-\langle{\bm{u}}_c^{n+1}\cdot{\bm{n}}_c,q\rangle = 0,~\forall~\, q \in Q_m, \label{DCH_semi_disretization_BJS_1_time}\\ &(\sigma^{n+1}\frac{\sigma^{n+1}{\bm{u}}_c^{n+1}-\sigma^n{\bm{u}}_c^n}{\Delta t},{\bm{v}}) +\left(\rho^n\left({\bm{u}}_c^{n} \cdot \nabla\right){\bm{u}}_c^{n+1},{\bm{v}}\right) +(2\nu^{n}\mathbb{D}({\bm{u}}_c^{n+1}),\mathbb{D}({\bm{v}})) \nonumber\\ &\hskip 1.0cm-(p_c^{n+1},\nabla\cdot{\bm{v}})+(\phi_c^{n} \nabla w_c^{n+1} ,{\bm{v}})+\frac{1}{2}(\nabla\cdot(\rho^n{\bm{u}}_c^n){\bm{u}}_c^{n+1},{\bm{v}}) +\langle p_m^{n+1}, {\bm{v}}\cdot {\bm{n}}_c \rangle \nonumber\\ &\hskip 1.0cm-\frac{1}{2}\langle \rho^n {\bm{u}}_c^n\cdot{\bm{u}}_c^{n+1}, {\bm{v}}\cdot {\bm{n}}_c \rangle +\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^n P_\tau{\bm{u}}_c^{n+1},P_\tau{\bm{v}}\rangle=0, ~\forall~ {\bm{v}}\in {\bm{X}}_c,\quad \label{NSCH_semi_disretization_BJS_1_time}\\ & (\nabla\cdot{\bm{u}}_c^{n+1},q) =0,~\forall~\, q \in Q_c,\quad \label{NSCH_semi_disretization_BJS_2_time}\\ & (\frac{\phi^{n+1}-\phi^n}{\Delta t},\psi)-({\bm{u}}^{n+1}\phi^n,\nabla\psi)+(M \nabla w^{n+1},\nabla\psi)=0,~\forall~\psi\in Y,\quad \label{NSCH_semi_disretization_BJ_3_time}\\ & (w^{n+1},\omega)-\gamma\epsilon (\nabla\phi^{n+1},\nabla \omega)-\frac{\gamma}{\epsilon}(\phi^{n+1}-\phi^{n},\omega) -\gamma(f(\phi^n),\omega)=0, ~\forall~\omega\in Y,\quad \label{NSCH_semi_disretization_BJ_4_time} \end{align} where \begin{eqnarray}\label{DCH_velocity_semi} {\bm{u}}_m^{n+1}=-{\mathbb K}\nabla 
p_m^{n+1}-{\mathbb K}\phi_m^n\nabla w_m^{n+1}. \end{eqnarray} We now proceed to prove the energy stability theorem as follows. \begin{theorem}\label{thm_energy_semi} The scheme~\eqref{DCH_semi_disretization_BJS_1_time}-\eqref{NSCH_semi_disretization_BJ_4_time} is unconditionally energy stable, in the sense that its approximation $({\bm{u}}_c^{n+1},\phi_m^{n+1},\phi_c^{n+1})$ satisfies the following discrete energy law: \begin{eqnarray} &&E^{n+1}-E^{n}\leq-\mathcal{D}^{n+1},\quad\label{Decouple_Energy_equation_time} \end{eqnarray} where the discrete energy $E$ is defined as \begin{eqnarray} E^{n}=\frac{1}{2}\|\sigma^n{\bm{u}}_{c}^n\|^2 +\gamma[\frac{\epsilon}{2}\|\nabla\phi^n\|^2+(F(\phi^n),1)],\quad\label{dis_Energy_equation} \end{eqnarray} and the energy dissipation $\mathcal{D}^{n+1}$ is given by \begin{eqnarray}\label{Decouple_Energy_equation_dissipation} &&\mathcal{D}^{n+1}=\frac{1}{2}\|\sigma^{n+1}{\bm{u}}_c^{n+1}-\sigma^n{\bm{u}}_c^n\|^2 +\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_c^{n+1})\|^2+\Delta t\|\sqrt{\mathbb K^{-1}}{\bm{u}}_m^{n+1}\|^2 +\frac{ \gamma\epsilon}{2}\|\nabla(\phi^{n+1}-\phi^n)\|^2 \nonumber\\ &&\hskip 1.0cm+\Delta t M\|\nabla w^{n+1}\|^2 +\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^n P_\tau{\bm{u}}_c^{n+1},P_\tau{\bm{u}}_c^{n+1}\rangle. \end{eqnarray} \end{theorem} \noindent{\bf Proof.} We first consider the Cahn-Hilliard part. Taking $\psi=\Delta t w^{n+1}$ in~\eqref{NSCH_semi_disretization_BJ_3_time}, and using the identity \begin{eqnarray}\label{identity} 2a(a-b)=a^2-b^2+(a-b)^2, \end{eqnarray} we get \begin{eqnarray}\label{Decouple_NSCH_Energy_8} &&(\phi^{n+1}-\phi^n,w^{n+1})-\Delta t ({\bm{u}}^{n+1}\phi^n,\nabla w^{n+1})+\Delta tM\|\nabla w^{n+1}\|^2=0.
\end{eqnarray} We take $\omega=-(\phi^{n+1}-\phi^n)$ in~\eqref{NSCH_semi_disretization_BJ_4_time}, use~\eqref{identity} and the Taylor expansion \begin{eqnarray}\label{Taylor_expansion} F(\phi^{n+1})-F(\phi^n)=f(\phi^n)(\phi^{n+1}-\phi^n)+\frac{F''(\xi^n)}{2}(\phi^{n+1}-\phi^n)^2, \end{eqnarray} to get \begin{eqnarray}\label{Decouple_NSCH_Energy_90} &&-(w^{n+1},\phi^{n+1}-\phi^n) +\frac{\gamma\epsilon}{2}[\|\nabla\phi^{n+1}\|^2-\|\nabla\phi^n\|^2+\|\nabla(\phi^{n+1}-\phi^n)\|^2] +\frac{\gamma}{\epsilon}\|\phi^{n+1}-\phi^{n}\|^2\nonumber\\ &&\hskip 3.2cm+\gamma(F(\phi^{n+1})-F(\phi^n),1) \leq \frac{\gamma}{2}|F''(\xi^n)|\|\phi^{n+1}-\phi^{n}\|^2 . \end{eqnarray} Then, combining~\eqref{max_priciple}, we derive \begin{eqnarray}\label{Decouple_NSCH_Energy_9} -(w^{n+1},\phi^{n+1}-\phi^n) +\frac{\gamma\epsilon}{2}[\|\nabla\phi^{n+1}\|^2-\|\nabla\phi^n\|^2+\|\nabla(\phi^{n+1}-\phi^n)\|^2] +\gamma(F(\phi^{n+1})-F(\phi^n),1)\leq0.\quad \end{eqnarray} Adding~\eqref{Decouple_NSCH_Energy_8} and \eqref{Decouple_NSCH_Energy_9} together, we get \begin{eqnarray}\label{Decouple_NSCH_Energy S1_9} &&\frac{\gamma\epsilon}{2}[\|\nabla\phi^{n+1}\|^2-\|\nabla\phi^n\|^2] +\gamma(F(\phi^{n+1})-F(\phi^n),1)+\Delta tM\|\nabla w^{n+1}\|^2\nonumber\\ &&\hskip 2.5cm +\frac{\gamma\epsilon}{2}\|\nabla(\phi^{n+1}-\phi^n)\|^2-\Delta t ({\bm{u}}^{n+1}\phi^n,\nabla w^{n+1})\leq0. \end{eqnarray} Then, we consider the conduit part.
Thanks to \eqref{trilinear_property_NS_22}, we have \begin{eqnarray} &&((\rho^n{\bm{u}}_c^n\cdot\nabla) {\bm{u}}_c^{n+1} ,{\bm{u}}_c^{n+1})+\frac{1}{2}(\nabla\cdot (\rho^n{\bm{u}}_c^n){\bm{u}}_c^{n+1},{\bm{u}}_c^{n+1})=\frac{1}{2}\langle\rho^n{\bm{u}}_c^n\cdot{\bm{u}}_c^{n+1},{\bm{u}}_c^{n+1}\cdot{\bm{n}}_c\rangle.\label{trilinear_property_NS_32} \end{eqnarray} By taking the test function ${\bm{v}}=\Delta t{\bm{u}}_c^{n+1}$ in~\eqref{NSCH_semi_disretization_BJS_1_time} and $q=\Delta tp_c^{n+1}$ in \eqref{NSCH_semi_disretization_BJS_2_time}, summing the resultants, and applying~\eqref{trilinear_property_NS_32} and \eqref{identity}, we obtain \begin{eqnarray}\label{Decouple_NSCH_Energy S1_1} &&\frac{1}{2}[\|\sigma^{n+1}{\bm{u}}_c^{n+1}\|^2-\|\sigma^{n}{\bm{u}}_c^n\|^2+\|\sigma^{n+1}{\bm{u}}_c^{n+1}-\sigma^{n}{\bm{u}}_c^n\|^2] +\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_c^{n+1})\|^2+\Delta t( \phi_c^{n}\nabla w_c^{n+1},{\bm{u}}_c^{n+1})\nonumber\\ &&\hskip 2.5cm+\Delta t \langle{\bm{u}}_c^{n+1}\cdot{\bm{n}}_c,p_m^{n+1}\rangle+\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_c^{n+1},P_\tau{\bm{u}}_c^{n+1}\rangle=0. \end{eqnarray} Next, we consider the matrix part. Choosing $q=\Delta t p_m^{n+1}$ in \eqref{DCH_semi_disretization_BJS_1_time} and taking the inner product of~\eqref{DCH_velocity_semi} with $\Delta t {\bm{u}}_m^{n+1}$, then adding the resultants together, we derive \begin{eqnarray}\label{Decouple_DCH_Energy_S1_3} &&\Delta t\|\sqrt{\mathbb K^{-1}}{\bm{u}}_m^{n+1}\|^2+\Delta t(\phi_m^n\nabla w_m^{n+1},{\bm{u}}_m^{n+1})-\Delta t\langle{\bm{u}}_c^{n+1}\cdot{\bm{n}}_c,p_m^{n+1}\rangle=0 .
\end{eqnarray} Summing~\eqref{Decouple_NSCH_Energy S1_9}, \eqref{Decouple_NSCH_Energy S1_1} and \eqref{Decouple_DCH_Energy_S1_3} together, we have \begin{eqnarray}\label{Decouple_DCH_Energy_9} \begin{aligned} E^{n+1}-E^n &\leq-\frac{1}{2}\|\sigma^{n+1}{\bm{u}}_c^{n+1}-\sigma^{n}{\bm{u}}_c^n\|^2 -\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_c^{n+1})\|^2-\Delta t \|\sqrt{\mathbb K^{-1}}{\bm{u}}_m^{n+1}\|^2 \\ &-\frac{ \gamma\epsilon}{2}\|\nabla(\phi^{n+1}-\phi^n)\|^2 -\Delta t M\|\nabla w^{n+1}\|^2 -\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^n P_\tau{\bm{u}}_c^{n+1},P_\tau{\bm{u}}_c^{n+1}\rangle, \end{aligned} \end{eqnarray} namely, we obtain \eqref{Decouple_Energy_equation_time}. Therefore, the conclusion of Theorem \ref{thm_energy_semi} follows. \mbox{}\hfill$\Box$ \section{An unconditionally stable decoupled numerical scheme}\label{sec(4)} In this section, we present an unconditionally stable decoupled numerical scheme for solving the CHNSD model. Finite elements are used for the spatial discretization. Let $\Im_h$ be a quasi-uniform triangulation of the domain $\Omega$ with mesh size $h$. We introduce the finite element spaces $Y_h\subset Y$, $Y_{jh}\subset Y_j$, ${\bm{X}}_{ch}\subset {\bm{X}}_c$ and $Q_{jh}\subset Q_j$ with $j=c,m$. Here we assume ${\bm{X}}_{ch}\subset {\bm{X}}_c$ and $Q_{ch}\subset Q_c$ satisfy an inf-sup condition for the divergence operator in the following form: There exists a constant $C>0$ independent of $h$ such that the LBB condition \begin{eqnarray*} \inf_{0\neq q_h}\sup_{0\neq \bm{v}_h} \frac{(\nabla\cdot\bm{v}_h,q_h)}{\|\bm{v}_h\|_1}> C\|q_h\|, \; \forall~ q_h\in Q_{ch}, \bm{v}_h\in {\bm{X}}_{ch} \label{inf_sup_condition} \end{eqnarray*} holds.
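For instance, the classical Taylor--Hood pair of continuous piecewise polynomials on $\Im_h$,
\begin{eqnarray*}
{\bm{X}}_{ch}=\{{\bm{v}}_h\in C(\overline{\Omega}_c)^d:{\bm{v}}_h|_K\in[P_2(K)]^d,~\forall K\in\Im_h\}\cap{\bm{X}}_c,\qquad
Q_{ch}=\{q_h\in C(\overline{\Omega}_c):q_h|_K\in P_1(K),~\forall K\in\Im_h\},
\end{eqnarray*}
is known to satisfy this inf-sup condition; any other LBB-stable velocity-pressure pair may be used instead.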
We first recall the following lemma for the estimate of the interface term from \cite{WChen_DHan_XWang_2017,MMoraiti_1}: \begin{lemma}\label{lemma31} There exists a constant $C$ such that, for ${\bm{v}}\in{\bm{X}}_{c}$ and $q_{mh}\in Q_{mh}$, \begin{eqnarray}\label{bound_interface1} |\langle{\bm{v}}\cdot{\bm{n}}_c,q_{mh}\rangle|\leq C \|{\bm{v}}\|_{{\bm{X}}_{div}}\|\nabla q_{mh}\|, \end{eqnarray} where $\|{\bm{v}}\|_{{\bm{X}}_{div}}^2=\|{\bm{v}}\|^2+\|\nabla\cdot{\bm{v}}\|^2$. \end{lemma} In order to decouple the velocity and pressure in the Navier-Stokes equations, we follow the idea of the artificial compressibility method \cite{ AJChorin_2,VDeCaria_WLayton_MMcLaughlin_CMAME_2017,XHe_NJiang_CQiu_IJNME_2019,RTemam_BSMF_1968, NNYanenko_1971} and replace the divergence-free condition by \begin{eqnarray*} \nabla\cdot{\bm{v}}-\delta p_t=0, \end{eqnarray*} where $\delta$ is an artificial compressibility parameter, so that the pressure can be updated explicitly. We propose the following decoupled, unconditionally stable, linear scheme: {\bf Step 1.} Find $(\phi_h^{n+1},w_h^{n+1})\in Y_{h}\times Y_{h}$, such that \begin{eqnarray} && (\frac{\phi_h^{n+1}-\phi_h^n}{\Delta t},\psi_h)-(\bar{\bm{u}}_h^{n+1} \phi_h^n,\nabla\psi_h)+(M\nabla w_h^{n+1},\nabla \psi_h)=0,~\forall~\psi_h\in Y_h, \quad\label{2Decouple_DCH_full_disretization_BJS_2_time}\\ && (w_h^{n+1},{\omega}_h)-\gamma\epsilon(\nabla\phi_h^{n+1},\nabla {\omega}_h)-\frac{\gamma}{\epsilon}(\phi_h^{n+1}-\phi_h^{n},{\omega}_h) -\gamma(f(\phi_h^n),{\omega}_h)=0,~\forall~{\omega}_h\in Y_h,\quad\label{2Decouple_DCH_full_disretization_BJS_3_time} \end{eqnarray} where \begin{eqnarray*} \bar{\bm{u}}_h^{n+1}= \left\{ \begin{array}{ll} {\bm{u}}_{c\star}^n,\quad {\bm{x}}\in\Omega_c,\\ {\bm{u}}_{mh}^{n+1},\quad {\bm{x}}\in\Omega_m, \end{array} \right.
\end{eqnarray*} and ${\bm{u}}_{c\star}^n$ and ${\bm{u}}_{mh}^{n+1}$ are defined as follows: \begin{eqnarray} {\bm{u}}_{c\star}^n&=&{\bm{u}}_{ch}^n-\frac{1}{\rho^n}\Delta t \phi_{ch}^n\nabla w_{ch}^{n+1},\label{CS2_flux_identity}\\ {\bm{u}}_{mh}^{n+1}&=&-{\mathbb K}\nabla p_{mh}^n-{\mathbb K}\phi_{mh}^n \nabla w_{mh}^{n+1}.\label{DCH_velocity_time} \end{eqnarray} {\bf Step 2.} Find $p_{mh}^{n+1}\in Q_{mh}$, such that \begin{eqnarray} &&({\mathbb K}\nabla p_{mh}^{n+1},\nabla q_h)+({\mathbb K}\phi_{mh}^n\nabla w_{mh}^{n+1},\nabla q_h)+\beta\Delta t(\nabla p_{mh}^{n+1},\nabla q_h)-\langle{\bm{u}}_{ch}^{n}\cdot{\bm{n}}_c,q_h\rangle = 0,~\forall~\, q_h \in Q_{mh}.\label{2Decouple_DCH_full_disretization_BJS_1_time} \end{eqnarray} {\bf Step 3.} Find ${\bm{u}}_{ch}^{n+1}\in {\bm{X}}_{ch}$, such that \begin{eqnarray}\label{2Decouple_NSCH_full_disretization_BJS_1_time} &&(\frac{\bar{\rho}^{n+1}{\bm{u}}_{ch}^{n+1}-\rho^{n}{\bm{u}}_{ch}^n}{\Delta t},\bm{v}_h) +\left(\rho^{n}\left({\bm{u}}_{ch}^{n} \cdot \nabla\right){\bm{u}}_{ch}^{n+1},\bm{v}_h\right) +(2\nu^{n}\mathbb{D}({\bm{u}}_{ch}^{n+1}),\mathbb{D}(\bm{v}_h)) +( \phi_{ch}^{n}\nabla w_{ch}^{n+1},\bm{v}_h) \nonumber\\ &&\hskip 2.0cm -(2p_{ch}^{n}-p_{ch}^{n-1},\nabla\cdot\bm{v}_h)+\frac{1}{2}(\nabla\cdot(\rho^{n}{\bm{u}}_{ch}^n){\bm{u}}_{ch}^{n+1},\bm{v}_h) +\frac{\xi}{\Delta t} (\nabla\cdot ({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n}),\nabla\cdot\bm{v}_h)\quad \\ &&\hskip 2.0cm+\langle p_{mh}^{n+1}, \bm{v}_h\cdot {\bm{n}}_c \rangle-\frac{1}{2}\langle \rho^n{\bm{u}}_{ch}^n\cdot{\bm{u}}_{ch}^{n+1}, \bm{v}_h\cdot {\bm{n}}_c \rangle+\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^n P_\tau{\bm{u}}_{ch}^{n+1},P_\tau\bm{v}_h\rangle=0,~\forall~\bm{v}_h\in {\bm{X}}_{ch},\nonumber \end{eqnarray} with $\bar{\rho}^{n+1}=\dfrac{{\rho}^{n+1}+{\rho}^{n}}{2}$.\\ {\bf Step 4.} Find $p_{ch}^{n+1}\in Q_{ch}$, such that \begin{eqnarray} &&(p_{ch}^{n+1}-p_{ch}^{n},q_h)=-\frac{\zeta}{\Delta
t}(\nabla\cdot{\bm{u}}_{ch}^{n+1},q_h),~\forall~\, q_h \in Q_{ch},\quad\label{2Decouple_NSCH_full_disretization_BJS_5_time} \end{eqnarray} with $\zeta=\frac{1}{4}\min\{\rho_1,\rho_2\}$. \begin{remark}\label{re3.7} The term $\beta \Delta t (\nabla p_{mh}^{n+1},\nabla q_h)$ in \eqref{2Decouple_DCH_full_disretization_BJS_1_time} is a stabilization term introduced to obtain the unconditional stability of the linearized scheme \eqref{2Decouple_DCH_full_disretization_BJS_1_time}. The parameter $\beta$ depends only on the geometry of $\Omega$. \end{remark} \begin{remark}\label{re3.8} The term $\frac{\xi}{\Delta t} \nabla (\nabla\cdot ({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n}))$ in \eqref{2Decouple_NSCH_full_disretization_BJS_1_time} is introduced to ensure the energy stability associated with the continuity equation~\cite{VDeCaria_WLayton_MMcLaughlin_CMAME_2017,JAFiordilino_WLayton_YRong_CMAME_2018}. Thus, one can derive the stability of the numerical method for an appropriately chosen constant $\xi$. \end{remark} \begin{remark}\label{re3.6} The scheme \eqref{2Decouple_DCH_full_disretization_BJS_2_time}-\eqref{2Decouple_NSCH_full_disretization_BJS_5_time} is a decoupled, linear scheme. Indeed, \eqref{2Decouple_DCH_full_disretization_BJS_2_time}-\eqref{2Decouple_DCH_full_disretization_BJS_3_time}, \eqref{2Decouple_DCH_full_disretization_BJS_1_time}, \eqref{2Decouple_NSCH_full_disretization_BJS_1_time} and \eqref{2Decouple_NSCH_full_disretization_BJS_5_time} are decoupled linear elliptic equations for $\phi_h^{n+1}$, $w_h^{n+1}$, $p_{mh}^{n+1}$, ${\bm{u}}_{ch}^{n+1}$ and $p_{ch}^{n+1}$. Therefore, at each time step, one only needs to solve a sequence of linear elliptic problems, which can be done very efficiently. \end{remark} We now prove the energy stability theorem as follows.
\begin{theorem}\label{S2_thm_energy_Decouple} Let $({\bm{u}}_{ch}^{n+1},p_{mh}^{n+1},p_{ch}^{n+1},\phi_h^{n+1})$ be the solution of the fully discrete scheme \eqref{2Decouple_DCH_full_disretization_BJS_2_time}-\eqref{2Decouple_NSCH_full_disretization_BJS_5_time}. Then the approximation $({\bm{u}}_{ch}^{n+1},p_{mh}^{n+1},p_{ch}^{n+1},\phi_h^{n+1})$ satisfies the following modified discrete energy law: \begin{eqnarray} &&\mathrm{\mathcal{E}}^{n+1}-\mathrm{\mathcal{E}}^{n}\leq-\mathcal{D}^{n+1},\quad\label{S2_Decouple_Energy_equation_time} \end{eqnarray} where the modified discrete energy $\mathrm{\mathcal{E}}^{n}$ is defined as \begin{eqnarray}\label{S2_dis_Energy_equation} &&\mathrm{\mathcal{E}}^{n}=E^n+\frac{\xi}{2} \|\nabla\cdot {\bm{u}}_{ch}^{n}\|^2+\frac{\Delta t^2}{2\zeta} \| p_{ch}^{n}\|^2+\frac{1}{2}\Delta t \|\sqrt{\mathbb K}\nabla p_{mh}^n\|^2, \end{eqnarray} with \begin{eqnarray*} E^{n}=\int_{\Omega_c}\frac{1}{2}|\sigma^n{\bm{u}}_{ch}^{n}|^2d{\bm{x}}+\gamma\int_{\Omega}[\frac{\epsilon}{2}|\nabla \phi_h^{n}|^2+F(\phi_h^{n})]d{\bm{x}}, \end{eqnarray*} and the energy dissipation $\mathcal{D}^{n+1}$ is given by \begin{eqnarray}\label{S2_Decouple_Energy_equation_dissipation} &&\mathcal{D}^{n+1}=\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_{ch}^{n+1})\|^2+\Delta tM\|\nabla w_h^{n+1}\|^2 +\frac{\gamma\epsilon}{2}\|\nabla\phi_h^{n+1}-\nabla\phi_h^n\|^2+\frac{\Delta t^2}{2\zeta}\|p_{ch}^{n}-p_{ch}^{n-1}\|^2 \nonumber\\ &&\hskip 1.3cm +\frac{1}{4}\Delta t\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2+\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_{ch}^{n+1},P_\tau{\bm{u}}_{ch}^{n+1}\rangle. \end{eqnarray} \end{theorem} \noindent{\bf Proof.} We first consider the full discretization~\eqref{2Decouple_DCH_full_disretization_BJS_2_time} and~\eqref{2Decouple_DCH_full_disretization_BJS_3_time} of the Cahn-Hilliard equation on the whole domain $\Omega$.
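A key point in this step is that the Taylor remainder $\frac{\gamma}{2}|F''(\xi^n)|\|\phi_h^{n+1}-\phi_h^{n}\|^2$ has to be absorbed by the stabilization term $\frac{\gamma}{\epsilon}\|\phi_h^{n+1}-\phi_h^{n}\|^2$, i.e., one needs $|F''(\xi^n)|\leq 2/\epsilon$. Assuming the standard double-well potential $F(\phi)=\frac{1}{4}(\phi^2-1)^2$ and the bound $|\xi^n|\leq 1$ from \eqref{max_priciple} (both are assumptions about quantities defined earlier in the paper), this holds whenever $\epsilon\leq 1$, since $|F''|\leq 2$ on $[-1,1]$; a quick numerical confirmation of the latter bound (an illustration only):

```python
def F_pp(phi):
    # Second derivative of the assumed double-well potential F(phi) = (phi^2 - 1)^2 / 4.
    return 3.0 * phi ** 2 - 1.0

# Maximum of |F''| over the range |xi| <= 1 assumed from the maximum principle.
grid = [-1.0 + 2.0 * k / 1000 for k in range(1001)]
print(max(abs(F_pp(x)) for x in grid))  # -> 2.0
```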
Taking $\psi_h=\Delta t w_h^{n+1}$ in~\eqref{2Decouple_DCH_full_disretization_BJS_2_time}, we get \begin{eqnarray}\label{2Decouple_NSCH_Energy_8} &&(\phi_h^{n+1}-\phi_h^n,w_h^{n+1})-\Delta t (\bar{\bm{u}}_h^{n+1}\phi_h^n,\nabla w_h^{n+1})+\Delta tM\|\nabla w_h^{n+1}\|^2=0. \end{eqnarray} Taking ${\omega}_h=-(\phi_h^{n+1}-\phi_h^n)$ in~\eqref{2Decouple_DCH_full_disretization_BJS_3_time} and using the equality \eqref{identity} and the Taylor expansion \eqref{Taylor_expansion}, we get \begin{eqnarray}\label{2Decouple_NSCH_Energy_90} &&-(w_h^{n+1},\phi_h^{n+1}-\phi_h^n) +\frac{\gamma\epsilon}{2}[\|\nabla\phi_h^{n+1}\|^2-\|\nabla\phi_h^n\|^2] +\gamma(F(\phi_h^{n+1})-F(\phi_h^n),1)+\frac{\gamma}{\epsilon}\|\phi_h^{n+1}-\phi_h^{n}\|^2\nonumber\\ &&\hskip 3.3cm+\frac{\gamma\epsilon}{2}\|\nabla\phi_h^{n+1}-\nabla\phi_h^n\|^2 \leq \frac{\gamma}{2}|F''(\xi^n)|\|\phi_h^{n+1}-\phi_h^{n}\|^2.\quad \end{eqnarray} Then, combining~\eqref{max_priciple} and \eqref{2Decouple_NSCH_Energy_90}, we derive \begin{eqnarray}\label{2Decouple_NSCH_Energy_9} &&-(w_h^{n+1},\phi_h^{n+1}-\phi_h^n) +\frac{\gamma\epsilon}{2}[\|\nabla\phi_h^{n+1}\|^2-\|\nabla\phi_h^n\|^2] +\gamma(F(\phi_h^{n+1})-F(\phi_h^n),1) \leq-\frac{\gamma\epsilon}{2}\|\nabla\phi_h^{n+1}-\nabla\phi_h^n\|^2.\quad \end{eqnarray} Adding \eqref{2Decouple_NSCH_Energy_8} and \eqref{2Decouple_NSCH_Energy_9}, we obtain \begin{eqnarray}\label{2Decouple_CH_Energy_3} &&\frac{\gamma\epsilon}{2}[\|\nabla\phi_h^{n+1}\|^2-\|\nabla\phi_h^n\|^2] +\gamma(F(\phi_h^{n+1})-F(\phi_h^n),1)-\Delta t (\bar{\bm{u}}_h^{n+1}\phi_h^n,\nabla w_h^{n+1})\nonumber\\ &&\hskip 2.5cm\leq-\frac{\gamma\epsilon}{2}\|\nabla\phi_h^{n+1}-\nabla\phi_h^n\|^2-\Delta tM\|\nabla w_h^{n+1}\|^2.\quad \end{eqnarray} Next, we discuss the conduit part.
Taking the test function $\bm{v}_h=\Delta t{\bm{u}}_{ch}^{n+1}$ in~\eqref{2Decouple_NSCH_full_disretization_BJS_1_time}, combining~\eqref{trilinear_property_NS_32}, \eqref{CS2_flux_identity}, and the identity \eqref{identity}, we obtain \begin{eqnarray}\label{Decouple_NSCH_Energy S2_1} &&\frac{1}{2}[\|\sigma^{n+1}{\bm{u}}_{ch}^{n+1}\|^2-\|\sigma^{n}{\bm{u}}_{c\star}^n\|^2 +\|\sigma^{n}\left({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{c\star}^n\right)\|^2]+\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_{ch}^{n+1})\|^2 \nonumber\\ &&\hskip 2.1cm+\frac{\xi}{2} [\|\nabla\cdot {\bm{u}}_{ch}^{n+1}\|^2-\|\nabla\cdot {\bm{u}}_{ch}^{n}\|^2+\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n})\|^2]\nonumber\\ &&\hskip 2.1cm+\Delta t(p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1},\nabla\cdot{\bm{u}}_{ch}^{n+1}) -\Delta t(p_{ch}^{n+1},\nabla\cdot{\bm{u}}_{ch}^{n+1}) \nonumber\\ && \hskip 2.1cm+\Delta t \langle{\bm{u}}_{ch}^{n+1}\cdot{\bm{n}}_c,p_{mh}^{n+1}\rangle+\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_{ch}^{n+1},P_\tau{\bm{u}}_{ch}^{n+1}\rangle=0. 
\end{eqnarray} Taking $q_h=\dfrac{\Delta t^2}{\zeta}(p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1})$ in \eqref{2Decouple_NSCH_full_disretization_BJS_5_time}, and using~\eqref{identity}, we have \begin{eqnarray}\label{Decouple_NSCH_Energy S2_2} &&\frac{\Delta t^2}{2\zeta}[\|p_{ch}^{n+1}-p_{ch}^{n}\|^2-\|p_{ch}^{n}-p_{ch}^{n-1}\|^2+\|p_{ch}^{n+1}-2p_{ch}^n+p_{ch}^{n-1}\|^2] =\Delta t(\nabla\cdot{\bm{u}}_{ch}^{n+1},p_{ch}^{n+1}-2p_{ch}^n+p_{ch}^{n-1}).\quad \end{eqnarray} Taking $q_h=-\dfrac{\Delta t^2}{\zeta}p_{ch}^{n+1}$ in \eqref{2Decouple_NSCH_full_disretization_BJS_5_time}, and using~\eqref{identity}, we obtain \begin{eqnarray}\label{Decouple_NSCH_Energy S2_3} &&\frac{\Delta t^2}{2\zeta}[\| p_{ch}^{n+1}\|^2-\|p_{ch}^{n}\|^2+\|p_{ch}^{n+1}-p_{ch}^{n}\|^2] =-\Delta t(\nabla\cdot{\bm{u}}_{ch}^{n+1},p_{ch}^{n+1}).\qquad \end{eqnarray} Adding \eqref{Decouple_NSCH_Energy S2_2} and \eqref{Decouple_NSCH_Energy S2_3}, we get \begin{eqnarray}\label{Decouple_NSCH_Energy S2_4} &&\frac{\Delta t^2}{2\zeta}[\| p_{ch}^{n+1}\|^2-\| p_{ch}^{n}\|^2+\|p_{ch}^{n}-p_{ch}^{n-1}\|^2 -\|p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1}\|^2]\nonumber\\ &&\hskip 2.4cm=\Delta t(\nabla\cdot{\bm{u}}_{ch}^{n+1},p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1})-\Delta t(\nabla\cdot{\bm{u}}_{ch}^{n+1},p_{ch}^{n+1}). \end{eqnarray} Now, we estimate the term $\|p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1}\|^2$ in \eqref{Decouple_NSCH_Energy S2_4}. Taking the difference of \eqref{2Decouple_NSCH_full_disretization_BJS_5_time} at steps $t^{n+1}$ and $t^n$, we derive \begin{eqnarray}\label{Decouple_NSCH_Energy S2_5} &&p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1} =-\frac{\zeta}{\Delta t}\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^n),\qquad \end{eqnarray} which implies \begin{eqnarray}\label{Decouple_NSCH_Energy S2_7} &&\frac{\Delta t^2}{2\zeta}\|p_{ch}^{n+1}-2p_{ch}^{n}+p_{ch}^{n-1}\|^2\leq\frac{\zeta}{2}\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^n)\|^2.
\end{eqnarray} Adding \eqref{Decouple_NSCH_Energy S2_1}, \eqref{Decouple_NSCH_Energy S2_4} and \eqref{Decouple_NSCH_Energy S2_7}, we obtain \begin{eqnarray}\label{Decouple_NSCH_Energy S2_8} &&\frac{1}{2}[\|\sigma^{n+1}{\bm{u}}_{ch}^{n+1}\|^2-\|\sigma^{n}{\bm{u}}_{c\star}^n\|^2 +\|\sigma^{n}\left({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{c\star}^n\right)\|^2] +\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_{ch}^{n+1})\|^2+\frac{\Delta t^2}{2\zeta}[\| p_{ch}^{n+1}\|^2-\| p_{ch}^{n}\|^2] \nonumber\\ &&\hskip 1.7cm+\frac{\xi}{2} [\|\nabla\cdot {\bm{u}}_{ch}^{n+1}\|^2-\|\nabla\cdot {\bm{u}}_{ch}^{n}\|^2+\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n})\|^2] +\frac{\Delta t^2}{2\zeta}\|p_{ch}^n-p_{ch}^{n-1}\|^2\nonumber\\ &&\hskip 2.4cm+\Delta t \langle{\bm{u}}_{ch}^{n+1}\cdot{\bm{n}}_c,p_{mh}^{n+1}\rangle+\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_{ch}^{n+1},P_\tau{\bm{u}}_{ch}^{n+1}\rangle \nonumber\\ &&\hskip 2.4cm\leq \frac{\zeta}{2}\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^n)\|^2. \end{eqnarray} We rewrite \eqref{CS2_flux_identity} as \begin{eqnarray}\label{CS2_flux_identity_2} \frac{\rho^n({\bm{u}}_{c\star}^n-{\bm{u}}_{ch}^n)}{\Delta t}&=& -\phi_{ch}^n\nabla w_{ch}^{n+1}, \end{eqnarray} and take the inner product of \eqref{CS2_flux_identity_2} with $\Delta t {\bm{u}}_{c\star}^n$; using the identity \eqref{identity}, we obtain \begin{eqnarray}\label{Decouple_NSCH_Energy S2_9} &&\frac{1}{2}[\|\sigma^n{\bm{u}}_{c\star}^n\|^2-\|\sigma^n{\bm{u}}_{ch}^n\|^2+\|\sigma^n({\bm{u}}_{c\star}^n-{\bm{u}}_{ch}^n)\|^2]=- \Delta t (\phi_{ch}^n\nabla w_{ch}^{n+1},{\bm{u}}_{c\star}^n).
\end{eqnarray} Adding \eqref{Decouple_NSCH_Energy S2_8} and \eqref{Decouple_NSCH_Energy S2_9}, we obtain \begin{eqnarray}\label{Decouple_NSCH_Energy S2_11} &&\frac{1}{2}[\|\sigma^{n+1}{\bm{u}}_{ch}^{n+1}\|^2-\|\sigma^n{\bm{u}}_{ch}^n\|^2+\|\sigma^n({\bm{u}}_{c\star}^n-{\bm{u}}_{ch}^n)\|^2+\|\sigma^n({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{c\star}^n)\|^2] +\frac{\Delta t^2}{2\zeta}[\| p_{ch}^{n+1}\|^2-\| p_{ch}^{n}\|^2] \nonumber\\ &&\hskip 2.4cm+\frac{\xi}{2} [\|\nabla\cdot {\bm{u}}_{ch}^{n+1}\|^2-\|\nabla\cdot {\bm{u}}_{ch}^{n}\|^2]+\frac{\xi}{2}\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n})\|^2 +\frac{\Delta t^2}{2\zeta}\|p_{ch}^{n}-p_{ch}^{n-1}\|^2 \nonumber\\ &&\hskip 2.4cm+\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_{ch}^{n+1})\|^2+\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_{ch}^{n+1},P_\tau{\bm{u}}_{ch}^{n+1}\rangle+\Delta t \langle{\bm{u}}_{ch}^{n+1}\cdot{\bm{n}}_c,p_{mh}^{n+1}\rangle\nonumber\\ &&\hskip 2.4cm \leq -\Delta t (\phi_{ch}^n\nabla w_{ch}^{n+1},{\bm{u}}_{c\star}^n)+\frac{\zeta}{2}\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^n)\|^2. \end{eqnarray} Then, we study the matrix part. We take the inner product of \eqref{DCH_velocity_time} with $\Delta t {\bm{u}}_{mh}^{n+1}$ to get \begin{eqnarray}\label{Decouple_DCH_Energy_3} \Delta t \|\sqrt{\mathbb K^{-1}}{\bm{u}}_{mh}^{n+1}\|^2=\Delta t(\nabla (p_{mh}^{n+1}-p_{mh}^n),{\bm{u}}_{mh}^{n+1})-\Delta t(\nabla p_{mh}^{n+1},{\bm{u}}_{mh}^{n+1})-\Delta t(\phi_{mh}^n\nabla w_{mh}^{n+1},{\bm{u}}_{mh}^{n+1}). \end{eqnarray} From \eqref{DCH_velocity_time}, \eqref{2Decouple_DCH_full_disretization_BJS_1_time} can be written as \begin{eqnarray}\label{Decouple_DCH_full_disretization_BJS_11_time} -({\bm{u}}_{mh}^{n+1},\nabla q_h)+({\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^n),\nabla q_h)+\beta\Delta t (\nabla p_{mh}^{n+1},\nabla q_h)-\langle{\bm{u}}_{ch}^{n}\cdot{\bm{n}}_c,q_h\rangle = 0. 
\end{eqnarray} We take $q_h=\Delta t p_{mh}^{n+1}$ in \eqref{Decouple_DCH_full_disretization_BJS_11_time} and utilize the identity \eqref{identity} to obtain \begin{eqnarray}\label{Decouple_DCH_Energy_4} &&-\Delta t ({\bm{u}}_{mh}^{n+1},\nabla p_{mh}^{n+1})+\frac{1}{2}\Delta t [\|\sqrt{\mathbb K}\nabla p_{mh}^{n+1}\|^2-\|\sqrt{\mathbb K}\nabla p_{mh}^n\|^2+\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2]+\beta\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2 \nonumber\\ &&\hskip 3cm=\Delta t \langle{\bm{u}}_{ch}^{n}\cdot {\bm{n}}_c, p_{mh}^{n+1} \rangle.\quad\quad \end{eqnarray} Taking the sum of \eqref{Decouple_DCH_Energy_3} and \eqref{Decouple_DCH_Energy_4}, we get \begin{eqnarray}\label{2Decouple_DCH_Energy_5} &&\Delta t\|\sqrt{\mathbb K^{-1}}{\bm{u}}_{mh}^{n+1}\|^2 +\frac{1}{2}\Delta t [\|\sqrt{\mathbb K}\nabla p_{mh}^{n+1}\|^2-\|\sqrt{\mathbb K}\nabla p_{mh}^n\|^2+\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2]+\beta\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2\nonumber\\ &&\hskip 2.5cm=\Delta t(\nabla (p_{mh}^{n+1}-p_{mh}^n),{\bm{u}}_{mh}^{n+1})-\Delta t(\phi_{mh}^n\nabla w_{mh}^{n+1},{\bm{u}}_{mh}^{n+1})+\Delta t\langle{\bm{u}}_{ch}^{n}\cdot {\bm{n}}_c, p_{mh}^{n+1} \rangle. \end{eqnarray} Now, we estimate the term $(\nabla (p_{mh}^{n+1}-p_{mh}^n),{\bm{u}}_{mh}^{n+1})$. 
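The estimate that follows is the Cauchy-Schwarz inequality combined with the weighted Young inequality $|ab|\leq \frac{1}{4}a^2+b^2$, applied with $a=\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|$ and $b=\|\sqrt{\mathbb K^{-1}}{\bm{u}}_{mh}^{n+1}\|$. A quick numerical sanity check of the scalar inequality (an illustration only, not part of the scheme):

```python
import random

random.seed(1)

# Weighted Young inequality: |a*b| <= a*a/4 + b*b, since (|a|/2 - |b|)^2 >= 0.
def young_holds(a, b, tol=1e-12):
    return abs(a * b) <= a * a / 4.0 + b * b + tol

samples = [(random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0))
           for _ in range(10000)]
print(all(young_holds(a, b) for a, b in samples))  # -> True
```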
By the Cauchy-Schwarz and Young inequalities, we have \begin{eqnarray} \Delta t|(\nabla (p_{mh}^{n+1}-p_{mh}^n),{\bm{u}}_{mh}^{n+1})|\leq \Delta t \|\sqrt{\mathbb K^{-1}}{\bm{u}}_{mh}^{n+1}\|^2+\frac{1}{4}\Delta t \|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2.\label{Decouple_DCH_Energy_6} \end{eqnarray} Combining this with \eqref{2Decouple_DCH_Energy_5}, we obtain \begin{eqnarray} &&\frac{1}{2}\Delta t [\|\sqrt{\mathbb K}\nabla p_{mh}^{n+1}\|^2-\|\sqrt{\mathbb K}\nabla p_{mh}^n\|^2]+\frac{1}{4}\Delta t\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2+\beta\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2\nonumber\\ &&\hskip 2.5cm\leq-\Delta t(\phi_{mh}^n\nabla w_{mh}^{n+1},{\bm{u}}_{mh}^{n+1})+\Delta t\langle{\bm{u}}_{ch}^{n}\cdot {\bm{n}}_c, p_{mh}^{n+1} \rangle.\label{2Decouple_DCH_Energy_6} \end{eqnarray} Adding~\eqref{2Decouple_CH_Energy_3}, ~\eqref{Decouple_NSCH_Energy S2_11} and \eqref{2Decouple_DCH_Energy_6} together, we obtain \begin{eqnarray}\label{Decouple_NSCHDCH_Energy S2} &&\frac{1}{2}[\|\sigma^{n+1}{\bm{u}}_{ch}^{n+1}\|^2-\|\sigma^n{\bm{u}}_{ch}^n\|^2] +\frac{\gamma\epsilon}{2}[\|\nabla\phi_h^{n+1}\|^2-\|\nabla\phi_h^n\|^2] +\gamma(F(\phi_h^{n+1})-F(\phi_h^n),1) \nonumber\\ &&\hskip 2.4cm +\frac{\Delta t^2}{2\zeta}[\| p_{ch}^{n+1}\|^2-\| p_{ch}^{n}\|^2]+\frac{\xi}{2} [\|\nabla\cdot {\bm{u}}_{ch}^{n+1}\|^2-\|\nabla\cdot {\bm{u}}_{ch}^{n}\|^2]+\frac{\xi}{2}\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n})\|^2 \nonumber\\ &&\hskip 2.4cm+\frac{1}{2}\Delta t [\|\sqrt{\mathbb K}\nabla p_{mh}^{n+1}\|^2-\|\sqrt{\mathbb K}\nabla p_{mh}^n\|^2] +\frac{\Delta t^2}{2\zeta}\|p_{ch}^{n}-p_{ch}^{n-1}\|^2 +\frac{1}{4}\Delta t\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2\nonumber\\ &&\hskip 2.4cm +\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_{ch}^{n+1})\|^2 +\Delta tM\|\nabla w_h^{n+1}\|^2+\beta\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2 +\frac{\gamma\epsilon}{2}\|\nabla\phi_h^{n+1}-\nabla\phi_h^n\|^2 \nonumber\\ && \hskip 2.4cm
+\frac{1}{2}[\|\sigma^n({\bm{u}}_{c\star}^n-{\bm{u}}_{ch}^n)\|^2+\|\sigma^n({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{c\star}^n)\|^2] +\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_{ch}^{n+1},P_\tau{\bm{u}}_{ch}^{n+1}\rangle \nonumber\\ &&\hskip 2.4cm \leq\frac{\zeta}{2}\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^n)\|^2 +\Delta t\langle({\bm{u}}_{ch}^n-{\bm{u}}_{ch}^{n+1})\cdot{\bm{n}}_c,p_{mh}^{n+1}\rangle. \end{eqnarray} Now, we estimate the last interface term in the above inequality. Using Lemma~\ref{lemma31} and Young's inequality, we have \begin{eqnarray}\label{S2_estimate_interface} &&\Delta t|\langle\left({\bm{u}}_{ch}^{n}-{\bm{u}}_{ch}^{n+1}\right)\cdot {\bm{n}}_c,p_{mh}^{n+1}\rangle| \leq C\Delta t\|{\bm{u}}_{ch}^{n}-{\bm{u}}_{ch}^{n+1}\|_{{\bm{X}}_{div}}\|\nabla p_{mh}^{n+1}\|\nonumber\\ &&\hskip 4.5cm\leq \frac{1}{4}\min\{\rho_1,\rho_2\}\|{\bm{u}}_{ch}^{n}-{\bm{u}}_{ch}^{n+1}\|_{{\bm{X}}_{div}}^2+\tilde{C}\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2\nonumber\\ &&\hskip 4.5cm= \frac{1}{4}\min\{\rho_1,\rho_2\}\|{\bm{u}}_{ch}^{n}-{\bm{u}}_{ch}^{n+1}\|^2+\frac{1}{4}\min\{\rho_1,\rho_2\}\|\nabla\cdot({\bm{u}}_{ch}^{n}-{\bm{u}}_{ch}^{n+1})\|^2+\tilde{C}\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2\nonumber\\ &&\hskip 4.5cm\leq \frac{1}{4}\|\sigma^n({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n})\|^2+\frac{1}{4}\min\{\rho_1,\rho_2\}\|\nabla\cdot({\bm{u}}_{ch}^{n}-{\bm{u}}_{ch}^{n+1})\|^2+\tilde{C}\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2.\quad \end{eqnarray} On the other hand, we derive from the triangle inequality that \begin{eqnarray}\label{Decouple_NSCH_Energy S2_10} &&-\frac{1}{2}[\|\sigma^n({\bm{u}}_{c\star}^n-{\bm{u}}_{ch}^n)\|^2+\|\sigma^n({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{c\star}^n)\|^2]\leq -\frac{1}{4}\|\sigma^n({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^n)\|^2.
\end{eqnarray} Adding \eqref{Decouple_NSCHDCH_Energy S2}, \eqref{S2_estimate_interface} and \eqref{Decouple_NSCH_Energy S2_10}, we obtain \begin{eqnarray}\label{Decouple_NSCHDCH_Energy_all S2} &&\mathrm{\mathcal{E}}^{n+1}-\mathrm{\mathcal{E}}^{n} \leq-\Delta t\|\sqrt{2\nu^{n}}\mathbb{D}({\bm{u}}_{ch}^{n+1})\|^2 -\Delta tM\|\nabla w_h^{n+1}\|^2-\frac{\Delta t^2}{2\zeta}\|p_{ch}^{n}-p_{ch}^{n-1}\|^2 -\frac{1}{4}\Delta t\|\sqrt{\mathbb K}\nabla (p_{mh}^{n+1}-p_{mh}^{n})\|^2\nonumber\\ &&\hskip 2.4cm -\frac{\gamma\epsilon}{2}\|\nabla\phi_h^{n+1}-\nabla\phi_h^n\|^2-\Delta t\frac{\alpha \sqrt{\mbox{d}}}{\sqrt{\mbox{trace$(\prod)$}}}\langle \nu^{n} P_\tau{\bm{u}}_{ch}^{n+1},P_\tau{\bm{u}}_{ch}^{n+1}\rangle \nonumber \\ &&\hskip 2.5cm-(\beta-\tilde{C})\Delta t^2 \|\nabla p_{mh}^{n+1}\|^2-\frac{1}{2}(\xi-\zeta-\frac{1}{2}\min\{\rho_1,\rho_2\})\|\nabla\cdot({\bm{u}}_{ch}^{n+1}-{\bm{u}}_{ch}^{n})\|^2. \quad \end{eqnarray} If we now impose $\xi\geq \zeta+\frac{1}{2}\min\{\rho_1,\rho_2\}$ and $\beta\geq 2\tilde{C}$, where $\tilde{C}$ depends only on the geometry of $\Omega_m$ and $\Omega_c$ and on $\rho_1$ and $\rho_2$, then the energy stability follows, which completes the proof of Theorem \ref{S2_thm_energy_Decouple}. \mbox{}\hfill$\Box$ \section{Numerical results}\label{sec(5)} In this section, we use three numerical examples to illustrate the features of the proposed model and numerical method. The first example is provided to illustrate the convergence and accuracy. The second test is designed to verify that the proposed algorithm \eqref{2Decouple_DCH_full_disretization_BJS_2_time}-\eqref{2Decouple_NSCH_full_disretization_BJS_5_time} obeys the energy dissipation law of the CHNSD model \eqref{Darcy_law_time_BJ}-\eqref{phase_interface_con_4}. The last experiment presents the simulation of a lighter bubble rising through the interface driven by buoyancy forces. For all examples, we employ the classical Taylor-Hood elements for the Navier-Stokes equation and linear elements for the Darcy equation.
For the Cahn-Hilliard equation, which is posed on the whole coupled free-flow and porous-media domain, we use quadratic elements.\\ \noindent \textbf{Example 1: Convergence and accuracy.} Consider the CHNSD model on~$\Omega=[0,1]\times[0,2]$~where~$\Omega_m=[0,1]\times[0,1]$~and $\Omega_c=[0,1]\times[1,2]$. Set~$\nu=1$, $\rho_1=1$, $\rho_2=3$, $M_m=1$, $\gamma=1$, $\epsilon=1$, $M_c=1$, $\mathbb{K}=\mathbb{I}$, $\beta=5$, and $\xi=5$. The simulation is run up to the terminal time $T=0.2$. The exact solutions are chosen as: \begin{eqnarray}\label{accuray_example} \left\{ \begin{array}{l} \phi=g(x)g(y)\cos(\pi t),\\ p_m=g(x)g_m(y)\cos(\pi t), \\ {\bm{u}}_c=[x^2(y-1)^2,~-\frac{2}{3}x(y-1)^3]^T\cos(\pi t), \\ p_c=\cos(\pi t)g(x)g_c(y), \end{array} \right. \label{exact_solution_example_2} \end{eqnarray} where~$g(x)=16x^2(x-1)^2$, $g(y)=16y^2(y-2)^2$, $g_m(y)=16y^2(y-1)^2$, $g_c(y)=16(y-1)^2(y-2)^2$. The boundary condition functions and the source terms can be computed from the exact solutions. To examine the accuracy of the proposed scheme, we compute the convergence rates and define the rate of convergence in space as follows \begin{eqnarray*} \mbox{order}_h=\frac{\log(|e_{v,h_j}|/|e_{v,h_{j+1}}|)}{\log(h_j/h_{j+1})}=\frac{\log(|v_{h_j}^n-v(t_n)|/|v_{h_{j+1}}^n-v(t_n)|)}{\log(h_j/h_{j+1})},\quad v=\phi,\,p_m,\,{\bm{u}}_c,\,p_c, \end{eqnarray*} where $|\cdot|$ stands for either the $L^2$-norm error $\|\cdot\|$ or the $H^1$-norm error $\|\cdot\|_1$, and $v_{h_j}$ is the numerical solution with spatial mesh size $h_j$. Tables~\ref{table_2_time_BJ} and \ref{table_3_time_BJ} list the $L^2$- and~$H^1$-norm errors of the phase variable, pressure and velocity of the designed decoupled linearized numerical scheme, where a uniform time step $\Delta t=2.5\times 10^{-4}$ is used. The numerical results in the two tables clearly show the optimal convergence rates of the constructed numerical scheme for all presented error norms in space.
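For the reader's convenience, the rates reported in Tables~\ref{table_2_time_BJ} and \ref{table_3_time_BJ} can be reproduced directly from the tabulated errors via the $\mbox{order}_h$ formula above; a short script doing so for the $L^2$ errors of ${\bm{u}}_c$ (the error values are copied from Table~\ref{table_2_time_BJ}):

```python
import math

def order_h(e_coarse, e_fine, ratio=2.0):
    # Rate of convergence in space: log(|e_{h_j}| / |e_{h_{j+1}}|) / log(h_j / h_{j+1}).
    return math.log(e_coarse / e_fine) / math.log(ratio)

# L2-norm errors of u_c for h = 1/4, 1/8, 1/16, 1/32 (first column of Table 2).
errors = [6.3163e-3, 6.7679e-4, 8.6286e-5, 1.2159e-5]
rates = [round(order_h(errors[k], errors[k + 1]), 2) for k in range(3)]
print(rates)  # -> [3.22, 2.97, 2.83]
```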
To illustrate the order of convergence with respect to the time step $\Delta t$, we introduce the following convergence rate for the $L^2$-norm error, \begin{eqnarray*} \mbox{order}_{\Delta t}=\frac{\log(\|v_h^{\Delta t}-v_h^{\Delta t/2}\|/\|v_h^{\Delta t/2}-v_h^{\Delta t/4}\|)}{\log(2)}, \quad v=\phi,\,p_m,\,{\bm{u}}_c. \end{eqnarray*} The $L^2$-norm errors are shown in Figure \ref{fig:TimeOrder} with a fixed spatial mesh size $h=\frac{1}{32}$ and varying time step $\Delta t=0.02/2^k$, $k=0,1,\ldots,5$, which indicates that the proposed numerical method achieves first-order accuracy in time for the variables $\phi$, $p_m$ and ${\bm{u}}_c$. \\ \begin{table}[hbt] \begin{center} \begin{tabular}{c|c|c|c|c|c|c} \hline h & $\|e_{{\bm{u}}_c}\|$ &order &$\|e_{{\bm{u}}_c}\|_1$ &order & $\|e_{p_c}\|$ &order \\[0.5ex]\hline 1/4 &6.3163E-3 & &5.4761E-2 & &1.0353E-1 & \\ 1/8 &6.7679E-4 &3.22 &6.5007E-3 &3.07 &3.6981E-2 &1.84 \\ 1/16 &8.6286E-5 & 2.97 &1.2233E-3 & 2.41 &1.0093E-2 &1.87 \\ 1/32 &1.2159E-5 &2.83 &3.0139E-4 & 2.02 &2.6242E-3 &1.94 \\\hline \end{tabular} \end{center} \caption{The order of convergence in space for error norms for ${\bm{u}}_c$ and $p_c$.} \label{table_2_time_BJ} \end{table} \begin{table}[hbt] \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline h & $\|e_{\phi}\|$ &order &$\|e_{\phi}\|_1$ &order & $\|e_{p_m}\|$ &order &$\|e_{p_m}\|_1$ &order\\[0.5ex]\hline 1/4 &2.9077E-2 & &5.8721E-1 & &9.9369E-2 & &8.2818E-1 &\\ 1/8 &3.2129E-3 &3.18 &1.5886E-1 &1.89 &2.6547E-2 &1.90 &4.5968E-1 &0.849\\ 1/16 &3.6121E-4 & 3.15 &4.0798E-2 &1.96 &5.8678E-3 &2.18 & 2.3371E-1 &0.976\\ 1/32 &4.3424E-5 & 3.06 &1.0277E-2 &1.99 &1.4238E-3 &2.04 &1.1772E-1 &0.989 \\\hline \end{tabular} \end{center} \caption{The order of convergence in space for error norms for $\phi$ and $p_m$.} \label{table_3_time_BJ} \end{table} \begin{figure}[!ht] \centering \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.2cm} \includegraphics[width=3.0in]{timeorderdt321.eps}
\caption{Log-Log plots of the $L^2$ error norms with different time step sizes $\Delta t$.} \label{fig:TimeOrder} \end{figure} For the diffuse interface problem, adaptive mesh refinement is preferable for the computation of the different dynamics, due to the fact that at least four grid elements are required for accuracy over the width of the interface \cite{JKim_KKang_JLowengrub_JCP_2004}. Therefore, we use an adaptive mesh strategy in this simulation. In the following numerical experiments, the root-level mesh is taken to be uniform with $h=\frac{1}{32}$. Starting with this base mesh, mesh refinement is performed. \noindent \textbf{Example 2: Shape relaxation and energy dissipation.} We consider the evolution of a square-shaped bubble in the domain~$\Omega=[0, 1]\times[0, 2]$ with~$\Omega_c=[0,1]\times[0,1]$~and~$\Omega_m=[0,1]\times[1,2]$. The parameters are chosen as~$M=0.1$,~$\gamma=0.01$,~$\epsilon=0.02$,~$\nu=1$ and~$\mathbb{K}=0.05\mathbb{I}$. The initial velocity, pressure and chemical potential are set to zero. A uniform time partition with the time step-size~$\Delta t=0.005$~is used in this simulation. Figure~\ref{fig:initialphiEx2} shows the initial shape of the bubble. Figure~\ref{fig:Energy150Ex2} shows the dynamics of the square relaxing to a circular shape under the effect of surface tension for the density ratio $\rho_1:\rho_2=1:50$ by using the proposed decoupled numerical method. The corresponding relative discrete energy~$\Delta_E=E^{n}/E^0$~is presented in Figure~\ref{fig:Energy1}. We can easily observe that the discrete energy is non-increasing for different density ratio cases, which is consistent with the theoretical result, and validates the interface conditions~\eqref{phase_interface_con_1}-\eqref{phase_interface_con_4}.
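This monotonicity is easy to verify a posteriori from recorded energy values: one simply checks that the sequence $\Delta_E=E^{n}/E^0$ never increases. A minimal check of this kind (the sample values below are hypothetical and only illustrate the test; they are not the data of Figure~\ref{fig:Energy1}):

```python
def is_non_increasing(energies, tol=0.0):
    # True iff the discrete energy sequence never increases (up to tolerance tol).
    return all(e1 <= e0 + tol for e0, e1 in zip(energies, energies[1:]))

# Hypothetical relative energies Delta_E = E^n / E^0 (illustration only).
sample = [1.0, 0.82, 0.61, 0.47, 0.41, 0.405, 0.404]
print(is_non_increasing(sample))  # -> True
```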
\begin{figure}[!ht] \centering \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.2cm} \includegraphics[width=2.5in]{Shape150Initial.eps} \caption{Contour plots of the initial bubble.} \label{fig:initialphiEx2} \end{figure} \begin{figure}[ht] \centering \subfigure{\label{fig5:subfig:ea}{\includegraphics[width=2.0in]{Shape150figg21.eps}}} \hskip -0.7in \subfigure{\label{fig5:subfig:eb}{\includegraphics[width=2.0in]{Shape150figg41.eps}}} \hskip -0.7in \subfigure{\label{fig5:subfig:ec}{\includegraphics[width=2.0in]{Shape150figg61.eps}}} \hskip -0.7in \subfigure{\label{fig5:subfig:ee}{\includegraphics[width=2.0in]{Shape150figg101.eps}}} \vskip -0.15in \subfigure{\label{fig5:subfig:ef}{\includegraphics[width=2.0in]{Shape150figg161.eps}}} \hskip -0.7in \subfigure{\label{fig5:subfig:eg}{\includegraphics[width=2.0in]{Shape150figg201.eps}}} \hskip -0.7in \subfigure{\label{fig5:subfig:edb}{\includegraphics[width=2.0in]{Shape150figg301.eps}}} \hskip -0.7in \subfigure{\label{fig5:subfig:elc}{\includegraphics[width=2.0in]{Shape150figg2001.eps}}} \vskip -0.25in \caption{The dynamics of a square-shaped bubble with density ratio 1:50.
All the sub-figures are indexed from left to right row by row as follows: (a)~$t=0.1$, (b)~$t=0.2$, (c)~$t=0.3$, (d)~$t=0.5$, (e)~$t=0.8$, (f)~$t=1.0$, (g)~$t=1.5$, (h)~$t=10.0$.} \label{fig:Energy150Ex2} \end{figure} \begin{figure}[ht] \centering \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.5cm} \subfigure[$\rho_1:\rho_2=1:5$] {\label{fhfg:subfig:a} {\includegraphics[width=2.5in]{Shape15Energy.eps}}} \subfigure[$\rho_1:\rho_2=1:50$] {\label{fhfg:subfig:b} {\includegraphics[width=2.5in]{Shape150Energy.eps}}} \caption{The evolution of the discrete energy for the two density ratios.} \label{fig:Energy1} \end{figure} \noindent \textbf{Example 3: Buoyancy-driven flow.} In this experiment, we simulate a light bubble rising in a heavier medium in order to validate the efficiency of the proposed numerical method with respect to different density variations. Here, the karst geometry is modelled by a long tube~$\Omega = [0, 1]\times[0, 2]$~with the conduit~$\Omega_c = [0, 1]\times[0, 1]$~and porous media~$\Omega_m = [0, 1]\times[1, 2]$. The interface boundary is at~$[0,1]\times\{1\}$. We set~$M=0.01$,~$\gamma=0.01$,~$\epsilon=0.01$,~$\nu=1$, and~$\mathbb{K}=0.05\mathbb{I}$. The initial velocity and pressure are set to be zero and the initial phase function is given by \begin{eqnarray}\label{Buoyancy_initialphi} \phi_c^0(x,y)=\tanh\left((0.2-\sqrt{(x-0.5)^2+(y-0.5)^2})/(\sqrt{2}\epsilon)\right). \end{eqnarray} Figure \ref{fig:initialphiEx4} shows the initial position of the bubble. We test two cases with density ratios~$1:5$ and $1:50$, respectively. Figure \ref{fig:graphr2} shows several snapshots of the droplet passing through the interface under the influence of buoyancy with a density ratio of $\rho_1:\rho_2=1:5$. As the bubble rises in the conduit domain, it deforms into an ellipsoid. When it passes through the domain interface, one can clearly see an interface separating the bubble in the conduit and in the porous medium.
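With $\epsilon=0.01$, the initial profile \eqref{Buoyancy_initialphi} is essentially $+1$ inside the circle of radius $0.2$ centred at $(0.5,0.5)$ and $-1$ outside, with a transition layer of width $O(\epsilon)$ that the adaptive mesh must resolve. A quick evaluation of the profile (an illustration only, not part of the solver):

```python
import math

def phi0(x, y, eps=0.01):
    # Initial phase function of Example 3: tanh((0.2 - r) / (sqrt(2) * eps)).
    r = math.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2)
    return math.tanh((0.2 - r) / (math.sqrt(2.0) * eps))

print(round(phi0(0.5, 0.5), 6))  # bubble centre (r = 0):   -> 1.0
print(round(phi0(0.5, 0.7), 6))  # on the circle (r = 0.2): -> 0.0
print(round(phi0(0.5, 1.5), 6))  # far outside (r = 1):     -> -1.0
```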
The shape evolution of the rising bubble is shown in Figure \ref{Lfig:graphr2} for the density ratio~$\rho_1:\rho_2=1:50$. We can observe that the droplet quickly deforms into a heart-like shape as compared with those in Figure \ref{fig:graphr2}. As the droplet moves through the interface, the interface separates the bubble in the conduit and the matrix, as presented in Figures \ref{figr1:subfig:f} and \ref{Lfigr2:subfig:k}. The smooth and expected shape change of the droplet as it crosses the interface further validates the physically faithful interface conditions. A tail is seen as the droplet leaves the interface in Figures \ref{figr1:subfig:ll} and \ref{Lfigr1:subfig:g}. The tail is eventually smoothed out by the surface tension effect when the droplet completely enters the porous medium, as shown in Figures \ref{figr2:subfig:k} and \ref{Lfigr1:subfig:h}. Additionally, we plot the typical mesh refinement in Figures \ref{figIniA:subfig:b} and \ref{MLfig:graphr2} for this example. Once again, we observe that the mesh is properly refined near the interfacial region. All of these reasonable observations validate the interface conditions, the mathematical model and the numerical method proposed in this article.
\begin{figure}[!ht] \centering \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.2cm} \subfigure[Initial phase function]{ \label{figIni:subfig:a} \includegraphics[width=2.5in]{gravity_SDfigg1.eps}} \subfigure[Initial adaptive mesh]{ \label{figIniA:subfig:b} \includegraphics[width=2.5in]{gravity_Inital_Adaptive.eps}} \caption{Contour plots of the initial bubble.} \label{fig:initialphiEx4} \end{figure} \begin{figure}[!htp] \centering \subfigure[$t=1.0$]{ \label{figr1:subfig:d} \includegraphics[width=2in]{Gravity15figg201.eps}} \hskip -0.7in \subfigure[$t=2.0$]{ \label{figsr1:subfig:d} \includegraphics[width=2in]{Gravity15figg401.eps}} \hskip -0.7in \subfigure[$t=3.0$]{\label{figr1:subfig:f} \includegraphics[width=2in]{Gravity15figg601.eps}} \hskip -0.7in \subfigure[$t=4.0$]{ \label{figr1:subfig:g} \includegraphics[width=2in]{Gravity15figg801.eps}} \vskip -0.40in \subfigure[$t=5.0$]{\label{figr1:subfig:h} \includegraphics[width=2in]{Gravity15figg1001.eps}} \hskip -0.7in \subfigure[$t=6.0$]{\label{figr1:subfig:ll} \includegraphics[width=2.00in]{Gravity15figg1201.eps}} \hskip -0.7in \subfigure[$t=8.0$]{\label{figr2:subfig:kk} \includegraphics[width=2.00in]{Gravity15figg1601.eps}} \hskip -0.7in \subfigure[$t=11.0$]{\label{figr2:subfig:k} \includegraphics[width=2.00in]{Gravity15figg2201.eps}} \vskip -0.25in \caption{The evolution of a rising drop with density ratio 1:5.
All the sub-figures are indexed from left to right row by row as follows: (a)~$t=1.0$, (b)~$t=2.0$, (c)~$t=3.0$, (d)~$t=4.0$, (e)~$t=5.0$, (f)~$t=6.0$, (g)~$t=8.0$, (h)~$t=11.0$.} \label{fig:graphr2} \end{figure} \begin{figure}[!htp] \centering \subfigure{ \label{Lfigr1:subfig:d} \includegraphics[width=2in]{Gravity150figg201.eps}} \hskip -0.70in \subfigure{ \label{Lfigr1:subfig:f} \includegraphics[width=2in]{Gravity150figg251.eps}} \hskip -0.7in \subfigure{\label{Lfigr2:subfig:k} \includegraphics[width=2in]{Gravity150figg301.eps}} \hskip -0.7in \subfigure{\label{Lfigr1:subfig:g} \includegraphics[width=2in]{Gravity150figg401.eps}} \vskip -0.40in \subfigure{\label{Lfigr1:subfig:ll} \includegraphics[width=2in]{Gravity150figg501.eps}} \hskip -0.7in \subfigure{\label{Lfigr2:subfig:b} \includegraphics[width=2in]{Gravity150figg601.eps}} \hskip -0.7in \subfigure{\label{Lfigr1:subfig:h} \includegraphics[width=2in]{Gravity150figg801.eps}} \hskip -0.70in \subfigure{ \label{Lfigr1:subfig:b} \includegraphics[width=2in]{Gravity150figg1001.eps}} \vskip -0.25in \caption{The evolution of a rising drop with density ratio 1:50. All the sub-figures are indexed from left to right row by row as follows: (a)~$t=1.0$, (b)~$t=1.25$, (c)~$t=1.5$, (d)~$t=2.0$, (e)~$t=2.5$, (f)~$t=3.0$, (g)~$t=4.0$, (h)~$t=5.0$.} \label{Lfig:graphr2} \end{figure} \begin{figure}[htp] \centering \subfigure{\label{MLfigr1:subfig:f} \includegraphics[width=2in]{Mesh_Adaptive251.eps}} \hskip -0.7in \subfigure{\label{MLfigr2:subfig:k} \includegraphics[width=2in]{Mesh_Adaptive401.eps}} \hskip -0.70in \subfigure{\label{MLfigr1:subfig:g} \includegraphics[width=2in]{Mesh_Adaptive501.eps}} \hskip -0.70in \subfigure{\label{MLfigr1:subfig:h} \includegraphics[width=2in]{Mesh_Adaptive801.eps}} \vskip -0.25in \caption{Adaptive mesh for a rising drop with density ratio 1:50.
All the sub-figures are indexed from left to right row by row as follows: (a)~$t=1.25$, (b)~$t=2.0$, (c)~$t=2.5$, (d)~$t=4.0$.} \label{MLfig:graphr2} \end{figure} \section{Conclusions} In this paper, a new Cahn-Hilliard-Navier-Stokes-Darcy (CHNSD) model and its decoupled numerical schemes are developed for two-phase flows of different densities and viscosities in superposed fluid and porous layers. Moreover, unconditional energy stability is established for a time-stepping method combined with the interface conditions. The novel decoupled numerical scheme is designed by introducing the artificial compressibility method and a pressure-stabilization strategy. The energy law is also established for the corresponding fully discrete scheme in the framework of the finite element method for spatial discretization. Therefore, only a sequence of linear equations needs to be solved at each discrete time level in the new decoupled linear numerical method. The features of the proposed methods, such as accuracy, energy dissipation, and applicability to challenging model scenarios, are demonstrated by various numerical experiments. \section*{Acknowledgement} Gao is partially supported by the NSFC grant 11901461, the Natural Science Foundation of Shaanxi province 2019JQ-024 and China Postdoctoral Science Foundation 2020M673464. Han acknowledges support from the NSF grant DMS-1912715. He is partially supported by the Alexander von Humboldt Foundation and the NSF grants DMS-1722647 and DMS-1818642. \bibliographystyle{plain}
\section{Introduction} \label{s: intro} \setcounter{equation}{0} Let $-\Delta_\Omega$ be the Dirichlet Laplacian corresponding to an open bounded domain $\Omega\subset\mathbb{R}^d$, defined in the quadratic form sense on $\mathcal{H}^1_0(\Omega)$. The operator is obviously non-negative and since the embedding $\mathcal{H}_0^1(\Omega)\hookrightarrow L^2(\Omega)$ is compact, its spectrum is purely discrete, accumulating at infinity only. It is well known that for $d=3$, up to a choice of the scale, the eigenvalues describe energies of a spinless quantum particle confined to such a hard-wall `bottle'. Motivated by this physical problem, we consider in the present work a magnetic version of the mentioned Dirichlet Laplacian, that is, the operator $\mathcal{H}_\Omega(A)=(i\nabla+A(x))^2$ associated with the closed quadratic form $$ \|(i\nabla+A)u\|_{L^2(\Omega)}^2\,,\quad u\in\mathcal{H}_0^1(\Omega)\,, $$ where the sufficiently smooth real vector-valued function $A$ is a vector potential. The magnetic Sobolev norm on the bounded domain $\Omega$ is equivalent to the non-magnetic one and the operator $\mathcal{H}_\Omega(A)$ has a purely discrete spectrum as well. We shall denote the eigenvalues by $\lambda_k =\lambda_k(\Omega, A)$, assuming that they repeat according to their multiplicities. One of the objects of our interest in this paper will be bounds on the eigenvalue moments of such operators. For starters, recall that for non-magnetic Dirichlet Laplacians the following bound was proved in the work of Berezin, Li and Yau \cite{Be72a, Be72b, LY83}, \begin{equation} \label{Berezin bound}\sum_k(\Lambda-\lambda_k(\Omega,0))_+^\sigma\le L_{\sigma,d}^{\mathrm{cl}}\,|\Omega|\, \Lambda^{\sigma+\frac{d}{2}} \quad\text{for any}\;\;\sigma\ge1 \;\;\text{and}\;\;\Lambda>0\,, \end{equation} where $|\Omega|$ is the volume of $\Omega$ and the constant on the right-hand side, $$ L_{\sigma,d}^{\mathrm{cl}}=\frac{\Gamma(\sigma+1)}{(4\pi)^{\frac{d}{2}}\Gamma(\sigma+1+d/2)}\,, $$ is optimal.
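For the numerically minded reader, the semiclassical constant can be evaluated directly from its Gamma-function expression; the following Python sketch (an illustrative aside, with function names chosen here) checks the familiar two-dimensional value $L^{\mathrm{cl}}_{1,2}=1/(8\pi)$.

```python
from math import gamma, pi

def L_cl(sigma, d):
    # L^{cl}_{sigma,d} = Gamma(sigma+1) / ((4 pi)^{d/2} Gamma(sigma+1+d/2))
    return gamma(sigma + 1) / ((4 * pi) ** (d / 2) * gamma(sigma + 1 + d / 2))

def berezin_rhs(sigma, d, vol, Lam):
    # right-hand side of the Berezin bound for a domain of volume `vol`
    return L_cl(sigma, d) * vol * Lam ** (sigma + d / 2)

# the classical two-dimensional value: L^{cl}_{1,2} = 1/(8 pi)
assert abs(L_cl(1, 2) - 1 / (8 * pi)) < 1e-15
```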
Furthermore, the bound (\ref{Berezin bound}) holds true for $0\le\sigma<1$ as well, but with another, probably non-sharp constant on the right-hand side, \begin{equation}\label{Laptev ineq.} \sum_k(\Lambda-\lambda_k(\Omega,0))_+^\sigma\le 2\left(\frac{\sigma}{\sigma+1}\right)^\sigma L_{\sigma,d}^{\mathrm{cl}}\,|\Omega|\, \Lambda^{\sigma+\frac{d}{2}}\,,\quad 0\le\sigma<1\,, \end{equation} see \cite{La97}. In the particular case $\sigma=1$ the inequality (\ref{Berezin bound}) is equivalent, via Legendre transformation, to the lower bound \begin{equation}\label{Leg.trans.} \sum_{j=1}^N\lambda_j(\Omega,0)\ge C_d|\Omega|^{-\frac{2}{d}}N^{1+\frac{2}{d}}\,,\quad C_d=\frac{4\pi d}{d+2}\Gamma(d/2+1)^{\frac{2}{d}}\,. \end{equation} Turning next to the magnetic case, we note first that the pointwise diamagnetic inequality \cite{LL01}, namely $$ |\nabla|u(x)||\le|(i\nabla+A)u(x)|\quad\text{for a.a.}\;\; x\in\Omega\,, $$ implies $\lambda_1(\Omega, A)\ge\lambda_1(\Omega,0)$; however, the estimate $\lambda_j(\Omega, A)\ge\lambda_j(\Omega,0)$ fails in general if $j\ge2$. Nevertheless, moment estimates are still valid for some values of the parameters. In particular, it was shown in \cite{LW00} that the sharp bound (\ref{Berezin bound}) holds true for arbitrary magnetic fields provided $\sigma\ge\frac{3}{2}$, and the same sharp bound holds true for constant magnetic fields if $\sigma\ge1$, see \cite{ELV00}. Furthermore, in dimension $d=2$ the bound (\ref{Laptev ineq.}) holds true for constant magnetic fields if $0\le\sigma<1$ and the constant on its right-hand side cannot be improved \cite{FLW09}. Our main aim in the present work is to derive sufficiently precise two-dimensional Berezin-type estimates for quantum systems exposed to a magnetic field and to apply them to the three-dimensional case.
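The equivalence of (\ref{Berezin bound}) with $\sigma=1$ and (\ref{Leg.trans.}) via the Legendre transformation can also be checked numerically. The sketch below (illustrative only; the helper names are ours) maximizes $N\Lambda-L^{\mathrm{cl}}_{1,d}|\Omega|\Lambda^{1+d/2}$ over $\Lambda$ in closed form and compares the result with the Li-Yau right-hand side.

```python
from math import gamma, pi

def L_cl(sigma, d):
    # semiclassical constant L^{cl}_{sigma,d}
    return gamma(sigma + 1) / ((4 * pi) ** (d / 2) * gamma(sigma + 1 + d / 2))

def li_yau_rhs(N, vol, d):
    # right-hand side of the Li-Yau lower bound
    C_d = 4 * pi * d / (d + 2) * gamma(d / 2 + 1) ** (2 / d)
    return C_d * vol ** (-2 / d) * N ** (1 + 2 / d)

def legendre_of_berezin(N, vol, d):
    # sup over Lambda of N*Lambda - L^{cl}_{1,d}*|Omega|*Lambda^{1+d/2};
    # the supremum is attained at Lambda* = (N/(c*p))^{2/d} with p = 1 + d/2
    c = L_cl(1, d) * vol
    p = 1 + d / 2
    Lam = (N / (c * p)) ** (2 / d)
    return N * Lam - c * Lam ** p

# the two expressions agree for every N, |Omega| and d (up to rounding)
for d in (2, 3):
    for N in (1, 5, 20):
        assert abs(legendre_of_berezin(N, 2.0, d) - li_yau_rhs(N, 2.0, d)) \
            < 1e-10 * li_yau_rhs(N, 2.0, d)
```

In particular, for $d=2$ the supremum equals $\pi N^2/|\Omega|\cdot 2=2\pi N^2/|\Omega|$, the non-magnetic leading term that reappears in the magnetic Li-Yau bound below.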
We are going to address two questions, one concerning eigenvalue moment estimates for magnetic Laplacians on three-dimensional domains having a bounded cross section in a fixed direction, and the other about similar estimates for magnetic Laplacians defined on the whole of $\mathbb{R}^3$. Let us review the paper content in more detail. In Sec.~\ref{s: reduction} we will describe the dimensional-reduction technique \cite{LW00} which allows us to derive the sought spectral estimates for three-dimensional magnetic `bottles' using two-dimensional ones. Our next aim is to derive a two-dimensional version of the Li-Yau inequality (\ref{Leg.trans.}) in the presence of a constant magnetic field giving rise to an extra term on the right-hand side. The result will be stated and proved in the first part of Sec.~\ref{s: Berezin-Li-Yau}. This in turn will imply, by means of Legendre transformation, a magnetic version of the Berezin inequality which we are going to present in the second part of Sec.~\ref{s: Berezin-Li-Yau}. It has to be added that the question of semiclassical spectral bounds for such systems has been addressed before, in particular, another version of the magnetic Berezin inequality was derived by two of us \cite{KW13}. In the final part of Sec.~\ref{s: Berezin-Li-Yau} we are going to compare the two results and show that the one derived here becomes substantially better when the magnetic field is strong. In some cases the eigenvalues of the magnetic Dirichlet Laplacian with a constant magnetic field can be computed exactly in terms of suitable special functions. In the first part of Sec.~\ref{s: disc} we present such an example, considering the magnetic Dirichlet Laplacian on a two-dimensional disc with a constant magnetic field. Its eigenvalues will be expressed in terms of Kummer function zeros.
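To anticipate that example, the Kummer-zero characterization lends itself to a quick numerical illustration. The SciPy sketch below (an aside; the parameter choice $B_0=2$, $r_0=1$ is ours, and the eigenvalue formula is the one derived in Sec.~\ref{s: disc}) locates the least negative zero of $M(a,1,x_0)$ for the radially symmetric mode $m=0$. For these parameters the Kummer argument is $x_0=B_0r_0^2/2=1$ and $M(-1,1,1)=1-1=0$ exactly, so the lowest $m=0$ eigenvalue is $3B_0$.

```python
from scipy.optimize import brentq
from scipy.special import hyp1f1   # Kummer's function M(a, b, x)

B0, r0, m = 2.0, 1.0, 0      # field intensity, disc radius, angular mode
x0 = B0 * r0**2 / 2.0        # Kummer argument

# least negative zero a of M(a, |m|+1, x0); since M(-1, 1, 1) = 0,
# the root is exactly a = -1 for this choice of parameters
a1 = brentq(lambda a: hyp1f1(a, abs(m) + 1, x0), -1.5, -0.5)

lam = B0 + B0 * (abs(m) - m - 2.0 * a1)   # lowest m = 0 eigenvalue
print(a1, lam)   # close to -1 and 3*B0 = 6
```

Note that the eigenvalue stays above $B_0$, in accordance with the diamagnetic inequality mentioned above.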
Next, in the second part, we are going to consider again the magnetic Dirichlet Laplacian on a two-dimensional disc, now in a more general situation when the magnetic field is no longer homogeneous but retains the radial symmetry; we will derive the Berezin inequality for the eigenvalue moments. In Sec.~\ref{3Dapplication} we shall return to our original motivation and use the mentioned reduction technique to derive Berezin-type spectral estimates for a class of three-dimensional magnetic `bottles' characterized by a bounded cross section in the $x_3$ direction. Turning to the second one of the indicated questions, from Sec.~\ref{s:mgBerezin} on, we shall be concerned with magnetic Laplacians in $L^2(\mathbb{R}^3)$ associated with the magnetic field $B:\mathbb{R}^3\to\mathbb{R}^3$ which is a local perturbation of a constant magnetic field of intensity $B_0>0$. Again, as before, we first derive suitable two-dimensional estimates; this will be done in Sec.~\ref{s:mgBerezin}. In the last two sections we apply this result to the three-dimensional case. In Sec.~\ref{s:3Dhole} we show that the essential spectrum of the magnetic Laplacian with the corresponding perturbed magnetic field coincides with $[B_0, \infty)$. In Sec.~\ref{LT-3D} we then prove Lieb-Thirring-type inequalities for the moments of eigenvalues below the threshold of the essential spectrum for several types of magnetic `holes'. \section{Dimensional reduction} \label{s: reduction} \setcounter{equation}{0} As indicated, our first question concerns estimating eigenvalues due to confinement in a three-dimensional `bottle' by using two-dimensional Berezin-type estimates. In such a situation one can use the dimension-reduction technique \cite{LW00}.
In particular, let $-\Delta_\Omega$ be the Dirichlet Laplacian on an open domain $\Omega\subseteq\mathbb{R}^3$, then for any $\sigma\ge\frac{3}{2}$ the inequality \begin{equation}\label{Dir.Laplacian} \mathrm{tr}\left(\Lambda-(-\Delta_\Omega)\right)_+^\sigma\le L_{1,\sigma}^{\mathrm{cl}}\int_{\mathbb{R}}\mathrm{tr}\left(\Lambda-(-\Delta_{\omega(x_3)})\right)_+^{\sigma+\frac{1}{2}}\,\mathrm{d}x_3 \end{equation} is valid, where $-\Delta_{\omega(x_3)}$ is the Dirichlet Laplacian on the section $$ \omega(x_3)=\left\{x'=(x_1, x_2)\in\mathbb{R}^2|\,\,x=(x', x_3)=(x_1, x_2, x_3)\in\Omega \right\}, $$ see \cite{LW00}, and also \cite{ELW04, Wei08}. The integral on the right-hand side of (\ref{Dir.Laplacian}), in fact restricted to those $x_3$ for which $\inf\,\mathrm{spec}(-\Delta_{\omega(x_3)})<\Lambda$, yields the classical phase space volume. Note that in this way one can obtain estimates also in some unbounded domains \cite{GW11} as well as remainder terms \cite{Wei08}. A similar technique can be used also in the magnetic case. To describe it, consider a sufficiently smooth magnetic vector potential $A(\cdot):\Omega\to \mathbb{R}^3$ generating the magnetic field $$ B(x)=(B_1(x),B_2(x),B_3(x))=\mathrm{rot}\,A(x)\,. $$ For the sake of definiteness, we shall use the gauge with $A_3(x)=0$. Furthermore, we consider the magnetic Dirichlet Laplacians $$ \mathcal{H}_\Omega(A)=(i\nabla_x-A(x))^2\quad\text{on}\;\, L^2(\Omega) $$ and $$ \widetilde{H}_{\omega(x_3)}(\widetilde{A})=(i\nabla_{x'}-\widetilde{A}(x))^2\quad \text{on}\;\, L^2(\omega(x_3))\,, $$ where $\widetilde{A}(x):=(A_1(x), A_2(x))$. Note that for each fixed $x_3$ the two-dimensional vector potential $\widetilde{A}(x', x_3)$ corresponds to the magnetic field $$ \tilde{B}(x', x_3)=B_3(x)=\frac{\partial A_2}{\partial x_1}-\frac{\partial A_1}{\partial x_2}\,.
$$ Referring to \cite[Sec.~3.2]{LW00} one can then claim that for any $\sigma\ge\frac{3}{2}$ we have \begin{equation} \label{magn.field}\mathrm{tr}(\Lambda-\mathcal{H}_\Omega(A))_+^\sigma\le L_{1,\sigma}^{\mathrm{cl}} \int_{\mathbb{R}}\mathrm{tr}(\Lambda-\widetilde{H}_{\omega(x_3)}(\widetilde{A}))_+^{\sigma+1/2}\,\mathrm{d}x_3\,. \end{equation} \section{Berezin-Li-Yau inequality with a constant magnetic field} \label{s: Berezin-Li-Yau} \setcounter{equation}{0} Suppose that the motion is confined to a planar domain $\omega$ exposed to the influence of a constant magnetic field of intensity $B_0$ perpendicular to the plane, and let $A:\: \mathbb{R}^2\to \mathbb{R}^2$ be a vector potential generating this field. We denote by $H_\omega(A)$ the corresponding magnetic Dirichlet Laplacian on $\omega$ and by $\mu_j(A)$ its eigenvalues, arranged in ascending order with repetition according to their multiplicities. Our first aim is to extend the Li-Yau inequality (\ref{Leg.trans.}) to this situation with an additional term on the right-hand side depending on $B_0$ only. This will then be used to derive the Berezin-type inequality. Conventionally we denote by $\mathbb{N}$ the set of natural numbers, while the set of integers will be denoted by $\mathbb{Z}$. \medskip \noindent The following result is not new. Indeed, it can be recovered from \cite[Sec.~2]{ELV00}, however, for the sake of completeness we include a proof. \subsection{Li-Yau estimate} \setcounter{equation}{0} \begin{theorem} Assume that $\omega\subset\mathbb{R}^2$ is open and of finite measure. Then the inequality \begin{equation}\label{eigenvalue} \sum_{j\le N}\mu_j(A)\ge\frac{2\pi N^2}{|\omega|}+ \frac{B_0^2}{2\pi}|\omega| m(1-m) \end{equation} holds, where $m:=\left\{\frac{2\pi N}{B_0|\omega|}\right\}$ is the fractional part of $\frac{2\pi N}{B_0|\omega|}$. \end{theorem} \begin{proof} Without loss of generality we may assume that $B_0>0$.
Let $P_k$ be the orthogonal projection onto the $k$-th Landau level, $B_0(2k-1)$, of the Landau Hamiltonian $(i\nabla+A(x))^2$ in $L^2(\mathbb{R}^2)$ which is an integral operator with the kernel $P_k(x,y)$ -- see \cite{KW13}. Note that we have \begin{equation}\label{P_k} P_k(x,x)= \frac{1}{2\pi}B_0\,, \end{equation} \begin{eqnarray} \lefteqn{\int_{\mathbb{R}^2}\left(\int_\omega|P_k(y,x)|^2\,\mathrm{d}x\right)\,\mathrm{d}y =\int_\omega\left(\int_{\mathbb{R}^2}P_k(y,x)\,\overline{P_k(x,y)}\,\mathrm{d}y\right) \,\mathrm{d}x} \nonumber \\ && \label{Landau level} \qquad\qquad\quad =\int_\omega P_k(x,x)\,\mathrm{d}x=\frac{B_0}{2\pi}|\omega|\,. \phantom{AAAAAAAAAA} \end{eqnarray} Let $\phi_j$ be a normalized eigenfunction corresponding to the eigenvalue $\mu_j(A)$. We put $f_{k,j}(y):=\int_\omega P_k(y,x)\phi_j(x)\,\mathrm{d}x$, where $y\in\mathbb{R}^2$, and furthermore $$ F_N(k):=\sum_{j\le N}\|f_{k,j}\|_{L^2(\mathbb{R}^2)}^2\,. $$ We have the following identity, \begin{eqnarray*} \lefteqn{\sum_{j\le N}\mu_j(A)=\sum_{j\le N}\|(i\nabla-A)\phi_j\|_{L^2(\omega)}^2} \\ && =\sum_{j\le N}\sum_{k\in\mathbb{N}}\|(i\nabla-A)f_{k,j}\|_{L^2(\mathbb{R}^2)}^2 \\ && =\sum_{k\in\mathbb{N}}B_0(2k-1)\sum_{j\le N}\|f_{k,j}\|_{L^2(\mathbb{R}^2)}^2 \\ && =\sum_{k\in\mathbb{N}}B_0(2k-1)F_N(k) =:J[F_N]\,. \end{eqnarray*} Moreover, the normalization of the functions $\phi_j$ implies \begin{equation}\label{FN} \sum_{k\in\mathbb{N}}F_N(k)= \sum_{j\le N}\sum_{k\in\mathbb{N}}\|f_{k,j}\|_{L^2(\mathbb{R}^2)}^2 =\sum_{j\le N}\|\phi_j\|_{L^2(\omega)}^2=N\,. \end{equation} Finally, in view of Bessel's inequality the following estimate holds true, \begin{eqnarray} \lefteqn{F_N(k)=\sum_{j\le N}\|f_{k,j}\|_{L^2(\mathbb{R}^2)}^2 =\int_{\mathbb{R}^2}\left|\sum_{j\le N}\int_\omega P_k(y,x)\phi_j(x)\,\, \mathrm{d}x\right|^2\,\mathrm{d}y} \nonumber \\ && \label{tildeB} \qquad \le\int_{\mathbb{R}^2} \left(\int_\omega|P_k(y,x)|^2\,\mathrm{d}x\right)\,\mathrm{d}y =\frac{B_0}{2\pi}|\omega|\,. 
\phantom{AAAAAAA} \end{eqnarray} Let us now minimize the functional $J[F_N]$ under the constraints (\ref{FN}) and (\ref{tildeB}). To this aim, recall first the \emph{bathtub principle} \cite{LL01}: \medskip \noindent Given a $\sigma$-finite measure space $(\Omega,\,\Sigma,\,\mu)$, let $f$ be a real-valued measurable function on $\Omega$ such that $\mu\{x:f(x)<t\}$ is finite for all $t\in\mathbb{R}$. Fix further a number $G>0$ and define a class of measurable functions on $\Omega$ by $$ \mathcal{C}=\left\{g:0\le g(x)\le1\quad\text{for all}\quad x\quad\text{and}\quad\int_\Omega g(x)\mu(\mathrm{d}x)=G\right\}\,. $$ Then the minimization problem for the functional $$ I=\inf_{g\in\mathcal{C}}\int_\Omega f(x)g(x)\mu(\mathrm{d}x) $$ is solved by \begin{equation} \label{minimizer} g(x)=\chi_{\{f<s\}}(x)+c\chi_{\{f=s\}}(x)\,, \end{equation} giving rise to the minimum value $$ I=\int_{\{f<s\}}f(x)\mu(\mathrm{d}x)+cs\mu\{x:f(x)=s\}\,, $$ where $$ s=\sup\{t:\mu\{x:\,f(x)<t\}\le G\} $$ and $$ c\mu\{x:f(x)=s\}=G-\mu\{x:f(x)<s\}\,. $$ Moreover, the minimizer given by (\ref{minimizer}) is unique if $G= \mu\{x:f(x)<s\}$ or if $G=\mu\{x:f(x)\le s\}$. \medskip \noindent Applying this result to the functional $J[F_N]$ with the constraints (\ref{FN}) and (\ref{tildeB}) we find that the corresponding minimizer is $$ F_N(k)=\frac{B_0}{2\pi}|\omega|\,,\quad k=1,2,\ldots,M\,, $$ $$ F_N(M+1)=\frac{B_0}{2\pi}|\omega| m\,, $$ $$ F_N(k)=0,\quad k>M+1, $$ where $M=\left[\frac{2\pi N}{B_0|\omega|}\right]$ is the integer part and $m=\left\{\frac{2\pi N}{B_0|\omega|}\right\}$, so that $M+m=\frac{2\pi N}{B_0|\omega|}$.
Consequently, we have the lower bound \begin{eqnarray*} \lefteqn{J[F_N]\ge\frac{B_0}{2\pi}|\omega|\sum_{k=1}^M(2k-1)B_0 +\frac{B_0}{2\pi}|\omega|m(2M+1)B_0} \\ && =\frac{B_0^2}{2\pi}|\omega| (M^2+2M m+m) \\ && =\frac{B_0^2}{2\pi}|\omega|(M+m)^2+\frac{B_0^2}{2\pi}|\omega| (m-m^2) \end{eqnarray*} which implies $$ \sum_{j\le N}\mu_j(A)\ge\frac{2\pi N^2}{|\omega|}+ \frac{B_0^2}{2\pi}|\omega| m(1-m)\,. $$ This is the claim we have set out to prove. \end{proof} Since $0\le m<1$ by definition, the last term can be regarded as a non-negative remainder term, which is periodic with respect to $\frac{N}{\Phi}$, where $\Phi=\frac{B_0|\omega|}{2\pi}$ is the magnetic flux, i.e. the number of flux quanta through $\omega$. Note that for $N<\Phi$ the right-hand side equals $NB_0$ and for large enough $B_0$ this estimate is better than the lower bound in terms of the phase-space volume. \subsection{A magnetic Berezin-type inequality} \setcounter{equation}{0} The result obtained in the previous subsection allows us to derive an extension of the Berezin inequality to the magnetic case. We keep the notation introduced above, in particular, $H_\omega(A)$ is the magnetic Dirichlet Laplacian on $\omega$ corresponding to a constant magnetic field $B_0$ and $\mu_j(A)$ are the respective eigenvalues. Without loss of generality we assume again that $B_0>0$. Then we can make the following claim. \begin{theorem} \label{th-berezin} Let $\omega\subset\mathbb{R}^2$ be open and of finite measure, then for any $\Lambda>B_0$ we have \begin{equation}\label{Berezin} \sum_{j=1}^N(\Lambda-\mu_j(A))\le\frac{(\Lambda^2 -B_0^2)|\omega|}{8\pi}+\frac{(\Lambda-B_0) B_0|\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}.
\end{equation} \end{theorem} \begin{proof} Subtracting both sides of inequality (\ref{eigenvalue}) from $N\Lambda$, we get \begin{equation}\label{eigenvalue sum} \sum_{j=1}^N(\Lambda-\mu_j(A))\le N\Lambda- \frac{2\pi N^2}{|\omega|}-\frac{B_0^2}{2\pi}|\omega|m(1-m)\,, \end{equation} and consequently $$ \sum_{j=1}^N(\Lambda-\mu_j(A))_+\le \left(N\Lambda-\frac{2\pi N^2}{|\omega|}-\frac{B_0^2}{2\pi}|\omega|m(1-m)\right)_+. $$ We are going to investigate the function $f:\,\mathbb{R}_+\to \mathbb{R}$, $$ f(z):=z\Lambda-\frac{2\pi z^2}{|\omega|} -\frac{B_0^2|\omega|}{2\pi}\left\{\frac{2\pi z}{B_0|\omega|}\right\} \left(1-\left\{\frac{2\pi z}{B_0|\omega|}\right\}\right), $$ on the intervals $$ \frac{B_0|\omega|k}{2\pi} \le z<\frac{B_0|\omega|(k+1)}{2\pi},\quad k=0,1,2,\ldots, $$ looking for an upper bound. It is easy to check that \begin{eqnarray*} \lefteqn{f'(z)=\Lambda-\frac{4\pi}{|\omega|}z-\frac{B_0^2 |\omega|}{2\pi}\frac{2\pi}{B_0|\omega|}+\frac{2B_0^2|\omega|}{2\pi} \left\{\frac{2\pi z}{B_0|\omega|}\right\}\frac{2\pi}{B_0|\omega|}} \\ && \quad =\Lambda-\frac{4\pi}{|\omega|}z-B_0+2B_0\left\{\frac{2\pi z}{B_0|\omega|}\right\}, \phantom{AAAAAAAAAAA} \end{eqnarray*} thus the extremum of $f$ is achieved at the point $z_0$ such that \begin{equation}\label{x_0} \Lambda-B_0-\frac{4\pi}{|\omega|}z_0+2B_0 \left\{\frac{2\pi z_0}{B_0|\omega|}\right\}=0\,. \end{equation} Denoting $x_0:=\frac{2\pi z_0}{B_0|\omega|}$, the condition reads $\Lambda-2B_0 x_0-B_0+2B_0 \{x_0\}=0$ giving $$ x_0=\frac{\Lambda-B_0+2B_0 \{x_0\}}{2B_0}.
$$ It yields the value of function $f$ at $z_0$, namely \begin{eqnarray} \lefteqn{f(z_0)=\frac{\Lambda B_0|\omega|}{2\pi} \frac{(\Lambda-B_0+2B_0\{x_0\})}{2B_0}-\frac{B_0^2|\omega|} {2\pi}\left(\frac{\Lambda-B_0+2B_0\{x_0\}}{2B_0}\right)^2} \nonumber \\ && \quad -\frac{B_0^2|\omega|}{2\pi}\{x_0\}(1-\{x_0\}) \nonumber \\ && =\frac{\Lambda|\omega|}{4\pi} \left(\Lambda-B_0+2B_0\{x_0\}\right)-\frac{|\omega|}{8\pi}\left(\Lambda- B_0+2B_0\{x_0\}\right)^2 \nonumber \\ && \quad -\frac{B_0^2|\omega|}{2\pi}\{x_0\}(1-\{x_0\}) \nonumber \\ && =\frac{|\omega|}{4\pi}\biggl(\Lambda(\Lambda-B_0+2B_0\{x_0\}) -\frac{(\Lambda-B_0+2B_0\{x_0\})^2}{2} \nonumber \\ && \quad -2B_0^2\{x_0\}(1-\{x_0\}) \biggr) \nonumber \\ && =\frac{|\omega|}{4\pi}\biggl(\Lambda^2-\Lambda B_0+2\Lambda B_0\{x_0\}-\frac{\Lambda^2}{2}+\Lambda B_0-\frac{B_0^2}{2} -2\Lambda B_0\{x_0\} \nonumber \\ && \quad +2B_0^2\{x_0\} -2B_0^2 \{x_0\}^2-2B_0^2\{x_0\}+2B_0^2\{x_0\}^2\biggr) \nonumber \\ && =\frac{|\omega|(\Lambda^2-B_0^2)}{8\pi}\,. \label{inequality} \end{eqnarray} Furthermore, the values of $f$ at the endpoints $\frac{B_0 k|\omega|}{2\pi},\,k=0,1,2,\ldots\,$, equal $$ f\left(\frac{B_0 k|\omega|}{2\pi}\right)=\frac{B_0 \Lambda k|\omega|}{2\pi}-\frac{2\pi}{|\omega|} \frac{B_0^2 k^2|\omega|^2}{4\pi^2}=\frac{B_0 k|\omega|}{2\pi}(\Lambda-k B_0)\,. 
$$ Consider now an integer $m$ satisfying $1\le m\le\left[\frac{\Lambda+B_0}{2B_0} \right]$, then \begin{eqnarray} \nonumber \lefteqn{f\left(\frac{B_0|\omega|}{2\pi}\left(\left[\frac{\Lambda+B_0}{2B_0} \right]-m\right)\right)} \\ && \nonumber =\frac{B_0|\omega|}{2\pi}\left(\left[\frac{\Lambda+ B_0}{2B_0}\right]-m\right)\left(\Lambda-\left(\left[\frac{\Lambda+ B_0}{2B_0}\right]-m\right)B_0\right) \\ && \nonumber \le\frac{B_0|\omega|}{2\pi}\left(\frac{\Lambda+B_0}{2B_0}-m\right) \left(\Lambda-\left(\frac{\Lambda+B_0}{2B_0}-m\right)B_0+\left\{ \frac{\Lambda+B_0}{2B_0}\right\}B_0\right) \\ && \nonumber=\frac{\left(\Lambda-(2m-1)B_0\right)|\omega|}{4\pi}\left(\frac{\Lambda +(2m-1)B_0}{2}+\left\{\frac{\Lambda+B_0} {2B_0}\right\}B_0\right)\\ && \nonumber=\frac{\left(\Lambda^2-(2m-1)^2 B_0^2\right)|\omega|}{8\pi}+\frac{\left(\Lambda-(2m-1)B_0\right)B_0 |\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}\\ && \label{extremum} \le\frac{(\Lambda^2-B_0^2)|\omega|}{8\pi}+\frac{\left(\Lambda-B_0\right) B_0|\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}. \end{eqnarray} On the other hand, for integers satisfying $k\ge \left[\frac{\Lambda+B_0}{2B_0}\right]$ one can check easily that $$4 B_0^2 k^2-4B_0\Lambda k+\Lambda^2-B_0^2\ge0\,, $$ which means \begin{equation}\label{maximum} \frac{B_0 k|\omega|}{2\pi}(\Lambda-B_0 k)\le \frac{(\Lambda^2-B_0^2)|\omega|}{8\pi}\,. \end{equation} Combining inequalities (\ref{extremum}) and (\ref{maximum}) we conclude that at the interval endpoints, $z=\frac{B_0 k|\omega|}{2\pi},\,k=0,1,2,\ldots\,$, the value of function $f$ does not exceed $\frac{(\Lambda^2-B_0^2)|\omega|}{8\pi}+\frac{(\Lambda-B_0) B_0|\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}$. Hence in view of (\ref{inequality}) we have $$ f(z)\le\frac{(\Lambda^2-B_0^2)|\omega|}{8\pi}+\frac{(\Lambda-B_0) B_0|\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\} $$ for any $z\ge 0$. 
Combining the inequality above with the bound (\ref{eigenvalue sum}), we arrive at the desired conclusion. \end{proof} \begin{remark} \label{AL-rmk} {\rm Using the Aizenman-Lieb procedure \cite{AL78} and the fact that $\inf\sigma(H_\omega(A))\ge B_0$, we can also obtain bounds for other eigenvalue moments. Specifically, for any $\sigma\ge3/2$ Theorem~\ref{th-berezin} implies} \begin{eqnarray*} && \hspace{-1.5em} \sum_{j=1}^N(\Lambda-\mu_j(A))_+^{\sigma+1/2}=\frac{\Gamma(\sigma+3/2)}{\Gamma(\sigma-1/2)\Gamma(2)}\int_0^\infty(\Lambda-t)_+^{\sigma-3/2}\sum_{j=1}^N (t-\mu_j(A))_+\,\mathrm{d}t \\ && \le\frac{\Gamma(\sigma+3/2)}{\Gamma(\sigma-1/2)}\int_0^\infty(\Lambda-t)_+^{\sigma-3/2}\biggl(\frac{(t^2-B_0^2)_+ |\omega|}{8\pi} \\ && \quad + \frac{(t-B_0)_+B_0|\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}\biggr)\,\mathrm{d}t \\ && \le\frac{\Gamma(\sigma+3/2) |\omega|}{\Gamma(\sigma-1/2)}\biggl(\frac{(\Lambda^2-B_0^2)_+}{8\pi} \\ && \quad + \frac{(\Lambda-B_0)_+B_0}{4\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}\biggr)\int_0^\infty(\Lambda-t)_+^{\sigma-3/2}\,\mathrm{d}t \\ && \ =\frac{\Gamma(\sigma+3/2)\Lambda^{\sigma-1/2}|\omega|}{\Gamma(\sigma-1/2)(2\sigma-1)} \left(\frac{(\Lambda^2-B_0^2)_+}{4\pi}+\frac{(\Lambda-B_0)_+B_0}{2\pi}\left\{\frac{\Lambda+B_0}{2B_0}\right\}\right). \end{eqnarray*} \end{remark} \subsection{Comparison to earlier results} Given a set $\omega\subset\mathbb{R}^2$ and a point $x\in\omega$, we denote by $$ \delta(x)=\mathrm{dist}(x,\partial\omega)=\min_{y\in\partial\omega}|x-y| $$ the distance of $x$ to the boundary, then $$ R(\omega)=\sup_{x\in\omega}\delta(x) $$ is the in-radius of $\omega$. Furthermore, given a $\beta>0$ we introduce $$ \omega_\beta=\{x\in\omega:\,\,\delta(x)<\beta\}\,,\quad\beta>0\,, $$ and define the quantity \begin{equation}\label{sigma} \sigma(\omega):=\inf_{0<\beta<R(\omega)}\frac{|\omega_\beta|}{\beta}\,.
\end{equation} Using these notions and the symbols introduced above we can state the following result obtained in the work of two of us \cite{KW13}: \begin{theorem} Let $\omega\subset\mathbb{R}^2$ be an open convex domain, then for any $\Lambda>B_0$ we have \begin{eqnarray}\label{Ber.In} \lefteqn{\sum_{j=1}^N(\Lambda-\mu_j(A))\le\frac{\Lambda^2|\omega|}{8\pi} -\frac{1}{512\pi}\frac{\sigma^2(\omega)}{|\omega|}\Lambda} \\ && -B_0^2\left(\frac{1}{2}-\left\{\frac{\Lambda+B_0}{2 B_0}\right\}\right)^2\left(\frac{|\omega|}{2\pi} -\frac{1}{128\pi}\frac{\sigma^2(\omega)}{|\omega|\Lambda}\right). \nonumber \end{eqnarray} \end{theorem} \bigskip \noindent To make a comparison to the conclusions of the previous section, let us make both $B_0$ and $\Lambda$ large keeping their ratio fixed. Specifically, we choose a $\Lambda$ from the interval $(B_0,\,2B_0)$ writing it as $\Lambda=B_0(1+\alpha)$ with an $\alpha\in (0,1)$. The second term on the right-hand side of (\ref{Berezin}) is then $\frac{\alpha^2 B_0^2|\omega|}{8\pi}$, and we want to show that the difference between the bounds (\ref{Ber.In}) and (\ref{Berezin}) tends to plus infinity as $B_0\to\infty$. To this aim, we note that \begin{equation}\label{comparison} \frac{(\Lambda^2-B_0^2)|\omega|}{8\pi}+ \frac{(\Lambda-B_0)B_0|\omega|}{4\pi}\left\{\frac{\Lambda+B_0}{2 B_0}\right\}= \frac{B_0^2|\omega|}{4\pi}\,\alpha(1+\alpha)\,. \end{equation} On the other hand, a short calculation shows that for our choice of $B_0$ and $\Lambda$ the right-hand side of the bound (\ref{Ber.In}) becomes $$ \frac{\Lambda^2|\omega|}{8\pi}-\frac{B_0^2|\omega|}{2\pi}\left(\frac{1}{2}- \frac{\alpha}{2}\right)^2+\frac{\Lambda}{512\pi}\frac{\sigma^2(\omega)}{|\omega|} \left(-1+\frac{(1-\alpha)^2}{(1+\alpha)^2}\right), $$ in particular, after another easy manipulation we find that for large $B_0$ this expression behaves as $\frac{B_0^2|\omega|}{2\pi}\alpha +\mathcal{O}(B_0)$.
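These asymptotics are easy to check numerically. The Python sketch below (illustrative only; it assumes the unit disc, for which the definition (\ref{sigma}) gives $\sigma(\omega)=\pi$, and the function names are ours) evaluates both right-hand sides at $\Lambda=B_0(1+\alpha)$ and compares their difference with the leading term $\frac{B_0^2|\omega|}{4\pi}\,\alpha(1-\alpha)$.

```python
from math import pi

def frac(x):
    # fractional part for x >= 0
    return x - int(x)

def rhs_new(Lam, B0, area):
    # right-hand side of the bound proved here (Theorem th-berezin)
    return ((Lam**2 - B0**2) * area / (8 * pi)
            + (Lam - B0) * B0 * area / (4 * pi) * frac((Lam + B0) / (2 * B0)))

def rhs_old(Lam, B0, area, sig):
    # right-hand side of the earlier bound (Ber.In) from [KW13]
    t = (0.5 - frac((Lam + B0) / (2 * B0)))**2
    return (Lam**2 * area / (8 * pi) - sig**2 * Lam / (512 * pi * area)
            - B0**2 * t * (area / (2 * pi) - sig**2 / (128 * pi * area * Lam)))

# unit disc: |omega| = pi and sigma(omega) = pi; Lambda = B0*(1 + alpha)
area, sig, alpha = pi, pi, 0.5
B0 = 1.0e6
diff = rhs_old(B0 * (1 + alpha), B0, area, sig) - rhs_new(B0 * (1 + alpha), B0, area)
predicted = B0**2 * area * alpha * (1 - alpha) / (4 * pi)
print(diff / predicted)   # close to 1 for large B0
```

The ratio tends to $1$ as $B_0\to\infty$, consistent with the leading-order comparison carried out above.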
Comparing the two bounds we see that \begin{equation}\label{comp-diff} \text{\emph{rhs of} (\ref{Ber.In})} - \text{\emph{rhs of} (\ref{Berezin})}=\frac{B_0^2|\omega|}{4\pi}\,\alpha(1-\alpha)+\mathcal{O}(B_0) \end{equation} tending to plus infinity as $B_0\to\infty$. At the same time, \begin{equation}\label{comp-rat} \frac{\text{\emph{rhs of} (\ref{Ber.In})}}{\text{\emph{rhs of} (\ref{Berezin})}} =\frac{2}{1+\alpha}\,+\mathcal{O}(B_0^{-1}) \end{equation} illustrating that the improvement represented by Theorem~\ref{th-berezin} is most pronounced for eigenvalues near the spectral threshold. \section{Examples: a two-dimensional disc} \label{s: disc} \setcounter{equation}{0} Spectral analysis simplifies if the domain $\omega$ allows for a separation of variables. In this section we will discuss two such situations. \subsection{Constant magnetic field} We suppose that $\omega$ is a disc and the applied magnetic field is homogeneous. As usual in cases of radial symmetry, the problem can be reduced to degenerate hypergeometric functions. Specifically, we will employ the Kummer equation \begin{equation}\label{Kummer} r\frac{\mathrm{d}^2 w}{\mathrm{d}r^2}+(b-r)\frac{\mathrm{d}w}{\mathrm{d}r}-aw=0 \end{equation} for the unknown function $w$ with real-valued parameters $a$ and $b$, which has two linearly independent solutions $M(a, b, r)$ and $U(a, b, r)$, the second one of which has a singularity at zero \cite{AS64}. Given an $\alpha>0$, we denote by $\big\{a^k_{|m|, \alpha}\big\}_{k\in\mathbb{N}}$ the set of values of the first parameter such that $M(a^k_{|m|, \alpha}, |m|+1, \alpha)=0$. Since for any $a, b\ge0$ the function $M(a, b, r)$ has no positive zeros \cite{AS64}, all the $a^k_{|m|, \alpha}$ are negative. Then the following claim is valid. \begin{theorem} \label{thm:homdisc} Let $H_\omega(A)$ be the magnetic Dirichlet Laplacian corresponding to a constant magnetic field $B_0$ and $\omega$ the two-dimensional disc with center at the origin and radius $r_0>0$.
The eigenvalues of $H_\omega(A)$ coincide with $$ \left\{B_0+B_0\left(|m|-m-2a^k_{|m|, B_0 r_0^2/2}\right)\right\}_{m\in\mathbb{Z},\, k\in\mathbb{N}}\,. $$ \end{theorem} \begin{proof} We employ the standard partial wave decomposition -- see, e.g., \cite{Er96} \begin{equation}\label{decomp.} L^2(\omega)=\bigoplus_{m=-\infty}^\infty L^2((0, r_0), 2\pi r\,\mathrm{d}r)\,, \end{equation} and $H_\omega(A)=\bigoplus_{m=-\infty}^\infty h_m$, where \begin{equation} \label{hm*}h_m:=-\frac{\mathrm{d}^2}{\mathrm{d}r^2}-\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}+\left(\frac{m}{r}-\frac{B_0 r}{2}\right)^2. \end{equation} The last-named operator differs by $mB_0$ from the operator \begin{equation}\label{tildehm} \tilde{h}_m=-\frac{\mathrm{d}^2}{\mathrm{d}r^2}-\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}+\frac{m^2}{r^2}+\frac{B_0^2 r^2}{4} \end{equation} on the interval $(0,r_0)$ with Dirichlet boundary condition at the endpoint $r_0$. Looking for solutions to the eigenvalue equation \begin{equation}\label{magn.lapl.disc} \tilde{h}_mu=\lambda u \end{equation} we employ the Ansatz $$ u(r)=r^{|m|} e^{-B_0 r^2/4}v(r)\,, $$ where $v\in L^2((0, r_0), r\mathrm{d}r)$. Computing the first two derivatives we get $$ \tilde{h}_mu=\left(-v''-\frac{2|m|+1}{r}v^{\prime}+B_0(|m|+1)v(r)+B_0 r v^{\prime}\right)r^{|m|} e^{-B_0 r^2/4}, $$ hence the equation (\ref{magn.lapl.disc}) can be rewritten as \begin{equation}\label{change.of.var.} v''+\left(\frac{2|m|+1}{r}-B_0 r\right)v^{\prime}-(B_0(|m|+1)-\lambda)v=0\,. \end{equation} Using the standard substitution we pass to the function $g(r)=v\left(\frac{\sqrt{2r}}{\sqrt{B_0}}\right)$ belonging to $L^2\left(0, B_0 r_0^2/2\right)$. Expressing the derivatives of $v$ in terms of those of $g$, one can rewrite equation (\ref{change.of.var.}) as $$ rg''(r)+\left(|m|+1-r\right)g^{\prime}-\frac{\left((|m|+1)B_0-\lambda\right)}{2B_0}g(r)=0\,, $$ which is the Kummer equation with $b=|m|+1$ and $a=\frac{(|m|+1)B_0-\lambda}{2B_0}$.
The mentioned singularity of its solution $U(a, b, r)$ for small $r$, namely \cite{AS64} $$ U(a, b, r)=\frac{\Gamma(b-1)}{\Gamma(a)}r^{1-b}+\mathcal{O}(r^{2-b})\quad\text{for}\quad b>2 $$ and $$ U(a, 2, r)=\frac{1}{\Gamma(a)}\frac{1}{r}+\mathcal{O}(\ln r)\,,\quad U(a, 1, r)=-\frac{1}{\Gamma(a)}\ln r+\mathcal{O}(1) $$ means that $u(r)=r^{|m|} e^{-B_0 r^2/4}U\left(\frac{(|m|+1)B_0-\lambda}{2B_0}, |m|+1, \frac{B_0\,r^2}{2}\right)$ does not belong to $\mathcal{H}_0^1((0, r_0),\,r\mathrm{d}r)$. Consequently, the sought solution of (\ref{magn.lapl.disc}) has the form $$ r^{|m|} e^{-B_0 r^2/4}M\left(\frac{(|m|+1)B_0-\lambda}{2B_0}, |m|+1, \frac{B_0\,r^2}{2}\right)\,, $$ and in view of the Dirichlet boundary conditions at $r_0$ we arrive at the spectral condition $$ M\left(\frac{(|m|+1)B_0-\lambda}{2B_0}, |m|+1, \frac{B_0\,r_0^2}{2}\right)=0\,, $$ which gives $\left\{(|m|+1)B_0-2B_0 a^k_{|m|, \sqrt{B_0}\,r_0/\sqrt{2}}\right\}_{m\in\mathbb{Z}, \,k\in\mathbb{N}}$ as the eigenvalue set; returning to the original operator $h_m$ we get the claim of the theorem. \end{proof} \subsection{Radial magnetic field} If the magnetic field is non-constant but still radially symmetric, in general one cannot find the eigenvalues explicitly but it is possible to find a bound on the eigenvalue moments in terms of an appropriate radial two-dimensional Schr\"odinger operator. \begin{theorem} \label{thm:raddisc} Let $H_\omega(A)$ be the magnetic Dirichlet Laplacian on a disc $\omega$ of radius $r_0>0$ centered at the origin with a radial magnetic field $B(x)=B(|x|)$. Assume that \begin{equation}\label{assump.} \alpha:=\int_0^{r_0} sB(s)\,\mathrm{d}s<\frac{1}{2}\,. 
\end{equation} Then for any $\Lambda,\,\sigma\ge0$, the following inequality holds true \begin{eqnarray}\label{magn.estimate} \lefteqn{\mathrm{tr}(\Lambda-H_\omega(A))_+^\sigma\le\left(\frac{1}{\sqrt{1-2\alpha}}+\sup_{n\in\mathbb{N}}\left\{\frac{n}{\sqrt{1-2\alpha}}\right\}\right)} \\ && \times\:\mathrm{tr}\left(\Lambda-\left(-\Delta_D^\omega+\frac{1}{x^2+y^2}\left(\int_0^{\sqrt{x^2+y^2}} sB(s)\,\mathrm{d}s\right)^2\right)\right)_+^\sigma\,. \nonumber \end{eqnarray} In particular, the estimate (\ref{magn.estimate}) implies $$ \inf \sigma(H_\omega(A))\ge\inf \sigma\left(-\Delta_D^\omega+\frac{1}{x^2+y^2}\left(\int_0^{\sqrt{x^2+y^2}} sB(s)\,\mathrm{d}s\right)^2\right)\,. $$ \end{theorem} \begin{proof} Let us again employ the partial-wave decomposition (\ref{decomp.}), with the angular component (\ref{hm*}) replaced by \begin{equation}\label{hm} h_m:=-\frac{\mathrm{d}^2}{\mathrm{d}r^2} -\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r} +\left(\frac{m}{r}-\frac{1}{r}\int_0^r sB(s)\,\mathrm{d}s\right)^2, \end{equation} and inspect the eigenvalues of this operator. Obviously, for $m\le0$ we have \begin{equation}\label{h1m} h_m\ge-\frac{\mathrm{d}^2}{\mathrm{d}r^2} -\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r} +\frac{m^2}{r^2}+\frac{1}{r^2}\left(\int_0^r sB(s)\,\mathrm{d}s\right)^2, \end{equation} while for any $m>0$ we can use the inequality $$ \frac{2|m|}{r^2}\int_0^r sB(s)\,\mathrm{d}s\le\frac{2m^2}{r^2}\int_0^r sB(s)\,\mathrm{d}s $$ which in view of the assumption (\ref{assump.}) yields $$ h_m\ge-\frac{\mathrm{d}^2}{\mathrm{d}r^2} -\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r} +(1-2\alpha)\frac{m^2}{r^2}+\frac{1}{r^2}\left(\int_0^r sB(s)\,\mathrm{d}s\right)^2. 
$$ Next we divide the set of natural numbers into groups such that for all the elements of any fixed group the integer part $\left[\sqrt{1-2\alpha}\,m\right]$ is the same, and we estimate the operator $h_m$ from below by \begin{equation}\label{tildehm2} h_m \ge -\frac{\mathrm{d}^2}{\mathrm{d}r^2} -\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r} +\frac{\left[\sqrt{1-2\alpha}\, m\right]^2}{r^2}+\frac{1}{r^2}\left(\int_0^r sB(s)\,\mathrm{d}s\right)^2. \end{equation} Since the number of elements in each group is bounded from above by $\frac{1}{\sqrt{1-2\alpha}} +\sup_{n\in\mathbb{N}}\left\{ \frac{n}{\sqrt{1-2\alpha}}\right\}$, using (\ref{h1m}) and (\ref{tildehm2}) one infers that \begin{eqnarray*} \lefteqn{\mathrm{tr}(\Lambda-H_\omega(A))_+^\sigma\le\left(\frac{1}{\sqrt{1-2\alpha}} +\sup_{n\in\mathbb{N}}\left\{\frac{n}{\sqrt{1-2\alpha}}\right\}\right)} \\ && \times\sum_{m=-\infty}^\infty\mathrm{tr}\left(\Lambda-\left(-\frac{\mathrm{d}^2}{\mathrm{d}r^2} -\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}+\frac{m^2}{r^2}+\frac{1}{r^2}\left(\int_0^r sB(s)\,\mathrm{d}s\right)^2\right)\right)_+^\sigma \\ && \hspace{-1em} =\left(\frac{1}{\sqrt{1-2\alpha}}+\sup_{n\in\mathbb{N}}\left\{\frac{n}{\sqrt{1-2\alpha}}\right\}\right) \\ && \times\:\mathrm{tr}\left(\Lambda-\bigoplus_{m=-\infty}^\infty \left(-\frac{\mathrm{d}^2}{\mathrm{d}r^2} -\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}+\frac{m^2}{r^2}+\frac{1}{r^2}\left(\int_0^r sB(s)\,\mathrm{d}s\right)^2\right)\right)_+^\sigma \end{eqnarray*} with any $\sigma,\,\Lambda\ge0$. However, the direct sum in the last expression is nothing else than a partial-wave decomposition of the two-dimensional Schr\"odinger operator with the radial potential $V(r) = \frac{1}{r^2}\left(\int_0^r sB(s)\,\mathrm{d}s\right)^2$ and the Dirichlet condition at the boundary of the disc; this yields the desired claim. 
\end{proof} \section{Application to the three-dimensional case} \label{3Dapplication} \setcounter{equation}{0} Let us return now to our original motivation of estimating eigenvalues due to confinement in a three-dimensional `bottle'. One can employ inequality (\ref{magn.field}) in combination with the results of the previous sections to improve in some cases the spectral bound by taking the magnetic field into account instead of just dropping it. Let $\Omega\subset\mathbb{R}^3$ be a region with bounded cross sections $\omega(x_3)$. The class of fields we consider consists of those of the form $B(x)=(B_1(x), B_2(x), B_3(x_3))$, that is, those for which the component $B_3$ perpendicular to the cross section depends on the variable $x_3$ only. Such fields certainly exist, for instance, one can think of the situation when the `bottle' is placed into a homogeneous magnetic field. The field is induced by an appropriate vector potential $A(\cdot):\Omega\to \mathbb{R}^3$, $$ B(x)=(B_1(x),B_2(x),B_3(x_3))=\mathrm{rot}\,A(x), $$ and we consider the magnetic Dirichlet Laplacians $$ \mathcal{H}_\Omega(A)=(i\nabla_x-A(x))^2\quad\text{on}\;\, L^2(\Omega). $$ We use the notation introduced in Sec.~\ref{s: reduction}. In view of the variational principle we know that the ground-state eigenvalue of $\widetilde{H}_{\omega(x_3)}(\widetilde{A})$ cannot fall below the first Landau level $B_3(x_3)$. Consequently, integrating with respect to $x_3$ in the formula (\ref{magn.field}) one can drop the contribution of all the $x_3$ for which $B_3(x_3)\ge\Lambda$. 
Combining this observation with Remark~\ref{AL-rmk} we get \begin{eqnarray*} \lefteqn{\mathrm{tr} (\Lambda-\mathcal{H}_\Omega(A))_+^\sigma\le \frac{\Gamma(\sigma+3/2)\Lambda^{\sigma-1/2}}{4\pi(2\sigma-1) \Gamma(\sigma-1/2)}\,L_{1,\sigma}^{\mathrm{cl}} \int_{\{x_3:\, B_3(x_3)<\Lambda\}}|\omega(x_3)|} \\[.5em] && \times\,\biggl[\big(\Lambda^2-B_3(x_3)^2\big) +2B_3\big(\Lambda-B_3(x_3)\big)\,\left\{\frac{\Lambda+B_3}{2B_3} \right\}\biggr]\,\mathrm{d}x_3 \end{eqnarray*} for any $\sigma\ge3/2$. \begin{example} (circular cross section) {\rm Let $\Omega$ be a three-dimensional cusp with a circular cross section $\omega(x_3)$ of radius $r_0(x_3)$ such that $r_0(x_3)\to0$ as $x_3\to\infty$. Then the above formula in combination with Theorem~\ref{thm:homdisc} yields \begin{eqnarray*} \lefteqn{\mathrm{tr} (\Lambda-\mathcal{H}_\Omega(A))_+^\sigma\le L_{1,\sigma}^{\mathrm{cl}}\sum_{m\in\mathbb{Z}, \,k\in\mathbb{N}}\int_{\mathbb{R}}\biggl(\Lambda-B_3(x_3)} \\[.2em] && -B_3(x_3)\biggl(|m|-m-2a_{|m|, \sqrt{B_3(x_3)} r_0(x_3)/\sqrt{2}}^k\biggr)\biggr)_+^{\sigma+1/2}\,\mathrm{d}x_3 \end{eqnarray*} for any $\sigma\ge3/2$. The particular case $B(x)=(0,0,B)$ applies to a cusp-shaped region placed into a homogeneous field parallel to the cusp axis.} \end{example} \begin{example} (radial magnetic field) {\rm Consider the same cusp-shaped region $\Omega$ in the more general situation when the third field component can depend on the radial variable, $B(x)=(B_1(x), B_2(x), B_3(x_1^2+x_2^2, x_3))$, assuming that $$ \sup_{x_3\in\mathbb{R}} \alpha(x_3)=\sup_{x_3\in\mathbb{R}} \int_0^{r_0(x_3)} sB_3(s, x_3)\,\mathrm{d}s<\frac{1}{2}\,. 
$$ Then the dimensional reduction in view of Theorem~\ref{thm:raddisc} gives \begin{eqnarray*} \lefteqn{\mathrm{tr} (\Lambda-\mathcal{H}_\Omega(A))_+^\sigma\le L_{1,\sigma}^{\mathrm{cl}}\int_{\mathbb{R}} \left(\frac{1}{\sqrt{1-2\alpha(x_3)}} +\sup_{n\in\mathbb{N}}\left\{\frac{n}{\sqrt{1-2\alpha(x_3)}}\right\}\right)} \\ && \times\,\mathrm{tr}\left(\Lambda-\left(-\Delta_D^{\omega(x_3)} +\frac{1}{x_1^2+x_2^2}\left(\int_0^{\sqrt{x_1^2+x_2^2}} sB_3(s, x_3)\,\mathrm{d}s\right)^2\right)\right)_+^{\sigma+1/2} \end{eqnarray*} for any $\sigma\ge3/2$.} \end{example} \section{Spectral estimates for eigenvalues from \\ perturbed magnetic field} \label{s:mgBerezin} \setcounter{equation}{0} Now we change the topic and consider situations when the discrete spectrum comes from the magnetic field alone. We are going to demonstrate a Berezin-type estimate for the magnetic Laplacian on $\mathbb{R}^2$ with the field which is a radial and local perturbation of a homogeneous one. We consider the operator $H(B)$ in $L^2(\mathbb{R}^2)$ defined as follows, \begin{equation} \label{eq-HB} H(B)= -\partial_x^2 +(i\partial_y+A_2)^2, \qquad A = \big(0,B_0\,x- f(x,y)\big)\,, \end{equation} with $f$ given by $$ f(x, y)=-\int_x^\infty g(\sqrt{t^2+y^2})\,\mathrm{d}t\,, $$ where $g: \mathbb{R}_+\to\mathbb{R}_+$; the operator $H(B)$ is then associated with the magnetic field $$ B= B(x,y) = B_0 - g(\sqrt{x^2+y^2}\,)\,. $$ Since we have chosen the vector potential in such a way that the unperturbed part corresponds to the Landau gauge, we have $$ H(B_0)=-\partial_x^2 +(i\partial_y+B_0 x)^2. $$ Using a partial Fourier transformation, it is easy to conclude from here that the corresponding spectrum consists of equidistantly spaced eigenvalues of infinite multiplicity, the Landau levels, \begin{equation} \label{sp-hb0} \sigma (H(B_0)) =\left\{(2n-1)B_0, \ \ n\in\mathbb{N}\, \right\} \, . 
\end{equation} It is well known that $\inf\sigma_{\mathrm{ess}}(H(B_0))=B_0$, hence the relative compactness of the perturbation induced by $B_0-B$ with respect to $H(B_0)$ in $L^2(\mathbb{R}^2)$ implies $$ \inf\sigma_{\mathrm{ess}}(H(B)) = B_0. $$ We have to specify the sense in which the magnetic perturbation is local. In the following we will suppose that \begin{enumerate}[(i)] \item the function $g\in L^\infty(\mathbb{R}_+)$ is non-negative and such that both $f$ and $\partial_{x_2} f$ belong to $L^\infty(\mathbb{R}^2)$, and $$ \lim_{x_1^2+x_2^2\to\infty} \big(|\partial_{x_2} f(x_1, x_2)| + |f(x_1, x_2)|\big) =0\,. $$ \item $\|g\|_\infty \leq B_0\,.$ \end{enumerate} Let us next rewrite the vector potentials $A_0$ and $A$ associated to $B_0$ and $B$ in polar coordinates. Passing to the circular gauge we obtain \begin{equation} \label{poincare} A_0=(0, a_0(r))\,, \qquad A=(0, a(r))\,, \end{equation} with \begin{equation}\label{a-a0} a_0(r) = \frac{B_0 r}{2}\,, \qquad a(r) = \frac{B_0 r}{2} - \frac{1}{r} \int_0^r g(s)\, s\, \mathrm{d}s\,. \end{equation} Hence the operators $H(B_0)$ and $H(B)$ are associated with the closures of the quadratic forms in $L^2(\mathbb{R}_+, r \mathrm{d}r)$ with the values \begin{equation} \label{q-b-0} Q(B_0)[u] = \int_0^{2\pi} \int_0^\infty \left ( |\partial_r u|^2 + |\, i r^{-1} \partial_\theta u+a_0(r)\, u|^2\right)\, r\, \mathrm{d}r\, \mathrm{d}\theta \end{equation} and \begin{equation} \label{q-b} Q(B)[u] = \int_0^{2\pi} \int_0^\infty \left ( |\partial_r u|^2 + |\, i r^{-1} \partial_\theta u+a(r)\, u|^2\right)\, r\, \mathrm{d}r\,\mathrm{d}\theta\,, \end{equation} respectively, both defined on $C_0^\infty(\mathbb{R}_+)$. 
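Let us note that the circular-gauge potential \eqref{a-a0} indeed reproduces the field: for a radial gauge $A=(0, a(r))$ the field strength equals $\frac{1}{r}\,\frac{\mathrm{d}}{\mathrm{d}r}\big(r\, a(r)\big)$, and a direct differentiation gives $$ \frac{1}{r}\,\frac{\mathrm{d}}{\mathrm{d}r}\left(\frac{B_0\, r^2}{2} - \int_0^r g(s)\, s\, \mathrm{d}s\right) = B_0 - g(r)\,, $$ in agreement with the definition of $B$. 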
Furthermore, for every $k\in\mathbb{N}_0$ we introduce the following auxiliary potential, \begin{equation} \label{vk} \qquad V_k(r) := \frac{2k}{r} (a_0(r)-a(r)) +a^2(r)-a_0^2(r)\,, \end{equation} and the functions \begin{equation} \label{psi-k} \psi_k(r) = \sqrt{\frac{B_0}{\Gamma(k+1)}}\ \left(\frac{B_0}{2}\right)^{k/2}\, r^k\, \exp \left( -\frac{B_0\, r^2}{4}\right). \end{equation} Finally let us denote by \begin{equation} \label{flux.} \alpha = \int_0^\infty g(r)\, r\, \mathrm{d}r \end{equation} the flux associated with the perturbation; recall that in the rational units we employ the flux quantum value is $2\pi$. Now we are ready to state the result. \begin{theorem} \label{thm-red} Let the assumptions (i) and (ii) be satisfied, and suppose moreover that $\alpha \leq 1$. Put \begin{equation} \label{lam-k.} \Lambda_k = \left(\psi_k, \big(\,V_k(\,\cdot)\big)_- \, \psi_k\right)_{L^2(\mathbb{R}_+, r \mathrm{d}r)}\,. \end{equation} Then the inequality \begin{equation} \label{eq-LT-gen-2d} \mathrm{tr} (H(B)-B_0)_-^\gamma \ \leq 2^\gamma \sum_{k=0}^\infty\ \Lambda_k^\gamma\,, \qquad \gamma \ge 0\,, \end{equation} holds true whenever the right-hand side is finite. 
\end{theorem} \begin{remark} {\rm For a detailed discussion of the asymptotic distribution of eigenvalues of the operator $H(B)$ we refer to \cite{rt08}.} \end{remark} \begin{proof} We are going to employ the fact that both $A_0$ and $A$ are radial functions, see \eqref{poincare}, and note that by the partial-wave decomposition \begin{equation} \label{pwd-mg} \mathrm{tr} \, (H(B)-B_0)_-^\gamma = \sum_{k\in\mathbb{Z}}\, \mathrm{tr} \, (h_k(B)-B_0)_-^\gamma\,, \end{equation} where the operators $h_k(B)$ in $L^2(\mathbb{R}_+, r \mathrm{d}r)$ are associated with the closures of the quadratic forms $$ Q_k[u] = \int_0^\infty \left ( |\partial_r u|^2 + \biggl|\frac{k}{r} u-a(r)\, u\biggr|^2\right)\, r\, \mathrm{d}r \,, $$ defined originally on $C_0^\infty(\mathbb{R}_+)$, and acting on their domain as $$ h_k(B) = -\partial_r^2 -\frac 1r \partial_r + \left(\frac kr -a(r)\right)^2. $$ In view of \eqref{vk} it follows that $$ h_k(B) = h_k(B_0) + V_k(r)\,, $$ where $$ h_k(B_0) = -\partial_r^2 -\frac 1r \partial_r + \left(\frac kr -a_0(r)\right)^2 \,. $$ To proceed we need to recall some spectral properties of the two-dimensional harmonic oscillator, $$ {\rm H}_\mathrm{osc} = -\Delta +\frac{B_0^2}{4}\, (x^2+y^2) \qquad \text{in} \;\, L^2(\mathbb{R}^2)\,. $$ It is well known that the spectrum of ${\rm H}_\mathrm{osc}$ consists of equidistantly spaced eigenvalues of finite multiplicity, \begin{equation} \label{h-osc-spec} \sigma\bigl({\rm H}_\mathrm{osc}\bigr) = \left\{\, nB_0, \ n\in\mathbb{N} \,\right\} \,, \end{equation} where the first eigenvalue $B_0$ is simple and has a radially symmetric eigenfunction. 
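To be more explicit, in the sector of the angular momentum $k\in\mathbb{Z}$ the eigenvalues of the corresponding radial operator, $-\partial_r^2 -\frac 1r \partial_r + \frac{k^2}{r^2} +\frac{B_0^2\, r^2}{4}$ in $L^2(\mathbb{R}_+, r \mathrm{d}r)$, are well known to be $$ B_0\, (2n+|k|+1)\,, \qquad n\in\mathbb{N}\cup\{0\}\,, $$ so the eigenvalue $B_0$ is attained only in the sector with the radially symmetric eigenfunction, $k=0$. 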
The latter corresponds to the term with $k=0$ in the partial-wave decomposition of ${\rm H}_\mathrm{osc}$, which implies $$ \sigma\bigl({\rm H}_\mathrm{osc}\bigr) = \bigcup_{k\in\mathbb{Z}}\ \sigma \left(-\partial_r^2 -\frac 1r \partial_r + \frac{k^2}{r^2} +\frac{B_0^2\, r^2}{4}\right)\,, $$ where the operators in the brackets at the right-hand side act in $L^2(\mathbb{R}_+, r \mathrm{d}r)$. Hence in view of \eqref{h-osc-spec} we have \begin{equation} \label{k-neq-0} \inf_{k\neq 0} \sigma \left(-\partial_r^2 -\frac 1r \partial_r + \frac{k^2}{r^2} +\frac{B_0^2\, r^2}{4}\right) \, \geq 2 B_0\, . \end{equation} On the other hand, for $k<0$ it follows from (ii), \eqref{vk} and \eqref{flux.} that $$ V_k(r) = \frac{2k}{r^2}\, \int_0^r g(s)\, s\, \mathrm{d}s -B_0\, \int_0^r g(s)\, s\, \mathrm{d}s + \frac{1}{r^2} \left(\int_0^r g(s)\, s\, \mathrm{d}s\right)^{\!2} \\ \geq k B_0 - \alpha B_0\,. $$ By \eqref{k-neq-0} we thus obtain the following inequality which holds in the sense of quadratic forms on $C_0^\infty(\mathbb{R}_+)$ for any $k<0$, \begin{eqnarray*} \lefteqn{h_k(B) = h_k(B_0) + V_k(r) = -\partial_r^2 -\frac 1r \partial_r + \frac{k^2}{r^2} +\frac{B_0^2\, r^2}{4} - k B_0 +V_k(r)} \\ && \geq -\partial_r^2 -\frac 1r \partial_r + \frac{k^2}{r^2} +\frac{B_0^2\, r^2}{4} -\alpha\, B_0 \\ && \geq (2-\alpha) B_0\,. \phantom{AAAAAAAAAAAAAAAAAAAAAAAAAAAA} \end{eqnarray*} Since $\alpha \leq 1$ holds by hypothesis, this implies that \begin{equation} \label{k-pos} \mathrm{tr} \, (H(B)-B_0)_-^\gamma = \sum_{k\in\mathbb{Z}}\, \mathrm{tr} \, (h_k(B)-B_0)_-^\gamma\, = \sum_{k\geq 0}\, \mathrm{tr} \, (h_k(B)-B_0)_-^\gamma\, , \end{equation} see \eqref{pwd-mg}. 
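Let us justify the lower bound on $V_k$ for negative $k$ used above. By assumption (ii) we have $g \le B_0$, hence $\int_0^r g(s)\, s\, \mathrm{d}s \le \frac{B_0\, r^2}{2}$ and thus $$ \frac{2k}{r^2} \int_0^r g(s)\, s\, \mathrm{d}s \, \ge \, k B_0 \qquad \text{for}\quad k<0\,; $$ moreover, $\int_0^r g(s)\, s\, \mathrm{d}s \le \alpha$ holds by \eqref{flux.}, so that $-B_0 \int_0^r g(s)\, s\, \mathrm{d}s \ge -\alpha B_0 \ge -B_0$. Since the last term in the expression for $V_k$ is non-negative, the claimed estimate follows. 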
In order to estimate $\mathrm{tr} \, (h_k(B)-B_0)_-^\gamma$ for $k\geq 0$ we employ $$ \Pi_k = \left( \cdot\, , \, \psi_k\right)_{L^2(\mathbb{R}_+, r \mathrm{d}r)}\, \psi_k\,, $$ the projection onto the subspace spanned by $\psi_k$, and note that \begin{equation} \label{norm} \psi_k \in \ker (h_k(B_0)-B_0)\,, \qquad \|\psi_k\|_{L^2(\mathbb{R}_+, r \mathrm{d}r)} = 1 \quad \forall\ k\in\mathbb{N}\cup\{0\} \,. \end{equation} Let $Q_k = 1-\Pi_k$. From the positivity of $\bigl(\,V_k(\cdot)\bigr)_-$ it follows that for any $u\in C_0^\infty(\mathbb{R}_+)$ it holds \begin{eqnarray} \lefteqn{\left( u, \left( \Pi_k \bigl(\,V_k(\cdot)\bigr)_- Q_k + Q_k \bigl(\,V_k(\cdot)\bigr)_-\Pi_k\right)\, u\right)} \nonumber \\ && \leq \, \left( u, \Pi_k \bigl(\,V_k(\cdot )\bigr)_- \Pi_k\, u\right) + \left( u, Q_k \big(\,V_k(\cdot )\big)_- Q_k\, u\right)\,, \label{pq} \end{eqnarray} where the scalar products are taken in $L^2(\mathbb{R}_+, r \mathrm{d}r)$. From \eqref{pq} we infer that \begin{eqnarray} \lefteqn{h_k(B)-B_0 = (\Pi_k+Q_k) \left(h_k(B_0) -B_0+V_k(\cdot)\right) (\Pi_k+Q_k)} \nonumber \\ && \geq (\Pi_k+Q_k) \left(h_k(B_0) -B_0-\bigl(\,V_k(\cdot)\bigr)_-\right) (\Pi_k+Q_k) \nonumber \\ && \geq \Pi_k \left(h_k(B_0) -B_0-2 \bigl(\,V_k(\cdot )\bigr)_-\right) \Pi_k \nonumber\\ && \quad + Q_k \left(h_k(B_0) -B_0-2 \bigl(\,V_k(\cdot )\bigr)_-\right) Q_k\,. \label{pk-only} \end{eqnarray} The operator $h_k(B_0)$ has for each $k\in\mathbb{N}_0$ discrete spectrum which consists of simple eigenvalues. Moreover, from the partial-wave decomposition of the operator $H(B_0)$ we obtain $$ \sigma (H(B_0)) =\left\{(2n-1)B_0, \ n\in\mathbb{N} \right\} \, = \bigcup_{k\in\mathbb{Z}} \, \sigma(h_k(B_0))\,, $$ see \eqref{sp-hb0}. 
It means that $$ \forall\ k \in\mathbb{Z}\ : \quad \sigma(h_k(B_0)) \, \subset\, \left\{(2n-1)B_0, \ \ n\in\mathbb{N} \right\}\,, $$ and since $\psi_k$ is an eigenfunction of $h_k(B_0)$ associated to the simple eigenvalue $B_0$, see \eqref{norm}, it follows that \begin{equation} \label{higher-ev} Q_k \left(h_k(B_0) -B_0\right) Q_k \, \geq\, 2 B_0\, Q_k\,, \qquad \forall\ k\in\mathbb{N}\cup\{0\}\,. \end{equation} On the other hand, by \eqref{vk} and \eqref{flux.} we infer $$ \sup_{r>0} \bigl(\,V_k(r)\bigr)_- \, \leq\, \alpha\, B_0 \qquad \forall\ k\in\mathbb{N}\cup\{0\}. $$ The last two estimates thus imply that $$ Q_k \left(h_k(B_0) -B_0-2 \bigl(\,V_k(\cdot )\bigr)_-\right) Q_k \geq Q_k \left( 2\, B_0 (1-\alpha)\right) Q_k \geq 0\,, $$ where we have used the assumption $\alpha \leq 1$. With the help of \eqref{pk-only} and the variational principle we then conclude that \begin{eqnarray*} \lefteqn{\mathrm{tr} \, (h_k(B)-B_0)_-^\gamma\,\leq\, \mathrm{tr} \left(\Pi_k \left(h_k(B_0) -B_0-2 \bigl(\,V_k(\cdot )\bigr)_-\right) \Pi_k\right)_-^\gamma} \\ && = \mathrm{tr} \left(-2\, \Pi_k \bigl(\,V_k(\cdot )\big)_- \Pi_k\right)_-^\gamma = 2^\gamma\, \mathrm{tr} \left( \Pi_k \bigl(\,V_k(\cdot )\bigr)_- \Pi_k\right)^\gamma \\ && = 2^\gamma\, \left(\psi_k, \bigl(\,V_k(\cdot )\bigr)_- \, \psi_k\right)^\gamma_{L^2(\mathbb{R}_+, r \mathrm{d}r)}\, = 2^\gamma\, \Lambda_k^\gamma\,, \phantom{AAAAAA} \end{eqnarray*} see \eqref{lam-k.}. To complete the proof it now remains to apply equation \eqref{k-pos}. 
\end{proof} \section{Three dimensions: a magnetic `hole'} \setcounter{equation}{0} \label{s:3Dhole} Let us return to the three-dimensional situation and consider a magnetic Hamiltonian $\mathcal{H}(B)$ in $L^2(\mathbb{R}^3)$ associated to the magnetic field \mbox{$B :\mathbb{R}^3\to \mathbb{R}^3$} regarded as a perturbation of a homogeneous magnetic field of intensity \mbox{$B_0>0$} pointing in the $x_3$-direction, \begin{equation} \label{mgf-3d} B(x_1, x_2, x_3)= (0, 0, B_0) - b(x_1, x_2, x_3)\,, \end{equation} with the perturbation $b$ of the form $$ b(x_1, x_2, x_3) = \biggl(-\omega'(x_3)\, f(x_1, x_2),\, 0,\, \omega(x_3)\, g\left(\sqrt{x_1^2+x_2^2}\, \right) \biggr)\,. $$ Here $\omega: \mathbb{R}\to \mathbb{R}_+$, $g: \mathbb{R}_+\to \mathbb{R}_+$, and \begin{equation} \label{fgh2} f(x_1, x_2) = -\int_{x_1}^\infty g\left(\sqrt{t^2+x_2^2}\, \right)\, \mathrm{d}t\,. \end{equation} The resulting field $B$ thus has its $x_3$-component equal to $B_0$ minus a radial perturbation in the $x_1, x_2$-plane with an $x_3$-dependent amplitude $\omega(x_3)$. The first component of $B$ then ensures that $\nabla\cdot B=0$, as required by the Maxwell equations in the absence of magnetic monopoles; it vanishes if the field is $x_3$-independent. A vector potential generating this field can be chosen in the form $$ A(x_1, x_2, x_3) = (0,\, B_0\, x_1 -\omega(x_3)\, f(x_1, x_2),\, 0 )\,, $$ which reduces to the Landau gauge in the unperturbed case, and consequently, the operator $\mathcal{H}(B)$ acts on its domain as \begin{equation} \label{eq-H} \mathcal{H}(B) = -\partial_{x_1}^2 +(i\partial_{x_2} +B_0\, x_1 -\omega(x_3)\, f(x_1, x_2) )^2 -\partial_{x_3}^2. 
\end{equation} We have again to specify the local character of the perturbation: we will suppose that \begin{enumerate}[(i)] \item the function $g\in L^\infty(\mathbb{R}_+)$ is non-negative, such that $f$ and $\partial_{x_2} f$ belong to $L^\infty(\mathbb{R}^2)$, and $$ \lim_{x_1^2+x_2^2\to\infty} \big(|\partial_{x_2} f(x_1, x_2)| + |f(x_1, x_2)|\big) =0\,, $$ \item $\omega\geq 0$, $\, \omega\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$, and $$ \|\omega\|_\infty\, \|g\|_\infty \leq B_0\,, \qquad \lim_{|x_3|\to\infty} \omega(x_3) =0\,. $$ \end{enumerate} \begin{lemma}\label{lem-es} The assumptions (i) and (ii) imply $\sigma_{\mathrm{ess}}(\mathcal{H}(B)) = [B_0,\infty)$. \end{lemma} \begin{proof} We will show that the essential spectrum of $\mathcal{H}(B)$ coincides with the essential spectrum of the operator $$ \mathcal{H}(B_0) = -\partial_{x_1}^2 +(i\partial_{x_2} +B_0\, x_1)^2 -\partial_{x_3}^2\,, $$ which is easily found: we have $\sigma(\mathcal{H}(B_0))= \sigma_{\mathrm{ess}}(\mathcal{H}(B_0)) = [B_0,\infty)$. Let $$ T= \mathcal{H}(B) -\mathcal{H}(B_0) = -2\, \omega f\, (i\partial_{x_2} + B_0 x_1) -i \omega\, \partial_{x_2} f +\omega^2 f^2. $$ From assumption (i) in combination with \cite[Thm.~5.7.1]{da} it follows that the operator $(\omega\, \partial_{x_2} f +\omega^2 f^2 )(-\Delta+1)^{-1}$ is compact on $L^2(\mathbb{R}^3)$. The diamagnetic inequality and \cite{pi} thus imply that the sum $i \omega\, \partial_{x_2} f +\omega^2 f^2$ is relatively compact with respect to $\mathcal{H}(B_0)$. As for the first term of the perturbation $T$, we note that since $(i\partial_{x_2} + B_0 x_1)$ commutes with $\mathcal{H}(B_0)$, it holds \begin{eqnarray} \lefteqn{\omega f\, (i\partial_{x_2} + B_0 x_1)\, (\mathcal{H}(B_0)+1)^{-1}} \nonumber \\ && = \omega f\, (\mathcal{H}(B_0)+1)^{-1/2}\, (i\partial_{x_2} + B_0 x_1)\, (\mathcal{H}(B_0)+1)^{-1/2}. 
\label{T-1} \end{eqnarray} In the same way as above, with the help of \cite[Thm.~5.7.1]{da}, the diamagnetic inequality, and \cite{pi}, we conclude that $\omega\, f\, (\mathcal{H}(B_0)+1)^{-1/2}$ is compact on $L^2(\mathbb{R}^3)$. On the other hand, $(i\partial_{x_2} + B_0 x_1)\, (\mathcal{H}(B_0)+1)^{-1/2}$ is bounded on $L^2(\mathbb{R}^3)$. Being the product of a compact and a bounded operator, \eqref{T-1} is thus compact; by Weyl's theorem we then have $\sigma_{\mathrm{ess}}(\mathcal{H}(B))=\sigma_{\mathrm{ess}}(\mathcal{H}(B_0))= [B_0,\infty)$. \end{proof} \subsection{Lieb-Thirring-type inequalities for $\mathcal{H}(B)$} \label{LT-3D} Now we are going to formulate Lieb-Thirring-type inequalities for the negative eigenvalues of $\mathcal{H}(B)-B_0$ in three different cases corresponding to different types of decay conditions on the function $g$. Let us start from a general result. We denote by $$ \alpha(x_3) = \omega(x_3) \int_0^\infty g(r)\, r\, \mathrm{d}r $$ the magnetic flux (up to the sign) through the plane $\{ (x_1,x_2, x_3): (x_1,x_2)\in\mathbb{R}^2\}$ associated with the perturbation. From Theorem~\ref{thm-red} and inequality (\ref{magn.field}) we make the following conclusion. \begin{theorem} \label{thm-red3} Let assumptions (i) and (ii) be satisfied. Suppose, moreover, that $\sup_{x_3} \alpha(x_3) \leq 1$ and put \begin{equation} \label{lam-k} \Lambda_k(x_3) = \bigl(\psi_k, \big(\,V_k(\cdot ; x_3)\big)_- \, \psi_k\bigr)_{L^2(\mathbb{R}_+, r \mathrm{d}r)}\,. \end{equation} Then the inequality \begin{equation} \label{eq-LT-gen-3d} \mathrm{tr} \, (\mathcal{H}(B)-B_0)_-^\sigma \leq L^{\mathrm{cl}}_{\sigma, 1} \ 2^{\, \sigma+\frac 12} \int_{\mathbb{R}} \, \sum_{k=0}^\infty\ \Lambda_k(x_3)^{\sigma+\frac 12}\, \mathrm{d}x_3\,, \quad \sigma \geq \frac 32\,, \end{equation} holds true whenever the right-hand side is finite. 
\end{theorem} \subsubsection{Perturbations with a power-like decay} Now we come to the three cases mentioned above, stating first the results and then presenting the proofs. We start from magnetic fields (\ref{mgf-3d}) with a perturbation $g$ which decays in a power-like way. Specifically, we shall assume that \begin{equation} \label{g-power} 0 \, \leq \, g(r) \, \leq \, B_0\, (1+\sqrt{B_0}\ r)^{-2\beta}\,, \qquad \beta >1\,. \end{equation} We have included the factor $\sqrt{B_0}$ on the right-hand side of \eqref{g-power} having in mind that $B_0^{-1/2}$ is the Landau magnetic length which defines a natural length unit in our model. \smallskip \noindent For any $\beta> 1$ and $\gamma> \max\left\{ \frac{1}{\beta-1}\,, 2\right\}$ we define the number \begin{equation} \label{eq-K} K(\beta, \gamma) = 2^{-\gamma} +\sum_{k=1}^\infty\, \left(\frac{\Gamma\left (( k+1-\beta)_+\right)}{\Gamma(k)} +\frac{1}{2\sqrt{2\pi k}} \right)^\gamma\,, \end{equation} and recall also the classical Lieb-Thirring constants in one dimension, \begin{equation} \label{LT-constants} L^{\mathrm{cl}}_{1,\sigma}= \frac{\Gamma(\sigma+1)}{2\sqrt{\pi}\ \Gamma(\sigma+3/2)}\,, \qquad \sigma>0\,. \end{equation} \smallskip \begin{theorem} \label{thm-power} Assume that $g$ satisfies \eqref{g-power} and that $\|\omega\|_\infty \leq 2(\beta-1)$. Then $$ \mathrm{tr} \, (\mathcal{H}(B)-B_0)_-^\sigma \ \leq \ L^{\mathrm{cl}}_{1,\sigma}\ K\Big(\beta, \sigma+\frac 12\Big) \left(\frac{2\, B_0}{\beta-1}\right)^{\sigma+\frac12} \int_{\mathbb{R}}\omega(x_3)^{\sigma+\frac 12}\, \mathrm{d}x_3 $$ holds true for all \begin{equation} \label{sigma-min} \sigma > \max\left\{ \frac 32\, ,\, \frac{3-\beta}{2\beta-2} \right\}\,. \end{equation} \end{theorem} \begin{remark} {\rm Since $\omega\in L^\infty(\mathbb{R}) \cap L^2(\mathbb{R})$, it follows that $\omega\in L^{\sigma+\frac 12}(\mathbb{R})$ for any $\sigma\geq 3/2$. 
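Indeed, since $\sigma+\frac 12 \ge 2$, one can interpolate, $$ \int_{\mathbb{R}} \omega(x_3)^{\sigma+\frac 12}\, \mathrm{d}x_3 \, \le \, \|\omega\|_\infty^{\sigma-\frac 32} \int_{\mathbb{R}} \omega(x_3)^2\, \mathrm{d}x_3 \, < \, \infty\,. $$ 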
Note also that by the Stirling formula we have} $$ \frac{\Gamma\left ( k+1-\beta \right)}{\Gamma(k)} \ \sim \ k^{1-\beta} \quad \mathrm{as}\quad k\to\infty\,. $$ {\rm Hence the constant $K\big(\beta, \sigma+\frac 12\big)$ is finite for any $\sigma$ satisfying \eqref{sigma-min}.} \end{remark} \subsubsection{Gaussian decay} Next we assume that the perturbation $g$ has a Gaussian decay, in other words \begin{equation} \label{g-gauss} 0 \, \leq \, g(r) \, \leq \, B_0\, e^{- \varepsilon B_0 r^2} \, , \qquad \varepsilon >0. \end{equation} \smallskip \begin{theorem} \label{thm-gauss} Assume that $g$ satisfies \eqref{g-gauss} and that $\|\omega\|_\infty \leq 2 \varepsilon$. Then for any $\sigma > 3/2$ it holds $$ \mathrm{tr} \, (\mathcal{H}(B)-B_0)_-^\sigma \ \leq \ L^{\mathrm{cl}}_{\sigma,1}\, \left(\frac{B_0}{\varepsilon}\right)^{\sigma+\frac 12} G(\varepsilon, \sigma)\, \int_{\mathbb{R}} \omega(x_3)^{\sigma+\frac 12}\, \mathrm{d}x_3\,, $$ where \begin{equation}\label{G} G(\varepsilon, \sigma) = 1 + \sum_{k=1}^\infty \left((1+2\varepsilon)^{-k} + \frac{1}{2\sqrt{2\pi k}}\right)^{\sigma+\frac 12}\,. \end{equation}\end{theorem} \subsubsection{Perturbations with compact support} Let $D$ be the disc of radius $R$ centered at the origin and put \begin{equation} \label{g-hole} g(r) = \left\{ \begin{array}{l@{\quad}l} B_0 & \quad r \leq R \, \\ 0 &\quad r > R \\ \end{array} \right. \, . \end{equation} \smallskip \begin{theorem} \label{thm-hole} Assume that $g$ satisfies \eqref{g-hole} with $R$ such that $B_0 R^2\leq 2$. Suppose moreover that $\|\omega\|_\infty \leq 1$. 
Then for any $\sigma > 3/2$ it holds \begin{equation} \label{LT-hole-3d} \mathrm{tr} \, (\mathcal{H}(B)-B_0)_-^\sigma \ \leq \ L^{\mathrm{cl}}_{\sigma,1}\ J\Big(B_0\, , \sigma\Big)\ B_0^{\sigma+\frac 12} \int_{\mathbb{R}} \omega(x_3)^{\sigma+\frac 12}\, \mathrm{d}x_3\,, \end{equation} where \begin{equation} J(B_0, \sigma) = \left(B_0\, R^2\right)^{\sigma+\frac 12}\left( 1 + \sum_{k=1}^\infty \left( \left(\frac{B_0\, R^2}{2}\right)^{k+1} \frac{1}{k!} + \frac{1}{2\sqrt{2\pi k}}\right)^{\sigma+\frac 12}\, \right)\, . \end{equation} \end{theorem} \subsection{The proofs} Note that the assumptions of these theorems ensure that $\sup_{x_3} \alpha(x_3)\leq 1$, hence in all the three cases we may apply Theorem~\ref{thm-red} and, in particular, the estimate \eqref{eq-LT-gen-3d}. To this end it is useful to realize that by \eqref{a-a0}, \eqref{vk} and \eqref{flux.} we have \begin{eqnarray} \lefteqn{V_k(r;x_3) = -\alpha(x_3) B_0 +\frac{2 \alpha(x_3) k}{r^2} - \frac{2k\, \omega(x_3)}{r^2} \int_r^\infty g(s)\, s\, \mathrm{d}s} \nonumber \\ && \quad +B_0\, \omega(x_3) \int_r^\infty g(s)\, s\, \mathrm{d}s + \frac{\omega^2(x_3)}{r^2} \left(\int_0^r g(s)\, s\, \mathrm{d}s\right)^2. \label{vk-eq} \end{eqnarray} Consequently, we obtain a simple upper bound on the negative part of $V_k$, \begin{equation} \label{vk-neg} \big(\,V_k(r ; x_3)\big)_- \ \leq\ \frac{2k\, \omega(x_3)}{r^2} \int_r^\infty g(s)\, s\, \mathrm{d}s +\alpha(x_3)\left(B_0 -\frac{2k}{r^2}\right)_+ \end{equation} for all $k\in\mathbb{N}\cup\{0\}$. For $k=0$ we clearly have \begin{equation} \label{upperb-0} \Lambda_0(x_3) \, \leq\, \alpha(x_3) B_0\,, \end{equation} by \eqref{norm}. In order to estimate $\Lambda_k(x_3)$ with $k\geq 1$ we denote by $\lambda_k(x_3)$ the contribution to $\Lambda_k(x_3)$ coming from the first term on the right-hand side of \eqref{vk-neg}, i.e. 
\begin{equation} \label{lam-k-2} \lambda_k(x_3) = 2\, \omega(x_3)\, k \int_0^\infty \psi^2_k(r) \left(\int_r^\infty g(s)\, s\, \mathrm{d}s\right) \, r^{-1}\, \mathrm{d}r\,. \end{equation} Before coming to the proofs we need an auxiliary result. \begin{lemma} \label{lem-aux} For any $k\in\mathbb{N}$ it holds $$ \Lambda_k(x_3) \, \leq\, \lambda_k(x_3) + \frac{\alpha(x_3) B_0}{\sqrt{2\pi k}}\,. $$ \end{lemma} \begin{proof} In view of \eqref{lam-k}, \eqref{vk-neg}, and \eqref{lam-k-2} the claim will follow if we show that \begin{equation} \label{enough} \int_0^\infty \psi^2_k(r) \, \left(B_0 -\frac{2k}{r^2}\right)_+ r\, \mathrm{d}r\ \leq \ \frac{B_0}{\sqrt{2\pi k}}\,. \end{equation} Let $r_k = \sqrt{\frac{2k}{B_0}}$. Using \eqref{psi-k} and the substitution $s= \frac{B_0 r^2}{2}$ we then find \begin{eqnarray*} \lefteqn{\int_0^\infty \psi^2_k(r) \, \left(B_0 -\frac{2k}{r^2}\right)_+ r\, \mathrm{d}r = B_0 \int_{r_k}^\infty \psi^2_k(r) \, r\, \mathrm{d}r - 2k \int_{r_k}^\infty \psi^2_k(r) \, r^{-1}\, \mathrm{d}r} \\ && = \frac{B_0}{\Gamma(k+1)} \int_k^\infty e^{-s}\, s^k\, \mathrm{d}s -\frac{B_0}{\Gamma(k)} \int_k^\infty e^{-s}\, s^{k-1}\, \mathrm{d}s\,. \phantom{AAAAAAAAAAA} \end{eqnarray*} Integration by parts gives $$ \int_k^\infty e^{-s}\, s^k\, \mathrm{d}s = e^{-k}\, k^k + k \int_k^\infty e^{-s}\, s^{k-1}\, \mathrm{d}s\,, $$ hence $$ \int_0^\infty \psi^2_k(r) \, \left(B_0 -\frac{2k}{r^2}\right)_+ r\, \mathrm{d}r = \frac{e^{-k}\, k^k\, B_0}{\Gamma(k+1)}\,, $$ and inequality \eqref{enough} follows from the Stirling-type estimate \cite[Eq.~6.1.38]{AS64} $$ \Gamma(k+1) = k! \geq \sqrt{2\pi}\ k^{\, k+\frac 12}\, e^{-k}\,, \qquad k\in\mathbb{N}\,; $$ this concludes the proof. \end{proof} \smallskip \begin{proof}[\bf Proof of Theorem \ref{thm-power}] In view of \eqref{upperb-0} and Lemma \ref{lem-aux} it suffices to estimate $\lambda_k(x_3)$ in a suitable way from above for $k\geq 1$. 
Using \eqref{g-power} we find \begin{align*} \int_0^\infty g(r)\, r \, \mathrm{d}r & \leq B_0 \int_0^\infty (1+\sqrt{B_0}\ r)^{-2\beta}\, r\, \mathrm{d}r \leq \sqrt{B_0} \int_0^\infty (1+\sqrt{B_0}\ r)^{1-2\beta}\, \mathrm{d}r \\ &= \int_0^\infty (1+ s)^{1-2\beta}\, \mathrm{d}s = \frac{1}{2(\beta-1)}\,, \end{align*} which implies \begin{equation} \label{alpha-upperb} \alpha(x_3) \leq \frac{\omega(x_3)}{2(\beta-1)}\, . \end{equation} Moreover, by virtue of \eqref{g-power} $$ \int_r^\infty g(s)\, s\, \mathrm{d}s \, \leq \, \sqrt{B_0}\, \int_r^\infty (1+\sqrt{B_0}\ s)^{1-2\beta}\, \mathrm{d}s = \frac{1}{2\beta-2}\, (1+\sqrt{B_0}\ r)^{2-2\beta} . $$ Assume first that $1\leq k \leq \beta -1$. In this case a combination of \eqref{psi-k} and the last equation gives \begin{eqnarray} \lefteqn{\lambda_k(x_3) \leq \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \, \left(\frac{B_0}{2}\right)^{k} \int_0^\infty e^{-\frac{B_0 r^2}{2}} r^{2k-1} (1+\sqrt{B_0}\ r)^{2-2\beta}\, \mathrm{d}r} \nonumber \\ && \leq \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \int_0^\infty e^{-s} s^{k-1}\, (1+\sqrt{2 s})^{2-2\beta}\, \mathrm{d}s \, \nonumber \\ && \leq\, \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \, \int_0^\infty e^{-s} \, \mathrm{d}s = \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \,, \phantom{AAAAAAAAAAAA} \label{k-geq-1} \end{eqnarray} where we have used again the substitution $s= \frac{B_0 r^2}{2}$.
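Both integral bounds used above are easy to spot-check numerically. The sketch below (ours; $\beta=2$ chosen for illustration, quadrature limits and step sizes are assumptions of the sketch) verifies $\int_0^\infty (1+s)^{1-2\beta}\,\mathrm{d}s = \frac{1}{2(\beta-1)}$ and the crude estimate $\int_0^\infty e^{-s}s^{k-1}(1+\sqrt{2s})^{2-2\beta}\,\mathrm{d}s \le \Gamma(k)$:

```python
import math

def trapz(f, a, b, n):
    # composite trapezoidal rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

beta = 2.0  # illustrative; the identities below hold for any beta > 1

# int_0^infty (1+s)^(1-2*beta) ds = 1/(2*(beta-1)); the tail beyond s = 500 is negligible
lhs = trapz(lambda s: (1.0 + s) ** (1.0 - 2.0 * beta), 0.0, 500.0, 100000)
assert abs(lhs - 1.0 / (2.0 * (beta - 1.0))) < 1e-3

# int_0^infty exp(-s) s^(k-1) (1+sqrt(2s))^(2-2*beta) ds <= Gamma(k),
# since the last factor in the integrand is at most one
for k in range(1, 8):
    val = trapz(lambda s, k=k: math.exp(-s) * s ** (k - 1)
                * (1.0 + math.sqrt(2.0 * s)) ** (2.0 - 2.0 * beta),
                0.0, 60.0, 50000)
    assert val <= math.gamma(k)
```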
\smallskip \noindent On the other hand, for $k > \beta-1$ we have \begin{eqnarray*} \lefteqn{\lambda_k(x_3) \leq \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \, \left(\frac{B_0}{2}\right)^{k} \int_0^\infty e^{-\frac{B_0 r^2}{2}} r^{2k-1} (1+\sqrt{B_0}\ r)^{2-2\beta}\, \mathrm{d}r} \\ && \leq \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \, \left(\frac{B_0}{2}\right)^{k} \int_0^\infty e^{-\frac{B_0 r^2}{2}} r^{2k-1} (B_0\, r^2)^{1-\beta}\, \mathrm{d}r \\ && \leq \frac{\omega(x_3)\, B_0}{(\beta-1)\, \Gamma(k)} \int_0^\infty e^{-s} s^{k-\beta}\, \mathrm{d}s = \frac{\omega(x_3)\, B_0\, \Gamma(k+1-\beta)}{(\beta-1)\, \Gamma(k)}\,. \phantom{AAAA} \end{eqnarray*} This together with equations \eqref{alpha-upperb}, \eqref{upperb-0}, \eqref{k-geq-1} and Lemma \ref{lem-aux} shows that $$ \sum_{k=0}^\infty\, \Lambda_k^\gamma(x_3)\, \leq \, K(\beta, \gamma) \left(\frac{B_0}{\beta-1}\right)^{\gamma} \omega(x_3)^{\gamma}\, , $$ with the constant $K(\beta, \gamma)$ given by \eqref{eq-K}. The claim now follows from \eqref{eq-LT-gen-3d} upon setting $\gamma=\sigma +\frac 12$. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{thm-gauss}] We proceed as in the proof of Theorem \ref{thm-power} and use equation \eqref{upperb-0} and Lemma \ref{lem-aux}. Since \begin{equation} \label{alpha-gauss} \alpha(x_3) \leq \omega(x_3) \int_0^\infty B_0\, e^{- \varepsilon B_0 r^2} \, r\, \mathrm{d}r = \frac{\omega(x_3)}{2\varepsilon} \end{equation} holds in view of \eqref{g-gauss}, for $k=0$ we get $$ \Lambda_0(x_3) \, \leq \, \alpha(x_3) B_0 \, \leq \, \frac{\omega(x_3)\, B_0}{2\varepsilon}\,. $$ On the other hand, $$ \int_r ^\infty g(s)\, s\, \mathrm{d}s \leq B_0 \int_r^\infty e^{- \varepsilon B_0 s^2} \, s\, \mathrm{d}s = \frac{1}{2 \varepsilon}\, e^{- \varepsilon B_0 r^2} \, .
$$ Hence using the substitution $s= \frac{B_0 r^2}{2} (1+2\varepsilon)$, we obtain \begin{eqnarray*} \lefteqn{\lambda_k(x_3) \leq \frac{\omega(x_3)\, B_0}{\varepsilon\, \Gamma(k)} \, \left(\frac{B_0}{2}\right)^{k} \int_0^\infty e^{-\frac{B_0 r^2}{2} (1+2\varepsilon)} \, r^{2k-1}\, \mathrm{d}r} \\ && = \frac{\omega(x_3)\, B_0}{2\varepsilon} \, \frac{(1+2\varepsilon)^{-k}}{\Gamma(k)} \int_0^\infty e^{-s}\, s^{k-1}\, \mathrm{d}s = \frac{\omega(x_3)\, B_0}{2\varepsilon}\, (1+2\varepsilon)^{-k} \end{eqnarray*} for any $k\geq 1$. Summing up gives $$ \sum_{k=0}^\infty \, \Lambda_k^\gamma(x_3) \, \leq \, \left(\frac{\omega(x_3)\, B_0}{2\, \varepsilon}\right)^\gamma \left(1+ \sum_{k=1}^\infty \left( (1+2\varepsilon)^{-k} +\frac{1}{2\sqrt{2\pi k}}\right)^\gamma \right)\,. $$ Theorem \ref{thm-red} applied with $\gamma=\sigma +\frac 12$ then completes the proof. \end{proof} \smallskip \begin{proof}[\bf Proof of Theorem \ref{thm-hole}] In this case we have $$ \alpha(x_3) = \omega(x_3)\, \frac{B_0 R^2}{2}\,. $$ Inequality \eqref{upperb-0} thus implies $$ \Lambda_0(x_3) \, \leq \, \omega(x_3)\, \frac{B_0^2\, R^2}{2}\, . $$ For $k\geq 1$ we note that in view of \eqref{g-hole} $$ \int_r^\infty g(s)\, s\, \mathrm{d}s = \left\{ \begin{array}{l@{\quad}l} \frac 12\, (R^2-r^2) & \quad r \leq R \, \\ & \\ 0 &\quad r > R \\ \end{array} \right. $$ Hence from \eqref{psi-k} and \eqref{lam-k-2} we conclude that \begin{eqnarray*} \lefteqn{\lambda_k(x_3)\, \leq\, \frac{B_0^2\, R^2\, \omega(x_3)}{\Gamma(k)} \, \left(\frac{B_0}{2}\right)^{k} \int_0^R e^{-\frac{B_0 r^2}{2}} \, r^{2k-1}\, \mathrm{d}r} \\ &&\, \leq\, \frac{B_0^2\, R^2\, \omega(x_3)}{2 \Gamma(k)} \int_0^{\frac{B_0 R^2}{2}} e^{-s} \, s^{k-1}\, \mathrm{d}s \\ &&\, \leq\, \frac{B_0^2\, R^2\, \omega(x_3)}{2 k \Gamma(k)} \, \left(\frac{B_0\, R^2}{2}\right)^k = \frac{B_0\, \omega(x_3)}{\Gamma(k+1)} \, \left(\frac{B_0\, R^2}{2}\right)^{k+1}\!\!, \quad k\in\mathbb{N}\,.
\end{eqnarray*} This in combination with the above estimate on $\Lambda_0(x_3)$ and Lemma \ref{lem-aux} implies $$ \sum_{k=0}^\infty \, \Lambda_k^\gamma(x_3) \ \leq \ \omega(x_3)^\gamma\, B_0^\gamma\, \left(\frac{B_0\, R^2}{2}\right)^\gamma \left(1+ \sum_{k=1}^\infty \left(\left(\frac{B_0\, R^2}{2}\right)^{k} \frac{1}{k!} + \frac{1}{\sqrt{2\pi k}}\right)^\gamma\, \right) \,, $$ and the claim follows again by applying Theorem \ref{thm-red} with $\gamma=\sigma +\frac 12$. \end{proof} \subsection*{Acknowledgements} The research was supported by the Czech Science Foundation (GA\v{C}R) within the project 14-06818S. D.B. acknowledges the support of the University of Ostrava and the project ``Support of Research in the Moravian-Silesian Region 2013''. H.~K. was supported by the Gruppo Nazionale per Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The support of MIUR-PRIN2010-11 grant for the project ``Calcolo delle variazioni'' (H.~K.) is also gratefully acknowledged. T.W. was in part supported by the DFG project WE 1964/4-1 and the DFG GRK 1838.
\section{Introduction} Extended Self-Similarity (ESS), discovered by \cite{BCTBMS93}, is the empirical observation that in fully developed turbulence the range over which clean power-law scaling is observed can be substantially increased by plotting structure functions of order $p$ vs, say, the structure function of order three, rather than vs the separation, as is done traditionally. This has allowed a much better determination of the scaling exponents $\zeta_p$ of the structure functions of order $p$ --- or at least of ratios of such exponents --- and has been key to confirming that three-dimensional high Reynolds number incompressible turbulence does not follow the \cite{K41} scaling laws $\zeta_p = p/3$, but instead has \textit{anomalous} scaling, whose exponents cannot be obtained solely through dimensional arguments. In spite of several attempts to explain the success of ESS \cite[see, e.g.][and Section~\ref{s:back-to-ns}]{BS99,SB99,FG01,SLP96,Y01}, the latter is still not fully understood and we do not know how much we can trust scaling exponents derived by ESS. It would be nice to have at least one instance for which ESS not only works, but does so for reasons we can rationally understand. A very natural candidate might be the one-dimensional Burgers equation. Early attempts to test ESS on the Burgers equation did not show any appreciable increase in the quality of scaling through the use of ESS. As we shall see in Section~\ref{s:burgers}, the conclusion that ``ESS does not work for the Burgers equation'' \cite[][]{BCBC95} was just reflecting the computational limitations of the early nineties. In Section~\ref{s:ess-in-a-nutshell} we recall some basic facts and notation for ESS in three-dimensional Navier--Stokes turbulence. Then in Section~\ref{s:burgers} we turn to the Burgers equation and present new numerical evidence that ESS works for Burgers, provided high enough spatial resolution is used.
In Section~\ref{s:asymptotic-ess-theory} we use asymptotic theory to explain in detail why ESS works for the Burgers case. Finally, in Section~\ref{s:back-to-ns} we examine the possible lessons from our Burgers ESS study for three-dimensional Navier--Stokes turbulence. \section{ESS in a nutshell} \label{s:ess-in-a-nutshell} Consider the three-dimensional Navier--Stokes (3DNS) equation \begin{equation} \partial_t {\bf v} + {\bf v}\cdot \nabla {\bf v} = -\nabla p +\nu \nabla^2 {\bf v}, \qquad \nabla\cdot {\bf v} =0. \label{NS} \end{equation} For the case of homogeneous isotropic turbulence, (longitudinal) structure functions of integer order $p$ are defined as \begin{equation} S_p(r) \equiv \left\langle\left( \delta v_\parallel({\bf r})\right)^p\right \rangle, \label{defsp} \end{equation} in terms of the longitudinal velocity increments \begin{equation} \delta v_\parallel ({\bf r})\equiv [{\bf v}( {\bf x}+ {\bf r}) - {\bf v}({\bf x})]\cdot \frac{{\bf r}}{r}\,, \label{defdeltav} \end{equation} where $r \equiv |{\bf r}|$ and the angular brackets denote averaging. There is experimental and numerical evidence that, at high Reynolds numbers, structure functions follow scaling laws \cite[see, e.g.][]{MY71,F95} \begin{equation} S_p(r) \propto r^{\zeta_p} \label{scaling} \end{equation} over some range of separations (the inertial range) $L\gg r\gg \eta_p$. Here $L$ is the integral scale and $\eta_p$ the dissipation scale. The latter may depend on the order $p$ \cite[see e.g.,][]{PV87,FV91}. Of course, the dominant-order behaviour given by \eqref{scaling} is accompanied by subdominant corrections involving the two small parameters characteristic of inertial-range intermediate asymptotics, namely $r/L$ and $\eta_p/r$. The simplest would be to have \begin{equation} S_p(r) =C_pr^{\zeta_p}\left(1+D_p^{\rm IR}(r/L)^{g_p^{\scriptscriptstyle\rm IR}}+D_p^{\rm UV}(\eta_p/r)^{g_p^{\scriptscriptstyle\rm UV}}\right) + {\rm h.o.t.}\,, \label{firstsub} \end{equation} where h.o.t.
stands for ``higher-order terms'' and where $g_p^{\rm IR}>0$ and $g_p^{\rm UV}>0$ are the infrared (IR) and ultraviolet (UV) gaps, respectively. For a given Reynolds number and thus a given ratio $L/\eta_p$, the smaller the gaps and the constants $D_p^{\rm IR}$ and $D_p^{\rm UV}$, the larger the range of separations over which subdominant corrections remain small. ESS is an operational procedure that effectively enlarges the range of separations over which dominant-order scaling is a good approximation. In its simplest formulation, one considers two integer orders $n$ and $m$, plots $|S_n(r)|$ vs $|S_m(r)|$, and finds empirically that the scaling relations \begin{equation} |S_n(r)| \approx |S_m(r)|^{\alpha(n,m)}\, , \label{esss1caling} \end{equation} with suitable exponents $\alpha(n,m)$, hold much better than \eqref{scaling}. One particularly interesting instance of this procedure is when $m=3$. We then know from \cite{K41} that, to dominant order, we have the four-fifths law \cite[see also][]{F95} \begin{equation} S_3(r) = -\frac{4}{5} \varepsilon r\, , \label{fourfiths} \end{equation} where $\varepsilon$ is the mean energy dissipation per unit mass. Thus, the third-order structure function (divided by $-(4/5)\varepsilon$) may be viewed as a \textit{deputy} of the separation $r$. A variant of the ESS, which frequently gives even better scaling, is to use alternative structure functions, defined with the absolute values of the longitudinal velocity increments, namely \begin{equation} F_p(r) \equiv \left\langle\left|\delta v_\parallel({\bf r})\right|^p\right \rangle\,. \label{deffp} \end{equation} It is then found empirically that \begin{equation} F_n(r) \approx F_m(r)^{\beta(n,m)}\, , \label{esss2caling} \end{equation} with suitable scaling exponents $\beta(n,m)$.
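To see the mechanism at work on a toy example, one can take model structure functions of the form \eqref{firstsub} with a $p$-independent gap and compare log--log fits of $S_6$ vs $r$ with $S_6$ vs $S_3$. In the sketch below (ours) all constants are illustrative, not fitted to any data; the ESS plot comes out measurably straighter:

```python
import math

def loglog_fit(xs, ys):
    # least-squares straight line through (log x, log y);
    # returns the slope and the maximum absolute residual of the fit
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((a - mx) ** 2 for a in lx)
    slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sxx
    res = max(abs(b - my - slope * (a - mx)) for a, b in zip(lx, ly))
    return slope, res

# toy structure functions of the form (firstsub) with a p-independent gap
L_int, eta, gap = 1.0, 1e-5, 1.0
def S(zeta, D, E, r):
    return r ** zeta * (1.0 + D * (r / L_int) ** gap + E * (eta / r) ** gap)

rs = [10.0 ** (-3.5 + 2.0 * i / 200) for i in range(201)]   # eta << r << L_int
S3 = [S(1.00, 1.0, 0.5, r) for r in rs]
S6 = [S(1.78, 3.0, 2.0, r) for r in rs]

slope_std, res_std = loglog_fit(rs, S6)   # standard plot: S6 vs r
slope_ess, res_ess = loglog_fit(S3, S6)   # ESS plot: S6 vs S3

assert res_ess < res_std                  # the ESS log-log plot is straighter
assert abs(slope_ess - 1.78) < 0.1        # and still recovers the exponent ratio
```

The improvement comes from the partial cancellation of the subdominant corrections when $S_6$ is expressed through $S_3$, which is the mechanism analysed quantitatively for Burgers in Section~\ref{s:asymptotic-ess-theory}.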
Whatever its empirical merits, the variant procedure has the drawback that there is no equivalent to the four-fifths law for the third-order structure function with the absolute value of the longitudinal velocity increment. Thus we cannot safely use $F_3(r)$ as a deputy of $r$. We shall come back to this in Section~\ref{s:back-to-ns}. \section{ESS revisited for the Burgers equation} \label{s:burgers} The one-dimensional Burgers equation \begin{equation} \partial_t u +u\partial_x u = \nu \partial_x^2 u; \quad u(x,0) =u_0(x), \label{burgerseq} \end{equation} which was introduced originally as a kind of poor man's Navier--Stokes equation \cite[see, e.g.,][]{B74}, has some dramatic differences from three-dimensional Navier--Stokes (3DNS) turbulence, foremost that it is integrable \cite[][]{H50,C51} and --- as a consequence --- does not display self-generated chaotic behaviour. Nevertheless it does display \textit{anomalous scaling} in the following sense: superficially, the \cite{K41} theory is applicable to the Burgers equation as much as it is to 3DNS. However, when starting with smooth initial data $u_0(x)$, the evolved solution in the limit of vanishing viscosity $\nu$ will display shocks; since the dominant contribution to $S_p(r)$ then comes from intervals straddling a shock and is linear in $r$, we have $\zeta_p=1$ for $p\ge 1$. When the Reynolds number is finite, structure functions will display scaling only over a limited range of separations $r$. Therefore, the Burgers equation may be a good testing ground for ESS and also perhaps for understanding why and when it works. Such considerations did not escape the creators of the ESS technique. Unfortunately, no clean scaling for structure functions is observed with the Burgers equation, either in the standard representation or in ESS, as long as simulations are done with the spatial resolution easily available in the early nineties, namely a few thousand collocation points.
Scaling emerges only at much higher spatial resolutions with 128K ($128\times 1024$) Fourier modes and becomes fully manifest with 256K modes, which is now also the highest resolution achievable numerically within a time span of a few weeks. \begin{figure} \includegraphics[width=12cm]{essburgers2.eps} \caption{Compensated sixth-order structure function in standard (continuous line) and ESS (dashed line) representations.} \label{f:essburgers2} \end{figure} Let us now explain our numerical strategy for studying ESS with the Burgers equation. Our goal in doing preliminary numerical experiments is to understand ESS in a rational way, starting from the basic equations. For this it is advisable to keep the formulation minimally complex, avoiding too many ``realistic trappings''. For example, there is no need at first to assume random initial conditions: we can just take an $L$-periodic initial condition and define the structure functions by integrating over the period: \begin{equation} S_p(r)\equiv (1/L)\int_0 ^L dx\, [u(x+r,t) -u(x,t)]^p. \label{determstruct} \end{equation} We shall mostly work with a very simple \textit{single-mode model} for which the initial condition is $2\pi$-periodic, deterministic, and has a single Fourier mode \begin{equation} u_0 = \sin x. \label{initial} \end{equation} As we shall see in Section~\ref{s:asymptotic-ess-theory}, it is easy to extend the theory from the deterministic to the random case. We integrated the Burgers equation (\ref{burgerseq}) with the initial condition (\ref{initial}) using a pseudo-spectral method with $N$ collocation points and a two-thirds alias-removal rule. Time stepping was done in double precision by a 4th order Runge--Kutta scheme with constant time step $\delta t$. The viscous term was handled by the slaving technique known as ETDRK4 described in \cite{CM02}, which allows taking a time step about ten times larger than would be permitted with a direct handling of the viscous term.
(It was pointed out by \cite{KT2005} that ETDRK4 can produce numerically ill-conditioned cancellations; following a suggestion of J.Z.~Zhu (private communication), we handled these by performing Taylor expansions to suitable order rather than by the complex-plane method proposed by \cite{KT2005}.) Slaving results not only in considerable speed-up but also in much less accumulation of rounding noise. The parameters of the run were $N = 256K$ and $\delta t =10^{-5}$. Output was processed at $t=2$, well beyond the time of appearance of the first shock at $t=t_\star =1$. Fig.~\ref{f:essburgers2} shows the compensated structure function --- that is, divided by the theoretically predicted inertial-range dominant term --- of order six in both the standard representation and in a variant of the ESS representation. Our variant uses \begin{equation} \tilde{S}_3(r) \equiv \frac{S_3(r)}{-12\varepsilon}, \label{deftildes3} \end{equation} where $\varepsilon \equiv -(1/L)(d/dt)\int_0^{L}dx\, u ^2/2 $ is the mean energy dissipation. It is easy to show that the Burgers counterpart of the four-fifths law is a ``minus twelve'' law \cite[\textit{cf.}][]{GSAFT97}, which makes $\tilde S_3$ the appropriate deputy of the separation $r$. In our opinion it is important to choose the constant in the definition of $\tilde S_3$ in such a way that it becomes $r$ with a unit factor (to dominant order). Otherwise an ESS plot in log-log coordinates may show an overall improvement in quality of scaling, without our being able to disentangle the small-separation (UV) improvement from the large-separation (IR) improvement. As we shall see, both are in general present and have quite different origins. Fig.~\ref{f:essburgers2} shows a substantial improvement in scaling for ESS, that is, a wider horizontal plateau in the compensated structure function; and this at both the IR and UV ends. This is the first evidence that ``ESS works for Burgers''. Next we shall understand why it works.
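For reference, a stripped-down version of such a computation can be written in a few lines. The sketch below (ours) uses a 2/3-dealiased pseudo-spectral discretisation with an integrating factor for the viscous term and a second-order Heun step, rather than ETDRK4, and a resolution and viscosity far from the values of the production run quoted above:

```python
import numpy as np

def burgers_sin(N=1024, nu=1e-2, dt=2e-4, T=2.0):
    """Solve u_t + u u_x = nu u_xx with u0(x) = sin(x) on [0, 2*pi).

    Minimal sketch: 2/3-dealiased pseudo-spectral method, exact
    integrating factor for the viscous term, Heun (RK2) time stepping."""
    m = np.fft.rfftfreq(N, d=1.0 / N)        # integer wavenumbers 0..N/2
    ik = 1j * m
    dealias = m < N / 3.0                    # 2/3 alias-removal rule
    E = np.exp(-nu * m**2 * dt)              # viscous propagator over one step

    x = 2.0 * np.pi * np.arange(N) / N
    u_hat = np.fft.rfft(np.sin(x))

    def nonlin(v_hat):
        # -(u^2/2)_x in Fourier space, dealiased
        v = np.fft.irfft(v_hat, n=N)
        return -0.5 * ik * np.fft.rfft(v * v) * dealias

    for _ in range(int(round(T / dt))):
        n0 = nonlin(u_hat)
        pred = E * (u_hat + dt * n0)         # predictor
        u_hat = E * u_hat + 0.5 * dt * (E * n0 + nonlin(pred))

    u = np.fft.irfft(u_hat, n=N)
    dudx = np.fft.irfft(ik * u_hat, n=N)
    return x, u, dudx

x, u, dudx = burgers_sin()
# by t = 2 a viscously smoothed shock has formed and energy has dissipated
assert dudx.min() < -10.0
assert 0.05 < 0.5 * np.mean(u**2) < 0.23
```

Structure functions in the sense of \eqref{determstruct} can then be formed from `u` by averaging powers of `np.roll(u, -j) - u` over the grid.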
\section{Asymptotic theory of ESS for the Burgers equation} \label{s:asymptotic-ess-theory} We now give the theory for improved ESS scaling when $p \geq 3$, first for the single-mode case, and then upgrade it to the case of random solutions. To handle the infrared (IR) contributions to the structure functions we can work with an infinitely sharp shock, taking $\nu \to 0$. The dominant contribution to structure functions of integer order comes clearly from intervals $[x,\,x+r]$ which straddle the shock location $x_{\scriptscriptstyle \rm S}$ (in this Section the time variable is written explicitly only when needed). It is also easily shown that for $p\ge 3$ the first-order subdominant contribution comes from the small changes of the velocity, immediately to the left and the right of the shock, which are expressible by Taylor expanding the velocity to first order in these two regions \cite[see Section 4.2.2 of][]{BFK00}: \begin{equation} u(x) = u_-+(x-x_{\scriptscriptstyle \rm S})s_-+ \textrm{h.o.t.}, \quad u(x) = u_++(x-x_{\scriptscriptstyle \rm S})s_+ + \textrm{h.o.t.}, \label{taylor} \end{equation} where $u_-$ and $u_+$ are the velocities immediately to the left and to the right of the shock and $s_-$ and $s_+$ their respective gradients. Starting from \eqref{determstruct} and limiting the integration domain to the interval $[x_{\scriptscriptstyle \rm S} -r,\,x_{\scriptscriptstyle \rm S}]$, which corresponds to the straddling condition, we obtain, using \eqref{taylor} \begin{eqnarray} LS_p(r)=(-1)^p\left(\Delta^pr-\Delta^{p-1}(s_++s_-)\frac{p}{2}r^2\right)+\textrm{h.o.t.}, \label{cscs} \end{eqnarray} where \begin{eqnarray} \Delta \equiv u_--u_+ >0 \label{defdelta} \end{eqnarray} is the amplitude of the shock and $L=2\pi$ is the spatial period.
Specialising to the third-order structure function and to its rescaled version, the \textit{separation deputy} $\tilde S_3(r) \equiv S_3(r)/(-12\varepsilon)$, we obtain \begin{eqnarray} LS_3(r) &=& -\Delta^3 r + \left(\frac{3}{2}\right)\Delta^2(s_- + s_+)r^2 + \textrm{h.o.t.}\\ \tilde {S}_3(r) &=& r-\left(\frac{3}{2\Delta}\right)(s_- + s_+)r^2 +\textrm{h.o.t.}, \label{bsbs} \end{eqnarray} where we have used the relation \begin{eqnarray} \varepsilon =\frac{\Delta ^3}{12 L} \label{espdelta} \end{eqnarray} between the energy dissipation and the shock strength. We now eliminate $r$ between \eqref{bsbs} and \eqref{cscs}, so as to rewrite the structure function of order $p$ as an expansion in the separation deputy: \begin{eqnarray} LS_p=(-1)^p \left(\Delta ^p\tilde{S}_3 -\Delta^{p-1}\left(\frac{p-3}{2}\right)(s_-+s_+)\tilde{S}_3^2\right)+\textrm{h.o.t.}\label{dsds} \end{eqnarray} Comparison of the ``standard'' expansion \eqref{cscs} of the structure function and its ESS expansion \eqref{dsds} shows that they have the same dominant terms and that their first subdominant corrections differ only by a numerical coefficient: $p/2$ for the standard case and $(p-3)/2$ for ESS. Hence the subdominant correction has been decreased by a factor $p/(p-3)$. For the case of the sixth-order structure function, considered in Fig.~\ref{f:essburgers2}, this is a reduction by a factor two. Hence, the ESS inertial-range scaling extends by a factor $2$ further into the IR direction before the same level of degradation is reached as in the standard case. Note that for large $p$ the gain in scaling range becomes smaller. Next we turn to the ultraviolet (UV) contributions which now require a finite viscosity $\nu$ that broadens the shock.
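Before doing so, we note that the elimination step above is easy to check numerically. In the sketch below (ours, with illustrative values for $\Delta$ and $s_-+s_+$) the coefficient of $\tilde S_3^2$ extracted from the truncated expansions \eqref{cscs} and \eqref{bsbs} matches $-\Delta^{p-1}(s_-+s_+)(p-3)/2$, as in \eqref{dsds}:

```python
Delta, sigma = 2.0, 0.6   # shock amplitude and s_- + s_+ (illustrative values)

def LS_p(p, r):
    # truncated standard expansion (cscs), without the overall (-1)^p factor
    return Delta ** p * r - Delta ** (p - 1) * sigma * (p / 2.0) * r ** 2

def S3_tilde(r):
    # truncated deputy separation (bsbs)
    return r - (3.0 / (2.0 * Delta)) * sigma * r ** 2

for p in range(3, 9):
    r = 1e-6
    s = S3_tilde(r)
    # coefficient of S3_tilde**2 after eliminating r
    c2 = (LS_p(p, r) - Delta ** p * s) / s ** 2
    predicted = -Delta ** (p - 1) * sigma * (p - 3) / 2.0
    assert abs(c2 - predicted) < 1e-3 * max(1.0, abs(predicted))
```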
Standard boundary layer analysis for the shock in the frame where the shock is at rest (which basically amounts to dropping the time derivative term in the Burgers equation) gives the following well-known tanh structure: \begin{eqnarray} u(x) \approx -\frac{\Delta}{2} \tanh \frac{x \Delta}{4\nu}. \label{tanh} \end{eqnarray} Hence the UV structure functions are given by \begin{eqnarray} LS_p(r) = \left(\frac{\Delta}{2}\right) ^p \int_{-\infty} ^\infty dx \left[\tanh \frac{x \Delta}{4\nu} -\tanh \frac{(x+r) \Delta}{4\nu} \right]^p. \label{structuv} \end{eqnarray} We are interested in the expansion of these structure functions for $r$ much larger than the typical width $4\nu/\Delta$ of the shock. For this, we first establish the following expansion (for large $\tilde r$) \begin{eqnarray} \int_{-\infty} ^\infty d\tilde x\, [\tanh (\tilde x) -\tanh(\tilde x + \tilde r)]^p= (-2)^p(\tilde r - H_{p-1}) +\textrm{t.s.t.}\label{spuv} \end{eqnarray} where ``$\textrm{t.s.t.}$'' stands for ``transcendentally small terms'' such as $\exp(-\tilde r)$ and\\ $H_p \equiv 1+1/2+1/3+...+1/p$ is the $p$-th harmonic number, which behaves as $\ln p$ for large $p$. Using \eqref{spuv} in \eqref{structuv}, we obtain \begin{eqnarray} L S_p(r) = (-1)^p\left( \Delta ^pr -4\nu H_{p-1}\Delta ^{p-1}\right) +\textrm{t.s.t.} \label{spuv2} \end{eqnarray} Specialising to $p = 3$, we have \begin{eqnarray} LS_3 = -\Delta^3 r + 6\nu\Delta^2 + \textrm{t.s.t.},\quad \tilde {S}_3(r)=r - 6\frac{\nu}{\Delta} + \textrm{t.s.t.}.
\label{s3uv} \end{eqnarray} Proceeding as in the IR case, we re-expand $S_p$ in terms of the deputy separation $\tilde S_3$: \begin{eqnarray} L S_p(r) = (-1)^p\left( \tilde S_3 \Delta ^p -4\nu \left(H_{p-1} - \frac{3}{2}\right)\Delta ^{p-1} \right) + \textrm{t.s.t.} \label{essspuv} \end{eqnarray} Thus, we see that with ESS the subdominant UV term for the structure function of order $p$ is reduced by a factor $2H_{p-1}/(2H_{p-1} -3)$ and, again, the range of scaling is extended by the same factor. For $p=6$ the extension factor is $137/47 \approx 2.91$. Combining the UV and the IR gains, we see that the scaling for $S_6$ is extended by a factor $5.83$, that is, about three quarters of a decade. Next we upgrade the arguments to the case of the Burgers equation with smooth random initial conditions and forcing, defined on the whole real line. We assume that the forces and the initial conditions are (statistically) homogeneous and have rapidly decreasing spatial correlations (mixing). We can then use ergodicity to obtain the following representation of structure functions: \begin{eqnarray} S_p(r) \equiv \langle\left( u(x+r) - u(x)\right)^p\rangle = \lim _{L\to \infty}\frac{1}{L}\int _0^L dx\, \left( u(x+r) - u(x)\right)^p, \label{ergodic} \end{eqnarray} where the limit is in the almost sure sense. In the present context we have typically an infinite number of shocks on the whole line, but a finite number per unit length. The shock amplitudes $\Delta$ and the left and right velocity gradients $s_-$ and $s_+$ become random variables. Revisiting the arguments given above for the deterministic single-mode case, we find that in both the IR and UV expansions we now have to add the contributions stemming from the various shocks.
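(As a numerical aside, the expansion \eqref{spuv} and the gain factors quoted above are easy to verify. The sketch below, ours, uses an elementary trapezoidal quadrature whose limits and step size are assumptions of the sketch; it confirms the large-$\tilde r$ formula and recovers the exact value $137/47$ for $p=6$.)

```python
import math
from fractions import Fraction

def harmonic(n):
    # harmonic number H_n as an exact rational
    return sum(Fraction(1, i) for i in range(1, n + 1))

def uv_gain(p):
    # UV gain factor 2*H_{p-1} / (2*H_{p-1} - 3) quoted in the text
    h = harmonic(p - 1)
    return (2 * h) / (2 * h - 3)

assert uv_gain(6) == Fraction(137, 47)
# combined with the IR factor p/(p-3) = 2 this gives about 5.83 for p = 6
assert abs(float(2 * uv_gain(6)) - 5.83) < 0.01

def lhs(p, rt, a=-50.0, b=30.0, n=80000):
    # trapezoidal approximation of int [tanh(x) - tanh(x + rt)]^p dx
    h = (b - a) / n
    f = lambda x: (math.tanh(x) - math.tanh(x + rt)) ** p
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

rt = 20.0
for p in range(2, 7):
    expected = (-2.0) ** p * (rt - float(harmonic(p - 1)))
    assert abs(lhs(p, rt) - expected) < 1e-4 * abs(expected)
```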
Using ergodicity we obtain, in the UV domain \begin{eqnarray} S_p(r) = (-1)^p\left( \langle \Delta ^p\rangle\, r -4\nu H_{p-1}\langle \Delta ^{p-1}\rangle \right) +\textrm{t.s.t.} \label{rspuv} \end{eqnarray} and, in the IR domain \begin{eqnarray} S_p(r)&=&(-1)^p\left(\langle\Delta^p\rangle \, r-\langle\Delta^{p-1}(s_++s_-)\rangle\,\frac{p}{2}r^2\right)+\textrm{h.o.t.} \label{rcscs} \end{eqnarray} In the UV case we now make use of the inequality ${\langle\Delta^p\rangle}/{\langle\Delta^{p-1}\rangle}\ge{\langle\Delta^{3}\rangle}/{\langle\Delta^{2}\rangle}$, which follows, for $p\ge 3$, from the log-convexity of the moment function $q\mapsto\ln\langle\Delta^q\rangle$ for a positive random variable $\Delta$. We can then finish the analysis of the depletion of subdominant contributions essentially as done above in the UV case and conclude that ESS extends the UV scaling by \textit{at least} a factor $2H_{p-1}/(2H_{p-1} -3)$. In the IR domain, the presence of the random quantity $s_++s_-$ prevents us from reaching similar conclusions, and it is thus not clear that there is a general result regarding improved ESS scaling in the IR regime. This can however be circumvented if we assume (i) that the Burgers turbulence is freely decaying (no forcing) and (ii) we limit ourselves to times large compared to the typical turnover time of the initial condition. The solution then degenerates into a set of ramps of slope exactly $1/t$, separated by shocks. Hence $s_++s_- \approx 2/t$, which is deterministic. Thus, using the same log-convexity inequality as above we infer that ESS extends the IR scaling by \textit{at least} a factor $p/(p-3)$. \section{Back to three-dimensional Navier--Stokes turbulence} \label{s:back-to-ns} Here, we have explained the success of ESS by a depletion of subdominant IR and UV contributions. How does this relate to various explanations given over the past one and a half decades for 3DNS turbulence? Let us mention a few.
Sain and Bhattacharjee \cite[see][]{BS99,SB99} resorted to phenomenology in proposing cross-over functions from the dissipation- to the inertial-range for the structure functions defined in Fourier space. They inferred that in the UV regime, the linear scaling in log-log plots of structure functions deteriorates at lower values of the wavenumber than in the corresponding ESS plots. \cite{FG01} --- again phenomenologically --- introduced a scaling variable to include crossovers between various subranges of scaling behaviour for the magnitude of longitudinal velocity differences and thereby showed that the scaling improves at the IR end when ESS is used. In the spirit of our formulation of a theory for ESS, the work of \cite{SLP96} comes closest. In that paper they correctly identified the mechanism of increased scaling range at the UV end: a reduced coefficient in the subdominant diffusive correction. Note that their work on ESS was in the context of intermittency and anomalous scaling for passive scalar dynamics. They used the model of \cite{K68,K94} and supplemented it by a closure relation suggested by \cite{K94}, which was later shown not to be fully consistent with the original model \cite[see e.g.,][]{FGV01}. Benzi (private communication) also made a similar observation using a passive scalar shell model. How much of our findings for the Burgers equation carries over to 3DNS? It is important to realize that the improved scaling can be both at the IR and the UV end of the scaling regime. In order to avoid mixing up the two types of improvements one should correctly \textit{calibrate} the choice of the deputy for the separation $r$, by using the four-fifths law for 3DNS: \begin{equation} \tilde S_3(r) \equiv -\frac{S_3}{(4/5)\varepsilon}.
\label{calibrate} \end{equation} If a given turbulent flow shows a gain in scaling when using ESS, either in the IR or the UV domain or both, we can then use \eqref{firstsub} to interpret the gain in terms of subdominant corrections. It is best to handle the IR and UV cases separately. Following the same procedure as in Section~\ref{s:asymptotic-ess-theory} it is easily shown that ESS gives just a modification of the coefficient of the first subdominant contribution provided the gap $g_p$ between dominant and first subdominant exponent is independent of the order $p$. The fact that ESS works so nicely for 3DNS suggests that this independence may actually hold. If so, it is immediately seen that the reduction in the subdominant coefficient is equal to the gain in scaling raised to the gap value. We begin to see here ESS as a way to obtain information on subdominant corrections. This is of interest for several reasons. For example, subdominant corrections can give rise to spurious multifractal scaling \cite[see e.g.,][]{AFLV92,DBPF05}. Furthermore, consideration of subdominant corrections is needed to explain the absence of logarithms in the third-order structure function \cite[\textit{cf.}][]{FAMY05}. Also, the multifractal description of turbulence is quite heuristic and arbitrary and would be much more strongly constrained if we had information on subdominant terms and on gap values. This may be the right place to discuss the issue of the best deputy of separation in the ESS procedure. Should one use $S_3(r)$ or the function $F_3(r)$ that is defined with the third moment of the \textit{absolute value} of the longitudinal velocity increment and which is easier to extract from experimental data because it involves only positive contributions? It is not clear to what extent the two procedures are equivalent. 
For the case of Burgers turbulence it is easy to show that $-S_3(r)$ and $F_3(r)$ have the same dominant and first-order subdominant terms: they differ only in subsubdominant contributions. For 3DNS, longitudinal velocity increments are somewhat more likely to be negative than positive, but $-S_3(r)$ and $F_3(r)$ have no reason to have exactly the same scaling behaviour, although the scaling exponents are found to be nearly equal \cite[][]{VS94}. There has also been some amount of discussion regarding the scaling of longitudinal and transverse structure functions \cite[see e.g.,][]{BBFLT09}. It may well be that they differ only by the relative strength of subdominant terms. We finally address the issue of appropriate strategies to systematically obtain subdominant corrections from experimental or simulation data. The most straightforward method is to determine the dominant-order contribution using ESS and then to subtract it from the data. The result can then be analysed either in the standard way or in the ESS fashion. However, it is known that, when subdominant terms are rather sizeable, it is better to determine them at the same time as the dominant ones. One instance is the simultaneous determination of dominant-order isotropic scaling and subdominant anisotropic corrections for weakly anisotropic turbulence \cite[see e.g.,][and references therein]{ADKLPS98,BP05}. The determination of both dominant and subdominant terms can be much improved when higher precision is available, such as is frequently the case in double-precision spectral simulations.
Indeed it was recently pointed out by \cite{V09} that, when trying to extract the asymptotic expansion of a function $f(x)$ as $x\to \infty$ from data sampled at a large number of discrete $x$-values, it is not advisable to first try to obtain the dominant order and only then to look for subdominant terms: without the knowledge of subdominant corrections, the parameters appearing in the dominant term will be very poorly conditioned. It is better to completely subvert the dominant-order-first strategy by introducing the method of \textit{asymptotic interpolation}, which relies on a series of transformations, recursively peeling off the dominant and subdominant terms in the asymptotic expansion without any need to know their detailed functional form. These transformations can, in principle, be carried out till either the rounding noise becomes significant or asymptoticity is lost. At the end of the process, the newly transformed data admit a simple interpolation to --- usually --- a constant. Then by undoing the peeling-off transformations, one can determine very accurately the asymptotic expansion of the data up to a certain number of subdominant terms, which depends on the precision available on the original data. This method has been applied to various nonlinear problems and shown to give a very accurate expression of the asymptotic expansion when the data have enough precision \cite[see e.g.,][]{PF07,BFPRT09}. \begin{acknowledgements} J.~Bec, R.~Benzi, L.~Biferale, S.~Kurien, R.~Pandit, K.R.~Sreenivasan and V.~Yakhot are thanked for fruitful discussions. SSR thanks DST and UGC (India) for support. The work was partially supported by ANR ``OTARIE'' BLAN07-2\_183172. Computations used the M\'esocentre de calcul of the Observatoire de la C\^ote d'Azur and SERC (IISc). \end{acknowledgements}
\section{Conclusion}\label{sec:conclusion} In this paper, we showed how to efficiently predict the velocity of other vehicles based only on a video from a standard monocular camera by introducing an intermediate representation which decouples perception from velocity estimation. We also showed how priors in real data can be exploited to generate synthetic data in the intermediate representation and that such synthetic training data can be used to build a system capable of processing real-world data without any loss in accuracy. The decomposition is advantageous not only in terms of the ability to easily generate training data with the desired parameters, but also to model different driving styles or various emergency situations. Last but not least, it also ensures each component is individually verifiable, which is essential to meet reliability standards for autonomous driving. \section{Experiments}\label{sec:experiments} \section{Learning from real or synthetic data}\label{sec:dataset} We begin by discussing a real benchmark dataset sufficient to train our model and then we discuss how we can generate analogous synthetic data effectively in the space of object bounding boxes. \subsection{Real data: TuSimple}\label{sec:real} While many driving datasets exist in the realm of autonomous driving, many focus on single-frame tasks such as object detection in 2D/3D, depth estimation, semantic segmentation and localisation. The only dataset with the necessary annotations for our task, and with comparable prior work, comes from TuSimple~\cite{CVPR2017Challenge}. The TuSimple dataset was introduced for the CVPR2017 Autonomous Driving velocity estimation challenge~\cite{CVPR2017Challenge}. It contains $1074$ ($269$) driving sequences with $1442$ ($375$) annotated vehicles in the training (respectively testing) set, split into three subsets based on the distance between the ego-vehicle and the observed car (see~\cref{tbl:datasetsplit}).
The dataset was recorded on a motorway by a standard camera (image resolution $1280\times720$) mounted on the roof of the ego-vehicle. Each driving sequence contains 40 frames, captured at 20fps, as seen in \cref{f:samples}. LiDAR and radar were simultaneously used to capture the position and velocity of nearby vehicles, and the recorded data were then used to give the ground truth velocity and 3D position of each vehicle in the last frame of the sequence. Additionally, a manually created bounding box (in the image space) is available in the last frame for each vehicle. Using the notation from~\cref{sec:velocityestimation}, the dataset therefore provides $1442$ training and $375$ testing triplets $(\mathbf{I}, b_T, V)$. As shown in \cref{fig:distributions}, the TuSimple dataset spans a wide range of distances from the camera, with a high bias towards the same lane, as is typical in driving imagery. Velocity in both $Z$ and $X$ is roughly uniformly distributed around $0$, which is expected in motorway situations. \begin{table} \centering \small \begin{tabular}{l|ccc|c} \toprule & Near & Medium & Far & Total \\ distance [m] & $d < 20$ & $20 < d < 45$ & $d > 45$ \\ \midrule Train & 166 & 943 & 333 & 1442\\ Test & 29 & 247 & 99 & 375\\ \bottomrule \end{tabular} \caption{The number of annotated vehicles in the TuSimple dataset~\cite{CVPR2017Challenge} and their split based on their distance from the ego-vehicle.}% \label{tbl:datasetsplit} \vspace{-2em} \end{table} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{images/P,V_dist.pdf} \vspace{-1em} \caption{Distribution of vehicle instances in the TuSimple dataset.}% \label{fig:distributions} \end{figure*} \begin{figure*}[b] \centering \includegraphics[width=0.45\textwidth]{dataset_examples/54.png} \includegraphics[width=0.45\textwidth]{dataset_examples/82.png} \caption{Here we show an example of our synthetic boxes, in these cases generated starting at the same location as a real object.
The fainter boxes depict the location of the object in subsequent frames for the velocity written above, which may differ from the actual vehicle's path in the real data.} \label{fig:data_example} \end{figure*} \subsection{Synthetic data}\label{sec:synthetic} By distilling the information contained in an image to bounding boxes $\mathbf{b}$ we now have data which is highly interpretable and easy to characterise statistically; the latter can be used to generate \emph{synthetic} training data, which ideally will have very similar statistical properties to the real data. We could also choose to simulate vehicles with speeds not normally seen in the real world to train a network capable of understanding such abnormal situations. We use the training subset of the TuSimple dataset (see~\cref{sec:dataset}) to infer these statistical properties. In~\cref{fig:bb_coords}-left we plot the bounding box horizontal coordinates $x$ vs the depth of the enclosed vehicle. There is some obvious structure in the data; in particular, several lanes to the left and to the right of the ego-vehicle are clearly visible. The other plots show the $y$ coordinate as well as the bounding box width and height. The latter are highly constrained by the physical sizes of vehicles. We found learning to be sensitive to the distribution of vehicle locations and less so to that of car velocities and sizes. We thus represent the location distribution empirically (\cref{fig:data_example}) and sample from TuSimple the first bounding box of each sequence to obtain its 3D ground point $(X,0,Z)$. This is then re-projected to the image as explained in~\cref{sec:elementary} to obtain the bounding box location $(x,y)$. The box height and width are obtained by indexing, with the depth $Z$, polynomial fits to the TuSimple height/width data (\cref{fig:bb_coords}).
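The generation of one synthetic box track, as described in this subsection, can be sketched as follows; the focal length, camera height, velocity distribution and box-size model below are illustrative placeholders for the empirical and polynomial fits extracted from TuSimple:

```python
import numpy as np

# Sketch of synthetic track generation in the space of bounding boxes.
# Focal length f_px, camera height H, frame rate, the Gaussian velocity
# parameters and the box-size model are illustrative placeholders.
rng = np.random.default_rng(1)
f_px, H, fps, T = 1000.0, 1.5, 20.0, 40

def synth_track(X0, Z0):
    # velocity sampled from a (placeholder) Gaussian fit, then integrated
    Vx, Vz = rng.normal(0.0, 1.0), rng.normal(0.0, 3.0)
    boxes = []
    for t in range(T):
        X, Z = X0 + Vx * t / fps, Z0 + Vz * t / fps
        u, v = f_px * X / Z, f_px * H / Z      # project ground point (X,0,Z)
        w, h = 1.8 * f_px / Z, 1.5 * f_px / Z  # size ~ vehicle width/height
        boxes.append((u - w / 2, v - h, w, h)) # box as (x, y, w, h)
    return np.array(boxes), np.array([Vx, Vz])

boxes, V = synth_track(X0=2.0, Z0=30.0)
print(boxes.shape)  # (40, 4)
```

The pair `(boxes, V)` plays the role of one supervision pair $(\mathbf{b}, V)$ for training.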
Finally, the track is simulated by sampling a velocity vector $V$ from a Gaussian fit of the TuSimple velocity data (\cref{fig:distributions}) and integrating the motion (\cref{fig:motion}). Note that only the bounding box positions are empirical, whereas the other parameters are sampled from very simple distributions. We show that this is sufficient to train models that achieve excellent performance on real data. This helps isolate which priors are important/sensitive (location) and which are not (size, velocity) for velocity estimation. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/motion.pdf} \caption{\textbf{Real vs synthetic vehicle sequences.} The motion of the bounding box in the video sequence: vehicle motion in real video as captured by the tracker (blue), and synthetic motion generated by our method (red). In the width and height plots, some tracking inaccuracy is visible where the blue lines increase towards the right.}% \label{fig:motion} \end{figure*} \section{Evaluation}\label{sec:evaluation} We evaluated our approach using two models. For both models, we used an MLP with 4 hidden layers, trained for 150 epochs, with dropout of 0.2 after the Concatenated ReLU~\cite{CReLU} activation function and a learning rate of $6\times10^{-4}$ adjusted using exponential decay with the decay rate set to $0.99$. In the first model, denoted as \textit{MLP-tracker}, we initially processed all video frames by the MedianFlow~\cite{MedianFlow} tracker, using the bounding box $b_T$ from the ground truth for tracker initialization. We then used the resulting sequences $\mathbf{b}=(b_1,\dots,b_T)$ as the training data for the MLP (see~\cref{sec:velocityestimation}). At test time, we followed the same procedure, i.e.\ pre-processing the input video sequence by the tracker and then feeding the intermediate representation $\mathbf{b}$ into the MLP to obtain the velocity estimate.
We follow the TuSimple competition protocol, and report velocity estimation accuracy $E_v$ calculated as the average over the three distance-based subsets (\cref{tbl:datasetsplit}): \begin{equation} E_v = \frac{E_{v}^{\mbox{\tiny{near}}} + E_{v}^{\mbox{\tiny{medium}}}+E_{v}^{\mbox{\tiny{far}}}}{3},\qquad E_v^S = \frac{1}{|S|}\sum_{i \in S} \|V_i - \hat{V}_i\|^2, \end{equation} where $V_i$ is the ground truth velocity from the LiDAR sensor and $\hat{V}_i$ is the method's velocity estimate. We also note that according to the dataset authors, the overall ground-truth accuracy is at around 0.71m/s; however, the accuracy will almost certainly depend on the distance of the observed car. Using the above model, we reach the overall velocity estimation error of $1.29$, which outperforms the state-of-the-art\footnote{Kampelm\"uhler~\emph{et al.} also report the errors of $1.86$ and $1.25$ for their method using only tracking information; however, this was not the competition entry and it is not obvious from the paper how the latter number was reached and what data were used for training.} and significantly more complex method of Kampelm\"uhler~\emph{et al.}\cite{kampelmuhler2018}. This experiment also shows that the intermediate representation $\mathbf{b}$ contains a sufficient amount of information to successfully infer vehicle velocity. In the second model, \textit{MLP-synthetic}, we instead used the synthetic training data as the intermediate representation $\mathbf{b}$. We generated $11536$ samples using the generation procedure described in \cref{sec:synthetic} and used them as the training samples for the MLP. At testing time, we again used the MedianFlow tracker to get the intermediate representation $\mathbf{b}$ from real video sequences and fed them into the MLP to obtain vehicle velocity estimates (see \cref{f:model} bottom).
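The metric can be sketched directly from the formula above (with toy values, one vehicle per subset):

```python
import numpy as np

# Sketch of the challenge metric: per-subset mean squared velocity error,
# averaged over the near/medium/far splits.
def subset_error(V_gt, V_pred):
    V_gt, V_pred = np.asarray(V_gt, float), np.asarray(V_pred, float)
    return float(np.mean(np.sum((V_gt - V_pred) ** 2, axis=1)))

def E_v(splits):
    """splits: one (V_gt, V_pred) pair per subset (near, medium, far)."""
    return sum(subset_error(g, p) for g, p in splits) / len(splits)

# Toy example with one vehicle per subset:
near   = ([[1.0, 0.0]], [[0.0, 0.0]])   # squared error 1.0
medium = ([[0.0, 2.0]], [[0.0, 0.0]])   # squared error 4.0
far    = ([[0.0, 0.0]], [[1.0, 1.0]])   # squared error 2.0
print(E_v([near, medium, far]))         # (1 + 4 + 2) / 3
```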
Because our synthetic boxes are generated without the noise that inherently comes from the tracker, the road surface and other sources, at test time we apply a Gaussian filter with $\sigma=5$ to each coordinate of the bounding box as pre-processing to remove noise. The resulting model has the velocity estimation error of $1.28$, with the biggest improvement in the Far range (see \cref{tbl:results}). Generating more synthetic data for training did not result in further improvement in accuracy, which we attribute to the relatively small size of the testing set and the relatively low variance of vehicle speed owing to the motorway driving style. \begin{table}[h] \centering \small \begin{tabular}{l|l|lll} \toprule & $E_v$ & $E_{v}^{\mbox{\tiny{near}}}$ & $E_{v}^{\mbox{\tiny{medium}}}$ & $E_{v}^{\mbox{\tiny{far}}}$ \\ \midrule Kampelm\"uhler~\emph{et al.}\cite{kampelmuhler2018} (1st) & 1.30 & 0.18 & \textbf{0.66} & 3.07 \\ Wrona (2nd) & 1.50 & 0.25 & 0.75 & 3.50 \\ Liu (3rd) & 2.90 & 0.55 & 2.21 & 5.94 \\ \midrule geometric reprojection & 8.50 & 0.48 & 1.50 & 23.60\\ Ours (MLP-tracker) & 1.29 & 0.18 & 0.70 & 2.99 \\ Ours (MLP-synthetic) & \textbf{1.28} & \textbf{0.17} & 0.72 & \textbf{2.96}\\ \midrule \textit{LiDAR} & \multicolumn{4}{c}{\textit{0.71}} \\ \bottomrule \end{tabular} \caption{Vehicle velocity estimation error $E_v$ on the TuSimple dataset, including the top 3 best-performing methods of the CVPR 2017 Vehicle Velocity Estimation challenge~\cite{CVPR2017Challenge}. The average accuracy of the LiDAR sensor used for ground-truth acquisition is added for reference. The runtime of Kampelm\"uhler~\emph{et al.}\cite{kampelmuhler2018} is $423ms$ owing to the varying and complex inputs required for a forward pass; our method can perform a forward pass in $10ms$ on the same hardware (runtimes for the other methods are not public).}\label{tbl:results} \end{table} \section{Introduction}\label{sec:intro} Autonomous driving systems rely on a wide array of sensors including LiDARs, radars and cameras.
LiDAR sensors are especially good at estimating the positions and velocities of vehicles and obstacles ($0.25$m/s at the distance of $15$ meters~\cite{kellner2013instantaneous}, $0.71$m/s at a wider range of distances~\cite{CVPR2017Challenge}). However, LiDAR sensors are also very expensive, not reliable in adverse weather conditions~\cite{kutila2016automotive} and easily confused by exhaust fumes~\cite{hasirlioglu2017effects}. Relying on a single source of information in a context such as autonomous driving is also inherently fragile. Hence, it is natural to develop alternative sensors, either as redundancies or to improve the accuracy and robustness of sensors like LiDARs. Cameras are a natural choice as they are very cost-effective and in principle sufficient for navigation, given that humans drive cars primarily using their sense of vision. Furthermore, there is a substantial amount of computer vision research that is directly applicable to autonomous driving. In this paper, we aim to predict the velocity of other cars based only on a video from a standard monocular camera. Because the vehicle on which the camera is mounted (the \emph{ego-vehicle}) is typically moving as well, the velocity estimate of other vehicles is relative to the velocity of the ego-vehicle. Our main contribution is to show that, for this problem, one can separate perception from velocity estimation by first mapping the visual data to a mid-level representation --- the space of vehicle bounding boxes --- with no loss in performance compared to much more complex estimation approaches. The velocity estimation problem is then modelled purely on bounding boxes, which change their position and size over time, tracking the motion of vehicles on the road in a simplified view of the data.
Using such an approach, our simple model outperforms the winning model~\cite{kampelmuhler2018} of the CVPR 2017 Vehicle Velocity Estimation Challenge~\cite{CVPR2017Challenge}, which is a much more complex model that combines tracking with two different deep networks for monocular depth and optical flow estimation. A major advantage of using such a simple intermediate representation is that it becomes much easier to \emph{simulate} training data for the velocity estimator model. Still, we show that doing so in an effective manner requires accurately capturing the statistics of real bounding boxes. Our second contribution is thus to show how such a synthetic dataset is created, by extracting the necessary data priors from a small set of real data and using them to generate a much larger training set in the mid-level feature space. We then show that training only using this synthetic dataset results in excellent performance on real test data. In the process, we also distill the data statistics that are crucial for velocity estimation from visual data, clarifying what information is important for this task and what is not. The ability to easily generate synthetic data is not only beneficial to improve velocity estimation accuracy at virtually no cost for existing scenarios, but can also be used to train the model for different driving scenarios, such as different countries, and to model different driving styles and various emergency situations (e.g.~unexpected braking) without the need for laborious real data collection or for expensive 3D photo-realistic animation in order to cater for such situations. Our approach can also be seen as encapsulating perception, which is often implemented by opaque and difficult-to-diagnose components such as deep convolutional neural networks, in a module which has a simple, interpretable, and testable interface.
Decomposing systems into modules that are individually verifiable and that can be connected in a predictable manner is essential for autonomous systems to meet reliability standards such as ISO 26262~\cite{ISO26262, palin2011iso}. Hence, while end-to-end trainable systems may be conceptually preferable, decompositions such as the one we propose may be essential in practice --- fortunately, as we show, this may come with no loss of performance. The rest of the paper is structured as follows. In \cref{sec:relatedwork}, prior work is discussed. In \cref{sec:method}, we introduce the method, and in \cref{sec:dataset} we discuss the data generation. The evaluation is presented in \cref{sec:evaluation} and the paper is concluded in \cref{sec:conclusion}. \section{Method}\label{sec:method} \input{fig-model} Our goal is to estimate the velocity of other vehicles imaged from a camera rigidly mounted on the ego-vehicle and looking forward. We can model this situation as follows. The input to the model is a sequence $\mathbf{I}=(I_1,\dots,I_T)$ of $T$ video frames extracted from the camera. We also assume to have a bounding box $b_T$ tightly enclosing the vehicle of interest at the end of the sequence $T$. The output of the model $\Phi$ is a 2D vector $V = \Phi(\mathbf{I},b_T)\in\mathbb{R}^2$ representing the velocity of the target vehicle at time $T$ projected on the ground plane relative to the ego-vehicle.\footnote{Estimating the vertical velocity $Y$ is essentially irrelevant for this application.} \subsection{Geometry and elementary velocity estimation}\label{sec:elementary} Next, we describe in some detail the geometry of the problem and provide a na\"ive solution based solely on projecting bounding boxes into the 3D world coordinate system. The physical constraints of the setup result in several simplifications compared to the general imaging scenario. Let $P = (X,Y,Z) \in\mathbb{R}^3$ be a 3D point expressed in the ego-vehicle reference frame.
We can assume that the camera is at a fixed height $H$ from the ground looking straight ahead. Hence, point $P$ projects to the image point $ p = (u,v), $ $ u = f\,\frac{X}{Z}, $ $ v = f\,\frac{Y+H}{Z} $ where $f$ is the focal length of the camera.\footnote{The image coordinates $(u,v)$ are standardized, with $(0,0)$ corresponding to the view direction, and are in practice related to pixel coordinates via a non-linear transformation that accounts for the camera intrinsic parameters, including effects such as radial distortion. We assume that the intrinsic parameters are known and that their effects have already been removed from the data.} Now assume that the bounding box $b=(x,y,w,h) \in\mathbb{R}^4$ is given by the image coordinates $(u,v) = (x+w/2, y+h)$ of the midpoint of the bottom edge of the box and by the box width and height $(w,h)$. To a first approximation, we can assume that $(u,v)$ is the image of a certain virtual 3D point $P=(X,0,Z)$ rigidly attached to the vehicle at ground level. Hence, since we know the height $H$ of the camera, we can readily infer the depth or distance $Z$ of the vehicle, and hence the 3D point, as \begin{equation}\label{e:naive} Z = fH\,\frac{1}{v}, \quad X = H\,\frac{u}{v}. \end{equation} If we can track the bounding box $b_t$ over time $t\in[1,T]$, then we can use~\cref{e:naive} to estimate the coordinates $P_t=(X_t,0,Z_t)$ of the 3D point relative to the ego-vehicle and obtain the velocity as the derivative $V = (\dot X_T, \dot Z_T)$. However, owing to the varying quality of road surfaces and inclines or declines along a road, this technique provides a very poor estimate of vehicle velocity, especially for vehicles which are further away (see \cref{tbl:results}).
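The back-projection of \cref{e:naive} and its extreme sensitivity to pixel-level tracking errors can be sketched as follows (the focal length in pixels and the camera height are illustrative assumptions, not values from the dataset):

```python
import numpy as np

# Sketch of the naive geometric estimator: back-project the bottom-mid point
# of each tracked box via Z = f*H/v, X = H*u/v and differentiate in time.
f_px, H, fps = 1000.0, 1.5, 20.0   # illustrative camera parameters

def ground_point(box):
    x, y, w, h = box
    u, v = x + w / 2.0, y + h            # bottom-mid point of the box
    return H * u / v, f_px * H / v       # (X, Z)

def naive_velocity(boxes):
    P = np.array([ground_point(b) for b in boxes])
    return (P[-1] - P[-2]) * fps         # finite difference at the last frame

def box_of(X, Z, W=1.8, Hv=1.5):         # exact box of a vehicle at (X,0,Z)
    u, v = f_px * X / Z, f_px * H / Z
    w, h = f_px * W / Z, f_px * Hv / Z
    return (u - w / 2, v - h, w, h)

# Noise-free boxes of a vehicle 30 m ahead receding at 4 m/s: recovered exactly.
boxes = [box_of(-2.0, 30.0 + 4.0 * t / fps) for t in range(5)]
Vx, Vz = naive_velocity(boxes)

# But a one-pixel error in the box bottom edge at 45 m already corresponds to
# a depth jump of over a meter, i.e. tens of m/s when differentiated at 20 fps:
v45 = f_px * H / 45.0
one_px_err = abs(45.0 - f_px * H / (v45 + 1.0)) * fps
print(Vz, one_px_err)
```

With perfect boxes the estimate is exact; the one-pixel perturbation illustrates why the approach degrades so badly in the Far range.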
Indeed, analyzing \cref{e:naive} and assuming a camera with standard image resolution, we come to the conclusion that for vehicles at a distance $d > 20$m the approach requires sub-pixel accuracy of the bounding-box estimate for a reasonable estimate of 3D position and hence velocity, which simply is not realistic. \input{fig-bb-distribution} \subsection{Deep learning for velocity estimation}% \label{sec:velocityestimation} Sophisticated models which combine several streams of information from image pixels (depth, optical flow) perform significantly better, so one may be tempted to ascribe the poor performance of the geometric approach to the fact that too much information is discarded by looking at bounding boxes only. We show that, somewhat surprisingly, \emph{this is not the case}. Instead, we show below that bounding boxes \emph{are} sufficient provided that the modelling of dynamics is less na\"\i{}ve. \paragraph{State-of-the-art baseline} Our reference model is the one of Kampelm\"{u}hler~\emph{et al.}~\cite{kampelmuhler2018}. They propose a complex network that combines three data streams, all derived from the video sequence $\mathbf{I}$ (see~\cref{f:model} top). One stream applies a pre-trained monocular depth-estimation network, called MonoDepth~\cite{monodepth17}, to estimate a 3D depth map from the video. Another stream applies FlowNet2~\cite{FlowNet}, a state-of-the-art optical flow estimation network, to instead estimate optical flow. The last stream uses an off-the-shelf tracker~\cite{MedianFlow} to track the bounding box $b_T$ backward through time, and thus obtain a simplified representation $(b_1,\dots,b_T)$ of the vehicle trajectory in the image. The outputs of the different streams are concatenated and passed through a multi-layer perceptron (MLP) to produce the final velocity estimate $V = \Phi(\mathbf{I},b_T)$.
In addition to pre-training MonoDepth, FlowNet2, and the tracker on external data-sources, the MLP fusing the information is trained on an ad-hoc benchmark dataset that contains vehicle bounding boxes for one frame with their ground-truth velocities and positions measured via a LiDAR (\cref{sec:dataset}). Furthermore, three individual models of different-sized MLPs are used to determine velocity within the different vehicle distance ranges used in the evaluation. These range from 3 layers of 40 hidden neurons to 4 layers of 70 hidden neurons, with increasing model size for vehicles further away~\cite{kampelmuhler2018}. The final velocity prediction is given by averaging the output of 5 model instances; therefore, in total 15 distance-specific models are used at testing time. \paragraph{Our Model} Similarly to \cite{kampelmuhler2018}, we use a fully connected neural network with 4 hidden layers of 70 neurons each, with CReLU\cite{CReLU} activation and Dropout\cite{dropout} between layers during training. This reduces the runtime of our method significantly, from $423ms$ to $10ms$, as we do not require estimation of optical flow and depth; moreover, only one model is trained, reducing the inference time further. \paragraph{Estimating velocity from bounding boxes} Next, we describe our architecture to estimate vehicle velocity (see \cref{f:model} bottom). We consider an input video sequence $\mathbf{I}=(I_1,\dots,I_T)$ of $T$ video frames capturing a vehicle at time $t\in[1,T]$. As shown above, in the reference model, the velocity estimate $V$ is then obtained as \begin{equation}\label{e:phi} V = \Phi(\mathbf{I}, b_T) \end{equation} where $\Phi$ is a neural network or a combination of several neural networks as in \cite{kampelmuhler2018}, taking the video sequence $\mathbf{I}$ as an input together with a bounding box $b_T$ in the last frame.
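A minimal sketch of such a fully connected network with CReLU activations follows (untrained, with random placeholder weights; the input dimension assumes $T=40$ frames of 4 box coordinates):

```python
import numpy as np

# Sketch of the velocity MLP: 4 hidden layers of 70 units with CReLU
# activations, which concatenate the positive parts of x and -x and thus
# double the layer width. Weights are untrained random placeholders.
def crelu(x):
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=-1)

rng = np.random.default_rng(0)

def mlp(x, hidden=70, n_layers=4, out_dim=2):
    for _ in range(n_layers):
        W = rng.normal(0.0, 0.1, (x.shape[-1], hidden))
        x = crelu(x @ W)                 # width becomes 2 * hidden
    W_out = rng.normal(0.0, 0.1, (x.shape[-1], out_dim))
    return x @ W_out                     # (Vx, Vz) estimate

track = rng.normal(size=(1, 160))        # e.g. 40 frames x 4 box coordinates
print(mlp(track).shape)  # (1, 2)
```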
In our paper, we decompose $\Phi$ into two mappings $\Psi$ and $\Xi$ \begin{equation}\label{e:psi} V = \Psi(\Xi(\mathbf{I},b_T)) \end{equation} by introducing an intermediate representation $\mathbf{b}=(b_1,\dots,b_T)$ representing the vehicle image location at time $t\in[1,T]$ $ \mathbf{b} = \Xi(\mathbf{I},b_T), V = \Psi(\mathbf{b}), $ where $\Xi$ is an off-the-shelf tracker component~\cite{MedianFlow} and $\Psi$ is the vehicle velocity estimator we train. Using the above decomposition, we only require $(\mathbf{b},V)$ as supervision pairs, and not the $(\mathbf{I}, V)$ pairs as in the original formulation~\cite{kampelmuhler2018}, which are extremely expensive to obtain as this implies capturing and annotating many driving video sequences. As before, bounding boxes $\mathbf{b}$ are represented as quadruples $(x,y,w,h)$. The trajectories $\mathbf{b}$ are passed to a filter $g(\mathbf{b})$ that applies temporal Gaussian smoothing to each coordinate independently. Finally, the output of the filter is flattened to a $4T$-dimensional vector and fed to a multi-layer perceptron $\bar \Psi : \mathbb{R}^{4T} \rightarrow \mathbb{R}^2$. Hence, the overall model at inference time can be written as $ V = \Psi(\mathbf{b}) = \bar \Psi(\operatorname{vec}(g(\Xi(\mathbf{I},b_T)))). $ At training time, assuming the tracker $\Xi$ is fixed, we only need to train $\Psi$, using the pairs $(\mathbf{b},V)$ by minimizing the loss $\|V - \Psi(\mathbf{b})\|^2$. Next we show how the pairs can be obtained. \section{Related Work}\label{sec:relatedwork} \input{fig-samples.tex} \paragraph{Depth and ego-motion estimation} In recent years, numerous approaches to monocular depth and ego-motion estimation have been explored thanks to the expressive power of convolutional neural networks. Supervised approaches~\cite{eigen2014depth, ummenhofer2017demon, xu2017multi} depend on depth annotations being available for each pixel of the training set, which results in costly data collection.
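The preprocessing $g(\mathbf{b})$ and the flattening to a $4T$-dimensional vector can be sketched as follows (the value of $\sigma$, the kernel truncation and the edge padding are implementation assumptions):

```python
import numpy as np

# Sketch of g(b): temporal Gaussian smoothing of each box coordinate
# independently, then flattening to a 4T-dimensional MLP input.
def gaussian_kernel(sigma):
    r = int(3 * sigma)                       # truncate at 3 sigma (assumption)
    t = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def smooth_and_flatten(boxes, sigma=5.0):
    boxes = np.asarray(boxes, float)         # (T, 4): x, y, w, h per frame
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.empty_like(boxes)
    for c in range(boxes.shape[1]):          # each coordinate independently
        padded = np.pad(boxes[:, c], pad, mode="edge")
        out[:, c] = np.convolve(padded, k, mode="valid")
    return out.reshape(-1)                   # 4T-dimensional vector

T = 40
track = np.tile([100.0, 200.0, 50.0, 40.0], (T, 1))
track += np.random.default_rng(0).normal(0.0, 2.0, track.shape)  # tracker noise
vec = smooth_and_flatten(track)
print(vec.shape)  # (160,)
```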
Unsupervised methods~\cite{zhou2017unsupervised, Klodt18}, on the other hand, depend heavily on camera intrinsics and rely on camera motion between successive frames to produce coarse relative depths, which are difficult to use for predicting exact distances. \paragraph{Velocity estimation} Classical vision techniques for motion detection from a moving camera, such as Yamaguchi \emph{et al.} \cite{yamaguchi2006}, first match points in successive frames and then filter them based on their location and compatibility with the epipolar geometry. A similar approach is used in Fanani~\emph{et al.}\cite{IMOEpipolarCNN}, where candidate objects are first filtered by a CNN which detects vehicles, and then tests based on the epipolar geometry are used to determine whether these vehicles are moving or not. The above approaches aim only to differentiate between static and moving vehicles, with no indication of their velocity. Kampelm\"uhler \emph{et al.}\cite{kampelmuhler2018}, the winner of the CVPR 2017 Vehicle Velocity Estimation Challenge~\cite{CVPR2017Challenge}, extends these approaches to predict the relative velocity of the vehicles in view of the ego-vehicle. This is achieved by passing the sequence of images through a depth~\cite{monodepth17} and flow~\cite{FlowNet} estimation network, as well as applying a classical tracking system~\cite{MedianFlow}. The features extracted from these three sub-components are then concatenated and fed to a small neural network which predicts the velocity (see \cref{f:model} top).
The method is further improved in the TLD tracker~\cite{kalal2012tracking} by disabling online learning when the object is occluded and by allowing the algorithm to re-detect the object once it appears again. More recently, with the emergence of deep learning, specialized networks have been trained to explicitly track the location of objects in video sequences~\cite{held2016learning}. Similarly, Siamese CNNs~\cite{bertinetto2016fully,valmadre2017end} have been exploited to build a powerful embedding to discriminate whether an image patch contains the same object or not, therefore tracking objects by appearance similarity. For a systematic evaluation of tracking algorithms, we refer the reader to the VOT challenge results~\cite{kristan2017visual}. \paragraph{Synthetic training data} Synthetic training data have been successfully applied in various domains of computer vision, ranging from scene text detection~\cite{gupta2016synthetic} to optical flow estimation~\cite{FlowNet}. In the autonomous driving domain, the most widely used synthetic dataset is Virtual~KITTI~\cite{Gaidon:Virtual:CVPR2016}, which contains 50 photo-realistic videos generated by the Unity game engine. Thanks to the synthetic source of the data, the dataset comes with pixel-level annotations for segmentation, optical flow and depth, which would be very expensive, if not impossible, to achieve with real data. More recently, the SYNTHIA~\cite{Ros_2016_CVPR} and Synscapes~\cite{wrenninge2018synscapes} artificial driving datasets have also been introduced. The main difference to our work is that the above datasets are based on a photo-realistic representation of the world, whereas our mid-level representation consists of mere bounding boxes, which are significantly easier to generate.
\section{Introduction} \label{sec:intro} Surgical Data Science (SDS) has recently emerged as a new scientific field which aims to improve the quality of interventional healthcare~\cite{maier2017surgical}. SDS involves the observation of all elements occurring within and around the treatment process in order to provide the right assistance to the right person at the right time. In laparoscopy, some of the major opportunities that SDS offers to improve surgical outcomes are surgical decision support~\cite{marz2015toward} and context awareness~\cite{katic2016bridging}. Here, technical challenges include the detection and localization of anatomical structures and surgical instrumentation, intra-operative registration, and workflow modeling and recognition. To date, however, clinical translation of the developed methodology continues to be hampered by the poor robustness of the existing methods. In fact, a grand international initiative on SDS \cite{maier2017surgical} concluded that the robustness and reliability of SDS methods are of crucial importance. With the same perspective, several researchers in the case-based reasoning community (e.g.~\cite{cheetham2004measures,kolodner2014case,kendall2017uncertainties}) have pointed out the benefits of estimating a method's confidence level in assigning a result. The aim of this paper is to address this issue in the specific context of organ classification and image tagging in endoscopic video images.
\input{tables_images/wf.tex} Guided by the hypotheses that ({\bf{H1}}) automatic confidence estimation can significantly increase the accuracy and robustness of automatic image labeling methods, and that ({\bf{H2}}) multispectral imaging (MI) data are more suitable for \textit{in vivo} anatomical structure labeling than RGB data, the contributions of this paper are summarized as follows: \begin{enumerate} \item {\underline{Uncertainty-aware organ classification}} (Sec.~\ref{sec:uncertainty-aware}): Development of a new method for superpixel ($Spx$)-based anatomical structure classification, which features an intrinsic confidence measure for self-performance estimation and which can be generalized to MI data; \item {\underline{Automatic image tagging}} (Sec.~\ref{sec:image-tagging}): Development of an approach to automatic image tagging, which relies on the classification method and corresponding confidence estimation to label endoscopic RGB/multispectral images with the organs present in that image; \item {\underline{\textit{In vivo} validation}} (Sec.~\ref{sec:experiments}): A comprehensive \textit{in vivo} study is conducted using seven pigs to experimentally investigate hypotheses {\bf{H1}} and {\bf{H2}}. \end{enumerate} It is worth noting that, when we mention image tagging, we refer to the action of identifying organs present in an image. Instead, when mentioning organ classification, we refer to the classification of the organ present in an $Spx$. To the best of our knowledge, we are the first to use MI data for \textit{in vivo} abdominal tissue classification. Furthermore, this is the first study to address the topic of classification uncertainty estimation. We will make our validation dataset fully available online. 
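As a purely hypothetical illustration of the idea behind contributions 1 and 2 (not the actual method of this paper), confidence-gated tagging from per-superpixel class probabilities could look as follows; the margin-based confidence score, the thresholds and the organ names are all assumptions made for the sketch:

```python
import numpy as np

# Hypothetical sketch: score each superpixel's classification confidence by
# the margin between its two most probable organ classes, discard uncertain
# superpixels, and tag the image with the organs that remain.
ORGANS = ["liver", "spleen", "bowel"]  # illustrative class names

def tag_image(probs, conf_thresh=0.3, min_fraction=0.05):
    probs = np.asarray(probs, float)             # (n_superpixels, n_classes)
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]             # confidence per superpixel
    confident = margin >= conf_thresh
    labels = probs.argmax(axis=1)
    tags = []
    for c, organ in enumerate(ORGANS):
        frac = np.mean(confident & (labels == c))
        if frac >= min_fraction:                 # enough confident evidence
            tags.append(organ)
    return tags

probs = [[0.9, 0.05, 0.05],   # confidently liver
         [0.8, 0.1, 0.1],     # confidently liver
         [0.4, 0.35, 0.25]]   # uncertain -> ignored
print(tag_image(probs))       # ['liver']
```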
\subsection{Related work} First attempts at image-guided classification of tissues in RGB endoscopic images primarily used parameter-sensitive morphological operations and intensity-based thresholding techniques, which are not compatible with the high levels of inter-patient multi-organ variability (e.g.~\cite{lee2007automatic, mewes2011automatic}). The method for multiple-organ segmentation in laparoscopy reported in~\cite{nosrati2014efficient} relied on non-rigid registration and deformation of pre-operative tissue models on laparoscopic images using color cues. This deformation was achieved using statistical deformable models, which may not always represent the patient-specific tissue deformation, thus resulting in a lack of robustness in terms of inter-patient variability. Recently, machine learning based classification algorithms for tissue classification have been proposed to attenuate this issue. The method described in~\cite{chhatkuli2014live} exploited a machine learning approach to segment the uterus. Gabor filtering and intensity-based features were exploited to segment the uterus from background tissues with support vector machines (SVM) and morphology operators. However, this approach is limited to single organ segmentation and the performance is influenced by the position of the uterus. Similarly, the method presented in~\cite{prokopetc2015automatic} was specifically designed for segmentation of fallopian tubes, as it exploits tube-specific geometrical features, such as orientation and width, and cannot be transferred to other anatomical targets. In parallel to the development of new computer-assisted strategies to tissue classification, the biomedical imaging field is also evolving thanks to new technologies such as MI~\cite{li2013review}. MI is an optical technique that enables us to capture both spatial and spectral information on structures. 
MI provides images that generally have dozens of channels, each corresponding to the reflection of light within a certain wavelength band. Multispectral bands are usually optimized to encode the informative content that is relevant for a specific application. Thus, MI can potentially reveal tissue-specific optical characteristics better than standard RGB imaging systems \cite{li2013review}. One of the first \textit{in vivo} applications of MI was proposed by Afromowitz et al. \cite{afromowitz1988multispectral}, who developed a MI system to evaluate the depth of burns on the skin, showing that MI provides more accurate results than standard RGB imaging for this application. For abdominal tissue classification, Akbari et al.~\cite{akbari2009hyperspectral} and Triana et al.~\cite{triana2014multispectral} exploited pixel-based reflectance features in open surgery and \textit{ex vivo} tissue classification. The work that is most similar to the present study was recently presented by Zhang et al.~\cite{zhang2016tissue}. It pointed out the advantages of combining both reflectance and textural features. However, their validation study focused on patch-based classification and was limited to \textit{ex vivo} experiments in a controlled environment, including only 9 discrete endoscope poses to view the tissues, with only single organs in the image and without tissue motion and deformation. Furthermore, the challenges of confidence estimation were not addressed. As for automatic laparoscopic image tagging, there is no previous work in the literature that has specifically addressed this challenging topic. However, it has been pointed out that there is a pressing need to develop methods for tagging images with semantic descriptors, e.g. for decision support or context awareness \cite{gur2017towards,bodenstedt2016superpixel}. For example, context-aware augmented reality (AR) in surgery is becoming a topic of interest.
By knowing the surgical phase, it is possible to adapt the AR to the surgeon's needs. Contributions in the field include~\cite{katic2013context,katic2016bridging}. The AR systems in \cite{katic2013context,katic2016bridging} provide context awareness by identifying surgical phases based on (i) surgical activity, (ii) instruments and (iii) anatomical structures in the image. Such inputs are commonly assumed to be available~\cite{neumuth2009validation}; however, a strategy for retrieving the anatomical structures present in the image was not proposed. A possible reason for this gap in the literature can be seen in the challenging nature of tagging images recorded during \textit{in vivo} laparoscopy. Tissues may look very different across images and may be only partially visible. The high level of endoscopic image noise, the wide range of illumination and the variation of the endoscope pose with respect to the recorded tissues further increase the complexity of the problem. As a result, standard RGB systems may not be powerful enough to achieve the task, even when exploiting advanced machine learning approaches to process the images. With {\bf{H1}} and {\bf{H2}}, we aim at investigating whether the use of MI and the introduction of a measure of classification confidence can cope with this complexity. \section{Methods} \label{sec:met} Figure~\ref{fig:wf} shows an overview of the workflow of the proposed methods for uncertainty-aware organ classification (Sec.~\ref{sec:uncertainty-aware}) and automatic image tagging (Sec.~\ref{sec:image-tagging}). {Table \ref{tab:acron} lists the symbols used in Sec.~\ref{sec:met}.} \begin{table}[tbp] \caption{{Table of symbols used in Sec.
\ref{sec:met}.}} \label{tab:acron} \begin{adjustbox}{width=.5\textwidth} \begin{tabular}[tbp]{c|l} Symbol & Description \\ \hline $N_C$ & Number of image channels\\ $\lambda_i$ & Camera light-filter central wavelength for channel $i$\\ $I(\lambda_i)$ & Raw image for channel $i$\\ $Sr(\lambda_i)$ & Spectral reflectance image for channel $i$\\ $D(\lambda_i)$ & Reference dark image for channel $i$\\ $W(\lambda_i)$ & Reference white image for channel $i$\\ $N$ & Number of superpixels in the image\\ $Spx_n$ & $n^{th}$ superpixel, $n \in [1,N]$ \\ $LBP_{riu2}^{R,P}$ & Uniform rotation-invariant local binary pattern\\ $R$ & Radius used to compute $LBP_{riu2}^{R,P}$\\ $P$ & Number of points used to compute $LBP_{riu2}^{R,P}$\\ $\{\mathbf{p}_p\}_{p \in [0,P-1]}$ & Points used to compute $LBP_{riu2}^{R,P}$\\ $g_{\mathbf{c}}$ & Intensity value of pixel ${\mathbf{c}}$\\ $H_{LBP}$ & Histogram of $LBP_{riu2}^{R,P}$\\ $AS_{Spx_{n}}$ & Average spectrum for $Spx_n$\\ $M$ & Number of pixels in $Spx_n$\\ $l_{H_{LBP}}$ & Length of $H_{LBP}$ for $Spx_n$ and channel $i$\\ $l_{AS}$ & Length of $AS$ for $Spx_n$ and channel $i$\\ $f$ & Support vector machine decision function \\ ${\mathbf{x_k}}$ & $k^{th}$ input feature vector \\ $y_k$ & $k^{th}$ output label \\ $\gamma$, $C$ & Support vector machine hyperparameters \\ $N_t$ & Number of training samples\\ $J$ & Total number of considered abdominal tissues\\ $Pr(Spx_n = j)$ & Probability for the $n^{th}$ $Spx$ to belong to the $j^{th}$ organ\\ $E(Spx_n)$ & Normalized Shannon entropy computed for $Spx_n$ \\ $PPCI(Spx_n)$ & Posterior probability certainty index computed for $Spx_n$ \\ $GC(Spx_n)$ & Gini coefficient computed for $Spx_n$ \\ $L$ & Lorentz curve \\ \end{tabular} \\ \end{adjustbox} \end{table} \begin{figure*} \centering \includegraphics[width = 1\textwidth]{images/lbp_spx.pdf} \caption{\label{fig:spx} A feature vector is extracted from each superpixel $Spx_n$, $n \in [1,N]$, where $N$ is the number of superpixels in the image.
The feature vector for $Spx_n$ is obtained by concatenating the histogram ($H_{LBP}$) of the uniform rotation--invariant local binary pattern ($LBP_{riu2}^{R,P}$) and the average spectrum ($AS$), for each image channel $i \in [1, N_C]$, where $N_C$ is the number of channels in the image.} \end{figure*} \subsection{Uncertainty-aware tissue classification} \label{sec:uncertainty-aware} The steps comprising the proposed approach to organ classification are presented in the following subsections. \subsubsection{Pre-processing} To remove the influence of the dark current and to obtain the spectral reflectance image $Sr(\lambda_i)$ for each MI channel $i \in [1, N_C]$, where $N_C$ is the number of MI bands, the raw image $I(\lambda_i)$ was pre-processed by subtracting the reference dark image $D(\lambda_i)$ of the corresponding channel from the multispectral image. $\lambda_i$ refers to the band central wavelength of the $i^{th}$ channel. This result was then divided by the difference between the reference white image $W(\lambda_i)$ of the corresponding channel and $D(\lambda_i)$, as suggested in~\cite{mansouri2005development}: \begin{equation} \label{eq:norm} Sr(\lambda_i) = \frac{I(\lambda_i)-D(\lambda_i)}{W(\lambda_i)-D(\lambda_i)} \end{equation} Note that $W(\lambda_i)$ and $D(\lambda_i)$ had to be acquired only once for a given camera setup and wavelength. These images were obtained by placing a white reference board in the field of view and by closing the camera shutter, respectively. Each reflectance image was additionally processed with anisotropic diffusion filtering to remove noise while preserving sharp edges~\cite{kroon2010optimized}. The specular reflections were segmented by converting the RGB image into hue, saturation, value (HSV) color space and thresholding the V channel. They were then masked out from all channels \cite{moccia2016automatic}. \subsubsection{Feature extraction} In the method proposed in this study, we extracted features from $Spx$.
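For illustration, the pre-processing just described can be sketched in numpy. This is a minimal sketch: the anisotropic diffusion step is omitted, and the function names, the `eps` guard and the value-channel threshold are illustrative choices rather than values taken from the paper.

```python
import numpy as np

def spectral_reflectance(I, D, W, eps=1e-8):
    """Eq. (1): per-channel reflectance Sr = (I - D) / (W - D).

    I, D, W: arrays of shape (H, W, N_C) holding the raw image stack
    and the dark/white reference stacks, one channel per band.
    """
    I, D, W = (np.asarray(a, dtype=np.float64) for a in (I, D, W))
    return (I - D) / np.maximum(W - D, eps)  # eps guards against division by zero

def specular_mask(rgb, v_thresh=0.9):
    """Specular-highlight mask: threshold the HSV value channel, which
    for RGB data in [0, 1] is simply the per-pixel channel maximum."""
    v = np.asarray(rgb, dtype=np.float64).max(axis=-1)
    return v > v_thresh  # True where the pixel is masked out
```

The mask is then applied to all channels of the stack before feature extraction.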
$Spx$ were selected because, compared to regular patches, they are built to adhere better to image boundaries~\cite{li2015superpixel}. This characteristic is particularly useful considering the classification of multiple organs within one single image. To obtain the $Spx$ segmentation, we applied linear spectral clustering (LSC)~\cite{li2015superpixel} to the RGB image and then used the obtained $Spx$ segmentation for all multispectral channels. Inspired by the recently published \textit{ex vivo} study by Zhang et al.~\cite{zhang2016tissue}, we extracted both textural and spectral reflectance features from each multispectral channel. Indeed, as stated in Sec.~\ref{sec:intro}, the authors demonstrated that incorporating textural information improved the classification performance with respect to single pixel-based features in their controlled experimental setup. As laparoscopic images are captured from various viewpoints under various illumination conditions, the textural features should be robust to the pose of the endoscope as well as to the lighting conditions. Furthermore, their computational cost should be negligible to enable real-time computation with a view to future clinical applications. The histogram ($H_{LBP}$) of the uniform rotation--invariant local binary pattern ($LBP_{riu2}^{R,P}$), which fully meets these requirements, was used here to describe the tissue texture of an $Spx$. The $LBP^{R,P}_{riu2}$ formulation requires defining, for a pixel $\mathbf{c} = (c_x, c_y)$, a spatial circular neighborhood of radius $R$ with $P$ equally-spaced neighbor points ($\{\mathbf{p}_p\}_{p \in [0,P-1]}$): \begin{equation} LBP^{R,P}_{riu2}(\mathbf{c}) = \begin{cases} \sum_{p=0}^{P-1}s(g_{{\mathbf{p}}_p} - g_{\mathbf{c}}), & \mbox{if } U(LBP^{R,P}) \leq 2 \\ P+1, & \mbox{otherwise} \end{cases} \end{equation} where $g_{\mathbf{c}}$ and $g_{{\mathbf{p}}_p}$ denote the gray values of the pixel $\mathbf{c}$ and of its $p^{th}$ neighbor $\mathbf{p}_p$, respectively.
$s(g_{\mathbf{p}_p} - g_{\mathbf{c}})$ is defined as: \begin{equation} s(g_{\mathbf{p}_p} - g_{\mathbf{c}}) = \Bigg\{ \begin{array}{rl} 1, & \text{$g_{\mathbf{p}_p} \geq g_{\mathbf{c}}$}\\ 0, & \text{$g_{\mathbf{p}_p} < g_{\mathbf{c}}$} \end{array} \end{equation} and $U(LBP^{R,P})$ is defined as: \begin{equation} \begin{split} U(LBP^{R,P}) = |s(g_{\mathbf{p}_{P-1}}-g_{\mathbf{c}}) -s(g_{\mathbf{p}_0}-g_{\mathbf{c}})| + \\ \sum_{p=1}^{P-1}|s(g_{\mathbf{p}_{p}}-g_{\mathbf{c}}) -s(g_{\mathbf{p}_{p-1}}-g_{\mathbf{c}})| \end{split} \end{equation} The $H_{LBP}$, which counts the occurrences of $LBP^{R,P}_{riu2}$, was normalized to unit length to account for the different pixel numbers in an $Spx$. Spectral reflectance information was encoded in the average spectrum $(AS)$, which is the average spectral reflectance value in an $Spx$. The $AS$ for the $i^{th}$ channel and the $n^{th}$ $Spx$ ($Spx_n$), with $n \in [1,N]$ and $N$ the total number of $Spx$, is defined as: \begin{equation} AS_{Spx_n}(\lambda_i) = \frac{1}{M} \sum_{\mathbf{p} \in Spx_n} Sr_p(\lambda_i) \end{equation} where {$M$} is the number of pixels in $Spx_n$ and $Sr_p(\lambda_i)$ is the reflectance value of the $p^{th}$ pixel of $Spx_n$ in the $i^{th}$ channel. The L2-norm was applied to the $AS$ in order to accommodate lighting differences. $AS$ was exploited instead of the simple spectral reflectance at one pixel to improve the feature robustness against noise, although this is detrimental to spatial resolution. The steps for obtaining the feature vector are shown in Fig.~\ref{fig:spx}. \subsubsection{Superpixel-based classification} To classify the $Spx$-based features, we used an SVM with the radial basis function kernel.
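As an illustration of the feature computation just described, the following numpy sketch covers the single configuration $(R, P) = (1, 8)$, for which the circular neighborhood coincides with the 8 integer-grid neighbors; the paper additionally uses $(2, 16)$ and $(3, 24)$, which require interpolated circular sampling and are omitted here. A rectangular patch stands in for an $Spx$, and all function names are illustrative.

```python
import numpy as np

def lbp_riu2_hist(img, P=8):
    """H_LBP for R=1, P=8: histogram of uniform rotation-invariant LBP
    codes, normalized to unit length."""
    c = img[1:-1, 1:-1]
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    h, w = img.shape
    # s_p = 1 where the p-th neighbor is >= the center pixel
    s = np.stack([(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= c)
                  for dy, dx in offs]).astype(int)
    U = np.abs(s - np.roll(s, 1, axis=0)).sum(axis=0)  # circular 0/1 transitions
    code = np.where(U <= 2, s.sum(axis=0), P + 1)      # riu2 code in [0, P+1]
    hist = np.bincount(code.ravel(), minlength=P + 2).astype(float)
    return hist / hist.sum()

def spx_feature(channels):
    """Concatenate the per-channel H_LBP with the L2-normalized average
    spectrum (AS) over all channels."""
    h = np.concatenate([lbp_riu2_hist(ch) for ch in channels])
    AS = np.array([ch.mean() for ch in channels])
    AS = AS / (np.linalg.norm(AS) + 1e-12)             # L2 normalization
    return np.concatenate([h, AS])
```

For $N_C$ channels this yields $(P + 2 + 1) \times N_C$ values, matching the structure of the feature vector in Fig.~\ref{fig:spx} for a single $(R, P)$ pair.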
For a binary classification problem, given a training set of $N_t$ data $\{y_k, {\mathbf{x_k}}\}_{k=1}^{N_t}$, where ${\mathbf{x_k}}$ is the $k^{th}$ input feature vector and $y_k$ is the $k^{th}$ output label, the SVM decision function ($f$) takes the form of: \begin{equation} f({\mathbf{x}}) = sign\Big[\sum_{k=1}^{N_t} a_k^* y_k \Psi({\mathbf{x}}, {\mathbf{x_k}}) + b \Big] \end{equation} where: \begin{equation} \Psi({\mathbf{x}}, {\mathbf{x_k}}) = exp\{-\gamma||{\mathbf{x}}-{\mathbf{x_k}}||_2^2\}, \qquad \gamma > 0 \end{equation} $b$ is a real constant and the coefficients $a_k^*$ are obtained by maximizing the dual objective: \begin{equation} \max_{a} \Big \{ -\frac{1}{2} \sum_{k,l=1}^{N_t} y_k y_l \Psi({\mathbf{x_k}}, {\mathbf{x_l}}) a_k a_l + \sum_{k=1}^{N_t} a_k \Big \} \end{equation} subject to: \begin{equation} \sum_{k=1}^{N_t} a_k y_k = 0, \qquad 0 \leq a_k \leq C, \qquad k=1, ..., {N_t} \end{equation} In this paper, $\gamma$ and $C$ were computed with grid search, as explained in Sec. \ref{sec:experiments}. \begin{figure}[tbp] \centering \includegraphics[width = .25\textwidth]{images/gc.pdf} \caption{The Gini coefficient ($GC$) is computed as twice the area (green area) between the line of equality and the Lorentz curve. The Lorentz curve represents the cumulative classification probability among the outcome classification states rank-ordered according to the increasing values of their individual probabilities. A uniform discrete probability distribution has $GC=0$, as the Lorentz curve overlays the line of equality, while for a state with probability 100\% and the others at 0\%, $GC=1$.} \label{fig:gc} \end{figure} Since our classification task is a multiclass problem, we implemented the SVM with the \textit{one-against-one} scheme. Specifically, six organ classes were involved in the SVM training process, as described in Sec. \ref{sec:experiments}. Prior to classification, we standardized the feature matrix within each feature dimension.
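In terms of the scikit-learn implementation used in this work (Sec.~\ref{sec:experiments}), the classifier just described can be sketched as follows. Setting `probability=True` triggers Platt scaling combined with pairwise coupling for the multiclass probabilities, and the hyperparameter values shown are the grid-search results reported later; the variable names are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

clf = make_pipeline(
    StandardScaler(),                       # standardize each feature dimension
    SVC(kernel="rbf", C=1e4, gamma=1e-5,
        decision_function_shape="ovo",      # one-against-one multiclass scheme
        probability=True,                   # Platt scaling + pairwise coupling
        random_state=0),
)
# After clf.fit(X_train, y_train), clf.predict_proba(X) returns one row
# per Spx with the class probabilities Pr(Spx_n = j), summing to 1.
```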
As a prerequisite for our confidence estimation, we retrieved the probability {$Pr(Spx_n=j)$} for the $n^{th}$ $Spx$ to belong to the $j^{th}$ organ ($j\in [1,J]$), where $J$ is the number of considered organs. In particular, {$Pr(Spx_n=j)$} was obtained, according to the pairwise comparison method proposed in \cite{wu2004probability} (which is an extension of \cite{platt1999probabilistic} for the binary classification case), by solving: \begin{equation} Pr(Spx_n = j) = \sum_{i=1,i \neq j}^J \frac{Pr(Spx_n = j) + Pr(Spx_n = i)}{J-1}r_{ji}, \forall j \end{equation} subject to: \begin{equation} \sum_{j=1}^J Pr(Spx_n = j) = 1, \quad Pr(Spx_n = j) \geq 0, \quad \forall j \end{equation} where $r_{ji}$ is the estimate of $Pr(Spx_n = j\,|\, Spx_n \in \{i,j\})$, with $r_{j,i} + r_{i,j} = 1, \forall j \neq i$. The estimator $r_{j,i}$ was obtained according to \cite{platt1999probabilistic}, mapping the SVM output to probabilities by training the parameters of a sigmoid function. \input{tables_images/camera_spec.tex} \subsubsection{Confidence estimation} To estimate the SVM classification performance, we evaluated two intrinsic measures of confidence: (i) a measure based on the normalized Shannon entropy~($E$), called posterior probability certainty index ($PPCI$), and (ii) the Gini coefficient ($GC$)~\cite{marcot2012metrics}. For the $n^{th}$ $Spx$, $PPCI(Spx_n)$ is defined as: \begin{equation} PPCI(Spx_n) = 1 - E(Spx_n) \end{equation} where $E$ is: { \begin{equation} E(Spx_n) = -\frac{\sum_{j=1}^J Pr(Spx_n=j) log (Pr(Spx_n=j))}{log(J)} \end{equation} with the convention that $Pr(Spx_n=j) \, log (Pr(Spx_n=j)) = 0$ whenever $Pr(Spx_n=j)=0$. } For the $n^{th}$ $Spx$, $GC(Spx_n)$ is defined as: \begin{equation} \label{eq:gc} GC(Spx_n) = 1 - 2 \int_0^1 \! L(x) \, \mathrm{d}x.
\end{equation} where $L$ is the Lorentz curve, which is the cumulative probability among the $J$ outcome states rank-ordered according to the increasing values of their individual probabilities {($Pr(Spx_n = 1), ..., Pr(Spx_n = J)$)}. As can be seen from Fig.~\ref{fig:gc}, in the case of a uniform discrete probability distribution (complete uncertainty), $L$ corresponds to the line of equality. Thus, the integral in Eq. \ref{eq:gc} (red area in Fig.~\ref{fig:gc}) has value 0.5 and $GC = 0$. Conversely, for the case of a single state at 100\% with the others at 0\% (complete certainty), the integral value is 0 and $GC=1$. The $GC$ computation can also be seen as twice the area (green area in Fig.~\ref{fig:gc}) between the line of equality and the Lorentz curve. \begin{figure*} \centering \includegraphics[width = .6\textwidth]{images/challenges.png} \caption{Challenges of the evaluation dataset. Four samples of images showing the gallbladder (first row) and spleen (second row) are reported. Images were recorded with varying endoscope pose and illumination level. Specular reflections are present in the images due to the smooth and wet organ surfaces. Multiple organs can be present in a single image. All images refer to the same multispectral channel.} \label{fig:chall} \end{figure*} Although both metrics are suitable for evaluating the dispersion of the classification probability, $GC$ is faster to compute, as it does not require the logarithm computation. Moreover, $GC$ is more sensitive than $PPCI$ at higher values~\cite{marcot2012metrics}. \subsection{Automatic image tagging} \label{sec:image-tagging} Automatic image tagging uses the SVM $Spx$-based classification and the corresponding confidence estimation. Specifically, test images were tagged considering $Spx$ labels with high confidence values only. The value of $GC(Spx_n)$ was thresholded to obtain binary confidence information.
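Both confidence measures act on the per-$Spx$ probability vector and can be sketched in numpy as follows. The function names are illustrative; note also that, with $J$ discrete states, the trapezoidal estimate of the Gini coefficient of a fully certain prediction is $1 - 1/J$, which approaches the ideal value of 1 as $J$ grows.

```python
import numpy as np

def ppci(p):
    """PPCI = 1 - E, with E the Shannon entropy normalized by log(J)."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                                  # convention: 0 * log(0) = 0
    return 1.0 - (-(nz * np.log(nz)).sum()) / np.log(len(p))

def gini(p):
    """GC = 1 - 2 * (area under the Lorentz curve of the rank-ordered
    class probabilities)."""
    p = np.sort(np.asarray(p, dtype=float))        # rank-order (increasing)
    L = np.concatenate([[0.0], np.cumsum(p)])      # sampled Lorentz curve
    dx = 1.0 / len(p)
    area = ((L[:-1] + L[1:]) / 2.0 * dx).sum()     # trapezoidal integration
    return 1.0 - 2.0 * area

def is_confident(p, tau=0.9):
    """Binary confidence decision used for automatic image tagging."""
    return gini(p) > tau
```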
An $Spx$ was considered to have an acceptable confidence level if $GC(Spx_n) > \tau$, for a threshold~$\tau$. The same procedure was performed using $PPCI$ instead of $GC$. \input{tables_images/feat_conf.tex} \section{In vivo validation} \label{sec:experiments} Seven pigs were used to examine hypotheses {\bf{H1}} and {\bf{H2}} introduced in Sec.~\ref{sec:intro}. Raw multispectral images ($I$) were acquired using a custom-built MI laparoscope. In this study, the multispectral laparoscope comprised a Richard Wolf (Knittlingen, Germany) laparoscope and a 5--MP Pixelteq Spectrocam (Largo, FL, USA) multispectral camera. The $\lambda_i$ for each $i^{th}$ band index and the corresponding full widths at half maximum (FWHM) are reported in Table~\ref{tab:wave}. The filters were chosen according to the band selection strategy for endoscopic spectral imaging presented in \cite{wirkert2014endoscopic}. The method makes use of the Sheffield index \cite{sheffield1985selecting}, an information theory-based band selection method originally proposed by the remote sensing community. The $700$, $560$ and $470$~nm channels were chosen to simulate RGB images, as the camera did not provide RGB images directly. The image size was $1228\times1029\times8$ for MI and $1228\times1029\times3$ for RGB. The physical size of the multispectral camera was $136\times124\times105$~mm, with a weight of 908~g. Acquiring one multispectral image stack took 400~ms. Of the seven pigs, three were used for training ($29$ images) and four for testing ($28$ images). The number of images used to test the SVM performance on RGB and MI data was the same, as RGB data were directly obtained from MI data by selecting 3 of the 8 MI channels. The total number of $Spx$ in the training and testing datasets, for both MI and RGB data, was 1382 and 1559, respectively.
We considered six porcine organ tissues typically encountered during hepatic laparoscopic surgery: the liver, gallbladder, spleen, diaphragm, intestine, and abdominal wall. These tissues were recorded during \textit{in vivo} laparoscopy. Challenges associated with the \textit{in vivo} dataset include: \begin{itemize} \item Wide range of illumination \item Variation of the endoscope pose \item Presence of specular reflections \item Presence of multiple organs in one image \item Organ movement \end{itemize} Visual samples of the dataset challenges are shown in Fig. \ref{fig:chall}. The multispectral images were pre-processed as described in Sec.~\ref{sec:uncertainty-aware}. The $Spx$ segmentation with LSC was achieved using an average $Spx$ size of $150^2$ pixels and an $Spx$ compactness factor of $0.1$. Accordingly, $55$ $Spx$ on average were obtained for each image. The $LBP_{riu2}^{R,P}$ were computed considering the following $(R, P)$ combinations: {(1, 8), (2, 16), and (3,~24)}. The feature vector for an $Spx$ was obtained by concatenating the $H_{LBP}$ with the $AS$ value for all $8$ multispectral channels (for MI) and for $\lambda_i = 700$, $560$ and $470$ nm (for RGB). The feature vector size for an $Spx$ was: \begin{equation} (l_{H_{LBP}} + l_{AS}) \times N_C \end{equation} where $l_{H_{LBP}}$ is the length of $H_{LBP}$, equal to 54, $l_{AS}$ is the length of $AS$, equal to 1, and $N_C$ is the number of channels, 3 for RGB and 8 for multispectral data. The SVM kernel parameters ($C = 10^4$ and $\gamma=10^{-5}$) were retrieved during the training phase via grid-search and 10-fold cross-validation on the training set. The grid-search spaces for $\gamma$ and $C$ were set to [$10^{-8}$,~$10^{1}$] and [$10^{1}$, $10^{10}$], respectively, with $10$ values spaced evenly on the $log_{10}$ scale in both cases. The determined values for the hyperparameters were subsequently used in the testing phase. 
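The grid search just described maps directly to scikit-learn's `GridSearchCV`; the following is a sketch in which `X_train` and `y_train` stand for the $Spx$ feature matrix and organ labels and are not defined here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# 10 values spaced evenly on the log10 scale for each hyperparameter,
# matching the search spaces reported in the text.
param_grid = {
    "gamma": np.logspace(-8, 1, 10),
    "C": np.logspace(1, 10, 10),
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
# search.fit(X_train, y_train)
# search.best_params_  ->  the (C, gamma) pair with the best CV accuracy
```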
The feature extraction was implemented using OpenCV\footnote{http://opencv.org/}. The classification was implemented using scikit-learn \cite{pedregosa2011scikit}\footnote{http://scikit-learn.org/}. \input{tables_images/spAccThConf.tex} \subsubsection{Investigation of H1} To investigate whether the inclusion of a confidence measure increases the $Spx$-based organ classification accuracy ($Acc_{Spx}$), we evaluated the $Acc_{Spx}$ dependence on $\tau \in \{0.5, 0.6, 0.7, 0.8, 0.9\}$ applied to both $GC$ and $PPCI$. $Acc_{Spx}$ is defined as the ratio of correctly classified confident $Spx$ to all confident samples in the testing set. We evaluated whether differences existed between the $Acc_{Spx}$ obtained applying $GC$ and $PPCI$ to the SVM output probabilities using the Wilcoxon signed-rank test for paired samples (significance level = 0.05). {We also investigated the SVM performance with the inclusion of confidence when leaving one organ out of the training set. Specifically, we trained six SVMs, each time leaving one organ out. We computed, for each of the six cases, the percentage ($^\%LC_{Spx}$) of low-confidence $Spx$ (considering $\tau = 0.9$). We did this both for the organ that was excluded ($Ex$) from the training set and for the included organs ($In$).} For image tagging, we computed the tagging accuracy ($Acc_{Tag}$) for different $\tau$, where $Acc_{Tag}$ is the ratio of correctly classified organs in the image to all organs in the testing image. \subsubsection{Investigation of H2} To investigate whether MI data are more suitable for anatomical structure classification than conventional RGB video data, we performed the same analysis for RGB and compared the results with those from MI. To complete our evaluation, we also assessed the performance of $H_{LBP}$ alone and $AS$ alone for $\tau=0$, which corresponds to the \textit{Base} case, i.e., SVM classification without a confidence computation.
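The evaluation protocol for {\bf{H1}} can be sketched as follows; the helper names are hypothetical, and `conf` stands for the per-$Spx$ $GC$ or $PPCI$ values.

```python
import numpy as np

def acc_spx(y_pred, y_true, conf, tau):
    """Acc_Spx: correctly classified confident Spx / all confident Spx."""
    keep = conf > tau
    if not keep.any():
        return float("nan")                 # no confident Spx at this tau
    return float((y_pred[keep] == y_true[keep]).mean())

def tag_image(spx_labels, conf, tau=0.9):
    """Image tags: organ labels occurring among high-confidence Spx."""
    return set(np.asarray(spx_labels)[np.asarray(conf) > tau])

def acc_tag(tags, organs_present):
    """Acc_Tag: correctly tagged organs / organs present in the image."""
    return len(tags & organs_present) / len(organs_present)
```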
Since the analyzed populations were not normal, we used the Wilcoxon signed-rank test for paired samples to assess whether differences existed between the mean ranks of the RGB and MI results (significance level~$= 0.05$). \input{tables_images/cm.tex} \input{tables_images/hl_acc1.tex} \section{Results} \label{sec:res} The descriptive statistics of $Acc_{Spx}$ for the analyzed features are reported in Table~\ref{tab:feat_conf}. For the \textit{Base} case, the highest $Acc_{Spx}$ (median $= 90\%$, inter-quartile range $= 6\%$) was obtained with $H_{LBP}+AS$ and MI. The other results all differ significantly (p-value $< 0.05$) from those obtained with $H_{LBP}+AS$ and MI. When the $\tau$ applied to $GC$ (Fig. \ref{fig:spAccThConf_gc}) and $PPCI$ (Fig. \ref{fig:spAccThConf_entro}) was varied in $\{0.5, 0.6, \dots, 0.9\}$, the median $Acc_{Spx}$ for the MI data increased monotonically to 99\% ($\tau = 0.9$), when using both $GC$ and $PPCI$. The same trend was observed for the RGB data, with an overall improvement of the median from 81\% to 93\% (using $GC$) and 91\% (using $PPCI$). For both the \textit{Base} case and after the introduction of the confidence measures, MI outperformed RGB (p-value $<$ 0.05). No significant differences were found when comparing the classification performance obtained with $GC$ and $PPCI$. Therefore, as the $GC$ is more sensitive to high values and faster to compute than the $PPCI$, we decided to use $GC$. \begin{figure} \centering \includegraphics[width = .4\textwidth]{images/excl_vs_incl.pdf} \caption{\label{fig:in_ex} {Effect of previously unseen target structures on the uncertainty estimation. Percentage ($^\%LC_{Spx}$) of low-confidence $Spx$ ($\tau = 0.9$) for organs that were seen ($In$) and not seen ($Ex$) in the training process.}} \end{figure} Figure~\ref{fig:cm} shows the confusion matrix for MI and $\tau = 0.9$ on $GC$.
Note that, in the case yielding the least accurate result, which corresponds to spleen classification, the accuracy rate still achieved $96\%$, whereas for RGB the lowest accuracy rate was~$69\%$. \input{tables_images/tag_base_conf.tex} \input{tables_images/visual.tex} The $^\%LC_{Spx}$ boxplots relative to the leave-one-organ-out experiment are shown in Fig. \ref{fig:in_ex}. For MI, the $^\%LC_{Spx}$ was significantly higher for organs that were not seen in the training phase (42\% ($Ex$) vs. 23\% ($In$)), whereas no such increase was observed for RGB (36\% ($Ex$) vs. 40\% ($In$)). When applied to endoscopic image tagging, the mean $Acc_{Tag}$ values in our experiments increased from 65\% (RGB) and 80\% (MI) to 90\% (RGB) and 96\% (MI) with the incorporation of the confidence measure (using $GC$). The descriptive statistics are reported in Fig.~\ref{fig:hl_acc1}. In this instance, MI also outperformed RGB both in the \textit{Base} case and with the confidence measure (p-value~$<0.05$). {Figure~\ref{fig:tag_base_conf}} shows the influence of low-confidence $Spx$ exclusion on the image tagging: after low-confidence $Spx$ exclusion, all $Spx$ in the image were classified correctly. Sample results for the SVM classification and the corresponding confidence map (using $GC$) are shown in {Fig.~\ref{fig:visual}}. For low-confidence $Spx$, the probable cause of uncertainty is also reported. The main sources of uncertainty are specular reflections, camera sensor noise at the image corners, and the partial organ effect, i.e., when two or more organs correspond to one $Spx$. \section{Discussion} \label{sec:dis} The emerging field of surgical data science \cite{maier2017surgical} aims at observing the entire patient workflow in order to provide the right assistance at the right time. One important prerequisite for context-aware assistance during surgical treatment is to correctly classify the phase within an intervention. While a great amount of effort has been put into automatic instrument detection (e.g.
\cite{sznitman2014fast,bodenstedt2016superpixel,allan2013toward}), the problem of automatic organ classification has received extremely little attention. We attribute this to the fact that the task is extremely challenging. In fact, the related problem of organ boundary detection was regarded as so challenging by participants of the MICCAI 2017 endoscopic vision challenge (https://endovis.grand-challenge.org/) that only a single team submitted results for the sub-challenge on kidney boundary detection. In this work, we tackled this problem with two previously unexplored approaches: \begin{itemize} \item Accuracy: We slightly changed the image acquisition process by using a multispectral camera instead of a standard RGB camera in order to increase the quality of the input data (for the classifier). The effect of this measure was an increase in accuracy of 11\% for the task of organ classification and an increase of 23\% for the task of automatic image tagging. \item Robustness: We derived superpixel-based measures of confidence to increase the reliability of image tagging. The result was an absolute boost in accuracy of 38\% (RGB) and 20\% (MI). \end{itemize} With our validation dataset, we showed that MI significantly outperforms standard RGB imaging in classifying abdominal tissues. Indeed, as the absorption and scattering of light in tissue are highly dependent on (i) the molecules present in the tissue, and (ii) the wavelength of the light, the multispectral image stack was able to encode tissue-specific optical information, enabling higher accuracy in distinguishing different abdominal structures in comparison to standard RGB. With the introduction of the confidence measure, we showed that the classification accuracy can be improved for both RGB and MI. This happened when exploiting both $GC$ and $PPCI$.
Since no significant differences were found between $GC$ and $PPCI$, we decided to use $GC$, as it is more sensitive at higher values than $PPCI$ and its computation is faster. In fact, a major advantage of our method is its high classification accuracy, which attained 93\% (RGB) and 99\% (MI) in the regions with high confidence levels, with a significant improvement compared to the \textit{Base} case. Few misclassifications of high-confidence $Spx$ occurred, and where they did, they mainly involved tissues that are challenging to distinguish even for the human eye, e.g. liver and spleen (Fig.~\ref{fig:cm}). { It is worth noting that $GC$ and $PPCI$ were two examples of confidence estimation measures {to investigate {\bf{H1}}}. {We decided against using simple thresholding on the maximum ($Max$) value of $Pr(Spx_n=j)$ computed among the $J$ organ classes, as $GC$ and $PPCI$ are generally known to be more sensitive at higher values \cite{marcot2012metrics}. This assumption was confirmed in additional experiments, where image tagging performed with confident $Spx$ according to $GC$/$PPCI$ was substantially more robust than tagging based on confident $Spx$ according to $Max$.} The results obtained with the introduction of the confidence measure are comparable with those obtained by Zhang et al.~\cite{zhang2016tissue} for \textit{ex vivo} organ classification in a controlled experimental setup. Zhang et al. reported a median classification accuracy of 98\% for MI, whereas our classification accuracy for the \textit{Base} case only achieved 90\% due to the challenging nature of the \textit{in vivo} dataset. An accuracy level comparable to that of~\cite{zhang2016tissue} was, however, restored for our dataset once the low-confidence $Spx$ were excluded. {When excluding one organ from the training set, the $^\%LC_{Spx}$ relative to the excluded organ was, in the case of MI, significantly higher than the number of low-confidence superpixels obtained for the remaining organs.
This indicates that the confidence inclusion helped in handling situations where unknown structures appeared in the field of view of the camera.} These results are in keeping with those found in the literature on confidence and uncertainty estimation~\cite{cheetham2004measures, kendall2017uncertainties}. Indeed, the importance of estimating the level of confidence of a classification with a view to improving system performance has been widely highlighted in several research fields, such as face recognition~\cite{delany2005generating}, spam-filtering~\cite{ orozco2008confidence}, and cancer recognition~\cite{zhang2013subpopulation}. However, the use of confidence metrics had not been exploited in the context of laparoscopic image analysis until now. Although several $Spx$ misclassifications occurred in the \textit{Base} case, which had a negative effect on tagging performance, the low-confidence $Spx$ exclusion significantly increased tagging accuracy. Indeed, regions affected by camera sensor noise, specular reflections, and spectral channel shift due to organ movement were easily discarded based on their confidence value. The same applied when the $Spx$ segmentation failed to separate two organs. In this case too, MI performed better than standard RGB. While we are the first to address the challenges of \textit{in vivo} image labeling, including the large variability of illumination, variation of the endoscope pose, the presence of specular reflections, organ movement, and the appearance of multiple organs in one image, one disadvantage of our validation setup is that our database was not recorded during real surgery. Hence, some of the challenges typically encountered when managing real surgery images were absent (e.g., blood, smoke, and occlusion). Moreover, as our camera does not provide RGB data directly, we generated a synthetic RGB image by merging three MI channels.
It should be noted, however, that our RGB encodes more specific information, as the bands used to obtain these data are considerably narrower than those of standard RGB systems (FWHM = $20$ nm). We also recognize that the relatively small number of training images (29) could be seen as a limitation of the proposed work. However, looking at research on the topic of tissue classification in laparoscopy, this number is comparable with that of Chhatkuli et al. \cite{chhatkuli2014live}, who used 45 uterine images, and Zhang et al. \cite{zhang2016tissue}, who recorded 9 poses of just 12 scenes (3 pigs $\times$ 4 \textit{ex-vivo} organs). Further, it is worth noting that our training was performed at the $Spx$ level, meaning that the training set sample size was about $55 \times 29$, where 55 is the average number of $Spx$ in an image. Considering that the proposed study was not aimed at evaluating the system performance for clinical translation purposes, we did not analyze the clinical requirements on the performance of the proposed method. While we recognize the relevance of such an analysis, we believe that it should be performed in relation to the specific application. For example, with reference to \cite{katic2013context}, we plan to analyze and evaluate the requirements of a context-aware AR system supported by the proposed methodology. { In discussions with our clinical partners, it emerged that the end-to-end accuracy (i.e. for recognizing the surgical state) should be close to 100\%. However, it still has to be investigated how errors in image tagging affect the error of the final task.} With our MI laparoscope prototype, the image stack acquisition time (400 ms) was shorter than that of most systems presented in the literature (e.g. \cite{clancy2014multispectral} with $\sim$3~s), which makes it more advantageous for clinical applications.
Nevertheless, to fully meet the clinical requirements in terms of system usability, we are currently working on further shrinking the system and speeding it up, so as to achieve real-time acquisition. {A further solution we would like to investigate is the use of loopy belief propagation \cite{murphy1999loopy,ihler2005loopy} as a post-processing strategy to include spatial information on how confident classification labels appear in the image. This would be particularly useful for images where the tagging failed due to a few confidently misclassified $Spx$ surrounded by correctly classified confident $Spx$.} Future work will also deal with the real-time implementation of the classification algorithm, which was not the aim of this work. Recent advancements in tissue classification research suggest that the use of convolutional neural networks (CNNs) could also be investigated for comparison~\cite{shin2016deep}. {Indeed, uncertainty in deep learning is an active and relatively new field of research, and standard deep learning tools for classification do not capture model uncertainty \cite{gal2016uncertainty}. Excluding popular dropout strategies (e.g. \cite{srivastava2014dropout,kendall2016modelling}), among the most recently proposed solutions, variational Bayes by Backpropagation \cite{blundell2015weight,pawlowski2017implicit} is drawing the attention of the deep learning community.} \section{Conclusions} \label{sec:con} In this paper, we addressed the challenging topic of robust classification of anatomical structures in \textit{in vivo} laparoscopic images. With the first \textit{in vivo} laparoscopic MI dataset, we confirmed the two hypotheses: ({\bf{H1}}) the inclusion of a confidence measure increases the $Spx$-based organ classification accuracy substantially and ({\bf{H2}}) MI data are more suitable for anatomic structure classification than conventional video data. To this end, we proposed the first approach to anatomic structure labeling.
The approach features an intrinsic confidence measure and can be used for high-accuracy image tagging, with an accuracy of $90\%$ for RGB and $96\%$ for MI. In conclusion, the method proposed herein could become a valuable tool for surgical data science applications in laparoscopy due to the high level of accuracy it provides in image tagging. Moreover, by making our MI dataset fully available, we believe we will stimulate research in the field, encouraging and promoting the clinical translation of MI systems. \section*{Acknowledgments} The authors would like to acknowledge support from the European Union through the ERC starting grant COMBIOSCOPY under the New Horizon Framework Programme grant agreement ERC-2015-StG-37960. \subsection*{Compliance with ethical standards} \subsection*{Disclosures} The authors have no conflict of interest to disclose. \subsection*{Ethical standards} This article does not contain any studies with human participants. All applicable international, national and/or institutional guidelines for the care and use of animals were followed. \bibliographystyle{IEEEtran}
\section{Introduction} \hskip+1.5em In this paper we continue, for the case $\hbar\to 0$, the description of the loci of zeros of fundamental solutions to the Schr{\"o}dinger equation (SE) given in our previous paper \cite{11} for the case of the high energy limit $E\to\infty$. The case $\hbar\to 0$ has been considered recently by Hezari \cite{9}, who investigated in this limit the problem of complex zeros of eigenfunctions of the SE with real polynomial potentials of even degree, while the energy parameter $E$ was kept fixed and $\hbar$ was quantized. In fact the two cases, i.e. the energy quantized while the Planck constant is kept fixed, and the energy kept fixed while the Planck constant is quantized, correspond to two different semiclassical limits for the quantized parameters: the high energy limit and the small-$\hbar$ limit each lead to a different behaviour of the corresponding Stokes graphs and to different sets of eigenfunctions. Nevertheless, since both these limits are of the same semiclassical nature, at least mathematically, it is not surprising that Hezari's results on the problem of complex zeros of eigenfunctions are similar to those of Eremenko {\it et al} \cite{12}. Note also the respective studies done by Martinez-Filkenstein {\it et al} \cite{15} and Zelditch \cite{14} as well as by Delabaere {\it et al} \cite{13}. In this paper we would like to generalize the results of Hezari. In particular we would like: \begin{enumerate} \item to establish for a general polynomial potential theorems on the distributions of zeros of fundamental solutions in the limit $\hbar\to 0$ which are analogues of the corresponding theorems established in our previous paper \cite{11} for $E\to\infty$ \item to get the corresponding results of this limit for a general real double-well (D-W) polynomial potential of even degree and its multiple-well generalization.
\end{enumerate} In the first case a general result has been obtained for polynomials providing Stokes graphs with simple turning points only and with at most one inner Stokes line, while in the second case we have considered initially a real D-W polynomial potential of tenth degree, but of a form that allows a simple generalization of the obtained results to general multiple-well real polynomial potentials of even degree. The paper is organized as follows. In sec.2 the main problem of this paper is formulated and two general theorems are obtained for the limit loci of zeros of fundamental solutions: for a general non-critical Stokes graph corresponding to a polynomial potential and for such a graph with a unique internal Stokes line. In sec.3 the fundamental solutions whose sectors are linked by a unique internal Stokes line are quantized, and the changes caused by the quantization in the limit loci of their zeros are noted accordingly. In sec.4 a double-well potential is considered and the limit distributions of zeros of two fundamental solutions vanishing in the infinities of the real axis are found. In sec.5 the same solutions are quantized (matched to each other) and the changes caused by the quantization in the limit loci of their zeros are investigated. In sec.6 the symmetric case of the double-well potential is considered together with the limit loci of zeros of the same pair of quantized fundamental solutions. In sec.7 simple generalizations of the results of sections 4-6 to multiple-well potentials are given. In sec.8 we summarize the results of the paper. \section{The non-quantized cases of the limit $\hbar\to 0$} \hskip+2em It is worth noting that the limit $\hbar\to 0$ can in fact be considered for the non-quantized Planck constant, and this case is even much easier to handle, at least as long as all turning points are simple.
In the latter cases our method developed in the previous paper \cite{11} can be applied equally well, despite the fact that the positions of the roots can now be arbitrary. The method can certainly be used safely for the non-critical SG's as well as for the critical ones with a single internal Stokes line. To begin with, consider the Schr{\"o}dinger equation for the case: \begin{eqnarray} \phi''(z)-\lambda^2(P_n(z)-E)\phi(z)=0 \label{1} \end{eqnarray} with $P_n(z)=a_nz^n+...+a_1z,\;a_n\neq 0,\;n\geq 1$, and $\lambda^2=\frac{2m}{\hbar^2},\;\lambda>0$, and all $a_n,\;n\geq 1$, are complex and $\lambda$-independent. By assumption all roots of the equation: \begin{eqnarray} P_n(z)-E=0 \label{1b} \end{eqnarray} are simple. To make our paper self-contained we shall equip it with all the notions necessary to introduce the fundamental solutions (FS's) to eq.\mref{1} and to discuss their main properties utilized in the paper, despite the fact that all these necessary notions can also be found in our earlier papers \cite{11,3}. While considering the cases of double- and multiple-well potentials we shall limit ourselves to real and positive $\lambda$; in this section, however, it is worthwhile to consider it as a complex parameter. First, with eq.\mref{1} and the potential $P_n(z)$ the set of all Stokes lines (SL's), called the Stokes graph (SG), is associated. The SL's are defined by the conditions: \begin{eqnarray} \Re\ll(\lambda \int_{z_i}^{z} \sqrt{W_n(\xi)}d\xi\r) = 0,\;\;\;\;\;\;\;\;i=1,...,n \label{1a} \end{eqnarray} where $W_n(z)\equiv P_n(z)-E$ and $z_i,\;i=1,...,n$, are the roots of $W_n(z)$. The roots $z_i,\;i=1,...,n$, are also called turning points (TP's). By the above assumptions and definition the Stokes lines and the corresponding Stokes graph are defined on the two-sheeted Riemann surface $R_2$ with the turning points of $W_{n}(z)$ as the branch points of this surface.
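For a concrete illustration of the conditions \mref{1a}, note that along an SL the differential $\sqrt{W_n(z)}\,dz$ must be purely imaginary (for real $\lambda$), so an SL can be traced numerically by following the unit tangent $\pm i/\sqrt{W_n(z)}$ from a seed near a turning point. A minimal sketch for the Airy-type case $W(z)=z$, whose three SL's leave the simple turning point $z=0$ at the angles $\pi/3$, $\pi$ and $-\pi/3$ (the crude Euler scheme below is illustrative only):

```python
import cmath
import math

# Along an SL the differential sqrt(W_n(z)) dz is purely imaginary, so the unit
# tangent of the SL is +-i/sqrt(W_n(z)) normalized.  Here W(z) = z (Airy-type
# case): the three SL's leave the TP z = 0 at angles pi/3, pi and -pi/3.
W = lambda z: z

def trace_sl(theta, sign, steps=4000, h=1e-3):
    z = 0.01 * cmath.exp(1j * theta)      # seed on the SL, close to the TP
    for _ in range(steps):
        d = sign * 1j / cmath.sqrt(W(z))  # tangent direction (principal branch)
        z += h * d / abs(d)               # unit-speed Euler step
    return z

for theta, sign in ((math.pi / 3, +1), (math.pi, -1), (-math.pi / 3, -1)):
    z_end = trace_sl(theta, sign)
    # the curve stays on the ray arg z = theta, and Re((2/3) z^(3/2)) stays 0
    print(theta, z_end, (2.0 / 3.0 * z_end ** 1.5).real)
```

With unit-speed steps the traced curves stay on the rays $\arg z=\pi/3,\,\pi,\,-\pi/3$, and $\Re\big(\frac{2}{3}z^{3/2}\big)$ stays numerically zero along them, as \mref{1a} requires for this potential.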
However, since on these two sheets the values of $\sqrt{W_{n}(z)}$ differ only by sign, the projections on the $z$-plane of the Stokes lines defined on each sheet coincide. Therefore, considering a pattern of SL's on the cut $z$-plane $C_{cut}$ with cuts emerging from the turning points of $W_{n}(z)$, we see that the SL's on $C_{cut}$ are quasi-continuous on the cuts, despite the fact that they are pieces of different SL's collected from the two sheets of $R_2$. In general, an SL emerging from a given turning point $z_i$ can run to infinity of $C_{cut}$ or end at another turning point $z_j$. An SL with the first property is called {\it external}, while one with the second property is called an {\it inner} SL. An SG is called {\it critical} if at least one of its SL's is an inner one. It is called {\it non-critical} in the opposite case. There are three SL's emerging from each TP at equal angles on $C_{cut}$. In each SG there are at least $n+2$ SL's which are external, i.e. at least $n+2$ SL's have to escape to infinity of $C_{cut}$ in $n+2$ different directions, so that there are $n+2$ asymptotes corresponding to these SL's. These asymptotes, which emerge from the point $z=0$ of $C_{cut}$ at equal angles, are called Stokes rays (SR's). There is of course an Euler-like relation between the number $e$ of the external SL's and the number $i$ of the internal ones, namely $e+2i=3n$. As was said, the external SL's group into $n+2$ bunches. In each such bunch with a given asymptote there is a finite number of external SL's which can be enumerated with the clockwise rule. The first SL in each bunch and its last one are the most important. This is because, when moving along the last SL of one bunch toward the closest TP from which it emerges, and avoiding clockwise all the TP's met along such a journey to get to successive SL's, one finds the last SL to be linked continuously with the first SL of the next bunch by a chain of internal SL's.
These two SL's, together with all the internal ones passed during the journey mentioned, form the boundary of an infinite domain called a {\it sector}. It follows from their definition that no sector contains any TP's inside. There are therefore $n+2$ such sectors for each SG on $C_{cut}$, separated by bunches. We can enumerate the sectors clockwise starting with an arbitrarily chosen sector. $C_{cut}$ has its partner $C_{cut}'$ with which it can be glued along the cuts, forming in this way the connected Riemann surface $R_2$. On $C_{cut}'$ the construction of sectors can be repeated, so that on $R_2$ there are in this way $2n+4$ sectors which can be enumerated clockwise in the way mentioned above. However, as we have mentioned earlier, the sectors lying on $C_{cut}'$ are projections of the corresponding sectors lying on $C_{cut}$, so that by the proper enumeration mentioned we can have the relations $S_{n+2+k}=PS_k,\;k=1,...,n+2$, where $S_{n+2+k}$ and $S_k$ denote the sectors in $C_{cut}'$ and $C_{cut}$ respectively and $P$ denotes the projection. By their definition, in each sector $\Re\ll(\lambda \int_{z_i}^{z} \sqrt{W_n(\xi)}d\xi\r)$ has a definite sign ($\pm$) if a root $z_i$ is chosen on its boundary. This sign alternates when $z$ moves from a given sector to its neighbours or when it crosses a cut. In particular the corresponding signs are opposite for the pairs of sectors $S_k$ and $S_{n+2+k},\;k=1,...,n+2.$ There are $2n+4$ fundamental solutions (FS) to the equation \mref{1}.
In the sector $S_k,\;k=1,...,2n+4,$ the corresponding FS $\psi_k(z,\lambda)$ has the following form: \begin{eqnarray} \psi_k(z,\lambda) = W_n^{-\frac{1}{4}}(z)e^{\sigma_k\lambda {\tilde W}_n(z,z_k)}{\chi_k(z,\lambda)} \label{10} \end{eqnarray} where $z\in S_k$ and $z_k$ is a turning point (a root) lying on the boundary of $S_k$ while \begin{eqnarray} {\tilde W}_n(z,z_k)=\int_{z_k}^z\sqrt{W_n(y)}dy\nonumber\\ \chi_{k}(z,\lambda) = 1 + \sum_{n{\geq}1} \left( \frac{\sigma_k}{2\lambda} \right)^{n}Y_{k,n}(z,\lambda) \label{11} \end{eqnarray} and \begin{eqnarray} Y_{k,n}(z,\lambda) =\int_{\gamma_k(z)}d{y_{1}} \int_{\gamma_k(y_1)}d{y_{2}} \ldots \int_{\gamma_k(y_{n-1})}d{y_{n}} \omega(y_{1})\omega(y_{2}) \ldots \omega(y_{n}){\times}\nonumber\\ \ll( 1 - e^{-2\sigma_k\lambda {\tilde W}_n(z,y_1)} \right) \left(1 - e^{-2\sigma_k\lambda {\tilde W}_n(y_1,y_2)}\right)\cdots \ll(1 - e^{-2\sigma_k\lambda {\tilde W}_n(y_{n-1},y_n)}\right)\nonumber\\n\geq 1 \label{12} \end{eqnarray} with \begin{eqnarray} \omega(z) ={\frac{5}{16}}{\frac{W_n^{\prime 2}(z)}{W_n^{\frac{5}{2}}(z)}} - {\frac{1}{4}}{\frac{W_n^{\prime\prime}(z)}{W_n^{\frac{3}{2}}(z)}}= {W_n^{- \frac{1}{4}}(z)} \left( {W_n^{- \frac{1}{4}}(z)} \right)^{\prime{\prime}} \label{13} \end{eqnarray} The signatures $\sigma_k=\pm 1$ present in the formulae \mref{10}-\mref{12} are defined in each particular sector $S_k$ in such a way as to ensure that the inequality $\Re(\sigma_k\lambda{\tilde W}_n(z,z_k))<0$ is satisfied in this sector. The integration paths $\gamma_k(z)$ in \mref{12}, which start from the infinities of the corresponding sectors, are canonical, i.e. they are chosen in such a way as to satisfy the inequality $\Re(\sigma_k\lambda{\tilde W}_n(y_j,y_{j+1}))\geq 0,\;y_j,y_{j+1}\in\gamma_k(z),$ for each factor of the integrand in \mref{12}.
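At leading order, i.e. with $\chi_k\approx 1$, the representation \mref{10} can be checked against a direct numerical integration of eq.\mref{1}. A minimal sketch (the choice $W(z)=z^2+1$ on the real axis, $\lambda=20$, and the crude step size are assumptions made for illustration only; the turning points $\pm i$ then stay away from the integration path):

```python
import math

# Leading-order check of the representation above: with chi_k ~ 1 the decaying
# solution is phi(x) ~ W^{-1/4}(x) exp(-lam * int_0^x sqrt(W)), here for
# W(x) = x^2 + 1 on the real axis (turning points +-i off the path), lam = 20.
lam = 20.0
W = lambda x: x * x + 1.0
S = lambda x: 0.5 * (x * math.sqrt(x * x + 1.0) + math.asinh(x))  # int_0^x sqrt(W)
wkb = lambda x: W(x) ** -0.25 * math.exp(-lam * S(x))

# integrate phi'' = lam^2 W phi from x = 4 down to x = 0 (RK4) with WKB data
n, h = 40000, -4.0 / 40000
x = 4.0
phi = wkb(x)
dphi = phi * (-lam * math.sqrt(W(x)) - 0.5 * x / W(x))  # (log wkb)' * wkb
f = lambda x, p, dp: (dp, lam * lam * W(x) * p)
for _ in range(n):
    k1 = f(x, phi, dphi)
    k2 = f(x + h / 2, phi + h / 2 * k1[0], dphi + h / 2 * k1[1])
    k3 = f(x + h / 2, phi + h / 2 * k2[0], dphi + h / 2 * k2[1])
    k4 = f(x + h, phi + h * k3[0], dphi + h * k3[1])
    phi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    dphi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    x += h

print(phi / wkb(0.0))   # plays the role of chi at x = 0: unity up to O(1/lam)
```

The printed ratio stays within a few parts in a thousand of unity for this $\lambda$, in line with the fact that $\chi_k\to 1$ along canonical paths as $\lambda\to\infty$.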
It is easy to note the following identities: \begin{eqnarray} \psi_k(z,\lambda)\equiv\pm i\psi_{n+2+k}(Pz,\lambda),\;\;\;\;\;\;\;k=1,...,n+2,\;\;\;\; z\in C_{cut} \label{13a} \end{eqnarray} which follow from the definitions \mref{10}-\mref{13} of the FS's. On the other hand, in each pair $\psi_k(z,\lambda),\psi_j(z,\lambda),\;k,j=1,...,2n+4,\;j\neq n+2+k$ of FS's the solutions are linearly independent. We can conclude therefore that we can limit our attention to the FS's defined on $C_{cut}$ only, considering the FS's $\psi_k(z,\lambda)$ and $\psi_{n+2+k}(z,\lambda),\;k=1,...,n+2,$ as the two different representations of the same solution $\psi_k(z,\lambda),\;k=1,...,n+2,$ defined on $C_{cut}$, i.e. if $\psi_k(z,\lambda)$ is continued analytically on $C_{cut}$ and $z$ crosses a cut of $C_{cut}$ during this continuation then $\psi_k(z,\lambda)$ has to be substituted after such crossing by its alternative form $\psi_{n+2+k}(z,\lambda)$ satisfying \mref{13a}. This procedure has to be repeated each time such a crossing happens. Every point $z,\;z\in C_{cut}$, which can be reached by a canonical path $\gamma_k(z)$ will be called canonical with respect to the sector $S_k$ and the FS $\psi_k(z,\lambda)$. In particular the TP's which belong to $\partial S_k$ are canonical. A domain $D_k$ of validity of the representation \mref{10}-\mref{13} of $\psi_k(z,\lambda)$ is called canonical. It means that $D_k$ consists of all the canonical points of $\psi_k(z,\lambda)$ except the canonical TP's, at which the representation \mref{10}-\mref{13} of $\psi_k(z,\lambda)$ is singular. Obviously the boundary $\partial D_k$ consists of TP's and of SL's emerging from the former. We call (after Eremenko {\it et al} \cite{12}) all the SL's contained in $\partial D_k$ the exceptional SL's if the corresponding SG is non-critical or if it is critical with a single internal SL only. Let us now characterize the ESL's for the cases mentioned. Consider the non-critical SG.
The boundary $\partial D_k$ of $D_k$ has to contain some number of TP's, including the ones of $\partial S_k$. Let $z_r$ be such a TP. Then there is a canonical path $\gamma_k(z_r)$. Suppose that for any $z\in\gamma_k(z_r),\;z\neq z_r$, $\Re(\sigma_k\lambda{\tilde W}_n(z_r,z))>0$. Then it follows that $\gamma_k(z_r)$ does not cross any SL's emerging from $z_r$ and $\gamma_k(z_r)\in D_{L_{z_r}'L_{z_r}''}$, where $L_{z_r}'$ and $L_{z_r}''$ are two external SL's emerging from $z_r$ and $D_{L_{z_r}'L_{z_r}''}$ is a domain of $C_{cut}$ with these two SL's as its boundary. However, we can always deform $\gamma_k(z_r)$ to make it coincide partly with each of these two SL's up to $z_r$, and each of these common parts can be arbitrarily long. We conclude therefore that these two SL's emerging from $z_r$ have to belong to $D_k$. It should now be clear that it is the remaining external SL $L_{z_r}'''$ emerging from $z_r$ which is one of the ESL's constituting $\partial D_k$, i.e. $L_{z_r}'''$ is the SL whose points cannot be reached by any canonical path $\gamma_k(z)$ if none of the SL's $L_{z_r}'$ and $L_{z_r}''$ is to be crossed by such paths. For a given FS $\psi_k(z,\lambda)$ and a given TP $z_r$ denote this exceptional SL by $L_k^r(\equiv L_{z_r}''')$. Then we have: $\partial D_k=\bigcup_{r=1}^nL_k^r$. Next consider the critical SG with a single internal SL between the TP's $z_{r_1}$ and $z_{r_2}$, and let $\psi_{k_1}(z,\lambda)$ be the FS defined in the respective sector $S_{k_1}$ with $z_{r_1}\in \partial S_{k_1}$, while $z_{r_2}\in \partial S_{k_2}$, where $S_{k_2}$ is the sector in which the FS $\psi_{k_2}(z,\lambda)$ is defined. Denote also by $L_{r_1r_2}$ the internal SL and by $L_{r_j}'$ and $L_{r_j}''$ the remaining two external SL's emerging from the point $z_{r_j},\;j=1,2$.
Then we have: $\partial D_k=\bigcup_{r=1}^nL_k^r,\;k\neq k_1,k_2$, $\partial D_{k_1}=\bigcup_{r=1,r\neq r_1,r_2}^nL_{k_1}^r\cup L_{r_1r_2}\cup L_{r_2}'\cup L_{r_2}''$ and $\partial D_{k_2}=\bigcup_{r=1,r\neq r_1,r_2}^nL_{k_2}^r\cup L_{r_1r_2}\cup L_{r_1}'\cup L_{r_1}''$. Also in the considered critical case of the SG we shall call each component of the sum constituting $\partial D_k$ an exceptional SL (ESL) corresponding to the solution $\psi_k(z,\lambda),\;k=1,...,n+2$. For a given $\partial D_k$ denote further by $V_k(\epsilon)$ its $\epsilon$-vicinity defined by the following conditions: \begin{enumerate} \item $V_k(\epsilon)$ is a subset of $D_k$ \item $V_k(\epsilon)$ consists of all the points of $D_k$ the Euclidean distance of which to $\partial D_k$ is smaller than $\epsilon$ (see Fig.2 and 3) \end{enumerate} Let us still denote by $D_{k,\epsilon}$ a subset of $D_k$ given by $D_{k,\epsilon}=D_k\setminus{\bar V}_k(\epsilon)$. The following {\bf Lemma} and {\bf Theorem 1} can be proved along the same lines as in \cite{11}: \vskip 18pt {\bf Lemma} {\it In the domain $D_{k,\epsilon}$ the factor $\chi_k(z,\lambda)$ of the solution} \mref{10} {\it satisfies the following bound \begin{eqnarray} |\chi_k(z,\lambda)-1|\leq e^\frac{C_{\epsilon}}{\lambda_0}-1,\;\;\;\;\;|\lambda|>\lambda_0\nonumber\\ C_{\epsilon}=\liminf_{\gamma_k(z),\;z\in D_{k,\epsilon},\;k=1,...,n+2}\int_{\gamma_k(z)}|\omega(\xi)d\xi|<\infty \label{14} \end{eqnarray} where $\gamma_k(z)$ are canonical}. \vskip 30pt {\bf Theorem 1} {\it For sufficiently large $\lambda$ all zeros of $\psi_k(z,\lambda)$ lie entirely in the completion $C_{cut}\setminus D_{k,\epsilon}$ of the domain $D_{k,\epsilon}$.} {\bf Theorems 1a-1b} formulated below can be proved in exactly the same way as in \cite{11}. However, they still require the consideration of asymptotic expansions of the $\chi$-factors of the fundamental solutions \mref{10} for $\lambda\to +\infty$ in their canonical domains.
These asymptotic expansions are well defined and can be given the following exponential forms (see \cite{11} as well as ref.1 of \cite{4} and ref.5 of \cite{5}): \begin{eqnarray} \chi_k(z,\lambda)\sim\chi_k^{as}(z,\lambda)=1 + \sum_{m{\geq}1} \left( \frac{\sigma_k}{2\lambda} \right)^{m}{\tilde Y}_{k,m}(z)= \exp\ll(\sum_{m{\geq}1}\left(\frac{\sigma_k}{2\lambda}\right)^m\int_{\infty_k}^zX_m(y)dy\r) \label{551} \end{eqnarray} where \begin{eqnarray} {\tilde Y}_{k,m}(z)=\int_{\infty_k}^zdy_mW_n^{-\frac{1}{4}}(y_m)\ll(W_n^{-\frac{1}{4}}(y_m) \int_{\infty_k}^{y_m}dy_{m-1}W_n^{-\frac{1}{4}}(y_{m-1})\times\r.\nonumber\\ \ll(...\ll.W_n^{-\frac{1}{4}}(y_2)\int_{\infty_k}^{y_2}dy_1W_n^{-\frac{1}{4}}(y_1) \ll(W_n^{-\frac{1}{4}}(y_1)\r)''...\r)''\r)'' \label{552} \end{eqnarray} and \begin{eqnarray} \sum_{m{\geq}1}\left(\frac{\sigma_k}{2\lambda}\right)^mX_m(y)\equiv Z_k(y,\lambda)= \frac{1}{\chi_k^{as}(z,\lambda)}\frac{d\chi_k^{as}(z,\lambda)}{dz} \label{553} \end{eqnarray} Note that $X_m(y),\;m\geq 1$, are sector-independent point functions on the $C_{cut}$-plane, being given by the following recurrent formula (see ref.2 of \cite{4}): \begin{eqnarray} X_1(z)=-\omega(z)=-W_n^{-\frac{1}{4}}(z)\ll(W_n^{-\frac{1}{4}}(z)\r)''=W_n^{-\frac{1}{2}}(z)\frac{U_{2n-2}(z)}{W_n^2(z)}\nonumber\\ X_m(z)=\frac{1}{2} W_n^{-\frac{3}{2}}(z)W_n'(z)X_{m-1}(z)-W_n^{-\frac{1}{2}}(z)X_{m-1}'(z)-\nonumber\\ W_n^{-\frac{1}{2}}(z)\sum_{k=1}^{m-2}X_kX_{m-k-1},\;\;\; m=2,3,... \label{554} \end{eqnarray} where $U_{2n-2}(z)$ is a polynomial of degree $2n-2$. It follows from \mref{554} that $X_{2m},\;m\geq 1,$ have only poles at the turning points, while $X_{2m+1},\;m\geq 0,$ have square-root branch points there.
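The recurrent formula \mref{554} and the stated analytic properties of the $X_m$ can be checked symbolically for a concrete potential; a sketch (the choice $W(z)=z^2-1$, with simple turning points at $z=\pm 1$, is an assumption made for illustration):

```python
import sympy as sp

# Symbolic check of the recurrent formula for X_m, using the illustrative
# choice W(z) = z^2 - 1 (simple turning points at z = +-1).
z = sp.symbols('z')
Wn = z**2 - 1
Wm14 = Wn**sp.Rational(-1, 4)

# X_1 = -W^{-1/4} (W^{-1/4})''
X1 = -Wm14 * sp.diff(Wm14, z, 2)

# X_2 from the recurrence (the sum over k is empty for m = 2)
X2 = sp.Rational(1, 2) * Wn**sp.Rational(-3, 2) * sp.diff(Wn, z) * X1 \
     - Wn**sp.Rational(-1, 2) * sp.diff(X1, z)

# X_2 collapses to a rational (meromorphic) function of z, while X_1 keeps a
# square-root branch point at the turning points:
X2_rational = sp.Rational(9, 2) * z / Wn**3 - sp.Rational(15, 2) * z**3 / Wn**4
print(sp.simplify(X2 - X2_rational))    # vanishes identically
# its residue at the simple turning point z = 1 vanishes as well, consistent
# with the vanishing residues of Z^+:
print(sp.residue(X2_rational, z, 1))
```

For this potential one finds $X_2=\frac{9}{2}zW_n^{-3}-\frac{15}{2}z^3W_n^{-4}$, meromorphic with vanishing residues at $z=\pm 1$, whereas $X_1$ retains the square-root branch points.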
Therefore $Z^+(z,\lambda)$ and $Z^-(z,\lambda)$, where $Z^+(z,\lambda)+\sigma_kZ^-(z,\lambda)=Z_k(z,\lambda)$, have the same respective properties at these points, and \begin{eqnarray} Z^+(z,\lambda)=\sum_{m{\geq}1}\left(\frac{1}{2\lambda}\right)^{2m}X_{2m}(z)\nonumber\\ Z^-(z,\lambda)=\sum_{m{\geq}0}\left(\frac{1}{2\lambda}\right)^{2m+1}X_{2m+1}(z) \label{555} \end{eqnarray} If we now take into account that $\chi_{i\to j}(\lambda)\equiv\lim_{z\to\infty_j}\chi_i(z,\lambda)= \chi_{j\to i}(\lambda)$, where $\infty_j$ is the infinite point of the sector $S_j$ communicating canonically with the sector $S_i$ (see ref.1 of \cite{4} and ref.5 of \cite{5}), then we get \begin{eqnarray} e^{\int_{\infty_i}^{\infty_j}(Z^++\sigma_iZ^-)dz}=e^{\int_{\infty_j}^{\infty_i}(Z^++\sigma_jZ^-)dz} \label{556} \end{eqnarray} Since, however, $\sigma_i=-\sigma_j$, we get from \mref{556} \begin{eqnarray} \int_{\infty_i}^{\infty_j}Z^+(z,\lambda)dz=0 \label{557} \end{eqnarray} for any pair of canonically communicating sectors. However, the integration in \mref{557} is now not limited to canonical paths, since under the integral there are now no exponentials restricting this integration to canonical paths, i.e. these paths can be freely deformed with the integral still being convergent. It is easy to see that, because of this, a given integral \mref{557} can be deformed into any other integral between any pair of sectors (i.e. not necessarily communicating canonically) as well as into any integral along an arbitrary loop. This means that the residues of $Z^+(z,\lambda)$ at the poles which it has at the turning points vanish. Therefore we conclude that the Riemann surface of $Z^+(z,\lambda)$ is just the $C$-plane, on which it is meromorphic with vanishing residues at its poles. This is of course the property of each $X_{2m},\;m\geq 1$, as well. Thus when integrating $Z_k(z,\lambda)$ along contours starting and ending at the same points we get a contribution only from the odd part of $Z_k(z,\lambda)$, i.e.
from $\sigma_kZ^-(z,\lambda)$. We can now formulate two theorems analogous to the corresponding {\bf Theorems 3a-3b} of \cite{11}. {\bf Theorem 1a} {\it Zeros $\zeta_{l,qr}^{(k)}(\lambda)$ of $\psi_k(z,\lambda)$, $\lambda=|\lambda|e^{i\beta},\;|\lambda|=[|\lambda|]+\Lambda,\;0\leq\Lambda<1,\;k,l=1,...,n+2,\;q=0,1,2,...,\; r=0,1,...,[|\lambda|]-1$, in the non-critical cases and in the regular limit $\lambda\to\infty$ (with constant value of $\Lambda$), are distributed on $C_{cut}$ uniquely along the corresponding exceptional SL's according to the formulae}: \begin{eqnarray} \int_{K_l(\zeta_{l,qr}^{(k)}(\lambda))}\ll(\frac{1}{2}\sqrt{W_n(y)}+ \frac{1}{2\lambda}Z^-(y,\lambda)\r)dy=\sigma_k\ll(q[|\lambda|]+r-\frac{1}{4}\r)\frac{i\pi}{\lambda} \label{14a} \end{eqnarray} {\it where $K_l(\zeta_{l,qr}^{(k)}(\lambda))$ is a contour which starts and ends at $\zeta_{l,qr}^{(k)}(\lambda)$, rounding the turning point $z_l$ anticlockwise (see} Fig.1{\it). Zeros $\zeta_{l,qr}^{(k)}(\lambda)$ for $q>0$ have the following semiclassical expansion (see Appendix A):} \begin{eqnarray} \zeta_{l,qr}^{(k)}(\lambda)=\sum_{p\geq 0}\frac{1}{\lambda^p}\zeta_{l,qrp}^{(k)}(\Lambda) \label{141} \end{eqnarray} {\it with the first two terms given by}: \vskip 18pt \begin{tabular}{c} \psfig{figure=Fig1.EPS,width=11cm}\\ Fig.1 Exceptional lines (bold Stokes lines), zeros $\zeta_{l,qr}^{(k)}(\lambda)$ of $\psi_k(z,\lambda)$ and \\the integration contour $K_l(\zeta_{l,qr}^{(k)}(\lambda))$ \end{tabular} \vskip 18pt \begin{eqnarray} \int_{K_l(\zeta_{l,qr0}^{(k)})}\frac{1}{2}\sqrt{W_n(y)}dy= \int_{z_l}^{\zeta_{l,qr0}^{(k)}}\sqrt{W_n(y)}dy=\sigma_kqi\pi e^{-i\beta}\nonumber\\ \zeta_{l,qr1}^{(k)}(\Lambda)=\sigma_k(r-q\Lambda-\frac{1}{4})\frac{i\pi e^{-i\beta}}{\sqrt{W_n(\zeta_{l,qr0}^{(k)})}} \label{142} \end{eqnarray} {\it For $q=0$ we have instead $\zeta_{l,0r0}^{(k)}(\Lambda)\equiv z_l$ and} \begin{eqnarray} \int_{z_l}^{z_l+\zeta_{l,0r1}^{(k)}/\lambda}\sqrt{W_n(y)}dy=
(r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\ r>\frac{|\lambda|}{\pi}\limsup_{|\phi|\leq\pi}\ll|\int_{z_l}^{z_l+\epsilon e^{i\phi}}\sqrt{W_n(y)}dy\r| \label{142b} \end{eqnarray} {\it as well as} \begin{eqnarray} \zeta_{l,0r2}^{(k)}=\frac{1}{8}\frac{\int_{K_l(z_l+\zeta_{l,0r1}^{(k)}/\lambda)}X_1(y)dy} {\sqrt{W_n(z_l+\zeta_{l,0r1}^{(k)}/\lambda)}} \label{142c} \end{eqnarray} {\it with $|\zeta_{l,0r1}^{(k)}/\lambda|>\epsilon$}. Assuming now that there is a single inner SL between the roots $z_{r_1}\in\partial S_{k_1}$ and $z_{r_2}\in\partial S_{k_2}$, while all the remaining SL's of the SG corresponding to $W_n(z)$ are external, and putting: \begin{eqnarray} \sigma_{k_1}\lambda_s(R)\int_{z_{r_1}}^{z_{r_2}}\sqrt{W_n(y)}dy\equiv \sigma_{k_1}\lambda_s(R)I_{r_1r_2}=-\frac{1}{2}(s+R)i\pi\nonumber\\s=0,1,2,...,\;-\frac{1}{2}\leq R<\frac{1}{2} \label{24} \end{eqnarray} we get the following theorem analogous to {\bf Theorem 3b} of \cite{11}: \vskip 18pt {\bf Theorem 1b} {\it Zeros $\zeta_{l,qr}^{(k)}(\lambda)$ of $\psi_k(z,\lambda)$, $k=1,...,n+2,\;l=1,...,n,\;q=1,2,...,\; r=0,1,...,[|\lambda|]-1$, in the critical cases and in the regular limit $\lambda\to\infty$ are distributed on $C_{cut}$ uniquely along the corresponding exceptional SL's according to the formulae: a) $(k,l)\neq (k_1,r_1),(k_1,r_2),(k_2,r_1),(k_2,r_2)$, $|\lambda|=[|\lambda|]+\Lambda,\;0\leq\Lambda<1$ and $\Lambda$ is fixed} \begin{eqnarray} \int_{K_l(\zeta_{l,qr}^{(k)}(\lambda))}\ll(\frac{1}{2}\sqrt{W_n(y)}+ \frac{1}{2\lambda}Z^-(y,\lambda)\r)dy=\sigma_k\ll(q[|\lambda|]+r-\frac{1}{4}\r)\frac{i\pi}{\lambda} \label{14b} \end{eqnarray} {\it where $K_l(\zeta_{l,qr}^{(k)}(\lambda))$ is a contour which starts and ends at $\zeta_{l,qr}^{(k)}(\lambda)$, rounding the turning point $z_l$ anticlockwise.
Zeros $\zeta_{l,qr}^{(k)}(\lambda)$ have the following semiclassical expansion:} \begin{eqnarray} \zeta_{l,qr}^{(k)}(\lambda)=\sum_{p\geq 0}\frac{1}{\lambda^p}\zeta_{l,qrp}^{(k)}(\Lambda) \label{143} \end{eqnarray} {\it with the first two terms given by}: \begin{eqnarray} \int_{K_l(\zeta_{l,qr0}^{(k)})}\frac{1}{2}\sqrt{W_n(y)}dy= \int_{z_l}^{\zeta_{l,qr0}^{(k)}}\sqrt{W_n(y)}dy=\sigma_kqi\pi e^{-i\beta}\nonumber\\ \zeta_{l,qr1}^{(k)}(\Lambda)= \sigma_k(r-q\Lambda-\frac{1}{4})\frac{i\pi e^{-i\beta}}{\sqrt{W_n(\zeta_{l,qr0}^{(k)})}} \label{144} \end{eqnarray} {\it For $q=0$ we have instead $\zeta_{l,0r0}^{(k)}(\Lambda)\equiv z_l$ and} \begin{eqnarray} \int_{z_l}^{z_l+\zeta_{l,0r1}^{(k)}(\Lambda)/\lambda}\sqrt{W_n(y)}dy= (r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\ r>\frac{|\lambda|}{\pi}\limsup_{|\phi|\leq\pi}\ll|\int_{z_l}^{z_l+\epsilon e^{i\phi}}\sqrt{W_n(y)}dy\r| \label{144a} \end{eqnarray} {\it as well as} \begin{eqnarray} \zeta_{l,0r2}^{(k)}(\Lambda)=\frac{1}{8}\frac{\int_{K_l(z_l+\zeta_{l,0r1}^{(k)}(\Lambda)/\lambda)}X_1(y)dy} {\sqrt{W_n(z_l+\zeta_{l,0r1}^{(k)}(\Lambda)/\lambda)}} \label{144b} \end{eqnarray} {\it b) $(k,l)=(k_1,r_1),(k_1,r_2),(k_2,r_1),(k_2,r_2),\; |\lambda_s(R)|=[|\lambda_s(R)|]+\Lambda_s(R),\; 0\leq\Lambda_s(R)<1,\;s=0,1,2,...,$ and $R$ fixed, where $\lambda_s(R)=|\lambda_s(R)|e^{i\beta_s(R)}=-\frac{s+R}{I_{r_1r_2}}i\pi\sigma_k$ ($\Lambda_s(R)$ is bounded but not fixed). In these cases the number $q$ is bounded, i.e. $q\leq |I_{r_1r_2}|/\pi$, and in} \mref{144} {\it the minus sign is to be chosen according to our earlier conventions.
In the regular limit $\lambda_s(R)\to\infty$ there are two infinite sequences of zeros $\zeta_{r_2,qr}^{(k_1)\pm},\;q=1,2,...,\;r=0,1,...,[|\lambda_s(R)|]-1$, of $\psi_{k_1}(z,\lambda_s(R))$ distributed along the two ESL's $L_{r_2}'$ and $L_{r_2}''$ of the sector $S_{k_2}$ according to the following rules}: \begin{eqnarray} \int_{K_{r_2}(\zeta_{r_2,qr}^{(k_1)\pm})}\ll(\frac{1}{2}\sqrt{W_n(y)}+\frac{1}{2\lambda_s(R)}Z^-\r)dy =\pm\sigma_{k_1}\ll(q[|\lambda_s(R)|]+r-\frac{1}{4}+\frac{R}{2}\r)\frac{i\pi}{\lambda_s(R)}+\nonumber\\ \pm\frac{1}{4\lambda_s(R)}\oint_{K_{r_1r_2}}Z^-dy- \frac{\sigma_{k_1}}{2\lambda_s(R)}\ln2\cos\ll(R\pi-\frac{i\sigma_{k_1}}{2}\oint_{K_{r_1r_2}}Z^-dy\r) \label{14c} \end{eqnarray} {\it with the closed contour $K_{r_1r_2}$ surrounding the internal SL $L_{r_1r_2}$ and with the following first coefficients of the corresponding semiclassical expansion of $\zeta_{r_2,qr}^{(k_1)\pm}$}: \begin{eqnarray} \int_{z_{r_2}}^{\zeta_{r_2,qr0}^{(k_1)\pm}}\sqrt{W_n(y)}dy=\mp\sigma_{k_1}qi\pi e^{-i\beta_s(R)}\nonumber\\ \zeta_{r_2,qr1}^{(k_1)\pm}(R)=\mp\sigma_{k_1}\ll(r-q\Lambda_s(R)-\frac{1}{4}+\frac{R}{2} \pm\frac{1}{2}\ln2\cos(R\pi)\r)\frac{i\pi e^{-i\beta_s(R)}}{\sqrt{W_n(\zeta_{r_2,qr0}^{(k_1)\pm})}} \label{145} \end{eqnarray} {\it where the plus sign corresponds to a vicinity of the SL being the upper boundary of $S_{k_2}$ while the minus one to a vicinity of its lower boundary. Mutatis mutandis, the same results are valid for the solution $\psi_{k_2}(z,\lambda_s(R))$.} Fig.2 shows all the ESL's occupied by zeros of $\psi_{k_1}(z,\lambda_s(R))$ in the limits considered in the above theorem. \section{The quantized case of the limit $\hbar\to 0$} \hskip+2em Let us consider now the $\hbar$-quantized case. We would like to consider this limit a little more generally than was done by Hezari \cite{9}, assuming that the quantization condition is provided by identifying the fundamental solutions defined in the sectors $S_{k_1},S_{k_2}$.
Such an assumption, however, calls for a comment, since it may turn out that the corresponding quantization condition does not exist at all in the limit $\lambda(=2m\hbar^{-1})\to\infty$. First let us note that if the polynomial potential is fixed and $W_n(z)$ has only simple roots, then the argument of $\hbar$ remains the only tool for changing the form of the corresponding SG, i.e. this SG is independent of the absolute value of $\hbar$. On the other hand, changing $\arg\hbar$ one can find only a finite number of its values for which the corresponding SG is critical, i.e. for all but a finite number of values of $\arg\hbar$ these SG's are non-critical. But for the non-critical SG's any two sectors of such graphs communicate canonically, which means that each FS can be continued to any sector directly along a canonical path. This means further, however, that for the FS considered to vanish in any sector other than the one in which it is defined, its factor $\chi(z,\lambda)$ must vanish for $z\to\infty$ in that sector. But since $\chi(z,\lambda)\to 1$ along any canonical path for $|\lambda|\to\infty$, the latter vanishing cannot happen for $|\lambda|$ sufficiently large, i.e. for such $\lambda$ there are no solutions of the $\lambda$-eigenvalue problem when the SG's are non-critical. Therefore we have to conclude that solutions of the $\lambda$-eigenvalue problem with arbitrarily large eigenvalues of $\lambda$ can exist only for critical cases of SG's, and that two sectors cannot communicate canonically if the FS's defined in them are to coincide.
\vskip 15pt \begin{tabular}{c} \psfig{figure=Fig2.EPS,width=11cm}\\ Fig.2 Exceptional lines (bold Stokes lines), zeros $\zeta_{k_2,qr}^{(k_1)-}(\lambda_s(R))$ of $\psi_{k_1}(z,\lambda_s(R))$\\ and the integration contour $K_{k_2}(\zeta_{k_2,qr}^{(k_1)-}(\lambda_s(R)))$ in the non-quantized critical case \end{tabular} \vskip 15pt However, as we have already mentioned above, there is only a finite number of values of $\arg\lambda$ for which SG's can be critical. For a given such $\arg\lambda$ (and a fixed polynomial) the only variable which can then be quantized is $|\lambda|$, i.e. a real quantity. But a quantization condition which one gets by matching two FS's is in general a complex equation, which means that one typically gets two real conditions for one real variable. Such conditions cannot in general be satisfied by one real variable only. Changing $\arg\lambda$ from one of its discrete values to another does not seem to save the situation either. Therefore one needs some additional assumptions which allow one somehow to eliminate one of these two real conditions. As such an assumption one can choose, for example, the reality of the polynomial potential used, together with the matching of two real FS's, which are admitted by real polynomial potentials. One can convince oneself that in such a case the corresponding quantization equation is a real one (see for example \mref{52}). Consequently, to proceed further we have to assume that proper conditions have been imposed on the polynomial considered and that the choice of solutions made above has been proper. Then it is not difficult to get the following asymptotic quantization condition for FS's defined in the sectors $S_{k_1},S_{k_2}$: \begin{eqnarray} 1+\exp\ll[\sigma_{k_1}\oint_{K_{r_1r_2}}\ll(\lambda\sqrt{W_n(y)}+Z^-(y,\lambda)\r)dy\r]=0 \label{25} \end{eqnarray} or \begin{eqnarray} \oint_{K_{r_1r_2}}\ll(\lambda_s\sqrt{W_n(y)}+Z^-(y,\lambda_s)\r)dy=-\sigma_{k_1}(2s+1)i\pi,\;\;\;\;s=0,1,2,...
\label{26} \end{eqnarray} Comparing the last result with \mref{24} we see that $R=\frac{1}{2}$ in this formula, and we can repeat the arguments of the previous paper \cite{11} leading to {\bf Corollary 1b} of that paper almost without changes. \vskip 20pt {\bf Corollary 1} {\it In the singular limit $\lambda_s\to\infty$ roots of FS's for the potential $P_n(z)$ with a single inner SL which is quantized (in the sense of eq.} \mref{26}) {\it are distributed uniquely on the exceptional lines for both the quantized and non-quantized solutions.} \section{The critical case -- a double-well potential. The non-quantized case} \hskip+2em In this section we would like to extend the results of the previous two sections to the cases of SG's with two and more internal SL's. However, as follows from the discussion accompanying the case of SG's with a single internal SL, such an extension cannot be expected to be direct, so we shall initially limit ourselves to the cases of double-well (D-W) real polynomial potentials, and in particular to a real polynomial of tenth degree $P_{10}(z)$, since, as we shall see, for this case the SG shown in Fig.3 can be arranged so as to contain all the essential ingredients of the general case. However, even after simplifying the considered cases of potentials in the way described above, we are still faced with the problem of identifying ESL's for these cases. In fact we cannot define them, as previously, as forming a boundary of the canonical domain corresponding to a chosen FS, since most of the SL's which are the limit loci of zeros of this chosen FS can lie outside this domain. Therefore, to identify such SL's as hypothetical ESL's, one has to continue the chosen solution analytically outside its canonical domain in a way allowing one to take the semiclassical limit at any stage of this continuation. The latter demand means that such analytical continuations have to be performed along canonical paths. This is always possible when all the turning points are simple.
In many of our earlier papers we have shown how such analytical continuations can be done with the help of other FS's (see, for example, ref.4 of \cite{4} or refs.1,2 of \cite{5} for the respective procedure). It has to be stressed also that each FS given in the form \mref{10}-\mref{13} can be continued analytically in this way to any point of $C_{cut}$ except the TP's, at which the form \mref{10}-\mref{13} is singular. \vskip 18pt \begin{tabular}{c} \psfig{figure=Fig3.EPS,width=11cm}\\ Fig.3 The Stokes graph for the double-well potential $P_{10}(z)$. The bold full lines\\ denote cuts for $W_{10}^{\frac{1}{2}}(z)$ and $W_{10}^{-\frac{1}{4}}(z)$ while the bold dashed line - for $W_{10}^{-\frac{1}{4}}(z)$.\\ The outlined domains which are not communicating canonically with \\the sector $S_1$ contain possible ESL's of $\psi_1(z,\lambda)$. \end{tabular} \vskip 18pt On the other hand, since we are going to compare the configurations of zero loci of two chosen FS's in the cases when they are linearly independent and when they are matched (thus quantizing the Planck constant), we see, invoking the discussion of the previous section, that such a pair of solutions has to be chosen properly to ensure this matching. It seems therefore that in the case of the chosen potential $P_{10}(z)$ two obvious candidates among the FS's for achieving this goal are the solutions $\psi_1(z,\lambda)$ and $\psi_7(z,\lambda)$ defined in the respective sectors $S_1$ and $S_7$ of Fig.3. Below we shall investigate the loci of zeros of these solutions; as previously, we shall call exceptional the lines along which these zeros are distributed, and if they are SL's we shall keep for them the previous designation of ESL's.
A general observation suggesting where the limit loci of zeros should be looked for in the critical cases of SG's is that the latter arise, in the case of a fixed polynomial potential, from non-critical ones by changing $\arg\lambda$, so that the three SL's emerging from each root of the polynomial rotate around the root. During these rotations different particular SL's coincide, leading to critical SG's. Consider the critical SG of Fig.3. It appears for $\arg\lambda=0$. However, for sufficiently small $\epsilon$ and $0<|\arg\lambda|<\epsilon$ all the SG's corresponding to these $\lambda$'s are non-critical. For these SG's {\bf Theorem 1a} applies, so that for each $\psi_k(z,\lambda)$ its limit distribution of zeros when $\lambda\to\infty$ coincides with its ESL's. If we now come back (by a rotation of SL's) to the critical configuration for $\arg\lambda=0$, then some of these ESL's will coincide partly with some other SL's. All these coinciding non-exceptional SL's then become the limit loci of zeros of the considered $\psi_k(z,\lambda)$ on those of their parts which coincide with the ESL's. {\bf Theorem 1b} is an illustration of this rule. Applying the above suggestions to the SG of Fig.3 we readily get the corresponding limit distributions of zeros of $\psi_1(z,\lambda)$ and $\psi_7(z,\lambda)$ not matched with each other (i.e. for non-quantized $\lambda$) in the forms shown in Fig.4a and Fig.4b respectively. Nevertheless, while these suggestions are very convincing, they have to be confirmed by detailed calculations. It follows therefore that we should continue analytically, in the way mentioned above, the solution $\psi_1(z,\lambda)$ ($\lambda$ is now real and positive) to the vicinities of all those SL's emerging from the turning points $z_1,...,z_{\bar 7}$ with which the sector $S_1$ cannot communicate canonically, looking for the positions of possible zeros of this solution in the distinguished domains when the limit $\lambda\to+\infty$ is taken.
We shall consider these positions for both cases - the quantized and the non-quantized $\lambda$. The relevant vicinities are shown in Fig.3 as the outlined domains containing the corresponding SL's. However, because of the obvious symmetry of the SG considered with respect to the real axis, we can limit our investigations to the upper part of these domains, including the SL's emerging from the turning points $z_1,...,z_7$ (i.e. with no bars over the indices). As previously, the main tool in these investigations is the analytical continuation of the solution $\psi_1$ to the respective domains along canonical paths, expressing the solution by linear combinations of FS's which can communicate with these domains canonically. In our case this corresponds to making linear combinations of $\psi_1$ from the following pairs of FS's: 1. ($\psi_2,\psi_3$), 2.($\psi_2,\psi_{\bar 2}$), 3.($\psi_3,\psi_4$), 4.($\psi_3,\psi_5$), 5.($\psi_4,\psi_5$), 6.($\psi_5,\psi_{\bar 5}$), 7.($\psi_5,\psi_6$) and 8.($\psi_6,\psi_7$). Note that the signatures of the two FS's in each chosen pair are different. This is quite an important condition, since the choice of two FS's with the same signatures would provide us with identities in the limit $\lambda\to\infty$ rather than with appropriate conditions for the limit loci of zeros (see App.C). We shall consider all the cases step by step. In this section $\lambda$ is assumed to be real and positive, and only the regular limits $\lambda\to +\infty$ are going to be considered. These regular limits are defined by sequences $\lambda_{s_i},\;s_i=0,1,2,...,\;i=1,2,$ of $\lambda$'s satisfying the conditions: \begin{eqnarray} \lambda_{s_1}\int_{z_1}^{z_2}\sqrt{W_{10}(y)}dy=-(s_1+R_1)i\pi,\;\;\;|R_1|<\frac{1}{2},\;\;\;s_1=0,1,2,... \label{261} \end{eqnarray} or \begin{eqnarray} \lambda_{s_2}\int_{z_3}^{z_4}\sqrt{W_{10}(y)}dy=(s_2+R_2)i\pi,\;\;\;|R_2|<\frac{1}{2},\;\;\;s_2=0,1,2,...
\label{262} \end{eqnarray} for {\it fixed} $R_1$ or $R_2$ respectively, or by the representation $\lambda=[\lambda]+\Lambda,\;0\leq\Lambda<1$, where $[\lambda]$ is the integer part of $\lambda$ and $\Lambda$ is fixed. The integrals in \mref{261}-\mref{262} are taken above the respective cuts. To avoid possible inconsistencies or errors in properly taking the limit $\lambda\to +\infty$ we first establish the exact forms of the formulae providing us with conditions for the loci of zeros of $\psi_1(z,\lambda)$. Next these exact conditions are replaced by their full asymptotic expansions, i.e. up to all orders in $\lambda^{-1}$, from which one can select the lowest orders of the semiclassical asymptotics in a way similar to that used in {\bf Theorems 1a,b}. {\it The case 1.} Taking into account the representation \mref{10} and the conventions accompanying it, as well as Fig.1, we get (see App.B or, for example, ref.4 of \cite{4} or refs.1,2 of \cite{5} for the respective procedure): \begin{eqnarray} \psi_1(z,\lambda)=\alpha_{\frac{1}{2}\to 3}\psi_2(z,\lambda)+\alpha_{\frac{1}{3}\to 2}\psi_3(z,\lambda)=\nonumber\\ W_{10}^{-\frac{1}{4}}(z)e^{\lambda\int_{z_1}^{z_5}\sqrt{W_{10}(y)}dy}\ll(\chi_{1\to 3}(\lambda)\chi_2(z,\lambda) e^{\lambda\int_{z_5}^z\sqrt{W_{10}(y)}dy}+i\chi_3(z,\lambda)e^{-\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy}\r) \label{271} \end{eqnarray} where the last representation of $\psi_1(z,\lambda)$ has been written on the sector-$S_3$ side of the cut beginning at $z=z_5$ in Fig.3. Here $\alpha_{\frac{i}{j}\to{k}}=\lim_{z\to\infty_k}\frac{\psi_i(z,\lambda)}{\psi_j(z,\lambda)}$ and $\chi_{i\to j}(\lambda)=\lim_{z\to\infty_j}\chi_i(z,\lambda)=\chi_{j\to i}(\lambda)$, where $\infty_{i,j}\in S_{i,j}$ respectively and the limits are calculated along canonical paths. For the details of obtaining eq.\mref{271}, as well as the others below, see Appendix B.
Therefore for a distribution of zeros $\zeta_{5,m}^{(1)}(\lambda)$ of $\psi_1(z,\lambda)$ in the considered domain we get the condition: \begin{eqnarray} \int_{z_5}^{\zeta_{5,m}^{(1)}(\lambda)}\sqrt{W_{10}(y)}dy=(m-\frac{1}{4})\frac{i\pi}{\lambda}-\frac{1}{2\lambda} \ln\frac{\chi_3(\zeta_{5,m}^{(1)}(\lambda),\lambda)}{\chi_{1\to 3}(\lambda)\chi_2(\zeta_{5,m}^{(1)}(\lambda),\lambda)} \label{272} \end{eqnarray} with $m$ an integer. Taking now in \mref{272} the regular limit $\lambda(=[\lambda]+\Lambda)\to+\infty$ with fixed $\Lambda$ and noticing by \mref{551}-\mref{553} that asymptotically: \begin{eqnarray} \ll(\frac{\chi_3(\zeta_{5,m}^{(1)}(\lambda),\lambda)}{\chi_{1\to 3}(\lambda)\chi_2(\zeta_{5,m}^{(1)}(\lambda),\lambda)}\r)^{as}= e^{-\int_{K_5{(\zeta_{5,m}^{(1)}}(\lambda))}Z^-(y,\lambda)dy} \label{2721} \end{eqnarray} where $K_5{(\zeta_{5,m}^{(1)}}(\lambda))$ is the contour shown in Fig.3 which starts and ends at the point $\zeta_{5,m}^{(1)}(\lambda)$, rounding the point $z_5$ anticlockwise (this contour is not closed since it starts and finishes on different sheets of the corresponding Riemann surface), we get (putting $m=q[\lambda]+r$): \begin{eqnarray} \int_{K_5{(\zeta_{5,qr}^{(1)}}(\lambda))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}dy+\frac{1}{2\lambda} Z^-(y,\lambda)dy\r)=(q[\lambda]+r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\q=0,1,2,3,...,\;r=0,1,...,[\lambda]-1 \label{2722} \end{eqnarray} The last formula is the exact implicit condition for the semiclassical asymptotic expansion of $\zeta_{5,m}^{(1)}(\lambda)$ in $\lambda$, which therefore has the forms \mref{141}-\mref{142c} given in {\bf Theorem 1}.
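The two lowest orders hidden in \mref{2722} can be extracted by the elementary rearrangement $(q[\lambda]+r-\frac{1}{4})\frac{i\pi}{\lambda}=qi\pi+(r-q\Lambda-\frac{1}{4})\frac{i\pi}{\lambda}$, valid since $[\lambda]=\lambda-\Lambda$. A minimal worked step, assuming the expansion $\zeta_{5,qr}^{(1)}=\zeta_{5,qr0}^{(1)}+\zeta_{5,qr1}^{(1)}/\lambda+...$ and the sign conventions appropriate to the present case, gives on collecting powers of $\lambda^{-1}$:

```latex
\begin{eqnarray*}
\int_{z_5}^{\zeta_{5,qr0}^{(1)}}\sqrt{W_{10}(y)}dy=qi\pi,\qquad
\zeta_{5,qr1}^{(1)}=\left(r-q\Lambda-\frac{1}{4}\right)
\frac{i\pi}{\sqrt{W_{10}(\zeta_{5,qr0}^{(1)})}}
\end{eqnarray*}
```

which reproduces the pattern \mref{144} of {\bf Theorem 1} with $\beta=0$ for real positive $\lambda$.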
{\it The case 2.} The respective linear combination is: \begin{eqnarray} \psi_1(z,\lambda)=\alpha_{\frac{1}{2}\to{\bar 2}}\psi_2(z,\lambda)+\alpha_{\frac{1}{\bar 2}\to{2}}\psi_{\bar 2}(z,\lambda)=\nonumber\\ \frac{W_{10}^{-\frac{1}{4}}(z)}{\chi_{2\to{\bar 2}}(\lambda)} \ll(-i\chi_2(z,\lambda)e^{-\lambda\int_{z_1}^z\sqrt{W_{10}(y)}dy}+ \chi_{\bar 2}(z,\lambda)e^{\lambda\int_{z_1}^{z}\sqrt{W_{10}(y)}dy}\r) \label{273} \end{eqnarray} where the last equation has been written above the cut between $z_1$ and $z_2$. The asymptotic distribution of zeros $\zeta_{1,qr}^{(1)}(\lambda)$ in the vicinity of the inner SL between the points $z_1$ and $z_2$, when the regular limit $\lambda(=[\lambda]+\Lambda)\to+\infty,\;\Lambda$ - fixed, is taken, is then given by: \begin{eqnarray} \int_{K_1{(\zeta_{1,qr}^{(1)}}(\lambda))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}dy+\frac{1}{2\lambda} Z^-(y,\lambda)dy\r)=(q[\lambda]+r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\q=0,1,2,...,\;r=0,1,...,[\lambda]-1 \label{274} \end{eqnarray} The number $q$, however, is bounded by the ``length'' $\int_{z_1}^{z_2}\sqrt{W_{10}(y)}dy=-(q_1+r_1)i\pi$ of the SL, measured above the cut $z_1,z_2$, with integer $q_1\geq 0$ and $0\leq r_1<1$, i.e. $q\leq q_1$.
{\it The case 3.} We get for this case: \begin{eqnarray} \psi_1(z,\lambda)=\alpha_{\frac{1}{3}\to{\bar 3}}\psi_3(z,\lambda)+\alpha_{\frac{1}{\bar 3}\to{3}}\psi_{\bar 3}(z,\lambda)=\nonumber\\ \ll(\alpha_{\frac{1}{3}\to{\bar 3}}+\alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{3}\to{4}}\r)\psi_3(z,\lambda)+ \alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{4}\to{3}}\psi_4(z,\lambda) \label{34} \end{eqnarray} or \begin{eqnarray} \psi_1(z,\lambda)=-i\frac{e^{-\lambda\ll(\int_{z_2}^{z_6}-\frac{1}{2}\oint_{K_1}\r)\sqrt{W_{10}(y)}dy}}{\chi_{3\to{\bar 3}}(\lambda)}\times\nonumber\\ \ll(\chi_{1\to{\bar 3}}(\lambda)+ e^{-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy}\chi_{1\to 3}\chi_{{\bar 3}\to{4}}(\lambda)\r)\psi_3(z,\lambda)+\nonumber\\ e^{\lambda\ll(\int_{z_2}^{z_6}-\frac{1}{2}\oint_{K_1}\r)\sqrt{W_{10}(y)}dy}\chi_{{1}\to{3}}(\lambda)\psi_4(z,\lambda)=\nonumber\\ \frac{W_{10}^{-\frac{1}{4}}(z)}{\chi_{3\to{\bar 3}}(\lambda)}\ll[-iE_1 e^{\lambda\ll(\frac{1}{2}\oint_{K_1}-\int_{z_2}^{z}\r)\sqrt{W_{10}(y)}dy}\chi_3(z,\lambda)+\r.\nonumber\\ \ll.\chi_{{1}\to{3}}(\lambda)\chi_{3\to {\bar 3}}(\lambda) e^{-\lambda\ll(\frac{1}{2}\oint_{K_1}-\int_{z_2}^z\r)\sqrt{W_{10}(y)}dy}\chi_4(z,\lambda)\r] \label{35} \end{eqnarray} where \begin{eqnarray} E_1=\chi_{1\to{\bar 3}}(\lambda)+e^{-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy}\chi_{1\to 3}(\lambda)\chi_{{\bar 3}\to{4}}(\lambda) \label{351} \end{eqnarray} It follows from the formula \mref{35} that zeros $\zeta_2^{(1)}(\lambda)$ of $\psi_1(z,\lambda)$ in the outlined domain of Fig.3 containing the sector $S_4$ have to satisfy the equation: \begin{eqnarray} e^{2\lambda\int_{z_2}^{\zeta_{2}^{(1)}(\lambda)}\sqrt{W_{10}(y)}dy}= ie^{\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy}\frac{E_1} {\chi_{1\to 3}(\lambda)\chi_{3\to {\bar 3}}(\lambda)} \frac{\chi_3(\zeta_2^{(1)}(\lambda),\lambda)}{\chi_4(\zeta_2^{(1)}(\lambda),\lambda)} \label{36} \end{eqnarray} from which we get infinitely many solutions for $\zeta_2^{(1)}(\lambda)$: \begin{eqnarray}
\int_{z_2}^{\zeta_{2,m}^{(1)}(\lambda)}\sqrt{W_{10}(y)}dy=-(m-\frac{1}{4})\frac{i\pi}{\lambda}+ \frac{1}{2}\oint_{K_1}\sqrt{W_{10}(y)}dy+ \frac{1}{2\lambda}\ln E_1+\nonumber\\ \frac{1}{2\lambda}\ln\ll(\frac{1}{\chi_{1\to 3}(\lambda)\chi_{3\to {\bar 3}}(\lambda)} \frac{\chi_3(\zeta_{2,m}^{(1)}(\lambda),\lambda)}{\chi_4(\zeta_{2,m}^{(1)}(\lambda),\lambda)}\r) \label{361} \end{eqnarray} with integer $m$. Taking therefore the regular limit $\lambda_{s_1}\to\infty$ ($R_1$ is fixed) we get: \begin{eqnarray} \int_{K_2(\zeta_{2,qr}^{(1)}(\lambda_{s_1}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_{s_1}}Z^-(y,\lambda_{s_1})\r)dy=(q[\lambda_{s_1}]+r-\frac{1}{4}-\frac{R_1}{2})\frac{i\pi}{\lambda_{s_1}}-\nonumber\\ \frac{1}{4\lambda_{s_1}}\oint_{K_1}Z^-(y,\lambda_{s_1})dy- \frac{1}{2\lambda_{s_1}}\ln(2\cos(R_1\pi-\frac{i}{2}\oint_{K_1}Z^-(y,\lambda_{s_1})dy))\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_{s_1}]-1 \label{363} \end{eqnarray} It is therefore seen that, similarly to the critical case of the SG considered earlier (see {\bf Theorem 1b}), zeros of $\psi_1(z,\lambda)$ are distributed along the infinite SL emerging from the turning point $z_2$. {\it The case 4.} In fact, by considering this case we would like to check that the results of the previous case remain valid in a vicinity of the inner SL between $z_6$ and $z_{\bar 6}$, where the combination in the second of eqs. \mref{34} cannot be continued canonically. We can use the first of eqs.
\mref{34} to get: \begin{eqnarray} \psi_1(z,\lambda)=\ll(\alpha_{\frac{1}{3}\to{\bar 3}}+\alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{3}\to{5}}\r)\psi_3(z,\lambda)+ \alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{5}\to{3}}\psi_5(z,\lambda)=\nonumber\\ -iE_2\frac{\chi_3(z,\lambda)}{\chi_{3\to {\bar 3}}(\lambda)}e^{\ll(-\lambda\int_{z_2}^{z}+\frac{1}{2}\lambda\oint_{K_1}\r)\sqrt{W_{10}(y)}dy}+ \frac{\chi_{1\to 3}(\lambda)\chi_5(z,\lambda)}{\chi_{3\to 5}(\lambda)}e^{\ll(\lambda\int_{z_2}^{z}-\frac{1}{2}\lambda\oint_{K_1}\r)\sqrt{W_{10}(y)}dy} \label{364} \end{eqnarray} where \begin{eqnarray} E_2=\chi_{1\to{\bar 3}}(\lambda)+ e^{-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy}\chi_{1\to{3}}(\lambda)\frac{\chi_{{\bar 3}\to 5}(\lambda)}{\chi_{3\to 5}(\lambda)} \label{365} \end{eqnarray} Therefore from \mref{364} we get: \begin{eqnarray} \int_{z_2}^{\zeta_{2,m}^{(1)}(\lambda)}\sqrt{W_{10}(y)}dy=-(m-\frac{1}{4})\frac{i\pi}{\lambda}+ \frac{1}{2}\oint_{K_1}\sqrt{W_{10}(y)}dy+ \frac{1}{2\lambda}\ln E_2+\nonumber\\ \frac{1}{2\lambda}\ln\ll(\frac{\chi_{3\to 5}(\lambda)}{\chi_{1\to 3}(\lambda)\chi_{3\to {\bar 3}}(\lambda)} \frac{\chi_3(\zeta_{2,m}^{(1)}(\lambda),\lambda)}{\chi_5(\zeta_{2,m}^{(1)}(\lambda),\lambda)}\r) \label{366} \end{eqnarray} with integer $m$. Taking now the limit $\lambda_{s_1}\to\infty$ in the last formula, we obtain again the result \mref{363}.
{\it The case 5.} For this case we have: \begin{eqnarray} \psi_1(z,\lambda)=(\alpha_{\frac{1}{3}\to{\bar 3}}\alpha_{\frac{3}{4}\to 5}+ \alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{4}\to 5})\psi_4(z,\lambda)+\nonumber\\ (\alpha_{\frac{1}{3}\to{\bar 3}}\alpha_{\frac{3}{5}\to 4}+ \alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{5}\to 4})\psi_5(z,\lambda)=\nonumber\\ \frac{W_{10}^{-\frac{1}{4}}(z) e^{\ll(-\lambda\int_{z_2}^{z_6}+\frac{1}{2}\lambda\oint_{K_1}\r)\sqrt{W_{10}(y)}dy}}{\chi_{3\to{\bar 3}}(\lambda)}\times\nonumber\\ \ll(-i\chi_{3\to 5}(\lambda)E_2\chi_4(z,\lambda)e^{-\lambda\int_{z_6}^{z}\sqrt{W_{10}(y)}dy}+ E_1\chi_5(z,\lambda)e^{\lambda\int_{z_6}^{z}\sqrt{W_{10}(y)}dy}\r) \label{37} \end{eqnarray} Let us notice that: \begin{eqnarray} E_2=E_1+ e^{2\lambda\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy} \chi_{1\to{3}}(\lambda)\frac{\chi_{3\to{\bar 3}}(\lambda)}{\chi_{3\to 5}(\lambda)} \label{371} \end{eqnarray} To get the last equation we have made use of the following identity: \begin{eqnarray} \frac{\chi_{{\bar 3}\to 5}(\lambda)}{\chi_{3\to 5}(\lambda)}=\chi_{\bar 3\to 4}(\lambda)+ e^{2\lambda\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy} \frac{\chi_{3\to{\bar 3}}(\lambda)}{\chi_{3\to 5}(\lambda)} \label{39} \end{eqnarray} The above identity is typical for any four FS's communicating canonically with one another (see refs.1,2 of \cite{5} and App.B, formula \mref{B6}). Here these FS's are $\psi_i(z,\lambda)$ with $i=3,{\bar 3},4,5$.
Now from \mref{37} for the distribution of zeros $\zeta_{6,m}^{(1)}$ of $\psi_1(z)$ in a vicinity of the right SL emerging from $z_6$ we get: \begin{eqnarray} \int_{z_6}^{\zeta_{6,m}^{(1)}(\lambda)}\sqrt{W_{10}(y)}dy=-(m-\frac{1}{4})\frac{i\pi}{\lambda}+\nonumber\\ \frac{1}{2\lambda}\ln\frac{E_2}{E_1}+\frac{1}{2\lambda}\ln\frac{\chi_{3\to 5}(\lambda)\chi_4(\zeta_{6,m}^{(1)}(\lambda),\lambda)}{\chi_5(\zeta_{6,m}^{(1)}(\lambda),\lambda)} \label{38} \end{eqnarray} Taking now into account the relation \mref{371}, we get from the above formula, for the regular limit $\lambda(=[\lambda]+\Lambda)\to+\infty,\;\Lambda$ - fixed, the following result: \begin{eqnarray} \int_{K_6(\zeta_{6,qr}^{(1)}(\lambda))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+\frac{1}{2\lambda}Z^-(y,\lambda)\r)dy= -(q[\lambda]+r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\ q=0,1,2,3,...,\;r=0,1,...,[\lambda]-1 \label{40} \end{eqnarray} i.e. the limit locus of zeros $\zeta_{6,m}^{(1)}(\lambda)$ of $\psi_1(z,\lambda)$ in the case considered is just the right SL emerging from the turning point $z_6$ (see Fig.5). {\it The case 6.} For this case we have: \begin{eqnarray} \psi_1(z,\lambda)=A\psi_5(z,\lambda)+{\bar A}\psi_{\bar 5}(z,\lambda)=\nonumber\\ W_{10}^{-\frac{1}{4}}(z)\ll(A\chi_5(z,\lambda)e^{\lambda\int_{z_4}^{z}\sqrt{W_{10}(y)}dy}+ i{\bar A}\chi_{\bar 5}(z,\lambda)e^{-\lambda\int_{z_4}^{z}\sqrt{W_{10}(y)}dy}\r) \label{41} \end{eqnarray} where $A=\alpha_{\frac{1}{3}\to{\bar 3}}\alpha_{\frac{3}{5}\to{\bar 5}}+ \alpha_{\frac{1}{\bar 3}\to{3}}\alpha_{\frac{\bar 3}{5}\to{\bar 5}}$ and the last line in \mref{41} has been written above the cut between $z_3$ and $z_4$.
By its definition $A$ is given by: \begin{eqnarray} A=-E_3 \frac{\chi_{3\to{\bar 5}}(\lambda)}{\chi_{3\to{\bar 3}}(\lambda)\chi_{5\to{\bar 5}}(\lambda)} e^{-\lambda\ll(\int_{z_2}^{z_3}+\frac{1}{2}\oint_{K_2}-\frac{1}{2}\oint_{K_1}\r)\sqrt{W_{10}(y)}dy} \label{42} \end{eqnarray} where \begin{eqnarray} E_3=\chi_{1\to{\bar 3}}+ e^{-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy}\chi_{1\to 3}(\lambda)\frac{\chi_{{\bar 3}\to{\bar 5}}(\lambda)}{\chi_{3\to{\bar 5}}(\lambda)}=E_1+\nonumber\\ e^{2\lambda\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy} \chi_{1\to 3}(\lambda)\frac{\chi_{3\to{\bar 3}}(\lambda)\chi_{4\to{\bar 5}}(\lambda)}{\chi_{3\to{\bar 5}}(\lambda)} \label{421} \end{eqnarray} The last equation in \mref{421} has been obtained by using the identity: \begin{eqnarray} \frac{\chi_{{\bar 3}\to{\bar 5}}(\lambda)}{\chi_{3\to{\bar 5}}(\lambda)}=\chi_{\bar 3\to 4}(\lambda)+ e^{2\lambda\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy}\frac{\chi_{3\to{\bar 3}}(\lambda)\chi_{4\to{\bar 5}}(\lambda)}{\chi_{3\to{\bar 5}}(\lambda)} \label{44} \end{eqnarray} A condition for the distribution of zeros $\zeta_{3,m}^{(1)}$ of $\psi_1(z)$ in the vicinity of the internal SL linking $z_3$ with $z_4$ takes therefore the form (above the cut $z_3,z_4$): \begin{eqnarray} \int_{z_3}^{\zeta_{3,m}^{(1)}}\sqrt{W_{10}(y)}dy=(m-\frac{1}{4})\frac{i\pi}{\lambda}- \frac{1}{2}\oint_{K_1}\sqrt{W_{10}(y)}dy+\frac{1}{2\lambda}\ln\frac{{\bar E}_3}{E_3}+\nonumber\\ \frac{1}{2\lambda}\ln\frac{\chi_{{\bar 3}\to 5}(\lambda)\chi_{\bar 5}(\zeta_{3,m}^{(1)},\lambda)}{\chi_{3\to{\bar 5}}(\lambda)\chi_5(\zeta_{3,m}^{(1)},\lambda)} \label{43} \end{eqnarray} Taking into account the definition of $E_3$ by \mref{421} we see that in the regular limit $\lambda(=[\lambda]+\Lambda) \to\infty,\;\Lambda$ - fixed, the distribution of zeros of $\psi_1(z,\lambda)$ is given in this case by the condition: \begin{eqnarray} \int_{K_3{(\zeta_{3,qr}^{(1)}}(\lambda))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}dy+\frac{1}{2\lambda} 
Z^-(y,\lambda)dy\r)=-(q[\lambda]+r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\ q=0,1,2,...,q_2,\;\;r=0,1,...,[\lambda]-1 \label{45} \end{eqnarray} where $q_2\geq 0$ is given by $\int_{z_3}^{z_4}\sqrt{W_{10}(y)}dy=(q_2+r_2)i\pi,\;0\leq r_2<1$. The limit locus of zeros $\zeta_{3,m}^{(1)}(\lambda)$ of $\psi_1(z,\lambda)$ given by \mref{45} is therefore the internal SL between $z_3$ and $z_4$. {\it The case 7.} The linear combination is now the following: \begin{eqnarray} \psi_1(z,\lambda)=(A+{\bar A}\alpha_{\frac{\bar 5}{5}\to 6})\psi_5(z,\lambda)+ {\bar A}\alpha_{\frac{\bar 5}{6}\to 5}\psi_6(z,\lambda)=\nonumber\\ W_{10}^{-\frac{1}{4}}(z)\ll[(A+{\bar A}\chi_{{\bar 5}\to 6}(\lambda))\chi_5(z,\lambda) e^{\lambda\int_{z_4}^{z}\sqrt{W_{10}(y)}dy}+\r.\nonumber\\ \ll. i{\bar A}\chi_{5\to{\bar 5}}(\lambda)\chi_6(z,\lambda)e^{-\lambda\int_{z_4}^{z}\sqrt{W_{10}(y)}dy}\r] \label{46} \end{eqnarray} so that for the corresponding distribution of zeros $\zeta_{4,m}^{(1)}(\lambda)$ we get: \begin{eqnarray} \int_{z_4}^{\zeta_{4,m}^{(1)}}\sqrt{W_{10}(y)}dy=(m-\frac{1}{4})\frac{i\pi}{\lambda}- \frac{1}{2\lambda}\ln\ll(\frac{A}{\bar A}+\chi_{{\bar 5}\to 6}\r)+ \frac{1}{2\lambda}\ln\frac{\chi_{{\bar 5}\to 5}\chi_6(\zeta_{4,m}^{(1)})}{\chi_5(\zeta_{4,m}^{(1)})} \label{47} \end{eqnarray} Using again both the formulae \mref{42} and \mref{421}, we get from \mref{47} for the regular limit $\lambda_{s_2}\to\infty$: \begin{eqnarray} \int_{K_4(\zeta_{4,qr}^{(1)}(\lambda_{s_2}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_{s_2}}Z^-(y,\lambda_{s_2})\r)dy=-(q[\lambda_{s_2}]+r-\frac{1}{4}- \frac{R_2}{2})\frac{i\pi}{\lambda_{s_2}}-\nonumber\\ \frac{1}{4\lambda_{s_2}}\oint_{K_2}Z^-(y,\lambda_{s_2})dy+ \frac{1}{2\lambda_{s_2}}\ln(2\cos(R_2\pi-\frac{i}{2}\oint_{K_2}Z^-(y,\lambda_{s_2})dy))\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_{s_2}]-1 \label{48} \end{eqnarray} It follows from \mref{48} that the zeros $\zeta_{4,m}^{(1)}(\lambda_{s_2})$ all lie along the infinite SL emerging from $z_4$.
{\it The case 8.} Expressing $\psi_5(z,\lambda)$ in the formula \mref{46} by $\psi_6(z,\lambda)$ and $\psi_7(z,\lambda)$ we get: \begin{eqnarray} \psi_1(z,\lambda)=[(A+{\bar A}\alpha_{\frac{\bar 5}{5}\to 6})\alpha_{\frac{5}{6}\to 7}+ {\bar A}\alpha_{\frac{\bar 5}{6}\to 5}]\psi_6(z,\lambda)+ (A+{\bar A}\alpha_{\frac{\bar 5}{5}\to 6})\alpha_{\frac{5}{7}\to 6}\psi_7(z,\lambda) \label{49} \end{eqnarray} and the condition for zeros $\zeta_{7,m}^{(1)}(\lambda)$ of $\psi_1(z,\lambda)$ takes on the form: \begin{eqnarray} \int_{z_7}^{\zeta_{7,m}^{(1)}(\lambda)}\sqrt{W_{10}(y)}dy=(m-\frac{1}{4})\frac{i\pi}{\lambda}- \frac{1}{2\lambda}\ln\ll(\chi_{5\to 7}(\lambda)+ \frac{\chi_{5\to{\bar 5}}(\lambda)e^{-2\lambda\int_{z_4}^{z_7}\sqrt{W_{10}(y)}dy}}{\frac{A}{\bar A}+\chi_{{\bar 5}\to 6}(\lambda)}\r)+\nonumber\\ \frac{1}{2\lambda}\ln\frac{\chi_7(\zeta_{7,m}^{(1)}(\lambda),\lambda)}{\chi_6(\zeta_{7,m}^{(1)}(\lambda),\lambda)} \label{50} \end{eqnarray} In the regular limit $\lambda(=[\lambda]+\Lambda)\to\infty$ with fixed $\Lambda$ we therefore get from \mref{50} the following asymptotic condition: \begin{eqnarray} \int_{K_7(\zeta_{7,qr}^{(1)}(\lambda))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda}Z^-(y,\lambda)\r)dy=(q[\lambda]+r-\frac{1}{4})\frac{i\pi}{\lambda}\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda]-1 \label{51} \end{eqnarray} i.e. all the zeros $\zeta_{7,m}^{(1)}(\lambda)$ of $\psi_1(z,\lambda)$ tend to lie on the infinite SL emerging from $z_7$ and forming a part of the sector $S_7$ boundary. Since the rest of the asymptotic distribution of zeros of $\psi_1(z,\lambda)$ can be obtained by the complex conjugation of the loci of zeros just established in the distinguished cases 1.-8. above, the final picture of their loci on the $C_{cut}$-plane is shown in Fig.4a as the bold lines.
\vskip 15pt \begin{tabular}{c} \psfig{figure=Fig4a.EPS,width=11cm}\\ Fig.4a ESL's (bold Stokes lines) for $\psi_1(z,\lambda)$ in the regular limits $\lambda\to\infty$.\\ The non-quantized case \end{tabular} \vskip 15pt We can perform an analogous calculation looking for zeros of $\psi_7(z,\lambda)$ in the limit $\lambda\to\infty$ and {\it mutatis mutandis} we get the results drawn in Fig.4b as the bold SL's. Let us also note, in relation to the next sections, that the two obtained patterns of the limit loci of zeros of the FS's $\psi_1(z,\lambda)$ and $\psi_7(z,\lambda)$ do not change essentially if the double well considered is symmetric. These patterns simply acquire the additional property of being mutually symmetric, i.e. they can be obtained from each other by the inversion operation $z\to-z$. \section{The quantized asymmetric double--well potential} \hskip+2em Let us now consider the possible changes which the quantization of $\lambda$ can cause in the above distribution pictures of zeros of $\psi_1(z,\lambda)$. We quantize $\lambda$ by matching $\psi_1(z,\lambda)$ with $\psi_7(z,\lambda)$. By this condition both solutions are in fact identified, and one can expect that the two separate distributions shown in Figs.4a-4b are also somehow unified. It is clear that to get figures corresponding to such a unified configuration of zeros one has to remove some ESL's from the figures as well as to add some new ones. However, one needs to know the rules governing such a procedure, and the goal of the detailed calculations below is to provide us with these rules. We shall start by matching the solutions $\psi_1(z,\lambda)$ and $\psi_7(z,\lambda)$. We can do it using the combination \mref{49} and setting the coefficient of $\psi_6(z,\lambda)$ equal to zero.
We get: \begin{eqnarray} (A+{\bar A}\alpha_{\frac{\bar 5}{5}\to 6})\alpha_{\frac{5}{6}\to 7}+ {\bar A}\alpha_{\frac{\bar 5}{6}\to 5}=\alpha_{\frac{5}{6}\to 7}(A+{\bar A}\alpha_{\frac{\bar 5}{5}\to 7})=\nonumber\\ \frac{\alpha_{\frac{5}{6}\to 7}}{\chi_{5\to 7}}(A\chi_{5\to 7}+{\bar A}\chi_{{\bar 5}\to 7})=0 \label{52} \end{eqnarray} \vskip 15pt \begin{tabular}{c} \psfig{figure=Fig4b.EPS,width=11cm}\\ Fig.4b ESL's (bold Stokes lines) of $\psi_7(z,\lambda)$ in the regular limits $\lambda\to\infty$.\\ The non-quantized case \end{tabular} \vskip 15pt Taking into account \mref{42} we obtain the following form of the condition for the $\lambda$-quantization: \begin{eqnarray} E_3(\lambda)\chi_{5\to 7}(\lambda)\chi_{3\to{\bar 5}}(\lambda)e^{-\lambda\oint_{K_2}\sqrt{W_{10}(y)}dy}+ E_2(\lambda)\chi_{{\bar 5}\to 7}(\lambda)\chi_{3\to 5}(\lambda)=0 \label{31} \end{eqnarray} The above quantization condition can be further elaborated with the help of the relations \mref{371} and \mref{421} to get: \begin{eqnarray} E_1E_4=E_1\frac{\chi_{5\to 7}\chi_{5\to{\bar 5}}}{\chi_{3\to 5}}e^{2\lambda\int_{z_6}^{z_3}\sqrt{W_{10}(y)}dy- \lambda\oint_{K_2}\sqrt{W_{10}(y)}dy}-\nonumber\\ E_4\frac{\chi_{1\to 3}\chi_{3\to{\bar 3}}}{\chi_{3\to 5}}e^{2\lambda\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy- \lambda\oint_{K_1}\sqrt{W_{10}(y)}dy} \label{33} \end{eqnarray} where \begin{eqnarray} E_4(\lambda)=\chi_{{\bar 5}\to 7}(\lambda)+ e^{-\lambda\oint_{K_2}\sqrt{W_{10}(y)}dy}\chi_{5\to 7}(\lambda)\chi_{4\to{\bar 5}}(\lambda) \label{53} \end{eqnarray} The formula \mref{33} is the {\it exact} quantization formula valid for {\it any} real D-W polynomial potential with the properly chosen FS's $\psi_i(z),\;i=3,{\bar 3},5,{\bar 5}$.
The following equivalent form of the quantization condition \mref{52} will also appear to be useful in our further analysis: \begin{eqnarray} \frac{A}{\bar A}+{\chi_{\bar 5\to 6}}=-\frac{\chi_{5\to{\bar 5}}}{\chi_{5\to 7}} e^{-2\lambda\int_{z_4}^{z_7}\sqrt{W_{10}(y)}dy} \label{54} \end{eqnarray} which follows directly from \mref{52} when the identity: \begin{eqnarray} \frac{\chi_{{\bar 5}\to 7}}{\chi_{5\to 7}}={\chi_{\bar 5\to 6}}+ \frac{\chi_{5\to{\bar 5}}}{\chi_{5\to 7}}e^{-2\lambda\int_{z_4}^{z_7}\sqrt{W_{10}(y)}dy} \label{55} \end{eqnarray} is used. Consider now the $\lambda\to\infty$-limit of the above quantization formulae up to {\it any} order of $\lambda^{-1}$. Using the representations \mref{551}-\mref{554} for the semiclassical expansion of the $\chi$-factors we get for the semiclassical limit of the quantization formula \mref{33}, the following result: \begin{eqnarray} E_1^{as}(\lambda)E_4^{as}(\lambda)=\ll[1+\exp\ll(-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy- \sum_{n{\geq}0}\left(\frac{1}{2\lambda}\right)^{2n+1}\oint_{K_1}X_{2n+1}(y)dy\r)\r]\times \nonumber\\ \ll[1+\exp\ll(-\lambda\oint_{K_2}\sqrt{W_{10}(y)}dy- \sum_{n{\geq}0}\left(\frac{1}{2\lambda}\right)^{2n+1}\oint_{K_2}X_{2n+1}(y)dy\r)\r]=0 \label{56} \end{eqnarray} which means that asymptotically $\lambda(>0)$ is quantized in each well independently, being defined in the respective wells by the following conditions: \begin{eqnarray} \lambda_r^{(l)}\oint_{K_l}\sqrt{W_{10}(y)}dy+ \sum_{n{\geq}0}\left(\frac{1}{2\lambda_r^{(l)}}\right)^{2n+1}\oint_{K_l}X_{2n+1}(y)dy=(-1)^{l+1}(2r+1)i\pi\nonumber\\ \lambda_r^{(l)}\in\Lambda_l,\;l=1,2 \label{57} \end{eqnarray} where $r$ is a natural number and large, i.e. $r\gg 1$. Comparing \mref{57} with \mref{261}-\mref{262} it is seen that $R=\frac{1}{2}$ for both $l=1,2$, i.e. the limits $\lambda_r^{(l)}\to\infty$ are singular, which requires special care in considering these limits.
The explicit solutions to the above quantization conditions can be obtained by iteration to get the following forms: \begin{eqnarray} \lambda_r^{(l)}=(-1)^{l+1}(2r+1)\frac{i\pi}{\oint_{K_l}\sqrt{W_{10}(y)}dy}- \sum_{n{\geq}0}\left(\frac{1}{2\lambda_r^{(l)}}\right)^{2n+1}\frac{\oint_{K_l}X_{2n+1}(y)dy} {\oint_{K_l}\sqrt{W_{10}(y)}dy}=\nonumber\\ (-1)^{l+1}(2r+1)\frac{i\pi}{\oint_{K_l}\sqrt{W_{10}(y)}dy}+\sum_{n{\geq}1}\frac{a_n^{(l)}}{(2r+1)^n}\nonumber\\ \;\;\;\;l=1,2,\;\;r=0,1,2,... \label{571} \end{eqnarray} with the following first three coefficients: \begin{eqnarray} a_1^{(l)}=\frac{(-1)^l}{2\pi i}\oint_{K_l}X_1(y)dy\nonumber\\ a_2^{(l)}=0\nonumber\\ a_3^{(l)}=-\frac{(-1)^l}{(i\pi)^3}\oint_{K_l}\sqrt{W_{10}(y)}dy\ll(\ll(\oint_{K_l}X_1(y)dy\r)^2+\r.\nonumber\\ \ll.\oint_{K_l}\sqrt{W_{10}(y)}dy\oint_{K_l}X_3(y)dy\r),\;\;\;l=1,2 \label{572} \end{eqnarray} While the formulae \mref{571} are useful as asymptotic ones for large $r$, the formulae themselves can be considered for any $r\geq 0$ and this latter range for $r$ will be assumed in our further considerations. It is obvious from the form of the expansions \mref{571} that the spectra $\Lambda_l,\;l=1,2$ can coincide only for the symmetric D-W polynomial potentials, since in other cases there is not a sufficient number of coefficients of the polynomial $W_{10}(z)$ to satisfy all the equations $a_n^{(1)}=a_n^{(2)},\;n\geq 1$. It means that these spectra can coincide only on some part of them and only up to some order. This can happen for example when the following conditions are satisfied: \begin{eqnarray} \frac{\oint_{K_1}\sqrt{W_{10}(y)}dy}{\oint_{K_2}\sqrt{W_{10}(y)}dy}= \frac{\oint_{K_1}X_1(y)dy}{\oint_{K_2}X_1(y)dy}=...= \frac{\oint_{K_1}X_{2k-1}(y)dy}{\oint_{K_2}X_{2k-1}(y)dy}=-\frac{2p+1}{2q+1}\nonumber\\ p,q=0,1,...,\;p\neq q,\;\;\;0\leq k<10 \label{573} \end{eqnarray} i.e. the equalities $a_n^{(1)}=a_n^{(2)},\;n\geq 1,$ are then satisfied up to $n=2k-1$.
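The iterative scheme behind \mref{571} can be illustrated numerically. The following sketch is not derived from any particular potential $W_{10}$: the values of the phase integrals $\oint_{K_l}\sqrt{W_{10}(y)}dy$ and $\oint_{K_l}X_{2n+1}(y)dy$ are invented purely for the sake of the example, and the series is truncated at $n=1$.

```python
import cmath

# Hypothetical values of the phase integrals over a cycle K_l; they are NOT
# computed from any particular potential W_10 and only illustrate the
# fixed-point iteration producing the expansion (571) for l = 1.
I0 = 2.0j                  # oint_{K_l} sqrt(W_10(y)) dy  (purely imaginary)
X = {1: 0.3j, 3: 0.05j}    # oint_{K_l} X_{2n+1}(y) dy for n = 0, 1

def quantize(r, n_iter=40):
    """Solve lam*I0 + sum_n (1/(2 lam))^(2n+1) X_{2n+1} = (2r+1)*i*pi
    by iteration, starting from the leading-order seed."""
    lam = (2 * r + 1) * 1j * cmath.pi / I0
    for _ in range(n_iter):
        corr = sum((1 / (2 * lam)) ** (2 * n + 1) * X[2 * n + 1] for n in (0, 1))
        lam = ((2 * r + 1) * 1j * cmath.pi - corr) / I0
    return lam

for r in (5, 20, 80):
    lam = quantize(r)
    residual = (lam * I0
                + sum((1 / (2 * lam)) ** (2 * n + 1) * X[2 * n + 1] for n in (0, 1))
                - (2 * r + 1) * 1j * cmath.pi)
    print(r, lam, abs(residual))
```

The iteration converges geometrically since the correction term is $O(\lambda^{-1})$ relative to the leading one; the printed residuals check that the truncated quantization condition is satisfied to machine precision.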
For these cases the spectra have subsequences consisting of $\lambda_{(2p+1)r+p}^{(1)}$ and $\lambda_{(2q+1)r+q}^{(2)}$, $r=0,1,2...$, for which their expansions \mref{571} coincide up to $2k+1$-th order. Using \mref{573} we have then: \begin{eqnarray} \lambda_{(2q+1)r+q}^{(2)}\oint_{K_1}\sqrt{W_{10}(y)}dy+ \sum_{n{\geq}0}\left(\frac{1}{2\lambda_{(2q+1)r+q}^{(2)}}\right)^n\oint_{K_1}X_{2n+1}(y)dy=\nonumber\\ -(2(2p+1)r+2p+1)i\pi+ \sum_{n>k}\left(\frac{1}{2\lambda_{(2q+1)r+q}^{(2)}}\right)^n\ll(\oint_{K_1}+ \frac{2p+1}{2q+1}\oint_{K_2}\r)X_{2n+1}(y)dy \label{5731} \end{eqnarray} and {\it mutatis mutandis} \begin{eqnarray} \lambda_{(2p+1)r+p}^{(1)}\oint_{K_2}\sqrt{W_{10}(y)}dy+ \sum_{n{\geq}0}\left(\frac{1}{2\lambda_{(2p+1)r+p}^{(1)}}\right)^n\oint_{K_2}X_{2n+1}(y)dy=\nonumber\\(2(2q+1)r+2q+1)i\pi+ \sum_{n>k}\left(\frac{1}{2\lambda_{(2p+1)r+p}^{(1)}}\right)^n\ll(\oint_{K_2}+ \frac{2q+1}{2p+1}\oint_{K_1}\r)X_{2n+1}(y)dy \label{5732} \end{eqnarray} We can conclude therefore that for asymmetric D-W polynomial potentials, considering the limit $\lambda\to\infty$, we have to take into account only sequences $\{\lambda_r^{(l)},\;r=1,2,...\},\;l=1,2$, consisting of non-coinciding spectra, or the subsequences just considered, which we denote by $\{\lambda_s^{(3)},\;s=1,2,...\}$, so that $\{\lambda_s^{(3)}\}\equiv\{\lambda_{(2p+1)s+p}^{(1)}\}$ or $\{\lambda_s^{(3)}\}\equiv\{\lambda_{(2q+1)s+q}^{(2)}\}$. Let us now discuss the changes which the above different cases of the $\lambda$-quantization conditions can introduce to the $\lambda\to\infty$-limit zero distributions of $\psi_1(z,\lambda)$ (as well as of $\psi_7(z,\lambda)$ this time) considered in the previous section. In our analysis we have to be particularly careful about those formulae of the previous section which contain as the $\log$-function arguments terms whose limits are the factors of the limit quantization formula \mref{56} or the factors leading to the formulae \mref{5731}-\mref{5732}.
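The leading-order coincidence of the subsequences $\lambda_{(2p+1)r+p}^{(1)}$ and $\lambda_{(2q+1)r+q}^{(2)}$ under the condition \mref{573} can also be checked directly. In the following minimal numerical sketch the value of $\oint_{K_2}\sqrt{W_{10}(y)}dy$ is invented and the ratio $-(2p+1)/(2q+1)$ is imposed by hand; only the leading term of \mref{571} is used.

```python
import cmath

p, q = 1, 2                  # any p != q
I2 = -3.0j                   # hypothetical oint_{K_2} sqrt(W_10(y)) dy
                             # (sign chosen so the leading-order lambdas are positive)
I1 = -(2 * p + 1) / (2 * q + 1) * I2   # the commensurability condition (573)

def lam1(r):   # leading order of lambda_r^(1), i.e. l = 1 in (571)
    return (2 * r + 1) * 1j * cmath.pi / I1

def lam2(r):   # leading order of lambda_r^(2), i.e. l = 2 in (571)
    return -(2 * r + 1) * 1j * cmath.pi / I2

# The subsequences lambda_{(2p+1)r+p}^{(1)} and lambda_{(2q+1)r+q}^{(2)}
# coincide at leading order, while generic members of the spectra do not.
for r in range(5):
    assert abs(lam1((2 * p + 1) * r + p) - lam2((2 * q + 1) * r + q)) < 1e-12
```

The check works because $2[(2p+1)r+p]+1=(2p+1)(2r+1)$, so the factor $(2p+1)/(2q+1)$ cancels against the imposed ratio of the phase integrals, exactly as in the derivation of \mref{5731}-\mref{5732}.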
We shall consider all the cases of the previous section subsequently. {\it The case} $1.'$ The $\lambda$-quantization for this case does not disturb the condition \mref{272} for the zeros distribution of $\psi_1(z,\lambda)$, so that its $\lambda_s^{(i)}\to\infty$-limit, i.e. for $s\to\infty$, also remains unchanged independently of $i=1,2,3$ and we get readily: \begin{eqnarray} \int_{K_5{(\zeta_{5,qr}^{(1)}}(\lambda_s^{(i)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}dy+\frac{1}{2\lambda_s^{(i)}} Z^-(y,\lambda_s^{(i)})dy\r)=(q[\lambda_s^{(i)}]+r-\frac{1}{4})\frac{i\pi}{\lambda_s^{(i)}}\nonumber\\ q=0,1,2,3,...,\;r=0,1,...,[\lambda_s^{(i)}]-1 \label{574} \end{eqnarray} {\it The case} $2.'$ Also in this case the $\lambda$-quantization changes almost nothing in the $\lambda\to\infty$-limit distribution of zeros except that it fixes $R_1$ at $\frac{1}{2}$ for the sequence $\lambda_s^{(1)}\to\infty$, while $R_1$ depends on $s$ for the sequence $\lambda_s^{(2)}\to\infty$, i.e. we have: \begin{eqnarray} \int_{K_1{(\zeta_{1,qr}^{(1)}}(\lambda_s^{(i)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}dy+\frac{1}{2\lambda_s^{(i)}} Z^-(y,\lambda_s^{(i)})dy\r)=(q[\lambda_s^{(i)}]+r-\frac{1}{4})\frac{i\pi}{\lambda_s^{(i)}}\nonumber\\ q=0,1,2,...,q_1,\;r=0,1,...,[\lambda_s^{(i)}]-1 \label{575} \end{eqnarray} with integer $q_1\geq 0$ given by $\int_{z_1}^{z_2}\sqrt{W_{10}(y)}dy=-(q_1+r_1)i\pi,\;0\leq r_1<1$. {\it The case} $3.'$ In this case the quantization of $\lambda$ can disturb only the term $\ln E_1$ present in the r.h.s. of \mref{361} when $\lambda$ is quantized in the first well.
To estimate its $\lambda\to\infty$-behaviour let us note that the quantization condition can be also written in the form: \begin{eqnarray} E_1E_5=-e^{2\lambda\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy-\lambda\oint_{K_1}\sqrt{W_{10}(y)}dy} \frac{\chi_{1\to 3}\chi_{3\to{\bar 3}}}{\chi_{3\to 5}}E_4\nonumber\\ \label{58} \end{eqnarray} where \begin{eqnarray} E_5=\chi_{{\bar 5}\to 7}+e^{-\lambda\oint_{K_2}\sqrt{W_{10}(y)}dy}\chi_{5\to{7}}\frac{\chi_{3\to{\bar 5}}}{\chi_{3\to 5}} =\nonumber\\ E_4-\frac{\chi_{5\to 7}\chi_{5\to{\bar 5}}}{\chi_{3\to 5}}e^{2\lambda\int_{z_6}^{z_3}\sqrt{W_{10}(y)}dy- \lambda\oint_{K_2}\sqrt{W_{10}(y)}dy} \label{59} \end{eqnarray} where the second equation has been obtained due to the following identity: \begin{eqnarray} \frac{\chi_{3\to{\bar 5}}}{\chi_{3\to 5}}=\chi_{{\bar 5\to 4}}- e^{2\lambda\int_{z_6}^{z_3}\sqrt{W_{10}(y)}dy}\frac{\chi_{5\to{\bar 5}}}{\chi_{3\to 5}} \label{60} \end{eqnarray} From \mref{58} and \mref{59} we get: \begin{eqnarray} \frac{1}{E_1}=-e^{\ll(-2\lambda\int_{z_2}^{z_6}+\lambda\oint_{K_1}\r)\sqrt{W_{10}(y)}dy} \frac{\chi_{3\to 5}}{\chi_{1\to 3}\chi_{3\to{\bar 3}}}\frac{E_5}{E_4}=\nonumber\\ -e^{\ll(-2\lambda\int_{z_2}^{z_6}+\lambda\oint_{K_1}\r)\sqrt{W_{10}(y)}dy} \frac{\chi_{3\to 5}}{\chi_{1\to 3}\chi_{3\to{\bar 3}}}\times\nonumber\\ \ll(1-\frac{\chi_{5\to 7}\chi_{5\to{\bar 5}}}{\chi_{3\to 5}E_4}e^{2\lambda\int_{z_6}^{z_3}\sqrt{W_{10}(y)}dy- \lambda\oint_{K_2}\sqrt{W_{10}(y)}dy}\r) \label{61} \end{eqnarray} Now we take in \mref{61} the limit $\lambda_s^{(1)}\to\infty$. 
Therefore $E_4(\lambda_s^{(1)})\neq 0$ in \mref{61} and we get in this limit for $\ln E_1$: \begin{eqnarray} \ln E_1^{as}=-\ln\frac{1}{E_1^{as}}=\pm i\pi+2\lambda_s^{(1)}\int_{z_2}^{z_6}\sqrt{W_{10}(y)}dy- \lambda_s^{(1)}\oint_{K_1}\sqrt{W_{10}(y)}dy-\nonumber\\ \ln\ll(\frac{\chi_{3\to 5}}{\chi_{1\to 3}\chi_{3\to{\bar 3}}}\r)^{as} \label{62} \end{eqnarray} Substituting the last result to \mref{361} we get for $\lambda$'s quantized in the left well: \begin{eqnarray} \int_{K_6(\zeta_{6,qr}^{(1)}(\lambda_s^{(1)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(1)}}Z^-(y,\lambda_s^{(1)})\r)dy=(q[\lambda_s^{(1)}]+r-\frac{1}{4})\frac{i\pi}{\lambda_s^{(1)}}\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_s^{(1)}]-1 \label{63} \end{eqnarray} The result \mref{63} is essentially different from \mref{363} since now zeros $\zeta_{2,m}^{(1)}(\lambda_{s_1})$ of $\psi_1(z,\lambda)$ have all been shifted to the positions $\zeta_{6,m}^{(1)}(\lambda_s^{(1)})$ on the left infinite SL emerging from the turning point $z_6$. 
Let us note however that the distribution defined by \mref{363} is kept unchanged if the limit $\lambda_s^{(2)}\to\infty$ is taken so that we have for this case: \begin{eqnarray} \int_{K_2(\zeta_{2,qr}^{(1)}(\lambda_s^{(2)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(2)}}Z^-(y,\lambda_s^{(2)})\r)dy=(q[\lambda_s^{(2)}]+r-\frac{1}{4}- \frac{R_1}{2})\frac{i\pi}{\lambda_s^{(2)}}-\nonumber\\ \frac{1}{4\lambda_s^{(2)}}\oint_{K_1}Z^-(y,\lambda_s^{(2)})dy- \frac{1}{2\lambda_s^{(2)}}\ln(2\cos(R_1\pi-\frac{i}{2}\oint_{K_1}Z^-(y,\lambda_s^{(2)})dy))\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_s^{(2)}]-1 \label{631} \end{eqnarray} Finally for the limit $\lambda_s^{(3)}\equiv\lambda_{(2p+1)s+p}^{(1)}\to\infty$ we get of course again the result \mref{63} while for $\lambda_s^{(3)}\equiv\lambda_{(2q+1)s+q}^{(2)}\to\infty$ we get: \begin{eqnarray} \int_{K_1(\zeta_{2,qr}^{(1)}(\lambda_s^{(3)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(3)}}Z^-(y,\lambda_s^{(3)})\r)dy=\nonumber\\ (q[\lambda_s^{(3)}]+r+(2p+1)s+p+\frac{1}{2})\frac{i\pi}{\lambda_s^{(3)}}-\nonumber\\ \frac{1}{4\lambda_s^{(3)}}\sum_{n>k}\left(\frac{1}{2\lambda_s^{(3)}}\right)^n\ll(\oint_{K_1}+ \frac{2p+1}{2q+1}\oint_{K_2}\r)X_{2n+1}(y)dy-\nonumber\\ \frac{1}{2\lambda_s^{(3)}}\ln\ll(2\sin\frac{i}{2}\sum_{n>k}\left(\frac{1}{2\lambda_s^{(3)}}\right)^n\ll(\oint_{K_1}+ \frac{2p+1}{2q+1}\oint_{K_2}\r)X_{2n+1}(y)dy\r)\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_s^{(3)}]-1 \label{632} \end{eqnarray} The last result is a modification of the formula \mref{631} when the relation \mref{5731} is taken into account. {\it The case} $4.'$ Considering this case it is necessary to calculate the limits $\lambda_s^{(1,2,3)}\to\infty$ of $\ln E_2$. 
To this end, in the case of $\lambda_s^{(1)}\to\infty$, we can use $\ln E_2=\ln(E_2/E_1)+\ln E_1$ together with \mref{371} and \mref{61} to obtain: \begin{eqnarray} \frac{E_2}{E_1}=\frac{\chi_{5\to 7}\chi_{5\to{\bar 5}}}{\chi_{3\to 5}E_4}e^{2\lambda\int_{z_6}^{z_3}\sqrt{W_{10}(y)}dy- \lambda\oint_{K_2}\sqrt{W_{10}(y)}dy} \label{64} \end{eqnarray} Since in the limit considered \begin{eqnarray} E_4(\lambda_s^{(1)})\to\nonumber\\ \chi_{{\bar 5}\to 7}^{as}\ll[1+\exp\ll(-\lambda_s^{(1)}\oint_{K_2}\sqrt{W_{10}(y)}dy- \sum_{n{\geq}0}\left(\frac{1}{2\lambda_s^{(1)}}\right)^{2n+1}\oint_{K_2}X_{2n+1}(y)dy\r)\r]\neq 0 \label{641} \end{eqnarray} then for the distribution of zeros of $\psi_1(z,\lambda_s^{(1)})$ in the direction of the infinite SL emerging from $z_3$ we get to {\it all} orders in $\lambda_s^{(1)}$: \begin{eqnarray} \int_{K_3(\zeta_{3,qr}^{(1)}(\lambda_s^{(1)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(1)}}Z^-(y,\lambda_s^{(1)})\r)dy= (q[\lambda_s^{(1)}]+r-\frac{1}{4}-\frac{R_2}{2})\frac{i\pi}{\lambda_s^{(1)}}+\nonumber\\ \frac{1}{4\lambda_s^{(1)}}\oint_{K_2}Z^-(y,\lambda_s^{(1)})dy+ \frac{1}{2\lambda_s^{(1)}}\ln(2\cos(R_2\pi+\frac{i}{2}\oint_{K_2}Z^-(y,\lambda_s^{(1)})dy))\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_s^{(1)}]-1 \label{642} \end{eqnarray} The last result shows its deep difference from the formula \mref{363}: the left-well quantization removes all zeros from the infinite SL emerging from $z_2$ and shifts them all onto the SL emerging now from the turning point $z_3$ (see Fig.5a). Consider now the limit $\lambda_s^{(2)}\to\infty$. In this case \begin{eqnarray} E_2\to\chi_{1\to{\bar 3}}^{as}(\lambda_s^{(2)})\ll(1+e^{-\oint_{K_1}\ll(\lambda_s^{(2)}\sqrt{W_{10}(y)}+ Z^-(y,\lambda_s^{(2)})\r)dy}\r)\neq 0 \label{643} \end{eqnarray} and therefore the form of the formula \mref{363} remains essentially unchanged in this limit, but it is not enough to substitute $\lambda_{s_1}$ by $\lambda_s^{(2)}$ there.
As previously, we have to take into account the dependence of $R_1$ on $\lambda_s^{(2)}$ in \mref{363}. But this is the same task as the previous one and finally we get in this case: \begin{eqnarray} \int_{K_2(\zeta_{2,qr}^{(1)}(\lambda_s^{(2)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(2)}}Z^-(y,\lambda_s^{(2)})\r)dy=(q[\lambda_s^{(2)}]+r-\frac{1}{4}-\frac{R_1}{2})\frac{i\pi}{\lambda_s^{(2)}}-\nonumber\\ \frac{1}{4\lambda_s^{(2)}}\oint_{K_1}Z^-(y,\lambda_s^{(2)})dy- \frac{1}{2\lambda_s^{(2)}}\ln(2\cos(R_1\pi-\frac{i}{2}\oint_{K_1}Z^-(y,\lambda_s^{(2)})dy))\nonumber\\ q=0,1,2,...,\;r=0,1,...,[\lambda_s^{(2)}]-1 \label{644} \end{eqnarray} i.e. exactly the same picture of the zero distributions around the SL emerging from $z_2$ as in the limit $\lambda_{s_1}\to\infty$ considered in the formula \mref{363}. It is worth noting that the formulae \mref{642} and \mref{644} are copies of each other, reflecting a symmetry of the SG considered with respect to the asymptotic quantized solutions $\psi_1^{as}(z,\lambda_s^{(1,2)})$ and $\psi_7^{as}(z,\lambda_s^{(1,2)})$. These solutions have of course to coincide, but the solutions $\psi_{1,7}^{as}(z,\lambda_s^{(1)})$ behave as if they were quantized in the left well only, while the solutions $\psi_{1,7}^{as}(z,\lambda_s^{(2)})$ behave as if they were quantized only in the right one. This observation will be confirmed in our further calculations. Finally we have to consider the cases when $\lambda_s^{(3)}=\lambda_{(2p+1)r+p}^{(1)}\to\infty$ and $\lambda_s^{(3)}=\lambda_{(2q+1)r+q}^{(2)}\to\infty$ when the equalities $\lambda_{(2p+1)r+p}^{(1)}=\lambda_{(2q+1)r+q}^{(2)}$ are satisfied to $k$-th order. It is easy to note that both the formulae \mref{642} and \mref{644} have to be modified in a way similar to that in which we obtained the formula \mref{632} from \mref{5731}.
Therefore we get readily: \begin{eqnarray} \int_{K_3(\zeta_{3,q'r}^{(1)}(\lambda_s^{(3)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(3)}}Z^-(y,\lambda_s^{(3)})\r)dy=\nonumber\\ (q'[\lambda_s^{(3)}]+r+(2q+1)s+q+\frac{1}{2})\frac{i\pi}{\lambda_s^{(3)}}-\nonumber\\ \frac{1}{4\lambda_s^{(3)}}\sum_{n>k}\left(\frac{1}{2\lambda_s^{(3)}}\right)^n\ll(\oint_{K_2}+ \frac{2q+1}{2p+1}\oint_{K_1}\r)X_{2n+1}(y)dy-\nonumber\\ \frac{1}{2\lambda_s^{(3)}}\ln\ll(2\sin\frac{i}{2}\sum_{n>k}\left(\frac{1}{2\lambda_s^{(3)}}\right)^n\ll(\oint_{K_2}+ \frac{2q+1}{2p+1}\oint_{K_1}\r)X_{2n+1}(y)dy\r)\nonumber\\ q'=0,1,2,...,\;r=0,1,...,[\lambda_s^{(3)}]-1 \label{632} \end{eqnarray} for the first limit and \begin{eqnarray} \int_{K_2(\zeta_{2,q'r}^{(1)}(\lambda_s^{(3)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+ \frac{1}{2\lambda_s^{(3)}}Z^-(y,\lambda_s^{(3)})\r)dy=\nonumber\\ (q'[\lambda_s^{(3)}]+r+(2p+1)s+p+\frac{1}{2})\frac{i\pi}{\lambda_s^{(3)}}-\nonumber\\ \frac{1}{4\lambda_s^{(3)}}\sum_{n>k}\left(\frac{1}{2\lambda_s^{(3)}}\right)^n\ll(\oint_{K_1}+ \frac{2p+1}{2q+1}\oint_{K_2}\r)X_{2n+1}(y)dy-\nonumber\\ \frac{1}{2\lambda_s^{(3)}}\ln\ll(2\sin\frac{i}{2}\sum_{n>k}\left(\frac{1}{2\lambda_s^{(3)}}\right)^n\ll(\oint_{K_1}+ \frac{2p+1}{2q+1}\oint_{K_2}\r)X_{2n+1}(y)dy\r)\nonumber\\ q'=0,1,2,...,\;r=0,1,...,[\lambda_s^{(3)}]-1 \label{633} \end{eqnarray} for the second one. {\it The case} $5.'$ It follows from \mref{38} that in this case we have to consider the limit $\lambda_s^{(1,2,3)}\to\infty$ of $\ln(E_2/E_1)$. 
In the cases $\lambda_s^{(2)}\to\infty$ and $\lambda_s^{(3)}=\lambda_{(2q+1)s+q}^{(2)}\to\infty$ we can proceed directly using \mref{38} to get: \begin{eqnarray} \int_{K_6(\zeta_{6,q'r}^{(1)}(\lambda_s^{(2,3)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+\frac{1}{2\lambda_s^{(2,3)}}Z^-(y,\lambda_s^{(2,3)})\r)dy= -(q'[\lambda_s^{(2,3)}]+r-\frac{1}{4})\frac{i\pi}{\lambda_s^{(2,3)}}\nonumber\\ q'=0,1,2,3,...,\;r=0,1,...,[\lambda_s^{(2,3)}]-1 \label{40} \end{eqnarray} In the cases $\lambda_s^{(1)}\to\infty$ and $\lambda_s^{(3)}=\lambda_{(2p+1)s+p}^{(1)}\to\infty$ we have to make use of \mref{64}. In this way in the first case we come back to the formula \mref{642} while in the second case we get again the formula \mref{632}. {\it The case} $6.'$ As follows directly from the formula \mref{41} for $\psi_1(z,\lambda)$, we have to consider the limit $\lambda_s^{(1,2,3)}\to\infty$ of $\ln(-i{\bar A}/A)$. This limit, however, follows directly from the quantization condition \mref{52} and is equal to $\ln(i\chi_{5\to 7}/\chi_{{\bar 5}\to 7})$. Therefore, independently of how $\lambda$ is quantized, we have for this case: \begin{eqnarray} \int_{K_4(\zeta_{4,qr}^{(1)}(\lambda_s^{(i)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+\frac{1}{2\lambda_s^{(i)}}Z^-(y,\lambda_s^{(i)})\r)dy= -(q[\lambda_s^{(i)}]+r-\frac{1}{4})\frac{i\pi}{\lambda_s^{(i)}}\nonumber\\ q=0,1,2,3,...,q_2,\;r=0,1,...,[\lambda_s^{(i)}]-1 \label{66} \end{eqnarray} with integer $q_2\geq 0$ given by $\int_{z_3}^{z_4}\sqrt{W_{10}(y)}dy=(q_2+r_2)i\pi,\;0\leq r_2<1$. {\it The case} $7.'$ From the form \mref{54} of the quantization formula and the formula \mref{47} we readily get for this case: \begin{eqnarray} \int_{K_7(\zeta_{7,qr}^{(1)}(\lambda_s^{(i)}))}\ll(\frac{1}{2}\sqrt{W_{10}(y)}+\frac{1}{2\lambda_s^{(i)}}Z^-(y,\lambda_s^{(i)})\r)dy= -(q[\lambda_s^{(i)}]+r-\frac{1}{4})\frac{i\pi}{\lambda_s^{(i)}}\nonumber\\ q=0,1,2,3,...,\;r=0,1,...,[\lambda_s^{(i)}]-1 \label{67} \end{eqnarray} again independently of the way $\lambda$ is quantized.
In comparison with \mref{48} the distribution of zeros in the quantized case is now shifted totally onto the infinite SL emerging from the turning point $z_7$. {\it The case} $8.'$ This case is obvious since $\psi_1(z)$ coincides with $\psi_7(z)$ (up to a constant) and therefore cannot have zeros along the infinite SL emerging from $z_7$ and bounding the sector $S_7$. We have collected the results of this analysis in Fig.5a,b for $\lambda$ quantized in the left and right well respectively. It follows from the figure that the limit loci of zeros of the quantized $\psi_1(z,\lambda)$ and $\psi_7(z,\lambda)$ depend on the well in which the FS's considered are quantized in this limit. Comparing Fig.5a,b with the respective Fig.4a,b we see, however, that this dependence is expressed by the way the limit zero distributions of both the unquantized FS's considered are unified by the quantization. If these FS's are quantized in the first well, the distribution of their zeros "around" the first well is disturbed while the corresponding zero distribution around the second well is a copy of that of the unquantized $\psi_7(z,\lambda)$, and vice versa. \section{The quantized double well - the symmetric case} \hskip+2em The pattern of the limit loci of zeros for $\psi_1(z,\lambda)$ and $\psi_7(z,\lambda)$ quantized in the symmetric double well can now be easily obtained from the previous considerations as a particular unification of the two patterns of Fig.5a,b. \begin{tabular}{c} \psfig{figure=Fig5a.EPS,width=11cm}\\ Fig.5a ESL's (bold Stokes lines) of $\psi_1(z,\lambda)$ in the limit $\lambda_s^{(1)}\to\infty$\\ quantized in the first (left) well \end{tabular} \vskip 15pt \begin{tabular}{c} \psfig{figure=Fig5b.EPS,width=11cm}\\ Fig.5b ESL's (bold Stokes lines) of $\psi_1(z,\lambda)$ in the limit $\lambda_s^{(2)}\to\infty$\\ quantized in the second (right) well \end{tabular} \vskip 15pt First, however, we have to obtain from the SG's of Fig.4a,b the corresponding symmetric SG's. We can do this in two ways.
At the beginning we put the turning points $z_6,z_{\bar 6}$ on the imaginary axis and then apply the inversion operation to the left part of the SG from Fig.4a or to the right one. We get in this way the two cases of the symmetric double wells shown in Fig.6a,b. If both cases are not quantized, the limit zero distributions of $\psi_1(z,\lambda)$ are shown in Fig.7a and Fig.7b for the corresponding symmetric double-wells. For $\psi_7(z,\lambda)$ the corresponding pictures are the mirror reflections in the imaginary axis of the last figures. \vskip 25pt \begin{tabular}{c} \psfig{figure=Fig6a.EPS,width=12cm}\\ Fig.6a The first variant of the symmetric double-well \end{tabular} \vskip 25pt \begin{tabular}{c} \psfig{figure=Fig6b.EPS,width=12cm}\\ Fig.6b The second variant of the symmetric double-well \end{tabular} \vskip 25pt For the quantized cases, Fig.7a and Fig.7b have to be unified with their mirror reflections just mentioned to form the figures shown in Fig.8a,b respectively. \vskip 25pt \begin{tabular}{c} \psfig{figure=Fig7a.EPS,width=13cm}\\ Fig.7a ESL's (bold Stokes lines) of $\psi_1(z,\lambda)$ in the regular limits $\lambda\to\infty$.\\ The non-quantized case of the first variant of the symmetric double-well \end{tabular} \vskip 25pt \begin{tabular}{c} \psfig{figure=Fig7b.EPS,width=13cm}\\ Fig.7b ESL's (bold Stokes lines) of $\psi_1(z,\lambda)$ in the regular limits $\lambda\to\infty$.\\ The non-quantized case of the second variant of the symmetric double-well \end{tabular} \begin{tabular}{c} \psfig{figure=Fig8a.EPS,width=11cm}\\ Fig.8a ESL's (bold Stokes lines) of $\psi_1(z,\lambda_s)$ in the regular limits $\lambda_s\to\infty$.\\ The quantized case of the first variant of the symmetric double-well \end{tabular} \begin{tabular}{c} \psfig{figure=Fig8b.EPS,width=11cm}\\ Fig.8b ESL's (bold Stokes lines) of $\psi_1(z,\lambda_s)$ in the regular limits $\lambda_s\to\infty$.\\ The quantized case of the second variant of the symmetric double-well \end{tabular}
\section{Simple generalizations -- multiple-well potentials} \hskip+2em The results obtained in the previous section suggest simple generalizations of them by enlarging the number of wells on the real axis, and thus the number of internal SL's lying on this axis, while the remaining internal SL's only cross the real axis. However, as we have mentioned in sec.4, there are no obvious ways of generalizing the results obtained so far when $\hbar$ is to be quantized, and we have to limit ourselves rather to situations when the quantization of $\hbar$ for small values of it is possible and can be performed effectively. Enlarging the number of wells in the way described above seems to satisfy these conditions. The figures Fig.9 - Fig.12c illustrate this situation and show the limit distribution of zeros of the FS's $\psi_1(z,\lambda)$ and $\psi_{n+2}(z,\lambda)$ in all basic variants of mutual relations between these solutions. To get these figures we have applied the rules given below, which can be read off from the results of the previous sections. To this end let us recall a general observation mentioned in sec.5 which generates the rules. This is that non-critical SG's arise in the case of a fixed polynomial potential from critical ones by small changes of $\arg\lambda$, so that the three SL's emerging from each root of the polynomial rotate around the root. During these rotations continuously glued internal and external SL's split into external ones only, leading to non-critical SG's. Since, however, in the latter cases the limit locus of zeros of FS's is known to lie on their ESL's ({\bf Theorem 1a}), then by the continuity argument these limit zero loci of the FS considered had to coincide earlier with the splitting SL's of the critical case. For the non-quantized critical SG's this argument allows us to identify the ESL's of such critical SG's immediately. For the solutions $\psi_1(z,\lambda)$ and $\psi_{n+2}(z,\lambda)$ it leads to the patterns of Figs.9-10.
However, when the solutions $\psi_1(z,\lambda)$ and $\psi_{n+2}(z,\lambda)$ are matched according to the quantization condition defined by the $q^{th}$-well (see Fig.11), then {\bf Corollary 1} applies to the four external SL's emerging from the two turning points defining the $q^{th}$-well, i.e. these SL's are no longer ESL's of any of these solutions, while the internal SL linking these two turning points of the well still remains an ESL. All the other ESL's, however, satisfy the rules described above for the non-quantized case independently for each solution, with the restriction that to the left of the $q^{th}$-well the ESL's are determined by $\psi_1(z,\lambda)$ while to the right of the well the system of ESL's is determined by $\psi_{n+2}(z,\lambda)$. The ESL's which lie inside the $q^{th}$-well are determined according to the above rules by both the solutions simultaneously. In the quantized cases ($\psi_1(z,\lambda_s)$ matched with $\psi_{n+2}(z,\lambda_s)$) for the symmetric multiple-well potential the corresponding rules change accordingly, further collecting common features of the quantized asymmetric cases, and are the following \begin{enumerate} \item the two internal SL's of the two symmetric quantized wells remain exceptional while the infinite SL's emerging from their ends are no longer ESL's \item the exceptional SL's to the left of the left quantized well coincide with the exceptional ones of the non-quantized $\psi_1(z,\lambda_s)$, those to the right of the right quantized well with the exceptional lines of the non-quantized $\psi_{n+2}(z,\lambda_s)$, while those emerging from the complex turning points occupying the two quantized wells are unchanged (in comparison with the unquantized case) \item the exceptional SL's between the left quantized well and the vertical symmetry axis of the SG considered remain the same as for the unquantized $\psi_{n+2}(z,\lambda_s)$ while those between this axis and the right quantized well
coincide with the exceptional lines for the unquantized $\psi_1(z,\lambda_s)$ \item the middle sectors (if present) collect all the properties of the last point \item the exceptional lines for $\psi_1(z,\lambda_s)$, when the corresponding SG contains a single middle well and the latter is quantized, are identified by the rules described in the first two points above \end{enumerate} \vskip 15pt \begin{tabular}{c} \psfig{figure=Fig9.EPS,width=13cm}\\ Fig.9 ESL's (bold Stokes lines) of $\psi_1(z,\lambda_s)$ in the regular limits\\$\lambda_s\to\infty$ for $W_{2n}(z)$ polynomial potential. The non-quantized case \end{tabular} \vskip 15pt \begin{tabular}{c} \psfig{figure=Fig10.EPS,width=13cm}\\ Fig.10 ESL's (bold Stokes lines) of $\psi_{n+2}(z,\lambda_s)$ in the regular limits\\$\lambda_s\to\infty$ The non-quantized case \end{tabular} \begin{tabular}{c} \psfig{figure=Fig11.EPS,width=14cm}\\ Fig.11 ESL's (bold Stokes lines) of $\psi_1(z,\lambda_q)$=$C\psi_{n+2}(z,\lambda_q)$ in the regular\\ limit $\lambda_q\to\infty$ (bold Stokes lines). $\lambda_q$ is quantized in the $q^{th}$-well \end{tabular} \vskip 18pt \begin{tabular}{c} \psfig{figure=Fig12a.EPS,width=14cm}\\ Fig.12a ESL's (bold Stokes lines) of $\psi_1(z,\lambda_s^{(p)})=$$C\psi_{n+2}(z,\lambda_s^{(p)})$ in\\the regular limit $\lambda_s^{(p)}\to\infty$. The symmetric multiple-well\\variant quantized in the $p^{th}$-well \end{tabular} \vskip 18pt As we have mentioned, the rules formulated above can be easily extracted from the previous sections, but they can also be proved by the same methods as used previously. In particular, all the ways of obtaining the limit $\lambda\to\infty$ considered earlier and the limit loci of zeros induced by them apply also in these generalizations, so that the corresponding formulae of the previous sections can be rephrased accordingly with a respective effort.
\vskip 18pt \begin{tabular}{c} \psfig{figure=Fig12b.EPS,width=14cm}\\ Fig.12b ESL's (bold Stokes lines) of $\psi_1(z,\lambda_s^{(p)})=$$C\psi_{n+2}(z,\lambda_s^{(p)})$ in\\the regular limit $\lambda_s^{(p)}\to\infty$. The second symmetric multiple-well\\variant quantized in the $p^{th}$-well \end{tabular} \vskip 18pt \begin{tabular}{c} \psfig{figure=Fig12c.EPS,width=14cm}\\ Fig.12c The regular limit $\lambda_s^{(q+1)}\to\infty$ of loci of zeros of $\psi_1(z,\lambda_s^{(q+1)})=$\\ $C\psi_{n+2}(z,\lambda_s^{(q+1)})$ (bold Stokes lines). The symmetric multiple-well case\\quantized in the middle well \end{tabular} \section{Summary and discussion} \hskip+2em In this paper we have considered the small-$\hbar$ semiclassical limit for a general non-critical case of SG with simple turning points, for which we have obtained the general {\bf Theorem 1a}. For the critical case with a unique inner SL we have obtained {\bf Theorem 1b}. It appeared that both the last theorems look very similar to the corresponding {\bf Theorems 2a,b} of our previous paper \cite{11} where the high energy semiclassical limit was considered. The quantized case, however, needs some special conditions to be satisfied in order to be considered. In the case of SG's with more than a unique inner SL, the pattern of the limit zero loci of the fundamental solutions has appeared to be not essentially different in comparison with the previous cases of SG's considered. While we have limited ourselves initially to a real D-W potential of tenth degree, its degree has in fact appeared to be of little importance, i.e. we have been able to formulate a generalization of this case by enlarging the number of wells and increasing the potential degree arbitrarily.
Nevertheless we have concluded that when the $\hbar$-variable eigenvalue problem is considered for a polynomial potential, then a special arrangement of the roots of the potential is necessary, as well as a special choice of the accompanying FS's, to provide a problem for which the small-$\hbar$ semiclassical limit exists. Our calculations are in agreement with Hezari's results \cite{9}, the latter being particular cases of ours. In the language of Hezari's paper we can express our results in the double-well case, both symmetric or not, as follows: there are a number of zero limit measures defined on the SL's shown in Fig. 7-13 and given by the integrals of the formulae \mref{2721} and similar ones. It is clear that in the multiple-well cases the number of zero limit measures grows according to the number of wells and to the relations between the phase integrals corresponding to the wells. Nevertheless the general picture of the zero distributions along the Stokes lines is as described by the rules formulated in sections 3 and 7. \section*{Acknowledgment} \hskip+2em I would like to thank dr Haim Hezari for his interest in my paper and for pointing out to me that there are no inconsistencies in the calculations made in his paper \cite{9}, as I erroneously suggested in the previous versions of my paper.
\section*{Appendix A} \hskip+2em The formula \mref{14a} has the following asymptotic expansion allowing us to calculate the semiclassical expansion \mref{141}: \begin{eqnarray} \int_{K_l(\zeta_{l,qr0}^{(k)}(\Lambda))}\ll(\frac{1}{2}\sqrt{W_n(y)}- \frac{1}{2\lambda}Z_k(y,\lambda)\r)dy+\nonumber\\ 2\sum_{s\geq 1}\frac{1}{s!}\ll.\ll(\frac{1}{2}\sqrt{W_n(y)}- \frac{1}{2\lambda}Z_k(y,\lambda)\r)^{(s)}\r|_{y=\zeta_{l,qr0}^{(k)}(\Lambda)} \ll(\sum_{p\geq 1}\frac{1}{\lambda^p}\zeta_{l,qrp}^{(k)}(\Lambda)\r)^s=\nonumber\\ \ll(q[|\lambda|]+r-\frac{1}{4}\r)\frac{i\pi}{\lambda},\;\;\;q>0 \label{A234} \end{eqnarray} with a similar formula for $q=0$ obtained from the last one in which $\zeta_{l,qr0}^{(0)}(\Lambda)$ is substituted by $z_l+\zeta_{l,0r1}^{(k)}(\Lambda)/\lambda$ and $p>1$. \section*{Appendix B} \hskip+2em We describe here the way the coefficients $\alpha_{\frac{k}{i}\to j}$ and $\alpha_{\frac{k}{j}\to i}$ in the linear combination: \begin{eqnarray} \psi_k(z)=\alpha_{\frac{k}{i}\to j}\psi_i(z)+\alpha_{\frac{k}{j}\to i}\psi_j(z) \label{B1} \end{eqnarray} are calculated. First it is assumed that the three FS's which are involved in such a linear combination can be continued along canonical paths to the sectors in which their partner FS's are defined, i.e. the sectors $S_x,\;x=i,j,k,$ have to communicate canonically with each other. Next, according to our earlier conventions, for each $\psi_k(z)$ we choose as $z_k$ in the formula \mref{10} one of the TP's lying on the boundary of the corresponding sector $S_k$. Then the first coefficient in the formula \mref{B1} can be calculated according to the rule \begin{eqnarray} \alpha_{\frac{k}{i}\to j}=\lim_{z\to\infty_j}\frac{\psi_k(z)}{\psi_i(z)} \label{B2} \end{eqnarray} with the analogous formula for the second one. Since the above rules require all the $\psi$'s to be continued analytically along canonical paths on the $C_{cut}$-plane, we have to formulate rules for crossing the cuts by the $\psi$'s.
On the $C_{cut}$-plane there are cuts, each common to the three factors $\sqrt{W_n(z)}$, $W_n^{-\frac{1}{4}}(z)$ and $\chi(z)$ of $\psi(z)$, emerging from each TP $z_k,\;k=1,...,n$. We adopt the following conventions for calculating the argument $\arg(z-z_k)$: 1. it is measured with respect to the axis emerging from $z_k$ and parallel to the real axis; 2. it is taken with a positive sign in the anticlockwise direction and with a negative sign in the opposite one. According to these conventions $\arg(z-z_k)$ jumps by $\pm2\pi$ when the cut emerging from $z_k$ is crossed clockwise or anticlockwise, respectively. It then follows that $\sqrt{W_n(z)}$ changes its sign only when continued across the cut, while $W_n^{-\frac{1}{4}}(z)$ has to be multiplied by $\pm i$ when continued through the cut clockwise (with respect to $z_k$) or anticlockwise, respectively. The corresponding change of the $\chi$-factor is a little more complicated: the $n^{th}$ term of the expansion \mref{11} defining it acquires the factor $(-1)^n$ when $\chi(z)$ is continued through the cut, effectively changing its signature $\sigma$ to the opposite one, i.e. such a crossing switches the continued FS between the two forms it takes in the two sectors which are mutual projections of each other. This feature of FS's has been discussed earlier in sec.2 (see \mref{13a} and the discussion following it). The above conventions now completely define the procedure of calculating both coefficients in the formula \mref{B1}. Consider for example the way the formula \mref{271} has been obtained. The corresponding cuts emerging from the TP's $z_1$ and $z_5$ can be drawn in Fig.3 to the left, parallel to the real axis. We have then put: \begin{eqnarray} \psi_1(z,\lambda)= W_{10}^{-\frac{1}{4}}(z)e^{\lambda\int_{z_1}^{z}\sqrt{W_{10}(y)}dy}\chi_1(z,\lambda) \label{B31} \end{eqnarray} where $z\in S_1$ and lies above the cut emerging from $z_1$, so that $\sigma_1=1$.
\begin{eqnarray} \psi_2(z,\lambda)= W_{10}^{-\frac{1}{4}}(z)e^{-\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy}\chi_2(z,\lambda) \label{B32} \end{eqnarray} for $z\in S_2$ and below the cut emerging from $z_5$, so that $\sigma_2=-1$, and \begin{eqnarray} \psi_3(z,\lambda)= W_{10}^{-\frac{1}{4}}(z)e^{-\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy}\chi_3(z,\lambda) \label{B3} \end{eqnarray} assuming $z\in S_3$ and above the cut emerging from $z_5$, so that $\sigma_{3}=-1$. Continuing now the solutions $\psi_i(z,\lambda),\;i=1,2,$ to the sector $S_3$ along canonical paths we get: \begin{eqnarray} \alpha_{\frac{1}{2}\to 3}=\lim_{z\to\infty_3}\frac{\psi_1(z,\lambda)}{\psi_2(z,\lambda)}= \lim_{z\to\infty_3}\frac{W_{10}^{-\frac{1}{4}}(z)e^{\ll(\lambda\int_{z_1}^{z_5}+\lambda\int_{z_5}^{z}\r) \sqrt{W_{10}(y)}dy}\chi_1(z,\lambda)} {iW_{10}^{-\frac{1}{4}}(z)e^{\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy}\chi_2(z,\lambda)}=\nonumber\\ -ie^{\lambda\int_{z_1}^{z_5}\sqrt{W_{10}(y)}dy}\chi_{1\to 3}(\lambda) \label{B4} \end{eqnarray} where $\chi_{1\to 3}(\lambda)=\lim_{z\to\infty_3}\chi_1(z,\lambda)$ and $\chi_{2\to 3}(\lambda)\equiv 1$. Similarly, for the second coefficient in \mref{271} we get \begin{eqnarray} \alpha_{\frac{1}{3}\to 2}=ie^{\lambda\int_{z_1}^{z_5}\sqrt{W_{10}(y)}dy} \label{B5} \end{eqnarray} since now $\chi_{1\to 2}(\lambda)=\chi_{3\to 2}(\lambda)\equiv 1$. Taking into account \mref{B4}-\mref{B5} we get \mref{271} for the case when $z\in S_3$ (the factor $W_{10}^{-\frac{1}{4}}(z)$ of $\psi_2(z,\lambda)$ then additionally acquires the factor $i$ on crossing the cut emerging from $z_5$). In the calculations made in this paper we have made use of the relation $\chi_{i\to j}(\lambda)=\chi_{j\to i}(\lambda)$, valid for any pair of such coefficients (see for example Ref.2 of \cite{4}), as well as of the following identity, valid for any four different FS's $\psi_x(z,\lambda),\;x=i,j,k,l,$ whose defining sectors can communicate canonically in pairs (see for example Ref.
2 of \cite{5}): \begin{eqnarray} \alpha_{\frac{i}{j}\to k}=\alpha_{\frac{i}{j}\to l}+\alpha_{\frac{i}{l}\to j}\alpha_{\frac{l}{j}\to k} \label{B6} \end{eqnarray} \section*{Appendix C} \hskip+2em We discuss here the necessity of using a pair of FS's with different signatures when forming a linear combination for the solution whose limit zero loci are investigated, in order to obtain nontrivial conditions for these loci. To this end, consider the linear combination \mref{271} expressing $\psi_1(z,\lambda)$ by $\psi_2(z,\lambda)$ and $\psi_3(z,\lambda)$. This combination can be continued along canonical paths to a vicinity of the external SL emerging from $z_2$ and running to the upper infinity of the $C_{cut}$-plane. There it can be written as follows: \begin{eqnarray} \psi_1(z,\lambda)=-iW_{10}^{-\frac{1}{4}}(z)e^{\lambda\int_{z_1}^{z_5}\sqrt{W_{10}(y)}dy}\ll(\chi_{1\to 3}(\lambda)\chi_2(z,\lambda) -\chi_3(z,\lambda)\r)e^{-\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy} \label{C1} \end{eqnarray} A condition for $\psi_1(z,\lambda)$ to vanish there is therefore \begin{eqnarray} \chi_{1\to 3}(\lambda)\chi_2(z,\lambda)-\chi_3(z,\lambda)=0 \label{C2} \end{eqnarray} where $z$ is close to the SL mentioned.
If, however, we now take in \mref{C2} the limit $\lambda\to\infty$, then with the help of the exponential representations \mref{551}-\mref{553} we get: \begin{eqnarray} \exp\ll(\int_{\infty_1}^{\infty_3}Z_1(y,\lambda)dy+\int_{\infty_2}^{z}Z_2(y,\lambda)dy- \int_{\infty_3}^{z}Z_3(y,\lambda)dy\r)-1=0 \label{C3} \end{eqnarray} Using further the properties of $Z^\pm(z,\lambda)$ in the decomposition $Z_k(z,\lambda)=Z^+(z,\lambda)+\sigma_kZ^-(z,\lambda)$ we get finally from \mref{C3} \begin{eqnarray} \exp\ll[\ll(\int^{\infty_3}_{z}+\int_{\infty_3}^{\infty_1}+\int_{\infty_1}^{\infty_2}+\int_{\infty_2}^{z}\r) \ll(Z^+(y,\lambda)-Z^-(y,\lambda)\r)dy\r]-1=0 \label{C4} \end{eqnarray} where we have taken into account that $\int_{\infty_1}^{\infty_2}Z^+(y,\lambda)dy= \int_{\infty_1}^{\infty_2}Z^-(y,\lambda)dy=0$. However, all the integrations in \mref{C4} run along paths none of which crosses any cut of the $C_{cut}$-plane, i.e. they stay in the connected domain of the plane. It then follows from the geometry of these paths that they form a closed contour in this domain, and therefore the integrations of $Z^+(y,\lambda)$ and $Z^-(y,\lambda)$ along this contour have to vanish. But then the condition \mref{C4} appears simply to be an identity. A simple explanation of this somewhat unexpected situation is that in the considered domain $\psi_1(z,\lambda)$ grows exponentially when $\Re z$ approaches the considered SL emerging from $z_2$.
By comparison of $\psi_1(z,\lambda)= W_{10}^{-\frac{1}{4}}(z)e^{\lambda\int_{z_1}^{z}\sqrt{W_{10}(y)}dy}\chi_{1}(z,\lambda)$ with its linear combination \mref{C1} we have: \begin{eqnarray} \chi_{1\to 3}(\lambda)\chi_2(z,\lambda)-\chi_3(z,\lambda)=ie^{2\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy} \label{C5} \end{eqnarray} where $\Re\ll(\lambda\int_{z_5}^{z}\sqrt{W_{10}(y)}dy\r)<0$ for $z\in T$, with $T$ being the strip in Fig.3 whose boundaries are formed by the SL's emerging from $z_1$ and $z_2$ on the one side and by the SL's emerging from $z_5$ on the other (there are no SL's inside $T$). The relation \mref{C5} means that its l.h.s. has to vanish exponentially when $\lambda\to\infty$, i.e. the semiclassical expansion of $\chi_{1\to 3}(\lambda)\chi_2(z,\lambda)-\chi_3(z,\lambda)$ has to vanish identically in $T$, that is, to any order. On the other hand, one never meets identities such as those discussed above if $\psi_1(z,\lambda)$ is expressed by a linear combination of a pair of FS's with different signatures.
\section{Introduction} \label{sec:intro} Density functional theory (DFT) is the workhorse of computational materials science and commonly predicts ground-state properties with high accuracy~\cite{kohn1965,kresse1996,staroverov2004,paier2006,haas2009,norskov2011,burke2012,freysoldt2014}. Extending this success to the prediction of excited-state properties is an ongoing challenge. Time-dependent DFT (TDDFT)~\cite{runge1984} struggles to treat neutral excitations in the condensed phase and reasonable accuracy requires the use of hybrid functionals and/or frequency-dependent exchange-correlation kernels~\cite{gavrilenko1997,tokatly2001,reining2002,onida2002,sottile2005,botti2007,paier2008,izmaylov2008,sharma2011,rigamonti2015,ullrich2016}. Instead, the community typically builds upon DFT through Green's function-based many-body perturbation theory~\cite{onida2002}. For weakly-correlated materials, including simple metals, semiconductors, and insulators, the GW approximation to the self-energy~\cite{hedin1965,strinati1980,strinati1982,hybertsen1985,hybertsen1986,pulci2005} combined with the same approximation to the Bethe-Salpeter equation~\cite{albrecht1998,hanke1980,sham1966,strinati1984,rohlfing2000} yields excited state properties that are typically in good agreement with experiment. Strongly-correlated materials are more commonly treated via dynamical mean-field theory (DMFT)~\cite{georges1992,georges1996}, either in the DFT+DMFT framework~\cite{georges2004,kotliar2006,held2007} or the GW+DMFT framework~\cite{biermann2003,biermann2014}. Alternatively, quantum Monte Carlo methods have been adapted for the calculation of excitation energies, including variational~\cite{zhao2019} and diffusion Monte Carlo~\cite{williamson1998,hunt2018,yang2019qmc} and auxiliary-field quantum Monte Carlo~\cite{ma2013}. 
In recent years, wavefunction-based techniques from the quantum chemistry community have been adapted for periodic boundary conditions and applied to condensed-phase systems. For ground-state properties, we mention the application of M\o ller-Plesset perturbation theory~\cite{sun1996,ayala2001,pisani2005,marsman2009}, coupled-cluster theory~\cite{hirata2001,hirata2004,katagiri2005,gruneis2011,mcclain2017}, and -- for small supercells -- full configuration interaction quantum Monte Carlo~\cite{booth2013}. Charged excitation energies, as quantified through the one-particle spectral function or the band structure, have been studied by periodic equation-of-motion coupled-cluster theory for the uniform electron gas~\cite{mcclain2016}, for the simple semiconductors silicon and diamond~\cite{mcclain2017}, for two-dimensional MoS$_2$~\cite{pulkin2019}, and for transition metal oxides~\cite{gao2019}. Neutral excitation energies have been primarily studied by configuration interaction with single excitations (CIS)~\cite{hirata1999,lorenz2011}, which is equivalent to the use of the Hartree-Fock self-energy in the Bethe-Salpeter equation (with the Tamm-Dancoff approximation). Due to the unscreened nature of the Coulomb interaction, periodic CIS typically predicts excitation energies which are much too large~\cite{lorenz2012}. Recently, periodic equation-of-motion coupled-cluster theory with single and double excitations (EOM-CCSD) was applied to one-dimensional polyethylene in a small single-particle basis set~\cite{katagiri2005}, producing results in reasonable agreement with experiment. For three-dimensional condensed-phase systems, our group has recently applied EOM-CCSD to the neutral excited-state properties of the uniform electron gas~\cite{lewis2019} as well as the absorption and pump-probe spectroscopy of the naphthalene crystal at the $\Gamma$ point~\cite{lewis2019a}. 
Here, we continue this endeavor and present the results of a new EOM-CCSD implementation with translational symmetry and Brillouin zone sampling for three-dimensional atomistic solids. The layout of the article is as follows. In section \ref{sec:theory}, we describe the theory underlying our implementation, including details about our symmetry-adapted Gaussian basis sets and periodic EOM-CCSD. In section~\ref{sec:comput}, we present computational details, including details about the materials studied, the basis sets used, and integral evaluation. In section~\ref{sec:results}, we provide EOM-CCSD results on the excitation energies (including a comparison with CIS and a discussion of finite-size effects), the exciton binding energy, the dispersion of excitons with nonzero momentum, and the exciton-phonon interaction. In section~\ref{sec:conclusions}, we summarize our results and conclude with future directions. \section{Theory} \label{sec:theory} Our periodic calculations are performed using a translational-symmetry-adapted single-particle basis, \begin{equation} \phi_{\mu {\bm{k}}}({\bm{r}}) = \sum_{{\bm{T}}} \mathrm{e}^{\mathrm{i} {\bm{k}} \cdot {\bm{T}}} \tilde{\phi}_{\mu}({\bm{r}}-{\bm{T}}) , \end{equation} where $\tilde{\phi}_\mu({\bm{r}})$ is an atom-centered Gaussian orbital, ${\bm{T}}$ is a lattice translation vector, and ${\bm{k}}$ is a crystal momentum vector sampled from the first Brillouin zone. A Hartree-Fock calculation in this basis produces crystalline orbitals (COs) \begin{equation} \psi_{p{\bm{k}}} ({\bm{r}}) = \sum_{\mu} C_{\mu p}({\bm{k}})\phi_{\mu{\bm{k}}} ({\bm{r}}), \label{eq:co} \end{equation} where $p$ is the band index and $C_{\mu p}({\bm{k}})$ are the CO coefficients. 
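As a consistency check on this construction, the defining property of a Bloch sum can be verified numerically in one dimension. The following minimal sketch (a hypothetical 1D lattice of Gaussians; lattice constant, Gaussian width, and truncation range are arbitrary illustrative choices, not taken from our implementation) confirms the Bloch condition $\phi_k(x+a)=\mathrm{e}^{\mathrm{i}ka}\phi_k(x)$ up to truncation error:

```python
import cmath
import math

def bloch_sum(x, k, a=1.0, alpha=8.0, nmax=25):
    """Truncated Bloch sum phi_k(x) = sum_T exp(i k T) g(x - T) of
    Gaussians g(x) = exp(-alpha x^2) on a 1D lattice T = n*a."""
    return sum(cmath.exp(1j * k * n * a) * math.exp(-alpha * (x - n * a) ** 2)
               for n in range(-nmax, nmax + 1))

a = 1.0
k = 0.7 * math.pi / a           # a crystal momentum in the first Brillouin zone
x = 0.3
lhs = bloch_sum(x + a, k, a)
rhs = cmath.exp(1j * k * a) * bloch_sum(x, k, a)
assert abs(lhs - rhs) < 1e-10   # Bloch condition, up to truncation error
```

Translating the lattice sum by one cell simply relabels the summation index, which is the content of the assertion above.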
The periodic CCSD energy and cluster amplitudes are determined by the usual equations~\cite{coester1960,cizek1966,kummel1991,bartlett1981,bartlett2007,hirata2004}, \begin{align} E_0 & = \langle \Phi_0| \bar{H} | \Phi_0 \rangle, \label{eq:cc_energy_eq} \\ 0 & = \langle \Phi_{\p{i} }^{\p{a} }| \bar{H} | \Phi_0 \rangle, \label{eq:cc_amp_eq_t1} \\ 0 & = \langle \Phi_{\p{i} \p{j} }^{\p{a} \p{b} } | \bar{H} | \Phi_0 \rangle, \label{eq:cc_amp_eq_t2} \end{align} where $\Phi_{\p{i} }^{\p{a} }$ and $\Phi_{\p{i} \p{j} }^{\p{a} \p{b} }$ are Slater determinants with one and two electron-hole pairs, indices $i, j, k, l$ denote occupied orbitals, and $a, b, c, d$ denote virtual orbitals. The similarity transformed Hamiltonian is given by $\bar{H} \equiv e^{-T} H e^T$, where $T = T_1+T_2$ is a momentum-conserving cluster operator with \begin{align} T_1 & = \sum_{ai} \sideset{}{'}\sum_{{\bm{k}}_a {\bm{k}}_i} \cct{i}{a} a_{\p{a}}^{\dagger} a_{\p{i}}, \label{eq:t1_def} \\ T_2 & = \frac{1}{4} \sum_{abij} \sideset{}{'}\sum_{{\bm{k}}_a {\bm{k}}_b {\bm{k}}_i {\bm{k}}_j} \cctt{i}{j}{a}{b} a_{\p{a}}^{\dagger} a_{\p{b}}^{\dagger} a_{\p{j}} a_{\p{i}}. \label{eq:t2_def} \end{align} The primed summations indicate momentum conservation. Excited states are accessed in coupled-cluster theory using the equation-of-motion (EOM) formalism~\cite{emrich1981,koch1990,stanton1993,kobayashi1994,bartlett2007,krylov2008}, which amounts to diagonalizing the effective Hamiltonian $\bar{H}$ in a truncated space of excitations. For neutral excitations considered in this work, we use electronic excitation (EE) EOM-CCSD, where the diagonalization is performed in the basis of 1-particle+1-hole and 2-particle+2-hole states. We study excitations with zero and nonzero momentum ${\bm{q}}$ by using a basis of determinants with the corresponding momentum. 
The (right-hand) excited state is therefore given by \begin{equation} |\Psi({\bm{q}})\rangle = \left[R_1({\bm{q}}) + R_2({\bm{q}})\right] e^{T} |\Phi_0\rangle \end{equation} with \begin{align} R_1({\bm{q}}) & = \sum_{ai} \sideset{}{'}\sum_{{\bm{k}}_a {\bm{k}}_i} \ccr{i}{a} a_{\p{a}}^{\dagger} a_{\p{i}}, \label{eq:r1_def} \\ R_2({\bm{q}}) & = \frac{1}{4} \sum_{abij} \sideset{}{'}\sum_{{\bm{k}}_a {\bm{k}}_b {\bm{k}}_i {\bm{k}}_j} \ccrr{i}{j}{a}{b} a_{\p{a}}^{\dagger} a_{\p{b}}^{\dagger} a_{\p{j}} a_{\p{i}}, \label{eq:r2_def} \end{align} where the primed summations indicate that the momenta sum to ${\bm{q}}$. The use of translational symmetry in CCSD leads to a computational cost that scales like $o^2 v^4 N_k^4 $, where $o$ is the number of occupied orbitals per unit cell, $v$ is the number of virtual orbitals per unit cell, and $N_k$ is the number of $k$-points sampled. Further details about periodic Gaussian-based Hartree-Fock and CCSD can be found in Ref.~\citenum{mcclain2017}. \section{Computational Details} \label{sec:comput} We study eight semiconducting and insulating materials featuring the diamond/zinc-blende crystal structure and the rock salt crystal structure. The materials have a wide range of both direct and indirect band gaps and a variety of ionic and covalent bonding. The eight materials and the lattice constants used are given in Tab.~\ref{tab:cis_finite_size}. In all calculations, the Brillouin zone was sampled with a uniform Monkhorst-Pack mesh~\cite{monkhorst1976} of $N_k$ $k$-points that includes the $\Gamma$ point. Our calculations are performed with GTH pseudopotentials~\cite{goedecker1996,hartwigsen1998}, although we perform some all-electron calculations for comparison. For pseudopotential calculations, we use the corresponding polarized double- and triple-zeta basis sets DZVP and TZVP~\cite{vandevondele2005}. 
For all-electron calculations, we use a modification of the cc-pVDZ basis set presented in Ref.~\onlinecite{lorenz2012}, denoted AE-PVDZ. The finite-size errors of periodic calculations are influenced by the treatment of two-electron repulsion integrals (ERIs). Many of these integrals are formally divergent, due to the long-range nature of the Coulomb interaction; however, these divergent ERIs enter into expressions for observables as integrable divergences, producing well-defined results in the thermodynamic limit. In order to avoid divergent ERIs, we calculate all atomic orbital ERIs as \begin{equation} (\mu{\bm{k}}_\mu, \nu{\bm{k}}_\nu | \kappa{\bm{k}}_\kappa, \lambda{\bm{k}}_\lambda) = N_k^{-1} \int d{\bm{r}}_1 \int d{\bm{r}}_2 \frac{\rho_{\mu\nu}^{{\bm{k}}_\mu{\bm{k}}_\nu}({\bm{r}}_1) \rho_{\kappa\lambda}^{{\bm{k}}_\kappa{\bm{k}}_\lambda}({\bm{r}}_2) } {|{\bm{r}}_1 - {\bm{r}}_2 |} , \end{equation} where the orbital-pair densities have had their net charge removed, \begin{subequations} \begin{align} \rho_{\mu\nu}^{{\bm{k}}_\mu{\bm{k}}_\nu}({\bm{r}}) &= \phi_{\mu{\bm{k}}_\mu}^*({\bm{r}})\phi_{\nu{\bm{k}}_\nu}({\bm{r}}) - \overline{\rho}_{\mu\nu} , \\ \overline{\rho}_{\mu\nu} &= \frac{1}{N_k \Omega} \int d{\bm{r}} \phi_{\mu{\bm{k}}_\mu}^*({\bm{r}})\phi_{\nu{\bm{k}}_\nu}({\bm{r}}), \end{align} \end{subequations} and $\Omega$ is the volume of the unit cell. The ERIs are then calculated using periodic Gaussian density fitting with an even-tempered auxiliary basis as described in Ref.~\onlinecite{sun2017a}. We note that the use of chargeless pair densities is equivalent to neglecting the ${\bm{G}}=0$ component of the ERIs when calculated in a plane-wave basis. At the HF level, this treatment of ERIs produces an energy that converges to the thermodynamic limit as $N_k^{-1/3}$, due to the exchange energy; this can be corrected with a Madelung constant, leading to $N_k^{-1}$ convergence~\cite{oba2008,sundararaman2013}. 
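The removal of the net charge from each orbital-pair density can be illustrated on a discrete real-space grid. The sketch below uses a made-up one-dimensional density in place of $\phi_{\mu{\bm{k}}_\mu}^*\phi_{\nu{\bm{k}}_\nu}$ (grid size and density profile are arbitrary illustrative choices):

```python
import math

# Uniform grid over one "unit cell" of length L (illustrative values).
L, npts = 1.0, 400
dx = L / npts
xs = [(i + 0.5) * dx for i in range(npts)]

# A made-up orbital-pair density with nonzero net charge.
rho = [math.exp(-20.0 * (x - 0.5) ** 2) for x in xs]

# Remove the average, i.e. the net-charge (G = 0) component:
# rho -> rho - (1/Omega) * integral(rho), as in the text.
rho_bar = sum(r * dx for r in rho) / L
rho_chargeless = [r - rho_bar for r in rho]

net = sum(r * dx for r in rho_chargeless)
assert abs(net) < 1e-10   # the chargeless density integrates to zero
```

Because the subtracted constant is exactly the cell average, the resulting density carries zero net charge, so every Coulomb integral built from it is finite.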
This particular finite-size behavior is also present in the occupied orbital energies $\varepsilon_i({\bm{k}})$. We use a closed-shell implementation of periodic EOM-CCSD and study singlet excitations in this work, which are calculated by Davidson diagonalization~\cite{davidson1975,hirao1982}. The initial guess used in the iterative diagonalization is obtained from dense diagonalization of the effective Hamiltonian in the single excitation subspace. All calculations were performed with the open-source PySCF software package~\cite{sun2018}. \section{Results and Discussion} \label{sec:results} \subsection{Direct optical excitation energy} \label{sec:converg_k} In this section, we study the performance of periodic EOM-CCSD on the lowest-lying direct singlet excitation energies for the eight selected solids. Such states are relevant for absorption spectroscopy where no momentum is transferred during excitation. In periodic electronic structure calculations, it is desirable to achieve convergence with respect to the single-particle basis set, the level of correlation, and Brillouin zone sampling. The first two categories have been widely studied in molecular systems and the convergence behavior of the third one is well understood at the mean-field level~\cite{carrier2007,spencer2008,sundararaman2013}. However, at the correlated level, it is still an open question how to efficiently converge the Brillouin zone sampling in order to reach the thermodynamic limit~\cite{gruneis2010,gruneis2011,booth2016,liao2016}. \subsubsection{CIS} \label{ssec:cis} As a warm-up to EOM-CCSD, we first present results for periodic configuration interaction with single excitations (CIS), which forms a minimal theory for electronic excited states in the condensed phase. Importantly, the relatively low cost of CIS allows us to study the convergence with respect to Brillouin zone sampling up to relatively large $k$-point meshes (either $7^3$ or $8^3$).
The finite-size error of the CIS excitation energy can be analyzed approximately by considering it as the difference between the energy of two determinants, the HF one $|\Phi_0\rangle$ and a single excitation $|\Phi_{i{\bm{k}}}^{a{\bm{k}}}\rangle$. As discussed above, the energy of a single determinant calculated using our handling of ERIs exhibits a finite-size error decaying like $N_k^{-1/3}$, but which can be removed by a Madelung constant that depends only on the number of electrons in the unit cell. Therefore, the correction is identical for both states, and this leading-order error cancels in the energy difference. We thus posit that the CIS energy converges at least as fast as $N_k^{-1}$. In Fig.~\ref{fig:lif_cis}, we show the excitation energy predicted by CIS for the LiF crystal as a function of $N_k^{-1}$. Clearly, the finite-size error decays at least as fast as $N_k^{-1}$ and so we use the three-parameter fitting function \begin{align} E(N_k) = E_{\infty} + a\, N_k^{-1} + b\, N_k^{-2}, \label{eq:extrap_n1n2} \end{align} in order to extrapolate to the thermodynamic limit ($N_k\rightarrow \infty$). This fit is shown by the dashed line in Fig.~\ref{fig:lif_cis}, which includes results between $3^3$ and $7^3$ $k$-points. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.85]{lif_cis} \caption{Excitation energy of LiF calculated with CIS using the DZVP basis set. The dashed line is a fit to results obtained with a number of $k$-points between $3^3$ and $7^3$. The dotted line includes only $2^3$, $3^3$, and $4^3$ $k$-points. } \label{fig:lif_cis} \end{center} \end{figure} Using an all-electron double-zeta basis set, we performed CIS calculations on a subset of our eight materials, in order to compare to previously published CIS results from Lorenz et al.~\cite{lorenz2012}.
By extrapolation of our results obtained at large $k$-point meshes using the above form, we obtain converged excitation energies (the ``AE-PVDZ/Conv'' column of Tab.~\ref{tab:cis_finite_size}) that are within 0.1~eV of Ref.~\onlinecite{lorenz2012} for all materials studied. However, we emphasize that the CIS excitation energies are larger than experiment by 2~eV or more. \begin{table}[!b] \caption{Singlet excitation energies (eV) from all-electron and pseudopotential-based periodic CIS} \label{tab:cis_finite_size} \begin{threeparttable} \begin{ruledtabular} \begin{tabular*}{0.48\textwidth}{l @{\extracolsep{\fill}} d{-1} d{-1} d{-1} d{-1} d{-1} d{-1}} \toprule System & \multicolumn{1}{c}{$a$ (\AA)} &\multicolumn{3}{c}{AE-PVDZ} & \multicolumn{2}{c}{DZVP}\\ & & \multicolumn{1}{c}{234-Extrap\tnote{a}} & \multicolumn{1}{c}{Conv\tnote{b}} & \multicolumn{1}{c}{Ref. \citenum{lorenz2012}} & \multicolumn{1}{c}{234-Extrap\tnote{a}} & \multicolumn{1}{c}{Conv\tnote{b}} \\ \cline{1-7} Diamond & 3.567 & 10.99 & 11.76 & 11.72 & 11.07 & 11.81 \\ Si & 5.431 & 5.84 & 6.14 & 6.05 & 5.43 & 5.71 \\ SiC & 4.360 & 9.41 & 9.83 & 9.74 & 9.12 & 9.47 \\ LiF & 3.990 & 16.02 & 15.85 & 15.84 & 16.06 & 15.87 \\ LiCl & 5.130 & & & & 11.04 & 10.89 \\ MgO & 4.213 & 12.00 & 11.91 & 11.94 & 11.69 & 11.66 \\ BN & 3.615 & & & & 14.17 & 14.32 \\ AlP & 5.451 & & & & 6.59 & 6.76 \\ \bottomrule \end{tabular*} \end{ruledtabular} \begin{tablenotes} \item[a] Extrapolation based on results obtained with $2^3$, $3^3$, and $4^3$ $k$-point meshes \item[b] Extrapolation based on results obtained with $3^3$ up to $7^3$ $k$-point meshes \end{tablenotes} \end{threeparttable} \end{table} Before moving on to our EOM-CCSD results, we use CIS to assess some of the future approximations we will have to make. In particular, we will only access $k$-point meshes up to $4\times 4\times 4$ and we will use GTH pseudopotentials and corresponding DZVP basis sets. 
First, considering finite-size errors, we re-fit the CIS data using the same form but excluding all $k$-point meshes larger than $4\times 4\times 4$; for LiF, the result of this fit is shown as the dotted line in Fig.~\ref{fig:lif_cis}. Clearly, without larger $k$-point meshes, this extrapolation predicts an excitation energy which is too high by about 0.2~eV. These limited extrapolation results are shown for all materials in the ``AE-PVDZ/234-Extrap'' column of Tab.~\ref{tab:cis_finite_size} and exhibit errors of about $\pm 0.5$~eV. Second, considering pseudopotential errors, in the last two columns of Tab.~\ref{tab:cis_finite_size}, we show the excitation energies calculated with GTH pseudopotentials. In many cases, the pseudopotential error is less than 0.1~eV; naturally, materials containing heavier second-row atoms such as Si or Mg exhibit the largest errors, which are about 0.3~eV. \subsubsection{EOM-CCSD} \label{ssec:eomccsd} We now move on to our results from periodic EOM-CCSD. Again, due to the high cost of these calculations, we utilized GTH pseudopotentials, sampled the Brillouin zone with meshes up to $4\times 4\times 4$, and used basis set corrections. In particular, our primary calculations were based on an HF reference obtained with the full DZVP basis set; subsequent CCSD and EOM-CCSD calculations then employed frozen virtual orbitals, typically correlating the 4 lowest virtual bands. The results of these calculations were used to extrapolate to the thermodynamic limit using the same empirical formula as described above (Eq.~\ref{eq:extrap_n1n2}). These results are given in the ``234-Extrap'' column of Tab.~\ref{tab:ee_finite_size}.
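The ``234'' extrapolation amounts to passing Eq.~\ref{eq:extrap_n1n2} exactly through the three energies computed on the $2^3$, $3^3$, and $4^3$ meshes, i.e. to solving a $3\times 3$ linear system for $E_\infty$, $a$, and $b$. A minimal sketch follows; the input energies are placeholders for illustration, not our computed data:

```python
# Exact three-point fit of E(Nk) = E_inf + a/Nk + b/Nk^2 through energies
# at the 2^3, 3^3, and 4^3 k-point meshes (placeholder energies, in eV).
meshes = [8, 27, 64]
energies = [8.90, 8.20, 7.95]

A = [[1.0, 1.0 / n, 1.0 / n**2] for n in meshes]

def solve3(A, y):
    """Solve the 3x3 linear system A x = y by Gauss-Jordan elimination."""
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for i in range(3):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[r][3] for r in range(3)]

E_inf, a, b = solve3(A, energies)

# The fit passes exactly through all three points; E_inf is the
# extrapolated thermodynamic-limit (Nk -> infinity) estimate.
for n, e in zip(meshes, energies):
    assert abs(E_inf + a / n + b / n**2 - e) < 1e-9
```

With larger meshes available (as for CIS), the same form is instead fit in a least-squares sense rather than solved exactly.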
To these values, we then added two basis set corrections, $\Delta_\mathrm{frz}$ and $\Delta$TZ; $\Delta_\mathrm{frz}$ is the energy difference between complete and frozen-orbital DZVP calculations with a $3\times 3\times 3$ $k$-point mesh; $\Delta$TZ is the energy difference between TZVP and DZVP calculations with a $2\times 2\times 2$ $k$-point mesh. Whereas $\Delta_\mathrm{frz}$ is typically between 0.2 and 0.8~eV, $\Delta$TZ is less than 0.1~eV. \begin{table}[!t] \caption{Singlet excitation energies (eV) from periodic EOM-CCSD}\label{tab:ee_finite_size} \begin{threeparttable} \begin{ruledtabular} \begin{tabular*}{0.48\textwidth}{l @{\extracolsep{\fill}} d{-1} d{-1} d{-1} d{-1} c} \toprule System & \multicolumn{1}{c}{234-Extrap\tnote{a}} & \multicolumn{1}{c}{$\Delta_{\mathrm{frz}}$\tnote{b}} & \multicolumn{1}{c}{$\Delta$TZ\tnote{c}} & \multicolumn{1}{c}{Final} & \multicolumn{1}{c}{Expt.} \\ \hline Diamond & 7.70 & -0.18 & -0.05 & 7.47 & 7.3~\cite{phillip1964} \\ Si & 3.96 & -0.37 & -0.07 & 3.52 & 3.4~\cite{lautenschlager1987} \\ SiC & 6.53 & -0.19 & -0.08 & 6.27 & 6.0~\cite{logothetidis1996} \\ LiF & 14.29 & -0.82 & +0.01\tnote{d} & 13.48 & 12.7~\cite{rao1975}, 13.68~\cite{abbamonte2008} \\ LiCl & 9.62 & -0.27 & -0.07 & 9.29 & 8.9~\cite{eby1959} \\ MgO & 8.55 & -0.19 & -0.07 & 8.29 & 7.6~\cite{roessler1967} \\ BN & 11.48 & -0.35 & -0.02 & 11.11 & 11~\cite{tararan2018} \\ AlP & 4.97 & -0.42 & -0.07 & 4.48 & 4.6~\cite{hwang2014,wing2019} \\ \hline MSE & & & & 0.24 & \\ MAE & & & & 0.27 & \\ \bottomrule \end{tabular*} \end{ruledtabular} \begin{tablenotes} \item[a] Extrapolation based on frozen-virtual DZVP results obtained with $2^3$, $3^3$, and $4^3$ $k$-point meshes \item[b] $\Delta_\mathrm{frz}$ is the energy difference between complete DZVP and frozen-virtual DZVP on a 3$\times$3$\times$3 $k$-point grid \item[c] $\Delta$TZ is the energy difference between TZVP and DZVP on a 2$\times$2$\times$2 $k$-point grid \item[d] For LiF, the TZVP basis set has severe linear 
dependencies and was modified by doubling the exponents of the two most diffuse s- and p-type primitive Gaussian functions of Li (all basis functions of F are unchanged) \end{tablenotes} \end{threeparttable} \end{table} Basis-set corrected excitation energies as a function of $N_k^{-1}$ are shown in \cref{fig:ee_vs_kpts} for diamond, Si, LiF, and MgO. Our final values for all eight materials are given in the ``Final'' column of Tab.~\ref{tab:ee_finite_size} and compared to experiment. Unsurprisingly, EOM-CCSD is a massive improvement over CIS; for the eight solids studied here, EOM-CCSD predicts excitation energies with a mean signed error (MSE) of 0.24~eV and a mean absolute error (MAE) of 0.27~eV. A few results in Tab.~\ref{tab:ee_finite_size} are noteworthy. First, we note that the excitation energy of cubic BN is frequently reported as 6.4~eV, which is almost 5~eV lower than the EOM-CCSD prediction. However, a GW/BSE calculation reported in 2004~\cite{satta2004} also found a value of around 11~eV and proposed a reinterpretation of the experimental data. Indeed, a joint theory-experiment paper published in 2018~\cite{tararan2018} attributed the lower energy absorption features to a combination of defects and domains of hexagonal BN, further supporting a direct excitation energy of about 11~eV. Second, the excitation energy of AlP has frequently been reported as 3.6~eV, about 1~eV below the EOM-CCSD prediction. A 2019 publication reporting the results of TDDFT and GW/BSE calculations~\cite{wing2019} suggested that the 3.6~eV feature seen in experimental spectra is due to the \textit{indirect} transition of AlP. Experimental spectra show a much stronger peak at around 4.6~eV~\cite{hwang2014}, which is the likely value of the first direct excitation energy. The two materials with the largest error are LiF and MgO. 
Whereas absorption spectra of LiF typically show a narrow peak at about 12.7~eV~\cite{rao1975} (leading to an overprediction of 0.8~eV), inelastic X-ray scattering data is consistent with a value of 13.68~eV~\cite{abbamonte2008}. For MgO, EOM-CCSD overpredicts the excitation energy by about 0.7~eV. This tendency to overestimate excitation energies is the same as the one typically observed in molecules and could potentially be addressed via inclusion of triple excitations. However, we also emphasize that our calculations include no information about finite-temperature or exciton-phonon effects, which are expected to contribute to a reduction in the purely electronic excitation energy~\cite{marini2008,giustino2010,antonius2014,ponce2014,zacharias2015}. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.85]{diamond-silicon-lif-mgo-eomee-and-cis} \caption{Excitation energy of diamond, Si, LiF, and MgO calculated with CIS and EOM-CCSD. Results are extrapolated to $N_k \rightarrow \infty$ assuming an error with finite-size scaling as described in the text. The dashed lines are fitted based on results obtained with a number of $k$-points between $3^3$ and $7^3$. The dotted lines include only $2^3$, $3^3$, and $4^3$ $k$-points. A variety of experimental results are indicated by solid lines. } \label{fig:ee_vs_kpts} \end{center} \end{figure} \subsection{Exciton binding energy} \label{sec:exciton_binding} We now consider the exciton binding energy, defined as the difference between the fundamental band gap and the first neutral excitation energy (i.e.~the optical gap). Within periodic coupled-cluster theory, the band gap is obtained from the calculation of the ionization potential (IP-EOM-CCSD) and the electron affinity (EA-EOM-CCSD), as described in Ref.~\onlinecite{mcclain2017}. Here, our IP/EA-EOM-CCSD calculations are basis-set-corrected in the same way as described for our EOM-CCSD calculations. 
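Because IP/EA-type quantities inherit the slow $N_k^{-1/3}$ convergence of the occupied orbital energies discussed in section~\ref{sec:comput}, a two-point fit of the form $E(N_k)=E_\infty + a\,N_k^{-1/3}$ suffices to extrapolate them. A minimal sketch with placeholder binding energies (not our computed values):

```python
# Two-point extrapolation E(Nk) = E_inf + a * Nk**(-1/3), appropriate for
# quantities carrying the exchange-driven finite-size error of the
# occupied orbital energies. The binding energies below are placeholders.
n1, n2 = 27, 64                      # 3x3x3 and 4x4x4 k-point meshes
e1, e2 = 0.55, 0.80                  # placeholder exciton binding energies (eV)

x1, x2 = n1 ** (-1.0 / 3.0), n2 ** (-1.0 / 3.0)   # 1/3 and 1/4
a = (e1 - e2) / (x1 - x2)
E_inf = e1 - a * x1                  # Nk -> infinity limit

# Consistency: both input points are reproduced exactly by the fit.
assert abs(E_inf + a * x1 - e1) < 1e-12
assert abs(E_inf + a * x2 - e2) < 1e-12
```

The two-point solve is exact by construction, so the quality of the extrapolation rests entirely on the assumed $N_k^{-1/3}$ form.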
We focus on LiF, which is a direct-gap ionic insulator with a concomitantly large exciton binding energy. The minimum band gap occurs at the $\Gamma$ point, which is where we calculate the IP and EA. In Fig.~\ref{fig:exciton_binding}(a), we show the fundamental (IP/EA) gap and the optical (EE) gap, as a function of the number of $k$-points sampled in the Brillouin zone. Clearly, the fundamental gap is larger than the optical gap, such that the exciton binding energy is positive, as expected. The exciton binding energy is plotted in Fig.~\ref{fig:exciton_binding}(b), and is seen to be around 0.8~eV with a $4\times 4\times 4$ $k$-point mesh. Unlike neutral excitation energies, which conserve particle number, the IP and EA are charged excitation energies corresponding to a change in particle number. In particular, the same approximation considered above for CIS, i.e.~the use of a single determinant, leads to Koopmans' approximation to the ionization potential, IP$\approx -\varepsilon_i$; as discussed above, occupied orbital energies converge slowly with $N_k$ when the ERIs are handled as described in section~\ref{sec:comput}. Therefore, as shown in Fig.~\ref{fig:exciton_binding}(b), we fit the exciton binding energies calculated with $3\times 3\times 3$ and $4\times 4\times 4$ grids to the form $E(N_k) = E_\infty + a N_k^{-1/3}$, in order to extrapolate to the thermodynamic limit. Doing so gives 1.47~eV, which is in good agreement with the experimental value of 1.6~eV~\cite{roessler1967}. We note that if we separately extrapolate the band gap and the optical gap and take the difference, we get a larger value of 2.74~eV. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.85]{lif-exciton-binding} \caption{Fundamental band gap and optical gap (a) and exciton binding energy (b) of LiF from periodic EOM-CCSD.
Due to the behavior of IP/EA-EOM-CCSD, the fundamental band gap (red squares) and exciton binding energy (brown diamonds) are extrapolated to $N_k \rightarrow \infty$ assuming a finite-size scaling of the form $E_{N_k} = E_{\infty} + a\,N_k^{-1/3}$. } \label{fig:exciton_binding} \end{center} \end{figure} \subsection{Exciton dispersion} \label{sec:ee_vs_kshift} \begin{figure}[b!] \begin{center} \includegraphics[scale=0.85]{lif-exciton-dispersion} \caption{Exciton dispersion of LiF. Periodic EOM-CCSD data are obtained with different $k$-point meshes in order to access more values of the momentum transfer ${\bm{q}}$. Rigid shifts were applied, as described in the text. Also shown are the experimental inelastic X-ray scattering (IXS) data from Ref.~\citenum{abbamonte2008} and the tight-binding (TB) model fitted to that data.} \label{fig:ee_vs_kshift} \end{center} \end{figure} Although optical absorption spectroscopy and modern theoretical approaches have primarily focused on excitons with ${\bm{q}} = 0$, it is important to also consider excitons that carry a finite momentum ${\bm{q}}$. For example, electron-hole pairs with finite momentum are realized in many indirect semiconductors and are also important for a quantitative modeling of the exciton-phonon interaction, excitonic dynamics, and radiative lifetimes. The simulation of excitons with finite momentum is straightforward in EOM-CCSD, and simply requires that the involved crystal momenta sum to ${\bm{q}}$ in \cref{eq:r1_def,eq:r2_def}. The EOM Hamiltonian is block-diagonal with respect to the exciton momentum and thus all accessible momenta can be studied independently and calculated in parallel. Because the exciton momentum ${\bm{q}}$ corresponds to a momentum difference, a periodic calculation only has access to values ${\bm{q}} = {\bm{k}} - {\bm{k}}^\prime$, where ${\bm{k}}$ are momenta from the $k$-point mesh.
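To make the constraint ${\bm{q}} = {\bm{k}} - {\bm{k}}^\prime$ concrete: on a uniform $N\times N\times N$ mesh, the set of accessible momentum transfers (modulo a reciprocal lattice vector) coincides with the mesh itself, so a finer mesh is the only way to reach new values of ${\bm{q}}$. A purely geometric sketch, labeling $k$-points by integer triples $i$ with $k = i/N$:

```python
# Accessible exciton momenta q = k - k' (mod G) on a uniform N^3 mesh.
# k-points are labeled by integer triples i (mod N), i.e. k = i/N in
# fractional coordinates; integer arithmetic keeps the comparison exact.
from itertools import product
from fractions import Fraction

def mesh(n):
    return set(product(range(n), repeat=3))

def momentum_transfers(n):
    pts = mesh(n)
    return {tuple((a - b) % n for a, b in zip(k1, k2))
            for k1 in pts for k2 in pts}

# The set of q values reproduces the mesh itself:
print(momentum_transfers(3) == mesh(3))  # True

# ...so a 4^3 mesh reaches momenta that a 3^3 mesh cannot. Compare the
# two meshes in fractional coordinates (exact rational arithmetic):
frac = lambda pt, n: tuple(Fraction(i, n) for i in pt)
q3 = {frac(p, 3) for p in momentum_transfers(3)}
q4 = {frac(p, 4) for p in momentum_transfers(4)}
print(len(q4 - q3))  # 63: every 4^3 point except Gamma is new
```

This is exactly why the dispersion calculations below combine results from different mesh sizes.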
Therefore, different $k$-point meshes can be utilized in order to access different values of the exciton momentum ${\bm{q}}$, albeit with an impact on the finite-size error. Again we focus on LiF, for which we show the EOM-CCSD exciton dispersion in Fig.~\ref{fig:ee_vs_kshift}. Our results are compared to inelastic X-ray scattering (IXS) spectroscopy measurements performed by Abbamonte et al.~\cite{abbamonte2008} (open circles). We utilize a $3\times 3\times 3$ $k$-point mesh (up-pointing triangles) and a $4\times 4\times 4$ $k$-point mesh (down-pointing triangles) in order to access more momenta ${\bm{q}}$ in the Brillouin zone. The $3\times 3\times 3$ values were rigidly shifted in order to achieve agreement at the $\Gamma$ point and subsequently, both dispersion data sets were rigidly shifted in order to place the exciton band center at 14.2~eV, as was observed experimentally. From their experimental data along the $\Gamma-X$ line, those authors parameterized a tight-binding model (with band center 14.2~eV and nearest-neighbor transfer integral $-0.065$~eV), which we have extended to the entire Brillouin zone for comparison (dashed line). We can see that the EOM-CCSD results are in good agreement with the IXS data, with an error less than 0.2 eV along the $\Gamma \rightarrow X$ direction. Our largest discrepancy is at the $L$ point, although we emphasize that experimental data are unavailable at that momentum and the disagreement may indicate a failure of the simple tight-binding model. \subsection{Exciton-phonon interaction} \label{sec:ee_vs_volume} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.85]{diamond-pes} \caption{Ground- and excited-state potential energy surface of diamond as a function of the unit cell volume (a) and the behavior of the excitation energy as a function of hydrostatic strain (b).
Results are obtained from CCSD and EOM-CCSD with the DZVP basis set and a $3\times 3\times 3$ sampling of the Brillouin zone.} \label{fig:diamond_pes_ee_vs_vol} \end{center} \end{figure} Finally, we consider the behavior of excitation energies as a function of lattice strain, as predicted by EOM-CCSD. In Fig.~\ref{fig:diamond_pes_ee_vs_vol}(a), we show the ground-state and first excited-state potential energy surfaces of diamond, associated with hydrostatic strain, i.e.~isotropic variation of the unit cell. While the ground-state energy minimum occurs at a lattice constant of 3.567~\AA~(which is fortuitously the exact experimental value~\cite{haas2009,kittel1996}), the excited state has a minimum which is shifted to a larger lattice constant of 3.674~\AA. The behavior of the excitation energy as a function of lattice strain can be used to determine properties of the exciton-phonon interaction. In particular, the interaction with acoustic phonons can be modeled within the deformation potential approximation for the change in the excitation energy~\cite{bardeen1950,herring1956}, $\Delta E_\mathrm{X} = 3 D \varepsilon$, where $D$ is the hydrostatic deformation potential, $3\varepsilon$ is the trace of the strain tensor, $\varepsilon = (a-a_0)/a_0$ is the relative strain, $a$ is the strained lattice constant, and $a_0$ is the unstrained lattice constant. Defined in this way, our calculations predict the excitonic hydrostatic deformation potential of diamond to be $D=-2.84$~eV. Repeating the same procedure for MgO, we predict a larger $D=-11.73$~eV, indicating a stronger exciton-phonon interaction than in diamond. Experimentally determined deformation potentials for exciton-phonon interactions are sporadic in the literature and we consider a direct comparison on a wider variety of materials to be a topic for future work. 
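The extraction of $D$ from the deformation potential relation $\Delta E_\mathrm{X} = 3 D \varepsilon$ is just a linear fit of the excitation energy against the relative strain, with the slope divided by three. The sketch below uses synthetic strain/energy pairs built from the diamond value $D=-2.84$~eV quoted above, solely so the extraction can be verified; the numbers are not our computed EOM-CCSD data:

```python
# Hydrostatic deformation potential from the strain dependence of the
# excitation energy, via Delta E_X = 3 * D * eps  =>  D = slope / 3,
# where slope = dE_X/d(eps) from a linear least-squares fit.
# Strain/energy values below are illustrative placeholders.

def deformation_potential(strains, excitation_energies):
    n = len(strains)
    mx = sum(strains) / n
    my = sum(excitation_energies) / n
    slope = (sum((x - mx) * (y - my)
                 for x, y in zip(strains, excitation_energies))
             / sum((x - mx) ** 2 for x in strains))
    return slope / 3.0

# Synthetic data built with D = -2.84 eV (the diamond value quoted in
# the text) so that the extraction can be checked.
D_true = -2.84
strains = [-0.010, -0.005, 0.0, 0.005, 0.010]
energies = [7.0 + 3.0 * D_true * e for e in strains]
print(round(deformation_potential(strains, energies), 4))  # -2.84
```

The same routine applied to the MgO curve would return the larger (in magnitude) value reported above.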
\section{Conclusions} \label{sec:conclusions} To summarize, we have presented the results of periodic EOM-CCSD for various neutral excited-state properties of semiconductors and insulators. The method has shown promising results for optical excitation energies, exciton binding energies, exciton dispersion relations, and exciton-phonon interaction energies. Collectively, these results demonstrate that EOM-CCSD is a promising and tractable approach for the study of excited-state properties of solids. While we have attempted to address finite-size errors, arising from incomplete sampling of the Brillouin zone, the high cost of EOM-CCSD precludes a definitive extrapolation. Future work will be focused on both analytical and numerical exploration of finite-size errors and strategies for amelioration, such as those that have been developed for ground-state CCSD~\cite{gruneis2011,booth2016,liao2016}. In a similar direction, we are exploring the use of approximations to EOM-CCSD with reduced scaling~\cite{goings2014}, which will enable the study of simple solids with more $k$-points or the study of solids with more complex unit cells. Additional future work is targeted at the study of simple metals (where finite-order perturbation theory breaks down~\cite{shepherd2013}), the study of exciton-phonon interactions beyond the deformation potential approximation, and the efficient calculation of optical spectra. \begin{acknowledgments} X.W.~thanks Dr.~Varun Rishi for helpful discussions. This work was supported in part by the National Science Foundation under Grant No.~CHE-1848369. All calculations were performed using resources provided by the Flatiron Institute. The Flatiron Institute is a division of the Simons Foundation. \end{acknowledgments}
\section{Introduction} Since the dawn of the \emph{Fermi}-LAT era it has become clear that blazars are the dominant source type ($\sim$75\% of sources, excluding unknown blazar types) in the extragalactic $\gamma$-ray sky above 10 GeV \citep{Ajello2017}. The spectral energy distributions (SED) of blazars are supposedly dominated by the emission from relativistic jets \citep{Lewis2018}. In leptonic models, blazar jets radiate from radio to $\gamma$-ray via two primary mechanisms, synchrotron and inverse Compton scattering (IC); these are responsible for radio to UV and X-ray to $\gamma$-ray emission, respectively (but see also \citealt{Cerruti2015}, for hadronic models explaining the high energy emission via proton synchrotron in the jets). \citet{Fossati1998} found a correlation (the `blazar sequence') in blazar SEDs between the synchrotron hump and the IC hump (e.g., \citealt{KimDW2017}) which implies a tight connection between the powerful radio jet and $\gamma$-ray emission in blazars \citep{Hughes2011, Jorstad2013}. Indeed, several studies suggest a strong correlation between the radio and $\gamma$-ray emission \citep{Tavares2012, Lico2017}. However, the origin and physical radiation mechanisms of the high-energy ($\gamma$-ray) emission in blazar jets are still a matter of debate \citep{Chatterjee2012, Fuhrmann2014, Moerbeck2014, Jorstad2016, Lewis2018}. Emission regions located within relativistic jets (e.g., \citealt{Marscher2008, Boccardi2017}) are thought to be the production site of $\gamma$-radiation -- notably, of $\gamma$-ray flares -- in blazars. Models of the location of $\gamma$-ray flares broadly suggest two potential locations (e.g., \citealt{Dotson2012}): the broad line region (BLR) and the radio core which is the surface of unity optical depth. At sub-parsec scale distances from the black hole (BH), optical--UV photons from the BLR can be up-scattered by electrons accelerated in a relativistic shock \citep{Sikora1994}.
Such shocks are supposed to occur when a disturbance moving along the jet passes through the acceleration and collimation zone (ACZ) (see also \citealt{Marscher2008}, for the canonical model of a blazar jet). In opposition to this scenario, many observations which found concurrence of events at different wavebands (including $\gamma$-rays) point to the radio core (or a region downstream of the core) as the place of origin of $\gamma$-ray events. In this scenario, $\gamma$-radiation is produced via inverse Compton scattering of radio-to-IR seed photons from the jet itself (e.g., \citealt{Marscher2010}) or from the dusty torus located a few parsecs from the black hole \citep{Tavares2011}. The parsec-scale scenario has been supported by \citet{Jorstad2001a, Jorstad2001b} who observed a connection of the $\gamma$-ray emission with the ejection of (apparently) superluminal jet components and the flux density of the VLBI core. Further insights are provided by multi-waveband photometry plus polarimetry VLBI campaigns. \citet{Agudo2011} reported that a disturbance propagating through the 7-mm core caused a very strong $\gamma$-ray outburst with counterparts at frequencies from radio to $\gamma$-rays in the blazar AO 0235+164 in 2008. The disturbance showed strong linear polarization, corresponding to the signature of a moving shock \citep{Marscher2010, Wehrle2012, Jorstad2013}. \citet{Lico2014} found a correlation between the flux of the radio core (at 15, 24, and 43 GHz) and $\gamma$-ray emission in the blazar Mrk\,421 throughout 2011, albeit with substantial scatter (with Pearson correlation coefficients of 0.42 to 0.46). During their observations, only the first $\gamma$-ray peak (on MJD 55627) had a radio counterpart (on MJD 55621) in the 7-mm core flux. Given the sampling interval of the radio observations, it is unclear where the radio counterpart peaked.
They also reported no spectral hardening in the $\gamma$-ray spectrum during the enhanced $\gamma$-ray state in Mrk\,421, which is a rare feature in blazars. The BL Lac object 1749+096 (OT\,081, redshift 0.32, image scale 4.64 pc/mas, assuming $H_{0}$ = 71 km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\Lambda}$ = 0.73, and $\Omega_{m}$ = 0.27) is a flat spectrum radio source emitting variable radio radiation in total intensity and linear polarization \citep{Stickel1988, Gabuzda1996}. 1749+096 has been classified as a low-synchrotron peaked (LSP) blazar,\footnote{\url{http://www.physics.purdue.edu/MOJAVE/sourcepages/1749+096.shtml}} has been observed at X-rays, but was not detected at $\gamma$-rays before the advent of \emph{Fermi} \citep{Sambruna1999}. A review of the physical characteristics of this highly compact radio source can be found in \citet{Lu2012}, covering features such as multi-frequency variability from radio to X-ray, a quiescent flux level of below 1 Jy at high radio frequencies (above 37\,GHz), a curved extended jet, and superluminal motion of jet components, with apparent speeds from $5c$ to $21c$. \citet{Jorstad2017} presented a recent estimate of the Doppler factor of $\sim$17.7 and the viewing angle of $\sim$2.4$^{\circ}$ in the jet of 1749+096. The first $\gamma$-ray detection of 1749+096 was reported by \citet{Abdo2009}. Interestingly, there were no $\gamma$-ray flares until 2015, which is why the $\gamma$-ray outburst in 2016 is notable. \citet{osullivan2009} revealed the linear polarization of the 1749+096 jet at 4.6\,--\,43\,GHz by using very long baseline array (VLBA) observations. They found that the radio core shows a degree of linear polarization of about 3\% across the range of their frequencies, with the polarization angle being about $-50^{\circ}$ at 43\,GHz. Additionally, 1749+096 is known to show a Faraday rotation measure (RM) significantly different from zero at cm-wave bands \citep{Pushkarev2001}.
Contrary to this, however, \citet{osullivan2009} found no significantly non-zero RM at frequencies up to 43\,GHz in the radio core during a flare, thus indicating that the underlying magnetic field is most likely responsible for EVPA changes in some specific circumstances \citep{Homan2002}. Recently, it was also found that 1749+096 shows variability in optical polarization on time scales of a few days \citep{Uemura2017}. In this study, we explore the powerful $\gamma$-ray outburst in 1749+096 that occurred in the middle of 2016 by using multi-waveband observations including, especially, VLBI data. Overall, the multi-waveband data span about two years (2015 and 2016) across a frequency range from radio to $\gamma$-rays obtained from the Korean VLBI Network (KVN) at 22, 43, 86, and 129\,GHz; the Owens Valley Radio Observatory (OVRO) at 15\,GHz; the VLBA at 43\,GHz; the All-Sky Automated Survey for Supernovae (ASAS-SN) in the optical V-band; \emph{Swift}-XRT at X-rays; and \emph{Fermi}-LAT at $\gamma$-rays. Due to a rather spotty $\gamma$-ray light curve, we focus on a specified $\gamma$-ray active period spanning $\sim$5 months (see Figure~\ref{fig:f1}) which includes both the $\gamma$-ray outburst and a notable local peak (temporary flux enhancement). We address multi-waveband correlations, the evolution of the $\gamma$-ray spectrum, and the linear polarization at 43\,GHz as observed by the VLBA. We discuss the connection between the $\gamma$-ray events and radio core activity, assuming that the primary candidate of the $\gamma$-ray production site is the radio core. \section{Observations and Data} \subsection{KVN 22/43/86/129 GHz \& VLBA 43 GHz} \label{sec:obsA} We obtained multi-frequency VLBI data from the Interferometric Monitoring of Gamma-ray Bright AGNs (iMOGABA) project.\footnote{\url{http://radio.kasi.re.kr/sslee}} iMOGABA employs the KVN for multi-frequency simultaneous observations at 22, 43, 86, and 129\,GHz in single polarization (LCP).
The KVN consists of three identical (diameter of 21\,m) antennas with baseline lengths up to $\sim$470\,km; accordingly, angular resolutions are on the order of a few milliseconds of arc. iMOGABA has been monitoring $\sim$30 $\gamma$-ray bright AGNs monthly since late 2012 \citep[see, e.g.,][]{Lee2013, Algaba2015}. Data reduction was conducted with the KVN pipeline \citep{Hodgson2016} which applies all standard procedures required for reduction of VLBI data. We used the frequency phase transfer (FPT) technique \citep{Zhao2018} to improve the quality of the data from the higher frequency bands. We followed the procedure used in \citet{Lee2016} for imaging our data with the software package \texttt{Difmap} \citep{Shepherd1997}. We conservatively estimated an error of 10\% on the flux density of each image component; for our 129\,GHz data, we applied a 30\% error due to possible systematic amplitude losses \citep{KimDW2017}. Usually, we detected only one component (i.e., the KVN core) at the map center over the four frequencies owing to the relatively large beam size of the KVN and its limited sensitivity. In a few cases, closure phase analysis at 43 and 86\,GHz made it possible to detect a jet pointing toward the northeast -- which is consistent with the known morphology of the radio jet of 1749+096 (e.g., \citealt{Lu2012}). Given the performance and limitations of the KVN, we consider those detections marginal and use only the KVN core in this study. We selected seven VLBA observations (2016 June to 2016 November) around the time of the two $\gamma$-ray events (see Figure~\ref{fig:f1}) from the Boston University blazar group (BU) archival dataset\footnote{\url{http://www.bu.edu/blazars/VLBA_GLAST/1749.html}} to look into the source more deeply, including the linear polarization at 43\,GHz. The BU group has been monitoring several tens of $\gamma$-ray bright blazars monthly via the VLBA in close association with the \emph{Fermi}-LAT \citep{Jorstad2016}. 
The public BU data were already fully calibrated as described in \citet{Jorstad2017}. Hence, we simply used the calibrated visibility data to produce Stokes $I$, $Q$, and $U$ maps. We imaged the data with \texttt{Difmap} and produced linear polarization maps using the Astronomical Image Processing System (AIPS) task \texttt{COMB} \citep{Moorsel1996}. We note that the BU observations missed the SC and KP stations on 2016 September 5 and the HN and MK stations on 2016 October 6. Hence, results from those epochs need to be interpreted with care. We fit circular Gaussian profiles to the image components in the total intensity maps to investigate the evolution of their flux densities, assuming again a conservative error of 10\%. \subsection{OVRO 15 GHz} \label{sec:obsB} We collected 15\,GHz data of 1749+096 from the OVRO 40\,m telescope monitoring program \citep{Richards2011}. In close association with the \emph{Fermi}-LAT program, the OVRO has been monitoring more than 1800 blazars about twice per week since 2008. The large sample size and the high cadence allow for a detailed exploration of blazar variability at cm-wavelengths. Details of the data reduction process can be found in \citet{Richards2011}. The calibrated OVRO data is available via the OVRO Internet database.\footnote{\url{http://www.astro.caltech.edu/ovroblazars/index.php?page=home}} In this study, we use OVRO flux data spanning from the beginning of 2015 to early 2017. \subsection{ASAS-SN} \label{sec:obsC} We obtained optical V-band data from the All-Sky Automated Survey for Supernovae (ASAS-SN) project\footnote{\url{http://www.astronomy.ohio-state.edu/asassn/index.shtml}} \citep{Shappee2014, Kochanek2017}. The survey is ongoing every night with 20 telescopes located around the globe including Hawaii, Chile, and South Africa. The ASAS-SN aims to survey and discover bright transients down to a V-band magnitude of about 17 across the entire sky. 
The project provides an online tool that produces an aperture photometry light curve for an arbitrary point on the celestial sphere, thus making it possible to study sources other than supernovae. We extract the optical light curve of 1749+096 by using this online tool. \subsection{Swift-XRT} \label{sec:obsD} We collected X-ray data (0.3--10 keV) from \emph{Swift}-XRT observations \citep{Gehrels2004}. The \emph{Swift}-XRT is a mission launched in 2004 to investigate X-ray afterglows of $\gamma$-ray bursts (GRB). The UK Swift Science Data Centre\footnote{\url{http://www.swift.ac.uk/index.php}} provides an automatic online pipeline that produces high level XRT products for non-GRBs with the software package HEASOFT v6.22. We employ the online pipeline to generate an X-ray light curve of 1749+096 with a $3\sigma$ cutoff. Details of the pipeline and the data reduction process are provided by \citet{Evans2007}. \begin{figure*} \includegraphics[width=15cm]{light.png} \caption{Multi-waveband light curves of 1749+096. From top to bottom: \emph{Fermi}-LAT at 0.1--300\,GeV, \emph{Swift}-XRT at 0.3--10\,keV, ASAS-SN at optical V-band, KVN (iMOGABA) at 22/43/86/129\,GHz plus VLBA (BU) at 43\,GHz, and OVRO at 15\,GHz. The data span the period from 2014 December 29 to 2017 February 16. The light green shaded region indicates the $\gamma$-ray active period (MJD 57540--57700). The black dashed vertical line indicates the 2016 July 19 (MJD 57588) $\gamma$-ray outburst.} \label{fig:f1} \end{figure*} \subsection{Fermi-LAT} \label{sec:obsE} The \emph{Fermi}-LAT $\gamma$-ray space mission was launched in 2008 June to explore the high energy sky \citep{Atwood2009}. The LAT is designed to cover the energy range of 20\,MeV--300\,GeV, and performs an all-sky survey with its large field of view (2.4 sr). We use the \emph{Fermi} software \texttt{ScienceTools v10r0p5} and the instrument response function (IRF) \texttt{P8R2\_SOURCE\_V6} to extract light curves of 1749+096 from the raw LAT data.
We essentially follow the data reduction steps employed by \citet{Prince2017}. The initial search radius was set to $20^{\circ}$ around 1749+096. We selected events in the SOURCE class (Pass 8) with an energy range of 0.1--300 GeV. To exclude atmospheric background events (i.e., contamination from the Earth limb $\gamma$-radiation) and select good time intervals (GTIs), we applied the \texttt{zmax} option in \texttt{gtltcube} (\texttt{zmax=90$^{\circ}$}) plus the filter \texttt{DATA\_QUAL==1 \&\& LAT\_CONFIG==1} which is the currently recommended procedure. We extracted source models within the search window from the third \textit{Fermi}-LAT catalog (3FGL). We set the spectral parameters for sources within and outside the Region of Interest (ROI) of 10$^{\circ}$ to be free and fixed to the catalog values, respectively. A power-law (PL) function was applied to the photon spectra of 1749+096. We performed an unbinned likelihood analysis where the significance of the $\gamma$-ray flux is evaluated by maximum likelihood (ML) test statistics (e.g., \citealt{Paliya2015}). We modelled the contribution by diffuse background sources with the recent isotropic background model \texttt{iso\_P8R2\_SOURCE\_V6\_v06} and the galactic diffuse emission model \texttt{gll\_iem\_v06}. We use the Perl script \texttt{like$\_$lc.pl}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/}} written by R. Corbet to produce $\gamma$-ray light curves. A weekly $\gamma$-ray light curve of 1749+096 is generated using the criterion \texttt{TS=9} (corresponding to a $3\sigma$ cutoff); flux values below this threshold are rejected. We also provide a 3-day binned $\gamma$-ray light curve for further analysis. During the photon index analysis, we noted and rejected a few outliers (three and two data points in the weekly and 3-day binned data, respectively) deviating by more than $2\sigma$ from the weekly photon index trend.
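The correspondence between the \texttt{TS=9} criterion and a $3\sigma$ cutoff follows from Wilks' theorem: for one additional free source, the test statistic is asymptotically $\chi^2$-distributed with one degree of freedom, so the Gaussian-equivalent significance is approximately $\sqrt{\mathrm{TS}}$. A small illustrative helper (not part of the actual ScienceTools pipeline; function names are ours):

```python
# Why TS = 9 corresponds to a ~3 sigma detection cut: for one degree of
# freedom (Wilks' theorem) the Gaussian-equivalent significance is
# approximately sqrt(TS). Illustrative helper, not ScienceTools code.
import math

def ts_to_sigma(ts):
    return math.sqrt(ts)

def keep_detection(flux, ts, ts_min=9.0):
    """Mimic the light-curve cut: keep a flux point only if TS >= ts_min."""
    return flux if ts >= ts_min else None

print(ts_to_sigma(9.0))              # 3.0
print(keep_detection(2.9e-6, 25.0))  # kept (TS above threshold)
print(keep_detection(1.0e-7, 4.0))   # rejected -> None
```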
All relevant files and data are provided by the \emph{Fermi} data web site.\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/}} \section{Results and Analysis} \subsection{Multi-waveband light curves} \label{sec:resA} Figure~\ref{fig:f1} shows the multi-waveband light curves spanning from 2014 December 29 to 2017 February 16 (MJD 57020--57800). Until mid-2016, 1749+096 is $\gamma$-ray quiet with fluxes $\lesssim2\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$ and detected only occasionally, whereas radio observations find enhanced activity peaking around mid-2015. In 2016 July, a powerful $\gamma$-ray outburst occurs, rising to $\sim$15 times the quiescent level within 36 days (in the 3-day binned data). The $\gamma$-ray flux peaked at $2.9\times10^{-6}$ ph cm$^{-2}$ s$^{-1}$ on 2016 July 19 (MJD\,57588$\pm$1.5\,d; 3-day binned data). Counterparts to this $\gamma$-ray event can be found at all wavebands (radio, optical, and X-ray). The X-ray, optical, and cm-wave (OVRO 15\,GHz) counterparts peaked on 2016 July 20 (MJD\,57589), 2016 July 18 (MJD\,57587), and 2016 July 22 (MJD\,57591), respectively, meaning that the X-ray and optical counterparts were simultaneous with the $\gamma$-ray outburst within the error of $\pm$1.5\,day given by the time resolution of the $\gamma$-ray light curve. The 15-GHz peak occurs $\sim$3 days after the $\gamma$-ray peak; the difference might actually be larger because the OVRO light curve shows a gap right after its apparent maximum on MJD 57591. In addition to the 2016 July $\gamma$-ray outburst, a smaller temporary $\gamma$-ray flux enhancement occurred on 2016 October 2 (MJD 57663$\pm$1.5 d), reaching up to about $3.9\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$. This event appears to have no counterparts at radio wavelengths. Unfortunately, the available information at mm-wavelengths is poor during the period we specify as `$\gamma$-ray active' from 2016 June 1 to 2016 November 8 (the period indicated in Figure~\ref{fig:f1}).
However, we do see a radio counterpart to the $\gamma$-ray outburst at mm-wavelengths from the BU data; due to the rather large sampling intervals of the BU data, $\sim$1\,month, it is unclear where the mm-wave light curve peaks exactly. From the iMOGABA and OVRO light curves, we find a period of enhanced mm-radio flux in mid-2015. Contrary to the subsequent radio flare in the middle of 2016, we do not find a corresponding increase in $\gamma$-ray activity. Overall, 1749+096 shows rather quiescent, and frequently undetectable, $\gamma$-ray emission during most of our observations except the $\gamma$-ray active period in 2016. Thus, we focus on this period in the further analysis. \begin{figure} \includegraphics[width=\columnwidth]{corr.png} \caption{Correlations between $\gamma$-ray flux (3-day binning) and three other wavebands (X-ray, optical, and 15-GHz radio) for the time of high $\gamma$-ray activity in mid-to-end 2016. The top left panel shows the correlation between 3-day and 7-day $\gamma$-ray light curves as a consistency check. Correlations are tested with data points that are simultaneous within the bin size (i.e., $\pm1.5$ days) of the $\gamma$-ray data. Each panel gives Pearson ($r_p$) and Spearman rank ($r_s$) correlation coefficient values together with the corresponding false alarm probabilities ($p$ values). Red lines indicate the best-fit linear relationships.} \label{fig:f2} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{photI.png} \caption{Evolution and distribution of the $\gamma$-ray photon index $\Gamma$ from the LAT data; red points mark the values for the weekly light curve, cyan points the values for the 3-day light curve. \emph{Top panels:} $\Gamma$ as function of time. Vertical solid lines indicate the $\gamma$-ray outburst on MJD 57588. The top left panel spans the entire time of our observations, the top right panels focuses on the period of high $\gamma$-ray activity in 2016; note the different axis scales.
\emph{Bottom panel:} $\Gamma$ as function of $\gamma$-ray flux. The dashed horizontal and vertical lines represent a photon index of $\Gamma=-2$ and a flux of $1.8\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$, respectively.} \label{fig:f3} \end{figure} \subsection{Multi-wavelength flux correlations} \label{sec:resB} A physical connection between the emission at various wavelengths is apparent already from the morphology of the light curves (cf. Figure~\ref{fig:f1}). For a more quantitative analysis, we computed Pearson ($r_p$) and Spearman rank ($r_s$) correlation coefficients to probe the degrees of correlation between the $\gamma$-ray light curve and the emission at lower energy bands. We included all data from the period of enhanced $\gamma$-ray activity in mid-to-end 2016 that are simultaneous with the $\gamma$-ray data within the bin size of three days. For the optical data, we used flux estimates in linear units, in mJy, provided by the ASAS-SN online database along with the V band magnitudes. The OVRO 15 GHz data represent the radio band in the correlation analysis; the other radio light curves did not provide enough simultaneous data points. We assume false alarm probabilities $p\leq0.05$ to indicate statistically significant correlations (e.g., \citealt{Leung2014}). Figure~\ref{fig:f2} shows the results of the correlation analysis. All correlation coefficients, with values $r_p\geq0.69$ and $r_s\geq0.75$, point toward strong positive correlations between emission at $\gamma$-rays and at lower frequencies. False alarm probabilities are lower than 0.05 with the (marginal) exception of $p_{s}=0.052$ for the X-ray--$\gamma$-ray pair.
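The two statistics used above are standard: Pearson's $r_p$ measures linear correlation of the fluxes themselves, while Spearman's $r_s$ is Pearson's coefficient applied to their ranks (and is therefore sensitive to any monotonic relation). A dependency-free sketch on hypothetical flux pairs, not our measurements (\texttt{scipy.stats.pearsonr}/\texttt{spearmanr} give the same coefficients plus the $p$ values quoted in the text):

```python
# Pearson and Spearman rank correlation coefficients, pure Python.
# The flux lists at the bottom are hypothetical illustrative values.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def ranks(x):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

gamma = [1.0, 2.0, 4.0, 8.0, 16.0]   # hypothetical gamma-ray fluxes
radio = [1.1, 1.9, 4.5, 7.8, 15.0]   # hypothetical 15-GHz fluxes
# Monotonic but nonlinear data: Spearman is exactly 1.0, Pearson slightly less.
print(round(pearson(gamma, radio), 3), spearman(gamma, radio))
```

The gap between $r_p$ and $r_s$ in such a comparison is one way to see whether a flux-flux relation is linear or merely monotonic.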
\subsection{LAT $\gamma$-ray photon indices} \label{sec:resC} \begin{figure*} \includegraphics[width=\textwidth]{pol.png} \caption{Linearly polarized flux density (\textit{color scale}), EVPA (\textit{black line segments}), and total intensity structure (\textit{background contours}) of 1749+096 at 43\,GHz as observed with the VLBA; from left to right: 2016 June 11, 2016 July 5, 2016 July 31, 2016 September 5, 2016 October 6, 2016 October 23, and 2016 November 28. The cyan arrows at the declination axis indicate the times of the two major $\gamma$-ray events: the 2016 July 19 outburst and the 2016 October 2 maximum. All maps are restored with a circular beam with a size of 0.25\,$\times$\,0.25~mas (displayed at the bottom left). Contour levels increase by factors of two from 0.25\% to 64\%, plus one additional contour corresponding to 85\% of the maximum total intensity. The two tilted red dashed lines indicate the apparent motion of the peak of the \emph{polarized} emission. The horizontal red solid line indicates the location of the 7-mm core.} \label{fig:f4} \end{figure*} We quantify the $\gamma$-ray spectrum of 1749+096 using the photon index $\Gamma$, which is defined as $dN/dE \propto E^{+\Gamma}$ with $N$ being the number of photons and $E$ being the photon energy. Figure~\ref{fig:f3} shows the photon indices obtained from the 3-day binned and weekly binned LAT light curves at 0.1--300\,GeV as function of time and as function of $\gamma$-ray flux, respectively. The photon index varies from $-3.5$ to $-1.7$ during the time of our observations; the two-year average value is $\Gamma=-2.3$. The photon index time series indicates a spectral hardening (i.e., an increase of $\Gamma$) around the time of the 2016 July $\gamma$-ray outburst. More specifically, the photon indices increase from about $-3$ to about $-2$ during the $\sim$40 days before the $\gamma$-ray outburst.
The photon index appears to decrease (from $-1.7$ to $-3.5$) with increasing flux until the $\gamma$-ray flux reaches about $\sim1.8\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$. With further increasing flux the photon index increases again and approaches a plateau at $\Gamma\approx-2$ for fluxes $\gtrsim6\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$. \begin{table} \caption{Properties of the polarized VLBA component} \label{tab:tb1} \centering \begin{tabular}{c @{\hspace{0.5cm}} c @{\hspace{0.2cm}} c @{\hspace{0.2cm}} c @{\hspace{0.2cm}} c} \hline Date & \textit{m}$_\mathrm{total}$ & PI$_\mathrm{peak}$ & EVPA$_\mathrm{peak}$ & rms$^a$\\ & (\%) & (mJy/beam) & ($^{\circ}$) & (mJy/beam) \\ \hline 2016 June 11 & 5.9 & 189 & $-$14 & 0.69 \\ 2016 July 05 & 4.5 & 208 & $-$19 & 0.54 \\ 2016 July 31 & 4.9 & 232 & $-$51 & 2.73 \\ 2016 Sept 05 & 2.4 & 84 & $-$3 & 0.97 \\ 2016 Oct 06 & 1.3 & 30 & $-$39 & 1.01 \\ 2016 Oct 23 & 5.0 & 102 & $-$32 & 0.59 \\ 2016 Nov 28 & 3.6 & 51 & $-$10 & 0.42 \\ \hline \multicolumn{5}{l}{$^a$ rms noise of residual polarization map.}\\ \end{tabular} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{cont.png} \caption{VLBA map of 1749+096 observed on 2016 November 28, from model component fitting. Individual circular Gaussian components are marked with red $\oplus$. The beam size (illustrated at the bottom right) is $0.34\times0.15$ mas at $-$7.6$^{\circ}$. Contour levels increase by factors of $\sqrt{2}$ from 0.11\% to 79.32\% of the total intensity peak. Blue solid lines point to the VLBA 43-GHz core and the jet component J1, respectively.} \label{fig:f5} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{comp.png} \caption{Evolution of flux density and the polarized component of 1749+096 obtained from the BU data; 2016 June 11 -- 2016 November 28. \emph{Top panel:} Core flux and total source flux. \emph{Middle panel:} PI and EVPA listed in Table~\ref{tab:tb1} with error of 10\% and 5$^{\circ}$, respectively.
\emph{Bottom panel:} Evolution of the jet component J1, located at $\sim0.2$ mas from the core. The vertical solid line marks the time of the 2016 July $\gamma$-ray outburst.} \label{fig:f6} \end{figure} \subsection{Linear polarization at 43 GHz} \label{sec:resD} Figure~\ref{fig:f4} shows the 7-mm (43\,GHz) linear polarization in the innermost few parsecs of the radio jet of 1749+096. Polarized intensity scale and EVPA markers are plotted over the total intensity contour. We applied an $8\sigma$ cutoff to the polarized intensity. We calculate fractional linear polarizations (defined as \textit{m} = $\sqrt{Q^{2} + U^{2}} / I$, where $I$, $Q$, and $U$ are Stokes parameters) by using CLEANed fluxes. We estimate a typical thermal noise of $\sim$1\,mJy/beam in the polarized emission. \citet{Jorstad2005} suggests typical systematic errors of the BU data of $\sim$1\% in fractional polarization and $\sim$5$^{\circ}$ in polarization angle for bright components, up to around 10$^{\circ}$ in the worst case (see also \citealt{Roberts1994} for discussion of uncertainties in the measured polarization). On average, we found a fractional linear polarization of $\sim$3.9\% throughout the BU dataset. Our further analysis focused on monitoring the properties of the polarized source component, corresponding to the peak of the polarized emission, which usually overlaps with the VLBA core. We summarize the polarization properties of 1749+096 in Table~\ref{tab:tb1}. Through 2016 June and 2016 July, the polarized flux increases notably. After the $\gamma$-ray outburst on 2016 July 19, the polarized intensity reaches its highest value, $\sim$230 mJy, and displays a rotation of the EVPA by about 32$^{\circ}$ with respect to the previous epoch. 
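The fractional polarization and EVPA follow directly from the Stokes parameters defined above; a minimal sketch with illustrative Stokes values (not the measured fluxes):

```python
import math

def frac_pol(i, q, u):
    """Fractional linear polarization m = sqrt(Q^2 + U^2) / I."""
    return math.hypot(q, u) / i

def evpa_deg(q, u):
    """Electric vector position angle chi = 0.5 * atan2(U, Q), in degrees."""
    return 0.5 * math.degrees(math.atan2(u, q))

# Illustrative Stokes parameters (arbitrary units)
i, q, u = 1.0, 0.03, 0.04
print(round(frac_pol(i, q, u), 3))  # 0.05
print(round(evpa_deg(q, u), 1))     # 26.6
```

The factor of one half in the EVPA reflects the $180^{\circ}$ ambiguity of the electric vector orientation.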
In 2016 September, the polarized component had moved down the jet, with the EVPA being aligned closely with the jet axis, which is located at a position angle of $\sim$7.5$^{\circ}$ (the average position angle of all the jet components in Table~\ref{tab:tb2}). A new polarized component emerged \emph{upstream} of the VLBA core just four days after the 2016 October 2 $\gamma$-ray flux maximum. \subsection{Flux evolution near the core} \label{sec:resE} In addition to the polarized flux, the BU data provide important information on the (total intensity) structure of the innermost region of 1749+096. In order to trace the flux evolution of the various source components, we fit circular Gaussian profiles to them. The fitted parameters of all components are displayed in Table~\ref{tab:tb2}. Despite the clear evolution of the location of the polarized flux near the core (Section \ref{sec:resD}), we find no indication of the ejection of a new jet component. As we might have missed a new component due to insufficient angular resolution or a smaller Doppler factor (i.e., a change of orientation in the curved jet), we took a closer look at the behaviour of the jet component J1. This component is located within 0.2 mas of the core in all 43-GHz VLBA maps (see Figure~\ref{fig:f5} and Table~\ref{tab:tb2}). Figure~\ref{fig:f6} shows the flux evolution of the core, J1, the polarized component, and the total integrated source flux. The total flux is dominated by the core, which contributes about 86\% of the total observed flux on average. The core flux peaks at the time of the 2016 July $\gamma$-ray outburst, thus suggesting the core as the origin of the radio flux counterpart to the $\gamma$-ray flare. In 2016 September, a bit more than one month after the 2016 July $\gamma$-ray outburst, the flux of component J1 increases by a factor of about 3 compared to 2016 July.
\begin{table*} \caption{Parameters of the model fitted jet components in the 43\,GHz total intensity images.} \label{tab:tb2} \begin{tabular*}{\textwidth}{ccccccccccc} \hline \hline Date & MJD & ID$^a$ & Flux & Distance$^b$ & Angle$^c$ & Diameter & B$_\mathrm{maj}^d$ & B$_\mathrm{min}^d$ & B$_\mathrm{PA}^d$ & rms$^e$ \\ & & & (Jy) & (mas) & ($^{\circ}$) & (mas) & (mas) & (mas) & ($^{\circ}$) & (mJy/beam) \\ \hline \hline 2016 June 11 & 57550 & C & 2.47 & 0.00 & $-$ & 0.02 & 0.35 & 0.15 & $\mathrm{-}$6.33 & 1.02 \\ & & J1 & 0.69 & 0.09 & $\mathrm{-}$12.69 & 0.04 & & & & \\ & & J2 & 0.09 & 0.39 & 22.58 & 0.16 & & & & \\ & & J3 & 0.03 & 0.81 & 7.94 & 0.14 & & & & \\ & & J4 & 0.08 & 1.43 & 9.16 & 0.67 & & & & \\ \hline 2016 July 05 & 57574 & C & 4.35 & 0.00 & $-$ & 0.03 & 0.40 & 0.16 & $\mathrm{-}$12.60 & 1.09 \\ & & J1 & 0.19 & 0.14 & $\mathrm{-}$0.11 & 0.05 & & & & \\ & & J2 & 0.09 & 0.55 & 15.48 & 0.21 & & & & \\ & & J3 & 0.07 & 1.39 & 7.81 & 0.64 & & & & \\ \hline 2016 July 31 & 57600 & C & 4.02 & 0.00 & $-$ & 0.03 & 0.35 & 0.17 & $\mathrm{-}$7.63 & 1.41 \\ & & J1 & 0.12 & 0.17 & 0.30 & 0.04 & & & & \\ & & J2 & 0.06 & 0.53 & 15.15 & 0.17 & & & & \\ & & J3 & 0.03 & 0.79 & 13.23 & 0.10 & & & & \\ & & J4 & 0.05 & 1.46 & 2.73 & 0.43 & & & & \\ \hline 2016 Sept 05 & 57636 & C & 2.99 & 0.00 & $-$ & 0.02 & 0.49 & 0.17 & $\mathrm{-}$20.40 & 1.44 \\ & & J1 & 0.42 & 0.10 & 11.65 & 0.03 & & & & \\ & & J2 & 0.05 & 0.43 & $\mathrm{-}$0.53 & 0.08 & & & & \\ & & J3 & 0.07 & 0.69 & 15.21 & 0.22 & & & & \\ & & J4 & 0.04 & 1.54 & 2.12 & 0.41 & & & & \\ \hline 2016 Oct 06 & 57667 & C & 2.51 & 0.00 & $-$ & 0.05 & 0.63 & 0.20 & $\mathrm{-}$28.00 & 1.48 \\ & & J1 & 0.25 & 0.15 & $\mathrm{-}$2.75 & 0.08 & & & & \\ & & J2 & 0.10 & 0.73 & 10.75 & 0.32 & & & & \\ & & J3 & 0.04 & 1.78 & 9.65 & 0.76 & & & & \\ \hline 2016 Oct 23 & 57684 & C & 1.83 & 0.00 & $-$ & 0.02 & 0.35 & 0.15 & $\mathrm{-}$3.96 & 0.99 \\ & & J1 & 0.21 & 0.13 & 5.94 & 0.04 & & & & \\ & & J2 & 0.03 & 0.45 & 10.39 & 0.14 &
& & & \\ & & J3 & 0.04 & 0.87 & 11.40 & 0.21 & & & & \\ & & J4 & 0.04 & 1.69 & 8.76 & 0.55 & & & & \\ \hline 2016 Nov 28 & 57720 & C & 1.38 & 0.00 & $-$ & 0.03 & 0.34 & 0.15 & $\mathrm{-}$7.63 & 0.53 \\ & & J1 & 0.19 & 0.16 & 5.57 & 0.05 & & & & \\ & & J2 & 0.02 & 0.50 & 3.15 & 0.14 & & & & \\ & & J3 & 0.05 & 0.94 & 11.85 & 0.42 & & & & \\ & & J4 & 0.03 & 1.82 & 9.81 & 0.77 & & & & \\ \hline \hline \multicolumn{11}{l}{$^a$ Higher ID numbers Jx correspond to larger downstream distances from the core.}\\ \multicolumn{11}{l}{$^b$ Distance from the core.}\\ \multicolumn{11}{l}{$^c$ Position angle relative to the core component.}\\ \multicolumn{11}{l}{$^d$ Parameters of the elliptical beam: major axis, minor axis, position angle.}\\ \multicolumn{11}{l}{$^e$ rms noise of residual map.}\\ \end{tabular*} \end{table*} \section{Discussion} \subsection{$\gamma$-ray activity} \label{sec:disA} The 2016 July $\gamma$-ray outburst is an exceptional event with no precedent since the beginning of $\gamma$-ray observations in 2009. During phases of low as well as very high $\gamma$-ray fluxes, we observe photon indices close to the value $\Gamma\approx-2.2$ expected for LSP blazars, suggesting a spectral break located at around 100 MeV or less \citep{Lico2017}. Unlike intermediate-synchrotron peaked (ISP) and high-synchrotron peaked (HSP) blazars, LSP blazars are known to experience severe cooling in the energy range 0.1--300\,GeV (i.e., our LAT band), thus producing the IC component of the SED \citep{Lico2017}. During the 2016 July $\gamma$-ray outburst we observe an increase in photon index (see Figure~\ref{fig:f3}), meaning a spectral hardening with increasing flux (e.g., \citealt{Nandikotkur2007}). Such behavior is rare in BL Lac objects \citep{Lico2014}. \citet{Kushwaha2014} suggested shock acceleration as an explanation for the hardening of $\gamma$-ray spectra.
In our case, the temporal agreement of the apex of the spectral hardening with the $\gamma$-ray flux peak points toward shock acceleration efficiently inducing a surge of $\gamma$-ray photons at higher energies \citep{Kusunose2000}. \citet{Abdo2010} found a transition between a harder-when-brighter trend and a softer-when-brighter trend in PKS 1510$-$089 for energies above 0.2 GeV. In their observation, photon indices softened with fluxes increasing up to $\sim2.4\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$, then hardened again with fluxes increasing further. This matches our observation of a decreasing photon index with the flux increasing up to about $1.8\times10^{-7}$\,ph\,cm$^{-2}$\,s$^{-1}$ (weekly light curve, see Figure~\ref{fig:f3}). Assuming a threshold value of $1.7\times10^{-7}$ ph cm$^{-2}$ s$^{-1}$, we find a strong negative correlation ($r_p=-0.86$) between $\gamma$-ray flux and $\Gamma$, corresponding to a softer-when-brighter scaling. The physical mechanisms behind this softer-when-brighter trend as well as the inversion of this trend at a certain threshold flux are unclear \citep{Abdo2010}. Candidate mechanisms are a change in the emission mechanism (e.g., \citealt{Asano2014}), the cooling time scale (e.g., \citealt{Dotson2012}), or the magnetic field strength (e.g., \citealt{Kusunose2000}). \subsection{Multi-wavelength correlations} \label{sec:disB} The $\gamma$-ray outburst in 2016 July was accompanied by simultaneous flux enhancements from radio to X-rays, indicating a physical connection across all wavelengths \citep{Jorstad2001a, Tavares2011, Lico2017}. The fact that the source flux from radio to $\gamma$-rays peaks simultaneously within a few days strongly suggests that the emission at all wavelengths (largely) originates from the same location within the source \citep{Tavares2011, Wehrle2012, Jorstad2013, Casadio2015}.
A possible exception is the 15-GHz radio peak which we cannot locate exactly and which might be delayed by up to a few more days relative to the events at higher energy bands. If it were indeed delayed, we would have to assume a displacement of the radio emitting region relative to the regions emitting higher energy radiation. This is supposed to occur when the emission at higher energies is produced in a region that is optically thick at 15\,GHz; radio light is emitted only once the disturbance in the jet has entered a region transparent at 15\,GHz (\citealt{Wehrle2012}; see also \citealt{Agudo2011}, for discussion of a physically extended disturbance). Figure~\ref{fig:f6} shows that the BU VLBA core is the origin of the 7-mm outburst, thus implying a connection between the mm-wave core and the simultaneous events at higher energy bands (e.g., \citealt{Agudo2011, Wehrle2012, Jorstad2013}). The peak of the $\gamma$-ray outburst seems to coincide with the peak of the 7-mm emission. This is unexpected as the conventional picture of the radio--$\gamma$-ray connection expects a $\gamma$-ray outburst at the onset (during the rise) of a radio flare (\citealt{Marscher2016}; but see also \citealt{Valtaoja1995, Moerbeck2014}, for various timings of $\gamma$-ray events relative to radio flares). However, we note that a considerable number of studies observed $\gamma$-ray outbursts being (quasi-)simultaneous with radio flares (e.g., \citealt{Tavares2011, Wehrle2012, Lico2014, Casadio2015}). \subsection{Origin of the $\gamma$-ray outburst} \label{sec:disC} The relative timing of the $\gamma$-ray outburst and the 7-mm outburst of the VLBA core suggests the mm-wave core to be the production site of the $\gamma$-radiation \citep{Wehrle2012, Jorstad2013, Casadio2015}. The behaviour of the polarized VLBA component after the 2016 July $\gamma$-ray outburst supports this idea (see Figure~\ref{fig:f4}). 
The linear polarization image of 2016 September 5 shows clearly that the region of polarized emission, which was previously located at the VLBI core, moved down the jet. This can be interpreted as the signature of a shock emerging within, and moving away from, the core (e.g., \citealt{Ros2000, Marscher2008, Pushkarev2008, Marscher2010, Agudo2011, Wehrle2012, Marscher2012, Jorstad2013}). This picture connects the $\gamma$-ray outburst with the passage of a propagating disturbance, like a new jet component, through the core. The flux evolution of component J1, which was detected within 0.2\,mas of the core consistently over six months, supports this idea (see Figure~\ref{fig:f6}). The enhancement of the flux from J1 as observed on 2016 September 5 is consistent with the disturbance propagating through a region downstream of the core \citep{Casadio2015, Hodgson2017}. However, we do not find a newly emerging feature in the (total intensity) VLBA maps that could be associated with the displacement of the polarized component. Interestingly, \citet{Lico2014} and \citet{Ros2000} encountered similar situations in Mrk\,421 and 3C\,345. The absence of a directly observed new jet component may be attributed to a complex structure of the jet around the core or spatial blending of multiple emission regions in the jet \citep{Ros2000, Jorstad2013, Hodgson2017}. As noted in Section~\ref{sec:disA}, the evolution of the $\gamma$-ray spectrum of 1749+096 around the time of the 2016 July $\gamma$-ray flare is consistent with the acceleration of a relativistic shock. Further evidence in favor of this interpretation is provided by the evolution of the BU VLBA core flux in both total intensity and linear polarization during 2016 June and 2016 July. The polarized flux reached its maximum on 2016 July 31, when the BU core flux was already in decline (see Figure~\ref{fig:f6}).
This is the signature expected from a disturbance propagating through the core (e.g., \citealt{Lico2014}) but may also be connected to the evolution of a relativistic shock (\citealt{Ros2000}; see also \citealt{Tavares2012} for discussions of strong core polarization). The maximum of the polarized intensity, about 230\,mJy/beam, coincides with an EVPA swing of $\sim$32$^{\circ}$ in the core region on 2016 July 31. \citet{Hughes2011} suggested that initially random and turbulent magnetic fields in a blazar jet can be compressed by a propagating oblique shock, thus leading to both an enhancement of polarized intensity and a swing of the EVPA \citep{Laing1980, Hughes1985, Hughes1991}. Accordingly, we suggest that the passage of a propagating disturbance through the mm-wave core is responsible for the $\gamma$-ray outburst \citep{Marscher2008, Agudo2011, Wehrle2012, Jorstad2013}. Further clues for constraining the production site of the optical-to-$\gamma$-ray outbursts could be provided by the opacity of the core at 7\,mm (see Section~\ref{sec:disD}) during the event. It seems that the $\gamma$-ray outburst is contemporaneous with its 7-mm counterpart, whereas the cm-wave counterpart in the OVRO light curve is slightly delayed relative to the peak of the $\gamma$-ray outburst. This leads us to consider a region downstream of the mm-wave core to be the origin of the $\gamma$-ray outburst. This is in agreement with the disturbance being spatially extended (e.g., \citealt{Agudo2011}). The duration of the $\gamma$-ray outburst, roughly 50 days, can be interpreted as the time needed for the disturbance to pass through the mm-wave core \citep{Jorstad2013}. Then, the strongest $\gamma$-ray emission might be produced by the back region of the propagating disturbance $\sim$10 days before the disturbance fully escapes the mm-wave core.
\subsection{The enhanced $\gamma$-ray emission in 2016 October} \label{sec:disD} \begin{figure} \includegraphics[width=\columnwidth]{INDEX.png} \caption{Pairwise spectral indices ($S_\nu \propto \nu^{\alpha}$) of 1749+096 at radio wavelengths observed in 2016. Different colors and symbols indicate different frequency pairs. The epochs of the $\gamma$-ray events are indicated by two black vertical solid lines.} \label{fig:f7} \end{figure} We find a notable $\gamma$-ray flux enhancement around 2016 October 2. Contrary to the prior (2016 July) $\gamma$-ray outburst, this local peak was accompanied, if at all, only by an optical counterpart, without corresponding flux enhancements at radio and X-ray wavelengths. The 43-GHz linear polarization maps obtained after the time of the $\gamma$-ray enhancement show a polarized component propagating down the jet from the BU core. This suggests the presence of a propagating disturbance similar to the situation discussed in Section~\ref{sec:disB}. Notably, we observe an upstream displacement of the polarized component relative to the peak of the total intensity on 2016 October 6. However, the data from this epoch need to be interpreted with care owing to reduced sensitivity and resolution caused by antennas missing from the array (see Section~\ref{sec:obsA}). We calculated spectral indices of 1749+096 from the radio data as follows. For the 15--43\,GHz pair, we employed the OVRO and BU total fluxes observed within 1 day of each other, assuming that the VLBI-scale structure of the source dominates the OVRO fluxes (the ratio of the OVRO and BU fluxes is 1.1 on average). Although the data points are sparse, it seems that the source was opaque ($\alpha$~$\sim$~0) at 43\,GHz during the $\gamma$-ray flaring period. This is consistent with what \citet{osullivan2009} reported (i.e., a spectral index of the core region of 1749+096 of $\sim-0.1$ between 12.9 and 43\,GHz).
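The pairwise spectral index in Figure~\ref{fig:f7} follows from a pair of flux densities; a minimal sketch (the flux values are illustrative, not the OVRO/BU measurements):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """alpha in S_nu proportional to nu**alpha, from two flux densities."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# A flat pair (equal flux at 15 and 43 GHz) corresponds to an opaque core:
print(round(spectral_index(2.0, 15.0, 2.0, 43.0), 2))   # 0.0
# A mildly steep pair resembling the core value quoted above:
print(round(spectral_index(1.0, 12.9, (43.0 / 12.9) ** -0.1, 43.0), 2))  # -0.1
```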
We consider that the second $\gamma$-ray event might have been caused by a propagating disturbance at (nearly) the same location in the jet as the first $\gamma$-ray event, but with a smaller Doppler factor and less energization. This explains the relatively lower $\gamma$-ray flux and the absence of a radio counterpart compared to the major $\gamma$-ray outburst. \section{Conclusions} In this study, we explored the nature of two $\gamma$-ray events, the 2016 July outburst and the 2016 October flux enhancement, in the blazar 1749+096. From the combined evidence provided by multi-waveband flux observations plus 43-GHz VLBA maps, we conclude that both $\gamma$-ray events are connected to the propagation of a disturbance in the jet \citep{Jorstad2001b, Marscher2008, Jorstad2016}. Regarding the origin of the two $\gamma$-ray events, we suggest the `parsec-scale scenario' (e.g., \citealt{Agudo2011, Wehrle2012, Jorstad2013}) where a relativistic shock moving down the jet causes an enhancement of $\gamma$-ray flux in the radio core by providing highly accelerated electrons. As the disturbance passes through the mm-wave core, the VLBA core flares simultaneously with the fluxes at the higher energy bands. Eventually, a moving feature can be seen in the linear polarization images. Given that the cm-wave flux peak is slightly delayed (around 5 days) relative to the $\gamma$-ray outburst, we tentatively conclude that a region downstream of the mm-wave core is the origin of the $\gamma$-ray outburst. The $\gamma$-ray outburst is consistent with the growth of a strong shock. We find a hardening of the $\gamma$-ray spectrum with increasing flux during the rising stage of the $\gamma$-ray outburst. The subsequent presence of abundant polarized emission in the core region further supports the presence of a growing shock \citep{Tavares2012}.
In the case of the $\gamma$-ray enhancement on 2016 October 2, we find an upstream displacement of the polarized peak relative to the total intensity in the linear polarization image of 2016 October 6. Given the opacity of 1749+096 at 43\,GHz, however, a polarized component upstream of the BU core should not be detectable due to synchrotron self-absorption. For this $\gamma$-ray event, we suggest that the disturbance was less energized and had a relatively smaller Doppler factor, resulting in the observed differences in evolution between the two $\gamma$-ray events. The origin of the bulk of the seed photons remains unclear for both $\gamma$-ray events. In general, both the internal IC process with seed photons from the jet itself (\citealt{Marscher2010}) and external Compton (EC) scattering with seed photons from the dusty torus at parsec scales or the BLR at subparsec scales (see also \citealt{Tavares2011}, for discussion of an outflowing BLR at parsec scales) can be considered. Given the observed hardening of the $\gamma$-ray spectrum, however, the EC process with infrared seed photons from the dusty torus might be the dominant emission mechanism for the 2016 July $\gamma$-ray outburst \citep{Agudo2011}. A better understanding of $\gamma$-ray flares in blazar jets requires not only monitoring the properties of the linear polarization (which reflects the underlying magnetic field configuration; e.g., \citealt{Homan2002, Marscher2012}), but also tracking changes in jet component Doppler factors caused by viewing angle variations, which could substantially affect the observed $\gamma$-rays \citep{Jorstad2001b, Casadio2015, Raiteri2017}. \section*{Acknowledgements} We thank the referee for constructive and fruitful comments which improved the manuscript. We thank R. Prince (RRI), J. Perkins (NASA/GSFC), and R. Corbet (UMBC/NASA) for their valuable advice on the analysis of the LAT data. We thank E. Ros (MPIfR) for useful comments.
We are grateful to the staff of the KVN who helped to operate the array and to correlate the data. The KVN and a high-performance computing cluster are facilities operated by the KASI (Korea Astronomy and Space Science Institute). The KVN observations and correlations are supported through the high-speed network connections among the KVN sites provided by the KREONET (Korea Research Environment Open NETwork), which is managed and operated by the KISTI (Korea Institute of Science and Technology Information). This study makes use of 43 GHz VLBA data from the VLBA-BU Blazar Monitoring Program (VLBA-BU-BLAZAR; \url{http://www.bu.edu/blazars/VLBAproject.html}), funded by NASA through the Fermi Guest Investigator Program. The VLBA is an instrument of the Long Baseline Observatory. The Long Baseline Observatory is a facility of the National Science Foundation operated by Associated Universities, Inc. This research has made use of data from the OVRO 40-m monitoring program \citep{Richards2011} which is supported in part by NASA grants NNX08AW31G, NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. We acknowledge financial support by the National Research Foundation of Korea (NRF) via research grants 2015-R1D1A1A01056807 (D. Kim, S. Trippe, JC. Algaba), 2016-R1C1B2006697 (S. Lee, S. Kang), 2015-H1D3A1066561 (G. Zhao), and 2014-H1A2A1018695 (J. Park). J. W. Lee is grateful for the support of the National Research Council of Science and Technology, Korea (Project Number EU-16-001).
\section{Introduction} There are various forms of natural disaster, such as floods, earthquakes, volcano eruptions, storms, etc., but according to the World Meteorological Organization (WMO), floods are among the most lethal and prominent forms of natural disaster for most countries. The National Weather Service (NWS) reported 28,826 flash flood events in the United States from October 2007 to October 2015, which resulted in 278 lives lost and millions of dollars' worth of crop and property damage \cite{gourley2017flash}. Monitoring and detecting floods in advance, and proactively working towards saving people's lives while minimizing damage, is among the most important tasks nowadays. In recent times, humans are extremely active on social media such as Twitter, Facebook, YouTube, Flickr, Instagram, etc. People use these platforms extensively to share crucial information via messages, photos, and videos in real time for interaction and information dissemination on every topic, and thereby act as active human sensors. It has been observed over the past few years, via several case studies, that social media contributes significantly to and is used extensively for crisis-related feeds \cite{yu2018big}, and that it is extremely helpful for situation awareness in crisis management \cite{kim2018social,palen2018social,imran2015processing}. Emergency first responder agencies, humanitarian organizations, city authorities, and other end users are always looking for the right amount and kind of content that would be helpful in crisis scenarios, but social media generally provides an overwhelming amount of unlabeled data, and it is crucial to filter out the right kind of information using text classification.
The advances in Artificial Intelligence (AI), which include machine learning and Natural Language Processing (NLP) methods, can support the humanitarian relief process and extract meaningful insights, in a timely manner, from the huge amount of social media data generated regularly. One of the major challenges in building a reliable, high-accuracy model is that it needs a huge amount of labeled data in order to be trained and evaluated properly. Several platforms use crowdsourcing services and manual inspection of the data to label disaster-related information, such as CrisisLex\cite{olteanu2014crisislex}, CrisisNLP\cite{imran2016twitter}, CrisisMMD\cite{alam2018crisismmd}, AIDR\cite{imran2014aidr}, etc.; with such already-labeled data and pre-trained models, we can efficiently utilize the learned knowledge for a new target domain. \begin{figure}[htbp] \centerline{\includegraphics[width=9cm]{comparisionTL.png}} \caption{General Transfer Learning VS NLP Transfer Learning } \label{fig:comparisionTL} \end{figure} In general, to build a good predictive model we need a huge amount of domain-specific labeled data to train on, so that the model provides accurate, reliable results for the new domain. Transfer learning models efficiently leverage existing knowledge and perform the intended task effectively by adapting to the new domain. Figure \ref{fig:comparisionTL} shows the comparison of general transfer learning and NLP transfer learning. Transfer learning learns a model from the source data and applies the knowledge gained from the source domain to the target domain, which then requires relatively little labeled data. The growth of social media in the last decade and the availability of existing disaster-related data sources labeled by crowdsourcing platforms provide an opportunity to utilize this data and build a learning model which learns the domain knowledge and transfers it to classify new data automatically, with higher accuracy and confidence.
This can effectively support some of the important tasks in disaster management, such as flood detection, executing rescue operations, sending feedback and contextual warnings to authorities, improving situation awareness, etc. Transfer learning comprises various types of knowledge sharing, such as inductive and transductive transfer, depending on the source and target domain data distributions and the source/target task relatedness \cite{pan2009survey}. As Figure \ref{fig:comparisionTL} shows, the basic transfer learning concept in NLP is slightly different from general transfer learning. In general transfer learning, we have a source domain and a target domain; the model built and trained on the source domain data is used to transfer knowledge to the target domain task model. In NLP, by contrast, the source is a general understanding of text learned not from a single domain but from a giant corpus of text, yielding a language model known as a pre-trained language model. These pre-trained language models are then used for different downstream tasks such as text classification, spam detection, question answering, etc. We use inductive transfer learning here, where we have a pre-trained model as the source task and improve the performance of the target task (flood tweet classification). We show in this study that, using a pre-trained model and very few labeled flood tweets, we can achieve high accuracy quickly and effectively. The main contributions of this work are as follows: \begin{itemize} \item We propose to use the inductive transfer learning method and adapt the pre-trained ULMFiT model for text classification. \item We fine-tune the target model parameters with knowledge obtained from the source domain for quick and efficient flood tweet classification. \item We show that the ULMFiT method needs a very small amount of labeled data (5\%) to achieve high accuracy and performance.
\item This study demonstrates that the model can be applied to real-time flood detection and information extraction with very little training data for a new application domain. \end{itemize} \section{Related Work} The growing active user base on social media has created a great opportunity for extracting crucial information in real time for various events and topics. Social media is used vigorously as a communication channel in times of crisis or natural disaster to convey actionable information to emergency responders, providing them with more situational-awareness context so that they can make better decisions for rescue operations, sending alerts, and reaching people right on time. Numerous works related to crisis management using social media content have been proposed; they are discussed in the following section. \textbf{Social media for crisis management} Analyses of social media content related to crisis situations cover data types such as images, geolocation, videos, text, etc., but most of this work has focused on images and geolocation \cite{kim2018social,palen2018social,imran2015processing,singh2019analyzing}. Processing social media content is itself a huge challenge, involving information processing, cleaning, filtering, summarizing, extracting, etc. There has been some progress lately in developing methods to extract meaningful information during a crisis for better situation awareness and decision making \cite{keim2011emergent}. The text modality of social media data has not been exploited to its fullest, although it is generally the most valuable and most available data on social media. Text processing can provide a great amount of detail that is useful for situation awareness and helps towards extracting actionable insights.
Identifying relevant text data would eventually enable major event detection, which is difficult to track correctly in a short amount of time; fast processing is needed in these scenarios \cite{keim2011emergent,singh2019analyzing}. \textbf{Domain adaptation for crisis management} Transfer learning is a very popular and active research area of machine learning. This learning method learns domain knowledge while solving a task and transfers that knowledge from one domain (source) to another domain (target) to solve the task in the new domain. We need to answer these basic questions when applying transfer learning: (1) What needs to be transferred? (2) When should the learned knowledge be transferred? (3) How should knowledge be transferred? A few basic transfer learning algorithm principles involve the following simple steps: (i) minimize the error measure by reweighting the source label samples such that they appear as targets; (ii) adapt the methods iteratively and label target examples using these common steps: (a) a model learns from labeled examples, (b) it labels some target examples, (c) a new model learns from the new labels \cite{kaboli2017review,do2006transfer}. Transfer learning has been explored and applied in various classification problems to obtain high-quality, reliable results with less labeled data in the target domain. It has also been used for feature selection, pedestrian detection, improving visual tracking, and subtractive bias removal in the medical domain \cite{kaboli2017review}. Other examples where transfer learning has been used include text classification \cite{do2006transfer}, sentiment classification \cite{blitzer2007biographies,tan2009adapting}, domain adaptation \cite{li2015twitter}, and object classification \cite{bergamo2010exploiting}.
\section{Data Collection and Processing} In this section, we explain our data collection and cleaning process, followed by some data visualization for a better understanding of the data. Text data are decidedly crucial and, if leveraged carefully in time, can assist various emergency response services. They could greatly benefit the authorities in their decision-making process and rescue operations, increase situational awareness, and enable early warnings. We use Twitter data since it is one of the most widely used social media platforms in recent times.\\ \textbf{Data Collection:} We use the disaster data from \cite{olteanu2014crisislex}. It contains various datasets, including the \textbf{CrisisLexT6} dataset, which comprises English tweets related to six crisis events in 2012 and 2013, labeled by relatedness (on-topic and off-topic) to the respective crisis. Each crisis event contains almost 10,000 labeled tweets, but we focus only on flood-related tweets; thus, we experimented with only two flood events, i.e., the \textit{Queensland flood} in Queensland, Australia and the \textit{Alberta flood} in Alberta, Canada, and relabeled all on-topic tweets as Related and off-topic tweets as Unrelated for implicit class label understanding in this case. The data collection process and duration of the CrisisLex data are described in detail in \cite{olteanu2014crisislex}.
\\ \begin{table}[htbp] \centering \begin{tabular}{ |c|c| } \hline \textbf{Class Label} &\textbf{Tweets Counts} \\ \hline Related & 5414\\ \hline Unrelated & 4619 \\ \hline \end{tabular} \caption{Data Distribution} \label{table:classlabel} \end{table} \begin{figure}[htpb] \centering \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.2in]{queensland/word_dist.png} \caption{Word Distribution} \label{fig:Tword_dist} \end{subfigure} ~~~~~~~~ \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.2in]{queensland/len_dist.png} \caption{Tweet Length Distribution} \label{fig:Tlen} \end{subfigure} \caption{Tweet Distribution} \end{figure} \textbf{Data cleaning:} Tweets are generally very noisy, and we need to clean them in order to use them for efficient model building. We removed stop words, numbers, special symbols and characters, punctuation, white space, stray characters, and URLs. We also transformed all tweets to lower case to normalize them and remove redundancy in the data. After cleaning the tweets, we performed some data visualization for better insight into the data. \\ \begin{figure}[htbp] \centering \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.7in]{queensland/word_count.png} \caption{Frequent Words} \label{fig:count} \end{subfigure}% ~~~~~~~~~~~~~~~ \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.7in]{queensland/bigram.png} \caption{Bigram} \label{fig:bigram} \end{subfigure} ~\ \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.8in]{queensland/trigram.png} \caption{Trigram} \label{fig:trigram} \end{subfigure} \caption{Tweet Data Visualization} \end{figure} \textbf{Data Visualization:} Our focus here is to understand the basic characteristics of tweets and to demonstrate the power of the transfer learning method in this application.
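As an aside, the cleaning steps above can be sketched in a few lines of Python; the regular expressions and the small stop-word list here are illustrative assumptions rather than the exact pipeline used in this study:

```python
import re

# Illustrative stop-word list; the actual list used in the study is not specified.
STOP_WORDS = {"the", "a", "an", "in", "on", "at", "is", "are", "to", "of", "and"}

def clean_tweet(tweet: str) -> str:
    """Apply the cleaning steps described above to a single tweet."""
    text = tweet.lower()                                # normalize case
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # drop URLs
    text = re.sub(r"[^a-z\s]", " ", text)               # drop numbers, symbols, punctuation
    # drop stop words and stray single characters
    tokens = [t for t in text.split() if t not in STOP_WORDS and len(t) > 1]
    return " ".join(tokens)

print(clean_tweet("Flood waters rising in #Queensland!! See http://t.co/abc123"))
```

Order matters here: URLs are stripped before punctuation, otherwise URL fragments would survive as stray tokens.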
Both datasets have a similar distribution; we have therefore selected the Queensland flood dataset for elaboration. Table \ref{table:classlabel} shows the fairly equal class distribution in the Queensland flood tweets, with 5414 related and 4619 unrelated flood tweets. Figure \ref{fig:Tword_dist} shows the number of words per tweet, which ranges from 5 up to 30 words in a single tweet. Figure \ref{fig:Tlen} shows the tweet length distribution, ranging from 30 up to 140 characters per tweet. Figures \ref{fig:count}, \ref{fig:bigram}, and \ref{fig:trigram} show the top 20 most frequent words, bigrams, and trigrams of the tweet dataset, respectively. By visual inspection of these most frequent words, bigrams, and trigrams, we gain a general understanding of the major topics and themes in the data. Tweet characteristics are generally similar in most cases, so it is highly probable that the approach can be applied effectively to other scenarios or new locations as well. \section{Methodology} It is well known that numerous state-of-the-art models in NLP require huge amounts of data to be trained from scratch to achieve reasonable results. These models consume a large amount of memory and are immensely time-consuming to train. NLP researchers have been looking into various methods that succeeded in computer vision (CV) in order to attain similar success in NLP. A major breakthrough in CV was transferring knowledge obtained from models pre-trained on ImageNet \cite{krizhevsky2012imagenet} as a source task to target tasks for efficient results. There has been huge advancement in the area of transfer learning in NLP due to the introduction of pre-trained language models such as ULMFiT \cite{howard2018universal}, ELMo \cite{peters2018deep}, GLUE \cite{wang2018glue}, BERT \cite{devlin2018bert}, Attention-net \cite{vaswani2017attention}, XLNet \cite{yang2019xlnet}, and more.
These pre-trained models have achieved state-of-the-art performance on many NLP tasks, since they use a huge amount of training data for language understanding in their source models and are fine-tuned to achieve high accuracy on the target task. We use ULMFiT in this study since it has shown significant performance on target-domain classification tasks with minimal labeled data, short training time, and reasonable hardware requirements. In contrast, other models such as BERT and XLNet are much bigger and more complex, requiring long training times and more powerful hardware. \subsection{Universal Language Model Fine-tuning (ULMFiT)} ULMFiT \cite{howard2018universal}, introduced by Howard and Ruder, can be applied effectively as a transfer learning method for various NLP tasks. In inductive transfer learning, the source task (language modeling) is generally different from the target task (flood detection) and requires labeled data in the target domain. ULMFiT is a pre-trained model that is very suitable for efficient text classification \cite{howard2018universal}. It significantly outperformed previous approaches in text classification, reducing error by 18-24\% on various datasets and achieving high accuracy with very little labeled data. Examples where researchers have used ULMFiT to solve specific problems using the power of transfer learning include \cite{hepburn2018universal,rother2018ulmfit}. Although ULMFiT can handle any type of classification task, such as topic classification, question classification, etc., we specifically target flood-related tweet classification. \subsection{ULMFiT adaptation for Flood Tweet Classification} Text classification in any new area generally suffers from no or very little labeled data to work with initially. Inductive transfer learning addresses this very challenge, and the ULMFiT method is primarily based on this concept.
We used the pre-trained ULMFiT language model to classify related and unrelated flood tweets coming from social media (Twitter) of a different location. Figure \ref{fig:framework} shows our overall framework, adapted from \cite{howard2018universal}, for flood tweet classification. \begin{figure}[htpb] \centerline{\includegraphics[width=9cm]{framework.png}} \caption{Flood Text Classification Framework adapted from \cite{howard2018universal}} \label{fig:framework} \end{figure} As shown in Figure \ref{fig:framework}, we use the ULMFiT architecture to solve the flood tweet classification problem. The source domain is trained on a huge text corpus, the WikiText-103 dataset, which contains 103 million words, with a 400-dimensional embedding size, a 3-layer neural network architecture (AWD-LSTM), and 1150 hidden activations per layer. This \textbf{general domain LM pretraining} step creates a general-domain language model that predicts the next word in a sequence and learns general features of the language. AWD-LSTM \cite{merity2017regularizing} is a regular LSTM used for language modeling with various regularization and optimization techniques that produce state-of-the-art results. The next step, \textbf{Target Task LM Fine-Tuning}, realizes the transfer learning idea by taking the knowledge gained in the previous step and utilizing it for the target task. Here the target task, flood tweet detection, has a different data distribution and different features, so the general model is fine-tuned to the target task and adapts to the new (target) domain by learning its task-specific language features. This is done using discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM.
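The slanted triangular learning rate schedule used in this step can be sketched directly from its definition in \cite{howard2018universal}; the hyperparameter values below are the paper's illustrative defaults, not necessarily the exact ones used in our experiments:

```python
import math

def slanted_triangular_lr(t, T, lr_max=0.01, cut_frac=0.1, ratio=32):
    """Learning rate at iteration t of T: a short linear warm-up to lr_max,
    followed by a long linear decay, as defined by Howard and Ruder (2018)."""
    cut = math.floor(T * cut_frac)          # iteration at which warm-up ends
    if t < cut:
        p = t / cut                         # fraction of warm-up completed
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # decay fraction
    return lr_max * (1 + p * (ratio - 1)) / ratio

# The schedule peaks at lr_max at iteration `cut`, then decays slowly.
T = 100
lrs = [slanted_triangular_lr(t, T) for t in range(T)]
```

The `ratio` parameter controls how much smaller the lowest learning rate is than `lr_max`; the short warm-up lets the model converge quickly to a suitable region before refining its parameters.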
Finally, the \textbf{Target Task Classifier} provides classification results as a probability distribution over the flood class labels (related and unrelated), which is a very critical part of the transfer learning method. Its fine-tuning needs to be well balanced (neither too slow nor too fast), which is achieved by gradually unfreezing the classifier layers. We used some of the same hyperparameters for this task. \section{Experimental Results and Discussion} In this section, we discuss the experimental results of the text classification. As described in the methodology section, our source domain model comes from ULMFiT, and the target domain data is the Queensland flood dataset, which has almost 10,000 tweets labeled as flood \textbf{Related} and \textbf{Unrelated}. The pre-trained ULMFiT model uses the AWD-LSTM language model with an embedding size of 400, 3 layers, and 1150 hidden activations per layer, with a batch size of 70 and backpropagation through time (BPTT) \cite{howard2018universal}. A dropout of 0.7 was applied to both the language model learner and the text classifier learner. A base learning rate of 0.01 was used for LM fine-tuning, and multiple learning rates ranging from 0.00001 to 0.1 were used for target classifier fine-tuning in various instances. We used gradual unfreezing of the model layers to avoid the risk of catastrophic forgetting: fine-tuning starts from the last layer (containing the least general knowledge) and proceeds to the next lower layer onwards in every iteration to attain the highest performance of the model. We used the following hardware for the experimentation: a Windows 10 Education desktop with an Intel Core i7 processor and 16GB RAM. We used Python 3.6 and a Google Colab notebook to execute our model and obtained the results discussed below. The data were divided into train and test sets in a 70-30 ratio, and the results for the individual datasets and their combination are shown in Table \ref{table:accuracy}.
With the pre-trained network already trained, the first target dataset, the Queensland flood, yielded 96\% accuracy with 0.118 test loss in only 11 seconds, using only 70\% of the labeled data for training. The second target dataset, the Alberta flood, with the same train-test split configuration, yielded 95\% accuracy with 0.176 test loss in just 19 seconds. As we can see, it takes very little time to process 20,000 tweets (combined), and in times of emergency it can classify a huge amount of unlabeled data into meaningful categories in minutes. \begin{table}[htpb] \centering \begin{tabular}{ |c|p{0.7cm}|p{0.7cm}|c|c| } \hline \textbf{Target Data} & \textbf{Train \newline Loss} & \textbf{Test \newline Loss} & \textbf{Accuracy} & \textbf{Time(sec)} \\ \hline Queensland & 0.162 & 0.118 & 0.960 & 00:11\\ \hline Alberta & 0.193 & 0.176 & 0.953 & 00:19 \\ \hline Combined & 0.200 & 0.136 & 0.957 &00:19 \\\hline \end{tabular} \caption{Classification Accuracy Comparison} \label{table:accuracy} \end{table} Since our focus is localized flood detection, we do not merge multiple datasets here; we leave the combination for future work and stay with the Queensland flood data, which we explore in detail. As can be seen in Table~\ref{table:queens}, even with 5\% of the data, which is only 500 labeled tweets as target labeled data, the model can adapt and fine-tune the classification model with 95\% accuracy. This model is very efficient and effective when we have a time-sensitive application: instead of training a model from scratch with huge data, we can use the pre-trained model and successfully apply it to the target domain application. Table~\ref{table:queens} also shows that even with very little labeled training data, the model achieved accuracy almost equivalent to that obtained with 80\% of the training data.
More training data is generally better, but here additional labeled data did not contribute significantly to accuracy improvement. \begin{table}[htpb] \centering \begin{tabular}{|p{1.2cm}|p{1.2cm}|p{1.1cm}|p{0.8cm}|p{.6cm}|p{1.2cm}|}\hline \textbf{Labeled Data\% Target} & \textbf{Class Label} & \textbf{Precision} &\textbf{Recall} & \textbf{F1-score} &\textbf{Accuracy}\\ \hline 5\% & Related \newline Unrelated &0.97\newline 0.93 & 0.94 \newline 0.97 &0.96 \newline0.95 & 0.95 \\ \hline 10\% & Related \newline Unrelated & 0.98 \newline 0.94 & 0.95 \newline 0.97&0.96\newline0.96 & 0.96 \\ \hline 20\% & Related \newline Unrelated & 0.97\newline 0.95 & 0.95\newline 0.97 &0.96\newline0.96 & 0.96 \\ \hline 50\% & Related \newline Unrelated& 0.98\newline0.93 &0.94\newline 0.98 &0.96\newline0.95& 0.96 \\ \hline 80\% & Related \newline Unrelated & 0.97\newline0.94 &0.95\newline 0.96 &0.96\newline 0.95 & 0.96 \\ \hline \end{tabular} \caption{Evaluation Metrics for Queensland Flood Data} \label{table:queens} \end{table} There are further measures for assessing the quality of the classification, such as training/testing loss and average precision, which avoid bias in the accuracy. Figure \ref{fig:lr} shows the learning rate adjusting to the target classifier model; with the specific learning rate it achieves a low loss, following the slanted triangular learning rate schedule. Figure \ref{fig:eval} shows the precision-recall curve for a particular classification instance, where the average precision is 0.94. It shows that the overall quality of the classification is fairly good and does not favor one class over another. As described above and based on the experimental results, we can use a very small amount of labeled data and solve the localized flood disaster situation efficiently for any new location.
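For reference, the per-class precision, recall, and F1 scores reported in Table \ref{table:queens} follow the standard definitions, sketched here on toy labels (not the actual predictions):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Compute precision, recall, and F1 for one class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 4 tweets, "R" = Related, "U" = Unrelated.
y_true = ["R", "R", "U", "U"]
y_pred = ["R", "U", "U", "U"]
p, r, f = precision_recall_f1(y_true, y_pred, positive="R")
```

Reporting both classes, as in the table, guards against a classifier that favors one class over the other.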
We faced some limitations in this work, which we plan to address in the future work described in the next section. \begin{figure}[htpb] \centering \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.2in]{results/lr-alberta.png} \caption{learning rate} \label{fig:lr} \end{subfigure} ~~~~~~~~ \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[height=1.2in]{results/eval_1.png} \caption{Precision-Recall curve} \label{fig:eval} \end{subfigure} \caption{Data Evaluation Metrics for Queensland Data} \end{figure} \section{Limitation and Future Work} We focused on a specific type of disaster (flood) here and did not explore other disaster types, since we wanted to capture the characteristics of one kind of disaster and learn from it for another flood disaster. We plan to perform extensive experimentation with other kinds of disaster data in the future. We have explored and experimented with the Twitter dataset only so far, because it is widely available and accessible to everyone, but we intend to include different kinds of data sources, such as other social media platforms, news feeds, blogs, text, images, etc., to make it a multimodal transfer learning approach in our future models. Other state-of-the-art pre-trained language models, such as BERT, GPT-2, Transformer-XL, etc., are available for text classification, and we want to compare this adaptation with those models to find the most time-effective model in a given situation. There can be many more applications where multi-class classification, with classes such as damage, rescue, buildings, transportation, medical, etc., can be labeled with a small amount of data in order to build a very efficient classification model. We also plan to formulate this as a multi-class problem in order to address the problems in disaster management more deeply.
This opens a new door for cyber-physical-social systems that would rely on social media feeds coming from human sensors, in tandem with wireless physical/environmental sensors, for various applications, creating another layer of smart sensors that can achieve a high-quality, more reliable, and fault-tolerant system. \section{Conclusion} Flood and flash flood situations cause calamities that need close monitoring and detailed attention. With the exponential growth in social media users, there is an ample amount of data that can be extremely useful in flood detection. Transfer learning is very helpful in these applications, where we train with general knowledge along with a little target-domain knowledge to attain a highly effective model. We have found that inductive transfer learning methods are very useful for social media flood detection with minimal labeled data. We used Queensland Twitter data as one of the flood locations and used the pre-trained ULMFiT model to classify the flood-related tweets with 95\% accuracy using only 5\% of labeled target samples in under 10 seconds, whereas in general it takes thousands of labeled tweets and a long time to achieve similar performance. With minimal space and time complexity, pre-trained models can be a huge advantage in time-sensitive applications where we need to process millions of tweets efficiently and classify them with high performance without compromising on accuracy. \section{Acknowledgment} This research is funded by the National Science Foundation (NSF) grant number 1640625. I would like to thank my mentor and advisor Dr. Nirmalya Roy for the motivation, support, and feedback for my research. I am grateful to Dr. Aryya Gangopadhyay (co-advisor) for the discussions and continuous encouragement of my work. \footnotesize \bibliographystyle{aaai}
\section{Introduction} \subsection{Background of the Study} According to the Philippine Statistics Authority, tourism accounted for 12.7\% of the country's Gross Domestic Product in 2018 \cite{psa-2019-report}. Moreover, the National Economic and Development Authority reported that 1.5\% of the country's GDP in 2018 is attributed to international tourism, with Korea, China, and the USA having the largest numbers of incoming tourists \cite{}. In addition, the Department of Tourism recorded that 7.4\% of the total domestic tourists, or an estimated 3.97 million tourists, both foreign and domestic, were in Davao Region in 2018 \cite{dot-report}. Also, employment in the tourism industry was roughly estimated at 5.4 million in 2018, which constitutes 13\% of the employment in the country, according to the Philippine Statistics Authority \cite{psa-2018-report}. Hence, estimating the total earnings of the tourism industry in the Philippines will be very helpful in formulating the interventions and strategies necessary to mitigate the effects of the COVID-19 pandemic. This paper serves as a baseline research to describe and estimate the earnings loss of the said industry. \subsection{Problem Statement} The objective of this research is to forecast the monthly earnings loss of the tourism industry during the COVID-19 pandemic by forecasting the monthly foreign visitor arrivals using a Seasonal Autoregressive Integrated Moving Average model. Specifically, it aims to answer the following questions: \begin{enumerate} \item What is the order of the seasonal autoregressive integrated moving average model for the monthly foreign visitor arrivals in the Philippines? \item How much earnings did the tourism industry lose during the COVID-19 pandemic? \end{enumerate} \subsection{Scope and Limitations} The study covers a period of approximately eight years, from January 2012 to December 2019.
Also, the modeling techniques considered in this research are limited to the autoregressive integrated moving average (ARIMA) and the seasonal autoregressive integrated moving average (SARIMA). Other modeling techniques were not tested or considered. \section{Methodology} \subsection{Research Design} The research utilized a longitudinal research design, wherein the monthly foreign visitor arrivals in the Philippines are recorded and analyzed. A longitudinal research design is an observational research method in which data are gathered for the same subject repeatedly over a period of time \cite{research-design}. A forecasting method, specifically the Seasonal Autoregressive Integrated Moving Average (SARIMA), was used to forecast future monthly foreign visitor arrivals. The Box-Jenkins methodology was used to select the appropriate model for forecasting the monthly foreign visitor arrivals in the Philippines. The dataset was divided into two sets: the training set, composed of 86 data points from January 2012 to December 2018, and the testing set, composed of 12 data points from January 2019 to December 2019. The training set was used to identify the appropriate SARIMA order, whereas the testing set was used to measure the accuracy of the selected model using the root mean squared error. The best model, in the context of this paper, is characterized by a low Akaike's Information Criterion and a low root mean squared error. \subsection{Source of Data} The data were extracted from the Department of Tourism website. The data are composed of monthly foreign visitor arrivals from January 2012 to December 2019, comprising 98 data points. \subsection{Procedure for Box-Jenkins Methodology} Box-Jenkins methodology refers to a systematic method of identifying, fitting, checking, and using SARIMA time series models. The method is appropriate for time series of medium to long length, with at least 50 observations.
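As a minimal sketch, the chronological split described in the research design (hold out the final 12 months for testing) can be expressed as follows; the series values here are placeholders, not the actual arrival counts:

```python
# Monthly observations ordered in time, January 2012 .. December 2019.
months = [(year, month) for year in range(2012, 2020) for month in range(1, 13)]
series = list(range(len(months)))  # placeholder arrival counts

# Hold out the final 12 months (Jan-Dec 2019) as the test set;
# everything earlier is the training set used to identify the SARIMA order.
train, test = series[:-12], series[-12:]
train_months, test_months = months[:-12], months[-12:]
```

Splitting chronologically, rather than randomly, is essential for time series: the test set must lie strictly after the training period so that forecast accuracy is measured on genuinely unseen future values.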
The Box-Jenkins approach is divided into three stages: Model Identification, Model Estimation, and Diagnostic Checking. \begin{enumerate} \item \textit{Model Identification} In this stage, the first step is to check whether the data are stationary or not. If not, differencing is applied to the data until they become stationary. A stationary series means that the values of the series fluctuate around a constant mean and variance with no seasonality over time. Plotting the sample autocorrelation function (ACF) and the sample partial autocorrelation function (PACF) can be used to assess whether the series is stationary. Also, the Augmented Dickey$-$Fuller (ADF) test can be applied to check stationarity. The next step is to check whether the variance of the series is constant. If not, data transformations such as differencing and/or a Box-Cox transformation (e.g., logarithm or square root) may be applied. Once done, the parameters $p$ and $q$ are identified using the ACF and PACF. If there are two or more candidate models, the Akaike's Information Criterion (AIC) can be used to select which among the models is better; the model with the lowest AIC is selected. \item \textit{Model Estimation} In this stage, parameters are estimated by finding the values of the model coefficients that provide the best fit to the data. In this research, a combination of conditional sum of squares and maximum likelihood estimation was used: conditional sum of squares was utilized to find the starting values, and maximum likelihood was applied afterwards. \item \textit{Diagnostic Checking} Diagnostic checking performs residual analysis. This stage involves testing the assumptions of the model to identify any areas where the model is inadequate and to check whether the corresponding residuals are uncorrelated. The Box-Pierce and Ljung-Box tests may be used to test these assumptions. Once the model is a good fit, it can be used for forecasting.
\item \textit{Forecast Evaluation} \hspace{5mm} Forecast evaluation involves generating forecasted values covering the time frame of the model validation set and then comparing these values to the latter. The root mean squared error was used to check the accuracy of the model. Moreover, the ACF and PACF plots were used to check whether the residuals behave like white noise, while the Shapiro-Wilk test was used to perform a normality test. \end{enumerate} \subsection{Data Analysis} The following statistical tools were used in the data analysis of this study. \begin{enumerate} \item Sample Autocorrelation Function \hspace{5mm} The sample autocorrelation function measures how correlated past data points are to future values, based on how many time steps these points are separated by. Given a time series $X_t$, we define the sample autocorrelation function, $r_k$, at lag $k$ as \cite{time-series-book-01} \begin{equation} r_k = \dfrac{\displaystyle\sum_{t=1}^{N-k} (X_t - \bar{X})(X_{t+k} - \bar{X}) }{\displaystyle\sum_{t=1}^{N} (X_t - \bar{X})^2} \qquad \text{for } k = 1,2, ... \end{equation} where $\bar{X}$ is the average of the $N$ observations. \item Sample Partial Autocorrelation Function \hspace{5mm} The sample partial autocorrelation function measures the correlation between two points that are separated by some number of periods, with the effect of the intervening correlations removed. Given a time series $X_t$, the partial autocorrelation at lag $k$ is the autocorrelation between $X_t$ and $X_{t+k}$ with the linear dependence of $X_t$ on $X_{t+1}$ through $X_{t+k-1}$ removed. The sample partial autocorrelation function is defined as \cite{time-series-book-01} \begin{equation} \phi_{kk} = \dfrac{r_k - \displaystyle\sum_{j = 1}^{k-1} \phi_{k-1,j} r_{k-j}}{1 - \displaystyle\sum_{j = 1}^{k-1} \phi_{k-1,j} r_j } \end{equation} where $\phi_{k,j} = \phi_{k-1,j} - \phi_{k,k} \phi_{k-1,k-j}, \text{for } j = 1,2, ..., k-1$, and $r_k$ is the sample autocorrelation at lag $k$.
\item Root Mean Square Error (RMSE) \hspace{5mm} RMSE is a frequently used measure of the difference between values predicted by a model and the values actually observed from the environment being modelled. These individual differences are also called residuals, and the RMSE serves to aggregate them into a single measure of predictive power. The RMSE of a model prediction with respect to the estimated variable $X_{\text{model}}$ is defined as the square root of the mean squared error \cite{LSTM-book-01} \begin{equation} RMSE = \sqrt{\dfrac{1}{n}\displaystyle\sum_{i=1}^{n} (\hat{y}_{i} - y_i)^2} \end{equation} where $\hat{y}_i$ is the predicted value, $y_i$ is the actual value, and $n$ is the number of observations. \item Akaike's Information Criterion (AIC) \hspace{5mm} The AIC is a measure of how well a model fits a dataset, penalizing models that are so flexible that they would also fit unrelated datasets just as well. The general form for calculating the AIC is \cite{time-series-book-01} \begin{equation} AIC_{p,q} = \dfrac{-2 \ln(\text{maximized likelihood}) + 2r}{n} \end{equation} where $n$ is the sample size and $r = p + q + 1$ is the number of estimated parameters, including a constant term. \item Ljung$-$Box Q* Test \hspace{5mm} The Ljung$-$Box statistic, also called the modified Box-Pierce statistic, is a function of the accumulated sample autocorrelations, $r_j$, up to any specified time lag $m$. This statistic is used to test whether the residuals of a series of observations over time are random and independent. The null hypothesis is that the model does not exhibit lack of fit; the alternative hypothesis is that the model exhibits lack of fit.
The test statistic is defined as \cite{time-series-book-01} \begin{equation} Q^* = n (n+2) \displaystyle\sum_{k = 1}^{m} \dfrac{ \hat{r}^2_k }{n - k} \end{equation} where $\hat{r}^2_k$ is the squared estimated autocorrelation of the series at lag $k$, $m$ is the number of lags being tested, and $n$ is the sample size. The statistic is approximately Chi-square distributed with $h$ degrees of freedom, where $h = m - p - q$. \item Conditional Sum of Squares \hspace{5mm} Conditional sum of squares was utilized to find the starting values in estimating the parameters of the SARIMA process. The formula is given by \cite{forecast} \begin{equation} \hat{\theta}_n = \arg \min\limits_{\theta \in \Theta} s_n (\theta) \end{equation} where $s_n(\theta) = \dfrac{1}{n}\displaystyle\sum_{t=1}^{n}e^2_t(\theta) \ , e_t(\theta) = \displaystyle\sum_{j=0}^{t-1} \alpha_j(\theta)x_{t-j}$, and $\Theta \subset \mathbb{R}^p$ is a compact set. \item Maximum Likelihood \hspace{5mm} According to \cite{forecast}, once the model order has been identified, maximum likelihood is used to estimate the parameters $c$, $\phi_1, ..., \phi_p, \theta_1, ..., \theta_q$. This method finds the values of the parameters which maximize the probability of obtaining the data that have been observed. For SARIMA models, the process is very similar to the least squares estimates that would be obtained by minimizing \begin{equation} \displaystyle\sum_{t=1}^{T} \epsilon^2_t \end{equation} where $\epsilon_t$ is the error term. \item Box$-$Cox Transformation \hspace{5mm} The Box$-$Cox transformation is applied to stabilize the variance of a time series.
It is a family of transformations, including logarithms and power transformations, which depend on the parameter $\lambda$ and are defined as follows \cite{Daimon2011} \begin{center} $y^{(\lambda)}_i = \begin{cases} \dfrac{y^\lambda_i - 1}{\lambda} & \text{, if } \lambda \neq 0\\ \ln y_i & \text{, if } \lambda = 0 \end{cases} $ $ \qquad \qquad $ $ w_i = \begin{cases} y_i^{\lambda} & \text{, if } \lambda \neq 0\\ \ln y_i & \text{, if } \lambda = 0 \end{cases} $ \end{center} where $y_i$ denotes the original time series values, $w_i$ the transformed time series values, and $\lambda$ the parameter of the transformation. \end{enumerate} \subsection{Statistical Software} R is a programming language and free software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing \cite{R-software}. R includes linear and nonlinear modeling, classical statistical tests, time series analysis, classification modeling, clustering, etc. The `forecast' package \cite{forecast} was utilized to generate time series plots, autocorrelation function/partial autocorrelation function plots, and forecasts. Also, the `tseries' package \cite{tseries} was used to perform the Augmented Dickey-Fuller (ADF) test for stationarity. Moreover, the `lmtest' package \cite{lmtest} was used to test the parameters of the SARIMA model. Finally, the `ggplot2' \cite{ggplot2}, `tidyr' \cite{tidyr}, and `dplyr' \cite{dplyr} packages were used to plot the time series data considered during the conduct of the research. \section{Results and Discussion} \begin{figure}[h] \includegraphics[width=3.4in]{figure/img01} \caption{Monthly Foreign Visitor Arrivals} \label{rd01} \end{figure} A line plot was used to describe the behavior of the monthly foreign visitor arrivals in the Philippines. Figure~\ref{rd01} shows that there is an increasing trend and a seasonal pattern in the time series.
Specifically, there is a seasonal increase in monthly foreign visitor arrivals every December and a seasonal decrease every September. These patterns suggest a seasonal autoregressive integrated moving average (SARIMA) approach for modeling and forecasting the monthly foreign visitor arrivals in the Philippines. \begin{table}[h] \captionof{table}{AIC and RMSE of the Two Models Considered} \label{rd02} \renewcommand{\arraystretch}{1} \begin{tabularx}{3.35in}{Xcc} \hline \textbf{Model} & \textbf{AIC} & \textbf{RMSE} \\ \hline ARIMA (0,1,2)$\times$(1,0,1)$_{12}$ & $-414.56$ & 49517.48 \\ ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ & $-414.51$ & 47884.85 \\ \hline \end{tabularx} \end{table} The Akaike Information Criterion and the root mean squared error were used to identify the model for forecasting the monthly foreign visitor arrivals in the Philippines. Table~\ref{rd02} shows the top two SARIMA models based on AIC, generated using R. ARIMA (0,1,2)$\times$(1,0,1)$_{12}$ has the lowest AIC, with a value of $-414.56$, followed by ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ with an AIC value of $-414.51$. Model estimation was performed on both models and produced significant parameters for both (refer to Appendix A.2). Moreover, diagnostic checking was performed to assess the models. Both models passed the checks using the residuals-versus-time plot, residuals-versus-fitted plot, normal Q-Q plot, ACF and PACF graphs, Ljung-Box test, and Shapiro-Wilk test (refer to Appendix A.3). Finally, forecast evaluation was performed to measure the accuracy of the models using an out-of-sample dataset (refer to Appendix A.4). ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ produced the lower RMSE relative to ARIMA (0,1,2)$\times$(1,0,1)$_{12}$. Hence, the former was used to forecast the monthly foreign visitor arrivals in the Philippines.
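For illustration, the sample ACF (used in the diagnostics) and the RMSE (used for forecast evaluation) can be computed directly from the formulas in the Data Analysis subsection; the numbers below are toy values, not the actual arrivals series:

```python
def sample_acf(x, k):
    """Sample autocorrelation r_k, per the formula in the Data Analysis subsection."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def rmse(actual, predicted):
    """Root mean squared error between held-out observations and forecasts."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

x = [1.0, 2.0, 3.0, 4.0, 5.0]
r1 = sample_acf(x, 1)                    # 0.4 for this toy series
err = rmse([10.0, 12.0], [11.0, 11.0])   # sqrt((1 + 1) / 2) = 1.0
```

In practice the ACF/PACF plots and RMSE reported here were produced by the R `forecast` package; this sketch only makes the underlying arithmetic explicit.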
\subsection{How much Foreign Tourism Earnings was Lost during the COVID-19 Pandemic Crisis} \begin{figure}[h] \includegraphics[width=3.4in]{figure/img02} \caption{Expected Monthly Earnings Loss} \label{rd03} \end{figure} Figure~\ref{rd03} shows the estimated earnings loss (in billion pesos) of the tourism industry of the Philippines every month from April 2020 to December 2020. According to the Department of Tourism, the Average Daily Expenditure (ADE) for the month in review is \PHP 8,423.98 and the Average Length of Stay (ALoS) of tourists in the country is recorded at 7.11 nights. The figures were generated by multiplying the forecasted monthly foreign visitor arrivals, ADE, and ALoS (rounded to 7) \cite{dot-report}. Moreover, it is forecasted under community quarantine that the recovery time will take around four to five months (up to July) \cite{forecast-covid}. With this, the estimated earning loss of the country in terms of tourism will be around 170.5 billion pesos. \section{Conclusions and Recommendations} \subsection{Conclusions} Based on the results presented on the study, the following findings were drawn: \begin{enumerate} \item The order of SARIMA model used to forecast the monthly foreign visitor arrival is ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ since it produced a relatively low AIC of $-414.51$ and the lowest RMSE of 47884.85 using an out-of-sample data. This means that the model is relatively better among other SARIMA models considered in forecasting the monthly foreign visitor arrivals in the Philippines. \item If the COVID-19 Pandemic lasts up to five months, the tourism industry of the Philippines will have an estimated earnings loss of about \PHP 170.5 billion. Assumptions about average daily expenditure and average length of stay of tourists were based on the Department of Tourism reports. \end{enumerate} \subsection{Recommendations} The projected \PHP 170.5 billion loss on Philippine’s foreign tourism is really a huge money. 
Regaining such loss the soonest time, however, would only jeopardize the lives of the Filipino people. On the other hand, the government can, perhaps, reopen the Philippines’ domestic tourism. This would somehow help regain the country’s loss on revenue from tourism, although not fully. However, the following recommendations, shown in scenarios/options below, may be helpful in regaining it, both in foreign and domestic tourism, and ensuring safety among Filipinos, as well. \begin{enumerate} \item Option 1: Stop foreign tourism until the availability of the vaccine, but gradually open domestic tourism starting July of 2020. In this scenario/option, the following considerations may be adhered to, viz. \begin{enumerate} \item not all domestic tourism shall be reopened in the entire country; only those areas with zero covid-19 cases; \item for areas where domestic tourism is allowed/reopened, appropriate guidelines should be strictly implemented by concerned departments/agencies to eliminate/prevent covid-19 transmission; and \item digital code that would help in tracing the contacts and whereabouts of domestic tourists, as being used in China and Singapore, should be installed before the reopening of the domestic tourism. \end{enumerate} \item Option 2: Gradual opening of foreign tourism starting July 2020 and full reopening of domestic tourism on the first semester of 2021 or when the covid-19 cases in the Philippines is already zero. However, the following considerations should be satisfied, viz. 
\begin{enumerate} \item only countries with covid-19 zero cases are allowed to enter the Philippines; \item appropriate guidelines should be strictly implemented by concerned departments/ agencies both for foreign and domestic tourism to eliminate/ prevent the spread of the said virus; and \item digital code that would help in tracing the contacts and whereabouts of foreign tourists, as being used in China and Singapore, should be installed before reopening the foreign tourism in the Philippines. \end{enumerate} \end{enumerate} \bibliographystyle{asmems4} \section{Introduction} \subsection{Background of the Study} According to the Philippine Statistics Authority, tourism accounts to 12.7\% of the country’s Gross Domestic Product in the year 2018 \cite{psa-2019-report}. Moreover, National Economic Development Authority reported that 1.5\% of the country’s GDP on 2018 is accounted to international tourism with Korea, China and USA having the largest numbers of tourists coming in \cite{}. In addition, Department of Tourism recorded that 7.4\% of the total domestic tourists or an estimated figure of 3.97 million tourists, both foreign and domestics were in Davao Region on 2018 \cite{dot-report}. Also, employment in tourism industry was roughly estimated to 5.4 million in 2018 which constitutes 13\% of the employment in the country according to the Philippine Statistics Authority \cite{psa-2018-report}. Hence, estimating the total earnings of the tourism industry in the Philippines will be very helpful in formulating necessary interventions and strategies to mitigate the effects of the COVID-19 pandemic. This paper will serve as a baseline research to describe and estimate the earnings lost of the said industry. 
\subsection{Problem Statement} The objective of this research is to forecast the monthly earnings loss of the tourism industry during the COVID-19 pandemic by forecasting the monthly foreign visitor arrivals using a Seasonal Autoregressive Integrated Moving Average model. Specifically, it aims to answer the following questions: \begin{enumerate} \item What is the order of the seasonal autoregressive integrated moving average model for the monthly foreign visitor arrivals in the Philippines? \item How much earnings did the tourism industry lose during the COVID-19 pandemic? \end{enumerate} \subsection{Scope and Limitations} The study covers a period of eight years, from January 2012 to December 2019. Also, the modeling techniques considered in this research are limited to the autoregressive integrated moving average (ARIMA) and seasonal autoregressive integrated moving average (SARIMA). Other modeling techniques were not tested or considered. \section{Methodology} \subsection{Research Design} The research utilized a longitudinal research design wherein the monthly foreign visitor arrivals in the Philippines are recorded and analyzed. A longitudinal research design is an observational research method in which data are gathered for the same subject repeatedly over a period of time \cite{research-design}. A forecasting method, specifically the Seasonal Autoregressive Integrated Moving Average (SARIMA), was used to forecast future monthly foreign visitor arrivals. In selecting the appropriate model to forecast the monthly foreign visitor arrivals in the Philippines, the Box-Jenkins methodology was used. The data set was divided into two sets: the training set, composed of 86 data points from January 2012 to December 2018; and the testing set, composed of 12 data points from January 2019 to December 2019.
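The chronological hold-out split described above can be sketched in a few lines. Python is used here purely for illustration (the study itself used R), and the placeholder list stands in for the actual Department of Tourism series, which is not reproduced in this paper; the function name `split_series` is likewise illustrative.

```python
# Chronological hold-out split as described in the Research Design:
# the last 12 monthly observations (Jan-Dec 2019) form the testing set,
# and everything before them forms the training set.

def split_series(series, n_test=12):
    """Split a time-ordered series into training and testing parts."""
    return series[:-n_test], series[-n_test:]

# Placeholder standing in for the 98 monthly arrival figures.
series = list(range(98))
train, test = split_series(series)
print(len(train), len(test))  # 86 12
```

Keeping the split strictly chronological, rather than random, matters for forecast evaluation: the model must be judged on observations that lie entirely in its future.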
The training set was used to identify the appropriate SARIMA order, whereas the testing set was used to measure the accuracy of the selected model using the root mean squared error. The best model, in the context of this paper, was characterized as having a low Akaike's Information Criterion and a low root mean squared error. \subsection{Source of Data} The data were extracted from the Department of Tourism website. The data consist of monthly foreign visitor arrivals from January 2012 to December 2019, comprising 98 data points. \subsection{Procedure for Box-Jenkins Methodology} Box-Jenkins methodology refers to a systematic method of identifying, fitting, checking, and using SARIMA time series models. The method is appropriate for time series of medium to long length, with at least 50 observations. The Box-Jenkins approach is divided into three stages: Model Identification, Model Estimation, and Diagnostic Checking; a Forecast Evaluation step follows. \begin{enumerate} \item \textit{Model Identification} In this stage, the first step is to check whether the data are stationary. If not, differencing is applied to the data until they become stationary. A stationary series means that the values of the series fluctuate around a constant mean and variance with no seasonality over time. Plotting the sample autocorrelation function (ACF) and sample partial autocorrelation function (PACF) can be used to assess whether the series is stationary. Also, the Augmented Dickey$-$Fuller (ADF) test can be applied to check whether the series is stationary. The next step is to check whether the variance of the series is constant. If it is not, data transformations such as differencing and/or the Box-Cox transformation (e.g., logarithm and square root) may be applied. Once done, the parameters $p$ and $q$ are identified using the ACF and PACF. If there are two or more candidate models, the Akaike's Information Criterion (AIC) can be used to select which among the models is better.
The model with the lowest AIC was selected. \item \textit{Model Estimation} In this stage, parameters are estimated by finding the values of the model coefficients which provide the best fit to the data. In this research, a combination of conditional sum of squares and maximum likelihood estimation was used: the conditional sum of squares was utilized to find the starting values, and maximum likelihood was applied afterwards. \item \textit{Diagnostic Checking} Diagnostic checking involves residual analysis. This stage tests the assumptions of the model to identify any areas where the model is inadequate and to check whether the corresponding residuals are uncorrelated. The Box-Pierce and Ljung-Box tests may be used to test these assumptions. Once the model is a good fit, it can be used for forecasting. \item \textit{Forecast Evaluation} \hspace{5mm} Forecast evaluation involves generating forecasts over the same time frame as the model validation set and comparing the forecasted values with the observed ones. The root mean squared error was used to check the accuracy of the model. Moreover, the ACF and PACF plots were used to check whether the residuals behave like white noise, while the Shapiro-Wilk test was used to test for normality. \end{enumerate} \subsection{Data Analysis} The following statistical tools were used in the data analysis of this study. \begin{enumerate} \item Sample Autocorrelation Function \hspace{5mm} The sample autocorrelation function measures how correlated past data points are with future values, based on how many time steps these points are separated by. Given a time series $X_t$, we define the sample autocorrelation function, $r_k$, at lag $k$ as \cite{time-series-book-01} \begin{equation} r_k = \dfrac{\displaystyle\sum_{t=1}^{N-k} (X_t - \bar{X})(X_{t+k} - \bar{X}) }{\displaystyle\sum_{t=1}^{N} (X_t - \bar{X})^2} \qquad \text{for } k = 1,2, \ldots \end{equation} where $\bar{X}$ is the average of the $N$ observations.
\item Sample Partial Autocorrelation Function \hspace{5mm} The sample partial autocorrelation function measures the correlation between two points in the series that are separated by some number of periods, with the effect of the intervening correlations removed. Given a time series $X_t$, the partial autocorrelation at lag $k$ is the autocorrelation between $X_t$ and $X_{t+k}$ with the linear dependence on $X_{t+1}$ through $X_{t+k-1}$ removed. The sample partial autocorrelation function is defined as \cite{time-series-book-01} \begin{equation} \phi_{kk} = \dfrac{r_k - \displaystyle\sum_{j = 1}^{k-1} \phi_{k-1,j} r_{k-j}}{1 - \displaystyle\sum_{j = 1}^{k - 1} \phi_{k-1,j} r_j } \end{equation} where $\phi_{k,j} = \phi_{k-1,j} - \phi_{k,k} \phi_{k-1,k-j}$ for $j = 1,2, \ldots, k-1$, and $r_k$ is the sample autocorrelation at lag $k$. \item Root Mean Square Error (RMSE) \hspace{5mm} RMSE is a frequently used measure of the difference between the values predicted by a model and the values actually observed from the environment being modelled. These individual differences are called residuals, and the RMSE serves to aggregate them into a single measure of predictive power. The RMSE of a model prediction is defined as the square root of the mean squared error \cite{LSTM-book-01} \begin{equation} RMSE = \sqrt{\dfrac{1}{n}\displaystyle\sum_{i=1}^{n} (\hat{y}_{i} - y_i)^2} \end{equation} where $\hat{y}_i$ are the predicted values, $y_i$ are the actual values, and $n$ is the number of observations. \item Akaike's Information Criterion (AIC) \hspace{5mm} The AIC is a measure of how well a model fits a dataset, penalizing models that are so flexible that they would fit unrelated datasets just as well.
The general form for calculating the AIC is \cite{time-series-book-01} \begin{equation} AIC_{p,q} = \dfrac{-2 \ln(\text{maximized likelihood}) + 2r}{n} \end{equation} where $n$ is the sample size and $r = p + q + 1$ is the number of estimated parameters, including a constant term. \item Ljung$-$Box Q* Test \hspace{5mm} The Ljung$-$Box statistic, also called the modified Box-Pierce statistic, is a function of the accumulated sample autocorrelations, $r_j$, up to any specified time lag $m$. This statistic is used to test whether the residuals of a series of observations over time are random and independent. The null hypothesis is that the model does not exhibit lack of fit; the alternative hypothesis is that the model exhibits lack of fit. The test statistic is defined as \cite{time-series-book-01} \begin{equation} Q^* = n (n+2) \displaystyle\sum_{k = 1}^{m} \dfrac{ \hat{r}^2_k }{n - k} \end{equation} where $\hat{r}^2_k$ is the squared estimated autocorrelation of the series at lag $k$, $m$ is the number of lags being tested, and $n$ is the sample size. The statistic is approximately chi-square distributed with $h$ degrees of freedom, where $h = m - p - q$. \item Conditional Sum of Squares \hspace{5mm} The conditional sum of squares was utilized to find the starting values in estimating the parameters of the SARIMA process. The formula is given by \cite{forecast} \begin{equation} \hat{\theta}_n = \arg \min\limits_{\theta \in \Theta} s_n (\theta) \end{equation} where $s_n(\theta) = \dfrac{1}{n}\displaystyle\sum_{t=1}^{n}e^2_t(\theta)$, $e_t(\theta) = \displaystyle\sum_{j=0}^{t-1} \alpha_j(\theta)x_{t-j}$, and $\Theta \subset \mathbb{R}^p$ is a compact set. \item Maximum Likelihood \hspace{5mm} According to \cite{forecast}, once the model order has been identified, maximum likelihood is used to estimate the parameters $c$, $\phi_1, ..., \phi_p, \theta_1, ..., \theta_q$.
This method finds the values of the parameters which maximize the probability of obtaining the data that have been observed. For SARIMA models, the process is very similar to the least squares estimation that would be obtained by minimizing \begin{equation} \displaystyle\sum_{t=1}^{T} \epsilon^2_t \end{equation} where $\epsilon_t$ is the error term. \item Box$-$Cox Transformation \hspace{5mm} The Box$-$Cox transformation is applied to stabilize the variance of a time series. It is a family of transformations that includes the logarithm and power transformations, which depend on the parameter $\lambda$ and are defined as follows \cite{Daimon2011} \begin{center} $y^{(\lambda)}_i = \begin{cases} \dfrac{y^\lambda_i - 1}{\lambda} & \text{, if } \lambda \neq 0\\ \ln y_i & \text{, if } \lambda = 0 \end{cases} $ $ \qquad \qquad $ $ w_i = \begin{cases} y_i^{\lambda} & \text{, if } \lambda \neq 0\\ \ln y_i & \text{, if } \lambda = 0 \end{cases} $ \end{center} where $y_i$ are the original time series values, $y^{(\lambda)}_i$ and $w_i$ are the transformed values, and $\lambda$ is the parameter of the transformation. \end{enumerate} \subsection{Statistical Software} R is a programming language and free software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing \cite{R-software}. R includes linear and nonlinear modeling, classical statistical tests, time-series analysis, classification modeling, clustering, etc. The `forecast' package \cite{forecast} was utilized to generate time series plots, autocorrelation function/partial autocorrelation function plots, and forecasts. Also, the `tseries' package \cite{tseries} was used to perform the Augmented Dickey-Fuller (ADF) test for stationarity. Moreover, the `lmtest' package \cite{lmtest} was used to test the parameters of the SARIMA model.
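As a minimal illustration of the sample ACF, sample PACF, and RMSE formulas given in the Data Analysis section, the following sketch implements them in plain Python; the study itself used the R `forecast' and `tseries' packages, so this code (with the PACF obtained through the Durbin-Levinson recursion) is for exposition only, and all function names are illustrative.

```python
def acf(x, max_lag):
    """Sample autocorrelations r_0..r_max_lag of a series x."""
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    r = [1.0]  # r_0 = 1 by definition
    for k in range(1, max_lag + 1):
        num = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
        r.append(num / denom)
    return r

def pacf(x, max_lag):
    """Sample partial autocorrelations via the Durbin-Levinson recursion."""
    r = acf(x, max_lag)
    phi = {(1, 1): r[1]}
    pac = [r[1]]
    for k in range(2, max_lag + 1):
        num = r[k] - sum(phi[(k - 1, j)] * r[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[(k - 1, j)] * r[j] for j in range(1, k))
        phi[(k, k)] = num / den
        for j in range(1, k):
            phi[(k, j)] = phi[(k - 1, j)] - phi[(k, k)] * phi[(k - 1, k - j)]
        pac.append(phi[(k, k)])
    return pac

def rmse(pred, actual):
    """Root mean squared error between predictions and observations."""
    n = len(actual)
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / n) ** 0.5

r = acf([1, 2, 3, 4, 5, 6, 7, 8], 2)
print(r[1])  # 0.625
```

A trending toy series like the one above shows the expected signature of non-stationarity: a large positive lag-1 autocorrelation.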
Finally, the `ggplot2' \cite{ggplot2}, `tidyr' \cite{tidyr}, and `dplyr' \cite{dplyr} packages were used to plot the time series data considered in the research. \section{Results and Discussion} \begin{figure}[h] \includegraphics[width=3.4in]{figure/img01} \caption{Monthly Foreign Visitor Arrivals} \label{rd01} \end{figure} A line plot was used to describe the behavior of the monthly foreign visitor arrivals in the Philippines. Figure~\ref{rd01} shows that there is an increasing trend and a seasonality pattern in the time series. Specifically, there is a seasonal increase in monthly foreign visitor arrivals every December and a seasonal decrease every September. These patterns suggest a seasonal autoregressive integrated moving average (SARIMA) approach in modeling and forecasting the monthly foreign visitor arrivals in the Philippines. \begin{table}[h] \captionof{table}{AIC and RMSE of the Two Models Considered} \label{rd02} \renewcommand{\arraystretch}{1} \begin{tabularx}{3.35in}{Xcc} \hline \textbf{Model} & \textbf{AIC} & \textbf{RMSE} \\ \hline ARIMA (0,1,2)$\times$(1,0,1)$_{12}$ & $-414.56$ & 49517.48 \\ ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ & $-414.51$ & 47884.85 \\ \hline \end{tabularx} \end{table} The Akaike Information Criterion and Root Mean Squared Error were used to identify which model should be used to model and forecast the monthly foreign visitor arrivals in the Philippines. Table~\ref{rd02} shows the top two SARIMA models based on AIC generated using R. ARIMA (0,1,2)$\times$(1,0,1)$_{12}$ has the lowest AIC with a value of $-414.56$, followed by ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ with an AIC value of $-414.51$. Model estimation was performed on both models and yielded significant parameters for both (refer to Appendix A.2). Moreover, diagnostic checking was performed to assess the models.
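The Ljung-Box $Q^*$ statistic used in this diagnostic checking (defined in the Data Analysis section) can be sketched as follows; the comparison against a chi-square distribution is omitted, and the residual series shown is a synthetic placeholder rather than the fitted models' residuals.

```python
def ljung_box_q(residuals, m):
    """Q* = n(n+2) * sum_{k=1}^{m} r_k^2/(n-k), with r_k the sample ACF of the residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    denom = sum((e - mean) ** 2 for e in residuals)
    q = 0.0
    for k in range(1, m + 1):
        r_k = sum((residuals[t] - mean) * (residuals[t + k] - mean)
                  for t in range(n - k)) / denom
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

# Strongly autocorrelated "residuals" give a large Q*, signalling lack of fit.
print(ljung_box_q([1.0, -1.0] * 20, m=1))  # close to 40.95
```

A large $Q^*$ relative to the $\chi^2_h$ critical value rejects the null hypothesis of independent residuals; well-behaved residuals yield a small $Q^*$.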
Both models passed the checks using the residual versus time plot, residual versus fitted plot, normal Q-Q plot, ACF graph, PACF graph, Ljung-Box test, and Shapiro-Wilk test (refer to Appendix A.3). Finally, forecast evaluation was performed to measure the accuracy of the models using an out-of-sample data set (refer to Appendix A.4). ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ produced a lower RMSE than ARIMA (0,1,2)$\times$(1,0,1)$_{12}$. Hence, the former was used to forecast the monthly foreign visitor arrivals in the Philippines. \subsection{How Much Foreign Tourism Earnings Were Lost during the COVID-19 Pandemic Crisis} \begin{figure}[h] \includegraphics[width=3.4in]{figure/img02} \caption{Expected Monthly Earnings Loss} \label{rd03} \end{figure} Figure~\ref{rd03} shows the estimated earnings loss (in billion pesos) of the tourism industry of the Philippines every month from April 2020 to December 2020. According to the Department of Tourism, the Average Daily Expenditure (ADE) for the period in review is \PHP 8,423.98 and the Average Length of Stay (ALoS) of tourists in the country is recorded at 7.11 nights. The figures were generated by multiplying the forecasted monthly foreign visitor arrivals, the ADE, and the ALoS (rounded to 7) \cite{dot-report}. Moreover, it is forecast that, under community quarantine, the recovery time will take around four to five months (up to July) \cite{forecast-covid}. With this, the estimated earnings loss of the country in terms of tourism will be around 170.5 billion pesos. \section{Conclusions and Recommendations} \subsection{Conclusions} Based on the results presented in the study, the following findings were drawn: \begin{enumerate} \item The order of the SARIMA model used to forecast the monthly foreign visitor arrivals is ARIMA (1,1,1)$\times$(1,0,1)$_{12}$, since it produced a relatively low AIC of $-414.51$ and the lowest RMSE of 47884.85 on out-of-sample data.
This means that the model is relatively better than the other SARIMA models considered in forecasting the monthly foreign visitor arrivals in the Philippines. \item If the COVID-19 pandemic lasts up to five months, the tourism industry of the Philippines will have an estimated earnings loss of about \PHP 170.5 billion. Assumptions about the average daily expenditure and average length of stay of tourists were based on Department of Tourism reports. \end{enumerate} \subsection{Recommendations} The projected \PHP 170.5 billion loss on the Philippines’ foreign tourism is a huge amount of money. Attempting to regain such a loss at the soonest time, however, would only jeopardize the lives of the Filipino people. On the other hand, the government can, perhaps, reopen the Philippines’ domestic tourism. This would help regain part of the country’s lost revenue from tourism, although not fully. The following recommendations, presented as scenarios/options below, may be helpful in regaining it, in both foreign and domestic tourism, while also ensuring safety among Filipinos. \begin{enumerate} \item Option 1: Stop foreign tourism until a vaccine becomes available, but gradually reopen domestic tourism starting July 2020. In this scenario/option, the following considerations may be adhered to, viz. \begin{enumerate} \item not all domestic tourism shall be reopened in the entire country; only in areas with zero COVID-19 cases; \item for areas where domestic tourism is allowed/reopened, appropriate guidelines should be strictly implemented by the concerned departments/agencies to eliminate/prevent COVID-19 transmission; and \item a digital code that would help in tracing the contacts and whereabouts of domestic tourists, as used in China and Singapore, should be put in place before the reopening of domestic tourism.
\end{enumerate} \item Option 2: Gradual reopening of foreign tourism starting July 2020 and full reopening of domestic tourism in the first semester of 2021, or when COVID-19 cases in the Philippines have reached zero. However, the following considerations should be satisfied, viz. \begin{enumerate} \item only tourists from countries with zero COVID-19 cases are allowed to enter the Philippines; \item appropriate guidelines should be strictly implemented by the concerned departments/agencies, for both foreign and domestic tourism, to eliminate/prevent the spread of the said virus; and \item a digital code that would help in tracing the contacts and whereabouts of foreign tourists, as used in China and Singapore, should be put in place before reopening foreign tourism in the Philippines. \end{enumerate} \end{enumerate} \bibliographystyle{asmems4}
\section{Introduction}\label{sec:intro} Strong gravitational lenses are rare. Since the discovery of the first lens Q0957+561~\citep{1979Natur.279..381W}, just $\sim400$ have been discovered to date\footnote{See, e.g., {\url{http://masterlens.astro.utah.edu}} for a catalogue.}. However, this number is expected to increase to several thousand over the next ten years as new surveys, both ground-based\footnote{\url{http://pan-starrs.ifa.hawaii.edu}}$^{\rm,}$\footnote{\url{http://www.darkenergysurvey.org}} and space-based\footnote{\url{http://www.euclid-ec.org}} -- together with a community of citizen-science volunteers examining the image data for candidates\footnote{\url{http://spacewarps.org}} -- come online. Since lensing depends only on gravity, strong lenses offer a unique window onto dark matter and cosmology \citep{2010CQGra..27w3001B,2013LRR....16....6A}. However, extracting dark matter properties or cosmological constraints from these lensing data will require sophisticated modelling. In particular, with an unprecedented data set imminent, it is prudent to look again at systematic errors in the lens models to determine what quality of data (in particular complementary data from stellar/gas kinematics, lens time delays and/or stellar mass constraints) are required to address problems of interest. It is towards that goal that this present work is directed. To see why lens modelling details are of crucial importance, let us recall the essential quantities that appear in lensing (see also \S\ref{sec:theory} for a more detailed exposition). First we have the distances. Let $D_L$, $D_S$, $D_{LS}$ be the angular-diameter distances to the lens, source, and from lens to source; these are all proportional to $c/H_0$ but have factors that depend on the particular choice of cosmology\footnote{Here, $c$ is the speed of light in vacuo and $H_0$ is the Hubble parameter.}. Typically: \begin{equation} D_L \approx z_L \frac c{H_0}\ \hbox{and}\ \frac{D_S}{D_{LS}} \sim 1. 
\end{equation} where $z_L$ is the redshift of the lens. For multiple images, the sky-projected density must exceed the critical lensing density in some region: \begin{equation} \Sigma_\mathrm{crit} = \frac{c^2}{4\pi G D_L} \sim \frac{1\rm\,kg\,m^{-2}}{z_L} \end{equation} where $G$ is Newton's gravitational constant. The angular separation between the lensed images is of order the Einstein radius $\theta_E$, which is related to the mass by: \begin{equation} \theta_E^2 \sim \frac{R_G}{D_L} \frac{D_{LS}}{D_S} \end{equation} where $R_G = GM/c^2$ (with $M$ the projected mass enclosed within $\theta_E$) is the gravitational radius. If the source is a quasar or otherwise rapidly variable, a time delay $\Delta t$ in the variability will be present where: \begin{equation} \Delta t \sim R_G/c \end{equation} So in principle, not only can one measure the mass of the lens, one can also use the dependence on the cosmology-dependent $D$ factors to extract the cosmological model and all its parameters. \cite{1937ApJ....86..217Z} drew attention to the former, and \cite{1964MNRAS.128..307R,1966MNRAS.132..101R} pointed out the latter, all long before lenses were discovered. The difficulty with actually doing this, however, became apparent soon after the discovery of the first lens by \cite{1979Natur.279..381W}. In the first ever paper on lens modelling, \cite{1981ApJ...244..736Y} found that many plausible mass distributions could reproduce the data. \cite{1981ApJ...244..736Y} were remarkably prescient about the subsequent development of lens modelling. First, they introduced the technique of choosing a parametric form for the lensing mass and then fitting for the parameters, which is still the most common strategy \citep[see for example][]{2010GReGr..42.2151K,2011A&ARv..19...47K}. Second, they pointed out the non-uniqueness of lens models -- lensing degeneracies. Third, they suggested combining lensing data with stellar kinematics and X-rays, to reduce the effect of the degeneracies.
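Returning to the order-of-magnitude relations above: they are easy to check numerically. The sketch below (in Python) uses purely illustrative values; $H_0 = 70\rm\,km\,s^{-1}\,Mpc^{-1}$, $z_L = 0.5$, and the $10^{11}\,M_\odot$ lens mass are assumptions made for the example, not quantities taken from the text.

```python
import math

c = 2.998e8                  # speed of light [m/s]
G = 6.674e-11                # Newton's constant [m^3 kg^-1 s^-2]
H0 = 70 * 1e3 / 3.086e22     # 70 km/s/Mpc expressed in [1/s]

z_L = 0.5                    # illustrative lens redshift
D_L = z_L * c / H0           # D_L ~ z_L c/H0

# Critical lensing density: Sigma_crit = c^2/(4 pi G D_L) ~ (1 kg/m^2)/z_L
sigma_crit = c**2 / (4 * math.pi * G * D_L)

# Time-delay scale Delta t ~ R_G/c for an illustrative 1e11 M_sun lens
M = 1e11 * 1.989e30          # lens mass [kg]
R_G = G * M / c**2
dt_days = R_G / c / 86400.0

print(sigma_crit, dt_days)   # roughly 1.6 kg/m^2 and a few days
```

For these numbers, $\Sigma_\mathrm{crit}$ indeed comes out at order $(1\rm\,kg\,m^{-2})/z_L$, and the time-delay scale is of order days, consistent with the scalings quoted above.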
Later work, as well as following up these suggestions, has introduced some further new ideas. Five of these are important for the present work: \begin{enumerate} \item {\bf Free-form modelling:} In `free-form' or non-parametric modelling, there is no specified parametric form for the mass distribution. There are still assumptions (or priors) on the mass distribution, such as smoothness or being centrally concentrated \citep{1997MNRAS.292..148S,2005MNRAS.360..477D,2009A&A...500..681M,2010ApJ...723.1678C} but these are much less restrictive than parametric forms. A particularly elegant prior is implemented by \cite{2006MNRAS.367.1209L}, requiring the mass distribution to be non-negative and allowing no extra images. To be concrete, we define from here on: \begin{quote} {\it Non-parametric, or `free-form' $\equiv$ more parameters than data constraints (i.e. deliberately under-constrained)} \end{quote} Being under-constrained, it is then {\it necessary} to explore model degeneracies rather than finding a single `best-fit' solution. Free-form models are more commonly used with cluster lenses \citep{2006ApJ...652L...5S,2009ApJ...690..154S,2009A&A...500..681M,2014MNRAS.437.2642S}, but can be used with galaxy lenses as well, where their less restrictive assumptions can be important. For example, in time-delay galaxy lenses, parametric model measures of the Hubble parameter $H_0$ have historically been in tension with independent measures \citep[e.g.,][]{2002astro.ph..4043K,2002ApJ...578...25K}; these tensions are resolved once the less restrictive assumptions of free-form models are permitted \citep{2007ApJ...667..645R}. Hybrid methods, using a mass grid on top of a parametric model, have also been explored \citep[e.g.,][]{2010MNRAS.408.1969V}. \item {\bf Model ensembles:} Model ensembles, exploring a diverse range of possible mass distributions that nonetheless all fit the data, are a way of combating the non-uniqueness of models.
Such ensembles are possible in parametric models \citep[e.g.,][]{1999AJ....118...14B,2010Sci...329..924J, 2014MNRAS.444..268R,2014arXiv1405.0222J,2014arXiv1405.0011C}, but are more common in free-form models, where -- since such models are deliberately under-constrained -- they become vital \citep{2000AJ....119..439W,2009ApJ...690..154S,2012MNRAS.425.3077L}. \item {\bf Stellar kinematic constraints:} This was first suggested by \citet{2002MNRAS.337L...6T} as a means to break lensing degeneracies. The idea is that stellar kinematics can provide an independent estimate of the Einstein radius, via the virial theorem: \begin{equation} \frac{\langle v^2_\mathrm{los}\rangle}{c^2} \approx \frac{\theta_E}{6\pi} \frac{D_S}{D_{LS}} \label{eqn:virial} \end{equation} where $\langle v^2_\mathrm{los}\rangle$ is the mean square line-of-sight stellar velocity, and the above relation becomes exact for isothermal lenses. This can then be used to probe cosmological parameters if lenses are known to be isothermal \citep[e.g.,][]{2012MNRAS.424.2864C}; or to break the steepness degeneracy in the more general situation (see \S\ref{sec:kinematics}). The technique has since been applied to many lenses \citep[e.g.,][]{2006ApJ...649..599K,2008ApJ...682..964B}. Going further, the use of two-dimensional kinematics \citep{2011MNRAS.415.2215B} is especially interesting. \item {\bf Stellar mass constraints:} The stellar mass in a lens can be inferred from photometry and compared with the total mass \citep[e.g.,][]{1998ApJ...509..561K,2000ApJ...543..131K,2003ApJ...587..143R,2005ApJ...623L...5F,2008MNRAS.383..857F,2011ApJ...740...97L}. Since the inferred stellar mass depends on the assumed initial mass function (IMF), lenses in which stellar mass dominates can be used to derive upper bounds on the stellar $M/L$ \citep{2010MNRAS.409L..30F}. Lower bounds on stellar $M/L$ have also recently been claimed by fitting $\Lambda$CDM semi-analytic models to the tilt of the fundamental plane \citep{2013MNRAS.428.3183D}.
\item {\bf Testing modelling strategies:} Using mock data to see how well a given model can recover simulated lenses is increasingly being recognised as essential. Simple blind tests have appeared in earlier work \citep[for example, Figure 2 in][]{2000AJ....119..439W}, but more recently, tests against dynamically simulated galaxies or clusters are favoured \citep{2007ApJ...667..645R,2007MNRAS.380.1729L,2009A&A...500..681M,2009MNRAS.393.1114B,2010ApJ...723.1678C}. \end{enumerate} There are three further key modelling ideas in the literature that we will not touch upon in the present work. The first is to use X-ray intensity and temperature profiles as a mass constraint \citep[e.g.,][]{2013ApJ...765...25N}. The second is to model multiple lenses simultaneously, with one or more cosmological parameters variable but shared between the lenses; this strategy has been used to constrain $H_0$ from time delay lenses \citep{2006ApJ...652L...5S,2008ApJ...679...17C,2010ApJ...712.1378P} and recently the cosmological parameters $\Omega$ as well \citep{2014MNRAS.437..600S}. The third is that it is in principle possible to estimate the $\Omega$ parameters even from a single lens, if there are lensed sources at multiple redshifts \citep{2014MNRAS.437.2461L} or by using additional priors \citep{2010Sci...329..924J,2014ApJ...788L..35S}. In this paper, we introduce a new non-parametric lens modelling framework -- {\sc Glass}{} (Gravitational Lensing AnalysiS Software). This shares some aspects with an earlier code {\sc PixeLens}{}~\citep{Saha2004,2008ApJ...679...17C}. However, {\sc Glass}{} -- which contains all new code written from the ground up -- significantly improves upon {\sc PixeLens}{} in several key ways: \begin{enumerate} \item At the heart of {\sc Glass}{} is a new uniform sampling algorithm for high dimensional spaces \citep{2012MNRAS.425.3077L}. This allows for large ensembles of $>10,000$ models to be efficiently generated.
\item {\sc Glass}{} provides a modular framework that allows new priors to be added and modified easily. \item The basis functions approximating a model can be easily changed (in this paper, we assume pixels as in {\sc PixeLens}). \item With so many models in the final ensemble, we can afford to apply non-linear constraints (for example, stellar kinematic data, or the removal of models with spurious extra images) to accept/reject models in a post-processing step. \item The central region of the mass map can have a higher resolution to more efficiently capture steep models. \item Stellar density can be used as an additional constraint on the models. \item Point or extended mass objects can be placed in the field. \end{enumerate} As a first application, we use {\sc Glass}{} on mock data to determine which combination of lensing, stellar mass and/or stellar kinematic constraints best constrains the projected mass profile and shape of a gravitational lens. We will apply {\sc Glass}{} to real lens data in a series of forthcoming papers. This paper is organised as follows. In \secref{sec:theory}, we review the key elements of lensing theory, stellar population synthesis, and stellar dynamics that we will need. In \secref{sec:glass}, we describe the {\sc Glass}{} code. In \secref{sec:mockdata}, we describe our mock data. In \secref{sec:results}, we present our results from applying {\sc Glass}{} to these mock data. Finally, in \secref{sec:conclusions} we present our conclusions. \section{Theory}\label{sec:theory} \subsection{Lensing essentials}\label{sec:lensing_basic} In the following summary, we follow \cite{1986ApJ...310..568B} with some differences in notation, in particular putting back the speed of light $c$ and the gravitational constant $G$.
The lens equation: \begin{equation} \begin{aligned} \vec\beta &= \vec\theta - \frac{D_{LS}}{D_S}\vec\alpha(\vec\theta) \\ \vec\alpha(\vec\theta) &= \frac{4G}{c^2D_L} \int \Sigma(\vec\theta') \frac{(\vec\theta - \vec\theta')} {\ |\vec\theta - \vec\theta'|^2} \, d^2\vec\theta' \end{aligned} \label{eqn:lens_equation} \end{equation} maps an observed image position $\vec\theta$ to a source position $\vec\beta$. Using the thin lens approximation, the lens can be thought of as a projected surface density $\Sigma$ which diverts the path of a photon instantaneously through the bending angle $\vec\alpha$. The $D$ factors, as in the previous section, are angular diameter distances, which depend on the cosmological density parameters $\Omega$, the redshifts $z_L,z_S$ of the lens and the source, and the Hubble parameter $H_0$, thus \begin{equation} D_{LS} = \frac c{H_0} \frac{1+z_S}{1+z_L} \int_{z_L}^{z_S} \frac{dz}{\sqrt{\Omega_m(1+z)^3 + \Omega_\Lambda}} \end{equation} and $D_L \equiv D_{0,L}$, $D_S \equiv D_{0,S}$. One way to understand the lens equation is via Fermat's principle: light can be thought of as travelling along all possible paths, with lensed images forming only along extremum paths.
Such paths occur at the extrema of the photon {\it arrival time} $t(\vec\theta)$ that depends on the geometric path the photon takes and the general relativistic gravitational time dilation due to a thin lens at redshift $z_L$: \begin{equation} \begin{aligned} \frac{ct(\vec\theta)}{(1+z_L)D_{L}} &= {\textstyle\frac12} |\vec\theta - \vec\beta|^2 \cdot \frac{D_{S}}{D_{LS}} \\ &- \frac{4GD_L}{c^2} \int \Sigma(\vec\theta') \ln |\vec\theta-\vec\theta'| \, d^2\vec\theta' \label{full arrival time} \end{aligned} \end{equation} We can simplify the above equation by introducing a dimensionless time $\tau$ and density $\kappa$: \begin{equation} \tau(\vec\theta) = \frac{ct(\vec\theta)}{(1+z_L)D_{L}} \quad ; \quad \kappa(\vec\theta) \equiv \frac{\Sigma(\vec\theta)}{\Sigma_\mathrm{crit}} \end{equation} and hence rewrite \eqnref{full arrival time} as: \begin{equation} \tau(\vec\theta) = {\textstyle\frac12} |\vec\theta - \vec\beta|^2 \cdot \frac{D_{S}}{D_{LS}} - \frac1\pi \int \kappa (\vec\theta') \ln|\vec\theta - \vec\theta'| d^2\vec\theta' \label{arrival time2} \end{equation} The scaled arrival time $\tau$ is like a solid angle: it is of order the area (in steradians) of the full lensing system. The expression $|\vec\theta - \vec\beta|^2$ is of order the image-separation squared, and the other terms are of similar size. For this reason, it is convenient to measure $\tau$ in arcsec$^{2}$. Lensing observations provide information only at $\vec\theta$ where there are images. Hence, the arrival-time surface $\tau(\vec\theta)$ is not itself observable. Its usefulness lies in that observables can be derived from it. An image observed at $\vec\theta_1$ implies that $\nabla\tau(\vec\theta_1)=0$. A measurement of time delays between images at $\vec\theta_1$ and $\vec\theta_2$ implies that $t(\vec\theta_1)-t(\vec\theta_2)$ is known. Interestingly, both these types of observations give constraints that are linear in $\kappa$ and $\vec\beta$.
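As a concrete illustration of deriving observables from $\tau$, the following sketch locates the images of a point-mass lens by finding the stationary points of the one-dimensional arrival time along the source axis, in units where $D_S/D_{LS}=1$. The choice of a point-mass lens and all numerical values here are illustrative only:

```python
import numpy as np
from scipy.optimize import brentq

BETA, THETA_E = 0.3, 1.0   # source offset and Einstein angle (illustrative)

def tau(theta):
    """Scaled arrival time along the source axis, in units with D_S/D_LS = 1."""
    return 0.5 * (theta - BETA)**2 - THETA_E**2 * np.log(abs(theta))

def dtau(theta):
    return (theta - BETA) - THETA_E**2 / theta

# Images sit where the arrival time is stationary: grad tau = 0.
# Bracket sign changes of dtau on a grid, then refine with a root finder.
grid = np.concatenate([np.linspace(-3, -1e-3, 400), np.linspace(1e-3, 3, 400)])
images = [brentq(dtau, a, b) for a, b in zip(grid[:-1], grid[1:])
          if a * b > 0 and dtau(a) * dtau(b) < 0]

# Analytic check: theta = (beta +/- sqrt(beta^2 + 4 theta_E^2)) / 2
print(sorted(images))                    # ≈ [-0.8612, 1.1612]
print(tau(images[0]) - tau(images[1]))   # scaled time delay between the images
```

The difference $\tau(\vec\theta_1)-\tau(\vec\theta_2)$ printed at the end is exactly the (scaled) time delay observable discussed above.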
The rather complicated dependence of lensing observables on the mass distribution $\kappa(\vec\theta)$ has an important consequence: very different mass distributions can result in similar observables. This is the phenomenon of lensing degeneracies. While the non-uniqueness of lens models noted by \cite{1981ApJ...244..736Y} already hinted at degeneracies, their existence was first derived by \cite{1985ApJ...289L...1F}. The most important is the so-called mass-sheet degeneracy, which is that image positions remain invariant if $\tau(\vec\theta)$ is multiplied by an arbitrary constant. This corresponds to rescaling the surface density $\kappa(\vec\theta)$ at the images. In fact, there are infinitely many degeneracies \citep{2000AJ....120.1654S} because any transformation of the arrival-time surface away from the images has no effect on the lensing observables. In particular, there are degeneracies that involve the shape of the mass distribution \citep{2006ApJ...653..936S,2014A&A...564A.103S}. Degeneracies tend to be suppressed if there are sources at very different redshifts (`redshift contrast') \citep{1998AJ....116.1541A,2009ApJ...690..154S}, because the presence of different factors of $D_S/D_{LS}$ in the image plane makes it more difficult to change the mass distribution and the arrival-time surface without affecting the lensing observables. But degeneracies are still present with multiple source redshifts \citep{2008MNRAS.386..307L,2014A&A...568L...2S}. \subsection{Stellar populations} For many galaxy lenses, the gravitational potential in the inner region is dominated by the stellar mass. Stellar mass can be estimated by combining photometry and colours with models of the stellar populations.
Such estimates are reasonably robust, even if the star-formation history is very uncertain: given a stellar-population model \citep[such as][]{2003MNRAS.344.1000B} and an initial mass function (IMF), the stellar mass can be inferred to within 0.1 to 0.2 dex using just two photometric bands \citep[see, e.g., Figure~1 in][]{2008MNRAS.383..857F}. By comparing the lensing-mass and stellar-mass profiles in elliptical galaxies, it is possible to extract the radial dependence of the baryonic vs dark-matter fraction \citep{2005ApJ...623L...5F,2008MNRAS.383..857F,2011ApJ...740...97L}. The major uncertainty at present in the stellar mass is probably the IMF. In the lensing galaxy of the Einstein Cross, the IMF cannot be much more bottom-heavy than \cite{2003PASP..115..763C}, because otherwise the stellar mass would exceed the lensing mass \citep{2010MNRAS.409L..30F}. More massive galaxies, however, do appear to have more of their stellar mass in low-mass stars. This is indicated by molecular spectral features characteristic of low mass stars \citep{2004ApJ...614L.101C,2012ApJ...747...69C,2013MNRAS.429L..15F}. The \cite{2003PASP..115..763C} IMF would, however, still provide a robust lower limit on the stellar mass and hence also a limit on the total mass. Accordingly, {\sc Glass}{} allows a constraint of the form \begin{equation} M(\vec\theta) \geq M_\mathrm{stel}(\vec\theta) \end{equation} on the total mass. \subsection{Stellar kinematics}\label{sec:kinematics} Another useful constraint follows from the velocity of stars within the lensing galaxy.
Assuming spherical symmetry, stars obey the projected Jeans equations \citep[e.g.,][]{2008gady.book.....B}: \begin{equation} \sigma_p^2(R) = \frac{2}{I(R)}\int_R^\infty dr \left(1-\beta \frac{R^2}{r^2}\right) \frac{\nu \sigma_r^2 r}{\sqrt{r^2 - R^2}}; \label{eqn:sphericaljeans} \end{equation} \begin{equation} \sigma_r^2(r) = \frac{r^{-2\beta}}{\nu}\int_r^\infty r'^{2\beta} \nu \frac{G\ensuremath{M_\mathrm{3D}}(r')}{r'^2}dr' \end{equation} where $\sigma_p$ is the projected velocity dispersion of the stars as a function of projected radius $R$; $I(R)$ is the surface density of the stars; $\nu(r)$ is the three dimensional stellar density; $\sigma_{r,t}(r)$ are the radial and tangential velocity dispersions, respectively; $\beta(r) = 1 - \sigma_t^2/2\sigma_r^2 = \mathrm{const.}$ is the velocity anisotropy (here assumed to be constant, and not to be confused with $\vec\beta(\vec\theta)$ from lensing); $G$ is Newton's gravitational constant; and $\ensuremath{M_\mathrm{3D}}(r)$ is the mass profile that we would like to measure. By convention, we always write $R$ for a projected radius, and $r$ for a 3D radius. It is immediately clear from \eqnref{eqn:sphericaljeans} that, even assuming spherical symmetry, we have a degeneracy between the enclosed mass profile $\ensuremath{M_\mathrm{3D}}(r)$ and the velocity anisotropy $\beta(r)$. This can be understood intuitively since $\beta(r)$ measures the relative importance of radial versus circular orbits and is intrinsically difficult to constrain given only one component of the velocity vector for each star. Nonetheless, $\beta(r)$ can be constrained given sufficiently many stars, since radial Doppler velocities sample eccentric orbits as $r\rightarrow 0$ and tangential orbits as $r\rightarrow \infty$ \citep[e.g.,][]{2002MNRAS.330..778W}. It can also be estimated if an independent measure of $\ensuremath{M_\mathrm{3D}}(r)$ is available -- for example coming from strong lensing. 
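To make \eqnref{eqn:sphericaljeans} concrete, the following minimal sketch solves the two Jeans integrals by direct quadrature for a singular isothermal tracer (our illustrative choice, for which the analytic answer is $\sigma_p^2 = V_c^2/2$ at every $R$ when $\beta=0$), using the substitution $r=\sqrt{R^2+t^2}$ to remove the endpoint singularity. This is an illustration, not the {\sc Glass}{} implementation described in \secref{sec:glasskinematics}:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative tracer: singular isothermal sphere, nu ~ r^-2, M(r) = Vc^2 r / G.
# In units with G = Vc = 1, the analytic answer is sigma_p^2 = 1/2 at every R.
G = 1.0
nu = lambda r: r**-2          # 3D tracer density
M3d = lambda r: r             # enclosed mass, Vc^2 r / G with Vc = G = 1

def sigma_r2(r, beta=0.0):
    """Radial dispersion from the (second) radial Jeans integral."""
    integrand = lambda s: s**(2 * beta) * nu(s) * G * M3d(s) / s**2
    val, _ = quad(integrand, r, np.inf)
    return r**(-2 * beta) / nu(r) * val

def sigma_p2(R, beta=0.0):
    """Line-of-sight projection.  The substitution r = sqrt(R^2 + t^2)
    turns r dr / sqrt(r^2 - R^2) into dt, removing the endpoint
    singularity; the factors of 2 in the numerator and in I(R) cancel."""
    def num(t):
        r = np.hypot(R, t)
        return (1 - beta * R**2 / r**2) * nu(r) * sigma_r2(r, beta)
    den = lambda t: nu(np.hypot(R, t))
    top, _ = quad(num, 0, np.inf)
    bot, _ = quad(den, 0, np.inf)
    return top / bot

print(sigma_p2(1.0))              # ≈ 0.5, the isothermal analytic value
print(sigma_p2(1.0, beta=0.5))    # anisotropy changes the projected dispersion
```

The second call shows the mass-anisotropy degeneracy at work: the same $M_\mathrm{3D}(r)$ yields a different projected dispersion once $\beta \neq 0$.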
While $\ensuremath{M_\mathrm{3D}}(r)$ is difficult to measure from stellar kinematics alone, the mass within the half light radius is robustly recovered \citep[e.g.,][]{2009ApJ...704.1274W,2010MNRAS.406.1220W,2012ApJ...754L..39A} since stellar systems in dynamic quasi-equilibrium obey the virial theorem (equation \ref{eqn:virial}). This means that stellar kinematics can break the steepness degeneracy if $r_{1/2} \neq r_E$, where $r_E = D_L \theta_E$ is the physical Einstein radius. We test this expectation in \secref{sec:results}. We describe our numerical solution of \eqnref{eqn:sphericaljeans} in \secref{sec:glasskinematics} and present tests applied to mock data in \secref{sec:results}. \section{Numerical Methods}\label{sec:glass} \subsection{A new lens modelling framework: {\sc Glass}} {\sc Glass}{} is the Gravitational Lensing AnalysiS Software. It extends and develops some of the concepts from the free form modelling tool {\sc PixeLens}{}~\citep{Saha2004,2008ApJ...679...17C}, but with all new code. The most compute-intensive portion is written in C, but Python was chosen for the overall framework because of its flexibility as a language and its large scientific library support. This flexibility allows {\sc Glass}{} to have quite sophisticated behaviour while at the same time simplifying the user experience and reducing the overall development time. A striking feature is that the input file to {\sc Glass}{} is itself a Python program. Understanding Python is not necessary for the most basic use, but this design allows a user to build complex analyses of a model directly into the input file. {\sc Glass}{} may furthermore be used as a library from other Python programs. {\it The software is freely available for download or from the first author.}% \footnote{\url{http://www.jpcoles.com}} The key scientific and technical improvements are: \begin{enumerate} \setcounter{enumi}{0} \item A new uniform sampling algorithm for high dimensional spaces.
\end{enumerate} At the heart of {\sc Glass}{} lies a new algorithm for sampling the high dimensional linear space that represents the modelling solution space. This algorithm was described and tested in \cite{2012MNRAS.425.3077L}; it is multi-threaded, allowing it to run efficiently on many-core machines. \begin{enumerate} \setcounter{enumi}{1} \item A modular framework that allows new priors to be added and modified easily. \end{enumerate} Each prior is a simple function that adds linear constraints that operate on either a single lens object or the entire ensemble of objects. {\sc Glass}{} comes with a number of useful priors (the default ones will be described in \secref{sec:discrete}), but a user can write their own, either directly in the input file or by modifying the source code. \begin{enumerate} \setcounter{enumi}{2} \item The basis functions approximating a model can be changed. \end{enumerate} {\sc Glass}{} currently describes the lens mass as a collection of pixels, but the code has been designed to support alternative methods. In particular, there are future plans to develop a module using Bessel functions; this will require a new set of priors that operate on these functions. \begin{enumerate} \setcounter{enumi}{3} \item Non-linear constraints can be imposed in an automated post-processing step. \end{enumerate} Once {\sc Glass}{} has generated an ensemble of models given the linear constraints, any number of post-processing functions can be applied. Not only can these functions be used to derive new quantities from the mass models, but they can also be used as a filter to accept or reject a model based on some non-linear constraint. For example, we can reject models that have spurious extra images (\secref{sec:glassextraimages}), or models that do not match stellar kinematic constraints (\secref{sec:glasskinematics}). The plotting functions within {\sc Glass}{} will correctly display models that have been accepted or rejected.
\begin{enumerate} \setcounter{enumi}{4} \item The central region can have a higher resolution to capture steep models. \end{enumerate} With the default basis set of pixels, the mass distribution of the lens is described by a uniform grid. However, in the central region of a lensing galaxy, where the mass profile may rise steeply, the central pixel uses a higher resolution. This allows the density to increase smoothly while still allowing a large degree of freedom within the inner region, without letting the density become arbitrarily high. \begin{enumerate} \setcounter{enumi}{5} \item Stellar density can be used as an additional constraint. \end{enumerate} The mass in the inner regions of galaxies is often dominated by the stellar component, which one can estimate using standard mass-to-light models. These data can be added to the potential as described later in \secref{stellar mass}. By using the stellar mass one can place a lower bound on the mass and help constrain the innermost mass profile. \begin{enumerate} \setcounter{enumi}{6} \item Point or extended mass objects can be placed in the field. \end{enumerate} A shear term can be added to the potential, as shown later in \eqnref{shear}, to account for mass external to the modelled region. This is useful to capture the gross effects of a distant neighbour, since there is a degeneracy between the ellipticity of a lens and its shear field (the greater the allowed shear, the more circular the lens may be). {\sc Glass}{} also allows further analytic potential components to be included. These can be used to model substructure or multiple neighbours close to the main lens. The substructure may have only a small effect if the lens is a single galaxy, but if the lens is a group or cluster then a potential can be added for each of the known member galaxies.
A few standard functions are already included in {\sc Glass}{}, including those for a point mass, a power-law distribution, and an isothermal profile (a particular case of the power law). \subsection{Analysis Tools}\label{sec:tools} {\sc Glass}{} is not only a modelling tool but also an analysis engine. {\sc Glass}{} provides many functions for viewing and manipulating the computed models. These functions can be called either from a program written by the user or via the program \textsc{viewstate.py} included with {\sc Glass}. There is also a tool, \textsc{lenspick.py}, for creating a lens, either analytically or from an $N$-body simulation file. To load the simulation data, {\sc Glass}{} relies on the \textsc{Pynbody} library \citep{pynbody} and can thus load any file supported by that package. \subsection{Pixelated models}\label{sec:discrete} For this paper, we will restrict ourselves to using a pixelated basis set as used by {\sc PixeLens}{} \citep{Saha2004,2008ApJ...679...17C}, but note that it is straightforward to add other basis function expansions to {\sc Glass}. The algorithm for generating models in {\sc Glass}{} samples a convex polytope in a high dimensional space whose interior points satisfy both the lens equation and other physically motivated {\it linear} priors \citep{2012MNRAS.425.3077L}. A limitation of our sampling strategy is that only linear constraints may be applied when building the model ensemble; however, non-linear constraints can be applied in post-processing (see \secref{sec:glassextraimages} and \secref{sec:glasskinematics}). We therefore formulate all of our constraints as equations linear in the unknowns.
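To illustrate the kind of sampling problem involved, a generic hit-and-run sampler for a convex polytope $\{\vec x : A\vec x \le \vec b\}$ can be sketched as follows. This is a textbook illustration; the actual {\sc Glass}{} algorithm \citep{2012MNRAS.425.3077L} differs in detail:

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng):
    """Uniformly sample the convex polytope {x : A x <= b} by hit-and-run.

    x0 must be a strictly interior starting point.  Generic sketch only;
    the sampler used by GLASS (Lubini & Coles 2012) differs in detail.
    """
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)               # random direction on the sphere
        # Feasible segment x + t d: each row i requires A_i . (x + t d) <= b_i.
        ad = A @ d
        slack = b - A @ x
        t_hi = np.min(slack[ad > 0] / ad[ad > 0])
        t_lo = np.max(slack[ad < 0] / ad[ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d  # uniform step along the chord
        samples.append(x.copy())
    return np.array(samples)

# Demo: the unit cube [0, 1]^5 written as A x <= b.
dim = 5
A = np.vstack([np.eye(dim), -np.eye(dim)])
b = np.concatenate([np.ones(dim), np.zeros(dim)])
rng = np.random.default_rng(42)
S = hit_and_run(A, b, x0=np.full(dim, 0.5), n_samples=4000, rng=rng)
print(S.mean(axis=0))   # each coordinate ≈ 0.5 for a uniform sample
```

Every sample satisfies all the linear constraints by construction, which is why only linear priors can be imposed at this stage; anything non-linear must wait for post-processing.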
We describe the density distribution $\kappa$ as a set of discrete grid cells or pixels $\kappa_i$ and rewrite the potential \eqnrefp{lensing potential} as: \begin{equation} \psi(\vec\theta) = \sum_n \kappa_n Q_n(\vec\theta) \label{discrete potential} \end{equation} where the sum runs over all the pixels and $Q_n$ is the integral of the logarithm over pixel $n$. The exact form for $Q$ is described in \appref{Q derivation}. We can find the discretized lens equation by simply taking the gradient of the above expression. The pixels only cover a finite circular area with physical radius $\ensuremath{R_\mathrm{map}}$ and pixel radius $\ensuremath{R_\mathrm{pix}}$, with the central cell centred on the lensing galaxy. To account for any global shearing outside this region from, e.g., a neighbouring galaxy, we also add to \eqnref{discrete potential} two shearing terms: \begin{equation} \label{shear} \gamma_1(\theta_x^2 - \theta_y^2) + 2\gamma_2\theta_x\theta_y\quad. \end{equation} We can continue adding terms to account for other potentials. For instance, we may want to impose a base potential over the field, or add potentials from the presence of other galaxies in the field. {\sc Glass}{} already includes potentials for a point mass or an exponential form, but custom potentials are straightforward to add and can be included directly in the input file. If the stellar density $\kappa_s$ has been estimated, we can use it as a lower bound, with the stellar contribution entering as a known term of the form \eqnref{discrete potential}; e.g., $\kappa_n = \kappa_{\mathrm{dm},n} + \kappa_{s,n}$ for a two-component model. \subsubsection{Priors} The lens equation and the arrival times alone are typically not enough to form a closed volume in the solution space. We therefore require additional linear constraints -- {\it priors}.
Some of these are `physical' in the sense that they are unarguable -- for example demanding that the mass density is everywhere positive; others are more subjective, for example demanding that the mass map is smooth over some region. Such `regularisation' priors may be switched off for all or some of the mass map if the data are sufficiently constraining. The priors built into {\sc Glass}{} are similar to those used in {\sc PixeLens}{}~\citep{2008ApJ...679...17C}. The physical priors are always used by default; the regularisation priors are used sparingly -- i.e.\ only if the data are not sufficiently constraining to obtain sensible solutions without them: \vspace{2mm} \noindent {\bf Physical priors} \begin{enumerate} \item The density must be non-negative everywhere. \item Image parity is enforced. \end{enumerate} \vspace{1mm} \noindent {\bf Regularisation priors} \begin{enumerate} \item The local gradient everywhere must point within $45^{\circ}$ of the centre. \item The azimuthally averaged density profile must have a slope everywhere $\le 0$. \item The density is inversion symmetric. \end{enumerate} For typical lens data, the regularisation priors are very important for creating physically sensible solutions. Prior (i) demands that the peak in the mass density is at the centre of the mass map. Secondary `plateaus' in the mass map are possible, but not secondary peaks. Note that this prior still allows merging galaxy systems to be correctly captured, provided that the two galaxies are not equally dense in projection (see, for example, the {\sc PixeLens}{} model of the merger system B1608 in \citealt{2007ApJ...667..645R}), and allows the successful detection of `meso-structure' in strong lensing galaxy clusters \citep{2007ApJ...663...29S}. Prior (ii) is arguably a physical prior, since a positive slope in the azimuthally averaged density profile would be unstable \citep[e.g.,][]{2008gady.book.....B}.
Note that this prior does not preclude successful modelling of mergers or substructure unless the total projected mass in substructure is comparable to the projected mass of the host in an azimuthal annulus \citep{2007ApJ...667..645R,2007ApJ...663...29S}. Prior (iii) is only used for doubles that ought to be inversion symmetric and quads where inversion symmetry is clear from the image configuration. Finally, we remind the reader that all of the regularisation priors can be switched off or changed/improved depending on the data quality available. For clusters, substructure can be explicitly modelled by adding analytic potentials at the known locations of galaxies; furthermore, the above priors can be relaxed in regions of the mass map where the data are particularly constraining (for example near the images). We will apply {\sc Glass}{} to a host of strong lensing clusters in forthcoming work, where we will explicitly test the prior on mock data that has significant substructure. \subsection{Building the model ensemble} In its simplest form, a single model for a lens is a tuple $\ensuremath{\mathscr{M}} = (\vec\kappa, \vec\beta, \gamma_1, \gamma_2)$. A single model represents a single point in the solution space polytope. Using the MCMC sampling strategy described in \cite{2012MNRAS.425.3077L} we uniformly sample this space. Collectively, the sampled models are referred to as an ensemble $\ensuremath{\mathscr{E}} = \{\ensuremath{\mathscr{M}}_i\}$, where we usually generate $|\ensuremath{\mathscr{E}}| \sim 1000$ models. One can choose to further process these models to impose priors that may be difficult to enforce during the modelling process: non-linear constraints can be applied, and models that do not meet some criteria can be excluded or weighted against, as discussed previously. In this paper, we do not exclude any models and treat all models as equally likely.
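Schematically, such post-processing amounts to filtering the ensemble with arbitrary (possibly non-linear) predicates. The names and the accept/reject criterion below are purely illustrative and are not the {\sc Glass}{} API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Model:
    """One point in the solution space (names illustrative, not the GLASS API)."""
    kappa: np.ndarray      # pixel densities
    beta: np.ndarray       # source position
    shear: tuple           # (gamma1, gamma2)
    accepted: bool = True  # flag set by post-processing filters

def apply_filter(ensemble, predicate):
    """Flag models failing a (possibly non-linear) constraint; rejected models
    are kept around so plots can show accepted and rejected sets separately."""
    for m in ensemble:
        m.accepted = m.accepted and predicate(m)
    return [m for m in ensemble if m.accepted]

# Toy ensemble with a toy non-linear cut on the total pixelated mass.
rng = np.random.default_rng(1)
ensemble = [Model(kappa=rng.uniform(0, 2, size=16), beta=np.zeros(2),
                  shear=(0.0, 0.0)) for _ in range(1000)]
kept = apply_filter(ensemble, lambda m: m.kappa.sum() < 18.0)
print(len(kept), "of", len(ensemble), "models pass the cut")
```

Keeping the rejected models flagged rather than deleted mirrors the behaviour described above, where the plotting functions can display accepted and rejected models separately.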
The time to generate the model ensemble is mostly a function of the size of the parameter space. The MCMC algorithm has a ``warm-up'' phase where it estimates the size and shape of each dimension in the solution space. Once this has been completed, the models are sampled very quickly. In fact, there is little difference in cost between generating 1,000 or 10,000 models, and we find little statistical difference beyond 1,000 models. For the mock lenses, the typical ``warm-up'' time was about 4s, and the modelling time was 20s using a parallel shared-memory machine with 40 cores. The ability to rapidly generate so many models is what allows us to then accept/reject models to apply non-linear constraints (see \secref{sec:glassextraimages} and \secref{sec:glasskinematics}). This is a key advantage over our earlier pixelated strong lens tool {\sc PixeLens}. \subsection{Raytracing}\label{Raytracing} {\sc Glass}{} can also determine the position of images and time delays from particle-based simulation output given a source position $\vec\beta$. This is used to generate the lens configurations used in the parameter study. The particles are first projected onto a very high-resolution grid representing the lens plane. The centres $\vec\theta_i$ of each of the grid cells are mapped back onto the source plane using \eqnref{eqn:lens_equation}. If the location on the source plane $\vec\beta_i$ is within a user-specified $\ensuremath{\varepsilon}_\mathrm{accept}$ of $\vec\beta$ then $\vec\theta_i$ is accepted and further refined using a root finding algorithm until the distance to $\vec\beta$ is nearly zero. If multiple points converge to within $\ensuremath{\varepsilon}_\mathrm{root}$ of each other then only one point is taken. Care must be taken that the grid resolution is high enough that the resulting image position error is below the equivalent observational error. Time delays are then calculated in order of the arrival time at each image \eqnrefp{tau}.
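A minimal version of this grid-then-refine scheme, applied to an analytic point-mass lens rather than projected simulation particles (our simplification, with illustrative tolerances), might look like:

```python
import numpy as np
from scipy.optimize import fsolve

theta_E = 1.0
beta_src = np.array([0.5, 0.0])   # illustrative source position

def to_source(theta):
    """Point-mass lens equation: beta = theta - theta_E^2 * theta / |theta|^2."""
    r2 = theta[0]**2 + theta[1]**2
    return theta - theta_E**2 * theta / r2

# 1. Map a coarse image-plane grid back to the source plane and keep
#    cells landing within eps_accept of the true source position.
xs = np.linspace(-3, 3, 300)
gx, gy = np.meshgrid(xs, xs)
r2 = gx**2 + gy**2
bx = gx - theta_E**2 * gx / r2
by = gy - theta_E**2 * gy / r2
miss = np.hypot(bx - beta_src[0], by - beta_src[1])
keep = (r2 > 0.05**2) & (miss < 0.05)         # avoid the singularity; eps_accept

# 2. Refine each candidate with a root finder; merge near-duplicates.
images = []
for cx, cy in zip(gx[keep], gy[keep]):
    sol = fsolve(lambda t: to_source(t) - beta_src, [cx, cy])
    if all(np.linalg.norm(sol - im) > 1e-3 for im in images):   # eps_root
        images.append(sol)
print(sorted(im[0] for im in images))   # ≈ [-0.7808, 1.2808]
```

The two recovered images agree with the analytic point-mass solution $\theta = (\beta \pm \sqrt{\beta^2 + 4\theta_E^2})/2$.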
\subsection{Removing models with extra images}\label{sec:glassextraimages} While linear constraints are applied in {\sc Glass}{} by the nature of the sampling algorithm, non-linear constraints must be applied in post-processing. Models that are inconsistent with such constraints must then be statistically discarded via a likelihood analysis. An example of such a non-linear constraint is the spurious presence of unobserved images. This `null-space' prior was first proposed and explored by \citet{2006MNRAS.367.1209L} and found to be extremely powerful. We find that our gradient prior in {\sc Glass}{} (see \secref{sec:discrete}) performs much of the same function as \citeauthor{2006MNRAS.367.1209L}'s null-space prior, but occasionally a model can still turn up spurious images. We reject these in a post-processing step, where we sweep through the model ensemble applying the ray tracing algorithm described in \secref{Raytracing}. \subsection{A post-processing module for stellar kinematics}\label{sec:glasskinematics} Similarly to the null-space constraint (\secref{sec:glassextraimages}), stellar kinematic constraints constitute a non-linear prior on the mass map and must be applied in post-processing. We sweep through the model ensemble performing an Abel deprojection to determine $\ensuremath{M_\mathrm{3D}}(r)$ from the projected surface density $\Sigma(R)$ assuming spherical symmetry \citep[e.g.,][]{2008gady.book.....B,2008MNRAS.390.1647B}: \begin{eqnarray} \ensuremath{M_\mathrm{3D}}(r) & = & M_\mathrm{p}(<r) - 4r^2 \int_0^{\pi/2} \Sigma\left(x\right) \left[\frac{1}{\cth[2]} \right. \nonumber \\ & & \left. - \frac{\sth}{\cth[3]} \arctan\left(\frac{\cth}{\sth}\right) \right] d\theta \end{eqnarray} where \begin{equation} M_\mathrm{p}(<r) = 2\pi \int_0^r R \Sigma(R) dR \end{equation} is the projected enclosed mass evaluated at 3D radius $r$; and $x = r/\cth$. This de-projection algorithm was tested on triaxial figures in \citet{2006ApJ...652L...5S}.
They found that for triaxialities typical in our current cosmology, the method works extremely well unless the triaxial figure is projected directly along the line of sight such that we see the galaxy or galaxy cluster `down the barrel'. Such a situation is unlikely, but in any case avoidable since the resultant figure appears spherical in projection. This leads to the seemingly counter-intuitive result that the kinematic constraints -- which rely on the above de-projection -- are most secure for systems that do not appear spherical in projection (unless independent data can confirm that the three dimensional shape is indeed very round). We use the deprojected mass to numerically solve \eqnref{eqn:sphericaljeans} for constant $\beta(r)$, assuming either $\beta(r) = 1$ or $\beta(r) = 0$ at all radii to bracket the two limiting cases. Where the data are good enough, these two may be distinguished, giving dynamical information about $\beta(r)$. In more typical situations, however, we seek simply to marginalise over the effect of $\beta(r)$, using the stellar kinematics as a robust measure of $\ensuremath{M_\mathrm{3D}}(r_{1/2})$ (see \secref{sec:kinematics}). \section{The mock data}\label{sec:mockdata} We now present a study of four mock galaxies with known analytic forms. These are used to verify that {\sc Glass}{} is able to correctly recover the mass profile, and -- more importantly -- to determine what type and quality of data best constrain the mass profile and shape of a lens. \subsection{The triaxial N-body mock galaxies} We generate four two-component mock galaxies, where the dark matter and stellar profiles are allowed to be either steep or shallow. The enclosed masses of the stars and dark matter are both fixed to $M_{*,\mathrm{DM}} = 1.8\e{10}$\,M$_\odot$ at the stellar scale radius $a_* = 2$\,kpc, such that the stars and dark matter contribute equally to the total mass at $a_*$.
The dark matter scale length is fixed for all models at $a_\mathrm{DM} = 20$\,kpc. These values were chosen to closely resemble the lensing galaxy PG1115+080 \citep{1980Natur.285..641W}. We place the galaxy at a redshift of $z_L = 0.31$ for lensing. Throughout, we assume a cosmology where $H_0^{-1}=13.7$ Gyr, $\Omega_M=0.28$, and $\Omega_\Lambda=0.72$. The critical lensing density is $\kappa_\mathrm{crit}\sim 1.8\e{9}$\ensuremath{\mathrm{M}_\odot}/kpc$^2$. The galaxies were generated as three dimensional particle distributions as in \citet{2009MNRAS.395.1079D}. Each component follows the profile: \begin{equation} \rho(\tilde r) = \frac{M}{4\pi a^3}(3-\gamma){(\tilde r/a)^{-\gamma}(1 + \tilde r/a)^{\gamma-4}} \label{Dehnen profile} \end{equation} where $a$ is the component scale radius mentioned in \tabref{mock galaxy params}; $\tilde r^2 = (x/\lambda_1)^2 + (y/\lambda_2)^2 + (z/\lambda_3)^2$ is the ellipsoidal radius; and the axis ratios are $\lambda_1:\lambda_2:\lambda_3 = 6:4:3$. In the case where the central density profile index $\gamma$ is unity (and in the limit of spherical symmetry), this is the Hernquist profile \citep{1990ApJ...356..359H}. The four combinations of profile indices are shown in \tabref{mock galaxy params}. \begin{table} \begin{tabular}{llllllll} Galaxy & $\gamma_\star$ & $M_\star$ & $\gamma_\mathrm{DM}$ & $M_\mathrm{DM}$ & $\ensuremath{R_\mathrm{map}}$ \\ \hline {\sc star1.0-dmCore} & 1 & 4 & 0.05 & $11^{2.95}$ & 50 kpc\\ {\sc star1.0-dmCusp} & 1 & 4 & 1 & $11^2$ & 50 kpc \\ {\sc star1.5-dmCore} & 1.5 & $2^{1.5}$ & 0.16 & $11^{2.84}$ & 50 kpc \\ {\sc star1.5-dmCusp} & 1.5 & $2^{1.5}$ & 1 & $11^2$ & 10 kpc \end{tabular} \caption{Profile parameters for the four mock galaxies. The name indicates whether the galaxy is centrally dark matter or stellar dominated with a shallow or cuspy dark matter density profile. Masses are in units of $1.8\e{10}\ensuremath{\mathrm{M}_\odot}$. 
The scale lengths for all lenses are $(a_\star,a_\mathrm{DM})=(2,20)$\,kpc. $\ensuremath{R_\mathrm{map}}$ is the 2D projected radius used to generate the lens configurations. In the case of {\sc star1.5-dmCusp}, the profile is sufficiently steep that it could be truncated at $\ensuremath{R_\mathrm{map}} = 10$ kpc.} \label{mock galaxy params} \end{table} \begin{figure*} \includegraphics[width=0.33\textwidth]{MockGalProfile-a.pdf} \includegraphics[width=0.33\textwidth]{MockGalProfile-b.pdf} \includegraphics[width=0.33\textwidth]{MockGalProfile-c.pdf} \caption{ Profiles of the four mock galaxies showing the stellar (dotted) and dark matter (dashed) components and the total (solid). \textbf{Left:} The spherically averaged density. The stars in models {\sc star1.5-dmCore}{} and {\sc star1.5-dmCusp}{} contribute significantly to the central potential. \textbf{Middle:} The radially averaged two-dimensional projected density. The critical lensing density at $z_L=0.31$, $\kappa_\mathrm{crit}\sim 1.8\e{9}$\ensuremath{\mathrm{M}_\odot}/kpc$^2$, is marked by the horizontal line. \textbf{Right:} The enclosed projected mass. } \label{mock galaxies} \end{figure*} In \figref{mock galaxies}, we show the 3D radial density, the 2D projected density, and the 2D enclosed mass for each galaxy. \subsection{Lens configurations}\label{sec:lensconfig} For each of the four galaxies, we used the raytracing feature of {\sc Glass}{} described in \secref{Raytracing} to construct 6 basic lensing morphologies: \begin{enumerate} \item one double and one extended double; \item one quad and one extended quad; \item two 2-source quads with varying redshift contrast. \end{enumerate} The `extended' configurations use multiple point sources at the same redshift to simulate an extended source that produces an arc-like image. \figref{arrival surfaces} shows the lens configurations for the {\sc star1.5-dmCusp}\ galaxy. The configurations for the other galaxies are similar.
The labels Z1, Z2, Z3 within the names refer to the redshift of the sources. We have chosen Z1=1.72, Z2=0.72, and Z3=0.51 so that the radial distribution of the images is roughly equally spaced. For all mocks, we do not apply any external shear field. Only the central image of the Z1 source is used to avoid over-constraining the models; otherwise, all the central images would fall within the central pixel and no solution would exist that satisfies all locations simultaneously for one pixel value. \begin{figure*} \includegraphics[width=0.85\textwidth]{BCarrival_surfaces} \caption{The lens configurations for the six test cases using the {\sc star1.5-dmCusp}{} mock galaxy. The other mock galaxies produce similar results. Here, the central image is shown, although not all tests include it. The naming convention indicates the redshift of the sources with Z1=1.72, Z2=0.72, and Z3=0.51. The central image only belongs to the Z1 source to avoid over-constraining the models (see \secref{sec:lensconfig} for further details). Small diamonds identify the location of the source(s) and images of the same shape share a common source. The extended source examples have been constructed so that the images will form arclets. The maximum separation of the sources in the source plane is 2.23 kpc in the extended double and 0.92 kpc in the extended quad. Grey circles are a visual aid to help determine radial separation between images. The axes are in arcseconds.} \label{arrival surfaces} \end{figure*} Each of these configurations was modelled with and without time delays; with and without a central image; and with and without the stellar mass as a lower bound, for a total of 48 test cases. (The central image is typically highly demagnified. For galaxy lenses it is very difficult to find since it lies along the sight line to the bright lensing galaxy; in clusters, however, such images have been seen -- e.g., \citealt{2005PASJ...57L...7I}).
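For reference, the critical surface density quoted earlier, $\Sigma_\mathrm{crit} = \frac{c^2}{4\pi G}\frac{D_S}{D_L D_{LS}}$, can be estimated from the stated cosmology. This is an illustrative sketch only (not {\sc Glass}{} code): it assumes flat $\Lambda$CDM angular diameter distances and, hypothetically, the Z1 source redshift, so the result matches the quoted $\kappa_\mathrm{crit}$ only at the order-of-magnitude level (the exact value depends on which source plane is adopted):

```python
import math

# Cosmology quoted in the text: H0^-1 = 13.7 Gyr, Omega_M = 0.28, Omega_L = 0.72.
GYR = 3.156e16            # seconds per Gyr
H0 = 1.0 / (13.7 * GYR)   # Hubble rate [1/s]
C = 2.998e8               # speed of light [m/s]
G = 6.674e-11             # [m^3 kg^-1 s^-2]
M_SUN = 1.989e30          # [kg]
KPC = 3.086e19            # [m]

def hubble_e(z, om=0.28, ol=0.72):
    return math.sqrt(om * (1.0 + z)**3 + ol)

def comoving_distance(z, n=2000):
    # trapezoidal integration of (c/H0) * int_0^z dz'/E(z')  [metres]
    h = z / n
    s = 0.5 * (1.0 / hubble_e(0.0) + 1.0 / hubble_e(z))
    s += sum(1.0 / hubble_e(i * h) for i in range(1, n))
    return C / H0 * s * h

def sigma_crit(z_l, z_s):
    # Sigma_crit = c^2/(4 pi G) * D_S / (D_L * D_LS), flat-space distances
    dc_l, dc_s = comoving_distance(z_l), comoving_distance(z_s)
    d_l = dc_l / (1.0 + z_l)
    d_s = dc_s / (1.0 + z_s)
    d_ls = (dc_s - dc_l) / (1.0 + z_s)
    return C**2 / (4.0 * math.pi * G) * d_s / (d_l * d_ls)   # [kg/m^2]

# lens at z_L = 0.31; the Z1 source redshift 1.72 is chosen for illustration
sc = sigma_crit(0.31, 1.72) * KPC**2 / M_SUN   # [M_sun / kpc^2]
print(f"Sigma_crit ~ {sc:.2e} M_sun/kpc^2")
```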
We assumed for all our tests that the lensing mass was radially symmetric (Prior vi). For our mock data, this is known to be true; it is most often the case with real galaxies, unless there is an obvious observed asymmetry. (We explore the effect of switching off the symmetry prior in \appref{no_symm_prior}. For the quads, the difference is small; for the doubles -- as expected -- the results are significantly degraded without this prior.) We use, by default, 8 pixels from the centre to the edge of the mass map; the central pixel is further refined into $5\times5$ pixels to capture any steep rise in the profile (two of the four mock galaxies have a steeply rising inner profile). We demonstrate that our results are robust to changing the grid resolution in \appref{pix_convergence_test}. In all cases -- despite applying no external shear to the mock lenses -- we allow a broad range of external shear in our lens model reconstructions. {\sc Glass}{} correctly returns a small or zero shear in all cases. It is possible that more complex shear fields present in real lensing galaxies could introduce further degeneracies beyond those discussed here. However, any such shear field can, at least in principle, be constrained by data (e.g., combining weak lensing constraints, or assuming that the shear field correlates with visible galaxies -- e.g., \citealt{2009A&A...500..681M,2011ApJ...726...84W}). \section{Results}\label{sec:results} \subsection{Radial profile recovery} \figref{reconstruction} shows some example reconstructions of the radial profile of our mock lenses. The left column shows the ensemble average arrival time surface with images marked as circles and the inferred source positions as diamonds. The centre column shows the radial density profile. The error bars cover a $1\sigma$ range around the median; the grey bands show the full ensemble range. The true density profile from the mock data is also plotted for comparison.
The vertical lines mark the radial position of the images. The right column shows the enclosed mass. From top to bottom, the rows correspond to an extended double for {\sc star1.5-dmCusp}; an extended double with stellar mass constraints for {\sc star1.5-dmCusp}; a quad with time delay data for {\sc star1.5-dmCusp}; and a quad with time delays for {\sc star1.0-dmCore}. \figref{2d mass reconstruction} shows an example 2D reconstruction for {\sc star1.5-dmCusp}\ for a quad; we discuss shape recovery further in \secref{sec:shape}. As expected, the accuracy and precision are best in the range of radii with lensed images where the most information about the lens is present. Even in the weakly constrained case of the extended double where the radial profile is poor, the true enclosed mass $M(<R)$ is well recovered at the image radii and our ensemble always encompasses it. We have verified this is the case in all of our tests, although for brevity we have not included the plots here. In all cases, there is a dip in the profile at large $R$ due to the cut off in mass in the lensing map. This is of little importance, though, as there is no lensing information there. Notice that the extended double (top row) gives the poorest constraints, as expected. Adding stellar mass (second row) significantly improves the constraints, for this example where the stars contribute significantly to the potential. Moving to a quad with time delays gives constraints almost as strong as the double with stellar mass, but note that {\it focussing only on the goodness of the fit can be misleading.} In the third row of \figref{reconstruction}, we obtain a better recovery than in the bottom row for {\it precisely the same data quality}. This occurs because the {\sc Glass}{} prior favours steeper models consistent with {\sc star1.5-dmCusp}, but not {\sc star1.0-dmCore}. It is the {\sc Glass}{} prior, rather than the data, that is driving the good recovery for {\sc star1.5-dmCusp}\ in this example.
This emphasises the importance of using a wide range of mock data tests to determine the role of data versus prior in strong lensing. \begin{figure*} \plotthree{BCExtendedDoubleR1_tms-a.pdf} {BCExtendedDoubleR1_tms-b.pdf} {BCExtendedDoubleR1_tms-c.pdf} \plotthree{BCExtendedDoubleR1_tmS-a-1.pdf} {BCExtendedDoubleR1_tmS-b-1.pdf} {BCExtendedDoubleR1_tmS-c-1.pdf} \caption{ Two reconstructions of the mock galaxy {\sc star1.5-dmCusp}\ for an extended double without stellar mass (\textbf{Top}) and with stellar mass (\textbf{Bottom}). No time delays were assumed. The improved constraints on the mass distribution when a lower bound is given by the stellar mass are evident in the reduced range of allowable models. \textbf{Left:} The ensemble average arrival time surface with just the iso-contours for the saddle points drawn. The central diamonds show the reconstructed source positions. \textbf{Middle:} The surface density of the dark matter (DM; magenta); the stars (yellow); and the total (black). The original $N$-body mass model (with stars) used to create the lens is shown in green. The vertical lines mark the radial positions of the images. The higher resolution feature of {\sc Glass}{} has been used on the central pixel allowing the steep profile to be captured. \textbf{Right:} The cumulative mass. The error bars on all plots are $1\sigma$; the grey bands show the full range of models.} \label{reconstruction} \end{figure*} \begin{figure*} \plotthree{BCQuadR1a_Tms-a.pdf} {BCQuadR1a_Tms-b.pdf} {BCQuadR1a_Tms-c.pdf} \plotthree{AAQuadR1a_Tms-a.pdf} {AAQuadR1a_Tms-b.pdf} {AAQuadR1a_Tms-c.pdf} \caption{ Two further reconstructions similar to \figref{reconstruction}. \textbf{Top:} The mock galaxy {\sc star1.5-dmCusp}{} but including time delays for a single quad and no stellar mass. With the added information from the quad, the outer regions of the lens near the image radii are better constrained.
\textbf{Bottom:} A quad with time delays, but using the {\sc star1.0-dmCore}{} mock galaxy. This galaxy has a shallower stellar density index, and a core in the dark matter. Due to the priors used in {\sc Glass}, the modelling favours steep solutions without additional information. } \label{reconstruction 2} \end{figure*} \begin{figure*} \includegraphics[width=0.33\textwidth]{BCQuadR1a_TmS-kappa-b.pdf} \includegraphics[width=0.33\textwidth]{BCQuadR1a_TmS-kappa-a.pdf} \caption{ \textbf{Left:} The mock data distribution for {\sc star1.5-dmCusp}{} projected onto a coarse grid. \textbf{Right:} The recovered ensemble average $\kappa$ distribution for the single quad with time delays. The contours are logarithmic base 10 values, where level 0 corresponds to the critical lensing density. Contours below the critical lensing density are drawn with dashed lines.} \label{2d mass reconstruction} \end{figure*} \figref{main results} and \figref{main results pixel-wise} show the results for our full mock data ensemble. Each subplot corresponds to a different mock galaxy, as marked. We show the fractional error of the mass distribution for each of the test configurations with (red) and without (black) stellar mass. In \figref{main results} we define the error: \begin{equation} \label{ferror R} f_R = \frac {\sum_i \left|M(i) - \widehat M(i)\right| } {\sum_i \widehat M(i)} \end{equation} based on the mass $M(i)$ of each pixel ring $i$ and the mass $\widehat M$ from the mock galaxy. In \figref{main results pixel-wise} the error is defined over all the pixels $\vec\theta$: \begin{equation} \label{ferror theta} f_\theta = \frac {\sum_{\vec\theta} \left|M(\vec\theta) - \widehat M(\vec\theta)\right| } {\sum_{\vec\theta} \widehat M(\vec\theta)} \end{equation} Since both error measurements consider the mass of each pixel, we are implicitly weighting the recovered density by the varying size of the pixels.
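The two error measures can be sketched as follows (a toy illustration with hypothetical arrays and ring labels, not {\sc Glass}{} output); note how sign errors within a ring cancel in the radial measure but not in the pixel-wise one:

```python
import numpy as np

def f_theta(m_model, m_true):
    # pixel-wise fractional error: |M - Mhat| summed over all pixels
    return np.abs(m_model - m_true).sum() / m_true.sum()

def f_radial(m_model, m_true, ring_index):
    # radially binned fractional error: pixel masses are first summed
    # into rings, so errors of opposite sign within a ring can cancel
    n = ring_index.max() + 1
    rm = np.bincount(ring_index.ravel(), weights=m_model.ravel(), minlength=n)
    rt = np.bincount(ring_index.ravel(), weights=m_true.ravel(), minlength=n)
    return np.abs(rm - rt).sum() / rt.sum()

# toy 2x2 map: ring 0 holds three pixels, ring 1 the fourth (hypothetical labels)
m_true = np.ones((2, 2))
m_model = np.array([[2.0, 0.0], [1.0, 1.0]])
rings = np.array([[0, 0], [0, 1]])
print(f_theta(m_model, m_true), f_radial(m_model, m_true, rings))
```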
The value $f_R$ emphasises the error one would see from radial profiles, while $f_\theta$ is useful as a measure of how well each individual pixel is recovered. For both $f_R$ and $f_\theta$, we only consider mass up to one pixel length past the outermost image, since there is no longer any lensing information beyond that point. This means we typically use 8 bins, linearly spaced, ignoring the outermost 3 bins. The spacing changes, however, at the border of the high-resolution region in the middle. \begin{figure*} \includegraphics[width=0.49\textwidth]{AAferror_profile-1sig.pdf} \includegraphics[width=0.49\textwidth]{BBferror_profile-1sig.pdf}\\ \includegraphics[width=0.49\textwidth]{ACferror_profile-1sig.pdf} \includegraphics[width=0.49\textwidth]{BCferror_profile-1sig.pdf} \caption{Our main results showing the quality of the radially averaged model recovery \eqnrefp{ferror R} for all our test cases. Within each panel are six groups of results for each of six lens morphologies. Each morphology considered the presence of time delays (TD) and a central image (central). The black markers are for tests that did not include the stellar mass as a lower bound constraint, while the red markers indicate where the stellar mass has been included. Error bars show the $1\sigma$ range of the model ensemble.} \label{main results} \end{figure*} \begin{figure*} \includegraphics[width=0.49\textwidth]{AAferror-1sig.pdf} \includegraphics[width=0.49\textwidth]{BBferror-1sig.pdf}\\ \includegraphics[width=0.49\textwidth]{ACferror-1sig.pdf} \includegraphics[width=0.49\textwidth]{BCferror-1sig.pdf} \caption{Similar to \figref{main results} but for the fractional error in the pixel-wise recovery \eqnrefp{ferror theta} of all models. The colours and labels are the same as previously. Error bars show the $1\sigma$ range of the model ensemble.} \label{main results pixel-wise} \end{figure*} The abundance of strong lensing data increases from left to right within each plot.
As a result, there is a general trend for the reconstruction quality to increase (and therefore for $f$ to decrease). When both time delays and a central image are present (TD+central), the quality is highest. A double is known to provide very little constraint on the mass distribution. This is particularly evident in galaxies {\sc star1.0-dmCusp}{} and {\sc star1.5-dmCusp}{} where the mass profile is steepest and the reconstruction of the double is poorest. However, the addition of an arc from the extended source is sufficient to correct this. Notice that, as in \figref{reconstruction}, the recovery for {\sc star1.5-dmCusp}\ quickly saturates; there is little improvement as the data improves beyond a single quad. This occurs because the {\sc Glass}{} sample prior in the absence of data favours steep models like {\sc star1.5-dmCusp}\ over shallower models like {\sc star1.0-dmCore}\ (see also \figref{reconstruction}). \subsection{Shape recovery}\label{sec:shape} \figref{main results pixel-wise} already gives us important information about how well we can recover the {\it shape} of a lens. The trends are very similar to the radial profile recovery in \figref{main results}, suggesting that if the radial profile is well-recovered then, typically, the shape is too. A notable exception is for the {\sc star1.5-dmCusp}\ models where adding stellar mass constraints aids the shape recovery, but little-improves the radial mass profile. A visual example of the shape recovery is given in \figref{2d mass reconstruction}. We can also more directly probe the recovery of the shape of the mass distribution by considering the ratio of the major and minor axes $\lambda_1, \lambda_2$ of the inertia ellipse. If they are equal, the projected mass distribution has no preferred axis. The more dissimilar they are, the more elliptical the mass distribution.
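In code, this axis-ratio statistic amounts to diagonalising a $2\times2$ inertia tensor (defined formally below); a toy sketch with four hypothetical point masses:

```python
import numpy as np

def shape_ratio(mass, x, y):
    # s = lambda_1/lambda_2 with lambda_1 the larger eigenvalue of the
    # 2D inertia tensor (note the minus sign on the off-diagonal terms)
    ixx = np.sum(mass * y**2)
    iyy = np.sum(mass * x**2)
    ixy = -np.sum(mass * x * y)
    lam = np.linalg.eigvalsh(np.array([[ixx, ixy], [ixy, iyy]]))
    return lam[1] / lam[0]   # eigvalsh returns eigenvalues in ascending order

# four equal masses at (+-2, 0) and (0, +-1): an elongation along x
m = np.ones(4)
x = np.array([2.0, -2.0, 0.0, 0.0])
y = np.array([0.0, 0.0, 1.0, -1.0])
print(shape_ratio(m, x, y))
```

Because the tensor involves second moments, $s$ is the ratio of squared axis scales rather than the axis ratio itself; for the toy configuration above the eigenvalues are 8 and 2, giving $s=4$.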
We define the global measure of lens shape as: \begin{equation} s \equiv \lambda_1/\lambda_2 \end{equation} where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the 2D inertia tensor: \begin{equation} \left( \begin{matrix} \sum_{\vec\theta} M(\vec\theta) \theta^2_y & - \sum_{\vec\theta} M(\vec\theta) \theta_x \theta_y \\ - \sum_{\vec\theta} M(\vec\theta) \theta_x \theta_y & \sum_{\vec\theta} M(\vec\theta) \theta^2_x \end{matrix} \right) \label{eqn:inertia} \end{equation} We always take $\lambda_1$ to be the largest value. As with $f_R$ and $f_\theta$, we only consider mass up to one radial position past the outermost image and compute the fractional error as: \begin{equation} \label{ferror shape} f_\mathrm{shape} = \left|s - \widehat s\right| / \widehat s \end{equation} where $\widehat s$ is the shape of the mock galaxy. The distribution of $f_\mathrm{shape}$ for each mock galaxy and each test case is shown in \figref{shape results}. Interestingly, for this global shape parameter recovery it appears more important to have time delay data and/or a central image (TD,TD+central) than to have a quad or multiple sources with wide redshift separation. In all cases, the stellar mass little-aids the recovery, reflecting the fact that $s$ is heavily weighted towards the shape at the {\it edge} of the mass map, rather than at the centre where the stars may dominate the potential (see \eqnref{eqn:inertia}). \begin{figure*} \includegraphics[width=0.49\textwidth]{AAferror_shape-1sig.pdf} \includegraphics[width=0.49\textwidth]{BBferror_shape-1sig.pdf}\\ \includegraphics[width=0.49\textwidth]{ACferror_shape-1sig.pdf} \includegraphics[width=0.49\textwidth]{BCferror_shape-1sig.pdf} \caption{Here we demonstrate our ability to recover the shape of the lensing mass. The shape ratio $\lambda_1/\lambda_2$ is measured from the principal components $\lambda_1, \lambda_2$ of the mass up to the outermost image.
We plot the distribution of fractional error compared with the shape of the mock galaxies \eqnref{ferror shape}.} \label{shape results} \end{figure*} \begin{figure*} \includegraphics[width=0.33\textwidth]{BCQuadR1a_Tms_sigp-1.pdf} \includegraphics[width=0.33\textwidth]{BCQuadR1a_TmS_sigp-2.pdf} \includegraphics[width=0.33\textwidth]{BC_beta.pdf} \caption{ Estimated projected radially averaged velocity dispersion $\sigma_p$ \eqnrefp{eqn:sphericaljeans} for a single quad from the {\sc star1.5-dmCusp}{} mock galaxy without stellar mass (\textbf{left}) and with stellar mass (\textbf{middle}) assuming an anisotropy $\beta=0$ (black triangles) and $\beta=1$ (black squares). Error bars are $1\sigma$ and two overlapping hatched areas indicate the full range of models. The equivalent curves are also shown for the projected mock data after using the same analysis routines (green). The solid blue line is the actual cylindrically averaged velocity dispersion of the original mock particle data. The stellar half mass radius (orange) and the Einstein radius (black) are marked by vertical lines. For this configuration, these two radii are well-separated. The actual variation in $\beta(r)$ is also shown (\textbf{right}).} \label{fig:sigp} \end{figure*} \subsection{Stellar mass} \label{stellar mass} The stellar mass distribution gives a lower bound on the total mass. Where the stars dominate the central potential, it can provide a powerful constraint in addition to the strong lensing data. We took the stellar mass directly from the generated galaxies and projected the particles onto the pixels. {\sc Glass}{} also offers an option to interpolate any map of stellar mass (e.g., from an observation) onto the pixels. The linear constraint is added to {\sc Glass}{} by writing $\kappa_n = \kappa_{dm,n} + \kappa_{s,n}$ as the sum of the dark matter and stellar mass components in the potential \eqnrefp{discrete potential}.
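The lower bound just described is simple to state in code. A hypothetical toy check (not {\sc Glass}{} itself; the convergence values are invented, and the scaling factor mirrors an optional stellar-mass error parameter):

```python
import numpy as np

def satisfies_stellar_bound(kappa_total, kappa_stars, eps=1.0):
    # The implied dark matter component kappa_dm = kappa - eps*kappa_s must be
    # non-negative in every pixel; eps ~ 1 allows for an uncertainty on the
    # stellar-mass map while keeping the constraint linear.
    return bool(np.all(kappa_total - eps * kappa_stars >= 0.0))

kappa_s = np.array([0.8, 0.5, 0.2])   # hypothetical stellar convergence per pixel
good = np.array([1.0, 0.6, 0.3])      # kappa_dm >= 0 everywhere
bad = np.array([0.7, 0.6, 0.3])       # violates the bound in the first pixel
print(satisfies_stellar_bound(good, kappa_s), satisfies_stellar_bound(bad, kappa_s))
```

Because the constraint is linear in the pixel values, a model that fails it can be rescued only by lowering the assumed stellar contribution (smaller `eps`), not by redistributing mass within a pixel.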
Since each $\kappa_{s,n}$ is just a constant we do not add new, separate equations for each pixel. Although we assume a perfect recovery of the stellar mass with no error on the lower mass bound, it is straightforward to add errors as the stellar mass constraint remains linear: $\kappa_n = \kappa_{dm,n} + \epsilon \kappa_{s,n}$, where $\epsilon \sim 1$ is an additional error parameter. With the stellar mass lower bound, there is a significant improvement of the reconstruction quality shown in \figref{main results} and \figref{main results pixel-wise} for the doubles in the steepest mock galaxies ({\sc star1.0-dmCusp}{} and {\sc star1.5-dmCusp}). This is because these models are dominated by stars in the inner region. By contrast, the other two galaxies -- where the stars contribute negligibly to the potential -- are largely unaffected. \subsection{Stellar kinematics}\label{sec:results_stellar_kinematics} As outlined in \secref{sec:glass}, {\sc Glass}{} can also run post-processing routines on the model ensemble which can be used to apply non-linear constraints. As an example, we consider here constraints from stellar kinematics. The models in the {\sc Glass}{} ensemble are processed as described in \S\ref{sec:glasskinematics}. To illustrate the power of stellar kinematic constraints, in \figref{fig:sigp}, we plot the projected velocity dispersion calculated for one model (extracted from the full ensemble) of the {\sc star1.5-dmCusp}{} Quad with time delays and no stellar mass (left), and the same but with stellar mass (middle). In both cases, we calculate curves for two extreme velocity anisotropies: $\beta=0$ (green) and $\beta=1$ (red). Over-plotted is the correct answer for the {\sc star1.5-dmCusp}{} model (black). The stellar half mass radius (yellow) and the Einstein radius (black) are marked by vertical lines. For this configuration, these two radii are well-separated.
Without even sweeping through the model ensemble and formally accepting/rejecting models, \figref{fig:sigp} already illustrates what we can obtain from stellar kinematics. The left plot shows the radially averaged projected velocity dispersion $\sigma_p(R)$ \eqnrefp{eqn:sphericaljeans} for a single quad from the {\sc star1.5-dmCusp}\ galaxy without the stellar mass constraint. The blue data points show the 1$\sigma$ distribution from the ensemble assuming $\beta=0$ (solid) and $\beta = 1$ (dashed); the grey bands show the full distributions. Also marked are the $\sigma_p(R)$ calculated from the mock data assuming $\beta=0$ (solid purple) and $\beta =1$ (dashed purple); and the true $\sigma_p(R)$ measured directly from the stars (black). This latter has a non-constant $\beta(r)$ (right panel) and differs also from the purple and blue curves in that these all assume spherical symmetry, whereas the stellar distribution is really triaxial. Such triaxiality and varying $\beta(r)$ explain why the purple curves do not match the black one. However, they do largely bracket the correct solution. More interestingly, the curves approximately cross for $\beta = 0$ at the stellar half light radius (yellow vertical line). This demonstrates, as has previously been reported in the literature, that $\sigma_p(R)$ gives a good estimate of the mass enclosed within $\sim$ the half light radius $M_{1/2}$ \citep[e.g.,][]{2009ApJ...704.1274W,2010MNRAS.406.1220W}. The mass {\it profile}, however, depends on $\beta$ which is poorly constrained by these data. If we add stellar mass constraints (middle panel), the situation is little-improved. The true answer already lay close to the bottom of the ensemble distribution; it is now forced to lie right at the edge. From \figref{fig:sigp}, it is clear that $\sigma_p(R)$ provides two useful pieces of information. Firstly, it is a powerful probe of $M_{1/2}$.
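As a back-of-the-envelope illustration of this probe (not a {\sc Glass}{} computation), the half-light mass estimator $M_{1/2}\approx 3\,G^{-1}\langle\sigma_{los}^2\rangle\,r_{1/2}$ of \citet{2010MNRAS.406.1220W} can be evaluated with a $\sigma_p\sim150$\,km/s scale and an illustrative 3D half-mass radius $r_{1/2}=(1+\sqrt{2})\,a_\star\approx4.8$\,kpc for an $a_\star=2$\,kpc Hernquist stellar component; the numbers here are chosen for illustration only:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m

def wolf_mass(sigma_los_kms, r_half_kpc):
    """M_1/2 ~ 3 <sigma_los^2> r_1/2 / G (Wolf et al. 2010), in M_sun."""
    sigma = sigma_los_kms * 1e3          # km/s -> m/s
    return 3.0 * sigma**2 * (r_half_kpc * KPC) / (G * M_SUN)

# sigma_p ~ 150 km/s; r_1/2 = a_star (1 + sqrt 2) ~ 4.8 kpc (illustrative)
print(f"M_1/2 ~ {wolf_mass(150.0, 4.83):.2e} M_sun")
```

The result is of order $10^{11}\,\mathrm{M}_\odot$, comparable to the mock stellar mass, which is the expected ballpark for the total dynamical mass within $r_{1/2}$.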
Given a measurement of $\sigma_p(r_{1/2}) \sim 150$\,km/s, we could usefully reject many models in the ensemble as being overly steep in the centre. We would not, however, obtain a strong constraint on $\beta(r_{1/2})$. We could rule out $\beta(r_{1/2}) = 1$ (blue dashed line), but since our $\beta=0$ model crosses the true $\beta \sim 0.5$ line at $r_{1/2}$ it is clear that many $\beta(r)$ profiles will be consistent with the data. On the other hand, if we have a situation where $r_{1/2} \sim r_E$ (i.e. the vertical yellow and black lines in \figref{fig:sigp} overlap), then we will obtain tight constraints on $\beta$ since we then have two strong constraints on $M(r_{1/2})$ that become redundant. This latter situation of redundancy is also exactly what we need to constrain cosmological parameters. For this, we require a third piece of redundant information -- in this case, in the form of strong lensing time delays. We will discuss such cosmological constraints in a forthcoming paper. The results for stellar kinematics match our expectations from \secref{sec:kinematics}. Where the lens data already constrain the mass distribution at $r \sim r_{1/2}$, stellar kinematics provide valuable information about the velocity anisotropy of the stars, $\beta$ (see \figref{fig:sigp}). Where the lens data poorly constrain the mass distribution at $r_{1/2}$, we may `integrate out' the effect of unknown $\beta$ to obtain a robust measure of $\ensuremath{M_\mathrm{3D}}(r_{1/2})$ from the stellar kinematics. This latter is robust to both uncertainties in $\beta(r)$ and to our assumption of spherical symmetry in the kinematic models \citep{2012ApJ...754L..39A}. \section{Conclusions}\label{sec:conclusions} We have introduced a new gravitational lens modelling tool -- {\sc Glass}{} -- and used it to test the recovery of the mass profile and shape of mock strong lensing galaxies.
Our key findings are as follows: \begin{enumerate} \item For pure lens data, multiple sources with wide redshift separation give the strongest constraints as this breaks the well-known mass-sheet or steepness degeneracy; \item A single quad with time delays also performs well, giving a good recovery of both the mass profile and its shape; \item Stellar masses -- for lenses where the stars dominate the central potential -- can also break the steepness degeneracy, giving a recovery for doubles almost as good as having a quad with time delay data, or multiple source redshifts; \item If the radial density profile is well-recovered, so too is the shape of a lens; \item Stellar kinematics provide a robust measure of the mass at the half light radius of the stars $M(r_{1/2})$ that can also break the steepness degeneracy if $r_{1/2} \neq r_E$ -- the Einstein radius; and \item If $r_E \sim r_{1/2}$, then stellar kinematic data can be used to probe the stellar velocity anisotropy $\beta$ -- an interesting quantity in its own right. \end{enumerate} Where information on the mass distribution from lensing and/or other probes becomes redundant, this opens up the possibility of using strong lensing to constrain cosmological models. We will study this, and present the first results from {\sc Glass}{} applied to real data, in forthcoming papers. \section{Acknowledgments}\label{sec:Acknowledgements} The authors would like to thank Sarah Bryan and Walter Dehnen for creating the particle distributions for the mock galaxies, and the anonymous referee for many useful suggestions which have improved the manuscript. JIR would like to acknowledge support from SNF grant PP00P2\_128540/1. \bibliographystyle{mn2e}
\section{Introduction} It is known that quantum field theories (QFTs) cannot be defined by the straightforward perturbative expansion because of the ultraviolet (UV) divergences. In order to make QFTs meaningful, it is necessary to remove infinities from perturbative calculations by renormalizing the fields, masses, and coupling constants. A successful renormalization of QFT was first realized in the 1940s by Tomonaga\cite{tomonaga}, Schwinger\cite{schwinger}, Feynman\cite{fenyman} and Dyson\cite{dyson} for the case of QED, while it took until the early 1970s for Wilson\cite{wilson} to give it its full physical meaning in QFTs. The first step before renormalization is to modify the behavior of the field theory at very large momentum so that all Feynman diagrams become well-defined finite quantities. This procedure is usually called regularization. The most important properties needed for a good regularization method are that it must preserve all symmetries of the original field theory and meanwhile maintain the divergent behavior of the original Feynman integrals. In fact, many regularization and renormalization methods have been proposed in the last several decades, such as: cut-off regularization\cite{cutoff}, Pauli-Villars regularization\cite{PV}, Schwinger's proper time regularization\cite{propertime}, dimensional regularization\cite{DR}, lattice regularization\cite{lattice}, constrained differential renormalization\cite{CDR} and so on. As discussed in\cite{LR1,LR2}, each of them has its advantage in applying to different situations. Up to now, there exists no single regularization which is suitable for all purposes in QFTs. In refs.\cite{LR1,LR2}, a new symmetry-preserving loop regularization (LR) was introduced to meet the request mentioned above. The key concept in such a new regularization method is the introduction of irreducible loop integrals (ILIs) \cite{LR1,LR2}, which are evaluated from Feynman integrals.
The gauge symmetry requires a set of necessary and sufficient conditions, called consistency conditions\cite{LR1}, which hold between the regularized tensor-type ILIs and scalar-type ILIs. The loop regularization method was shown to satisfy those consistency conditions\cite{LR1,LR2} with the existence of two energy scales. We shall give a brief introduction to the loop regularization below. For more details on the loop regularization, including motivations and concrete computation methods as well as general properties, we refer readers to the original papers\cite{LR1,LR2}. Some interesting applications of this new method have been investigated in \cite{DW,MW1,MW2}. This paper is devoted to explicitly demonstrating how the loop regularization preserves non-Abelian gauge symmetry, by evaluating all the renormalization constants at one-loop level and verifying the Ward-Takahashi-Slavnov-Taylor identities among the renormalization constants. The paper is organized as follows: in section II, we cast the gauge symmetry into the well-known Ward-Takahashi-Slavnov-Taylor identities, and give the conditions that the renormalization constants must satisfy. In section III, we briefly outline the LR method. In section IV, we explicitly evaluate all the one-loop divergent Feynman diagrams to yield all the renormalization constants of non-Abelian gauge theory by using the loop regularization method, and derive the well-known $\beta$ function\cite{GWP} after checking manifestly the Ward-Takahashi-Slavnov-Taylor identities among the obtained renormalization constants. The results are found to be consistent with those obtained via dimensional regularization, as the quadratically divergent parts cancel each other due to gauge symmetry. The conclusions and remarks are presented in the last section.
\section{Renormalization of Gauge Theory and Ward-Takahashi-Slavnov-Taylor identities} The lagrangian of gauge theory with Dirac spinor fields $\psi_n\ (n=1,...,N_f)$ interacting with gauge field $A^a_\mu\ (a=1,...,d_G)$ is: \begin{eqnarray} \mathcal{L}=\bar{\psi}_n(i\gamma^{\mu}D_{\mu}-m)\psi_n-\frac{1}{4}F^a_{\mu\nu}F^{a\mu\nu} \end{eqnarray} where: \begin{eqnarray} & &F_{\mu\nu}^a=\partial_{\mu}A_{\nu}^a-\partial_{\nu}A_{\mu}^a+gf^{abc}A_{\mu}^bA_{\nu}^c\\ & &D_{\mu}\psi_n=(\partial_{\mu}-igT^aA_{\mu}^a)\psi_n \end{eqnarray} According to the Faddeev-Popov\cite{faddeevpopov} quantization method, ghost fields need to be introduced when fixing a gauge. In the covariant gauge, the lagrangian has the following form: \begin{eqnarray} \mathcal{L}_{eff}&=&\bar{\psi}_n(i\gamma^{\mu}D_{\mu}-m)\psi_n-\frac{1}{4}F^a_{\mu\nu}F^{a\mu\nu}-\frac{1}{2\xi}(\partial^{\mu}A_{\mu}^a)^2+\partial^{\mu}\bar{c}^a(\partial_{\mu}\delta^{ac}+gf^{abc}A_{\mu}^b)c^c\nonumber\\ &=&[\bar{\psi}_n(i\gamma^{\mu}\partial_{\mu}-m)\psi_n]+[-\frac{1}{4}(\partial_{\mu}A^a_{\nu}-\partial_{\nu}A^a_{\mu})^2-\frac{1}{2\xi}(\partial^{\mu}A_{\mu}^a)^2]+[\partial^{\mu}\bar{c}^a\delta^{ac}\partial_{\mu}c^c]\nonumber\\ & &+g\bar{\psi}_n\gamma_{\mu}A^{a\mu}T^a\psi_{n}-\frac{1}{2}gf^{abc}(\partial_{\mu}A^a_{\nu}-\partial_{\nu}A^a_{\mu})A^{b\mu}A^{c\nu}+\frac{1}{4}g^2f^{abc}f^{ade}A^b_{\mu}A^c_{\nu}A^{d\mu}A^{e\nu}\nonumber\\ & &+gf^{abc}\partial^{\mu}\bar{c}^aA_{\mu}^bc^c \end{eqnarray} The corresponding Feynman rules for this lagrangian are presented in App.B.
All one-loop Feynman diagrams are shown below (for simplicity, the permutation graphs are omitted): \includegraphics[scale=0.7]{01.eps} \includegraphics[scale=0.7]{23.eps} \includegraphics[scale=0.7]{45.eps} \includegraphics[scale=0.7]{06.eps} \includegraphics[scale=0.7]{07.eps} \begin{center}{\sl Fig.1.}\end{center} Though all loop diagrams contain divergent integrals, it was proved that gauge theories are renormalizable\cite{renormalizability1,renormalizability2,renormalizability3,renormalizability4,renormalizability5}. To remove the divergences, it is necessary to renormalize the theory by rescaling the fields and redefining the masses and the coupling constant. This procedure is equivalent to the introduction of some counterterms to the Lagrangian \begin{eqnarray} \delta\mathcal{L}&=&[(z_2-1)\bar{\psi}_ni\gamma^{\mu}\partial_{\mu}\psi_n-(z_2z_m-1)m\bar{\psi}_n\psi_n] +(z_3-1)[-\frac{1}{4}(\partial_{\mu}A^a_{\nu}-\partial_{\nu}A^a_{\mu})^2]\nonumber\\ & &+(\tilde{z}_3-1)[\partial^{\mu}\bar{c}^a\delta^{ac}\partial_{\mu}c^c] +(z_{1F}-1)g\bar{\psi}_n\gamma_{\mu}A^{a\mu}T^a\psi_{n}\nonumber\\ & &-(z_1-1)\frac{1}{2}gf^{abc}(\partial_{\mu}A^a_{\nu}-\partial_{\nu}A^a_{\mu})A^{b\mu}A^{c\nu} +(z_4-1)\frac{1}{4}g^2f^{abc}f^{ade}A^b_{\mu}A^c_{\nu}A^{d\mu}A^{e\nu}\nonumber\\ & &+(\tilde{z_1}-1)gf^{abc}\partial^{\mu}\bar{c}^aA_{\mu}^bc^c \end{eqnarray} where $z_1, \cdots, z_4$ are the so-called renormalization constants. They are not independent and must satisfy the relations called Slavnov-Taylor identities\cite{st}, which are the generalization of the usual Ward-Takahashi identities. Those identities are actually a consequence of the gauge symmetry. To obtain the relations, one can make the BRST transformation\cite{BRST}, which leads to some identities for the generating functional. Then, performing a Legendre transformation, we obtain the identities for the 1PI generating functional.
Taking functional derivatives of the 1PI generating functional, one arrives at relations between the 1PI Green functions. These relations are strict restrictions imposed by the gauge symmetry. As a consequence, the renormalization constants should satisfy the following identities\cite{relationofz}: \begin{eqnarray} \frac{z_{1F}}{z_3^{1/2}z_2}=\frac{\tilde{z_1}}{z_3^{1/2}\tilde{z}_3}=\frac{z_1}{z_3^{3/2}}=\frac{z_4^{1/2}}{z_3} \end{eqnarray} There is a more intuitive way to derive the relations among the renormalization constants. In fact, the gauge independence and the unitarity of the renormalized S matrix require that the gauge symmetry be maintained after renormalization\cite{smatrix}, which means that the renormalization constants of $g$ obtained from each vertex renormalization must be the same. From such a requirement, one arrives at the above identities. The two-, three- and four-point renormalization constants were evaluated in refs.\cite{CG,PS} by using dimensional regularization. For completeness, we shall perform in this note a detailed calculation of all two-, three- and four-point renormalization constants by using the loop regularization method. As our calculations of the renormalization constants are carried out only at the one-loop level, which does not involve renormalization scheme dependence, we shall not discuss the relevant issues in this note. A detailed discussion of the renormalization scheme prescription in loop regularization will be given elsewhere. \section{Brief introduction to the loop regularization method} In this section we briefly introduce the loop regularization method. For our current consideration, we demonstrate only the one-loop case. The key concept of loop regularization is the introduction of irreducible loop integrals (ILIs).
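At one loop, writing $z_i=1+\delta_i$, the above identities linearize to $\delta_{1F}-\delta_2-\frac{1}{2}\delta_3=\tilde{\delta}_1-\tilde{\delta}_3-\frac{1}{2}\delta_3=\delta_1-\frac{3}{2}\delta_3=\frac{1}{2}\delta_4-\delta_3$. As a cross-check, this linearized form can be verified with exact rational arithmetic, using the one-loop coefficients summarized at the end of this note; the following Python sketch (helper names are ours, not from the text) encodes each $\delta_i$ by its $(C_1, C_2, N_fT_2)$ components in units of $\frac{g^2}{16\pi^2}\cdot\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)$:

```python
from fractions import Fraction as F

def deltas(xi):
    """One-loop coefficients of z_i = 1 + delta_i, given as (C1, C2, NfT2)
    components in units of (g^2/16pi^2) * (1/2)(ln Mc^2/mu_s^2 - gamma_w)."""
    d2  = (F(0), -2 * xi, F(0))                       # z_2
    d3  = (F(13, 3) - xi, F(0), F(-8, 3))             # z_3
    d3t = ((3 - xi) / F(2), F(0), F(0))               # z~_3
    d1F = (-F(3, 2) - xi / F(2), -2 * xi, F(0))       # z_1F
    d1t = (-xi, F(0), F(0))                           # z~_1
    d1  = (F(17, 6) - 3 * xi / F(2), F(0), F(-8, 3))  # z_1
    d4  = (F(4, 3) - 2 * xi, F(0), F(-8, 3))          # z_4
    return d2, d3, d3t, d1F, d1t, d1, d4

def lin(*terms):
    """Linear combination of coefficient triples: lin((c, v), ...)."""
    return tuple(sum(c * v[i] for c, v in terms) for i in range(3))

for xi in (F(0), F(1), F(5, 2)):                      # several gauge parameters
    d2, d3, d3t, d1F, d1t, d1, d4 = deltas(xi)
    combos = [lin((1, d1F), (-1, d2), (F(-1, 2), d3)),
              lin((1, d1t), (-1, d3t), (F(-1, 2), d3)),
              lin((1, d1), (F(-3, 2), d3)),
              lin((F(1, 2), d4), (-1, d3))]
    assert combos.count(combos[0]) == 4               # all four ratios agree
    assert combos[0] == (F(-11, 3), F(0), F(4, 3))    # gauge independent
```

The common value, $-\frac{11}{3}C_1+\frac{4}{3}N_fT_2$, is gauge independent and reproduces the familiar one-loop $\beta$-function coefficient $\beta_0=\frac{11}{3}C_1-\frac{4}{3}N_fT_2$.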
It has been shown in \cite{LR1,LR2} that by adopting the Feynman parameterization method with appropriately shifted integration variables, all one-loop Feynman integrals can be expressed in terms of the following 1-fold ILIs: \begin{eqnarray} I_{-2\alpha}&=&\int\frac{d^4k}{(2\pi)^4}\frac{1}{(k^2-M^2)^{2+\alpha}},\nonumber\\ I_{-2\alpha\ \mu\nu}&=&\int\frac{d^4k}{(2\pi)^4}\frac{k_{\mu}k_\nu}{(k^2-M^2)^{3+\alpha}},\hspace{8mm}\alpha=-1,0,1,2,...\\ I_{-2\alpha\ \mu\nu\rho\sigma}&=&\int\frac{d^4k}{(2\pi)^4}\frac{k_{\mu}k_{\nu}k_{\rho}k_{\sigma}}{(k^2-M^2)^{4+\alpha}}\nonumber \end{eqnarray} Here $M^2$ is in general a function of the external momenta $p_i$, the masses of particles $m_i$ and the Feynman parameters. The integrals $I_2$ and $I_0$ correspond to the quadratically and logarithmically divergent integrals. To maintain gauge invariance, the regularized 1-fold ILIs should satisfy a set of consistency conditions\cite{LR1,LR2}: \begin{eqnarray} & & I_{2\mu\nu}^R = \frac{1}{2} g_{\mu\nu}\ I_2^R, \quad I_{2\mu\nu\rho\sigma }^R = \frac{1}{8} (g_{\mu\nu}g_{\rho\sigma} + g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\rho\nu})\ I_2^R , \nonumber \\ & & I_{0\mu\nu}^R = \frac{1}{4} g_{\mu\nu} \ I_0^R, \quad I_{0\mu\nu\rho\sigma }^R = \frac{1}{24} (g_{\mu\nu}g_{\rho\sigma} + g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\rho\nu})\ I_0^R . \end{eqnarray} where the superscript ``R'' denotes the regularized ILIs. A simple prescription of loop regularization \cite{LR1,LR2} was constructed to ensure the above consistency conditions.
The procedure is as follows: rotate to the four-dimensional Euclidean momentum space, then replace in the ILIs the loop integration variable $k^2$ and the loop integration measure $\int{d^4k}$ by the corresponding regularized ones $[k^2]_l$ and $\int[d^4k]_l$: \begin{eqnarray} & & \quad k^2 \rightarrow [k^2]_l \equiv k^2+M^2_l\ , \nonumber \\ & & \int{d^4k} \rightarrow \int[d^4k]_l \equiv \lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int{d^4k} \end{eqnarray} where $M_l^2$ ($ l= 0,1,\ \cdots $) may be regarded as the mass factors of loop regulators. If there is no IR divergence in the integrals, one can take the initial conditions $M_0^2 = 0$ and $c_0^N = 1$ to recover the original integrals in the limit $M_l^2 \to \infty$ ($l=1,2,\cdots$ ). For IR divergent integrals, one may set $M_0^2=\mu_s^2$ to regularize them. The regularized ILIs in the Euclidean space-time are then given by: \begin{eqnarray} I_{-2\alpha}^R&=& i (-1)^{\alpha} \lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int\frac{d^4k}{(2\pi)^4}\frac{1}{(k^2 + M^2 + M_l^2)^{2+\alpha}},\nonumber\\ I_{-2\alpha\ \mu\nu}^R&=& -i (-1)^{\alpha} \lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int\frac{d^4k}{(2\pi)^4}\frac{k_{\mu}k_\nu}{(k^2+M^2+M_l^2)^{3+\alpha}},\hspace{8mm}\alpha=-1,0,1,2,...\\ I_{-2\alpha\ \mu\nu\rho\sigma}^R&=& i (-1)^{\alpha} \lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int\frac{d^4k}{(2\pi)^4}\frac{k_{\mu}k_{\nu}k_{\rho}k_{\sigma}}{(k^2+M^2+M_l^2)^{4+\alpha}}\nonumber \end{eqnarray} The coefficients $c_l^N$ are chosen to satisfy the following conditions: \begin{eqnarray} \lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N(M_l^2)^n = 0 \quad (n= 0, 1, \cdots)\label{cl conditions} \end{eqnarray} One can easily verify that the following set is the simplest solution of the above conditions: \begin{equation} M_l^2=\mu_s^2+lM_R^2, \quad c_l^N=(-1)^l\frac{N!}{(N-l)!l!}\label{mus} \end{equation} Here $M_R$ may be regarded as a basic mass scale of the loop regulator and the notation $\lim_{N, M_l^2}$ stands for the limit $\lim_{N, M_R^2\rightarrow \infty}$.
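For the simplest solution (\ref{mus}), the conditions (\ref{cl conditions}) state that the $N$-th finite difference of $(\mu_s^2+lM_R^2)^n$, a polynomial in $l$ of degree $n<N$, vanishes. This can be checked with exact integer arithmetic; a minimal Python sketch (the function names are ours, not from the text):

```python
from math import comb

def c(l, N):
    """Coefficients c_l^N = (-1)^l * binomial(N, l) of the simplest solution."""
    return (-1) ** l * comb(N, l)

def condition_sum(n, N, mu_s2=3, MR2=7):
    """sum_l c_l^N (M_l^2)^n with M_l^2 = mu_s^2 + l*M_R^2 (integers, hence exact)."""
    return sum(c(l, N) * (mu_s2 + l * MR2) ** n for l in range(N + 1))

N = 12
assert all(condition_sum(n, N) == 0 for n in range(N))  # holds for n = 0, ..., N-1
```

For fixed $N$ only the first $N$ moments vanish exactly; the limit $N\to\infty$ in the prescription is what enforces the conditions for all $n$.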
It has been shown in \cite{LR2} that the above regularization prescription can be understood in terms of the Schwinger proper-time formulation with an appropriate regulating distribution function. Note that loop regularization differs from the Pauli-Villars regularization, in which the prescription is realized through introducing super heavy particles, so that the Pauli-Villars regularization cannot directly be applied to non-Abelian gauge theories. Unlike the Pauli-Villars regularization, loop regularization is applicable to non-Abelian gauge theories via the above prescription on the ILIs. With the simple solution for $M_l^2$ and $c_l^N$ in the above equation, the regularized ILIs $I_0^R$ and $I_2^R$ can be evaluated explicitly as \cite{LR1,LR2}: \begin{eqnarray} I_2^R&=&\frac{-i}{16\pi^2}\{M_c^2-\mu^2[\ln\frac{M_c^2}{\mu^2}-\gamma_w+1+y_2(\frac{\mu^2}{M_c^2})]\} \nonumber \\ I_0^R&=&\frac{i}{16\pi^2}[\ln\frac{M_c^2}{\mu^2}-\gamma_w+y_0(\frac{\mu^2}{M_c^2})] \end{eqnarray} with $\mu^2=\mu_s^2+M^2$, and \begin{eqnarray} & & \gamma_w \equiv \lim_{N}\{ \ \sum_{l=1}^{N} c_l^N \ln l + \ln [\ \sum_{l=1}^{N} c_l^N\ l \ln l \ ] \} = \gamma_E=0.5772\cdots, \nonumber \\ & & y_0(x)=\int_0^x d\sigma \frac{1-e^{-\sigma}}{\sigma}, \quad y_1(x)=\frac{e^{-x}-1+x}{x}\nonumber \\ & & y_2(x)=y_0(x)-y_1(x),\quad \lim_{x\rightarrow0}y_i(x)\rightarrow 0,\ i=0,1,2 \\ & & M_c^2\equiv \lim_{N,M_R} M_R^2 \sum_{l=1}^{N}c_l^N(l \ln l) =\lim_{N,M_R}M_R^2/\ln N \nonumber \end{eqnarray} By comparing the above results with the ones obtained by a naive cutoff regularization, it is easily seen that $\mu_s$ sets an IR `cutoff' at $M^2 =0$ and $M_c$ provides a UV `cutoff'. For renormalizable quantum field theories, $M_c$ can be taken to infinity $(M_c\rightarrow\infty)$. In a theory without infrared divergences, $\mu_s$ can safely run to $\mu_s=0$. Actually, in the case that $M_c\to\infty$ and $\mu_s=0$, one recovers the initial integral.
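The functions $y_i$ entering the regularized ILIs can be checked numerically. A small sketch (helper names are ours), using the Taylor series $y_0(x)=\sum_{n\ge1}(-1)^{n+1}x^n/(n\cdot n!)$ of the integral definition:

```python
import math

def y0(x, terms=60):
    """y_0(x) = int_0^x (1 - exp(-s))/s ds, evaluated via its Taylor series."""
    return sum((-1) ** (n + 1) * x ** n / (n * math.factorial(n))
               for n in range(1, terms))

def y1(x):
    return (math.exp(-x) - 1.0 + x) / x

def y2(x):
    return y0(x) - y1(x)

# y_i(x) -> 0 as x -> 0, as stated above (y_0 ~ x, y_1 ~ x/2, y_2 ~ x/2):
for x in (1e-3, 1e-6):
    assert abs(y0(x)) < 2 * x and abs(y1(x)) < x and abs(y2(x)) < 2 * x
```

In practice only the limits $y_i\to 0$ for $M_c^2\gg\mu^2$ matter for the divergent parts extracted below.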
Moreover, once $M_R$ and $N$ are taken to infinity, the regularized theory becomes independent of the regularization prescription. These are the main properties needed for a proper regularization. For a detailed description and an explicit treatment of higher-loop Feynman integrals, we refer to the original papers on loop regularization \cite{LR1,LR2}. Note that to evaluate the ILIs, the algebraic computation of products of $\gamma$ matrices involving the loop momentum $k\hspace{-0.17cm}\slash$, such as $k\hspace{-0.17cm}\slash\gamma_{\mu}k\hspace{-0.17cm}\slash$, should be carried out so that the result is expressed in terms of the independent components $\gamma_\mu$, $\sigma_{\mu\nu}$, $\gamma_5\gamma_{\mu}$, $\gamma_5$. It is known that in all regularization schemes there is an important issue: for a divergent integral it is in general not appropriate to shift the integration variable. Since in the loop regularization method we have actually shifted the integration variables before applying the regularization prescription, one may doubt whether such a treatment is well justified. The answer is yes. In fact, we can apply the loop regularization prescription before shifting the integration variables, and the results are the same as those obtained when the integration variables are shifted first.
For an illustration, let us examine a simple logarithmically divergent Feynman integral: \begin{eqnarray} L={\int\frac{d^4k}{(2\pi)^4}}\frac{1}{k^2-m_1^2}\frac{1}{(k-p)^2-m_2^2} \end{eqnarray} Following the standard procedure of the loop regularization method, the first step is to apply the general Feynman parameter formula \begin{eqnarray} \frac{1}{a_1^{\alpha_1}a_2^{\alpha_2}{\cdots}a_n^{\alpha_n}}& = & \frac{\Gamma(\alpha_1+\cdots+\alpha_n)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_n)} \int_0^1dx_1\int_0^{x_1}dx_2\cdots\int_0^{x_{n-2}}dx_{n-1} \nonumber \\ & & \frac{(1-x_1)^{\alpha_1-1}(x_1-x_2)^{\alpha_2-1}{\cdots}x_{n-1}^{\alpha_n-1}} {[a_1(1-x_1)+a_2(x_1-x_2)+\cdots+a_nx_{n-1}]^{\alpha_1+\cdots+\alpha_n}} \end{eqnarray} to the Feynman integral. For the above Feynman integral, we then obtain \begin{eqnarray} L&=&{\int\frac{d^4k}{(2\pi)^4}}\int_0^1dx\frac{1}{\{(1-x)(k^2-m_1^2)+x[(k-p)^2-m_2^2]\}^2}\nonumber\\ &=&{\int\frac{d^4k}{(2\pi)^4}}\int_0^1dx\frac{1}{\{(k-xp)^2-[(1-x)m_1^2+xm_2^2-x(1-x)p^2]\}^2}\nonumber\\ &=&\int_0^1dx{\int\frac{d^4k}{(2\pi)^4}}\frac{1}{( (k-xp)^2 -M^2)^2} \end{eqnarray} with $M^2=(1-x)m_1^2+xm_2^2-x(1-x)p^2$.
Shifting the integration variable, we arrive at the standard scalar-type ILI \begin{eqnarray} L&=& \int_0^1dx{\int\frac{d^4k}{(2\pi)^4}}\frac{1}{( k^2 -M^2)^2} = \int_0^1dx\ I_0 \end{eqnarray} By making a Wick rotation and applying the loop regularization prescription to this integral, we then obtain the regularized Feynman integral \begin{eqnarray} L^R= i \int_0^1dx\lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int\frac{d^4k}{(2\pi)^4}\frac{1}{(k^2+M^2+M_l^2)^2} \end{eqnarray} Alternatively, one can also apply the regularization prescription before shifting the integration variable, i.e., $(k-xp)^2 \to (k-xp)^2 + M_l^2$; we then have \begin{eqnarray} L^{\prime R}= i \lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int\frac{d^4k}{(2\pi)^4}\frac{1}{[(k-xp)^2+M^2+M_l^2]^2} \end{eqnarray} which becomes a well-defined integral, so that we can safely shift the integration variable: \begin{eqnarray} L^{\prime R}=\int_0^1dx\lim_{N, M_l^2}\sum_{l=0}^{N}c_l^N\int\frac{d^4k}{(2\pi)^4}\frac{1}{(k^2+M^2+M_l^2)^2} \equiv L^R \end{eqnarray} which explicitly shows that in the loop regularization method one can safely shift the integration variables and express all the Feynman integrals in terms of ILIs before applying the regularization prescription. In fact, it was found from the calculation of the triangle anomaly that even for linearly divergent integrals one should first make a shift of the integration variable, which then allows one to eliminate the ambiguities and leads to a consistent result. The reason is simply that loop regularization is translationally invariant. \section{Checking Ward-Takahashi-Slavnov-Taylor identities with explicit calculations of Renormalization Constants and $\beta$ function} With the above analysis, we are in a position to calculate the renormalization constants of non-Abelian gauge theory at the one-loop level by using the loop regularization method.
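The two-denominator Feynman parameter identity used above, $\frac{1}{ab}=\int_0^1 dx\,[xa+(1-x)b]^{-2}$, which produced the $x$-integral for $L$, is easy to verify numerically; a minimal sketch (the helper name is ours):

```python
def feynman_combine(a, b, n=20000):
    """Midpoint-rule evaluation of int_0^1 dx / [x*a + (1-x)*b]^2."""
    h = 1.0 / n
    return sum(h / (((k + 0.5) * h) * a + (1.0 - (k + 0.5) * h) * b) ** 2
               for k in range(n))

a, b = 3.7, 0.9
assert abs(feynman_combine(a, b) - 1.0 / (a * b)) < 1e-6  # agrees with 1/(ab)
```

The same identity with both denominators shifted by $M_l^2$ underlies the equality of $L^{\prime R}$ and $L^R$ above.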
More details can be found in Appendix C, where we evaluate all the one-loop divergent diagrams in terms of the explicit forms of ILIs. \subsection{Renormalization constant for the fermion field strength} As there is only one diagram which contributes to the one-loop renormalization of the fermion field strength, the divergent part of this diagram has been evaluated in detail in Appendix C and explicitly given in terms of the ILIs. Here we only write down the regularized divergent part for the purpose of defining the relevant renormalization constant \begin{eqnarray} L(2)_{div}&=&(-g^2C_2)\int_0^1dx_1\{[x_1(3x_1-4)(\xi-1)-2x_1]p{\hspace{-0.17cm}\slash}+[2x_1(\xi-1)+4]m\}I_0^R \end{eqnarray} The explicit form of $I_0^R$ in loop regularization is \begin{eqnarray} I_0^R&=&\frac{i}{16\pi^2}[\ln\frac{M_c^2}{\mu^2}-\gamma_w+y_0(\frac{\mu^2}{M_c^2})] \end{eqnarray} The next step is to introduce appropriate renormalization conditions to make a suitable subtraction. Namely, we shall find a prescription to divide the Feynman integral into a divergent part and a finite part, and cancel the divergent part by the counterterms. Such a prescription fixes the renormalization constants uniquely. Many different ways to introduce the renormalization conditions have been put forward in the literature; they are referred to as different renormalization schemes, such as the on-shell scheme, the momentum subtraction scheme, the minimal subtraction scheme, and so on. Different renormalization schemes lead to different definitions of the renormalized parameters. Nevertheless, the physical content of the theory, i.e.\ the renormalized S matrix elements, should not depend on the choice of renormalization scheme\cite{Collins}. As is well known, no matter which renormalization scheme is adopted, it is inevitable that a dimensionful mass parameter enters the theory, even when the original theory contains only dimensionless parameters.
For example, in the momentum subtraction scheme one needs to set the reference momentum point for the subtraction, and in the minimal subtraction scheme one has to introduce a mass parameter $\mu$. In fact, this is the essential reason for dimensional transmutation\cite{dimtrans}. Any choice of this parameter is as good as any other; the physics should be invariant under the transformations which merely change it. This is actually a consequence of the renormalization group. Such a mass parameter plays the role of a physically interesting sliding energy scale. To remove the infinities, one needs to specify a subtraction scheme. In the loop regularization method, we may adopt, for simplicity, a subtraction scheme similar to the modified minimal subtraction scheme in dimensional regularization. Noticing that the arbitrary mass parameter $\mu_s$ plays the role of the sliding energy scale, one may rewrite $I_0^R$ as follows \begin{eqnarray} I_0^R=\frac{i}{16\pi^2}[ \ln\frac{M_c^2}{\mu_s^2} -\gamma_w ] +\frac{i}{16\pi^2}[\ln\frac{\mu_s^2}{\mu^2} +y_0(\frac{\mu^2}{M_c^2})]\label{I0R} \end{eqnarray} The term $y_0$ approaches zero, $y_0\to 0$, in the limit $M_c\to \infty$. For the massless case with the on-mass-shell condition, we have $\mu^2 = \mu_s^2$ and $\ln \mu_s^2/\mu^2 = 0$. Thus the subtraction scheme is chosen so that the terms proportional to $\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)$ in the Feynman integral are canceled by the introduction of counterterms. As such a term does not depend on the Feynman parameter $x_1$, the $x_1$ integration can be done easily.
The final results are given by: \begin{eqnarray} L(2)_{div}&=&(-g^2C_2)\int_0^1dx_1\{[x_1(3x_1-4)(\xi-1)-2x_1]p{\hspace{-0.17cm}\slash} +[2x_1(\xi-1)+4]m\}\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\nonumber\\ &=&\frac{-ig^2}{8\pi^2}C_2[(-\xi)p{\hspace{-0.17cm}\slash}+(\xi+3)m]\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} From the condition $i(z_2-1)p\hspace{-0.17cm}\slash+L(2)_{div}=0$, we then obtain the renormalization constant $z_2$: \begin{eqnarray} z_2=1-\frac{g^2}{8\pi^2}C_2\xi\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} \subsection{Renormalization constant for the gluon field strength} Four diagrams contribute to the renormalization of $A_\mu^a$, as shown in Fig.1. These four diagrams have been explicitly evaluated in \cite{LR1,LR2} with the result: \begin{eqnarray} L^{ab}_{R\mu\nu}&=&g^2\delta^{ab}(p^2g_{\mu\nu}-p_{\mu}p_{\nu}) {\int_0^1dx}\{C_1[1+4x(1-x)+\frac{1}{2}(1-\xi)]I_0^R\nonumber\\ & &-8N_fT_2\,x(1-x)I_0^R(m)-4C_1(1-\xi)[1-\frac{1}{8}(1-\xi)]x(1-x)p^2I_{-2}^R\} \end{eqnarray} where $I_0^R$ is the regularized divergent ILI given by Eq.(\ref{I0R}) in loop regularization.
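The $x_1$ integration leading from the ILI form of $L(2)_{div}$ to the closed result above can be cross-checked with exact rational arithmetic: $\int_0^1[x_1(3x_1-4)(\xi-1)-2x_1]dx_1=-\xi$ and $\int_0^1[2x_1(\xi-1)+4]dx_1=\xi+3$. A minimal Python sketch (helper names are ours):

```python
from fractions import Fraction as F

def integrate01(poly):
    """int_0^1 of a polynomial given by its coefficient list [a0, a1, a2, ...]."""
    return sum(a / F(i + 1) for i, a in enumerate(poly))

for xi in (F(0), F(1), F(3, 2)):              # several gauge parameters
    # x1*(3*x1 - 4)*(xi - 1) - 2*x1, expanded as coefficients of 1, x1, x1^2:
    pslash = integrate01([F(0), -4 * (xi - 1) - 2, 3 * (xi - 1)])
    mass = integrate01([F(4), 2 * (xi - 1)])
    assert pslash == -xi and mass == xi + 3   # coefficients of p-slash and m
```

The same one-line polynomial integration reproduces the Feynman-parameter integrals appearing in the other self-energy and vertex diagrams below.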
Thus the divergent part to be subtracted turns out to have the following form: \begin{eqnarray} L^{ab}_{\mu\nu;div}&=&g^2\delta^{ab}(p^2g_{\mu\nu}-p_{\mu}p_{\nu}){\int_0^1dx}\{C_1[1+4x(1-x)+\frac{1}{2}(1-\xi)] \frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\nonumber\\ & &-8N_fT_2\,x(1-x)\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\}\nonumber\\ &=&i\{\frac{g^2}{16\pi^2}(\frac{13}{3}-\xi)C_1\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)-\frac{g^2}{6\pi^2}N_fT_2\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\}\delta^{ab}(p^2g_{\mu\nu}-p_{\mu}p_{\nu}) \end{eqnarray} The above divergent term can be canceled by introducing the counterterm \begin{eqnarray} i(z_3-1)\delta^{ab}(p^2g_{\mu\nu}-p_{\mu}p_{\nu})=L^{ab}_{\mu\nu;div}\nonumber \end{eqnarray} with the renormalization constant $z_3$ \begin{eqnarray} z_3=1+\left[\frac{g^2}{16\pi^2}(\frac{13}{3}-\xi)C_1-\frac{g^2}{6\pi^2}N_fT_2 \right] \frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} \subsection{Ghost self-energy diagram and renormalization of ghost fields} There is only one diagram (fig.3) which contributes to the one-loop renormalization of the ghost field strength.
The divergent part of this diagram is evaluated in Appendix C and reads, in terms of the regularized divergent ILIs, \begin{eqnarray} L(3)^{cd}_{div}&=&-C_1g^2\delta^{cd}\int_0^1dx\,(x-(1-\frac{3}{2}x)(\xi-1))p^2I_0^R \end{eqnarray} Using Eq.(\ref{I0R}) and noticing that the divergent term $\frac{i}{16\pi^2}\ln\frac{M_c^2}{\mu_s^2}$ to be subtracted is independent of the Feynman parameter $x$, we have \begin{eqnarray} L(3)^{cd}_{div}=\frac{ig^2}{16\pi^2}(\frac{1}{2}\xi-\frac{3}{2})C_1\delta^{cd}p^2\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} The counterterm should satisfy the condition \begin{eqnarray} i(\tilde{z}_3-1)p^2\delta^{cd}+L(3)^{cd}_{div}=0 \end{eqnarray} which leads to the renormalization constant \begin{eqnarray} \tilde{z_3}&=&1+\frac{g^2}{16\pi^2}C_1(\frac{3}{2}-\frac{\xi}{2})\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} \subsection{Fermion-gluon vertex renormalization} Two kinds of diagrams, including their permutations (fig.4), contribute to the one-loop renormalization of the fermion-gluon vertex.
They are explicitly evaluated in Appendix C; the divergent parts are given in terms of the regularized divergent ILIs as follows \begin{eqnarray} L(4a)^{aR}_{\mu;div}&=&g^3(C_2-\frac{1}{2}C_1)T^a\int_0^1dx_1\int_0^{x_1}dx_2[2+6(1-x_1)(\xi-1)]\gamma_{\mu}I_0^R(M_{4a})\\ L(4b)^{aR}_{\mu;div}&=&g^3C_1T^a\gamma_{\mu}\int_0^1dx_1\int_0^{x_1}dx_2[3+\frac{9}{4}x_1(\xi-1)]I_0^R(M_{4b}) \end{eqnarray} The corresponding divergent terms to be subtracted are found to be \begin{eqnarray} L(4a)^{aR}_{\mu;div}&=&\frac{ig^3}{8\pi^2}(C_2-\frac{1}{2}C_1)\xi\gamma_{\mu}T^a\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\\ L(4b)^{aR}_{\mu;div}&=&\frac{ig^3}{8\pi^2}\frac{3}{4}(\xi+1)C_1T^a\gamma_{\mu}\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} The total contribution is given by \begin{eqnarray} L(4)^{aR}_{\mu;div}&=&L(4a)^{aR}_{\mu;div}+L(4b)^{aR}_{\mu;div}\nonumber\\ &=&\frac{ig^3}{8\pi^2}[(\frac{3}{4}+\frac{1}{4}\xi)C_1+{\xi}C_2]T^a\gamma_{\mu}\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} From the renormalization condition $(z_{1F}-1)igT^a\gamma_\mu+L(4)^{aR}_{\mu;div}=0$, the renormalization constant $z_{1F}$ reads: \begin{eqnarray} z_{1F}&=&1-\frac{g^2}{8\pi^2}\left[(\frac{3}{4}+\frac{\xi}{4})C_1+{\xi}C_2 \right] \frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} \subsection{Ghost-gluon vertex renormalization} For the one-loop renormalization of the ghost-gluon vertex, there are two diagrams including their permutations (fig.5). Their explicit evaluation is presented in Appendix C.
The divergent parts are given in terms of the regularized divergent ILIs as follows \begin{eqnarray} L(5a)^{acb}_{\mu;div}&=&-\frac{i}{2}g^3C_1f^{acb}\int_0^1dx_1\int_0^{x_1}dx_2\frac{1}{2}{\xi}p_{2\mu}I_0^R(M_{5a})\\ L(5b)^{acb}_{\mu;div}&=&-\frac{3i}{4}g^3C_1f^{acb}\int_0^1dx_1\int_0^{x_1}dx_2(3(x_1-x_2)(\xi-1)+1)p_{2\mu}I_0^R(M_{5b}) \end{eqnarray} The corresponding divergent terms to be subtracted are obtained by integrating over the Feynman parameters $x_1$, $x_2$ \begin{eqnarray} L(5a)^{acb}_{\mu;div}&=&\frac{g^3}{16\pi^2}\frac{1}{4} \xi C_1f^{acb}p_{2\mu}\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\\ L(5b)^{acb}_{\mu;div}&=&\frac{g^3}{16\pi^2}\frac{3}{4}{\xi}C_1f^{acb}p_{2\mu}\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} with the final result \begin{eqnarray} L(5)^{acb}_{\mu;div}&=&L(5a)^{acb}_{\mu;div}+L(5b)^{acb}_{\mu;div}\nonumber\\ &=&\frac{g^3}{16\pi^2}{\xi}C_1f^{acb}p_{2\mu}\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} From the renormalization condition $(\tilde{z_1}-1)gf^{acb}p_{2\mu}+L(5)^{acb}_{\mu;div}=0$, the renormalization constant $\tilde{z_1}$ is given by: \begin{eqnarray} \tilde{z_1}=1-\frac{g^2}{16\pi^2}{\xi}C_1\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} \subsection{Three-gluon vertex renormalization} Four loop diagrams including their permutation graphs contribute to the one-loop renormalization of the three-gluon vertex.
A more detailed evaluation is presented in Appendix C; the divergent parts in terms of the regularized divergent ILIs read \begin{eqnarray} L(6a)_{\mu\nu\lambda;div}^{abc}&=&2ig^3f^{abc}T_2{\int_0^1dx_1}{\int_0^{x_1}dx_2}I_0^R(M_{6a})[4(-x_1+x_2+1)(\frac{1}{2}k_{2\mu}g_{\nu\lambda}+\nonumber\\ & &+\frac{1}{2}k_{2\nu}g_{\mu\lambda}+k_{2\lambda}g_{\mu\nu}-k_{2\mu}g_{\nu\lambda}-k_{2\nu}g_{\mu\lambda}-\frac{1}{2}k_{2\lambda}g_{\mu\nu})+x_2(\frac{1}{2}k_{3\mu}g_{\nu\lambda}+\nonumber\\ & &+\frac{1}{2}k_{3\nu}g_{\mu\lambda}+k_{3\lambda}g_{\mu\nu}-k_{3\mu}g_{\nu\lambda}-k_{3\nu}g_{\mu\lambda}-\frac{1}{2}k_{3\lambda}g_{\mu\nu})+\nonumber\\ & &+(x_2-x_1)(\frac{1}{2}k_{2\nu}g_{\lambda\mu}+\frac{1}{2}k_{2\lambda}g_{\nu\mu}+k_{2\mu}g_{\nu\lambda}-k_{2\nu}g_{\lambda\mu}-k_{2\lambda}g_{\nu\mu}-\frac{1}{2}k_{2\mu}g_{\nu\lambda})+\nonumber\\ & &+x_2(\frac{1}{2}k_{3\nu}g_{\lambda\mu}+\frac{1}{2}k_{3\lambda}g_{\nu\mu}+k_{3\mu}g_{\nu\lambda}-k_{3\nu}g_{\lambda\mu}-k_{3\lambda}g_{\nu\mu}-\frac{1}{2}k_{3\mu}g_{\nu\lambda})+\nonumber\\ & &+(x_2-x_1)(\frac{1}{2}k_{2\lambda}g_{\mu\nu}+\frac{1}{2}k_{2\mu}g_{\lambda\nu}+k_{2\nu}g_{\lambda\mu}-k_{2\lambda}g_{\mu\nu}-k_{2\mu}g_{\lambda\nu}-\frac{1}{2}k_{2\nu}g_{\lambda\mu})+\nonumber\\ & &+(x_2-1)(\frac{1}{2}k_{3\lambda}g_{\mu\nu}+\frac{1}{2}k_{3\mu}g_{\lambda\nu}+k_{3\nu}g_{\lambda\mu}-k_{3\lambda}g_{\mu\nu}-k_{3\mu}g_{\lambda\nu}-\frac{1}{2}k_{3\nu}g_{\lambda\mu})]\\ L(6b)_{\mu\nu\lambda;div}^{abc}&=&2ig^3f^{anm}f^{bmp}f^{cpn}{\int_0^1dx_1}{\int_0^{x_1}dx_2}I_0^R(M_{6b})\nonumber\\ & &\{[-(1-x_1)k_2+x_2k_3]_\lambda{\frac{1}{4}g_{\mu\nu}}+(x_1k_2+x_2k_3)_\nu{\frac{1}{4}g_{\mu\lambda}}+[-(1-x_1)k_2-(1-x_2)k_3]_\mu{\frac{1}{4}g_{\nu\lambda}}\}\nonumber\\ & &+(k_2{\rightarrow}k_3,\nu\rightarrow\lambda,b{\rightarrow}c)\\ L(6c)_{\mu\nu\lambda;div}^{abc}&=&-iC_1g^3f^{abc}\int_0^1dx_1\int_0^{x_1}dx_2I_0^R(M_{6c})\nonumber\\ & & \times \{(\frac{1}{4}g^{\alpha\gamma}{g}_{\lambda\nu}-\frac{2}{4}g^\alpha_\lambda{g}_{\nu\gamma}
-\frac{1}{4}g_{\gamma\lambda}g^\alpha_\nu-\frac{1}{4}g^\alpha_\nu{g}_{\lambda\gamma} -\frac{2}{4}g_{\nu\gamma}g^\alpha_\lambda+\frac{4}{4}g_{\nu\lambda}g^\alpha_\gamma + g^\alpha_\nu{g}_{\gamma\lambda})\nonumber\\ & &\times [((-1-x_1)k_2+(-1-x_2)k_3)^\gamma{g_{\mu\alpha}}+((-1+2x_1)k_2+(-1+2x_2)k_3)_\mu{g_\alpha^\gamma} +((2-x_1)k_2 \nonumber\\ & & +(2-x_2)k_3)_\alpha{g^\gamma_\mu}]+ (-\frac{2}{4}g_{\alpha\lambda}g^\rho_\mu-\frac{1}{4}g_{\alpha\mu}g^\rho_\lambda +\frac{4}{4}g_{\mu\lambda}g^\rho_\alpha+\frac{1}{4}g^\rho_\alpha{g}_{\mu\lambda} -\frac{1}{4}g^\rho_\lambda{g}_{\mu\alpha}-\frac{2}{4}g^\rho_\mu{g}_{\alpha\lambda} +g^\rho_\lambda{g}_{\alpha\mu})\nonumber\\ & &\times [((2-x_1)k_2-x_2k_3)^\alpha{g_{\nu\rho}}+((-1+2x_1)k_2+2x_2k_3)_\nu{g_\rho^\alpha} +((-1-x_1)k_2-x_2k_3)_\rho{g^\alpha_\nu}]\nonumber\\ & & + (-\frac{1}{4}g^\gamma_\mu{g}_{\nu\rho}+g^\gamma_\mu{g}_{\nu\rho}-\frac{2}{4}g^\gamma_\nu{g}_{\mu\rho} +\frac{4}{4}g_{\mu\nu}g^\gamma_\rho+\frac{1}{4}g^\gamma_\rho{g}_{\mu\nu}-\frac{2}{4}g_{\mu\rho}g^\gamma_\nu -\frac{1}{4}g_{\nu\rho}g^\gamma_\mu)\nonumber\\ & & \times [((1-x_1)k_2+(2-x_2)k_3)^\rho{g_{\lambda\gamma}} +((-2+2x_1)k_2+(-1+2x_2)k_3)_\lambda{g_\gamma^\rho} \nonumber \\ & & +((1-x_1)k_2+(-1-x_2)k_3)_\gamma{g^\rho_\lambda}]\}\\ L(6d)_{\mu\nu\lambda;div}^{abc}&=&\frac{3i}{4}g^3C_1f^{abc}(g_{\nu\rho}g_{\lambda\sigma}-g_{\rho\lambda} g_{\sigma\nu})\int_0^1dx_1I_0^R(M_{6d})[(1+x_1)k_1^\sigma{g}^\rho_\mu+(1-2x_1)k_{1\mu}g^{\rho\sigma}+(-2+x_1)k_1^\rho{g}^\sigma_\mu]\nonumber\\ & &+permutation\ graphs \end{eqnarray} The corresponding divergent terms to be subtracted are simply obtained by integrating over the Feynman parameters $x_1$, $x_2$ \begin{eqnarray} L(6a)_{\mu\nu\lambda;div}^{abc}&=&-\frac{4}{3}ig^2T_2\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) gf^{abc}[g_{\mu\nu}(k_{1\lambda}-k_{2\lambda})+g_{\nu\lambda}(k_{2\mu}-k_{3\mu})+g_{\lambda\mu}(k_{3\nu}-k_{1\nu})]\\
L(6b)_{\mu\nu\lambda;div}^{abc}&=&\frac{i}{24}g^2C_1\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) gf^{abc}[g_{\mu\nu}(k_1-k_2)_\lambda+g_{\nu\lambda}(k_2-k_3)_\mu+g_{\lambda\mu}(k_3-k_1)_\nu]\\ L(6c)_{\mu\nu\lambda;div}^{abc}&=&-\frac{13i}{8}C_1g^2\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) gf^{abc}[g_{\mu\nu}(k_1-k_2)_\lambda+g_{\nu\lambda}(k_2-k_3)_\mu+g_{\lambda\mu}(k_3-k_1)_\nu]\\ L(6d)_{\mu\nu\lambda;div}^{abc}&=&\frac{9i}{4}C_1g^2\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) gf^{abc}[g_{\mu\nu}(k_1-k_2)_\lambda+g_{\nu\lambda}(k_2-k_3)_\mu+g_{\lambda\mu}(k_3-k_1)_\nu] \end{eqnarray} with the total result being given by summing over the four diagrams including their permutation graphs \begin{eqnarray} L(6)_{\mu\nu\lambda;div}^{abc}&=&N_fL(6a)_{\mu\nu\lambda;div}^{abc}+L(6b)_{\mu\nu\lambda;div}^{abc}+L(6c)_{\mu\nu\lambda;div}^{abc}+L(6d)_{\mu\nu\lambda;div}^{abc}\nonumber\\ &=&[(\frac{2}{3}ig^2C_1-\frac{4}{3}ig^2N_fT_2)\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)]gf^{abc}[g_{\mu\nu}(k_{1\lambda}-k_{2\lambda})+g_{\nu\lambda}(k_{2\mu}-k_{3\mu})+g_{\lambda\mu}(k_{3\nu}-k_{1\nu})]\nonumber\\ \end{eqnarray} Using the renormalization condition \begin{eqnarray} (z_1-1)gf^{abc}[g_{\mu\nu}(k_{1\lambda}-k_{2\lambda})+g_{\nu\lambda}(k_{2\mu}-k_{3\mu})+g_{\lambda\mu}(k_{3\nu}-k_{1\nu})]+L(6)_{\mu\nu\lambda;div}^{abc}=0 \end{eqnarray} we obtain the renormalization constant $z_1$ in the Feynman gauge $\xi =1$ to be \begin{eqnarray} z_1=1+(\frac{g^2}{12\pi^2}C_1-\frac{g^2}{6\pi^2}N_fT_2)\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} The evaluation in the $\xi$ gauge is rather lengthy; the result is \begin{eqnarray} z_1=1+\left[\frac{g^2}{12\pi^2}[1 + \frac{9}{8} (1 - \xi)]C_1-\frac{g^2}{6\pi^2}N_fT_2 \right] \frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w) \end{eqnarray} \subsection{Four-gluon vertex renormalization} We finally consider the four-gluon vertex renormalization; there are five loop diagrams which
contribute to its renormalization. The detailed evaluation can be found in Appendix C; we present here only the divergent parts in terms of the regularized divergent ILIs \begin{eqnarray} L(7a)_{\mu\nu\lambda\rho;div}^{abcd}&=&6g^4f^{aef}f^{bfj}f^{cjn}f^{dne}\int_0^1dx_1\int_0^{x_1}dx_2\int_0^{x_2}dx_3[\frac{5}{2}(g_{\mu\nu}g_{\lambda\rho}+g_{\mu\rho}g_{\nu\lambda})+\nonumber\\ & &\frac{34}{24}(g_{\mu\nu}g_{\lambda\rho}+g_{\mu\lambda}g_{\nu\rho}+g_{\mu\rho}g_{\nu\lambda})]I_{0}^R(M_{7a})+2\ permutations\\ L(7b)_{\mu\nu\lambda\rho;div}^{abcd}&=&2g^4f^{aef}f^{dme} [f^{lfb}f^{lcm}(g^{\beta}_{\lambda}g_{\nu}^{\chi}-g^{\beta\chi}g_{\nu\lambda})+f^{lfc}f^{lmb}(g^{\beta\chi}g_{\lambda\nu}-g^{\beta}_{\nu}g_{\lambda}^{\chi})+f^{lfm}f^{lbc}(g^{\beta}_{\nu}g^{\chi}_{\lambda}-g^{\beta}_{\lambda}g^{\chi}_{\nu})]\nonumber\\ & &\int_0^1dx_1\int_0^{x_1}dx_2(g_{\beta\mu}g_{\rho\chi}-\frac{1}{4}g_{\beta\mu}g_{\rho\chi}-\frac{2}{4}g_{\beta\rho} g_{\mu\chi}+\frac{4}{4}g_{\mu\rho}g_{\beta\chi}+\frac{1}{4}g_{\beta\chi}g_{\mu\rho}-\frac{2}{4}g_{\mu\chi} g_{\beta\rho}-\frac{1}{4}g_{\rho\chi}g_{\mu\beta})I_0^R(M_{7b})\nonumber\\ & &+5\ permutations\\ L(7c)_{\mu\nu\lambda\rho;div}^{abcd}&=&\frac{1}{2}g^4[f^{eai}f^{ejd}(g_{\mu\beta}g_{\alpha\rho}-g_{\mu\rho}g_{\alpha\beta})+f^{eaj}f^{edi}(g_{\mu\rho}g_{\beta\alpha}-g_{\mu\alpha}g_{\beta\rho})+f^{ead}f^{eij}(g_{\mu\alpha}g_{\rho\beta}-g_{\mu\beta}g_{\rho\alpha})]\times\nonumber\\ & &[f^{fib}f^{fcj}(g^{\alpha}_{\lambda}g_{\nu}^{\beta}-g^{\alpha\beta}g_{\nu\lambda})+f^{fic}f^{fjb}(g^{\alpha\beta}g_{\lambda\nu}-g^{\alpha}_{\nu}g_{\lambda}^{\beta})+f^{fij}f^{fbc}(g^{\alpha}_{\nu}g^{\beta}_{\lambda}-g^{\alpha}_{\lambda}g^{\beta}_{\nu})]\nonumber\\ & &\int_0^1dx_1I_0^R(M_{7c})+2\ permutations\\ L(7d)_{\mu\nu\lambda\rho;div}^{abcd}&=&-\frac{1}{4}g^4f^{aie}f^{bmi}f^{cpm}f^{dep}(g_{\mu\nu}g_{\lambda\rho} +g_{\mu\lambda}g_{\nu\rho}+g_{\mu\rho}g_{\nu\lambda})\int_0^1dx_1\int_0^{x_1}dx_2\int_0^{x_2}dx_3I_{0}^R(M_{7d})+\nonumber\\ & &+5\ 
permutations\\ L(7e)_{\mu\nu\lambda\rho;div}^{abcd}&=&-8g^4Tr(T^{a}T^{d}T^{c}T^{b})(g_{\mu\nu}g_{\lambda\rho}-2g_{\mu\lambda} g_{\nu\rho}+g_{\mu\rho}g_{\nu\lambda})\int_0^1dx_1\int_0^{x_1}dx_2\int_0^{x_2}dx_3I_0^R(M_{7e})+\nonumber\\ & & 5\ permutations \end{eqnarray} The corresponding divergent terms to be subtracted are obtained by integrating over the Feynman parameters $x_1$, $x_2$, $x_3$ \begin{eqnarray} L(7a)_{\mu\nu\lambda\rho;div}^{abcd}&=&g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(\frac{47}{12}F^{abcd}+\frac{17}{12}F^{acbd}+\frac{47}{12}F^{abdc})+\nonumber\\ & &g_{\mu\lambda}g_{\nu\rho}(\frac{17}{12}F^{abcd}+\frac{47}{12}F^{acbd}+\frac{47}{12}F^{abdc})+g_{\mu\rho}g_{\nu\lambda}(\frac{47}{12}F^{abcd}+\frac{47}{12}F^{acbd}+\frac{17}{12}F^{abdc})]\\ L(7b)_{\mu\nu\lambda\rho;div}^{abcd}&=&g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(-\frac{17}{2}F^{abcd}+2F^{acbd}-\frac{17}{2}F^{abdc}-\frac{3}{2}C_1f^{lad}f^{lbc}-\frac{3}{2}C_1f^{lac}f^{lbd})+\nonumber\\ & &g_{\mu\lambda}g_{\nu\rho}(2F^{abcd}-\frac{17}{2}F^{acbd}-\frac{17}{2}F^{abdc}+\frac{3}{2}C_1f^{lad}f^{lbc}-\frac{3}{2}C_1f^{lab}f^{lcd})+\nonumber\\ & &g_{\mu\rho}g_{\nu\lambda}(-\frac{17}{2}F^{abcd}-\frac{17}{2}F^{acbd}+2F^{abdc}+\frac{3}{2}C_1f^{lac}f^{lbd}+\frac{3}{2}C_1f^{lab}f^{lcd})]\\ L(7c)_{\mu\nu\lambda\rho;div}^{abcd}&=&g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(2C_1f^{ead}f^{ebc}+2C_1f^{eac}f^{ebd}+3F^{abcd}+3F^{abdc})+\nonumber\\ & &g_{\mu\lambda}g_{\nu\rho}(2C_1f^{eab}f^{ecd}-2C_1f^{ead}f^{ebc}+3F^{abdc}+3F^{acbd})+\nonumber\\ & &g_{\mu\rho}g_{\nu\lambda}(-2C_1f^{eab}f^{ecd}-2C_1f^{eac}f^{ebd}+3F^{abcd}+3F^{acbd})]\nonumber\\ L(7d)_{\mu\nu\lambda\rho;div}^{abcd}&=&g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(-\frac{1}{12}F^{abcd}-\frac{1}{12}F^{acbd}-\frac{1}{12}F^{abdc})+\nonumber\\ & 
&g_{\mu\lambda}g_{\nu\rho}(-\frac{1}{12}F^{abcd}-\frac{1}{12}F^{acbd}-\frac{1}{12}F^{abdc})+g_{\mu\rho}g_{\nu\lambda}(-\frac{1}{12}F^{abcd}-\frac{1}{12}F^{acbd}-\frac{1}{12}F^{abdc})]\\ L(7e)_{\mu\nu\lambda\rho;div}^{abcd}&=&-\frac{4}{3}T_2g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)\nonumber\\ & &\{g_{\mu\nu}g_{\lambda\rho}(f^{adl}f^{bcl}+f^{acl}f^{bdl})+g_{\mu\lambda}g_{\nu\rho}(f^{abl}f^{cdl}-f^{adl}f^{bcl})+g_{\mu\rho}g_{\nu\lambda}(-f^{abl}f^{cdl}-f^{acl}f^{bdl})\} \end{eqnarray} with $F^{abcd}{\equiv}f^{aef}f^{bfg}f^{cgh}f^{dhe}$. By adding these five diagrams together, we have \begin{eqnarray} L(7)_{\mu\nu\lambda\rho;div}^{abcd}&=&L(7a)_{\mu\nu\lambda\rho;div}^{abcd}+L(7b)_{\mu\nu\lambda\rho;div}^{abcd}+L(7c)_{\mu\nu\lambda\rho;div}^{abcd}+L(7d)_{\mu\nu\lambda\rho;div}^{abcd}+N_fL(7e)_{\mu\nu\lambda\rho;div}^{abcd}\nonumber\\ &=&g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(\frac{47}{12}F^{abcd}+\frac{17}{12}F^{acbd}+\frac{47}{12}F^{abdc})+\nonumber\\ &&g_{\mu\lambda}g_{\nu\rho}(\frac{17}{12}F^{abcd}+\frac{47}{12}F^{acbd}+\frac{47}{12}F^{abdc})+g_{\mu\rho}g_{\nu\lambda}(\frac{47}{12}F^{abcd}+\frac{47}{12}F^{acbd}+\frac{17}{12}F^{abdc})]\nonumber\\ &&+g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(-\frac{17}{2}F^{abcd}+2F^{acbd}-\frac{17}{2}F^{abdc}-\frac{3}{2}C_1f^{lad}f^{lbc}-\frac{3}{2}C_1f^{lac}f^{lbd})+\nonumber\\ &&g_{\mu\lambda}g_{\nu\rho}(2F^{abcd}-\frac{17}{2}F^{acbd}-\frac{17}{2}F^{abdc}+\frac{3}{2}C_1f^{lad}f^{lbc}-\frac{3}{2}C_1f^{lab}f^{lcd})+\nonumber\\ &&g_{\mu\rho}g_{\nu\lambda}(-\frac{17}{2}F^{abcd}-\frac{17}{2}F^{acbd}+2F^{abdc}+\frac{3}{2}C_1f^{lac}f^{lbd}+\frac{3}{2}C_1f^{lab}f^{lcd})]\nonumber\\ &&+g^4\frac{i}{16\pi^2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_w)[g_{\mu\nu}g_{\lambda\rho}(2C_1f^{ead}f^{ebc}+2C_1f^{eac}f^{ebd}+3F^{abcd}+3F^{abdc})+\nonumber\\ 
&&g_{\mu\lambda}g_{\nu\rho}(2C_1f^{eab}f^{ecd}-2C_1f^{ead}f^{ebc}+3F^{abdc}+3F^{acbd})+\nonumber\\ &&g_{\mu\rho}g_{\nu\lambda}(-2C_1f^{eab}f^{ecd}-2C_1f^{eac}f^{ebd}+3F^{abcd}+3F^{acbd})]\nonumber\\ &&+g^4\frac{i}{16\pi^2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)[g_{\mu\nu}g_{\lambda\rho}(-\frac{1}{12}F^{abcd}-\frac{1}{12}F^{acbd}-\frac{1}{12}F^{abdc})+\nonumber\\ & &g_{\mu\lambda}g_{\nu\rho}(-\frac{1}{12}F^{abcd}-\frac{1}{12}F^{acbd}-\frac{1}{12}F^{abdc})+g_{\mu\rho}g_{\nu\lambda}(-\frac{1}{12}F^{abcd}-\frac{1}{12}F^{acbd}-\frac{1}{12}F^{abdc})]\nonumber\\ &&-\frac{4}{3}N_fT_2g^4\frac{i}{16\pi^2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ &&\{g_{\mu\nu}g_{\lambda\rho}(f^{adl}f^{bcl}+f^{acl}f^{bdl})+g_{\mu\lambda}g_{\nu\rho}(f^{abl}f^{cdl}-f^{adl}f^{bcl})+g_{\mu\rho}g_{\nu\lambda}(-f^{abl}f^{cdl}-f^{acl}f^{bdl})\}\nonumber\\ &=&g^4\frac{i}{16\pi^2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)[g_{\mu\nu}g_{\lambda\rho}(\frac{1}{2}C_1f^{ead}f^{ebc}+\frac{1}{2}C_1f^{eac}f^{ebd}-\frac{5}{3}F^{abcd}+\frac{10}{3}F^{acbd}-\frac{5}{3}F^{abdc})+\nonumber\\ &&g_{\mu\lambda}g_{\nu\rho}(\frac{1}{2}C_1f^{eab}f^{ecd}-\frac{1}{2}C_1f^{ead}f^{ebc}+\frac{10}{3}F^{abcd}-\frac{5}{3}F^{acbd}-\frac{5}{3}F^{abdc})+\nonumber\\ &&g_{\mu\rho}g_{\nu\lambda}(-\frac{1}{2}C_1f^{eab}f^{ecd}-\frac{1}{2}C_1f^{eac}f^{ebd}-\frac{5}{3}F^{abcd}-\frac{5}{3}F^{acbd}+\frac{10}{3}F^{abdc})]\nonumber\\ &&-\frac{4}{3}N_fT_2g^4\frac{i}{16\pi^2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ &&\{g_{\mu\nu}g_{\lambda\rho}(f^{adl}f^{bcl}+f^{acl}f^{bdl})+g_{\mu\lambda}g_{\nu\rho}(f^{abl}f^{cdl}-f^{adl}f^{bcl})+g_{\mu\rho}g_{\nu\lambda}(-f^{abl}f^{cdl}-f^{acl}f^{bdl})\}\nonumber\\ &=&[-\frac{1}{3}C_1-\frac{4}{3}N_fT_2]g^4\frac{i}{16\pi^2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ &&\{g_{\mu\nu}g_{\lambda\rho}(f^{adl}f^{bcl}+f^{acl}f^{bdl})+g_{\mu\lambda}g_{\nu\rho}(f^{abl}f^{cdl}-f^{adl}f^{bcl})+g_{\mu\rho}g_{\nu\lambda}(-f^{abl}f^{cdl}-f^{acl}f^{bdl})\} \end{eqnarray} By applying the
renormalization condition \begin{eqnarray} & & (z_4-1)(-ig^2)\{g_{\mu\nu}g_{\lambda\rho}(f^{adl}f^{bcl}+f^{acl}f^{bdl}) +g_{\mu\lambda}g_{\nu\rho}(f^{abl}f^{cdl}-f^{adl}f^{bcl})+g_{\mu\rho}g_{\nu\lambda} (-f^{abl}f^{cdl}-f^{acl}f^{bdl})\} \nonumber \\ & & +L(7)_{\mu\nu\lambda\rho;div}^{abcd}=0 \end{eqnarray} we then obtain in the Feynman gauge $\xi = 1$ the renormalization constant $z_4$ \begin{eqnarray} z_4=1-(\frac{g^2}{24\pi^2}C_1+\frac{g^2}{6\pi^2}N_fT_2)\frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega) \end{eqnarray} A similar but lengthier evaluation in the $\xi$ gauge leads to the result \begin{eqnarray} z_4=1-\left[\frac{g^2}{24\pi^2}(1 + 3(\xi -1) )C_1+\frac{g^2}{6\pi^2}N_fT_2\right]\frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega) \end{eqnarray} \subsection{Ward-Takahashi-Slavnov-Taylor identities and $\beta$ function} In this section we summarize all the renormalization constants in order to check the Ward-Takahashi-Slavnov-Taylor identities and to calculate the $\beta$ function. All the results are listed below: \begin{eqnarray} z_2&=&1-\frac{g^2}{8\pi^2}C_2\xi\frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ z_3&=&1+[\frac{g^2}{16\pi^2}(\frac{13}{3}-\xi)C_1-\frac{g^2}{6\pi^2}N_fT_2]\frac{1}{2} (ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ \tilde{z_3}&=&1+\frac{g^2}{16\pi^2}C_1(\frac{3}{2}-\frac{\xi}{2})\frac{1}{2} (ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ z_{1F}&=&1-\frac{g^2}{8\pi^2}[(\frac{3}{4}+\frac{\xi}{4})C_1+{\xi}C_2]\frac{1}{2} (ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ \tilde{z_1}&=&1-\frac{g^2}{16\pi^2}{\xi}C_1\frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ z_1&=&1+[\frac{g^2}{12\pi^2}(\frac{17}{8} - \frac{9}{8}\xi) C_1-\frac{g^2}{6\pi^2}N_fT_2] \frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber\\ z_4&=&1-[\frac{g^2}{24\pi^2}(-2+ 3\xi)C_1 + \frac{g^2}{6\pi^2}N_fT_2]\frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)\nonumber \end{eqnarray} It is straightforward to verify explicitly the
Ward-Takahashi-Slavnov-Taylor identities: \begin{eqnarray} z_g = \frac{z_{1F}}{z_3^{1/2}z_2}=\frac{\tilde{z_1}}{z_3^{1/2}\tilde{z}_3}=\frac{z_1}{z_3^{3/2}}=\frac{z_4^{1/2}}{z_3} \end{eqnarray} which leads to the gauge independent renormalization constant for the gauge coupling constant $g = z_g^{-1} g_0$ \begin{eqnarray} z_g&=&1-(\frac{11}{48\pi^2}C_1-\frac{1}{12\pi^2}N_fT_2)g^2\frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega) \end{eqnarray} In the loop regularization method, the energy scale $\mu_s$ plays the role of the sliding energy scale. According to the definition of the $\beta$ function, we obtain the one-loop $\beta$ function: \begin{eqnarray} \beta(g)&{\triangleq}&\lim_{M_c\to \infty} \mu_s\frac{\partial}{\partial\mu_s}g\mid_{g_0,m_0}\nonumber\\ &=& -\lim_{M_c\to \infty} g\mu_s\frac{\partial}{\partial\mu_s}ln{z_g}\mid_{g_0,m_0}\nonumber\\ &{\simeq}&g\mu_s\frac{\partial}{\partial\mu_s}[(\frac{11}{48\pi^2}C_1-\frac{1}{12\pi^2}N_fT_2)g^2 \frac{1}{2}(ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)]\nonumber\\ &{\simeq}&g^3\mu_s(\frac{11}{48\pi^2}C_1-\frac{1}{12\pi^2}N_fT_2)\frac{-1}{\mu_s}\nonumber\\ &=&-\frac{g^3}{(4\pi)^2}(\frac{11}{3}C_1-\frac{4}{3}N_fT_2) \end{eqnarray} which agrees with the well-known result obtained by using dimensional regularization. Note that a simple correspondence for the logarithmic divergences between the loop regularization method and the dimensional regularization scheme is \begin{eqnarray} \frac{2}{\varepsilon}\longleftrightarrow ln\frac{M_c^2}{\mu_s^2} \end{eqnarray} with $\varepsilon \to 0$ and $M_c\to \infty$. \section{Conclusion} We have performed a complete calculation of all one-loop diagrams of non-Abelian gauge theory by using the loop regularization method\cite{LR1,LR2} and provided an explicit check of the consistency of the loop regularization method from the Ward-Takahashi-Slavnov-Taylor identities satisfied among the renormalization constants.
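The one-loop identities among the renormalization constants, together with the gauge-parameter independence of $z_g$, can also be verified symbolically. The following sketch (our illustration, not part of the original calculation) abbreviates the common factor $\frac{1}{2}(\ln\frac{M_c^2}{\mu_s^2}-\gamma_\omega)$ by the symbol `L` and expands each combination to order $g^2$:

```python
import sympy as sp

g, xi, C1, C2, Nf, T2, L = sp.symbols('g xi C_1 C_2 N_f T_2 L', positive=True)
# L abbreviates the common factor (1/2)(ln(M_c^2/mu_s^2) - gamma_w)

z2  = 1 - g**2/(8*sp.pi**2)*C2*xi*L
z3  = 1 + (g**2/(16*sp.pi**2)*(sp.Rational(13, 3) - xi)*C1
           - g**2/(6*sp.pi**2)*Nf*T2)*L
z3t = 1 + g**2/(16*sp.pi**2)*C1*(sp.Rational(3, 2) - xi/2)*L
z1F = 1 - g**2/(8*sp.pi**2)*((sp.Rational(3, 4) + xi/4)*C1 + xi*C2)*L
z1t = 1 - g**2/(16*sp.pi**2)*xi*C1*L
z1  = 1 + (g**2/(12*sp.pi**2)*(sp.Rational(17, 8) - sp.Rational(9, 8)*xi)*C1
           - g**2/(6*sp.pi**2)*Nf*T2)*L
z4  = 1 - (g**2/(24*sp.pi**2)*(-2 + 3*xi)*C1 + g**2/(6*sp.pi**2)*Nf*T2)*L

def one_loop(expr):
    # expand to first order in g^2 (one loop) and drop O(g^4)
    return sp.expand(sp.series(expr, g, 0, 3).removeO())

zg1 = one_loop(z1F/(sp.sqrt(z3)*z2))
zg2 = one_loop(z1t/(sp.sqrt(z3)*z3t))
zg3 = one_loop(z1/z3**sp.Rational(3, 2))
zg4 = one_loop(sp.sqrt(z4)/z3)

# all four expressions agree, the xi dependence drops out, and the
# known coefficient of z_g is reproduced
expected = 1 - (sp.Rational(11, 48)*C1/sp.pi**2
                - sp.Rational(1, 12)*Nf*T2/sp.pi**2)*g**2*L
for zg in (zg1, zg2, zg3, zg4):
    assert sp.simplify(zg - expected) == 0
```

Differentiating the coefficient of $g^2 L$ in `expected` with respect to $\ln\mu_s$ then reproduces the one-loop $\beta$ function quoted above.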
It has been shown that the loop regularization method leads to a consistent $\beta$ function. The above explicit calculations make manifest the conclusions stated in \cite{LR1,LR2}: the loop regularization method preserves not only non-Abelian gauge symmetry, but also Lorentz and translational symmetries, despite the existence of the two energy scales $M_c$ and $\mu_s$ introduced intrinsically in this method. As the scales $M_c$ and $\mu_s$ play the roles of ultraviolet and infrared cutoffs respectively, the loop regularization method can deal with both the ultraviolet and infrared divergences. The existence of the two energy scales also enables the loop regularization to maintain the divergent behavior of the original theories, while the quadratic divergences in gauge theories are found to cancel each other since the loop regularization preserves gauge symmetry. Thus both loop regularization and dimensional regularization lead to the same renormalization constants for gauge theories upon making a simple replacement between $\ln M_c/\mu_s$ and $1/\varepsilon$. The possibly distinguishable properties between loop regularization and dimensional regularization may occur in treating chiral field theories with anomalies involving the $\gamma_5$ matrix \cite{MW1,MW2}, in deriving effective field theories with dynamically generated spontaneous symmetry breaking\cite{DW}, as well as in applications to supersymmetric theories involving the exact dimension\cite{CTW}. Finally, we would like to point out that the renormalization scheme dependence is not involved in our present consideration, as our computation of the renormalization constants is only at the one-loop level and our focus in this note is mainly on the check of the Ward-Takahashi-Slavnov-Taylor identities among the renormalization constants.
It is interesting to see that the loop regularization method generally allows one to adopt an on-shell renormalization prescription due to the existence of the energy scale $\mu_s$, which plays the role of infrared cutoff and sliding energy scale. Such a feature may provide a practical way of reducing the renormalization scheme dependence, which is worthwhile to investigate further elsewhere. \acknowledgments \label{ACK} The authors would like to thank Einhorn Marty for valuable discussions during the KITPC program. This work was supported in part by the National Science Foundation of China (NSFC) under grants 10475105 and 10491306, and by the Project of Knowledge Innovation Program (PKIP) of the Chinese Academy of Sciences.
\section{Introduction}\label{sec.intro} Since the invention of the density matrix renormalization group (DMRG)\cite{white1992density}, tensor network states (TNS) have become important numerical and theoretical tools in quantum many-body physics. At its core, the scaling of the entanglement entropy was the novel physical concept that made the invention of the recent TNS methods possible \cite{schollwock2011}. The understanding of the behavior of entanglement entropies of different phases of matter fundamentally improved the numerical tools for condensed matter systems and initiated new research directions for both condensed matter and high energy physics \cite{ryu2006holographic,latorre2015holographic,Pastawski2015}. The study of the entanglement entropy of one-dimensional (1D) gapped ground states led to the invention of the Matrix Product State (MPS)\cite{perez2006matrix,verstraete2008matrix,orus2014} in 1D, the simplest but also the most successful example of a TNS. An MPS is a wave function whose coefficients (in some basis decomposition) are represented as matrix products. The immediate generalization of MPS in two dimensions (2D) is the Projected Entangled Pair States (PEPS)\cite{verstraete2008matrix,eisert2013entanglement} -- the type of TNS we use in this paper. Similar to an MPS, a PEPS is a wave function whose coefficients are tensor contractions. See Fig.~\ref{fig.TNS_example} for a pictorial description of MPS and PEPS. Other types of TNS include the tree Tensor Network (TTN)\cite{shi2006classical} and the Multiscale Entanglement Renormalization Ansatz (MERA)\cite{vidal2008class}, which are beyond the scope of this paper. In three dimensions (3D), the study of TNS is not yet well developed. \begin{figure}[b] \centering \includegraphics[width=0.4\columnwidth]{figures/TNS_examples.pdf} \caption{Examples of TNS lattice wave functions in 1D and 2D. Each node is a tensor whose indices are the lines connecting to it.
The physical indices, i.e. those of the quantum Hilbert space, are the lines with arrows, while the lines without any arrows are the virtual indices. Connected lines mean that the corresponding indices are contracted. Panel (a) is an MPS for 1D systems. Panel (b) is a PEPS on a 2D square lattice.} \label{fig.TNS_example} \end{figure} TNS have been heavily used in condensed matter physics in the past decade, especially in the study of 1D and 2D topological phases\cite{wen2016zoo}. Amongst many examples, \begin{enumerate} \item Numerical simulations of the 1D Haldane chain led to the discovery of symmetry protected topological phases (SPT)\cite{pollmann2012symmetry}. \item Fractional quantum Hall states can be exactly written as MPS\cite{zaletel2012exact,zaletel2013topological,estinne2013matrix,estienne2013fractional,wu2014braiding,estienne2015correlation,wu2015matrix,Grushin2015characterization,zaletel2015infinite,geraedts2015competing,regnault2017entanglement,geraedts2017emergent}, which allows one to perform numerical calculations not accessible by exact diagonalization techniques. \item A large class of spin liquid wave functions can be constructed using TNS with global spin rotation symmetries and lattice symmetries\cite{schuch2012resonating,wang2013constructing,iqbal2014semionic,poilblanc2014resonating,poilblanc2015critical,mambrini2016systematic,mei2017gapped}. \end{enumerate} The ground states of gapped local Hamiltonians are conjectured to obey the area law: the entanglement entropy of the ground states with respect to a subsystem $A$ grows linearly with the area of the subsystem's boundary $\mathrm{Area}(\partial A)$: $$S_A \sim \mathrm{Area}(\partial A).$$ Specifically, in 1D, the entanglement entropy of the subsystem $A$ is a constant: $$S_A \sim \mathrm{Const},$$ since the boundary of $A$ contains only two points. In 2D, the entanglement entropy of the subsystem $A$ obeys: $$S_{A} \sim l,$$ where $l$ is the perimeter length of the boundary of $A$.
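The 1D statement can be made concrete with a small numerical experiment (an illustration of ours, not taken from the paper): for an open-boundary MPS of bond dimension $\chi$, the Schmidt rank across any cut is at most $\chi$, so the entanglement entropy is bounded by $\ln\chi$ regardless of the subsystem size.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, chi = 8, 2, 3   # number of sites, physical dimension, bond dimension

# random open-boundary MPS: psi(s_1,...,s_L) = l^T A[1,s_1] ... A[L,s_L] r
A = rng.normal(size=(L, d, chi, chi))
l = rng.normal(size=chi)
r = rng.normal(size=chi)

psi = np.empty([d]*L)
for idx in np.ndindex(*[d]*L):
    M = np.eye(chi)
    for site, s in enumerate(idx):
        M = M @ A[site, s]
    psi[idx] = l @ M @ r
psi = psi.reshape(-1)
psi /= np.linalg.norm(psi)

# across ANY cut the Schmidt rank is at most chi, so S <= ln(chi),
# independently of how large the two blocks are (the 1D "area law")
entropies = []
for c in range(1, L):
    sv = np.linalg.svd(psi.reshape(d**c, d**(L - c)), compute_uv=False)
    p = sv**2
    p = p[p > 1e-14]
    entropies.append(-np.sum(p*np.log(p)))
assert max(entropies) <= np.log(chi) + 1e-8
```

The bound $S \le \ln\chi$ is what makes MPS efficient for 1D gapped ground states, whose entanglement entropy saturates to a constant.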
For long-range entangled topological phases in 2D, the area law gets supplemented by a leading constant contribution dubbed the topological entanglement entropy \cite{kitaev2006topological,levin2006detecting}, which contains the total quantum dimension. In spatial dimensions higher than 2D, more exotic gapped states of matter exist, beyond the paradigm of topological phases\cite{kitaev2003fault,wen2016zoo}. Recently, 3D so-called fracton models\cite{haah2011local,bravyi2011quantum,haah2013,haah2013lattice,haah2014bifurcation,vijay2015A,haah2016algebraic,Devakul-2017arXiv170910071D,1707.02308,1703.02973,1706.07070,PhysRevB.94.155128,2017arXiv170703838P,2017arXiv170804619S,2017arXiv170910094P,2017arXiv170909673P,2017arXiv171001744M,2017PhRvB..96k5102P,2017arXiv170509300S,2017PhRvD..96b4051P,2017PhRvB..96c5119P,2017EPJST.226..749J,pretko2017fracton,schmitz2017recoverable,slagle2017x,shirley2017fracton,gromov2017fractional}, represented by the Haah code\cite{haah2011local} and the X-cube model, have been proposed, attracting the attention of both the quantum information\cite{PhysRevA.54.1862} and the condensed matter\cite{BRAVYI2011839,PhysRevLett.94.040402,PhysRevB.95.115139,PhysRevB.93.205406,PhysRevB.95.155133} communities. They can be realized by stabilizer code Hamiltonians, whose fundamental property is that they consist solely of sums of terms that commute with each other. They are hence exactly solvable. The defining features of fracton models include (but are not restricted to) the following: \begin{enumerate} \item Fracton models are gapped, since they can be realized by commuting Hamiltonian terms. \item The ground state degeneracy on the torus changes as the system size changes. Hence, fracton models seem not to have a conventional thermodynamic limit. \item The low energy excitations can have fractal shapes, other than only points and loops available in conventional topological phases.
\item The excitations of fracton models are not fully mobile: they can either move only along submanifolds of the 3D lattice (Type I fracton models), or are completely immobile without energy dissipation (Type II fracton models). \end{enumerate} In this paper, we obtain a TNS representation for \emph{some} of the ground states of three stabilizer codes in 3D: the 3D toric code model\cite{kitaev2003fault}, the X-cube model and the Haah code. The latter two belong to the catalog of fracton models, while the first one belongs to the conventional topological phases. For instance, the ground state degeneracies on the torus of the X-cube model and the Haah code do not converge to a single number as the system size increases. In contrast, the ground state degeneracy (GSD) on the torus for the 3D toric code model is 8 for all system sizes. Ref.~\onlinecite{vijay2016fracton} treated the X-cube and Haah code models using the idea of lattice gauge theory. The gauge symmetry is generally generated by part of the commuting Hamiltonian terms; the rest of the Hamiltonian terms are interpreted as enforcing flat flux conditions. More explicitly, the authors treated the terms only made of Pauli $Z$ operators as the gauge symmetry generators, and the terms only made of Pauli $X$ operators as the flux operators. The gauge symmetries in the X-cube and the Haah code models are not the conventional $\mathbb{Z}_2$ gauge symmetry such as that in the 3D toric code model, since the gauge symmetry generators, the Pauli $Z$ terms of the X-cube and the Haah code models, are different from those in the 3D toric code model. Refs.~\onlinecite{ma2017fracton,vijay2017isotropic} derived the X-cube model from ``isotropically'' layered 2D toric code models and condensations. The caveat is that this condensation is weaker than the conventional boson condensation in modular tensor categories or field theory\cite{eliens2014,kong2014anyon,neupert2016boson,neupert2016nogo}.
The authors condense a ``composite flux loop'' of coupled layers of the 2D toric code model. The ``composite flux loop'' refers to a composite of four flux excitations near a bond of the lattice. See Ref.~\onlinecite{vijay2017isotropic} for explicit explanations. Using the TNS representations of some of the ground states, we obtain entanglement entropy upper bounds for all three models. We then determine the entanglement cuts for which the TNS directly yields the singular value decomposition (SVD) of the state. For these types of cuts, the entanglement entropy of the three stabilizer codes can be computed exactly. We find that for the fracton models, the entanglement entropy has linear corrections to the area law, corresponding to an exponential degeneracy in the TNS transfer matrix. The transfer matrices of the TNS of the 2D toric code\cite{kitaev2003fault}, whose eigenvalues and eigenstates dominate the correlation functions, have been studied in Refs.~\onlinecite{schuch2013topological,haegeman2015shadows}. The flat entanglement spectra\cite{li2008entanglement} of the 2D toric code were studied in Refs.~\onlinecite{cirac2011entanglement,ho2015edge}. Our TNS construction, when restricted to the 2D toric code model, gives the exact results for the transfer matrices and entanglement spectra. See App.~\ref{app.toriccode_2D} for explicit calculations and explanations. Beyond the 2D toric code, Refs.~\onlinecite{Zou2016Spurious,2017arXiv171001744M} prove that the reduced density matrix of any stabilizer code is a projector. Hence, the corresponding entanglement spectrum is flat, a property that we will rederive from our TNS.
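The flat entanglement spectrum of stabilizer states is easy to see in a minimal example (ours, not from the paper): the 3-qubit GHZ state, a stabilizer state of $Z_1Z_2$, $Z_2Z_3$ and $X_1X_2X_3$, has equal Schmidt values across the cut between qubit 1 and the rest, and its reduced density matrix is proportional to a projector.

```python
import numpy as np

# GHZ state on 3 qubits: stabilizer state of Z1 Z2, Z2 Z3 and X1 X2 X3
psi = np.zeros(8)
psi[0] = psi[7] = 1/np.sqrt(2)

# bipartition qubit 1 | qubits 2,3: reshape to a 2 x 4 matrix and SVD
M = psi.reshape(2, 4)
svals = np.linalg.svd(M, compute_uv=False)
svals = svals[svals > 1e-12]

# flat entanglement spectrum: all nonzero Schmidt values coincide
assert np.allclose(svals, svals[0])

# entanglement entropy S = -sum_i p_i ln p_i with p_i = (Schmidt value)^2
p = svals**2
S = -np.sum(p*np.log(p))
assert np.isclose(S, np.log(2))

# the reduced density matrix is proportional to a projector of rank 2
rho = M @ M.conj().T
assert np.allclose(rho @ rho, 0.5*rho)
```

The same reshape-and-SVD procedure is what the special entanglement cuts of the TNS realize directly for the 3D stabilizer codes studied below.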
We will \emph{not} discuss cocycle twisted topological phases, including Dijkgraaf-Witten theories\cite{hu2013twisted,wan2015twisted,he2017field} or (generalized) Walker-Wang models\cite{walker2012,keyserlingk2013three,burnell2013phase,keyserlingk2015walker,zheng2017structure}, even though they can still be realized by commuting Hamiltonians on the lattice\cite{wan2015twisted,hu2013twisted,walker2012,levin2005string}. However, the presence of nontrivial cocycles will make the TNS construction very different, based on experience with 2D TNS. Our construction will not work for these twisted models. For instance, in 2D, the virtual index dimension using our construction is the same as the physical index dimension. However, when we consider cocycle twisted topological phases, the ``minimal'' virtual bond dimension is generally larger than the physical index dimension\cite{buerschaper2014twisted,he2014modular,buerschaper2009explicit,gu2009tensor}. More explicitly, the minimal virtual bond dimension for the 2D toric code model is 2, while the minimal virtual bond dimension for the 2D double semion model (twisted toric code) is 4\cite{buerschaper2014twisted,he2014modular,buerschaper2009explicit,gu2009tensor}. The 2D cocycle twisted TNS have been systematically explored in the literature for bosonic\cite{buerschaper2014twisted} and fermionic\cite{wille2017fermionic,williamson2016fermionic} systems respectively. The organization of this paper is as follows: In Sec.~\ref{sec.overview}, we set the notations and provide an overview and the general idea of the TNS construction. In Sec.~\ref{sec.TNSSVD}, we present the calculation of the entanglement properties using the developed TNS construction. In Sec.~\ref{sec.toriccode}, we present the TNS construction for the toric code model in 3D. The entanglement entropy is calculated from the obtained TNS. The transfer matrix is constructed afterwards and is proven to be a projector of rank 2.
In Sec.~\ref{sec.Xcube}, we present the TNS construction for the X-cube model. The same calculations for the entanglement entropy and the transfer matrix are presented. They are quickly shown to be very different from the toric code model. Indeed, the entanglement entropies have linear corrections to the area law, and the transfer matrix is exponentially degenerate. In Sec.~\ref{sec.Haah}, we present the TNS construction for the Haah code. The entanglement entropies are calculated for several types of cuts. In Sec.~\ref{sec.discussion}, we summarize the paper and discuss future directions. \section{Stabilizer Code Tensor Network States}\label{sec.overview} In this section, we provide an overview of the stabilizer codes and the tensor network state description of their ground states. In this article, we focus on a few ``main'' stabilizer codes in three dimensions: the toric code\cite{kitaev2003fault}, the X-cube model\cite{vijay2016fracton} and the Haah code\cite{haah2011local}. The TNSs for these models have similarities in their derivation and they share several (but importantly not all!) common features. Both aspects are presented in this section. For pedagogical purposes, we discuss the 2D toric code in App.~\ref{app.toriccode_2D}. \subsection{Notations}\label{subsec.notations} We first fix some of the notations used in the paper, to which we will refer throughout the manuscript: \begin{enumerate} \item The Pauli matrices $X$ and $Z$ are defined as: \begin{equation} \begin{split} X = \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right), \; Z = \left(\begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix}\right). \end{split} \end{equation} \item We introduce a $g$ tensor, which denotes the projector from a physical index to virtual indices. $g$ tensors are essentially the same (up to the number of indices) for all stabilizer codes.
$g$ tensors have two virtual indices and one physical index for the 3D toric code model and the X-cube model, while $g$ tensors for the Haah code have four virtual indices and one physical index. They are depicted in Eq.~\eqref{eq.projector2}, \eqref{eq.projector4L} and \eqref{eq.projector4R}. \item We introduce the $T$ tensor, which denotes the local tensor for each model. It only has virtual indices and thus no physical indices. The specific tensor elements are determined by the Hamiltonian terms. \item Since we consider mostly models on cubic lattices, the indices of $T$ tensors will be denoted as $x$, $\bar{x}$, $y$, $\bar{y}$, $z$ and $\bar{z}$ in the 3 directions (forward and backward) respectively. The indices will be collectively denoted using curly brackets. For instance, the physical indices are collectively denoted as $\{s\}$, while the virtual indices are denoted as $\{t\}$. The virtual indices which are not contracted over are called ``open indices''. Both the physical indices and the virtual indices take non-negative integer values. \item Graphically, the physical indices are denoted by arrows, while the virtual indices are not associated with any arrows. See Fig.~\ref{fig.TNS_example}. \item The contraction of a network of tensors over the virtual indices is denoted as $\mathcal{C}^{\mathcal{M}} \left( \; \right)$ where $\mathcal{M}$ is the spatial manifold that the TNS lives on. The corresponding wave function that arises from the contraction is denoted as $\ket{\mathrm{TNS}}_{\mathcal{M}}$. When evaluating the TNS norms or any other physical quantities, we contract over the virtual indices from both the bra and the ket layer. This contraction is still denoted by $\mathcal{C}^{\mathcal{M}} \left( \; \right)$. \item $L_x$, $L_y$ and $L_z$ refer to the system sizes in the three directions (the boundary conditions will be specified), while $l_x$, $l_y$ and $l_z$ refer to the sizes of the entanglement cut. Both are measured in units of vertices.
\item The TNS gauge is defined as the gauge degree of freedom of the TNS such that the wave function stays invariant while the local tensors change. One can insert identity operators $\mathbb{I}=UU^{-1}$ on the virtual bonds, where $U$ is any invertible matrix acting on the virtual index, and multiply $U$ and $U^{-1}$ into the neighboring local tensors respectively. The local tensors then change but the wave function stays invariant. We refer to this gauge degree of freedom as the TNS gauge. The TNS gauge exists in MPS, PEPS, etc. See Fig.~\ref{fig.TNSgauge} for an illustration. In our calculations, we only fix the tensor elements up to the TNS gauge. \end{enumerate} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{figures/TNSgauge.pdf} \caption{An illustration of the TNS gauge in MPS. (a) A part of an MPS. $A_1$ and $A_2$ are two local tensors contracted together. (b) We insert the identity operator $\mathbb{I}=UU^{-1}$ at the virtual level - it acts on the virtual bonds. The tensor contraction of $A_1$ and $A_2$ does not change. (c) We further multiply $U$ with $A_1$ and $U^{-1}$ with $A_2$, resulting in $\tilde{A}_1$ and $\tilde{A}_2$ respectively in Panel (d). The tensor contraction of $A_1$ and $A_2$ is the same as the tensor contraction of $\tilde{A}_1$ and $\tilde{A}_2$. The TNS does not change either. Similar TNS gauges also appear in other TNS such as PEPS.} \label{fig.TNSgauge} \end{figure} \subsection{Stabilizer Code and TNS Construction}\label{subsec.TNSingeneral} \begin{figure*}[t] \centering \includegraphics[width=0.6\textwidth]{figures/TNS.pdf} \caption{(a) A plane of TNS on a cubic lattice. (b) TNS on a cube. The lines with arrows are the physical indices. The connected lines are the contracted virtual indices, while the open lines are not contracted. On each vertex, there lives a $T$ tensor, and on each bond, we have a projector $g$ tensor.
} \label{fig.TNS} \end{figure*} We now summarize the general idea of constructing TNSs for stabilizer codes. In App.~\ref{app.toriccode_2D}, we provide the construction of the TNS for the 2D toric code model on a square lattice. In the following, we assume that the physical spins are defined on the bonds of the cubic lattice (such as the 3D toric code and the X-cube models). The cases where the physical spins are defined on vertices can be analyzed similarly. The generic philosophy of any stabilizer code model is captured by the following exactly solvable Hamiltonian: \begin{equation} H = - \sum_{v} A_{v} - \sum_{p} B_p \label{stabilizercodeplaqutteandvertex} \end{equation} where the Hamiltonian is the sum of the $A_v$ terms, which are products of only Pauli $Z$ operators, and the $B_p$ terms, which are products of only Pauli $X$ operators. $v$ and $p$ denote the positions of the $A_v$ and $B_p$ operators on the lattice. In the 3D toric code, $v$ is the vertex of the cubic lattice while $p$ is the plaquette. In the X-cube model, $v$ is the vertex while $p$ is the cube. In the Haah code, both $v$ and $p$ are cubes. See Sec.~\ref{subsec.toriccode}, \ref{subsec.Xcube} and \ref{subsec.HaahCode} for the definitions of the Hamiltonians of these three models. All these \textit{local} operators commute with each other: \begin{equation} \begin{split} \commute{A_{v}}{A_{v^\prime}} &= 0, \quad \forall\; v,v^\prime \\ \commute{B_{p}}{B_{p^\prime}} &= 0, \quad \forall\; p,p^\prime \\ \commute{A_{v}}{B_{p}} &= 0, \quad \forall\; v,p .\\ \end{split} \end{equation} The Hamiltonian eigenstates are the eigenstates of these local terms individually. In particular, any ground state $\ket{\mathrm{GS}}$ should satisfy: \begin{equation} \begin{split} A_{v} \ket{\mathrm{GS}} = \ket{\mathrm{GS}}, \quad \forall\; v \\ B_{p} \ket{\mathrm{GS}} = \ket{\mathrm{GS}}, \quad \forall\; p \\ \end{split} \end{equation} for all positions labeled by $v$ and $p$.
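A minimal numerical sketch of this structure (ours, using a 2D rather than 3D example in the spirit of App.~\ref{app.toriccode_2D}; the edge indexing is our own convention) places qubits on the edges of a $2\times 2$ torus, builds the vertex $Z$-stars $A_v$ and plaquette $X$-terms $B_p$, and checks that they commute and that their common $+1$ eigenspace, obtained from the projector $\prod_v\frac{1+A_v}{2}\prod_p\frac{1+B_p}{2}$, is four-dimensional, the well-known toric code GSD on the torus:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

Lsize = 2
n = 2*Lsize*Lsize                 # one qubit per edge of an L x L torus

def ex(i, j):                     # x-directed edge attached to vertex (i, j)
    return 2*((i % Lsize)*Lsize + (j % Lsize))

def ey(i, j):                     # y-directed edge attached to vertex (i, j)
    return 2*((i % Lsize)*Lsize + (j % Lsize)) + 1

def pauli(op, qubits):
    # tensor product with `op` on the listed qubits and identity elsewhere
    return reduce(np.kron, [op if q in qubits else I2 for q in range(n)])

A = [pauli(Z, {ex(i, j), ex(i-1, j), ey(i, j), ey(i, j-1)})
     for i in range(Lsize) for j in range(Lsize)]      # vertex Z-stars
B = [pauli(X, {ex(i, j), ex(i, j+1), ey(i, j), ey(i+1, j)})
     for i in range(Lsize) for j in range(Lsize)]      # plaquette X-terms

# every A_v commutes with every B_p: they always share an even number of edges
for a in A:
    for b in B:
        assert np.allclose(a @ b, b @ a)

# projector onto the common +1 eigenspace; its trace is the GSD on the torus
P = reduce(lambda p, s: p @ (np.eye(2**n) + s)/2, A + B, np.eye(2**n))
gsd = int(round(np.trace(P)))
assert gsd == 4
```

The degeneracy $4$ follows from counting: $8$ qubits with $4+4$ stabilizers subject to the two constraints $\prod_v A_v = \prod_p B_p = \mathbb{I}$ leave $2^{8-6}=4$ ground states.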
In this paper, we only consider Hamiltonians that are a sum of local terms, each being either a product of Pauli $Z$ operators or a product of Pauli $X$ operators. Thus, we do not include the case of mixed products of Pauli $Z$ and $X$ operators. The ground states for the stabilizer codes with a Hamiltonian as in Eq.~\eqref{stabilizercodeplaqutteandvertex} can be written exactly in terms of TNS. Our construction, when restricted to the 2D toric code model, is the same as in the literature\cite{buerschaper2009explicit,gu2009tensor}. In the following, we provide one possible general construction for such TNSs. We introduce a projector $g$ tensor with one physical index $s$ and two virtual indices $i, j$: \begin{equation}\label{eq.projector2} g^{s}_{ij} = \begin{gathered} \includegraphics[width=3cm]{figures/TNS_projector.pdf} \end{gathered} = \begin{cases} 1 & s=i=j \\ 0 & \text{otherwise} \end{cases} \end{equation} where the line with an arrow represents the physical index, and the lines without arrows correspond to the virtual indices. The physical index $s=0, 1$ represents the Pauli $Z$ eigenstates $|\mathord{\uparrow}\rangle, |\mathord{\downarrow}\rangle$ respectively, where $Z|\mathord{\uparrow}\rangle=|\mathord{\uparrow}\rangle$ and $Z|\mathord{\downarrow}\rangle=-|\mathord{\downarrow}\rangle$. The projector $g$ tensor maps the physical spin into the virtual spins exactly. As a result, the virtual index has a bond dimension $2$. When a Pauli operator acts on the physical index of a projector $g$ tensor, its action transfers to the virtual indices of $g$. For instance, a Pauli operator $X$ acting on the physical index of a $g$ tensor amounts to two Pauli operators $X$ acting on \emph{both} virtual indices of the same $g$ tensor, and a Pauli operator $Z$ acting on the physical index of a $g$ tensor amounts to a Pauli operator $Z$ acting on \emph{either} virtual index of the same $g$ tensor.
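The two transfer rules just stated can be checked directly in a few lines (our sketch; the axis ordering $(s,i,j)$ of the $g$ tensor is our convention):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# g[s, i, j] = 1 iff s == i == j, as in Eq. (eq.projector2)
g = np.zeros((2, 2, 2))
g[0, 0, 0] = g[1, 1, 1] = 1.

# X on the physical leg equals X acting on BOTH virtual legs
lhsX = np.einsum('st,tij->sij', X, g)
rhsX = np.einsum('skl,ki,lj->sij', g, X, X)
assert np.allclose(lhsX, rhsX)

# Z on the physical leg equals Z acting on EITHER single virtual leg
lhsZ = np.einsum('st,tij->sij', Z, g)
rhs1 = np.einsum('skj,ki->sij', g, Z)   # Z on the first virtual leg
rhs2 = np.einsum('sil,lj->sij', g, Z)   # Z on the second virtual leg
assert np.allclose(lhsZ, rhs1) and np.allclose(lhsZ, rhs2)
```

These two identities are what allow the $A_v$ and $B_p$ conditions on the physical legs to be pushed onto the virtual legs of the $T$ tensors below.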
To each vertex, we associate a local tensor $T$ which only has virtual indices. To each bond, we associate a projector $g$ tensor. The TNS is obtained by contracting the $g$ and $T$ tensors as depicted in Fig.~\ref{fig.TNS} (a) and (b). We define the TNS as: \begin{equation}\label{eq.TNS} \ket{\mathrm{TNS}} = \sum_{\{s\}} \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1}g^{s_2}g^{s_3} \ldots TTT \ldots \right) \ket{\{s\}} \end{equation} where $\mathcal{C}^{\mathcal{R}^3}$ denotes the contraction over all virtual indices on $\mathcal{R}^3$ as illustrated in Fig.~\ref{fig.TNS} (b); $\ket{\{s\}}$ is a wave function basis for spin configurations on the cubic lattice in the Pauli $Z$ basis. The TNS can be put on other spatial manifolds such as $\mathcal{T}^3$ and $\mathcal{T}^2 \times \mathcal{R}$. In our notation, they are denoted by changing $\mathcal{C}^{\mathcal{R}^3}$ to $\mathcal{C}^{\mathcal{T}^3}$ and $\mathcal{C}^{\mathcal{T}^2 \times \mathcal{R}}$. The TNS for the ground states satisfies: \begin{equation}\label{eq.ABcondition} \begin{split} A_v \ket{\mathrm{TNS}} = \ket{\mathrm{TNS}}, \quad \forall \; v \\ B_p \ket{\mathrm{TNS}} = \ket{\mathrm{TNS}}, \quad \forall \; p \\ \end{split} \end{equation} for all positions labeled by $v$ and $p$. The actions of the $A_v$ and $B_p$ operators on the TNS can be transferred to the virtual indices, using the definition of the $g$ tensor. Since the virtual indices of the projector $g$ tensors are contracted with the virtual indices of the $T$ tensors, the actions of $A_v$ and $B_p$ on the physical indices will be transferred to actions on the local tensors $T$. By enforcing the local tensors $T$ to be invariant under the $A_v$ and $B_p$ actions, we obtain Eq.~\eqref{eq.ABcondition}, and $\ket{\mathrm{TNS}}$ belongs to the ground state manifold. For the three models analyzed in this paper, we have found that, up to the TNS gauge, the elements of the local tensor $T$ can be reduced to two values, either $1$ or $0$.
The first equation of Eq.~\eqref{eq.ABcondition} restricts the local $T$ tensor to be: \begin{equation} \begin{split} T_{x\bar{x}y\ldots} \begin{cases} \neq 0 & \parbox[t]{.6\textwidth}{if the indices $x\bar{x}y\ldots$ satisfy \\ some constraints}\\ = 0 & \text{otherwise}\\ \end{cases}. \end{split} \end{equation} Applying the second equation of Eq.~\eqref{eq.ABcondition} will further restrict the local $T$ tensor to be: \begin{equation} \begin{split} T_{x\bar{x}y\ldots} = \begin{cases} 1 & \parbox[t]{.6\textwidth}{if the indices $x\bar{x}y\ldots$ satisfy \\ some constraints}\\ 0 & \text{otherwise}\\ \end{cases}. \end{split} \end{equation} For simplicity, we calculate the entanglement entropies of the wave function on $\mathcal{R}^3$ with respect to some specific entanglement cuts, and compute the ground state degeneracy (GSD) of the 3D toric code and X-cube model on $\mathcal{T}^3$. We emphasize that in this paper, we are only concerned with the bulk wave functions and their entanglement entropies. In principle, the TNS of Eq.~\eqref{eq.TNS} requires boundary conditions, i.e. the virtual indices at infinity on $\mathcal{R}^3$. The boundary conditions are assumed not to make a difference to the reduced density matrices in the bulk. (Notice that this is true as long as the region considered for the reduced density matrices does not contain any boundary virtual index.) Hence, we do not need to specify the boundary conditions for the TNS in the following calculations of entanglement entropies. \subsection{TNS Norm}\label{subsec.TNSnorm} Evaluating the norm of the TNS given by Eq.~\eqref{eq.TNS} (or any scalar product between two TNS) is straightforward. Indeed the $g$ tensors are projectors, and hence greatly simplify the expression of the tensor network norm when we contract over the physical indices. 
Given the wave function of Eq.~\eqref{eq.TNS}, we can compute its norm as follows: \begin{widetext} \begin{equation}\label{eq.TNSnorm} \begin{split} \overlap{\mathrm{TNS}}{\mathrm{TNS}} =& \left(\sum_{\{s\}} \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1}g^{s_2}g^{s_3} \ldots TTT \ldots \right) \bra{\{s\}}\right)^\star \left(\sum_{\{s\}} \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1}g^{s_2}g^{s_3} \ldots TTT \ldots \right) \ket{\{s\}}\right) \\ =& \sum_{\{s\}} \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1\star}g^{s_2\star}g^{s_3\star} \ldots T^\star T^\star T^\star \ldots \right) \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1}g^{s_2}g^{s_3} \ldots TTT \ldots \right) \\ =& \sum_{\{s\}} \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1}g^{s_2}g^{s_3} \ldots T^\star T^\star T^\star \ldots \right) \mathcal{C}^{\mathcal{R}^3} \left( g^{s_1}g^{s_2}g^{s_3} \ldots TTT \ldots \right), \\ \end{split} \end{equation} \end{widetext} where $\star$ denotes complex conjugation, and we have used the fact that the $g$ tensors are real for our models. Now we specify a contraction order in Eq.~\eqref{eq.TNSnorm}: we first contract over the physical indices and then over the virtual indices. If the physical indices of two projector $g$ tensors are contracted over, the four virtual indices are enforced to be the same by the definition of the projector $g$ tensor: \begin{equation}\label{eq.projectorcontraction} \begin{split} \sum_{s} g^{s}_{ij}g^{s}_{mn} &= \begin{gathered} \includegraphics[width=2.5cm]{figures/ProjectorContraction.pdf} \end{gathered} \\ &= \begin{cases} 1 & i=j=m=n \\ 0 & \text{otherwise} \\ \end{cases}. \end{split} \end{equation} Thus, when computing the wave function overlap $\overlap{\mathrm{TNS}}{\mathrm{TNS}}$, the virtual indices in the bra layer and the ket layer at the same place are enforced to be the same.
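The projector identity of Eq.~\eqref{eq.projectorcontraction} can be verified numerically in the $\mathbb{Z}_2$ case, where the only nonzero elements are $g^{s}_{ij}=1$ for $s=i=j$. The following is a minimal sketch (using NumPy), not part of the analytical derivation:

```python
import numpy as np

# Z2 projector tensor: g[s, i, j] = 1 iff s == i == j
g = np.zeros((2, 2, 2))
for s in range(2):
    g[s, s, s] = 1.0

# Contract two g tensors over the shared physical index s:
# result[i, j, m, n] = sum_s g[s, i, j] * g[s, m, n]
result = np.einsum('sij,smn->ijmn', g, g)

# Expected from the projector contraction identity: 1 if i == j == m == n, else 0
expected = np.zeros((2, 2, 2, 2))
for k in range(2):
    expected[k, k, k, k] = 1.0

assert np.array_equal(result, expected)
print("bra and ket virtual indices are forced to coincide")
```

The same check goes through for any bond dimension $D$ by replacing $2$ with $D$.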
As a result, we have: \begin{equation}\label{eq.TNSoverlap} \overlap{\mathrm{TNS}}{\mathrm{TNS}} = \mathcal{C}^{\mathcal{R}^3} \left( \ldots \mathbb{T}\mathbb{T}\mathbb{T}\ldots\right), \end{equation} where $\mathcal{C}^{\mathcal{R}^3}$ stands for the contraction of a network of tensors $\mathbb{T}$ over the virtual indices on $\mathcal{R}^3$. In a slight abuse of notation, $\mathcal{C}^{\mathcal{R}^3}$ in Eq.~\eqref{eq.TNSoverlap} stands for the contraction taken over the virtual indices of both the bra and the ket layer, while the contraction in Eq.~\eqref{eq.TNS} is taken over the virtual indices in only the ket layer. The double tensor $\mathbb{T}$ is defined as \begin{equation}\label{eq.T2} \begin{split} \mathbb{T}_{x\bar{x}y\ldots, x'\bar{x}'y'\ldots} &= T^*_{x\bar{x}y\ldots} T_{x'\bar{x}'y'\ldots} \delta_{xx'}\delta_{\bar{x}\bar{x}'}\delta_{yy'}\ldots \\ &= |T_{x\bar{x}y\ldots}|^2 \delta_{xx'}\delta_{\bar{x}\bar{x}'}\delta_{yy'}\ldots, \\ \end{split} \end{equation} for all the elements of $T$ and $\mathbb{T}$. The indices are not summed over in the above equation. The indices $x\bar{x}y\ldots$ come from the bra layer while the indices $x'\bar{x}'y'\ldots$ come from the ket layer. In a 2D square lattice, a $T$ tensor usually has 4 virtual indices $x,\bar{x},y,\bar{y}$, while in a 3D cubic lattice, a $T$ tensor usually has 6 virtual indices $x,\bar{x},y,\bar{y},z,\bar{z}$. If the elements of the $T$ tensor are only either $0$ or $1$, we get, \begin{equation}\label{eq.T2=T1} \begin{split} \mathbb{T}_{x\bar{x}y\ldots, x'\bar{x}'y'\ldots} &= |T_{x\bar{x}y\ldots}|^2\delta_{xx'}\delta_{\bar{x}\bar{x}'}\delta_{yy'}\ldots \\ &=T_{x\bar{x}y\ldots}\delta_{xx'}\delta_{\bar{x}\bar{x}'}\delta_{yy'}\ldots. 
\\ \end{split} \end{equation} Then, \begin{equation}\label{eq.TNSnormContractT} \begin{split} \overlap{\mathrm{TNS}}{\mathrm{TNS}} =& \mathcal{C}^{\mathcal{R}^3} \left( \ldots \mathbb{T}\mathbb{T}\mathbb{T}\ldots\right) \\ =& \mathcal{C}^{\mathcal{R}^3} \left( \ldots TTT\ldots \right). \end{split} \end{equation} This result will be frequently used in the following discussions, especially when we compute wave function overlaps or transfer matrices. Eqs.~\eqref{eq.TNSoverlap} and \eqref{eq.TNSnormContractT} hold true on other manifolds as well, such as $\mathcal{T}^3$ and $\mathcal{T}^2\times \mathcal{R}$. \subsection{Transfer Matrix}\label{subsec.TransferMatrix} \begin{figure}[t] \centering \includegraphics[width=0.4\columnwidth]{figures/MPS_TM.pdf} \caption{Transfer matrix (red dashed square) of a 1D MPS. The connected lines are the contracted virtual indices. The connected arrow lines are the contracted physical indices. The MPS norm (or any other quantities) can be built using the transfer matrix. Higher dimensional transfer matrices are similarly defined for TNS on a cylinder or a torus, by contracting in all directions except one. This leads to a 1D MPS with a bond dimension exponentially larger than the TNS one.} \label{fig.MPS_TM} \end{figure} The transfer matrix method is ubiquitous when using MPS (see Fig.~\ref{fig.MPS_TM} for an illustration of the transfer matrix). It can be generalized to TNS on a 2D cylinder by contracting tensors along the periodic direction of the cylinder. This implies that the bond dimension of the transfer matrix is exponentially large with respect to the cylinder perimeter. 
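As a minimal numerical illustration of the transfer matrix of Fig.~\ref{fig.MPS_TM} (a sketch using NumPy with a generic random MPS, not one of our stabilizer-code TNS), the norm of a translation invariant MPS on a ring of $L$ sites equals the trace of the $L$-th power of its transfer matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, L = 2, 3, 5                     # physical dim, bond dim, ring length
A = rng.standard_normal((d, D, D))    # MPS tensor A[s, i, j]

# Transfer matrix E[(i,k),(j,l)] = sum_s conj(A[s,i,j]) * A[s,k,l]
E = np.einsum('sij,skl->ikjl', A.conj(), A).reshape(D * D, D * D)
norm_tm = np.trace(np.linalg.matrix_power(E, L))

# Brute force: build the full wave function psi[s_1,...,s_L] = Tr(A^{s_1}...A^{s_L})
psi = np.zeros((d,) * L)
for idx in np.ndindex(*([d] * L)):
    M = np.eye(D)
    for s in idx:
        M = M @ A[s]
    psi[idx] = np.trace(M)
norm_exact = np.vdot(psi, psi)

assert np.allclose(norm_tm, norm_exact)
print("Tr(E^L) reproduces <psi|psi>")
```

For the higher-dimensional TNS, the same logic applies with $\mathrm{TM}_{xy}$ in place of $E$, at the cost of a bond dimension exponential in the cross section.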
In 3D, the TNS norm on $\mathcal{T}^3$ of size $L_x \times L_y \times L_z$ can be written as an MPS using transfer matrices $\mathrm{TM}_{xy}$ in each $xy$-plane: \begin{equation}\label{eq.TNSnormTransferMatrix} \begin{split} \overlap{\mathrm{TNS}}{\mathrm{TNS}} &= \mathrm{Tr} \left( \mathrm{TM}_{xy,z=1} \mathrm{TM}_{xy,z=2} \ldots \right) \\ &= \mathrm{Tr} \left( \left( \mathrm{TM}_{xy,z=1}\right)^{L_z} \right) \end{split} \end{equation} where we have assumed that all transfer matrices in each plane are the same: \begin{equation} \mathrm{TM}_{xy,z=1} = \mathrm{TM}_{xy,z=2} = \ldots \end{equation} Eq.~\eqref{eq.TNSnormTransferMatrix} is an alternative way of writing the wave function norm and specifies a contraction order of the tensors in Eq.~\eqref{eq.TNSoverlap}: we first contract the virtual indices along the $xy$-plane, which defines the transfer matrix $\mathrm{TM}_{xy}$, and then contract the virtual indices in the $z$-direction, which leads to the multiplication and the trace of the transfer matrices. The transfer matrix $\mathrm{TM}_{xy}$ is defined as: \begin{widetext} \begin{equation}\label{eq.TransferMatrix} \mathrm{TM}_{xy} = \left(\sum_{\{s\}}\mathcal{C}^{\mathcal{T}^2_{xy}} \left( g^{s_1\star}g^{s_2\star}g^{s_3\star}\ldots T^{\star}T^{\star}T^{\star} \ldots \right) \bra{\{s\}}\right) \left(\sum_{\{s\}}\mathcal{C}^{\mathcal{T}^2_{xy}} \left( g^{s_1}g^{s_2}g^{s_3}\ldots TTT \ldots \right) \ket{\{s\}}\right), \end{equation} \end{widetext} where the TNS contraction is performed along the $xy$-plane with periodic boundary conditions, i.e., the 2D torus $\mathcal{T}^2_{xy}$. Denoting by $\mathbf{T}_{xy}$ the TNS in its ket layer, it can be depicted as: \begin{equation} \begin{gathered} \includegraphics[width=0.8\columnwidth]{figures/TNS_TM_singlelayer.pdf} \end{gathered} \end{equation} $\mathrm{TM}_{xy}$ is the overlap of the bra and ket layers of this TNS over the plane with periodic boundary conditions.
Furthermore, applying Eq.~\eqref{eq.projectorcontraction} to Eq.~\eqref{eq.TransferMatrix}, the virtual indices in the bra layer and the ket layer are identified after the physical indices are contracted in Eq.~\eqref{eq.TransferMatrix}. Hence, we have: \begin{equation}\label{eq.TransferMatrixContractPhysics} \mathrm{TM}_{xy} = \mathcal{C}^{\mathcal{T}^2_{xy}} \left( \ldots \mathbb{T}\mathbb{T}\mathbb{T}\ldots \right), \end{equation} where the tensors $\mathbb{T}$, defined in Eq.~\eqref{eq.T2}, are in the $xy$-plane with periodic boundary conditions. The indices in the $z$-direction are open. By Eq.~\eqref{eq.T2=T1} (which holds when the elements of the $T$ tensor are either $0$ or $1$), the transfer matrix is further simplified to: \begin{equation}\label{eq.TransferMatrixContractT} \mathrm{TM}_{xy} = \mathcal{C}^{\mathcal{T}^2_{xy}} \left( \ldots TTT \ldots \right). \end{equation} Graphically: \begin{equation}\label{eq.TransferMatrixGraph} \begin{split} &\mathrm{TM}_{xy} = \\ &\begin{gathered} \includegraphics[width=0.7\columnwidth]{figures/TNS_TM.pdf} \end{gathered}. \end{split} \end{equation} Suppose the virtual index is of dimension $D$. Then in Eq.~\eqref{eq.TransferMatrix}, the transfer matrix is of dimension $D^{2L_xL_y} \times D^{2L_xL_y}$. However, in Eq.~\eqref{eq.TransferMatrixContractPhysics}, the transfer matrix reduces to dimension $D^{L_xL_y} \times D^{L_xL_y}$, since the indices in the bra layer and the ket layer are identified due to the contraction over the physical indices of the projector $g$ tensors. \section{Entanglement properties of the stabilizer code TNS}\label{sec.TNSSVD} The specific structure of the TNS discussed in the previous section allows us to derive its entanglement properties. In this section, we show that for a large class of entanglement cuts, the TNS is already in Schmidt form, i.e., it is exactly a singular value decomposition (SVD).
We also summarize the main results for the entanglement entropies and the transfer matrices that we have obtained for the three stabilizer codes. \subsection{TNS as an exact SVD}\label{subsec.TNSSVD} We propose a general sufficient condition for the TNS to be an SVD with respect to particular entanglement cuts. Let us denote the TNS with open virtual indices $\{t\}$ as: \begin{equation} \ket{\{t\}} = \sum_{\{s\}} \mathcal{C}^{\mathcal{M}} \left( TTT \ldots g^{s_1}g^{s_2}g^{s_3}\ldots \right) \ket{\{s\}}, \label{tfunction} \end{equation} where $\mathcal{M}$ is an open manifold on which the TNS lives, and $\mathcal{C}^{\mathcal{M}}$ stands for the contraction over the virtual indices inside $\mathcal{M}$, but not over the open ones $\{t\}$ that straddle the boundary of $\mathcal{M}$. In Eq.~\eqref{tfunction}, the $T$ tensors and $g$ tensors are the tensors inside $\mathcal{M}$, i.e., those whose nodes lie inside $\mathcal{M}$. For example, when $\mathcal{M}$ is a cube, the TNS can be depicted as: \begin{equation}\label{eq.TNSbasis} \ket{\{t\}} = \begin{gathered} \includegraphics[width=0.3\columnwidth]{figures/SVD_Basis.pdf} \end{gathered}, \end{equation} where the inside of the cube is a network of contracted tensors which are not explicitly drawn, and the red lines denote the open virtual indices $\{t\}$. With this notation for $\ket{\{t\}}$, the TNS can be written as: \begin{equation}\label{eq.TNSgeneralDecomposition} \ket{\mathrm{TNS}} = \sum_{\{t\}} \ket{\{t\}}_{A} \otimes \ket{\{t\}}_{\bar{A}} \end{equation} with respect to a region $A$ and its complement $\bar{A}$. $\ket{\{t\}}_{A}$ is the TNS in region $A$ with open indices $\{t\}$, while $\ket{\{t\}}_{\bar{A}}$ is the TNS in region $\bar{A}$ with the same open indices $\{t\}$ due to tensor contraction. In other words, the TNS naturally induces a bipartition of the wave function. However, the two families of states do not need to form orthonormal sets.
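The role of orthonormality in Eq.~\eqref{eq.TNSgeneralDecomposition} can be sketched with elementary linear algebra (a generic toy example using NumPy, not one of our stabilizer-code TNS): if the families $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ are orthonormal, the decomposition is already an SVD with all nonzero singular values equal, i.e., a flat entanglement spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
dimA, dimB, k = 6, 5, 3   # dims of regions A and A-bar; number of open-index values t

# Orthonormal families |t>_A and |t>_Abar: columns of isometries
U, _ = np.linalg.qr(rng.standard_normal((dimA, k)))
V, _ = np.linalg.qr(rng.standard_normal((dimB, k)))

# |psi> = sum_t |t>_A (x) |t>_Abar, written as a dimA x dimB matrix
psi = U @ V.T

# Since U and V have orthonormal columns, psi = U V^T is already an SVD:
# k singular values equal to 1, the rest vanish (flat spectrum)
s = np.linalg.svd(psi, compute_uv=False)
assert np.allclose(s[:k], 1.0) and np.allclose(s[k:], 0.0)
print("flat Schmidt spectrum:", np.round(s, 6))
```

If the families were orthogonal but not normalized, the singular values would instead be set by the norms of the individual states, which is why normalization has to be checked separately for each model.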
We now propose a simple sufficient (but not generally necessary) condition to determine when Eq.~\eqref{eq.TNSgeneralDecomposition} is an exact SVD for the TNS constructed in this paper. We first have to make an assumption, satisfied by all our TNSs: \textit{Local $T$ tensor assumption:} We assume that the indices of the nonzero elements of the local $T$ tensor are constrained: if all the indices of the element $T_{\ldots t\ldots}$ except for $t$ are fixed, then there is only one choice of $t$ such that $T_{\ldots t \ldots}$ is nonzero. This assumption can be easily verified once the local $T$ tensors are obtained for the three models studied in this paper, such as the 3D toric code model in Eq.~\eqref{eq.ToricCodeTtensor}. We are now ready to express our SVD condition: \textit{SVD condition: If there are no two open virtual indices in $\{t\}$ (see Eq.~\eqref{eq.TNSbasis}) of the region $A$ that connect to the same $T$ tensor in the region $A$, then the non-vanishing states $\ket{\{t\}}_{A}$ form an orthogonal basis. Similarly, if there are no two open virtual indices in $\{t\}$ of the region $\bar{A}$ that connect to the same $T$ tensor in the region $\bar{A}$, then the non-vanishing states $\ket{\{t\}}_{\bar{A}}$ form an orthogonal basis. Therefore, Eq.~\eqref{eq.TNSgeneralDecomposition} is an exact SVD.} \noindent\textbf{Proof}: We first prove the statement for the region $A$. Suppose that $\ket{\{t\}}_{A}$ and $\ket{\{t^\prime\}}_{A}$ are two non-vanishing TNSs in the region $A$. Any open index in $\{t\}$ of the region $A$ must connect to either a projector $g$ tensor or a local tensor $T$. We discuss the two situations in turn, and examine the overlap of two different states $_A\overlap{\{t^\prime\}}{\{t\}}_{A}$ as a function of the two index configurations $\{t^\prime\}$ and $\{t\}$. (1) If the open virtual index $m$ in the ket layer (i.e. $\ket{\{t\}}_A$) connects to a projector $g$ tensor, then the open virtual index $m^\prime$ in the bra layer (i.e.
$_A\bra{\{t^\prime\}}$), at the same place as the index $m$, also connects to a projector $g$ tensor. If we ``zoom in'' on the local area of $_A\overlap{\{t^\prime\}}{\{t\}}_{A}$ near the indices $m$ and $m^\prime$, we have the following diagram: \begin{equation} \begin{gathered} \includegraphics[width=4cm]{figures/SVD_Condition_Situation_1.pdf} \end{gathered} \end{equation} By using Eq.~\eqref{eq.projectorcontraction}, we can conclude that $m=m^\prime$; otherwise $_A\overlap{\{t^\prime\}}{\{t\}}_{A}=0$. (2) If the open virtual index $m_0$ in the ket layer connects to a local $T$ tensor, we require by the \textit{SVD condition} that there are no other open virtual indices connecting to this $T$ tensor. Then the other indices of this $T$ tensor are all inside the region $A$. The same holds for the index $m^\prime_0$ in the bra layer. In terms of a diagram, $_A\overlap{\{t^\prime\}}{\{t\}}_{A}$ near the area of the indices $m_0$ and $m^\prime_0$ can be represented as: \begin{equation} \begin{gathered} \includegraphics[width=4cm]{figures/SVD_Condition_Situation_2.pdf} \end{gathered} \end{equation} where $m_i$ and $m_i^\prime$ with $i=1,2,3\ldots$ denote the other virtual indices of the $T$ tensor in the ket and bra layers respectively, except $m_0$ and $m_0^\prime$. Notice that in the ket layer, the virtual indices $m_i\;(i=1,2,\ldots)$ of the $T$ tensor (all indices except the index $m_0$) are all connected with contracted projector $g$ tensors inside region $A$. Correspondingly, in the bra layer, the virtual indices $m^\prime_i\;(i=1,2,\ldots)$ are also all connected with the same contracted projector $g$ tensors.
Hence, due to these projector $g$ tensors and Eq.~\eqref{eq.projectorcontraction}, all the indices except $m_0$ of the $T$ tensor in the ket layer are equal to their respective analogues in the bra layer: \begin{equation}\label{eq.indexindentification_SVD} m_i = m_i^\prime, \quad i=1,2,\ldots \end{equation} otherwise the overlap would be $_A\overlap{\{t^\prime\}}{\{t\}}_{A}=0$. The only remaining question is whether the open indices $m_0$ and $m_0^\prime$ should be identified in order to have a non-vanishing overlap $_A\overlap{\{t^\prime\}}{\{t\}}_{A}$. Using the \textit{local $T$ tensor assumption}, the indices $m_i\;(i=1,2,\ldots)$ uniquely determine $m_0$ in order to have a nonzero element of the $T$ tensor in the ket layer. Similarly, $m_i^\prime\;(i=1,2,\ldots)$ uniquely determine $m^\prime_0$ in order for the $T$ tensor in the bra layer to give a nonzero element. Therefore, Eq.~\eqref{eq.indexindentification_SVD} implies that: \begin{equation} m_0 = m_0^\prime \end{equation} such that the overlap $_A\overlap{\{t^\prime\}}{\{t\}}_{A}$ is nonzero. Therefore, both situations (1) and (2) lead to the conclusion that the open indices $\{t\}$ and $\{t^\prime\}$ must be identical in order to have a nonzero overlap $_A\overlap{\{t^\prime\}}{\{t\}}_{A}$: the non-vanishing states $\ket{\{t\}}_{A}$ form an orthogonal basis. A similar proof can be derived for the region $\bar{A}$. The orthogonality of each set $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ implies that Eq.~\eqref{eq.TNSgeneralDecomposition} is indeed an SVD. However, the singular values are not determined at this stage since the basis may not be orthonormal (i.e., the states might not be normalized). \hfill$\Box$ In the following specific discussions of the 3D toric code model, the X-cube model and the Haah code, we will show that we can select a region $A$ and a cut on the TNS such that $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ are not only orthogonal, but also normalized.
In particular, for the 3D toric code model and the X-cube model, we can simply select the region $A$ to be a cube, which satisfies the \textit{SVD condition} directly. See respectively Sec.~\ref{subsec.ToricCode_Entanglement} and \ref{subsec.Xcube_Entanglement} for a detailed discussion of these two models and the SVD condition. However, the Haah code is different: a cubic region $A$ does not fulfill the \textit{SVD condition}, and in Sec.~\ref{subsec:HaahcodeSVDCuts} we generalize the \textit{SVD condition} to the \textit{Generalized SVD Condition} and apply $B_c$ operators to make the TNS an SVD. \subsection{Summary of the results}\label{subsec.overviewsummary} We now summarize the major results derived in this paper for the three stabilizer codes. Fundamentally, our calculations come down to the fact that the indices of the nonzero elements of the local tensors $T$ and $g$ are constrained. More specifically, when we calculate the entanglement entropies with a TNS which is an exact SVD, the only task is to count the number of independent Schmidt states $\ket{\{t\}}_{A}$. The number of independent Schmidt states $\ket{\{t\}}_{A}$ is determined by the \textbf{Concatenation lemma}, i.e., when a network of $T$ tensors and $g$ tensors is concatenated, the open indices of the nonzero elements of the resulting tensor are constrained as well. \begin{enumerate} \item The TNS is an exact SVD for the ground states with respect to particular entanglement cuts. The entanglement spectra are flat for the models studied in this paper. \item The entanglement entropy of the TNS is bounded by the area law: $$S \le \mathrm{Area} \times \log(D),$$ where $D$ is the virtual index dimension and $\mathrm{Area}$ is measured in units of vertices. For the models studied in this paper, the entanglement entropies are strictly smaller than the area law bound. For the toric code, the correction is a negative constant, $-\log(2)$.
For the X-cube model and the Haah code, the correction includes a negative term linear in the system size, presented in Sec.~\ref{subsec.Xcube_Entanglement} and \ref{subsec.Haah_Entanglement}. \item The transfer matrices in Eq.~\eqref{eq.TransferMatrix} of the 3D toric code model and the X-cube model are shown to be projectors whose eigenvalues are either $0$ or $1$. For the 3D toric code, the transfer matrix in the $xy$-plane is a projector of rank 2. For the X-cube model, the transfer matrix is a projector of rank $2^{L_x+L_y-1}$, where $L_x$ and $L_y$ are the lattice sizes in the $x$- and $y$-directions respectively. \item We prove that the TNS ground states obtained on the torus using our construction are the $+1$ eigenstates of loop $X$ operators. Hence, our TNS construction does not include all ground states on the torus. The degeneracy of the corresponding transfer matrix is smaller than the GSD on the torus. We can obtain all the ground states with loop/surface $Z$ operators on the TNS, which generate all the wave functions on the torus. We call the TNS with $Z$ operators the ``twisted TNS''. Correspondingly, we also obtain more transfer matrices in the $xy$-plane built from the twisted TNS, and these transfer matrices are all the same projectors. The same TNS phenomenon in the 2D toric code model has been studied in Ref.~\onlinecite{schuch2013topological}. \item In our calculations, both the transfer matrix eigenvalue degeneracies and the corrections to the area law of the entanglement entropies are rooted in the \textbf{Concatenation lemma}. Hence, we believe that the two contributions are related. Specifically, suppose we consider our TNS on a 3D cylinder $\mathcal{T}^2_{xy} \times \mathcal{R}_z$, and the entanglement cut splits the system into two halves $z>0$ and $z<0$. Then, for the toric code model, the transfer matrix $\mathrm{TM}_{xy}$ has the degeneracy 2, and the entanglement entropy correction to the area law is $-\log(2)$.
For the X-cube model, the transfer matrix $\mathrm{TM}_{xy}$ has the degeneracy $2^{L_x+L_y-1}$, and the entanglement entropy correction to the area law is $-(L_x+L_y-1)\log(2)$ (see Eq.~\eqref{eq.XcubeCylinderEntropy}). Moreover, the GSD on $\mathcal{T}^3$ is generally larger than the transfer matrix degeneracy. Therefore, given these calculations, we conjecture that the negative linear correction to the area law is a signature of the extensive ground state degeneracy. \end{enumerate} \section{3D Toric Code}\label{sec.toriccode} In this section, we construct the TNS for the 3D toric code model and then calculate the entanglement entropy and the GSD, both deriving from the \textbf{Concatenation lemma}. The results are immediate generalizations of those for the 2D toric code model. We find a topological entanglement entropy in accordance with that obtained by Ref.~\onlinecite{zheng2017structure} using a field theoretic approach. This section is organized as follows: In Sec.~\ref{subsec.toriccode}, we briefly review the toric code model on a cubic lattice. In Sec.~\ref{subsec.ToricCode_TNS}, we construct the TNS for the toric code model. In Sec.~\ref{subsec.ConcatenationToricCode}, we prove a \textbf{Concatenation lemma} for the toric code TNS, which is useful in the following calculations. In Sec.~\ref{subsec.ToricCode_Entanglement}, we calculate the entanglement entropies on $\mathcal{R}^3$. In Sec.~\ref{subsec.ToricCode_TransferMatrix}, we construct the transfer matrix and prove that it is a projector of rank 2. In Sec.~\ref{subsec.ToricCode_GSD}, we show how to construct $8$ ground states on the torus by twisting the TNS. \subsection{Hamiltonian of 3D Toric Code Model}\label{subsec.toriccode} The 3D toric code model can be defined on an arbitrary lattice. However, for simplicity, we only work on the cubic lattice.
On a cubic lattice, the physical spins are defined on the bonds of the lattice, and the Hamiltonian is built from two types of terms: \begin{equation} H = -\sum_{v} A_v - \sum_{p} B_p, \end{equation} where $A_v$ is defined around a vertex $v$, and $B_p$ is defined on a plaquette $p$: \begin{equation} A_v = \prod_{i \in v} Z_i, \quad B_p = \prod_{i \in p} X_i, \end{equation} where $Z_i$ and $X_i$ are Pauli matrices for the $i$-th spin. On a cubic lattice, $A_v$ is composed of $6$ Pauli $Z$ operators while $B_p$ is composed of $4$ Pauli $X$ operators. These two terms are depicted in Fig.~\ref{fig.ToricCodeHamiltonian}. In the 2D toric code, $A_v$ is composed of $4$ Pauli $Z$ operators on a square lattice. The Hamiltonian is the sum of $A_v$ operators on all vertices $v$ and $B_p$ operators on all plaquettes $p$. \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{figures/ToricCodeModel.pdf} \caption{The Hamiltonian terms of the 3D toric code model. Panel (a) is $A_{v}$, which is a product of 6 $Z$ operators, and Panel (b) is $B_{p}$, which is a product of 4 $X$ operators. The circled $X$ and $Z$ represent the Pauli matrices acting on the spin-$1/2$'s. The toric code Hamiltonian includes $A_v$ terms on all vertices $v$ and $B_p$ terms on all plaquettes $p$.} \label{fig.ToricCodeHamiltonian} \end{figure} It is easy to verify that all the Hamiltonian terms commute: \begin{equation} \begin{split} \commute{A_{v}}{A_{v^\prime}} &= 0, \quad \forall\; v,v^\prime \\ \commute{B_{p}}{B_{p^\prime}} &= 0, \quad \forall\; p,p^\prime \\ \commute{A_{v}}{B_{p}} &= 0, \quad \forall\; v,p, \\ \end{split} \end{equation} and their eigenvalues are $\pm 1$: \begin{equation} A_{v}^2 = 1, \quad B_{p}^2 = 1. \end{equation} The ground states $\ket{\mathrm{GS}}$ should satisfy: \begin{equation}\label{eq.ToricCodeGS} \begin{split} A_v \ket{\mathrm{GS}} &= \ket{\mathrm{GS}}, \quad \forall\; v \\ B_p \ket{\mathrm{GS}} &= \ket{\mathrm{GS}},
\quad \forall\; p \end{split} \end{equation} These two sets of equations are enough to derive the local $T$ tensor and to construct the TNS for the toric code model. In particular, one of the ground states on the torus that we will find is \begin{eqnarray}\label{GSoftoriccode} |\psi\rangle= \prod_{v} \frac{1+A_v}{2} |0_x\rangle, \end{eqnarray} where $|0_x\rangle$ is the tensor product of all $X=1$ eigenstates defined on each link. See App.~\ref{app.TNS_projected} for a proof that Eq.~\eqref{GSoftoriccode} is indeed the TNS that we will now construct. \subsection{TNS for 3D Toric Code}\label{subsec.ToricCode_TNS} We first introduce a projector $g$ tensor Eq.~\eqref{eq.projector2} on each bond of the lattice. Both the virtual indices and the physical indices take two values, $0$ and $1$. The projector $g$ tensor satisfies: \begin{equation}\label{eq.projectorcondition ToricCode} \begin{gathered} \includegraphics[width=\columnwidth]{figures/TNS_projector_condition.pdf} \end{gathered}. \end{equation} In terms of algebraic equations, these diagrams correspond to: \begin{equation} \begin{split} &g^{s}_{i,j}(-1)^{s}=g^{s}_{i,j}(-1)^{i}=g^{s}_{i,j}(-1)^{j} \\ &g^{1-s}_{i,j}=g^{s}_{1-i,1-j}. \\ \end{split} \end{equation} These two sets of equations hold because (1) the indices $s$, $i$ and $j$ are identified for nonzero $g^{s}_{i,j}$, and (2) the nonzero $g^{s}_{i,j}$ are always $1$ according to Eq.~\eqref{eq.projector2}. We can use these conditions to transfer the action of the physical operators to the virtual operators. Now we introduce an additional $T$ tensor on each vertex of the cubic lattice, which has six virtual indices. Graphically, we represent such a $T$ tensor as: \begin{equation} \begin{gathered} \includegraphics[width=2cm]{figures/TNS_T.pdf} \end{gathered}. \end{equation} Next we need to fix the elements of the $T$ tensor, up to the TNS gauge freedom.
The method to fix the $T$ tensor is to make it invariant under the actions of the $A_{v}$ and $B_{p}$ operators, in order to implement the local conditions for the ground states in Eq.~\eqref{eq.ToricCodeGS}. The actions of the $A_v$ and $B_p$ operators on the local tensors are: \begin{equation}\label{eq.3DToricCode_DericeT} \begin{gathered} \includegraphics[width=0.8\columnwidth]{figures/ToricCode_T_condition_derivation.pdf} \end{gathered} \end{equation} where we have used Eq.~\eqref{eq.projectorcondition ToricCode} to transfer the physical operators to the virtual ones. We require a strong version of the solution to the above equations: we want the tensors in the dashed red rectangles to be invariant under the action of any of the $A_v$ and $B_p$ (this is a sufficient constraint which guarantees that the tensors form the ground state), which leads to the following equations: \begin{equation}\label{eq.ToricCodeTcondition} \begin{gathered} \includegraphics[width=0.9\columnwidth]{figures/ToricCode_T_condition.pdf} \end{gathered} \end{equation} In the second set of equations, the first 12 equalities are obvious from the red dashed squares, and the last 3 equalities can be derived from the first 12. Expanding the first set of conditions by using $Z_{ij}=\delta_{ij} (-1)^{i}$, we have: \begin{equation}\label{eq.3Dtoriccode.symmetric} \begin{split} &T_{x\bar{x},y\bar{y},z\bar{z}} = (-1)^{x+\bar{x}+y+\bar{y}+z+\bar{z}} T_{x\bar{x},y\bar{y},z\bar{z}} \\ &\Leftrightarrow \\ &T_{x\bar{x},y\bar{y},z\bar{z}} \begin{cases} =0, &\;\text{if}\;\; x+\bar{x}+y+\bar{y}+z+\bar{z} = 1 \mod{2} \\ \neq 0, &\;\text{if}\;\; x+\bar{x}+y+\bar{y}+z+\bar{z} = 0 \mod{2}, \\ \end{cases} \end{split} \end{equation} where $x,\bar{x},y,\bar{y},z,\bar{z}$ are the six indices of $T$ in the three directions, respectively. We emphasize for clarity that $\bar{x}$ is not $-x$; these are notations for different indices.
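The parity constraint of Eq.~\eqref{eq.3Dtoriccode.symmetric} can be checked numerically. The following sketch (using NumPy, with the nonzero elements provisionally set to $1$) verifies that a tensor supported on the even-parity sector is invariant under the simultaneous virtual $Z$ action on all six legs:

```python
import numpy as np

# Candidate T tensor: 1 on the even-parity sector of the six virtual indices, 0 otherwise
T = np.zeros((2,) * 6)
for idx in np.ndindex(*([2] * 6)):
    if sum(idx) % 2 == 0:
        T[idx] = 1.0

# Virtual Z action on one leg: multiply the index value by (-1)^index
Zv = np.diag([1.0, -1.0])

# Apply Z to all six virtual legs
TZ = T.copy()
for leg in range(6):
    TZ = np.moveaxis(np.tensordot(Zv, TZ, axes=([1], [leg])), 0, leg)

# Invariance encodes the A_v (vertex) condition on the virtual level
assert np.array_equal(TZ, T)
# Odd-parity elements vanish
assert T[1, 0, 0, 0, 0, 0] == 0.0
print("T is supported on the even-parity sector only")
```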
The second set of conditions in Eq.~\eqref{eq.ToricCodeTcondition} further enforces that flipping an even number of the virtual indices of a tensor does not change the value of its elements. For instance, in terms of components, we have: \begin{equation} \begin{split} T_{x\bar{x},y\bar{y},z\bar{z}} =& T_{(1-x)(1-\bar{x}),y\bar{y},z\bar{z}} \\ =& T_{(1-x)\bar{x},(1-y)\bar{y},z\bar{z}} \\ =& T_{x\bar{x},y\bar{y},(1-z)(1-\bar{z})} \\ =& \ldots \end{split}. \end{equation} Hence, the nonzero elements of the $T$ tensor are all equal. Up to an overall normalization, we have the unique solution: \begin{equation}\label{eq.ToricCodeTtensor} T_{x\bar{x},y\bar{y},z\bar{z}} = \begin{cases} 0, &\;\text{if}\; x+\bar{x}+y+\bar{y}+z+\bar{z} = 1 \mod{2} \\ 1, &\;\text{if}\; x+\bar{x}+y+\bar{y}+z+\bar{z} = 0 \mod{2}. \\ \end{cases} \end{equation} The TNS is then Eq.~\eqref{eq.TNS} with the local $T$ being Eq.~\eqref{eq.ToricCodeTtensor}. The local $T$ tensors are the same on other spatial manifolds, such as $\mathcal{T}^3$. Conditions similar to the first equality in Eq.~\eqref{eq.ToricCodeTcondition} have been introduced under several other names in the tensor network literature: $\mathbb{Z}_2$-injectivity\cite{schuch2010peps}, MPO-injectivity\cite{csahinouglu2014characterizing}, $\mathbb{Z}_2$ gauge symmetry\cite{he2014modular}, etc. The previous studies were in 2D, and our condition is the 3D generalization. Notice that the first equation in Eq.~\eqref{eq.ToricCodeTcondition} alone will not necessarily lead to topological order. It only implies that the ground state is $\mathbb{Z}_2$ symmetric. A state which only satisfies the first condition in Eq.~\eqref{eq.ToricCodeTcondition} could also be a topologically trivial state, obtained by tuning the relative strength of the nonzero elements of the $T$ tensor. This can be interpreted as a condensation transition from topological phases to trivial phases.
See Refs.~\onlinecite{he2014modular,shukla2016boson,marien2016condensation,duivenvoorden2017entanglement,garre2017symmetry} for explanations and examples in the case of 2D TNS. \subsection{Concatenation Lemma}\label{subsec.ConcatenationToricCode} In this section, we consider the contraction of a network of local $T$ tensors with open virtual indices. One example of such a contraction is the tensor network norm Eq.~\eqref{eq.TNSnormContractT} or the transfer matrix Eq.~\eqref{eq.TransferMatrixContractT}. Since the elements of a local $T$ tensor are 0 for the odd sector and 1 for the even sector (see Eq.~\eqref{eq.ToricCodeTtensor}), we will show that, in general, a network of contracted $T$ tensors obeys a similar rule: some elements are zeros while the others are nonzero and identical. A \textbf{Concatenation lemma} is proposed to derive the rule for the contraction of several tensors in general and will be frequently used in the following discussions. For example, we will use this lemma to show in Sec.~\ref{subsec.ToricCode_TransferMatrix} that the transfer matrix $\mathrm{TM}_{xy}$ for the 3D toric code model is a projector of rank 2. \begin{figure}[t] \centering \includegraphics[width=0.4\columnwidth]{figures/Contracted2T.pdf} \caption{Contraction of two local $T$ tensors in the $z$-direction. We emphasize that there is no projector $g$ tensor in this figure.} \label{fig.Contract2T} \end{figure} \begin{framed} \textbf{Concatenation Lemma:} For a network of contracted $T$ tensors Eq.~\eqref{eq.ToricCodeTtensor} with open indices, the open indices need to sum to $0 \mod{2}$, otherwise the element of the network tensor is zero. Moreover, if nonzero, the elements of the network tensor are constants, independent of open indices. \end{framed} This lemma can be easily proved by using $\mathbb{Z}_2$ symmetry Eq.~\eqref{eq.ToricCodeTtensor} and induction. The proof is in App.~\ref{app.ToricCode_Concatenation}. We explain this lemma by a simple example. 
Suppose we have two $T$ tensors contracted over a pair of indices: \begin{equation} \begin{split} &\mathbf{T}_{x_1,\bar{x}_1,y_1,\bar{y}_1,z_1,x_2,\bar{x}_2,y_2,\bar{y}_2,\bar{z}_2} \\ =&\sum_{\bar{z}_1,z_2} T_{x_1\bar{x}_1,y_1\bar{y}_1,z_1\bar{z}_1} T_{x_2\bar{x}_2,y_2\bar{y}_2,z_2\bar{z}_2} \delta_{\bar{z}_1z_2}. \\ \end{split} \end{equation} Graphically, the tensor $\mathbf{T}$ is represented by Fig.~\ref{fig.Contract2T}. The open indices of the tensor $\mathbf{T}$ need to sum to an even number in order for the elements of the $\mathbf{T}$ tensor to be nonzero. This comes out of writing the constraints of each of the $T$ tensors: \begin{equation} \begin{split} &\begin{cases} &x_1+\bar{x}_1+y_1+\bar{y}_1+z_1+\bar{z}_1 = 0, \mod{2} \\ &x_2+\bar{x}_2+y_2+\bar{y}_2+z_2+\bar{z}_2 = 0, \mod{2} \\ &\bar{z}_1=z_2 \end{cases} \\ \Rightarrow\; &x_1+\bar{x}_1+y_1+\bar{y}_1+z_1+ x_2+\bar{x}_2+y_2+\bar{y}_2+\bar{z}_2 \\ =& 0, \mod{2}. \end{split} \end{equation} Otherwise, the tensor element of $\mathbf{T}$ is zero. Moreover, the elements of the contracted tensor are 1, if nonzero: \begin{widetext} \begin{equation} \mathbf{T}_{x_1,\bar{x}_1,y_1,\bar{y}_1,z_1,x_2,\bar{x}_2,y_2,\bar{y}_2,\bar{z}_2} = \begin{cases} 0 & \text{if}\;\; x_1+\bar{x}_1+y_1+\bar{y}_1+z_1+ x_2+\bar{x}_2+y_2+\bar{y}_2+\bar{z}_2 = 1, \mod{2} \\ 1 & \text{if}\;\; x_1+\bar{x}_1+y_1+\bar{y}_1+z_1+ x_2+\bar{x}_2+y_2+\bar{y}_2+\bar{z}_2 = 0, \mod{2}. \end{cases} \end{equation} \end{widetext} For a more complicated contraction of $T$ tensors, we have: \begin{equation} \mathbf{T}_{\{t\}} = \begin{cases} 0 & \text{if}\;\; \sum_i t_i = 1, \mod{2} \\ \mathrm{Const} & \text{if}\;\; \sum_i t_i = 0, \mod{2} \\ \end{cases} \end{equation} where $\{t\}$ denotes all the indices of the tensor $\mathbf{T}$. We emphasize that the nonzero constant does not depend on $\{t\}$ . 
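The parity rule in this example, and the statement of the \textbf{Concatenation lemma} for this network, can be checked numerically. The following minimal sketch (the array layout and the index ordering $(x,\bar{x},y,\bar{y},z,\bar{z})$ are our own conventions, not part of the construction above) builds the local $T$ tensor of the 3D toric code and contracts two copies along the $z$ direction:

```python
import itertools

import numpy as np

# Local T tensor of the 3D toric code: indices (x, xbar, y, ybar, z, zbar),
# element 1 iff the index sum is even, 0 otherwise.
T = np.zeros((2,) * 6)
for idx in itertools.product((0, 1), repeat=6):
    if sum(idx) % 2 == 0:
        T[idx] = 1.0

# Contract two T tensors over one pair of z-indices (zbar_1 summed against
# z_2); the result has 10 open indices.
TT = np.tensordot(T, T, axes=([5], [4]))

# Concatenation lemma on this example: elements are a constant (here 1)
# when the open indices sum to 0 mod 2, and 0 otherwise.
for idx in itertools.product((0, 1), repeat=10):
    expected = 1.0 if sum(idx) % 2 == 0 else 0.0
    assert TT[idx] == expected
print("two-tensor concatenation lemma verified")
```

The same check can be repeated for larger networks of contracted $T$ tensors; only the value of the nonzero constant may change.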
\subsection{Entanglement}\label{subsec.ToricCode_Entanglement} We now show that Eq.~\eqref{eq.TNS} is exactly an SVD for the wave function with respect to the entanglement cut illustrated in Fig.~\ref{fig.cut}. For simplicity, suppose that the TNS is defined on infinite $\mathcal{R}^3$. As we have emphasized at the end of Sec.~\ref{subsec.TNSingeneral}, we do not specify the boundary conditions of the TNS, since we are only concerned with the bulk wave functions whose reduced density matrices are assumed not to be influenced by the boundary conditions. If we put the wave function on a large but finite $\mathcal{R}^3$, we have to specify the boundary conditions of the TNS by fixing the indices on the boundary. Suppose the open indices on the boundary are denoted as $\{t^b\}$. The norm of the TNS on open $\mathcal{R}^3$, which can be expressed as a network of contracted $T$ tensors with open virtual indices $\{t^b\}$, is zero when $\sum_i t^b_i=1 \mod{2}$ and nonzero when $\sum_i t^b_i=0 \mod{2}$, according to the \textbf{Concatenation lemma} of the 3D toric code model. Hence, we can only fix the boundary indices $\{t^b\}$ to be $\sum_{i} t^b_i = 0 \mod{2}$. Calculating the entanglement on a nontrivial manifold is ambiguous since multiple degenerate ground states, which cannot be distinguished locally, appear. Their superpositions have different entanglement entropies. We rewrite Eq.~\eqref{eq.TNS} by separating the tensor contractions to a spatial region $A$ and its complement region $\bar{A}$. 
Region $A$ contains the $g$ tensors near the entanglement cut as illustrated in Fig.~\ref{fig.cut}: \begin{equation}\label{eq.ToricCodeSVD} \begin{split} \ket{\mathrm{TNS}}_{\mathcal{R}^3} = \sum_{\{t\}} \ket{\{t\}}_{A} \otimes \ket{\{t\}}_{\bar{A}} \end{split} \end{equation} where \begin{widetext} \begin{equation}\label{eq.SVDbasisA} \begin{split} |\{t\}\rangle_A =\sum_{\{s\}\in A}\sum_{\{i\} \in A} \mathcal{C}^{A} (g^{s_1}_{t_1 i_1} g^{s_2}_{t_2 i_2}\ldots g^{s_3}_{i_3i_4}g^{s_4}_{i_5i_6} T_{i_7\ldots}T_{i_8\ldots }\ldots )|\{s\}\rangle. \end{split} \end{equation} \end{widetext} Indices denoted by $s$ are the physical indices; indices denoted by $t$ are the open virtual indices straddling the entanglement cut from the region $A$; indices denoted by $i$ are the contracted virtual indices inside the region $A$. The tensors $g^{s_1}_{t_1i_1}$, $g^{s_2}_{t_2i_2}$, etc.\ are the projector $g$ tensors near the entanglement cut on the region $A$ side, as illustrated in Fig.~\ref{fig.cut}; $g^{s_3}_{i_3i_4}$ and $g^{s_4}_{i_5i_6}$ are the projector $g$ tensors inside the region $A$; for this cut, all the $T$ tensors are inside the region $A$. The summation is over all physical indices $\{s\}$ inside the region $A$. Thus, $\ket{\{t\}}_A$ is the TNS for region $A$ with open virtual indices $\{t\}$. We choose a convention of splitting tensors whereby the $g$ tensors near the entanglement cut belong to the region $A$, as illustrated in Fig.~\ref{fig.cut}. For instance, when the region $A$ is a cube, we can graphically denote the basis $\ket{\{t\}}$ as Eq.~\eqref{eq.TNSbasis}, where the bulk of this cube is a TNS, and the red lines are the outgoing virtual indices $\{t\}$. The $g$ tensors connecting with these red lines are inside the cube.
Similarly for the region $\bar{A}$: \begin{equation} \begin{split} |\{t\}\rangle_{\bar{A}}=\sum_{\{s\}\in \bar{A}}\sum_{\{i\} \in \bar{A}}\mathcal{C}^{\bar{A}}( g^{s_1}_{i_1i_2}g^{s_2}_{i_3i_4} T_{t_1i_5\ldots}T_{t_2i_6\ldots }\ldots )|\{s\}\rangle. \end{split} \end{equation} Since the TNSs for the regions $A$ and $\bar{A}$ share the same boundary virtual indices $\{t\}$, the two bases for the regions $A$ and $\bar{A}$ in Eq.~\eqref{eq.ToricCodeSVD} carry the same label $\{t\}$. For the TNS of Eq.~\eqref{eq.TNS}, the boundary virtual indices $\{t\}$ of the regions $A$ and $\bar{A}$ are contracted over, and thus in Eq.~\eqref{eq.ToricCodeSVD} $\{t\}$ are summed over. \begin{figure}[b] \centering \includegraphics[width=0.6\columnwidth]{figures/Split.pdf} \caption{The splitting of tensors near the entanglement cut.} \label{fig.cut} \end{figure} We now show that $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ form orthonormal bases (up to an overall constant) for the regions $A$ and $\bar{A}$ respectively. Therefore, Eq.~\eqref{eq.ToricCodeSVD} is exactly the SVD for the ground state wave function, i.e., \begin{equation}\label{eq.ToricCodeOrthoNormal} _A\overlap{\{t^\prime\}}{\{t\}}_A \propto \delta_{\{t^\prime\},\{t\}} \delta(\sum_i t_i =0 \mod{2}). \end{equation} \noindent \textbf{Proof:} Applying the \textit{SVD condition} to the toric code TNS, we can immediately conclude that the $\ket{\{t\}}_{A}$ span an orthogonal basis, and the TNS is exactly an SVD. However, the \textit{SVD condition} does not tell us whether the basis is orthonormal. In the following, we show that the $\ket{\{t\}}_{A}$ are not only orthogonal, but also orthonormal with a norm independent of $\{t\}$, which leads to the flat singular values.
Following the definition of our basis: \begin{widetext} \begin{equation} \begin{split} _A\overlap{\{t^\prime\}}{\{t\}}_{A}= & \left(\sum_{\{s'\}\in A}\sum_{\{j\} \in A}\mathcal{C}^{A}(g^{s'_1\star}_{t'_1 j_1} g^{s'_2\star}_{t'_2 j_2}\ldots g^{s'_3\star}_{j_3j_4}g^{s'_4\star}_{j_5j_6} T^{\star}_{j_7\ldots}T^{\star}_{j_8\ldots }\ldots )\langle \{s'\}| \right) \\ &\left( \sum_{\{s\}\in A}\sum_{\{i\} \in A}\mathcal{C}^{A}(g^{s_1}_{t_1 i_1} g^{s_2}_{t_2 i_2}\ldots g^{s_3}_{i_3i_4}g^{s_4}_{i_5i_6} T_{i_7\ldots}T_{i_8\ldots }\ldots )|\{s\}\rangle \right). \end{split} \end{equation} \end{widetext} When the open virtual indices $\{t^\prime\} \neq \{t\}$, the overlap is clearly zero, as the spin configurations on the boundary are different due to the projector $g$ tensors. Hence, the basis vectors $\ket{\{t\}}_{A}$ are orthogonal. Next we show that $_A\overlap{\{t\}}{\{t\}}_{A}$ is zero when $\left( \sum_{t_i \in \{t\}} t_i \right)$ is odd. Following the same derivation as in Sec.~\ref{subsec.TNSnorm}, we have: \begin{equation} _A\overlap{\{t\}}{\{t\}}_{A} = \mathcal{C}^{A} \left( \ldots TTT \ldots \right) \end{equation} with the open virtual indices $\{t\}$. The contraction $\mathcal{C}^{A}$ is over the $T$ tensors in the region $A$. Applying the \textbf{Concatenation lemma}, $_A\overlap{\{t\}}{\{t\}}_{A}$ is zero if the open indices $\{t\}$ sum to $1\mod{2}$: \begin{equation} \sum_i t_i = 1 \mod{2} \;\Rightarrow\; _A\overlap{\{t\}}{\{t\}}_{A}=0. \end{equation} Moreover, \begin{equation} _A\overlap{\{t\}}{\{t\}}_{A}=\mathrm{Const}, \quad\text{when}\; \sum_i t_i = 0 \mod{2}. \end{equation} Hence the $\ket{\{t\}}_A$ form an orthonormal basis up to an overall normalization factor that can be obtained from the normalization of $\ket{\mathrm{TNS}}$. \hfill$\Box$\\ The same proof works for the region $\bar{A}$ and $\ket{\{t\}}_{\bar{A}}$. Therefore, we can conclude that Eq.~\eqref{eq.ToricCodeSVD} is indeed an SVD, and the singular values are all identical.
Hence, for an entanglement cut, we only need to count the number of singular vectors in Eq.~\eqref{eq.ToricCodeSVD}. For a connected entanglement surface with $N$ open virtual indices, the number of singular vectors in Eq.~\eqref{eq.ToricCodeSVD} is $2^{N-1}$, because the open virtual indices need to sum to $0\mod{2}$. Hence, the entanglement entropy for a region whose entanglement surface is singly connected is: \begin{equation} S = N\log(2)-\log(2). \end{equation} If the entanglement surface still has $N$ open virtual indices but is separated into $n$ disconnected surfaces, then the entanglement entropy is: \begin{equation} \begin{split} S &= N\log(2) - n\log(2) \\ &= \mathrm{Area}\times \log(2) - n\log(2). \end{split} \end{equation} This is because the condition that the open indices sum to an even number holds for each component of the entanglement cut separately. Furthermore, if we place our TNS ground state on a 3D cylinder $\mathcal{T}^2_{xy} \times \mathcal{R}_z$, and the entanglement cut splits the cylinder into two halves $z>0$ and $z<0$, then the entanglement entropy of either side is also $S = \text{Area}\times \log(2) - \log(2)$. The results can be easily generalized to $\mathbb{Z}_K$ lattice gauge models on $\mathcal{R}^3$: \begin{equation} S = \mathrm{Area} \times \log(K) - n\log(K) \end{equation} with the same equation holding on a cylinder $\mathcal{T}^2_{xy} \times \mathcal{R}_z$. The entanglement spectrum is also flat. The area is measured by the number of open virtual indices straddling the entanglement cut. Following the same logic, for the toric code in $(d+1)$ dimensions, all the open virtual indices of region $A$, $\{t_i\}$, have to satisfy a single constraint $\sum_i t_i=0\mod 2$, because they have to obey the \textbf{Concatenation lemma}. If there are $N$ open virtual indices on the surface of region $A$, there are $N-1$ independent open virtual indices.
Hence the rank of the reduced density matrix is still $2^{N-1}$, because each independent open index can take 2 values. The entanglement entropy is \begin{equation} S=N\log(2)-\log(2). \end{equation} The topological entanglement entropy $S_{\mathrm{topo}}[\mathcal{T}^{d-1}]$ is independent of the dimensionality, and it obeys the conjecture presented in Ref.~\onlinecite{zheng2017structure}: \begin{equation} \exp(-d S_{\mathrm{topo}}[\mathcal{T}^{d-1}])=\mathrm{GSD}[\mathcal{T}^d] \end{equation} where $\mathrm{GSD}[\mathcal{T}^d]=2^{d}$. \subsection{Transfer Matrix as a Projector}\label{subsec.ToricCode_TransferMatrix} The $z$-direction transfer matrix $\mathrm{TM}_{xy}$ in 3D is defined as a tensor network overlap in the $xy$-plane, with periodic boundary conditions. The indices in the $z$ direction are open and not contracted over (see Eq.~\eqref{eq.TransferMatrix} to Eq.~\eqref{eq.TransferMatrixGraph}). In this section, we will show that $\mathrm{TM}_{xy}$ for the 3D toric code model is a projector of rank 2. Let us denote the indices of the transfer matrix as \begin{equation}\label{eq.TransferMatrixIndex} \begin{split} &\left(\mathrm{TM}_{xy}\right)_{\{z\},\{\bar{z}\}}= \begin{gathered} \includegraphics[width=0.5\columnwidth]{figures/TNS_TM_index.pdf} \end{gathered}, \end{split} \end{equation} where $z_{i,j}$ and $\bar{z}_{i,j}$ are the indices at the position $(i,j)$ on the $xy$-plane. The vector space of this transfer matrix is of dimension $2^{L_xL_y}$. Suppose the vector space is spanned by the basis $e_{\{z\}}$, where \begin{equation}\label{eq.TMbasis} e_{\{z\}} = \bigotimes_{i=1}^{L_x}\bigotimes_{j=1}^{L_y} e_{z_{i,j}} = e_{z_{1,1}} \otimes e_{z_{1,2}} \otimes \ldots e_{z_{L_x,L_y}}. \end{equation} $e_{z_{i,j}}$ is the local ``virtual bond Hilbert space" spin $\ket{0}=\ket{\mathord{\uparrow}}$, $\ket{1}=\ket{\mathord{\downarrow}}$ basis for the index $z_{i,j}$, where $i$ and $j$ are the coordinates of $z_{i,j}$ in the $x$- and $y$-directions respectively. 
We can consider the matrix multiplication of the transfer matrix $\mathrm{TM}_{xy}$ with an element of the basis $e_{\{\bar{z}\}}$: \begin{equation}\label{eq.TMactBasis} \mathrm{TM}_{xy} \cdot e_{\{\bar{z}\}} = \sum_{\{z\}} \left( \mathrm{TM}_{xy} \right)_{\{z\},\{\bar{z}\}} e_{\{z\}} \end{equation} where $\{\bar{z}\}$ is fixed for both the LHS and the RHS. Applying the \textbf{Concatenation lemma} to $\textstyle \left( \mathrm{TM}_{xy} \right)_{\{z\},\{\bar{z}\}}$, we conclude that the terms satisfying \begin{equation} \sum_{i=1}^{L_x}\sum_{j=1}^{L_y} z_{i,j} + \bar{z}_{i,j} = 0 \mod{2} \end{equation} contribute equally to the RHS of Eq.~\eqref{eq.TMactBasis}, while the terms satisfying \begin{equation} \sum_{i=1}^{L_x}\sum_{j=1}^{L_y} z_{i,j} + \bar{z}_{i,j} = 1 \mod{2} \end{equation} do not contribute. Therefore, we can rewrite the summation more precisely: \begin{equation} \begin{split} \mathrm{TM}_{xy} \cdot e_{\{\bar{z}\}} &= \sum_{\{z\} \text{ with } \sum_i z_i+\bar{z}_i \text{ even } } \left( \mathrm{TM}_{xy} \right)_{\{z\},\{\bar{z}\}} e_{\{z\}} \\ &\propto \sum_{\{z\} \text{ with } \sum_i z_i+\bar{z}_i \text{ even } } e_{\{z\}}. \end{split} \end{equation} When the $\{\bar{z}\}$ satisfies $\sum_{i,j} \bar{z}_{i,j}=0 \mod{2}$, we have \begin{equation}\label{eq.ToricCode_projector_even} \begin{split} &\mathrm{TM}_{xy} \cdot e_{\{\bar{z}\}} \propto \sum_{\{z\} \text{ with } \sum_i z_i \text{ even } } e_{\{z\}}, \\ \end{split} \end{equation} while when the $\{\bar{z}\}$ satisfies $\sum_{i,j} \bar{z}_{i,j}=1 \mod{2}$, we have \begin{equation}\label{eq.ToricCode_projector_odd} \begin{split} &\mathrm{TM}_{xy} \cdot e_{\{\bar{z}\}} \propto \sum_{\{z\} \text{ with } \sum_i z_i \text{ odd } } e_{\{z\}}. \\ \end{split} \end{equation} Therefore, $\mathrm{TM}_{xy}$ is a projector of rank 2.
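This rank-2 projector structure can be verified directly on a small plane. Below is a minimal numpy sketch; the lattice size, the normalization, and the variable names are our choices, since the \textbf{Concatenation lemma} fixes the matrix elements only up to an overall constant:

```python
import numpy as np

Lx, Ly = 2, 2                      # small xy-plane; each site carries one z-index
n = Lx * Ly
dim = 2 ** n

def parity(a):
    """Parity of the bit string labeling a basis state {z}."""
    return bin(a).count("1") % 2

# By the Concatenation lemma, the transfer matrix element is a constant when
# sum(z) + sum(zbar) is even, and zero otherwise (constant set to 1 here).
TM = np.array([[1.0 if (parity(a) + parity(b)) % 2 == 0 else 0.0
                for b in range(dim)] for a in range(dim)])

assert np.linalg.matrix_rank(TM) == 2
# TM is proportional to a rank-2 projector: TM^2 = (dim/2) * TM.
assert np.allclose(TM @ TM, (dim / 2) * TM)

# The two eigenvectors with nonzero eigenvalue are the uniform sums over the
# even-parity and odd-parity basis states.
v_even = np.array([1.0 if parity(a) == 0 else 0.0 for a in range(dim)])
v_odd  = np.array([1.0 if parity(a) == 1 else 0.0 for a in range(dim)])
assert np.allclose(TM @ v_even, (dim / 2) * v_even)
assert np.allclose(TM @ v_odd,  (dim / 2) * v_odd)
```

With the overall constant $\mathrm{dim}/2$ divided out, the matrix is idempotent, i.e., a genuine projector of rank 2.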
Hence, it has only two nonzero eigenvalues, both equal to $1$, and the corresponding unnormalized eigenvectors are: \begin{equation}\label{eq.ToricCodeTMeigenstate} \begin{split} \sum_{\{z\} \text{ with } \sum_i z_i \text{ even } } e_{\{z\}} \\ \sum_{\{z\} \text{ with } \sum_i z_i \text{ odd } } e_{\{z\}}. \end{split} \end{equation} \subsection{GSD and Transfer Matrix}\label{subsec.ToricCode_GSD} We know that the 3D toric code model has three pairs of nonlocal operators along the cycles of the 3D torus: \begin{equation}\label{eq.ToricCodeNonLocal} \begin{split} &W_X[C_x]=\prod_{i\in C_x} X_i, \quad W_Z[\tilde{C}_{yz}]=\prod_{i\in \tilde{C}_{yz}} Z_i; \\ &W_X[C_y]=\prod_{i\in C_y} X_i, \quad W_Z[\tilde{C}_{xz}]=\prod_{i\in \tilde{C}_{xz}} Z_i; \\ &W_X[C_z]=\prod_{i\in C_z} X_i, \quad W_Z[\tilde{C}_{xy}]=\prod_{i\in \tilde{C}_{xy}} Z_i. \\ \end{split} \end{equation} where $C_x$ is a loop along the cycle of the $x$ direction on the lattice, $\tilde{C}_{yz}$ is a closed surface along the cycles of the $yz$ directions on the dual lattice, and similarly for the other directions. The figures for these operators are: \begin{equation}\label{eq.figureoperatorsToricCode} \begin{gathered} \includegraphics[width=0.7\columnwidth]{figures/ToricCode_Operators.pdf} \end{gathered} \end{equation} The commutation relations include: \begin{equation} \begin{split} W_X[C_x] W_Z[\tilde{C}_{yz}] &= -W_Z[\tilde{C}_{yz}] W_X[C_x], \\ W_X[C_y] W_Z[\tilde{C}_{xz}] &= -W_Z[\tilde{C}_{xz}] W_X[C_y], \\ W_X[C_z] W_Z[\tilde{C}_{xy}] &= -W_Z[\tilde{C}_{xy}] W_X[C_z]. \end{split} \end{equation} All other combinations of operators commute. Hence, there are 8 degenerate ground states in total on the torus, assuming that Eq.~\eqref{eq.ToricCodeNonLocal} has exhausted all nonlocal operators. We can also put our TNS on the 3-torus, i.e., $\ket{\mathrm{TNS}}_{\mathcal{T}^3}$.
It is not hard to verify using Eq.~\eqref{eq.projectorcondition ToricCode} and \eqref{eq.ToricCodeTcondition} that: \begin{equation} \begin{split} W_X[C_x] \ket{\mathrm{TNS}}_{\mathcal{T}^3} &= \ket{\mathrm{TNS}}_{\mathcal{T}^3}, \\ W_X[C_y] \ket{\mathrm{TNS}}_{\mathcal{T}^3} &= \ket{\mathrm{TNS}}_{\mathcal{T}^3}, \\ W_X[C_z] \ket{\mathrm{TNS}}_{\mathcal{T}^3} &= \ket{\mathrm{TNS}}_{\mathcal{T}^3}. \\ \end{split} \end{equation} As already mentioned in Sec.~\ref{subsec.ToricCode_TNS}, $|\mathrm{TNS}\rangle_{\mathcal{T}^3}=|\psi\rangle$, where $|\psi\rangle$ is defined in Eq.~\eqref{GSoftoriccode} (see App.~\ref{app.TNS_projected} for a proof). Both $|\mathrm{TNS}\rangle_{\mathcal{T}^3}$ and $|\psi\rangle$ are $+1$ eigenstates of the $W_X$ operators. However, the transfer matrix defined by $\ket{\mathrm{TNS}}_{\mathcal{T}^3}$ does not exhibit an 8-fold eigenvalue degeneracy, but only a 2-fold one, as shown in Sec.~\ref{subsec.ToricCode_TransferMatrix}. We can act with the $W_Z[\tilde{C}_{yz}]$ and $W_Z[\tilde{C}_{xz}]$ on the TNS by using Eq.~\eqref{eq.projectorcondition ToricCode} and \eqref{eq.ToricCodeTcondition} to generate all the ground states. The TNSs obtained by this action, in terms of an $xy$-plane of tensors, are depicted below: \begin{equation}\label{eq.figuresToricCodeTransferMatrix} \begin{gathered} \includegraphics[width=0.6\columnwidth]{figures/ToricCode_TransferMatrix.pdf} \end{gathered} \end{equation} The intersections of $W_Z[\tilde{C}_{yz}]$ and $W_Z[\tilde{C}_{xz}]$ with the $xy$-plane are lines of $Z$ operators, illustrated by the blue circled $Z$ in Eq.~\eqref{eq.figuresToricCodeTransferMatrix}. $W_Z[\tilde{C}_{xy}]$ acts on the $xy$-plane on the dual lattice, and thus does not change the transfer matrix at all.
We denote these $xy$-planes of TNSs in Eq.~\eqref{eq.figuresToricCodeTransferMatrix} as $\mathbf{T}_{xy}^{\alpha,\beta}$ (with open indices along the $z$ direction), where $\alpha,~\beta\in \{0,1\}$ label whether we have inserted the $Z$ operators in the $x$ and $y$ direction respectively. The subscript $xy$ in $\mathbf{T}_{xy}^{\alpha,\beta}$ means that the TNS is on an $xy$-plane. Clearly, the $\mathbf{T}_{xy}^{\alpha,\beta}$ are different since they support different holonomies of the $W_X[C_x]$ and $W_X[C_y]$ operators. After obtaining $\mathbf{T}_{xy}^{\alpha,\beta}$, we can correspondingly define four twisted transfer matrices by contracting the physical indices between the bra $\mathbf{T}_{xy}^{\alpha,\beta}$ and the ket $\mathbf{T}_{xy}^{\alpha,\beta}$. The twisted transfer matrices are denoted as $\mathrm{TM}_{xy}^{\alpha,\beta}$. For instance, $\mathrm{TM}_{xy}^{0,0}$ is the untwisted transfer matrix in Eq.~\eqref{eq.TransferMatrixIndex}. Each of these transfer matrices $\mathrm{TM}_{xy}^{\alpha,\beta}$ is also a projector of rank 2. The reasons are: (1) the contraction of the projector $g$ tensors between the bra $\mathbf{T}_{xy}^{\alpha,\beta}$ and the ket $\mathbf{T}_{xy}^{\alpha,\beta}$ makes the indices in the bra layer and the ket layer identical; (2) the $Z$ operators in the bra and ket layers cancel each other and produce an identity operator. Hence, the transfer matrices built from the twisted TNSs, $\mathrm{TM}_{xy}^{1,0}$, $\mathrm{TM}_{xy}^{0,1}$ and $\mathrm{TM}_{xy}^{1,1}$, are equal to that built from the untwisted TNS, $\mathrm{TM}_{xy}^{0,0}$: \begin{equation}\label{eq.ToricCode_TwistedTM} \mathrm{TM}_{xy}^{\alpha,\beta} = \mathrm{TM}_{xy}^{0,0}, \quad\forall\; \alpha,\beta\in \{0,1\}. \end{equation} The transfer matrix has degeneracy 2, and it is the same for each of the 4 distinct TNSs. We have $4\times 2 =8$ degenerate eigenstates.
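The cancellation in point (2) can be illustrated schematically. In the toy sketch below, the dimensions and the random tensor $K$ are arbitrary, chosen only to mimic a plane of ket tensors contracted with its conjugate bra over the in-plane virtual indices $j$; inserting the same $\pm 1$ signs in both layers leaves the transfer matrix unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic plane of tensors: TM[a, b] = sum_j K[a, j] * conj(K[b, j]), where j
# labels the in-plane virtual configurations contracted between the layers.
K = rng.normal(size=(8, 16)) + 1j * rng.normal(size=(8, 16))
TM0 = K @ K.conj().T

# A line of Z operators multiplies the ket layer by a sign s_j = +-1 on each
# contracted configuration; the bra layer, being the complex conjugate of the
# same twisted tensors, carries the identical (real) sign.
s = rng.choice([-1.0, 1.0], size=16)
TM_twisted = (K * s) @ (K * s).conj().T   # s_j^2 = 1 under the contraction

assert np.allclose(TM_twisted, TM0)
```

The cancellation holds for any choice of signs, which is the algebraic content of Eq.~\eqref{eq.ToricCode_TwistedTM}.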
We have to emphasize that Eq.~\eqref{eq.ToricCode_TwistedTM} holds true only when there are no physical operators between the bra and ket layers of $\mathbf{T}_{xy}^{\alpha,\beta}$, and the physical indices of the projector $g$ tensors are directly contracted. If a physical operator is inserted, for instance $W_X[C_x]$, then the twisted transfer matrices in the presence of $W_X[C_x]$ are NOT the same as the untwisted one sandwiching the same $W_X[C_x]$. In constructing the TNS on the torus, we need to choose the same $\mathbf{T}_{xy}^{\alpha,\beta}$ for each $xy$-plane. If we use a different $\mathbf{T}_{xy}^{\alpha,\beta}$ in each $xy$-plane to construct a TNS on the torus, the corresponding wave function is no longer a ground state of the 3D toric code model. More specifically, in the 3D toric code model, if we contract $L_z-1$ $\mathbf{T}_{xy}^{0,0}$'s with one twisted $\mathbf{T}_{xy}^{\alpha,\beta}$ of Eq.~\eqref{eq.figuresToricCodeTransferMatrix}, then the corresponding wave function will support a pair of loop excitations, because the $B_p$ operators above and below the $Z$ operators are violated, and is no longer a ground state. Graphically, the contraction of these $L_z-1$ $\mathbf{T}_{xy}^{0,0}$'s and one $\mathbf{T}_{xy}^{1,0}$ is: \begin{equation} \begin{gathered} \includegraphics[width=0.4\columnwidth]{figures/ToricCode_WrongNorm.pdf} \end{gathered}. \end{equation} The reason for this energy cost is that there is a line of $Z$ operators. However, the nonlocal operators of the 3D toric code do not include a line of $Z$ operators, but only surface $Z$ operators; see Eq.~\eqref{eq.figureoperatorsToricCode}. On the other hand, the TNS made of all the same $\mathbf{T}_{xy}^{\alpha,\beta}$, for instance $\mathbf{T}_{xy}^{1,0}$: \begin{equation} \begin{gathered} \includegraphics[width=0.4\columnwidth]{figures/ToricCode_CorrectNorm.pdf} \end{gathered} \end{equation} is allowed, because it corresponds to a closed surface operator included in Eq.~\eqref{eq.figureoperatorsToricCode}.
We will come back to this point in Sec.~\ref{subsec.Xcube_GSD}, where this issue is subtle and makes a difference when we count the GSD from transfer matrices. \begin{figure*}[t] \centering \includegraphics[width=0.6\textwidth]{figures/XCubeModel.pdf} \caption{The Hamiltonian terms of the X-cube model: (a) $A_{v,yz}$, (b) $A_{v,xy}$, (c) $A_{v,xz}$ and (d) $B_{c}$. The circled $X$ and $Z$ represent the Pauli matrices acting on the physical spin-$1/2$'s.} \label{fig.XCubeHamiltonian} \end{figure*} \section{X-cube Model}\label{sec.Xcube} In this section, we construct the TNS for the X-cube model, one of the simplest fracton models. Using it, we then calculate the entanglement entropy and the GSD of this model. The results are generally different from those in conventional topological phases. The entanglement entropy has a linear correction to the area law, and the GSD grows exponentially with the size of the system. This section is organized as follows: In Sec.~\ref{subsec.Xcube}, we briefly review the X-cube model on a cubic lattice. In Sec.~\ref{subsec.Xcube_TNS}, we construct the TNS for the X-cube model. In Sec.~\ref{subsec.ConcatenationXcube}, we prove a \textbf{Concatenation lemma} for the X-cube TNS. In Sec.~\ref{subsec.Xcube_Entanglement}, we calculate the entanglement entropies on $\mathcal{R}^3$. In Sec.~\ref{subsec.Xcube_TransferMatrix}, we construct the transfer matrix and prove that it is a projector of rank $2^{L_x+L_y-1}$. In Sec.~\ref{subsec.Xcube_GSD}, we show how the transfer matrix gives rise to an extensive GSD on the torus. App.~\ref{app.transfermatrixnumerics} is also related to this section. It includes the numerical results for the transfer matrix eigenvalue degeneracies, which further confirm the analytical results in Sec.~\ref{subsec.Xcube_TransferMatrix}. \subsection{Hamiltonian of X-cube Model}\label{subsec.Xcube} The model is defined on a cubic lattice. The spin-$1/2$'s are associated with the bonds of the cubic lattice.
The Hamiltonian is: \begin{equation} H = -\sum_{v} \left( A_{v,xy}+A_{v,yz}+A_{v,xz} \right) - \sum_{c} B_c \end{equation} where the summation is taken over all vertices and cubes respectively. Each term is depicted in Fig.~\ref{fig.XCubeHamiltonian}. More specifically, $A_{v,xy}$ is the product of four $Z$ operators around the vertex $v$ in the $xy$ plane. Similarly for $A_{v,yz}$ and $A_{v,xz}$. $B_{c}$ flips the $12$ spins of a cube $c$. It is trivial to show that all the Hamiltonian terms commute: \begin{equation} \begin{split} \commute{A_{v,xy}}{A_{v^\prime,xy}}=& \commute{A_{v,yz}}{A_{v^\prime,yz}}= \commute{A_{v,xz}}{A_{v^\prime,xz}}=0, \\ \commute{A_{v,xy}}{A_{v^\prime,yz}}=& \commute{A_{v,xy}}{A_{v^\prime,xz}}= \commute{A_{v,yz}}{A_{v^\prime,xz}}=0, \\ \commute{B_{c}}{A_{v^\prime,xy}}=& \commute{B_{c}}{A_{v^\prime,yz}}= \commute{B_{c}}{A_{v^\prime,xz}}=0 \\ \commute{B_{c}}{B_{c^\prime}}=&0, \quad\forall\; v,v^\prime,c,c^\prime. \\ \end{split} \end{equation} Hence, this model can be exactly solved. The ground state $\ket{\mathrm{GS}}$ (on $\mathcal{R}^3$) needs to satisfy that: \begin{equation}\label{eq.GScondition} \begin{split} A_{v,xy}\ket{\mathrm{GS}}=&\ket{\mathrm{GS}}, \\ A_{v,yz}\ket{\mathrm{GS}}=&\ket{\mathrm{GS}}, \\ A_{v,xz}\ket{\mathrm{GS}}=&\ket{\mathrm{GS}}, \\ B_{c}\ket{\mathrm{GS}}=&\ket{\mathrm{GS}}, \quad \forall\; v,c. \end{split} \end{equation} This set of equations will be used to derive the local $T$ tensor for the X-cube model. 
The nonlocal operators of the X-cube model which are required to commute with all Hamiltonian terms on the torus include 9 loop operators\cite{vijay2016fracton}: \begin{equation}\label{eq.operatorsXcube} \begin{split} &W_X[C_x] = \prod_{i \in C_x} X_i, W_X[C_y] = \prod_{i \in C_y} X_i, W_X[C_z] = \prod_{i \in C_z} X_i, \\ &W_Z[\tilde{C}_{x,z}] = \prod_{i \in \tilde{C}_{x,z}} Z_i, W_Z[\tilde{C}_{y,z}] = \prod_{i \in \tilde{C}_{y,z}} Z_i, W_Z[\tilde{C}_{z,x}] = \prod_{i \in \tilde{C}_{z,x}} Z_i, \\ &W_Z[\tilde{C}_{x,y}] = \prod_{i \in \tilde{C}_{x,y}} Z_i, W_Z[\tilde{C}_{y,x}] = \prod_{i \in \tilde{C}_{y,x}} Z_i, W_Z[\tilde{C}_{z,y}] = \prod_{i \in \tilde{C}_{z,y}} Z_i, \end{split} \end{equation} where $C_x$ is a straight line along the cycle of the $x$ direction on the lattice, and $\tilde{C}_{x,z}$ is a line along the cycle of the $x$ direction on the dual lattice whose lattice bonds are in the $z$-direction, and similarly for the other directions. These operators are depicted as: \begin{equation}\label{eq.figureoperatorsXcube} \begin{gathered} \includegraphics[width=0.77\columnwidth]{figures/XCube_Operators.pdf} \end{gathered}. \end{equation} The algebras of these loop operators are grouped into three independent sets: \begin{enumerate} \item The operator (a) in Eq.~\eqref{eq.figureoperatorsXcube} anti-commutes with the operator (f) and the operator (h) when they overlap at one spin. Thus, $W_X[C_x]$ anti-commutes with $W_Z[\tilde{C}_{z,x}]$ and $W_Z[\tilde{C}_{y,x}]$ when they overlap at one spin. \item The operator (b) in Eq.~\eqref{eq.figureoperatorsXcube} anti-commutes with the operator (g) and the operator (i) when they overlap at one spin. Thus, $W_X[C_y]$ anti-commutes with $W_Z[\tilde{C}_{x,y}]$ and $W_Z[\tilde{C}_{z,y}]$ when they overlap at one spin. \item The operator (c) in Eq.~\eqref{eq.figureoperatorsXcube} anti-commutes with the operator (d) and the operator (e) when they overlap at one spin.
Thus, $W_X[C_z]$ anti-commutes with $W_Z[\tilde{C}_{x,z}]$ and $W_Z[\tilde{C}_{y,z}]$ when they overlap at one spin. \end{enumerate} All other combinations of operators commute, because they do not overlap at the same physical spin. Each of the three algebras gives a representation of dimension\cite{2017arXiv170804619S} \begin{equation} 2^{L_y+L_z-1}, \quad 2^{L_z+L_x-1}, \quad 2^{L_x+L_y-1}. \quad \end{equation} respectively. Hence the total dimension of the ground state space is $2^{2L_x+2L_y+2L_z-3}$. The derivations of the GSD in terms of these operator algebras are explained in App.~\ref{app.XcubeGSD}. \subsection{TNS for X-cube Model}\label{subsec.Xcube_TNS} Following the same prescription in Sec.~\ref{sec.toriccode}, we can write down the TNS for the X-cube model. First, we introduce a projector $g$ tensor on the bonds of the lattice (see Eq.~\eqref{eq.projector2}). The virtual index is either $0$ or $1$. The $g$ tensor is a projector which projects the physical spin to the virtual indices. The tensor $g$ satisfies Eq.~\eqref{eq.projectorcondition ToricCode}. The TNS is not only composed of the $g$ tensor on the bonds of the lattice, but also the $T$ tensors on the vertices. The $T$ tensor has six virtual indices and no physical index. Unlike the $g$ tensor, the $T$ tensor will be specified by the Hamiltonian terms. We now implement Eq.~\eqref{eq.GScondition} on the TNS. Using the condition Eq.~\eqref{eq.projectorcondition ToricCode}, we can transfer the operators in Hamiltonian terms from physical indices to virtual indices: \begin{equation} \begin{gathered} \includegraphics[width=0.8\columnwidth]{figures/XCube_T_condition_derivation.pdf} \end{gathered}. \end{equation} Requiring that the tensors in the dashed red rectangles are invariant will lead to the following (strong) conditions on the $T$ tensor: \begin{equation}\label{eq.XcubeTtensor} \begin{gathered} \includegraphics[width=0.6\columnwidth]{figures/XCube_T_condition.pdf} \end{gathered}. 
\end{equation} The first set of conditions is required by the operators $A_{v,xy}$, $A_{v,yz}$ and $A_{v,xz}$ around the vertex $v$. The second set of equations is required by the operators $B_{c}$ around the cube $c$. The $X$ operators acting on the 12 spins of the cube $c$ are transferred to the virtual spins of the eight $T$ tensors around the cube $c$, using Eq.~\eqref{eq.projectorcondition ToricCode}. The $X$ operators act on the eight quadrants of a $T$ tensor. Clearly, these two sets of conditions are not independent. The solution to these conditions is: \begin{equation}\label{eq.localT} \begin{split} &T_{x\bar{x},y\bar{y},z\bar{z}} =\begin{cases} 1 & \text{if } \begin{cases} x+\bar{x}+y+\bar{y}=0 \mod{2}, \\ x+\bar{x}+z+\bar{z}=0 \mod{2}, \\ y+\bar{y}+z+\bar{z}=0 \mod{2}. \end{cases} \\ 0 & \text{otherwise}. \end{cases} \end{split} \end{equation} A useful consequence of Eq.~\eqref{eq.XcubeTtensor} is: \begin{equation}\label{eq.XcubeStraightX} \begin{gathered} \includegraphics[width=0.7\columnwidth]{figures/XCube_T_condition_consequence.pdf} \end{gathered}. \end{equation} We have now fixed the TNS for the X-cube model using the local conditions Eq.~\eqref{eq.GScondition}. The wave function on $\mathcal{R}^3$ can also be represented as the tensor contraction of Eq.~\eqref{eq.TNS}. \subsection{Concatenation Lemma}\label{subsec.ConcatenationXcube} In this section, we consider the contraction of a network of local $T$ tensors with open virtual indices for the X-cube model, similar to the idea developed in Sec.~\ref{subsec.ConcatenationToricCode}. However, here the situation is more complicated than in the 3D toric code model. The elements of a local $T$ tensor are either $0$ or $1$ in Eq.~\eqref{eq.localT}, depending on the even/odd sectors in the three directions. The elements of the contracted $T$ tensors will also be either $0$ or $1$, with a similar criterion.
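The local solution Eq.~\eqref{eq.localT} and the contraction rule worked out in the example below can be checked numerically; in the following sketch, the index ordering $(x,\bar{x},y,\bar{y},z,\bar{z})$ is our own convention:

```python
import itertools

import numpy as np

# Local T tensor of the X-cube model: nonzero iff each pair of directions
# (xy, xz, yz) has an even index sum.
T = np.zeros((2,) * 6)
for idx in itertools.product((0, 1), repeat=6):
    x, xb, y, yb, z, zb = idx
    if ((x + xb + y + yb) % 2, (x + xb + z + zb) % 2, (y + yb + z + zb) % 2) == (0, 0, 0):
        T[idx] = 1.0
assert int(T.sum()) == 16          # 2 parity sectors x 2^3 pair realizations

# Contract two T tensors along z: the nonzero elements are all 1 and obey
# the planar parity constraints of the two-tensor example.
TT = np.tensordot(T, T, axes=([5], [4]))
for idx in itertools.product((0, 1), repeat=10):
    x1, xb1, y1, yb1, z1, x2, xb2, y2, yb2, zb2 = idx
    ok = ((x1 + xb1 + y1 + yb1) % 2 == 0
          and (x2 + xb2 + y2 + yb2) % 2 == 0
          and (z1 + zb2 + x1 + xb1 + x2 + xb2) % 2 == 0
          and (z1 + zb2 + y1 + yb1 + y2 + yb2) % 2 == 0)
    assert TT[idx] == (1.0 if ok else 0.0)
```

The four planar parity constraints contain only three independent conditions mod 2, so $2^{10}/2^{3}=128$ of the elements of $\mathbf{T}$ are nonzero.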
\begin{framed} \textbf{Concatenation Lemma:} For a network of the contracted $T$ tensors in Eq.~\eqref{eq.localT}, the sums of the open indices along each $xy$, $yz$ and $xz$ planes have to be even. Otherwise, the tensor element of this network is zero. The nonzero elements are constants independent of the virtual indices. \end{framed} This lemma is a consequence of Eq.~\eqref{eq.localT}. See App.~\ref{app.Xcube_Concatenation} for an induction proof. In this section, we explain this result by considering a simple example. Suppose that we have two $T$ tensors contracted along the $z$ direction: \begin{equation} \begin{split} &\mathbf{T}_{x_1,\bar{x}_1,y_1,\bar{y}_1,z_1,x_2,\bar{x}_2,y_2,\bar{y}_2,\bar{z}_2} \\ =&\sum_{\bar{z}_1,z_2} T_{x_1\bar{x}_1,y_1\bar{y}_1,z_1\bar{z}_1} T_{x_2\bar{x}_2,y_2\bar{y}_2,z_2\bar{z}_2} \delta_{\bar{z}_1z_2}. \\ \end{split} \end{equation} Graphically, $\mathbf{T}$ is the same as depicted in Fig.~\ref{fig.Contract2T}. For each of the two $T$ tensors, we have \begin{equation} \begin{cases} x_1+\bar{x}_1+y_1+\bar{y}_1 &= 0 \mod{2} \\ x_1+\bar{x}_1+z_1+\bar{z}_1 &= 0 \mod{2} \\ y_1+\bar{y}_1+z_1+\bar{z}_1 &= 0 \mod{2} \\ \end{cases} \end{equation} and \begin{equation} \begin{cases} x_2+\bar{x}_2+y_2+\bar{y}_2 &= 0 \mod{2} \\ x_2+\bar{x}_2+z_2+\bar{z}_2 &= 0 \mod{2} \\ y_2+\bar{y}_2+z_2+\bar{z}_2 &= 0 \mod{2}. \\ \end{cases} \end{equation} Therefore, setting $\bar{z}_1 = z_2$ due to the tensor contraction, the open indices of $\mathbf{T}$ need to satisfy: \begin{equation} \begin{split} \begin{cases} x_1+\bar{x}_1+y_1+\bar{y}_1 &= 0, \mod{2} \\ x_2+\bar{x}_2+y_2+\bar{y}_2 &= 0, \mod{2} \\ z_1+\bar{z}_2+x_1+\bar{x}_1+x_2+\bar{x}_2 &= 0, \mod{2} \\ z_1+\bar{z}_2+y_1+\bar{y}_1+y_2+\bar{y}_2 &= 0, \mod{2} \\ \end{cases} \end{split} \end{equation} in order for the elements of the tensor $\mathbf{T}$ to be nonzero. 
The last set of equations intuitively means that the open indices of the tensor $\mathbf{T}$ (Fig.~\ref{fig.Contract2T}) in each $xy$, $yz$ and $xz$ plane need to have an even summation. Moreover, the elements of the contracted $T$ tensor are $1$, independent of the indices: \begin{widetext} \begin{equation}\label{eq.Xcube_Ttensor_Example} \mathbf{T}_{x_1,\bar{x}_1,y_1,\bar{y}_1,z_1,x_2,\bar{x}_2,y_2,\bar{y}_2,\bar{z}_2} = \begin{cases} 1 & \text{if}\;\; \begin{cases} x_1+\bar{x}_1+y_1+\bar{y}_1 &= 0, \mod{2} \;\;\text{and}\\ x_2+\bar{x}_2+y_2+\bar{y}_2 &= 0, \mod{2} \;\;\text{and}\\ z_1+\bar{z}_2+x_1+\bar{x}_1+x_2+\bar{x}_2 &= 0, \mod{2} \;\;\text{and}\\ z_1+\bar{z}_2+y_1+\bar{y}_1+y_2+\bar{y}_2 &= 0, \mod{2} \\ \end{cases} \\ 0 & \text{otherwise}. \end{cases} \end{equation} \end{widetext} Generally, for a complicated contraction of local $T$ tensors, denoted by $\mathbf{T}$, we have: \begin{equation} \mathbf{T}_{\{t\}} = \begin{cases} \mathrm{Const}\neq 0 & \text{\textbf{Concatenation lemma} is satisfied} \\ 0 & \text{else}. \\ \end{cases} \end{equation} This is the notation that we will use when computing the entanglement entropies or the transfer matrix degeneracies. \subsection{Entanglement}\label{subsec.Xcube_Entanglement} \begin{figure*}[t] \centering \includegraphics[width=0.7\textwidth]{figures/RegionA.pdf} \caption{Several regions $A$ for which we calculate the entanglement entropies. (1) Region A is a cube of size $l_x \times l_y \times l_z$. (2) Region A is a cube of size $l_x \times l_y \times l_z$ with a hole of size $l_x^\prime \times l_y^\prime \times l_z^\prime$ in it. (3) Region A is a cube of size $l_x \times l_y \times l_z$ with a small cube of height $l_z^\prime$ on top of it. (4) Region A is a cube of size $l_x \times l_y \times l_z$ with a small cube of height $l_z^\prime$ carved out of its top. } \label{fig.RegionA} \end{figure*} We can show that, for a cubic region $A$ and its deformations, the decomposition of $\ket{\mathrm{TNS}}$ across the entanglement cut is an exact SVD.
See Fig.~\ref{fig.RegionA} for some examples of deformed $A$ regions. We can read out the singular values of $\ket{\mathrm{TNS}}$ for these entanglement cuts. For simplicity, we consider the wave function on $\mathcal{R}^3$ (i.e., without dealing with the multiple ground states on $\mathcal{T}^3$). We rewrite the wave function: \begin{equation}\label{eq.XcubeSVD} \ket{\mathrm{TNS}} = \sum_{\{t\}} \ket{\{t\}}_{A} \otimes \ket{\{t\}}_{\bar{A}}, \end{equation} where $\{t\}$ represent the virtual indices connecting the regions $A$ and $\bar{A}$. $\ket{\{t\}}_{A}$ is the TNS for the region $A$ with open virtual indices $\{t\}$; similarly, $\ket{\{t\}}_{\bar{A}}$ is that for the complement region $\bar{A}$. Because every virtual bond is contracted over for $\ket{\mathrm{TNS}}$, $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ have to share the same $\{t\}$, and $\{t\}$ is summed over in Eq.~\eqref{eq.XcubeSVD}. Next we show that this set of basis states $\ket{\{t\}}_{A}$ is orthogonal: \begin{equation} \begin{split} &_A\langle \{t^\prime \}| \{t\} \rangle_{A} \propto \delta_{\{t^\prime\},\{t\}}, \end{split} \end{equation} when the \textbf{Concatenation Lemma} is satisfied for both $\{t^\prime\}$ and $\{t\}$, and the basis states $\ket{\{t\}}_{A}$ are not null vectors. \noindent\textbf{Proof:} The open virtual indices $\{t\}$ satisfy the \textit{SVD condition} in Sec.~\ref{subsec.TNSSVD}. Hence, we can conclude that $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ are orthogonal bases for the regions $A$ and $\bar{A}$, and thus Eq.~\eqref{eq.XcubeSVD} is exactly an SVD. In order to calculate the entanglement entropies, we need to show that $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$ are not only orthogonal, but also orthonormal. The proof is essentially the same as in Sec.~\ref{subsec.ToricCode_Entanglement}. Here we briefly repeat it. We use the same conventions of SVD basis $\ket{\{t\}}_{A}$ and $\ket{\{t^\prime\}}_{\bar{A}}$ as in Sec.~\ref{subsec.ToricCode_Entanglement}.
Suppose $\{t^\prime\}$ and $\{t\}$ both satisfy the \textbf{Concatenation Lemma}. Hence, $\ket{\{t\}}_A$ and $\ket{\{t^\prime\}}_A$ are not null vectors. More specifically, the wave function $\ket{\{t\}}_A$ is the same as in Eq.~\eqref{eq.SVDbasisA}. All the virtual indices except $\{t\}$ are contracted over. When $\{t^\prime\} \neq \{t\}$, the basis overlap is zero, because the spins on the boundary of $A$ are different. When $\{t^\prime\} = \{t\}$, the overlap is nonzero. Moreover, the overlaps $_A\langle \{t \}| \{t\} \rangle_A$ are constants independent of $\{t\}$, due to the \textbf{Concatenation Lemma}. Hence, $\ket{\{t\}}_{A}$ is an orthonormal basis for the region $A$, up to an overall normalization factor. \hfill$\Box$\\ Therefore, using the orthonormal bases $\ket{\{t\}}_{A}$ and $\ket{\{t\}}_{\bar{A}}$, Eq.~\eqref{eq.XcubeSVD} is indeed an SVD. Furthermore, the singular values are all identical. As a result, the entanglement entropy of the region $A$ is determined by the number of basis states $\ket{\{t\}}_A$ which are involved in Eq.~\eqref{eq.XcubeSVD}, i.e., the rank of the contracted tensor for the region $A$. The rank of the contracted tensor can be counted using the \textbf{Concatenation Lemma}: we only need to count the number of index configurations that satisfy it. We now list a few simple examples of entanglement entropies. The entanglement cuts are displayed in Fig.~\ref{fig.RegionA}. Correspondingly, their entanglement entropies are: \begin{enumerate} \item When region $A$ is a cube of size $l_x \times l_y \times l_z$: \begin{equation} \begin{split} \frac{S_{A}}{\log2} &= 2l_xl_y+2l_yl_z+2l_xl_z - l_x - l_y - l_z + 1 \\ &= \mathrm{Area} - l_x - l_y - l_z + 1. \\ \end{split}\label{entropy1xcode} \end{equation} The calculation details are the following: The number of indices straddling this entanglement cut is $2l_xl_y+2l_yl_z+2l_xl_z$, so the maximum possible number of basis states in the SVD of Eq.~\eqref{eq.XcubeSVD} is $2^{2l_xl_y+2l_yl_z+2l_xl_z}$.
However, these indices are not free. They are subject to certain constraints, in order for the singular vectors to have non-vanishing norms. Using the \textbf{Concatenation Lemma}, we know that the open indices in each $xy$, $yz$, and $xz$ plane must have even summations. We denote the indices by $t_{i,j,k}$, where $i,j,k$ are the coordinates of such an index. Then, we have: \begin{equation} \begin{split} \sum_{i,j} t_{i,j,k} = 0 \mod{2}, \quad\forall~ k \\ \sum_{i,k} t_{i,j,k} = 0 \mod{2}, \quad\forall~ j \\ \sum_{j,k} t_{i,j,k} = 0 \mod{2}, \quad\forall~ i \\ \end{split} \end{equation} where the summation is only taken over the open virtual indices near the entanglement cut in each $xy$, $yz$ and $xz$ plane. Therefore, we have $l_z$, $l_y$ and $l_x$ constraints, respectively. However, these constraints are not independent. Only $l_x+l_y+l_z-1$ of them are independent, because the sum of all the constraints vanishes identically: each open index lies in exactly two of the three types of planes, so it appears in exactly two constraints. Hence, the number of free indices is $$2l_xl_y+2l_yl_z+2l_xl_z - l_x - l_y - l_z + 1.$$ The number of singular vectors in Eq.~\eqref{eq.XcubeSVD} is: $$2^{2l_xl_y+2l_yl_z+2l_xl_z - l_x - l_y - l_z + 1},$$ which leads to the entropy written in Eq.~\eqref{entropy1xcode}. \hfill$\Box$ \item When the region $A$ is a cube of size $l_x \times l_y \times l_z$ with a hole of size $l_x^\prime \times l_y^\prime \times l_z^\prime$: \begin{equation} \begin{split} \frac{S_{A}}{\log2} =& 2l_xl_y+2l_yl_z+2l_xl_z+ \\ & 2 l_x^\prime l_y^\prime + 2 l_y^\prime l_z^\prime + 2 l_x^\prime l_z^\prime \\ & - l_x^\prime - l_y^\prime - l_z^\prime - l_x - l_y - l_z +2 \\ =& \mathrm{Area} - l_x^\prime - l_y^\prime - l_z^\prime - l_x - l_y - l_z + 2. \end{split} \end{equation} \item When the region $A$ is a cube of size $l_x \times l_y \times l_z$ with an additional convex cube of height $l_z^\prime$ on top (Fig.~\ref{fig.RegionA} (3)): \begin{equation} \begin{split} \frac{S_{A}}{\log2} =& \mathrm{Area} - l_x - l_y - l_z - l_z^\prime + 1.
\end{split} \end{equation} \item When the region $A$ is a cube of size $l_x \times l_y \times l_z$ with a concave cube of height $l_z^\prime$ carved into its top (Fig.~\ref{fig.RegionA} (4)): \begin{equation} \begin{split} \frac{S_{A}}{\log2} =& \mathrm{Area} - l_x - l_y - l_z - l_z^\prime + 1. \end{split} \end{equation} \end{enumerate} The area part of the entanglement entropy is measured by the number of indices straddling the entanglement cut. The constant contribution to the entanglement entropy is universal\cite{casini2009entanglement}, as it counts the number of connected components of the entanglement surface. As opposed to the toric code case, the constants are positive numbers. We emphasize that such linear corrections to the area law in the entanglement entropies have not been observed in quantum field theories. Furthermore, if we put the TNS on a cylinder $\mathcal{T}^2_{xy} \times \mathcal{R}_z$ and the entanglement cut splits the cylinder into two halves $z>0$ and $z<0$, then the entanglement entropy of either side is: \begin{equation}\label{eq.XcubeCylinderEntropy} \frac{S_{A}}{\log2} = \mathrm{Area} - L_x-L_y + 1. \end{equation} We emphasize that the entanglement spectrum is flat, because all singular values are identical in Eq.~\eqref{eq.XcubeSVD}. \subsection{Transfer Matrix as a Projector}\label{subsec.Xcube_TransferMatrix} \begin{figure*}[t] \centering \includegraphics[width=0.7\textwidth]{figures/TNS_onT3_EigenstateOfWX.pdf} \caption{Derivation of the first equation in Eq.~\eqref{eq.TNS_WX_eigenstate}. The remaining two equations can be proved similarly. In the first step, the physical $X$ operators can be transferred to the virtual level by using Eq.~\eqref{eq.projectorcondition ToricCode}, and in the third step, all the virtual $X$ operators are exactly canceled in pairs (dashed red rectangles in the third figure) due to Eq.~\eqref{eq.XcubeStraightX}.
} \label{fig.TNS_WX_eigenstate} \end{figure*} Following the same reasoning explained in Sec.~\ref{subsec.TransferMatrix}, the transfer matrix of the X-cube model in the $xy$-plane is: \begin{equation} \mathrm{TM}_{xy} = \mathcal{C}^{\mathcal{T}^2_{xy}} \left( \ldots TTT \ldots \right) \end{equation} with open virtual indices in the $z$-direction. Graphically, see Eq.~\eqref{eq.TransferMatrixGraph} and Eq.~\eqref{eq.TransferMatrixIndex}. In this section, we will show that $\mathrm{TM}_{xy}$ for the X-cube model is also a projector. However, the projection is more complicated than in the 3D toric code example. Using the transfer matrix basis $e_{\{\bar{z}\}}$ defined in Eq.~\eqref{eq.TMbasis}, we show that: \begin{equation}\label{eq.XcubeTransferMatrixMultiplication} \begin{split} \mathrm{TM}_{xy} \cdot e_{\{\bar{z}\}} &= \sum_{\text{Concatenation Lemma}} \left(\mathrm{TM}_{xy}\right)_{\{z\},\{\bar{z}\}} e_{\{z\}} \\ &\propto \sum_{\text{Concatenation Lemma}} e_{\{z\}} \\ \end{split} \end{equation} where the notations $\{z\}$ and $\{\bar{z}\}$ collectively denote all $z$ indices perpendicular to the $xy$-plane. The summation subject to the Concatenation Lemma means that: \begin{equation}\label{eq.XcubeTMcondition} \begin{split} \sum_{i} z_{i,j} + \bar{z}_{i,j} = 0 \mod{2}, \quad\forall~ j \\ \sum_{j} z_{i,j} + \bar{z}_{i,j} = 0 \mod{2}, \quad\forall~ i \\ \end{split} \end{equation} where the subindices $i,j$ of $z_{i,j}$ denote the positions of $z_{i,j}$ in the $x$- and $y$-directions, respectively. These two equations mean that in each $xz$ and $yz$ plane, the indices have an even summation. For instance, consider the red dashed rectangles below: \begin{equation} \begin{gathered} \includegraphics[width=\columnwidth]{figures/Xcube_TransferMatrix_Condition.pdf} \end{gathered}.
\end{equation} Among the $L_x+L_y$ equations in Eq.~\eqref{eq.XcubeTMcondition}, only $L_x+L_y-1$ are linearly independent, because the summations of the two sets of constraints are the same: \begin{equation} \begin{split} &\sum_{j}\left( \sum_{i} z_{i,j} + \bar{z}_{i,j} \right) = 0 \mod{2} \\ \Leftrightarrow &\sum_{i}\left( \sum_{j} z_{i,j} + \bar{z}_{i,j} \right) = 0 \mod{2}. \\ \end{split} \end{equation} The summation in Eq.~\eqref{eq.XcubeTransferMatrixMultiplication} can be separated into $2^{L_x+L_y-1}$ different ``parity'' sectors, similar to the 3D toric code case Eq.~\eqref{eq.ToricCodeTMeigenstate}. Hence, $\mathrm{TM}_{xy}$ is a projector of rank $2^{L_x+L_y-1}$. It has $2^{L_x+L_y-1}$ degenerate nonzero eigenvalues. \subsection{GSD and Transfer Matrix}\label{subsec.Xcube_GSD} The TNS, which gives us the single ground state on $\mathcal{R}^3$, has the minimum energy by construction. If we contract these tensors on the torus $\mathcal{T}^3$ with periodic boundary conditions, we still obtain only one ground state. Moreover, this ground state is the $+1$ eigenstate of all $W_X$ operators in Eq.~\eqref{eq.operatorsXcube}: \begin{equation}\label{eq.TNS_WX_eigenstate} \begin{split} W_{X}[C_x] \ket{\mathrm{TNS}}_{\mathcal{T}^3} &= \ket{\mathrm{TNS}}_{\mathcal{T}^3}, \\ W_{X}[C_y] \ket{\mathrm{TNS}}_{\mathcal{T}^3} &= \ket{\mathrm{TNS}}_{\mathcal{T}^3}, \\ W_{X}[C_z] \ket{\mathrm{TNS}}_{\mathcal{T}^3} &= \ket{\mathrm{TNS}}_{\mathcal{T}^3}. \\ \end{split} \end{equation} These three equations can be proved by using Eq.~\eqref{eq.projectorcondition ToricCode} and Eq.~\eqref{eq.XcubeTtensor}; the derivations are summarized in Fig.~\ref{fig.TNS_WX_eigenstate}. Other ground states on the torus can be obtained by acting with the nonlocal operators $W_{Z}[\tilde{C}_{z,x}]$ and $W_{Z}[\tilde{C}_{z,y}]$ in Eq.~\eqref{eq.operatorsXcube} on the TNS $\ket{\mathrm{TNS}}_{\mathcal{T}^3}$.
The physical operators $W_{Z}[\tilde{C}_{z,x}]$ and $W_{Z}[\tilde{C}_{z,y}]$ can be transferred to the virtual indices using Eq.~\eqref{eq.projectorcondition ToricCode}. After applying $W_{Z}[\tilde{C}_{z,x}]$ and $W_{Z}[\tilde{C}_{z,y}]$ in Eq.~\eqref{eq.operatorsXcube} on the TNS, we can generate $2^{L_x+L_y}$ TNSs, exemplified in Fig.~\ref{fig.XcubeTwistedTM}. The intersections of $W_{Z}[\tilde{C}_{z,x}]$ and $W_{Z}[\tilde{C}_{z,y}]$ with the $xy$-plane are the blue circled $Z$ operators in Fig.~\ref{fig.XcubeTwistedTM}. We denote these planes of tensors in Fig.~\ref{fig.XcubeTwistedTM} as $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$, where $\vec{\alpha}$ and $\vec{\beta}$ are binary vectors (values in $\{0,1\}$) of length $L_x$ and $L_y$ representing the absence or presence of Pauli $Z$ operators. For instance, the untwisted plane of TNS is $\mathbf{T}_{xy}^{\vec{0},\vec{0}}$ using this convention. This notation $\mathbf{T}_{xy}^{\vec{0},\vec{0}}$ is similar to that in Sec.~\ref{subsec.ToricCode_GSD}. Inserting a $Z$ operator at the virtual level will change the holonomy of $W_X$ in the $xy$-plane. For instance, for Panel (a) in Fig.~\ref{fig.XcubeTwistedTM}, the $W_{X}$ operator along the first row has a $-1$ eigenvalue, while the $W_{X}$ operators along the second, the third and the fourth rows have a $+1$ eigenvalue. Each of these $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ will generate a transfer matrix $\mathrm{TM}_{xy}^{\vec{\alpha},\vec{\beta}}$ which has $2^{L_x+L_y-1}$ degenerate eigenvalues. The reasons are that (1) when building the transfer matrices from the twisted $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$, the contraction over the physical indices of the projector $g$ tensors makes the virtual indices from the bra layer and the ket layer identical; (2) the $Z$ operators in the bra layer cancel their counterparts in the ket layer.
Hence, the transfer matrices $\mathrm{TM}_{xy}^{\vec{\alpha},\vec{\beta}}$ obtained from the twisted $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ are equal to the one obtained from the untwisted $\mathbf{T}_{xy}^{\vec{0},\vec{0}}$: \begin{equation} \mathrm{TM}_{xy}^{\vec{\alpha},\vec{\beta}} = \mathrm{TM}_{xy}^{\vec{0},\vec{0}}, \quad \forall\; \vec{\alpha}, \vec{\beta}. \end{equation} Thus, the transfer matrices $\mathrm{TM}_{xy}^{\vec{\alpha},\vec{\beta}}$ built from the twisted $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ are also projectors of rank $2^{L_x+L_y-1}$. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{figures/Xcube_TransferMatrix.pdf} \caption{Examples for the X-cube TNS in an $xy$-plane, obtained by acting with $W_{Z}[\tilde{C}_{z,x}]$ and $W_{Z}[\tilde{C}_{z,y}]$ on the constructed TNS. The intersection of one $W_{Z}[\tilde{C}_{z,x}]$ operator and one $W_{Z}[\tilde{C}_{z,y}]$ operator with the $xy$-plane is only one Pauli $Z$ operator, i.e., the circled blue $Z$ in this figure.} \label{fig.XcubeTwistedTM} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.6\textwidth]{figures/Xcube_TransferMatrix_FourChoice.pdf} \caption{We act with $W_{Z}[\tilde{C}_{x,y}]$ and $W_{Z}[\tilde{C}_{y,x}]$ operators on Panel (a) in Fig.~\ref{fig.XcubeTwistedTM}. Hence, we have four TNSs in this $xy$-plane that can be related to each other. See the text for the explanations.} \label{fig.twistedTNS} \end{figure*} The TNS on the torus is then constructed as the contraction of $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ on each $xy$-plane. However, the subtlety of the X-cube model is that, in constructing the TNS on the torus, there are still degrees of freedom we can exploit. In the 3D toric code case (Sec.~\ref{subsec.ToricCode_GSD}), as we evolve the state in the $z$ direction, we have to use the same $\mathbf{T}_{xy}^{\alpha,\beta}$ in each plane, otherwise the corresponding wave function is no longer a ground state.
However, we have more choices for the X-cube model. In each plane of $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$, we have four choices that do not change the energy: we can still act with $W_{Z}[\tilde{C}_{x,y}]$ and $W_{Z}[\tilde{C}_{y,x}]$ on the TNS in the $xy$-plane without affecting other $xy$-planes. These choices do not raise the energy because $W_{Z}[\tilde{C}_{x,y}]$ and $W_{Z}[\tilde{C}_{y,x}]$ are the nonlocal operators of the X-cube model in Eq.~\eqref{eq.operatorsXcube}: they do not cost any energy. For each of the $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ built from Fig.~\ref{fig.XcubeTwistedTM}, we can find 3 others: any of the $4$ planes of tensors $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ can be used in the $z$ direction. Take Panel (a) of Fig.~\ref{fig.XcubeTwistedTM} as an example at one point in the $z$ direction. The 4 $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ are depicted in Fig.~\ref{fig.twistedTNS}. Their expressions are: \begin{enumerate} \item For Panel (a), we do not apply any operators. \item For Panel (b), we apply $W_{Z}[\tilde{C}_{y,x}]$ on the TNS. \item For Panel (c), we apply $W_{Z}[\tilde{C}_{x,y}]$ on the TNS. \item For Panel (d), we apply both $W_{Z}[\tilde{C}_{x,y}]$ and $W_{Z}[\tilde{C}_{y,x}]$ on the TNS. \end{enumerate} Due to this choice, four twisted $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ will be grouped together, and there are \begin{equation}\label{eq.ambiguity} \frac{2^{L_x+L_y} }{4} \end{equation} groups of twisted $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$. Hence, the total GSD that we can obtain from the transfer matrices built from $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ is: \begin{equation} 2^{L_x+L_y-1} \times \frac{2^{L_x+L_y} }{4}\times 4^{L_z} = 2^{2L_x+2L_y+2L_z-3}. \end{equation} Each factor in the above formula has an explanation: \begin{enumerate} \item $2^{L_x+L_y-1}$ is the degeneracy of each transfer matrix.
\item $\frac{2^{L_x+L_y} }{4}$ is the number of groups of the twisted $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ due to the ``ambiguity'' explained in the paragraph before Eq.~\eqref{eq.ambiguity}. \item $4^{L_z}$ is the number of combinations for $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$, since for each $xy$ plane we can pick any of the 4 $\mathbf{T}_{xy}^{\vec{\alpha},\vec{\beta}}$ belonging to the same group. \end{enumerate} This is the total GSD of the X-cube model on the torus. \section{Haah Code}\label{sec.Haah} In this section, we derive the TNS for the Haah code following a prescription similar to that in Sec.~\ref{subsec.ToricCode_TNS} and Sec.~\ref{subsec.Xcube_TNS}. We then compute the entanglement entropies using the TNS for several types of entanglement cuts. This section is organized as follows. In Sec.~\ref{subsec.HaahCode}, we review the Hamiltonian of the Haah code. In Sec.~\ref{subsec.TNS_Haah}, we present the construction of the TNS for the Haah code. In Sec.~\ref{subsec:HaahcodeSVDCuts}, we discuss the entanglement cuts for which the TNS is an exact SVD. In Sec.~\ref{subsec.Haah_Entanglement}, we discuss the cubic entanglement cut, where the TNS is not an exact SVD. The calculation of the entanglement entropies proceeds in the same way as for the 3D toric code model or the X-cube model: one counts the number of constraints on the open indices. \subsection{Hamiltonian of the Haah code}\label{subsec.HaahCode} The Haah code is defined on a cubic lattice. As opposed to the 3D toric code and the X-cube model discussed in Sec.~\ref{sec.toriccode} and \ref{sec.Xcube}, there are two spin-$1/2$'s defined on each \emph{vertex} of the cubic lattice. The Hamiltonian of the Haah code is a sum of commuting operators, where each term is a product of Pauli $X$ or $Z$ operators. Specifically, there are two types of Hamiltonian terms: \begin{eqnarray} H= -\sum_{a,b,c}A_{abc}-\sum_{a,b,c} B_{abc}.
\end{eqnarray} The $A$ and $B$ operators are defined on each cube in the cubic lattice, and the indices $a,b,c$ represent the vertex coordinates. If we choose the space to be $\mathcal{R}^3$, then $a, b, c\in \mathbb{Z}$. If we choose the space to be a 3D torus of size $L_x\times L_y\times L_z$ with periodic boundary conditions on each side, then $a\in \mathbb{Z}_{L_x}$, $b\in \mathbb{Z}_{L_y}$ and $c\in \mathbb{Z}_{L_z}$. The operators defined at $a=0, b=0, c=0$ are \begin{eqnarray} \begin{split} A_{000}&=Z^{L}_{110}Z^L_{101}Z^L_{011}Z^{L}_{111}Z^{R}_{100}Z^R_{010}Z^R_{001}Z^{R}_{111}\\ B_{000}&=X^{L}_{000}X^L_{110}X^L_{101}X^L_{011}X^R_{000}X^R_{100}X^R_{010}X^R_{001}. \end{split} \end{eqnarray} The superscripts $L/R$ indicate whether the Pauli operator acts on the left or the right spin of a vertex. The subscripts $(ijk)\in \mathbb{Z}_2\times\mathbb{Z}_2\times \mathbb{Z}_2$ represent the coordinates of the vertices of a cube. The operators $A_{abc}$ and $B_{abc}$ on all other cubes can be obtained by translation. Pictorially, the two types of terms are: \begin{equation}\label{HaahHamiltonian} \begin{gathered} \includegraphics[width=0.8\columnwidth]{figures/HaahCodeModel.pdf} \end{gathered} \end{equation} It is straightforward to check that all the Hamiltonian terms commute. \subsection{TNS for Haah Code}\label{subsec.TNS_Haah} \begin{figure*}[t] \centering \includegraphics[width=0.6\columnwidth]{figures/Haah_221.pdf} \includegraphics[width=0.5\columnwidth]{figures/Haah_222.pdf} \caption{Tensor contraction for the Haah code TNS. (a) The lattice size is $2 \times 3 \times 3$. (b) The lattice size is $3 \times 3 \times 3$. } \label{fig.HaahTNS} \end{figure*} The ground state $|\mathrm{GS}\rangle$ is obtained by requiring \begin{eqnarray} A_{abc}|\mathrm{GS}\rangle=|\mathrm{GS}\rangle\label{AconditionHaah} \\ B_{abc}|\mathrm{GS}\rangle=|\mathrm{GS}\rangle\label{BconditionHaah} \end{eqnarray} for every $a,b,c$.
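These simultaneous eigenvalue conditions are consistent because all the stabilizers commute: a $Z$-type and an $X$-type Pauli string commute if and only if their supports overlap on an even number of spins. This can be checked numerically; the sketch below (our own illustrative script; the torus size $L=4$ and all names are assumptions, not part of the construction) verifies the even-overlap criterion for every pair of translated $A$ and $B$ operators:

```python
from itertools import product

L = 4  # linear size of the periodic lattice, chosen small for the check

# Supports of A_000 (Z-type) and B_000 (X-type) as (side, i, j, k),
# read off from the definitions of A_000 and B_000 above.
A_SUPPORT = [('L', 1, 1, 0), ('L', 1, 0, 1), ('L', 0, 1, 1), ('L', 1, 1, 1),
             ('R', 1, 0, 0), ('R', 0, 1, 0), ('R', 0, 0, 1), ('R', 1, 1, 1)]
B_SUPPORT = [('L', 0, 0, 0), ('L', 1, 1, 0), ('L', 1, 0, 1), ('L', 0, 1, 1),
             ('R', 0, 0, 0), ('R', 1, 0, 0), ('R', 0, 1, 0), ('R', 0, 0, 1)]

def support(base, a, b, c):
    """Set of spins (side, vertex) touched by the operator on cube (a,b,c)."""
    return {(s, (a + i) % L, (b + j) % L, (c + k) % L) for s, i, j, k in base}

def commute(a_cube, b_cube):
    # Z- and X-type strings commute iff their supports share an even
    # number of spins.
    return len(support(A_SUPPORT, *a_cube) & support(B_SUPPORT, *b_cube)) % 2 == 0

cubes = list(product(range(L), repeat=3))
# By translation invariance, it suffices to fix the A operator at the origin.
print(all(commute((0, 0, 0), cb) for cb in cubes))  # True
```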
We can solve these two equations similarly to the 3D toric code model in Sec.~\ref{subsec.ToricCode_TNS} and the X-cube model in Sec.~\ref{subsec.Xcube_TNS} to obtain a TNS representation. We now specify the projector $g$ tensor and the local $T$ tensor. There are two types of $g$ tensors, $g^{L}$ and $g^{R}$, associated with the left and right physical spins on each vertex. Each $g$ tensor has one physical index $s$ and four virtual indices $i,j,k,l$. The reason for these four virtual indices (rather than two virtual indices as in the toric code and the X-cube examples) is that, for each vertex, the virtual indices from the $T$ tensors (to be defined below) in the neighboring 8 octants need to be fully contracted; this requires each $g$ tensor to have four virtual indices. The index assignment of the left and right $g$ tensors, $g^{Ls}_{ijkl}$ and $g^{Rs}_{ijkl}$, is: \begin{equation}\label{eq.projector4L} \begin{split} &g^{Ls}_{ijkl}= \begin{gathered} \includegraphics[width=0.65\columnwidth]{figures/left.pdf} \end{gathered} \end{split} \end{equation} and \begin{equation}\label{eq.projector4R} \begin{split} &g^{Rs}_{ijkl}=\begin{gathered} \includegraphics[width=0.65\columnwidth]{figures/Right.pdf} \end{gathered} \end{split} \end{equation} where $s$ is the physical index in $\{\ket{0}=\ket{\mathord{\uparrow}},\ket{1}=\ket{\mathord{\downarrow}}\}$, and $ijkl$ are virtual indices. We use a blue dot for the $g$ tensor on the right spin and a red dot for the $g$ tensor on the left spin. The green dots at the center of each cube represent $T$ tensors (which we define below). Similar to the toric code model and the X-cube model, we require that the $g$ tensor acts as a projector from the physical index to the four virtual indices: \begin{equation}\label{Haahgtensorprojectioncond} \begin{split} g^{Ls}_{ijkl} &= \begin{cases} 1 & i=j=k=l=s \\ 0 & \text{otherwise} \end{cases}, \\ g^{Rs}_{ijkl} &= \begin{cases} 1 & i=j=k=l=s \\ 0 & \text{otherwise} \end{cases}.
\end{split} \end{equation} The four virtual indices of $g^{Ls}_{ijkl}$ extend along the III, VIII, VII, VI octants (as shown in Eq.~\eqref{eq.projector4L}), and the four virtual indices of $g^{Rs}_{ijkl}$ extend along the II, VII, IV, V octants (as shown in Eq.~\eqref{eq.projector4R}). The tensor $T_{\{i\}}$ is defined at the center of each cube, and every $T$ tensor has 8 virtual indices. Graphically, the $T$ tensor is: \begin{equation}\label{HaahTtensor} \begin{split} &T_{i_1i_2i_3i_4i_5i_6i_7i_8}= \begin{gathered} \includegraphics[width=0.35\columnwidth]{figures/HaahT.pdf} \end{gathered}. \end{split} \end{equation} The $T$ tensor is contracted with 8 of the 16 $g$ tensors (8 vertices times 2 spins per vertex) located at the cube corners via the virtual indices. The reason for only 8 virtual indices (instead of 16) in the $T$ tensor is that, among the 16 spins around the cube $(a,b,c)$, only eight are addressed by the Pauli $Z$ operators in the $A_{abc}$ term of the Hamiltonian. The elements of the $T$ tensor for a given set of virtual indices $i_1i_2i_3i_4i_5i_6i_7i_8$ are determined by solving Eq.~\eqref{AconditionHaah} and Eq.~\eqref{BconditionHaah}. Imposing the condition Eq.~\eqref{AconditionHaah} and transferring the physical $Z$ operators to the virtual level, we find that: \begin{equation} \begin{gathered} \includegraphics[width=0.35\columnwidth]{figures/Haah_T_derivation_Z.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.35\columnwidth]{figures/HaahT_noindex.pdf} \end{gathered} \end{equation} which amounts to \begin{equation}\label{HaahTtensorconstraint} T_{i_1i_2i_3i_4i_5i_6i_7i_8}= (-1)^{\sum_{n=1}^{8} i_n} T_{i_1i_2i_3i_4i_5i_6i_7i_8}, \end{equation} where $i_1, \cdots, i_8$ are the eight virtual indices of the $T$ tensor defined in Eq.~\eqref{HaahTtensor}. Hence, \begin{equation} \begin{split} T_{i_1i_2i_3i_4i_5i_6i_7i_8}=0,\;\mathrm{if}\; \sum_{n=1}^{8}i_n=1\mod 2.
\end{split} \end{equation} Imposing the condition Eq.~\eqref{BconditionHaah} and transferring the physical $X$ operators to the virtual level, we find that \begin{equation}\label{HaahXtensorconstruction} \begin{split} &\begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/HaahT_noindex.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_01.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_02.pdf} \end{gathered}\\ =& \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_03.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_04.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_05.pdf} \end{gathered}\\ =& \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_06.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_07.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_08.pdf} \end{gathered}\\ =& \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_09.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_10.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_11.pdf} \end{gathered}\\ =& \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_12.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_13.pdf} \end{gathered} = \begin{gathered} \includegraphics[width=0.25\columnwidth]{figures/Haah_T_derivation_X_14.pdf} \end{gathered}.\\ \end{split} \end{equation} In terms of components, Eq.~\eqref{HaahXtensorconstruction} means that 
$T_{i_1i_2i_3i_4i_5i_6i_7i_8}=T_{i'_1i'_2i'_3i'_4i'_5i'_6i'_7i'_8}$ where $i'_1i'_2i'_3i'_4i'_5i'_6i'_7i'_8$ are obtained by flipping arbitrary pairs of indices from $i_1i_2i_3i_4i_5i_6i_7i_8$. For example, \begin{eqnarray}\label{HaahXoperatorconstraints} \begin{split} T_{i_1i_2i_3i_4i_5i_6i_7i_8}&=T_{(1-i_1)(1-i_2)i_3i_4i_5i_6i_7i_8}\\ &=T_{i_1(1-i_2)(1-i_3)i_4i_5i_6i_7i_8}\\ &=T_{i_1i_2(1-i_3)(1-i_4)i_5i_6i_7i_8}\\ &=... \end{split} \end{eqnarray} Combining Eq.~\eqref{HaahTtensorconstraint} and \eqref{HaahXoperatorconstraints}, we find that all elements $T_{i_1i_2i_3i_4i_5i_6i_7i_8}$ whose indices satisfy the condition $\sum_{k=1}^8 i_k=0\mod 2$ are equal. We can rescale the $T$ tensor such that $T_{i_1i_2i_3i_4i_5i_6i_7i_8}=1$ for $\sum_{k=1}^8 i_k=0\mod 2$, i.e., \begin{equation}\label{HaahTtensor2} T_{i_1i_2i_3i_4i_5i_6i_7i_8}= \begin{cases} 1 & \sum_{n=1}^{8}i_n=0 \mod{2}\\ 0 & \sum_{n=1}^{8}i_n=1 \mod{2}.\\ \end{cases} \end{equation} For simplicity, we consider the space to be $\mathcal{R}^3$, where the Haah code has a unique ground state, represented by the TNS: \begin{equation}\label{eq.HaahTNS} \ket{\mathrm{TNS}} = \sum_{\{s\}} \mathcal{C}^{R^3} \left( g^{L,s_1}g^{R,s_2}g^{L,s_3}g^{R,s_4} \ldots TTT \ldots \right) \ket{\{s\}}. \end{equation} We emphasize that the contraction of the Haah code TNS is quite different from that of the 3D toric code model and the X-cube model. The main difference is that the $g$ tensor has four virtual indices for the Haah code, while it has only two virtual indices for the 3D toric code and the X-cube model. As an example of the contraction, we take two blocks of size $2 \times 2 \times 1$ and $2 \times 2 \times 2$ in Fig.~\ref{fig.HaahTNS}. The $T$ tensors with their virtual indices are drawn explicitly. Each red or blue node in the two figures is a projector $g$ tensor, whose physical index is not drawn; we only draw the virtual legs that are connected to the $T$ tensors inside the blocks.
In the $2\times 2 \times 2$ block, all 8 virtual indices of the two $g$ tensors (4 per $g$ tensor) at the central vertex shared by all the cubes are contracted with $T$ tensors, while the other $g$ tensors have open virtual indices (which are not explicitly drawn). \subsection{Entanglement Entropy for SVD Cuts} \label{subsec:HaahcodeSVDCuts} In this section, we compute the entanglement entropies for two types of cuts for which the TNS is an SVD. \subsubsection{Two types of SVD Cuts} To compute the entanglement entropy, we use the same convention adopted in the discussion of the 3D toric code (in Sec.~\ref{sec.toriccode}) and the X-cube model (in Sec.~\ref{sec.Xcube}): the open virtual indices of the region $A$ connect directly to the $g$ tensors inside $A$, while the open virtual indices of the region $\bar{A}$ connect with $T$ tensors inside $\bar{A}$. We further choose a region $A$ such that the TNS is an SVD, and compute the entanglement entropy. We find two types of entanglement cuts for which the Haah code TNS is an exact SVD. For the general cubic region $A$, we need an extra step to perform the SVD of the TNS. This derivation will be presented in Sec.~\ref{subsubsec.HaahCodeSVD}. We now specify the two types of regions for which the Haah code TNS is an exact SVD. \begin{enumerate} \item Region $A$ consists only of the spins connecting to a set of $(l-1)$ $T$ tensors which are contracted along a certain direction. Figure \ref{fig.Haah_SVD2} shows an example with $l-1=3$ contracted along the $z$ direction. (Since in Sec.~\ref{sec.toriccode} and \ref{sec.Xcube}, we used $l$ as the number of vertices along each side of region $A$, there are $l-1$ bonds (or cubes) along each side.) \item Region $A$ contains all the spins connecting with $T$ tensors which are contracted in a ``tripod-like'' shape, where three legs extend along the $x, y, z$ directions.
If there are $l_x-1$ cubes in the $x$ leg, $l_y-1$ cubes in the $y$ leg, and $l_z-1$ cubes in the $z$ leg, then there are $1+(l_x-2)+(l_y-2)+(l_z-2)=l_x+l_y+l_z-5$ cubes (or $T$ tensors) in region $A$. Figure \ref{fig.Haah_SVD1} shows an example with $l_x=l_y=l_z=3$. \end{enumerate} In the first case and for $l=2,3$, we use brute-force numerics to find that the reduced density matrix is diagonal (see App.~\ref{app.HaahcodeBruteforce} for details), which shows that the TNS is an exact SVD. \begin{figure}[H] \centering \includegraphics[width=0.25\columnwidth]{figures/Haah_SVD2.pdf} \caption{Region $A$ contains all the spins connecting with $l-1$ $T$ tensors which are contracted along the $z$ direction. The figure shows an example with $l=4$.} \label{fig.Haah_SVD2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.4\columnwidth]{figures/Haah_SVD1.pdf} \caption{Region $A$ contains all the spins connecting with $T$ tensors which are contracted in a ``tripod-like" shape, where three legs extend along the $x, y, z$ directions. In general, the three legs can have different lengths, with $l_x-1$, $l_y-1$, $l_z-1$ cubes respectively. This figure shows an example where $l_x=l_y=l_z=3$. } \label{fig.Haah_SVD1} \end{figure} In order to show that the above cuts correspond to an SVD, we follow the arguments developed in Sec.~\ref{subsec.TNSSVD}. In Sec.~\ref{subsec.TNSSVD}, we proposed an \emph{SVD Condition}. However, we find that the regions $A$ of both types, shown in Figs.~\ref{fig.Haah_SVD2} and \ref{fig.Haah_SVD1}, do not satisfy the \emph{SVD Condition}: two open virtual indices in region $\bar{A}$ connect with the same $T$ tensor, which violates the \emph{SVD Condition}. For instance, $g_1$ and $g_2$ in Fig.~\ref{fig.Haah_SVD1} connect to the same $T$ tensor in their upper-left cube, which is in the region $\bar{A}$.
Here, we propose a \emph{Generalized SVD Condition} which suffices to prove that the entanglement cuts corresponding to Figs.~\ref{fig.Haah_SVD2} and \ref{fig.Haah_SVD1} are SVDs. \textit{Generalized SVD condition: Let $\{t\}$ be the set of open virtual indices. Given a set of physical indices $\{s\}$ inside region $\bar{A}$, if $\{t\}$ can be uniquely determined by the $\{s\}$ inside region $\bar{A}$ via the $g$ tensor projection condition Eq.~\eqref{Haahgtensorprojectioncond} and $T$ tensor constraints Eq.~\eqref{HaahTtensor2}, then the states $\ket{\{t\}}_{\bar{A}}$ are orthogonal. Since the states $\ket{\{t\}}_A$ are orthogonal because all the open virtual indices are connected with $g$ tensors, the TNS $|\mathrm{TNS}\rangle=\sum_{\{t\}}\ket{\{t\}}_A\otimes \ket{\{t\}}_{\bar{A}}$ is an SVD. } To prove the \emph{Generalized SVD Condition}, we notice that if we have two different sets of open virtual indices $\{t\}_{\bar{A}}$ and $\{t'\}_{\bar{A}}$, the physical indices $\{s\}_{\bar{A}}$ and $\{s'\}_{\bar{A}}$ which connect (via $g$ tensors) to the $T$ tensors on the boundary of region $\bar{A}$ cannot be the same. Otherwise, if $\{s\}_{\bar{A}}=\{s'\}_{\bar{A}}$, since the physical indices $\{s\}_{\bar{A}}$ and $\{s'\}_{\bar{A}}$ in the region $\bar{A}$ uniquely determine the open virtual indices $\{t\}_{\bar{A}}$ and $\{t'\}_{\bar{A}}$, $\{t\}_{\bar{A}}=\{t'\}_{\bar{A}}$, which contradicts our assumption $\{t\}_{\bar{A}}\neq \{t'\}_{\bar{A}}$. Therefore, $\{t\}_{\bar{A}}\neq \{t'\}_{\bar{A}}$ implies $\{s\}_{\bar{A}}\neq \{s'\}_{\bar{A}}$, and hence $_{\bar{A}} \langle\{t\}|\{t'\}\rangle_{\bar{A}}=0$. This is in the same spirit as the proof in Sec.~\ref{subsec.TNSSVD}. The proof that the normalization of the wave function is independent of $\{t\}$ is also the same as in Sec.~\ref{subsec.TNSSVD}. Furthermore, $_A \langle\{t\}|\{t'\}\rangle_A=0$ for $\{t\}\neq \{t'\}$ is straightforward because the $\{t\}_A$ are connected with $g$ tensors.
In summary, if the entanglement cut satisfies the \emph{Generalized SVD Condition}, we have \begin{enumerate} \item $_A\langle \{t\}|\{t'\}\rangle_A \propto \delta_{\{t\},\{t'\}}$ when $\ket{\{t\}}_{A}$ and $\ket{\{t^\prime\}}_{A}$ are not null vectors; \item $_{\bar{A}}\langle \{t\}|\{t'\}\rangle_{\bar{A}} \propto \delta_{\{t\},\{t'\}}$ when $\ket{\{t\}}_{\bar{A}}$ and $\ket{\{t^\prime\}}_{\bar{A}}$ are not null vectors. \end{enumerate} This shows that the TNS is an SVD. We illustrate the \emph{Generalized SVD Condition} with the simplest example, i.e., $l=2$ in case 1. There is only one $T$ tensor, and the region $A$ contains $8$ physical spins. \begin{center} \includegraphics[width=0.5\columnwidth]{figures/HaahT_connected.pdf} \end{center} All other spins apart from the eight connecting with the $T$ tensor belong to region $\bar{A}$. Because the virtual indices and physical indices are related by the $g$ tensor which is a projector, we use $i_1$ to denote the values of both virtual indices and physical indices connecting with the left $g$ tensor located at $(x,y,z)=(0,0,1)$. Here, we use the coordinate convention where $(x,y,z)=(0,0,0)$ is located at the front lower-left corner as in Fig.~\ref{fig.HaahTNS}. Similarly, we use $ i_2, i_3, i_4, i_5, i_6, i_7, i_8$ to label the values of the virtual/physical indices on the remaining seven nodes connecting with the same $T$ tensor. Hence the set of open indices is effectively $\{i_1, i_2, i_3, i_4, i_5, i_6, i_7, i_8\}$ (after identification via the $g$ tensors). We further consider how the physical indices from the region $\bar{A}$ constrain the open indices.
Consider the $T$ tensor in the region $\bar{A}$ (which we denote by $T'$) which shares two spins $i_7, i_8$ with the region $A$ (the $T^\prime$ tensor lives in the lower-right corner): \begin{equation} \begin{gathered} \includegraphics[width=5cm]{figures/HaahT_barA.pdf} \end{gathered} \end{equation} Since six among the eight virtual indices of $T'$ are contracted with $g$ tensors inside region $\bar{A}$, the remaining two open virtual indices, i.e., $i_7$ and $i_8$, are subject to one constraint from the $T'$ tensor: \begin{eqnarray} i_7+i_8=\mathrm{fixed}, \end{eqnarray} where ``fixed" means that the sum is fixed by the physical indices inside the region $\bar{A}$. We can similarly consider the constraints coming from other $T$ tensors in region $\bar{A}$. The full set of constraints is listed as follows: \begin{eqnarray} i_7+i_8&=&\mathrm{fixed}\nonumber\\ i_1+i_2&=&\mathrm{fixed}\nonumber\\ i_5&=&\mathrm{fixed}\nonumber\\ i_6&=&\mathrm{fixed}\nonumber\\ i_6+i_7&=&\mathrm{fixed}\nonumber\\ i_2+i_3&=&\mathrm{fixed}\nonumber\\ i_5&=&\mathrm{fixed}\nonumber\\ i_8&=&\mathrm{fixed}\nonumber\\ i_1&=&\mathrm{fixed}\nonumber\\ i_4&=&\mathrm{fixed}\nonumber\\ i_3&=&\mathrm{fixed}\nonumber\\ i_7&=&\mathrm{fixed}. \end{eqnarray} The ``fixed" on the right-hand side of the equations means that the virtual indices or the sum of the virtual indices are fixed by the physical indices in the region $\bar{A}$. All variables and equations are defined modulo 2. The above equations uniquely determine all the open virtual indices $i_1...i_8$. Therefore, such a choice of region $A$ of the entanglement cut satisfies the \emph{Generalized SVD Condition}. For the first type of the region $A$ with general $l$, and the second type of region $A$ with general $l_x, l_y, l_z$, we can similarly check that the TNS satisfies the \emph{Generalized SVD Condition}.
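As a sanity check of this uniqueness claim, the twelve constraints above can be viewed as a linear system over GF(2) for the eight open indices. The following sketch (illustrative only, with the index subsets transcribed by hand from the list above) confirms that the system has full rank, so the ``fixed" right-hand sides determine $i_1,\ldots,i_8$ uniquely.

```python
# Each constraint fixes a subset of the open indices i_1..i_8 (mod 2);
# the subsets are transcribed from the twelve equations listed above.
constraints = [
    {7, 8}, {1, 2}, {5}, {6}, {6, 7}, {2, 3},
    {5}, {8}, {1}, {4}, {3}, {7},
]
# Encode each constraint as an integer bitmask: bit k-1 is set iff i_k appears.
rows = [sum(1 << (i - 1) for i in c) for c in constraints]

def gf2_rank(vectors):
    """Rank of a set of 0/1 row vectors over GF(2), via bitwise elimination."""
    pivots = {}  # leading-bit position -> reduced row with that pivot
    rank = 0
    for v in vectors:
        while v:
            p = v.bit_length() - 1
            if p in pivots:
                v ^= pivots[p]  # eliminate the leading bit
            else:
                pivots[p] = v   # new pivot found
                rank += 1
                break
    return rank

# Rank 8 over 8 unknowns: any consistent choice of right-hand sides
# determines i_1..i_8 uniquely, as the Generalized SVD Condition requires.
assert gf2_rank(rows) == 8
print("rank =", gf2_rank(rows))
```

Encoding rows as integers makes Gaussian elimination over GF(2) a sequence of XORs, which is a standard trick for small parity-check systems like this one.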
Numerically, we checked that the Haah code TNS indeed satisfies the \emph{Generalized SVD Condition} for $2\leq l\leq 9$ for the first type, and $3\leq l_x\leq 8, 3\leq l_y\leq 8, 3\leq l_z\leq 8$ for the second type. The numerical procedure for this check is to list all the constraints for the indices in the region $A$ and find how many solutions exist for these constraints. \subsubsection{Entanglement entropy} We now compute the entanglement entropy for the exact SVD TNSs. We first consider case 1 with general $l$, such as in Fig.~\ref{fig.Haah_SVD2}. All the spins connecting with $l-1$ contracted $T$ tensors along the $z$ direction are in region $A$, and the remaining spins belong to region $\bar{A}$. The number of open virtual indices, after identification by the local $g$ tensors, is $8+7(l-2)=7l-6$. The number of constraints from the local $T$ tensors is simply the number of $T$ tensors, $l-1$, because they are all independent. Hence the number of independent open virtual indices is $7l-6-(l-1)=6l-5$. Therefore, the entanglement entropy is \begin{eqnarray}\label{HaahexactSVDtype1} \frac{S(A)}{\log 2}=6l-5. \end{eqnarray} In App.~\ref{app.HaahcodeBruteforce}, we compute the reduced density matrix by brute force for $l=2$ and $l=3$, and find that the results match the general formula Eq.~\eqref{HaahexactSVDtype1}. We further consider case 2 --- the region $A$ of tripod shape. The legs in the $x,y,z$ directions contain $l_x-1, l_y-1, l_z-1$ $T$ tensors respectively. We first count the total number of open virtual indices. When $l_x=l_y=l_z=3$ as shown in Fig.~\ref{fig.Haah_SVD1}, there are 26 physical spins (or $g$ tensors) in total. However, there is one $g$ tensor (at the left spin of $(x,y,z)=(1,1,1)$) whose four virtual indices are all contracted by the $T$ tensors within region $A$. Hence the number of open virtual indices, after identification by the local $g$ tensors, is 25.
Moreover, we notice that adding one $T$ tensor in one of the three legs of region $A$ brings 7 extra spins. Therefore the total number of open virtual indices (after identification by the $g$ tensors) is $(26-1)+7(l_x-3)+7(l_y-3)+7(l_z-3)=7l_x+7l_y+7l_z-38$. We further numerically count the number of constraints that these open virtual indices satisfy. We find that the number of constraints is the number of cubes minus 1, i.e., $(l_x+l_y+l_z-5)-1=l_x+l_y+l_z-6$. Therefore the number of independent open virtual indices is $(7l_x+7l_y+7l_z-38)-(l_x+l_y+l_z-6)=6l_x+6l_y+6l_z-32$. The entanglement entropy is \begin{eqnarray} \frac{S(A)}{\log 2}=6l_x+6l_y+6l_z-32. \end{eqnarray} \subsection{Entanglement Entropy for Cubic Cuts}\label{subsec.Haah_Entanglement} In this section, we consider the case where the region $A$ is a cube of size $l \times l \times l$, where $l$ is the number of vertices in each direction of the cube. The cut is chosen such that all the open virtual indices straddling the region $A$ are connected to $g$ tensors in the region $A$ (i.e., all the physical spins near the boundary belong to the region $A$). For example, for $l=2$ as shown in \eqref{HaahTtensor}, all 16 physical spins belong to the region $A$. For $l=3$ as shown in Fig.~\ref{fig.HaahTNS} (b), all 54 physical spins belong to the region $A$. For simplicity of notation, in this section, we denote the Hamiltonian terms as $A_c$ and $B_c$ where the subscript refers to a cube $c$. \subsubsection{SVD for TNS}\label{subsubsec.HaahCodeSVD} For the cubic region $A$, we find that the TNS for the Haah code is different from that for the toric code and X-cube model: the TNS for the Haah code is \emph{not} an exact SVD. The TNS basis states in the region $A$, $\ket{\{t\}}_{A}$, are orthonormal, since the open virtual indices are connected with $g$ tensors. However, the TNS basis states $\ket{\{t\}}_{\bar{A}}$ in the region $\bar{A}$ are \emph{not} orthogonal.
In other words, the basis $\ket{\{t\}}_{\bar{A}}$ is overcomplete. The subtlety that the TNS bipartition is not an exact SVD manifests as follows: the singular vectors in the region $A$ for the ground states of the Haah code have to be the eigenvectors of all $A_c$ and $B_c$ operators that actually lie in the region $A$, and the corresponding eigenvalues should all be $1$. Notice that our TNS basis states $\ket{\{t\}}_{A}$, if not null, are the eigenvectors of all $A_c$ operators inside the region $A$ with eigenvalues $1$, and are also the eigenvectors of $B_c$ operators with eigenvalues $1$ when $B_c$ operators are deep inside the region $A$, i.e., when they do not act on any spin at the boundary of $A$. However, $\ket{\{t\}}_{A}$ are \emph{not} the eigenvectors of $B_c$ operators, when $B_c$ operators are inside the region $A$ but also adjacent to the region $A$'s boundary. The reason is that the $B_c$ operators adjacent to the region $A$'s boundary, when acting on the TNS basis $\ket{\{t\}}_{A}$, will flip the physical spins on the boundary, and thus flip the open virtual indices $\{t\}$ due to the projector $g$ tensors. Therefore, the basis states $\ket{\{t\}}_{A}$ are no longer the singular vectors for the Haah code. This is not an \textit{a priori} problem, but a result of the geometry of the Haah code, whose spins cannot be written on bonds but have to be written on sites. A similar situation would occur if the 2D toric code model were rewritten to have its spins on sites. The method to find the correct SVD for the TNS is to use the $\ket{\{t\}}_{A}$ to construct the eigenvectors of $B_c$ operators by projection.
We prove the following statement: \textit{If $\ket{\{t^\prime\}}_{A}=B_c \ket{\{t\}}_{A}$ when $B_c$ is inside the region $A$ and also adjacent to the region $A$'s boundary, then $_A\langle \{t'\}|\{t\}\rangle_A=0$ and $\ket{\{t^\prime\}}_{\bar{A}}=\ket{\{t\}}_{\bar{A}}$.} \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{figures/Haah_transfer_AA.pdf} \caption{Transferring the Pauli $X$ operators of the $B_c$ operator from the region $A$ (a) to the region $\bar{A}$ (b).} \label{fig.Haah_transfer} \end{figure} The proof is as follows. The first part of the statement is a consequence of the $\ket{\{t\}}_A$ basis state orthogonality. Indeed, $B_c$ flips physical spins located at the region $A$'s boundary. Thus the two sets $\{t\}$ and $\{t'\}$ are distinct. The second part of the statement is more involved. For simplicity, consider two nearest-neighbor $T$ tensors, one in the region $A$ and one in $\bar{A}$, as in Fig.~\ref{fig.Haah_transfer}. The $B_c$ operator acts on the right cube in Fig.~\ref{fig.Haah_transfer} (a). The physical spins on the boundary of the region $A$ which are flipped by $B_c$ are those covered by circled $X$ in Fig.~\ref{fig.Haah_transfer} (a). Then these Pauli $X$ operators can be transferred to the virtual indices due to the projector $g$ tensors, and the virtual indices of the $T$ tensor in the region $\bar{A}$ obtain two $X$ operators as in Fig.~\ref{fig.Haah_transfer} (b). Notice that the $T$ tensor for the Haah code is invariant under this action (see the 12th cube in Eq.~\eqref{HaahXtensorconstruction}). This is also true for other $T$ tensors in the region $\bar{A}$ that are affected by $B_c$. The transfer of $X$ operators from the region $A$ to the region $\bar{A}$ gives exactly the same equations in Eq.~\eqref{HaahXtensorconstruction} when we solve for the $T$ tensors.
Hence, the $X$ operators transferred to the open virtual indices in the region $\bar{A}$ do not change the state at all, i.e., $\ket{\{t^\prime\}}_{\bar{A}}=\ket{\{t\}}_{\bar{A}}$. As a consequence, we can perform the following factorization \begin{equation} \begin{split} &\ket{\{t\}}_{A} \otimes \ket{\{t\}}_{\bar{A}} + \ket{\{t^\prime\}}_{A} \otimes \ket{\{t^\prime\}}_{\bar{A}} \\ =& \Big[(1+B_c) \ket{\{t\}}_{A}\Big] \otimes \ket{\{t\}}_{\bar{A}}. \end{split} \end{equation} The left factor of the tensor product is an eigenstate of $B_c$ with eigenvalue 1. Therefore, in the TNS decomposition Eq.~\eqref{eq.TNSgeneralDecomposition}, we can group the basis states $\ket{\{t\}}_{A}$ which are connected by this $B_c$ operator. This factorization can be extended to any product of $B_c$ operators inside the region $A$ and also adjacent to the region $A$'s boundary. Notice that any such product has at least one $X$ operator belonging to only one $B_c$ and so is different from the identity. Acting with all the possible products of these $B_c$ operators (including the identity) on a given $|\{t\}\rangle_A$ therefore generates as many distinct states as there are such products. The TNS can be brought to the following form \begin{eqnarray} |\mathrm{TNS}\rangle=\sum_{\{t\}'}\bigg[\prod_{c} \bigg(\frac{1+B_c}{2}\bigg)\ket{\{t\}}_A\bigg]\otimes |\{t\}\rangle_{\bar{A}}, \end{eqnarray} where the product over $c$ only involves the $B_c$ operators inside the region $A$ and also adjacent to the region $A$'s boundary and the sum over $\{t\}'$ is over the open virtual index configurations that are not related by the action of these $B_c$ operators. \subsubsection{Counting the number of TNS basis states in region $A$: Notations} To compute the upper bound of the entanglement entropy, we need to find the number of singular vectors in the region $A$ that are also eigenstates of any $B_c$ operators fully lying in the region $A$.
This number, which we denote as $\mathrm{basis}(\mathrm{TNS}(A))$, is \begin{equation} \mathrm{basis}(\mathrm{TNS}(A))= 2^{N-N_B}, \end{equation} where $N$ is the number of independent open virtual indices and $N_B$ is the number of $B_c$ operators inside the region $A$ and also adjacent to the region $A$'s boundary. Every open virtual index is connected to a $g$ tensor located in $A$ and at the boundary of this region. Since each $g$ tensor has a unique independent virtual index, we have $N=N_g-N_c$ where $N_g$ is the number of $g$ tensors in $A$ and at the boundary of this region and $N_c$ is the number of constraints on the open indices coming from the $T$ tensors within the region $A$. We thus get \begin{equation} \log_2 (\mathrm{basis}(\mathrm{TNS}(A)))= N_{g}-N_c-N_{B} \end{equation} and the upper bound on the entanglement entropy reads \begin{eqnarray} S(A) = (N_{g}-N_c-N_{B})\log 2. \end{eqnarray} \subsubsection{Counting $N_{g}$ and $N_B$} We first count $N_{g}$. The number of $g$ tensors can be computed by looking at Fig.~\ref{fig.HaahTNS} (b). We consider the region $A$ with size $l_x\times l_y\times l_z$ (notice that $l_x, l_y, l_z$ are the numbers of vertices in each direction). At the eight corners, there are $8\times 2=16$ spins. On the four hinges along the $x$ direction, there are $2\times 4\times (l_x-2)$ spins, where 2 means there are two spins on each vertex, and 4 means four hinges. Similarly, there are $2\times 4\times (l_y-2)$ and $2\times 4\times (l_z-2)$ spins along the $y$ and $z$ directions respectively. For the $xy$-planes, there are $2\times 2\times (l_x-2)(l_y-2)$ spins, where the first 2 comes from two spins per vertex, and the second 2 comes from two $xy$-planes. Similarly, there are $2\times 2\times (l_x-2)(l_z-2)$ and $2\times 2\times (l_y-2)(l_z-2)$ spins for the $xz$ and $yz$ planes respectively.
Therefore, the total number of $g$ tensors is \begin{eqnarray} \begin{split} N_g=&16+8(l_x-2)+8(l_y-2)+8(l_z-2)\\ &+4(l_x-2)(l_y-2)+4(l_x-2)(l_z-2)\\ &+4(l_y-2)(l_z-2)\\ =&4l_xl_y+4l_xl_z+4l_yl_z-8l_x-8l_y-8l_z+16. \end{split} \end{eqnarray} We further count $N_B$. As explained in Sec.~\ref{subsubsec.HaahCodeSVD}, $N_B$ is the number of $B_c$ operators inside the region $A$ and adjacent to the boundary of the region $A$. For a cubic region $A$ with size $l\times l\times l$ (which is the case we consider below), the number of such $B_c$ operators is \begin{equation} \begin{split} N_B=(l-1)^3-(l-3)^3=6l^2-24l+26, \forall l\geq 3. \end{split} \end{equation} For $l=2$, we just have one $B_c$ operator. Hence we have \begin{equation} N_B=6l^2-24 l +26-\delta_{l,2}, \forall l\geq 2. \end{equation} \subsubsection{Counting $N_c$: Contribution from the $T$ tensors} \label{subsec:countingNcfromTtensors} The open indices may be constrained by the $T$ tensors fully inside the region $A$. In the following, we will discuss the specific entanglement cuts where $l_x=l_y=l_z=l$. We rely on numerical calculations to evaluate $N_c$. We first consider the examples $l=2$ and $l=3$ in detail, and then we describe our algorithm to find the number of linearly independent constraints. For $l=2$, as shown in Eq.~\eqref{HaahTtensor}, no $g$ tensor has all virtual indices contracted. There is only one $T$ tensor within region $A$. The element of the $T$ tensor is \begin{equation} T_{i_1i_2i_3i_4i_5i_6i_7i_8} \end{equation} where $i_1,i_2,i_3,i_4,i_5,i_6,i_7,i_8$ are all contracted virtual indices. Because they are contracted with $g$ tensors where at least one virtual index is open, all the contracted virtual indices $i_1,i_2,i_3,i_4,i_5,i_6,i_7,i_8$ are equal to some open indices, and we denote them as \begin{equation} \begin{split} i_1=t_1,\; i_2=t_2,\; i_3=t_3,\; i_4=t_4, \\ i_5=t_5,\; i_6=t_6,\; i_7=t_7,\; i_8=t_8.
\end{split} \end{equation} The constraints on $\{i\}$'s are hence equivalent to the constraints on $\{t\}$'s, i.e., \begin{equation} t_1+t_2+t_3+t_4+t_5+t_6+t_7+t_8=0\mod 2. \end{equation} There is only one constraint from the $T$ tensor. Hence $N_c=1$ for $l=2$. For $l=3$, as shown in Fig.~\ref{fig.HaahTNS} (b), we have eight constraints from eight $T$ tensors which involve the open indices via the $g$ tensors. The eight equations are \begin{equation} \begin{split} \sum_{n=1}^8 i_n^{(x,y,z)}=0\mod 2, \; x,y,z\in \{0,1\} \end{split} \end{equation} where the superscript $(x,y,z)$ represents the position of the $T$ tensor, and $n$ counts the eight indices of each cube in the $2\times 2 \times 2$ cut. All the $i$'s are contracted virtual indices. However, except for the virtual indices that are connected with the central two $g$ tensors (which are defined on the two spins at the vertex $(x,y,z)=(1,1,1)$), all other indices (which are defined on two spins at vertices $(x,y,z)$, $x,y,z\in\{0,1,2\}$ except $(x,y,z)=(1,1,1)$) are equal to some open indices via $g$ tensors. Specifically, the virtual indices that are connected with the two center $g$ tensors are \begin{equation} \begin{split} i^{000}_{4}=i^{100}_{3}=i^{010}_{1}=i^{001}_{7}\mod 2\\ i^{000}_{5}=i^{110}_{2}=i^{101}_{8}=i^{011}_{6}\mod 2. \end{split} \end{equation} Since we only count the number of constraints for the open indices, we need to Gauss-eliminate all these eight virtual indices $i^{000}_{4}, i^{100}_{3}, i^{010}_{1}, i^{001}_{7}, i^{000}_{5}, i^{110}_{2}, i^{101}_{8}, i^{011}_{6}$ from the above 8 equations. Therefore, we obtain $8-2=6$ independent equations in terms of open indices only. Hence there are 6 constraints for the open indices. For the general $l$, we apply the same principle. We first enumerate all possible constraints from the $T$ tensors, and then we Gauss-eliminate all the virtual indices that are contracted within the region $A$.
Hence we obtain a set of equations purely in terms of the open indices. The number of constraints is the rank of this set of equations. We list the number of linearly independent constraints for the open indices as follows: \begin{widetext} \begin{equation} \begin{array}{@{}*{15}{c}@{}} l(\ge 3)\quad & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ N_c\quad & 6 & 12 & 18 & 24 & 30 & 36 & 42 & 48 & 54 & 60 & 66 & 72 & 78 & 84 \\ \end{array} \end{equation} \end{widetext} Hence, for $l \geq 3$, there are \begin{equation} 6l-12 \end{equation} linearly independent constraints for the open indices. Taking into account the fact that when $l=2$ the number of constraints is $1$, we infer that the number of constraints for a generic $l$ is: \begin{equation} 6l-12+\delta_{l,2}. \end{equation} \subsubsection{Entanglement entropy} We are ready to collect all the data we have obtained and compute the entanglement entropy for the cubic cut. For the entanglement cut of size $l\times l\times l$, the total number of $g$ tensors is \begin{equation} \begin{split} N_g &=12l^2-24l+16. \end{split} \end{equation} The number of $T$ tensor constraints is \begin{equation} \begin{split} N_{c} =6l-12+\delta_{l,2}, \forall l\geq 2. \end{split} \end{equation} The number of $B_c$ operators is \begin{eqnarray} N_B=6l^2-24l+26-\delta_{l,2}, \forall l\geq 2. \end{eqnarray} Therefore the upper bound of the entanglement entropy reads \begin{equation}\label{Haahlllentanglemententropy} \begin{split} \frac{S}{\log 2} =& N_{g}-N_c - N_B \\ =& 6l^2 - 6l + 2, \forall l\geq 2. \end{split} \end{equation} The entanglement entropies also have negative linear corrections. If the region $\bar{A}$ is much larger than the region $A$, we conjecture that the region $\bar{A}$ will not impose any additional constraint. In that case, the upper bound would be saturated. The numerical calculations in App.~\ref{app.HaahcodeBruteforce} also support this conjecture.
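The bookkeeping in this section can be double-checked numerically. The sketch below (illustrative only) evaluates the closed-form counts $N_g$, $N_c$, and $N_B$ for the cubic cut and verifies that they combine to $6l^2-6l+2$; it also re-checks the two SVD-cut formulas of the previous subsection.

```python
# Closed-form counts derived in this section for a cubic cut of size l x l x l.
def N_g(l):   # number of boundary g tensors
    return 12 * l**2 - 24 * l + 16

def N_c(l):   # T-tensor constraints on the open indices
    return 6 * l - 12 + (1 if l == 2 else 0)

def N_B(l):   # B_c operators inside A and adjacent to its boundary
    return 6 * l**2 - 24 * l + 26 - (1 if l == 2 else 0)

def S_over_log2(l):
    """Upper bound on the entanglement entropy: S / log 2 = N_g - N_c - N_B."""
    return N_g(l) - N_c(l) - N_B(l)

# The delta_{l,2} terms in N_c and N_B cancel, so one formula covers all l >= 2.
for l in range(2, 17):
    assert S_over_log2(l) == 6 * l**2 - 6 * l + 2

# The two exact-SVD cuts of the previous subsection, for comparison:
def S_line(l):              # straight cut: (7l - 6) open indices, (l - 1) constraints
    return (7 * l - 6) - (l - 1)

def S_tripod(lx, ly, lz):   # tripod cut: open indices minus (cubes - 1) constraints
    return (25 + 7 * (lx - 3) + 7 * (ly - 3) + 7 * (lz - 3)) \
        - ((lx + ly + lz - 5) - 1)

assert S_line(4) == 6 * 4 - 5
assert S_tripod(3, 3, 3) == 6 * (3 + 3 + 3) - 32
print("all entropy counting formulas are consistent")
```
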
\section{Conclusion and Discussion}\label{sec.discussion} In this paper, we present our TNS construction for three stabilizer models in 3D. The ground states of these stabilizer codes are the eigenstates of all local Hamiltonian terms with $+1$ eigenvalues. The constructions of these TNSs share the same general idea and work in other dimensions as well: \begin{enumerate} \item We introduce a projector $g$ tensor for each physical spin which identifies the physical index with the virtual indices. \item The physical operators acting on the TNS can be transferred to the virtual indices using Eq.~\eqref{eq.projectorcondition ToricCode}. \item The local $T$ tensors contracted with the projector $g$ tensors are specified by the local Hamiltonian terms. \end{enumerate} After we obtain the TNS for the ground state, we can prove that the TNS is an exact SVD for the ground state for some specific entanglement cuts. The entanglement spectra are completely flat for the models studied in this paper. The entanglement entropies can be computed by counting the number of singular vectors. For the 3D toric code model, the entanglement entropies have a constant correction to the area law, $-\log(2)$. For the X-cube model and the Haah code, the entanglement entropies have linear corrections to the area law as shown in Sec.~\ref{subsec.Xcube_Entanglement} and \ref{subsec.Haah_Entanglement}. The analytical calculation of the entanglement entropies is rooted in the \textbf{Concatenation lemma}, which is introduced to count the number of singular vectors. The \textbf{Concatenation lemma} is in turn rooted in the symmetry properties of the local tensors; see, for instance, Eq.~\eqref{eq.ToricCodeTcondition} and \eqref{eq.ToricCodeTtensor} for the 3D toric code model. The transfer matrices can also be constructed.
For the 3D toric code and the X-cube models, we prove that the transfer matrix is a projector whose dimension is counted by the \textbf{Concatenation lemma} as well. For the 3D toric code model, the transfer matrix is of dimension 2. For the X-cube model, the transfer matrix is of dimension $2^{L_x+L_y-1}$ where $L_x$ and $L_y$ are the sizes of the torus in the $x$ and $y$ directions respectively. The GSD on the torus is generally larger than the degeneracy of the transfer matrix. Since both the entanglement entropies and the transfer matrix degeneracies are rooted in the \textbf{Concatenation lemma} (or more fundamentally the symmetry properties of the local $T$ tensors), we believe that these two phenomena are related. Moreover, we conjecture that the negative linear correction to the area law is a signature of fracton models. This is similar to the negative constant correction (i.e., the topological entanglement entropy\cite{kitaev2006topological,levin2006detecting}) in 2D. In this paper, the TNSs are all the ground states of some exactly solvable local models. If we move away from these fine-tuned points without going through phase transitions, we expect the transfer matrix degeneracies to remain robust, since these degeneracies give rise to the GSD. In Ref.~\onlinecite{haegeman2015shadows}, this statement has been numerically verified for the 2D toric code model and its phase transitions to the trivial phases. If we move away from the fine-tuned points, we also expect that the linear term of the entanglement entropies for the fracton models does not vanish, although the specific coefficients of the linear terms might change. An important point concerns the topological entanglement entropy. The topological entanglement entropy for the fracton models was first introduced in Ref.~\onlinecite{2017arXiv171001744M}, and is defined as a linear combination of the entanglement entropies of different regions, chosen to exactly cancel the area law.
See Ref.~\onlinecite{2017arXiv171001744M} for the definition details. Importantly, the topological entanglement entropies of fracton models are linear with respect to the sizes of the entanglement cuts. Furthermore, Ref.~\onlinecite{2017arXiv171001744M} argues, using perturbation theory, that the topological entanglement entropies of the same three models as in our paper are robust to adiabatic perturbations. Hence, Ref.~\onlinecite{2017arXiv171001744M} indicates that there should be a linear correction to the area law which does not vanish, even when moving away from the fine-tuned wave functions. However, rigorous statements about the entanglement spectra, entropies, and transfer matrix degeneracies of a generic fracton model ground state still need to be verified by future numerical studies of 3D fracton models. \section*{Acknowledgments} H. He and Y. Zheng wish to thank the physics department of Princeton University for its support. N. R. acknowledges M. Hermanns and O. Petrova for fruitful discussions. The authors are grateful to G.~Sierra and B. Bradlyn for numerous and enlightening discussions about related topics of fracton models. B. A. B. wishes to thank Ecole Normale Superieure, UPMC Paris, and the Donostia International Physics Center for their generous sabbatical hosting during some of the stages of this work. B. A. B. acknowledges support for the analytic work from NSF EAGER grant DMR-1643312, ONR N00014-14-1-0330, and NSF-MRSEC DMR-1420541. The computational part of the Princeton work was performed under Department of Energy Grant No.~DE-SC0016239, the Simons Investigator Award, the Packard Foundation, and the Schmidt Fund for Innovative Research. N. R. was supported by Grants No. ANR-17-CE30-0013-01 and No. ANR-16-CE30-0025.
\emph{Note added}: A week before the submission of this manuscript, a very interesting paper, Ref.~\onlinecite{2017arXiv171001744M}, appeared, which computes, by a completely different method, the entanglement entropies of the X-cube model and the Haah code. When particularized to their cut, our approach gives the same results for these two models. Furthermore, we thank Rahul Nandkishore and Siddharth Parameswaran for discussions that made us realize that the bound obtained in an earlier version of our manuscript for the entanglement entropy for the Haah code could be tightened and saturated so as to give matching results to their paper.